Updates from: 06/09/2022 01:14:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Previously updated : 03/31/2022 Last updated : 06/08/2022
Open your web app in a code editor such as Visual Studio Code. Under the project
|Key |Value |
|||
|`APP_CLIENT_ID`|The **Application (client) ID** for the web app you registered in [step 2.1](#step-2-register-a-web-application). |
-|`APP_CLIENT_SECRET`|The client secret for the web app you created in [step 2.2](#step-22-create-a-web-app-client-secret) |
+|`APP_CLIENT_SECRET`|The client secret value for the web app you created in [step 2.2](#step-22-create-a-web-app-client-secret) |
|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
|`RESET_PASSWORD_POLICY_AUTHORITY`| The **Reset password** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<reset-password-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<reset-password-user-flow-name>` with the name of your Reset password user flow such as `B2C_1_reset_password_node_app`.|
|`EDIT_PROFILE_POLICY_AUTHORITY`|The **Profile editing** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<profile-edit-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<profile-edit-user-flow-name>` with the name of your Profile editing user flow such as `B2C_1_edit_profile_node_app`. |
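All three authority values follow the same `b2clogin.com` pattern shown above. As a quick, hedged illustration of that pattern only (the tenant and user flow names below are placeholders, and the sample app itself reads these values from its `.env` file rather than computing them):

```python
# Sketch only: compose the three Azure AD B2C user flow authority URLs
# from placeholder values that match the table above.
TENANT_NAME = "contoso"  # placeholder: replace with your tenant name

USER_FLOWS = {
    "SIGN_UP_SIGN_IN_POLICY_AUTHORITY": "B2C_1_susi",
    "RESET_PASSWORD_POLICY_AUTHORITY": "B2C_1_reset_password_node_app",
    "EDIT_PROFILE_POLICY_AUTHORITY": "B2C_1_edit_profile_node_app",
}

for key, user_flow in USER_FLOWS.items():
    authority = (
        f"https://{TENANT_NAME}.b2clogin.com/"
        f"{TENANT_NAME}.onmicrosoft.com/{user_flow}"
    )
    print(f"{key}={authority}")
```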
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Previously updated : 03/30/2022 Last updated : 06/08/2022
Open your web app in a code editor such as Visual Studio Code. Under the `call-p
|Key |Value |
|||
|`APP_CLIENT_ID`|The **Application (client) ID** for the web app you registered in [step 2.3](#step-23-register-the-web-app). |
-|`APP_CLIENT_SECRET`|The client secret for the web app you created in [step 2.4](#step-24-create-a-client-secret) |
+|`APP_CLIENT_SECRET`|The client secret value for the web app you created in [step 2.4](#step-24-create-a-client-secret) |
|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority for the user flow you created in [step 1](#step-1-configure-your-user-flow) such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
|`AUTHORITY_DOMAIN`| The Azure AD B2C authority domain such as `https://<your-tenant-name>.b2clogin.com`. Replace `<your-tenant-name>` with the name of your tenant.|
|`APP_REDIRECT_URI`| The application redirect URI where Azure AD B2C will return authentication responses (tokens). It matches the **Redirect URI** you set while registering your app in the Azure portal. This URL needs to be publicly accessible. Leave the value as is.|
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
Previously updated : 09/15/2021 Last updated : 06/08/2022
Open the *app_config.py* file. This file contains information about your Azure A
|||
|`b2c_tenant`| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso`).|
|`CLIENT_ID`| The web API application ID from [step 2.1](#step-21-register-the-app).|
-|`CLIENT_SECRET`| The client secret you created in [step 2.2](#step-22-create-a-web-app-client-secret). To help increase security, consider storing it instead in an environment variable, as recommended in the comments. |
+|`CLIENT_SECRET`| The client secret value you created in [step 2.2](#step-22-create-a-web-app-client-secret). To help increase security, consider storing it instead in an environment variable, as recommended in the comments. |
|`*_user_flow`|The user flows or custom policy you created in [step 1](#step-1-configure-your-user-flow).| | | |
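The table's note about moving the client secret into an environment variable can be sketched as follows. This is an illustrative fragment in the spirit of *app_config.py*, not the sample's exact contents; the tenant, client ID, and user flow values are placeholders, and the authority format mirrors the user flow authority pattern quoted elsewhere in this digest:

```python
# Illustrative fragment in the spirit of app_config.py (placeholders throughout).
import os

b2c_tenant = "contoso"  # the first part of your tenant name
CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder application ID

# Keep the client secret value out of source code; read it from the environment.
CLIENT_SECRET = os.getenv("CLIENT_SECRET")
if not CLIENT_SECRET:
    raise ValueError("Set the CLIENT_SECRET environment variable before running the app.")

signupsignin_user_flow = "B2C_1_susi"  # placeholder user flow name
authority_template = "https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{user_flow}"
AUTHORITY = authority_template.format(tenant=b2c_tenant, user_flow=signupsignin_user_flow)
```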
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
Previously updated : 01/18/2022 Last updated : 06/08/2022
In this step, configure the claims AD FS application returns to Azure AD B2C.
1. For **Client ID**, enter the application ID that you previously recorded.
1. For **Scope**, enter `openid`.
-1. For **Response type**, select **id_token**, which makes the **Client secret** optional. Learn more about use of [Client ID and secret](identity-provider-generic-openid-connect.md#client-id-and-secret) when adding a generic OpenID Connect identity provider.
+1. For **Response type**, select **id_token**. The **Client secret** value isn't needed in this case. Learn more about use of [Client ID and secret](identity-provider-generic-openid-connect.md#client-id-and-secret) when adding a generic OpenID Connect identity provider.
1. (Optional) For the **Domain hint**, enter `contoso.com`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider). 1. Under **Identity provider claims mapping**, select the following claims:
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 09/16/2021 Last updated : 06/08/2022
If you want to get the `family_name` and `given_name` claims from Azure AD, you
For example, `https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration`. If you use a custom domain, replace `contoso.com` with your custom domain in `https://login.microsoftonline.com/contoso.com/v2.0/.well-known/openid-configuration`. 1. For **Client ID**, enter the application ID that you previously recorded.
-1. For **Client secret**, enter the client secret that you previously recorded.
+1. For **Client secret**, enter the client secret value that you previously recorded.
1. For **Scope**, enter `openid profile`. 1. Leave the default values for **Response type**, and **Response mode**. 1. (Optional) For the **Domain hint**, enter `contoso.com`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider).
You need to store the application key that you created in your Azure AD B2C tena
1. Select **Policy keys** and then select **Add**.
1. For **Options**, choose `Manual`.
1. Enter a **Name** for the policy key. For example, `ContosoAppSecret`. The prefix `B2C_1A_` is added automatically to the name of your key when it's created, so its reference in the XML in the following section is to *B2C_1A_ContosoAppSecret*.
-1. In **Secret**, enter your client secret that you recorded earlier.
+1. In **Secret**, enter your client secret value that you recorded earlier.
1. For **Key usage**, select `Signature`. 1. Select **Create**.
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Title: Tutorial for configuring N8 Identity with Azure Active Directory B2C
+ Title: Configure TheAccessHub Admin Tool by using Azure Active Directory B2C
-description: Tutorial for configuring TheAccessHub Admin Tool with Azure Active Directory B2C to address customer accounts migration and Customer Service Requests (CSR) administration.
+description: In this tutorial, configure TheAccessHub Admin Tool by using Azure Active Directory B2C to address customer account migration and customer service request (CSR) administration.
-# Tutorial for configuring TheAccessHub Admin Tool with Azure Active Directory B2C
+# Configure TheAccessHub Admin Tool by using Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate Azure Active Directory (AD) B2C with [TheAccessHub Admin Tool](https://n8id.com/products/theaccesshub-admintool/) from N8 Identity. This solution addresses customer accounts migration and Customer Service Requests (CSR) administration.
+In this tutorial, we provide guidance on how to integrate Azure Active Directory B2C (Azure AD B2C) with [TheAccessHub Admin Tool](https://n8id.com/products/theaccesshub-admintool/) from N8 Identity. This solution addresses customer account migration and customer service request (CSR) administration.
-This solution is suited for you, if you have one or more of the following needs:
+This solution is suited for you if you have one or more of the following needs:
-- You have an existing site and you want to migrate to Azure AD B2C. However, you're struggling with migration of your customer accounts including passwords
+- You have an existing site and you want to migrate to Azure AD B2C. However, you're struggling with migration of your customer accounts, including passwords.
-- You require a tool for your CSR to administer Azure AD B2C accounts.
+- You need a tool for your CSR to administer Azure AD B2C accounts.
- You have a requirement to use delegated administration for your CSRs. - You want to synchronize and merge your data from many repositories into Azure AD B2C.
-## Pre-requisites
+## Prerequisites
To get started, you'll need: - An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -- An [Azure AD B2C tenant](./tutorial-create-tenant.md). Tenant must be linked to your Azure subscription.
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md). The tenant must be linked to your Azure subscription.
-- A TheAccessHub Admin Tool environment: Contact [N8 Identity](https://n8id.com/contact/) to provision a new environment.
+- A TheAccessHub Admin Tool environment. Contact [N8 Identity](https://n8id.com/contact/) to provision a new environment.
-- [Optional] Connection and credential information for any databases or Lightweight Directory Access Protocols (LDAPs) you want to migrate customer data from.
+- (Optional) Connection and credential information for any databases or Lightweight Directory Access Protocols (LDAPs) that you want to migrate customer data from.
-- [Optional] Configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you wish to integrate TheAccessHub Admin Tool into your sign-up policy flow.
+- (Optional) A configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you want to integrate TheAccessHub Admin Tool into your sign-up policy flow.
## Scenario description
-The TheAccessHub Admin Tool runs like any other application in Azure. It can run in either N8 Identity’s Azure subscription, or the customer’s subscription. The following architecture diagram shows the implementation.
+The TheAccessHub Admin Tool runs like any other application in Azure. It can run in either N8 Identity's Azure subscription or the customer's subscription. The following architecture diagram shows the implementation.
-![Image showing n8identity architecture diagram](./media/partner-n8identity/n8identity-architecture-diagram.png)
+![Diagram of the n8identity architecture.](./media/partner-n8identity/n8identity-architecture-diagram.png)
|Step | Description |
|:--| :--|
-| 1. | User arrives at a login page. Users select sign-up to create a new account and enter information into the page. Azure AD B2C collects the user attributes.
-| 2. | Azure AD B2C calls the TheAccessHub Admin Tool and passes on the user attributes
+| 1. | Each user arrives at a login page. The user creates a new account and enters information on the page. Azure AD B2C collects the user attributes.
+| 2. | Azure AD B2C calls the TheAccessHub Admin Tool and passes on the user attributes.
| 3. | TheAccessHub Admin Tool checks your existing database for current user information.
-| 4. | The user record is synced from the database to TheAccessHub Admin Tool.
+| 4. | User records are synced from the database to TheAccessHub Admin Tool.
| 5. | TheAccessHub Admin Tool shares the data with the delegated CSR/helpdesk admin.
| 6. | TheAccessHub Admin Tool syncs the user records with Azure AD B2C.
-| 7. |Based on the success/failure response from the TheAccessHub Admin Tool, Azure AD B2C sends a customized welcome email to the user.
+| 7. |Based on the success/failure response from the TheAccessHub Admin Tool, Azure AD B2C sends a customized welcome email to users.
-## Create a Global Admin in your Azure AD B2C tenant
+## Create a Global Administrator in your Azure AD B2C tenant
-The TheAccessHub Admin Tool requires permissions to act on behalf of a Global Administrator to read user information and conduct changes in your Azure AD B2C tenant. Changes to your regular administrators won't impact TheAccessHub Admin Tool’s ability to interact with the tenant.
+The TheAccessHub Admin Tool requires permissions to act on behalf of a Global Administrator to read user information and conduct changes in your Azure AD B2C tenant. Changes to your regular administrators won't affect TheAccessHub Admin Tool's ability to interact with the tenant.
-To create a Global Administrator, follow these steps:
+To create a Global Administrator:
-1. In the Azure portal, sign into your Azure AD B2C tenant as an administrator. Navigate to **Azure Active Directory** > **Users**
-2. Select **New User**
-3. Choose **Create User** to create a regular directory user and not a customer
-4. Complete the Identity information form
+1. In the Azure portal, sign in to your Azure AD B2C tenant as an administrator. Go to **Azure Active Directory** > **Users**.
+2. Select **New User**.
+3. Choose **Create User** to create a regular directory user and not a customer.
+4. Complete the identity information form:
- a. Enter the username such as ‘theaccesshub’
+ a. Enter the username, such as **theaccesshub**.
- b. Enter the name such as ‘TheAccessHub Service Account’
+ b. Enter the account name, such as **TheAccessHub Service Account**.
-5. Select **Show Password** and copy the initial password for later use
-6. Assign the Global Administrator role
+5. Select **Show Password** and copy the initial password for later use.
+6. Assign the Global Administrator role:
- a. Select the user’s current roles **User** to change it
+ a. For **User**, select the user's current role to change it.
- b. Check the record for Global Administrator
+ b. Select the **Global Administrator** record.
- c. **Select** > **Create**
+ c. Select **Create**.
## Connect TheAccessHub Admin Tool with your Azure AD B2C tenant
-TheAccessHub Admin Tool uses Microsoft’s Graph API to read and make changes to your directory. It acts as a Global Administrator in your tenant. Additional permission is needed by TheAccessHub Admin Tool, which you can grant from within the tool.
+TheAccessHub Admin Tool uses the Microsoft Graph API to read and make changes to your directory. It acts as a Global Administrator in your tenant. TheAccessHub Admin Tool needs additional permission, which you can grant from within the tool.
-To authorize TheAccessHub Admin Tool to access your directory, follow these steps:
+To authorize TheAccessHub Admin Tool to access your directory:
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Azure AD B2C Config**
+2. Go to **System Admin** > **Azure AD B2C Config**.
-3. Select **Authorize Connection**
+3. Select **Authorize Connection**.
-4. In the new window sign-in with your Global Administrator account. You may be asked to reset your password if you're signing in for the first time with the new service account.
+4. In the new window, sign in with your Global Administrator account. You might be asked to reset your password if you're signing in for the first time with the new service account.
5. Follow the prompts and select **Accept** to grant TheAccessHub Admin Tool the requested permissions.
-## Configure a new CSR user using your enterprise identity
+## Configure a new CSR user by using your enterprise identity
-Create a CSR/Helpdesk user who accesses TheAccessHub Admin Tool using their existing enterprise Azure Active Directory credentials.
+Create a CSR/Helpdesk user who accesses TheAccessHub Admin Tool by using their existing enterprise Azure Active Directory credentials.
-To configure CSR/Helpdesk user with Single Sign-on (SSO), the following steps are recommended:
+To configure a CSR/Helpdesk user with single sign-on (SSO), we recommend the following steps:
-1. Log into TheAccessHub Admin Tool using credentials provided by N8 Identity.
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **Manager Tools** > **Manage Colleagues**
+2. Go to **Manager Tools** > **Manage Colleagues**.
-3. Select **Add Colleague**
+3. Select **Add Colleague**.
-4. Select **Colleague Type Azure Administrator**
+4. For **Colleague Type**, select **Azure Administrator**.
-5. Enter the colleague’s profile information
+5. Enter the colleague's profile information:
- a. Choosing a Home Organization will control who has permission to manage this user.
+ a. Choose a home organization to control who has permission to manage this user.
- b. For Login ID/Azure AD User Name supply the User Principal Name from the user’s Azure Active Directory account.
+ b. For **Login ID/Azure AD User Name**, supply the user principal name from the user's Azure Active Directory account.
- c. On the TheAccessHub Roles tab, check the managed role Helpdesk. It will allow the user to access the manage colleagues view. The user will still need to be placed into a group or be made an organization owner to act on customers.
+ c. On the **TheAccessHub Roles** tab, select the managed role **Helpdesk**. It will allow the user to access the **Manage Colleagues** view. The user will still need to be placed into a group or be made an organization owner to act on customers.
6. Select **Submit**.
-## Configure a new CSR user using a new identity
+## Configure a new CSR user by using a new identity
-Create a CSR/Helpdesk user who will access TheAccessHub Admin Tool with a new local credential unique to TheAccessHub Admin Tool. This will be used mainly by organizations that don't use an Azure AD for their enterprise.
+Create a CSR/Helpdesk user who will access TheAccessHub Admin Tool by using a new local credential that's unique to the tool. This option is used mainly by organizations that don't use Azure Active Directory.
-To [setup a CSR/Helpdesk](https://youtu.be/iOpOI2OpnLI) user without SSO, follow these steps:
+To [set up a CSR/Helpdesk user](https://youtu.be/iOpOI2OpnLI) without SSO:
-1. Log into TheAccessHub Admin Tool using credentials provided by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **Manager Tools** > **Manage Colleagues**
+2. Go to **Manager Tools** > **Manage Colleagues**.
-3. Select **Add Colleague**
+3. Select **Add Colleague**.
-4. Select **Colleague Type Local Administrator**
+4. For **Colleague Type**, select **Local Administrator**.
-5. Enter the colleague’s profile information
+5. Enter the colleague's profile information:
- a. Choosing a Home Organization will control who has permission to manage this user.
+ a. Choose a home organization to control who has permission to manage this user.
- b. On the **TheAccessHub Roles** tab, select the managed role **Helpdesk**. It will allow the user to access the manage colleagues view. The user will still need to be placed into a group or be made an organization owner to act on customers.
+ b. On the **TheAccessHub Roles** tab, select the managed role **Helpdesk**. It will allow the user to access the **Manage Colleagues** view. The user will still need to be placed into a group or be made an organization owner to act on customers.
-6. Copy the **Login ID/Email** and **One Time Password** attributes. Provide it to the new user. They'll use these credentials to log in to TheAccessHub Admin Tool. The user will be prompted to enter a new password on their first login.
+6. Copy the **Login ID/Email** and **One Time Password** attributes. Provide them to the new user. The user will use these credentials to log in to TheAccessHub Admin Tool. The user will be prompted to enter a new password on first login.
-7. Select **Submit**
+7. Select **Submit**.
## Configure partitioned CSR administration
-Permissions to manage customer and CSR/Helpdesk users in TheAccessHub Admin Tool are managed with the use of an organization hierarchy. All colleagues and customers have a home organization where they reside. Specific colleagues or groups of colleagues can be assigned as owners of organizations. Organization owners can manage (make changes to) colleagues and customers in organizations or suborganizations they own. To allow multiple colleagues to manage a set of users, a group can be created with many members. The group can then be assigned as an organization owner and all of the group’s members can manage colleagues and customers in the organization.
+Permissions to manage customer and CSR/Helpdesk users in TheAccessHub Admin Tool are managed through an organization hierarchy. All colleagues and customers have a home organization where they reside. You can assign specific colleagues or groups of colleagues as owners of organizations.
+
+Organization owners can manage (make changes to) colleagues and customers in organizations or suborganizations that they own. To allow multiple colleagues to manage a set of users, you can create a group that has many members. You can then assign the group as an organization owner. All of the group's members can then manage colleagues and customers in the organization.
### Create a new group
-1. Log into TheAccessHub Admin Tool using **credentials** provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **Organization > Manage Groups**
+2. Go to **Organization > Manage Groups**.
-3. Select > **Add Group**
+3. Select **Add Group**.
-4. Enter a **Group name**, **Group description**, and **Group owner**
+4. Enter values for **Group name**, **Group description**, and **Group owner**.
-5. Search for and check the boxes on the colleagues you want to be members of the group then select >**Add**
+5. Search for and select the check boxes for the colleagues you want to be members of the group, and then select **Add**.
6. At the bottom of the page, you can see all members of the group.
-7. If needed members can be removed by selecting the **x** at the end of the row
+ If necessary, you can remove members by selecting the **x** at the end of the row.
-8. Select **Submit**
+7. Select **Submit**.
### Create a new organization
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to Organization > **Manage Organizations**
+2. Go to **Organization** > **Manage Organizations**.
-3. Select > **Add Organization**
+3. Select **Add Organization**.
-4. Supply an **Organization name**, **Organization owner**, and **Parent organization**.
+4. Supply values for **Organization name**, **Organization owner**, and **Parent organization**:
- a. The organization name is ideally a value that corresponds to your customer data. When loading colleague and customer data, if you supply the name of the organization in the load, the colleague can be automatically placed into the organization.
+ a. The organization name is ideally a value that corresponds to your customer data. When you're loading colleague and customer data, if you supply the name of the organization in the load, the colleague can be automatically placed into the organization.
- b. The owner represents the person or group who will manage the customers and colleagues in this organization and any suborganization within.
+ b. The owner represents the person or group that will manage the customers and colleagues in this organization and any suborganization within it.
- c. The parent organization indicates which other organization is inherently, also responsible for this organization.
+ c. The parent organization indicates which other organization is also responsible for this organization.
5. Select **Submit**.
### Modify the hierarchy via the tree view
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
-
-2. Navigate to **Manager Tools** > **Tree View**
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-3. In this representation, you can visualize which colleagues and groups can manage which organizations.
+2. Go to **Manager Tools** > **Tree View**.
-4. The hierarchies can be modified by dragging organizations overtop organizations you want them to be parented by.
+3. In this representation, you can visualize which colleagues and groups can manage which organizations. Modify the hierarchy by dragging organizations into parent organizations.
5. Select **Save** when you're finished altering the hierarchy.
-## Customize welcome notification
+## Customize the welcome notification
-While you're using TheAccessHub to migrate users from a previous authentication solution into Azure AD B2C, you may want to customize the user welcome notification, which is sent to the user by TheAccessHub during migration. This message normally includes the link for the customer to set a new password in the Azure AD B2C directory.
+While you're using TheAccessHub Admin Tool to migrate users from a previous authentication solution into Azure AD B2C, you might want to customize the user welcome notification. TheAccessHub Admin Tool sends this notification to users during migration. This message normally includes a link for users to set a new password in the Azure AD B2C directory.
To customize the notification:
-1. Log into TheAccessHub using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Notifications**
+2. Go to **System Admin** > **Notifications**.
-3. Select the **Create Colleague template**
+3. Select the **Create Colleague** template.
-4. Select **Edit**
+4. Select **Edit**.
-5. Alter the Message and Template fields as necessary. The Template field is HTML aware and can send HTML formatted notifications to customer emails.
+5. Alter the **Message** and **Template** fields as necessary. The **Template** field is HTML aware and can send HTML-formatted email notifications to customers.
-6. Select **Save** when finished.
+6. Select **Save** when you're finished.
## Migrate data from external data sources to Azure AD B2C
-Using TheAccessHub Admin Tool, you can import data from various databases, LDAPs, and CSV files and then push that data to your Azure AD B2C tenant. It's done by loading data into the Azure AD B2C user colleague type within TheAccessHub Admin Tool. If the source of data isn't Azure itself, the data will be placed into both TheAccessHub Admin Tool and Azure AD B2C. If the source of your external data isn't a simple .csv file on your machine, set up a data source before doing the data load. The below steps describe creating a data source and doing the data load.
+By using TheAccessHub Admin Tool, you can import data from various databases, LDAPs, and .csv files and then push that data to your Azure AD B2C tenant. You migrate the data by loading it into the Azure AD B2C user colleague type within TheAccessHub Admin Tool.
+
+If the source of data isn't Azure itself, the data will be placed into both TheAccessHub Admin Tool and Azure AD B2C. If the source of your external data isn't a simple .csv file on your machine, set up a data source before doing the data load. The following steps describe creating a data source and loading the data.
### Configure a new data source
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Data Sources**
+2. Go to **System Admin** > **Data Sources**.
-3. Select **Add Data Source**
+3. Select **Add Data Source**.
-4. Supply a **Name** and **Type** for this data source
+4. Supply **Name** and **Type** values for this data source.
-5. Fill in the form data, depending on your data source type:
+5. Fill in the form data, depending on your data source type.
- **For databases**
+ For databases:
- a. **Type** – Database
+ a. For **Type**, enter **Database**.
- b. **Database type** – Select a database from one of the supported database types.
+ b. For **Database type**, select a database from one of the supported database types.
- c. **Connection URL** – Enter a well-formatted JDBC connection string. Such as: ``jdbc:postgresql://myhost.com:5432/databasename``
+ c. For **Connection URL**, enter a well-formatted JDBC connection string, such as `jdbc:postgresql://myhost.com:5432/databasename`.
- d. **Username** – Enter the username for accessing the database
+ d. For **Username**, enter the username for accessing the database.
- e. **Password** – Enter the password for accessing the database
+ e. For **Password**, enter the password for accessing the database.
- f. **Query** – Enter the SQL query to extract the customer details. Such as: ``SELECT * FROM mytable;``
+ f. For **Query**, enter the SQL query to extract the customer details, such as `SELECT * FROM mytable;`.
- g. Select **Test Connection**, you'll see a sample of your data to ensure the connection is working.
+ g. Select **Test Connection**. You'll see a sample of your data to ensure that the connection is working.
- **For LDAPs**
+ For LDAPs:
- a. **Type** – LDAP
+ a. For **Type**, enter **LDAP**.
- b. **Host** – Enter the hostname or IP for machine in which the LDAP server is running. Such as: ``mysite.com``
+ b. For **Host**, enter the host name or IP address for the machine in which the LDAP server is running, such as `mysite.com`.
- c. **Port** – Enter the port number in which the LDAP server is listening.
+ c. For **Port**, enter the port number in which the LDAP server is listening.
- d. **SSL** – Check the box if TheAccessHub Admin Tool should communicate to the LDAP securely using SSL. Using SSL is highly recommended.
+ d. For **SSL**, select the box if TheAccessHub Admin Tool should communicate to the LDAP securely by using SSL. We highly recommend using SSL.
- e. **Login DN** – Enter the DN of the user account to log in and do the LDAP search
+ e. For **Login DN**, enter the distinguished name (DN) of the user account to log in and do the LDAP search.
- f. **Password** – Enter the password for the user
+ f. For **Password**, enter the password for the user.
- g. **Base DN** – Enter the DN at the top of the hierarchy in which you want to do the search
+ g. For **Base DN**, enter the DN at the top of the hierarchy in which you want to do the search.
- h. **Filter** – Enter the LDAP filter string, which will obtain your customer records
+ h. For **Filter**, enter the LDAP filter string, which will obtain your customer records.
- i. **Attributes** – Enter a comma-separated list of attributes from your customer records to pass to TheAccessHub Admin Tool
+ i. For **Attributes**, enter a comma-separated list of attributes from your customer records to pass to TheAccessHub Admin Tool.
- j. Select the **Test Connection**, you'll see a sample of your data to ensure the connection is working.
+ j. Select the **Test Connection**. You'll see a sample of your data to ensure that the connection is working.
- **For OneDrive**
+ For OneDrive:
- a. **Type** ΓÇô OneDrive for Business
+ a. For **Type**, select **OneDrive for Business**.
- b. Select **Authorize Connection**
+ b. Select **Authorize Connection**.
- c. A new window will prompt you to log in to **OneDrive**, login with a user with read access to your OneDrive account. TheAccessHub Admin Tool, will act for this user to read CSV load files.
+ c. A new window prompts you to sign in to OneDrive. Sign in with a user that has read access to your OneDrive account. TheAccessHub Admin Tool will act for this user to read .csv load files.
d. Follow the prompts and select **Accept** to grant TheAccessHub Admin Tool the requested permissions.
-6. Select **Save** when finished.
+6. Select **Save** when you're finished.
### Synchronize data from your data source into Azure AD B2C
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
+
+2. Go to **System Admin** > **Data Synchronization**.
-2. Navigate to **System Admin** > **Data Synchronization**
+3. Select **New Load**.
-3. Select **New Load**
+4. For **Colleague Type**, select **Azure AD B2C User**.
-4. Select the **Colleague Type** Azure AD B2C User
+5. Select **Source**. In the pop-up dialog, select your data source. If you created a OneDrive data source, also select the file.
-5. Select **Source**, in the pop-up dialog, select your data source. If you created a OneDrive data source, also select the file.
+6. If you don't want to create new customer accounts with this load, change the first policy (**IF colleague not found in TheAccessHub THEN**) to **Do Nothing**.
-6. If you don’t want to create new customer accounts with this load, then change the first policy: **IF colleague not found in TheAccessHub THEN** to **Do Nothing**
+7. If you don't want to update existing customer accounts with this load, change the second policy (**IF source and TheAccessHub data mismatch THEN**) to **Do Nothing**.
-7. If you don’t want to update existing customer accounts with this load, then change the second policy **IF source and TheAccessHub data mismatch THEN** to **Do Nothing**
+8. Select **Next**.
-8. Select **Next**
+9. In **Search-Mapping configuration**, you identify how to correlate load records with customers already loaded into TheAccessHub Admin Tool.
-9. In the **Search-Mapping configuration**, we identify how to correlate load records with customers already loaded into TheAccessHub Admin Tool. Choose one or more identifying attributes in the source. Match the attributes with an attribute in TheAccessHub Admin Tool that holds the same values. If a match is found, then the existing record will be overridden; otherwise, a new customer will be created. You can sequence a number of these checks. For example, you could check email first, and then first and last name.
+ Choose one or more identifying attributes in the source. Match the attributes with an attribute in TheAccessHub Admin Tool that holds the same values. If a match is found, the existing record will be overridden. Otherwise, a new customer will be created.
+
+ You can sequence a number of these checks. For example, you could check email first, and then check first and last name.
-10. On the left-hand side menu, select **Data Mapping**.
+10. On the left-side menu, select **Data Mapping**.
-11. In the Data-Mapping configuration, assign which TheAccessHub Admin Tool attributes should be populated from your source attributes. No need to map all the attributes. Unmapped attributes will remain unchanged for existing customers.
+11. In **Data-Mapping configuration**, assign the TheAccessHub Admin Tool attributes that should be populated from your source attributes. There's no need to map all the attributes. Unmapped attributes will remain unchanged for existing customers.
-12. If you map to the attribute org_name with a value that is the name of an existing organization, then new customers created will be placed in that organization.
+12. If you map to the attribute `org_name` with a value that is the name of an existing organization, newly created customers will be placed in that organization.
-13. Select **Next**
+13. Select **Next**.
-14. A Daily/Weekly or Monthly schedule may be specified if this load should be reoccurring. Otherwise keep the default **Now**.
+14. If you want this load to be recurring, specify a **Daily/Weekly** or **Monthly** schedule. Otherwise, keep the default of **Now**.
-15. Select **Submit**
+15. Select **Submit**.
-16. If the **Now schedule** was selected, a new record will be added to the Data Synchronizations screen immediately. Once the validation phase has reached 100%, select the **new record** to see the expected outcome for the load. For scheduled loads, these records will only appear after the scheduled time.
+16. If you selected the **Now** schedule, a new record will be added to the **Data Synchronizations** screen immediately. After the validation phase has reached 100 percent, select the new record to see the expected outcome for the load. For scheduled loads, these records will appear only after the scheduled time.
-17. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the **More** menu to remove the load. You can then correct the source data or load mappings and try again. Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Finally, you can continue with any errors and resolve them later as **Support Interventions** in TheAccessHub Admin Tool.
+17. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the **More** menu to remove the load. You can then correct the source data or load mappings and try again.
-18. When the **Data Synchronization** record becomes 100% on the load phase, all the changes resulting from the load will have been initiated. Customers should begin appearing or receiving changes in Azure AD B2C.
+ Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Another option is to continue with any errors and resolve them later as **Support Interventions** in TheAccessHub Admin Tool.
+
+18. When the **Data Synchronization** record becomes 100 percent on the load phase, all the changes resulting from the load have been initiated. Customers should begin appearing or receiving changes in Azure AD B2C.
## Synchronize Azure AD B2C customer data
-As a one-time or ongoing operation, TheAccessHub Admin Tool can synchronize all the customer information from Azure AD B2C into TheAccessHub Admin Tool. This ensures that CSR/Helpdesk administrators are seeing up-to-date customer information.
+As a one-time or ongoing operation, TheAccessHub Admin Tool can synchronize all the customer information from Azure AD B2C into TheAccessHub Admin Tool. This operation ensures that CSR/Helpdesk administrators see up-to-date customer information.
To synchronize data from Azure AD B2C into TheAccessHub Admin Tool:
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Data Synchronization**
+2. Go to **System Admin** > **Data Synchronization**.
-3. Select **New Load**
+3. Select **New Load**.
-4. Select the **Colleague Type** Azure AD B2C User
+4. For **Colleague Type**, select **Azure AD B2C User**.
5. For the **Options** step, leave the defaults.
-6. Select **Next**
+6. Select **Next**.
+
+7. For the **Data Mapping & Search** step, leave the defaults. Exception: if you map to the attribute `org_name` with a value that is the name of an existing organization, newly created customers will be placed in that organization.
-7. For the **Data Mapping & Search** step, leave the defaults. Except if you map to the attribute **org_name** with a value that is the name of an existing organization, then new customers created will be placed in that organization.
+8. Select **Next**.
-8. Select **Next**
+9. If you want this load to be recurring, specify a **Daily/Weekly** or **Monthly** schedule. Otherwise, keep the default of **Now**. We recommend syncing from Azure AD B2C on a regular basis.
-9. A Daily/Weekly or Monthly schedule may be specified if this load should be reoccurring. Otherwise keep the default: **Now**. We recommend syncing from Azure AD B2C on a regular basis.
+10. Select **Submit**.
-10. Select **Submit**
+11. If you selected the **Now** schedule, a new record will be added to the **Data Synchronizations** screen immediately. After the validation phase has reached 100 percent, select the new record to see the expected outcome for the load. For scheduled loads, these records will appear only after the scheduled time.
-11. If the **Now** schedule was selected, a new record will be added to the Data Synchronizations screen immediately. Once the validation phase has reached 100%, select the new record to see the expected outcome for the load. For scheduled loads, these records will only appear after the scheduled time.
+12. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the **More** menu to remove the load. You can then correct the source data or load mappings and try again.
-12. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the More menu to remove the load. You can then correct the source data or load mappings and try again. Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Finally, you can continue with any errors and resolve them later as Support Interventions in TheAccessHub Admin Tool.
+ Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Another option is to continue with any errors and resolve them later as **Support Interventions** in TheAccessHub Admin Tool.
-13. When the **Data Synchronization** record becomes 100% on the load phase, all the changes resulting from the load will have been initiated.
+13. When the **Data Synchronization** record becomes 100 percent on the load phase, all the changes resulting from the load have been initiated.
## Configure Azure AD B2C policies
-Occasionally syncing TheAccessHub Admin Tool is limited in its ability to keep its state up to date with Azure AD B2C. We can leverage TheAccessHub Admin Tool’s API and Azure AD B2C policies to inform TheAccessHub Admin Tool of changes as they happen. This solution requires technical knowledge of [Azure AD B2C custom policies](./user-flow-overview.md). In the next section, we'll give you an example policy steps and a secure certificate to notify TheAccessHub Admin Tool of new accounts in your Sign-Up custom policies.
+Occasional syncing of TheAccessHub Admin Tool limits the tool's ability to keep its state up to date with Azure AD B2C. You can use TheAccessHub Admin Tool's API and Azure AD B2C policies to inform TheAccessHub Admin Tool of changes as they happen. This solution requires technical knowledge of [Azure AD B2C custom policies](./user-flow-overview.md).
+
+The following procedures give you example policy steps and a secure certificate to notify TheAccessHub Admin Tool of new accounts in your sign-up custom policies.
-### Create a secure credential to invoke TheAccessHub Admin Tool’s API
+### Create a secure credential to invoke TheAccessHub Admin Tool's API
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Admin Tools** > **API Security**
+2. Go to **System Admin** > **Admin Tools** > **API Security**.
-3. Select **Generate**
+3. Select **Generate**.
-4. Copy the **Certificate Password**
+4. Copy the **Certificate Password**.
5. Select **Download** to get the client certificate.
-6. Follow this [tutorial](./secure-rest-api.md#https-client-certificate-authentication ) to add the client certificate into Azure AD B2C.
+6. Follow [this tutorial](./secure-rest-api.md#https-client-certificate-authentication) to add the client certificate into Azure AD B2C.
### Retrieve your custom policy examples
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Admin Tools** > **Azure B2C Policies**
+2. Go to **System Admin** > **Admin Tools** > **Azure B2C Policies**.
-3. Supply your Azure AD B2C tenant domain and the two Identity Experience Framework IDs from your Identity Experience Framework configuration
+3. Supply your Azure AD B2C tenant domain and the two Identity Experience Framework IDs from your Identity Experience Framework configuration.
-4. Select **Save**
+4. Select **Save**.
-5. Select **Download** to get a zip file with basic policies that add customers into TheAccessHub Admin Tool as customers sign up.
+5. Select **Download** to get a .zip file with basic policies that add customers into TheAccessHub Admin Tool as customers sign up.
-6. Follow this [tutorial](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to get started with designing custom policies in Azure AD B2C.
+6. Follow [this tutorial](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to get started with designing custom policies in Azure AD B2C.
## Next steps
-For additional information, review the following articles:
+For more information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md
Previously updated : 05/03/2021 Last updated : 06/08/2022
If the type of authentication is set to `ApiKeyHeader`, the **CryptographicKeys*
| The name of the HTTP header, such as `x-functions-key` or `x-api-key`. | Yes | The key that is used to authenticate. |

> [!NOTE]
-> At this time, Azure AD B2C supports only one HTTP header for authentication. If your RESTful call requires multiple headers, such as a client ID and client secret, you will need to proxy the request in some manner.
+> At this time, Azure AD B2C supports only one HTTP header for authentication. If your RESTful call requires multiple headers, such as a client ID and client secret value, you will need to proxy the request in some manner.
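The note above leaves the proxying approach open ("in some manner"). One possible shape is a small intermediary service that accepts the single header Azure AD B2C can send and adds the second credential before forwarding the call. The sketch below uses Flask and `requests` with hypothetical header names, environment variables, and downstream URL; it is not part of the Azure AD B2C article:

```python
# Minimal proxy sketch: accept one API-key header from Azure AD B2C,
# then call the downstream API with both headers it actually requires.
# All names here (URL, header names, environment variables) are hypothetical.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
DOWNSTREAM_URL = os.getenv("DOWNSTREAM_URL", "https://api.example.com/signup")

@app.route("/proxy/signup", methods=["POST"])
def proxy_signup():
    # Validate the single header Azure AD B2C was configured to send.
    if request.headers.get("x-api-key") != os.getenv("PROXY_API_KEY"):
        return jsonify({"error": "unauthorized"}), 401

    # Forward the payload, adding the extra credential the downstream API expects.
    downstream = requests.post(
        DOWNSTREAM_URL,
        json=request.get_json(silent=True) or {},
        headers={
            "client-id": os.getenv("DOWNSTREAM_CLIENT_ID", ""),
            "client-secret": os.getenv("DOWNSTREAM_CLIENT_SECRET", ""),
        },
        timeout=10,
    )
    return (downstream.text, downstream.status_code, {"Content-Type": "application/json"})

if __name__ == "__main__":
    app.run(port=5000)
```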
```xml <TechnicalProfile Id="REST-API-SignUp">
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **CryptographicKeys** element contains the following attributes:
| Attribute | Required | Description |
| --- | --- | --- |
| SamlMessageSigning |Yes | The X509 certificate (RSA key set) to use to sign SAML messages. Azure AD B2C uses this key to sign the requests and send them to the identity provider. |
-| SamlAssertionDecryption |No | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. |
+| SamlAssertionDecryption |No* | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. <br/><br/> * Required if the external IdP encrypts SAML assertions.|
| MetadataSigning |No | The X509 certificate (RSA key set) to use to sign SAML metadata. Azure AD B2C uses this key to sign the metadata. |

## Next steps
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
Previously updated : 04/05/2022 Last updated : 06/08/2022 zone_pivot_groups: b2c-policy-type
For a client credentials flow, you need to create an application secret. The cli
#### Create Azure AD B2C policy keys
-You need to store the client ID and the client secret that you previously recorded in your Azure AD B2C tenant.
+You need to store the client ID and the client secret value that you previously recorded in your Azure AD B2C tenant.
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
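For context on the client credentials flow referenced above: the policy keys hold the client ID and client secret value that the policy uses, and the same token exchange can be reproduced from code when you test the protected REST API. The following MSAL for Python sketch uses placeholder tenant, client ID, and scope values and is not taken from the article:

```python
# Sketch: acquire a token with the OAuth 2.0 client credentials flow using MSAL for Python.
# Placeholder values throughout; the scope must match the API you exposed.
import os
import msal

authority = "https://login.microsoftonline.com/contoso.onmicrosoft.com"  # placeholder tenant
app = msal.ConfidentialClientApplication(
    client_id="11111111-1111-1111-1111-111111111111",   # placeholder client ID
    client_credential=os.getenv("CLIENT_SECRET"),        # the client secret value
    authority=authority,
)

# ".default" requests the application permissions configured on the API registration.
result = app.acquire_token_for_client(scopes=["https://contoso.onmicrosoft.com/api/.default"])
if "access_token" in result:
    print("Got a bearer token for the REST API")
else:
    print("Token request failed:", result.get("error_description"))
```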
active-directory-b2c View Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md
Previously updated : 02/20/2020 Last updated : 06/08/2022
You can try this script in the [Azure Cloud Shell](overview.md). Be sure to upda
# Constants
$ClientID = "your-client-application-id-here" # Insert your application's client ID, a GUID
-$ClientSecret = "your-client-application-secret-here" # Insert your application's client secret
+$ClientSecret = "your-client-application-secret-here" # Insert your application's client secret value
$tenantdomain = "your-b2c-tenant.onmicrosoft.com" # Insert your Azure AD B2C tenant domain name
$loginURL = "https://login.microsoftonline.com"
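The PowerShell excerpt stops at the constants. A rough Python counterpart of the overall flow (request a token with the client credentials grant, then query audit events) is sketched below. It assumes the Microsoft Graph `auditLogs/directoryAudits` endpoint and an app granted `AuditLog.Read.All`; it is not a transcription of the article's script:

```python
# Rough Python equivalent of the script's constants and token request (placeholders throughout).
import requests

client_id = "your-client-application-id-here"
client_secret = "your-client-application-secret-here"
tenant_domain = "your-b2c-tenant.onmicrosoft.com"
login_url = "https://login.microsoftonline.com"

# Client credentials grant against the v2.0 token endpoint.
token_response = requests.post(
    f"{login_url}/{tenant_domain}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    },
    timeout=30,
)
access_token = token_response.json()["access_token"]

# Query directory audit events (assumes the AuditLog.Read.All application permission).
audits = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
print(audits.json())
```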
active-directory About Microsoft Identity Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/about-microsoft-identity-platform.md
Title: Evolution of Microsoft identity platform - Azure
+ Title: Evolution of Microsoft identity platform
description: Learn about Microsoft identity platform, an evolution of the Azure Active Directory (Azure AD) identity service and developer platform.
The [Microsoft identity platform](../develop/index.yml) is an evolution of the A
Many developers have previously worked with the Azure AD v1.0 platform to authenticate work and school accounts (provisioned by Azure AD) by requesting tokens from the Azure AD v1.0 endpoint, using Azure AD Authentication Library (ADAL), Azure portal for application registration and configuration, and the Microsoft Graph API for programmatic application configuration.
-With the unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application. For several platforms, the fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL is simple to use, provides great single sign-on (SSO) experiences for your users, helps you achieve high reliability and performance, and is developed using Microsoft Secure Development Lifecycle (SDL). When calling APIs, you can configure your application to take advantage of incremental consent, which allows you to delay the request for consent for more invasive scopes until the application’s usage warrants this at runtime. MSAL also supports Azure Active Directory B2C, so your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs.
+With the unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application. For several platforms, the fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL is simple to use, provides great single sign-on (SSO) experiences for your users, helps you achieve high reliability and performance, and is developed using Microsoft Secure Development Lifecycle (SDL). When calling APIs, you can configure your application to take advantage of incremental consent, which allows you to delay the request for consent for more invasive scopes until the application's usage warrants this at runtime. MSAL also supports Azure Active Directory B2C, so your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs.
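Incremental consent, mentioned in the paragraph above, is easiest to see in code: start with a minimal scope and request a more invasive scope only when the feature that needs it runs. The MSAL for Python sketch below uses a placeholder client ID and example scopes; a production app would also handle token caching and errors:

```python
# Sketch of incremental consent with MSAL for Python (placeholder client ID and example scopes).
import msal

app = msal.PublicClientApplication(
    client_id="11111111-1111-1111-1111-111111111111",
    authority="https://login.microsoftonline.com/common",
)

# Initial sign-in asks only for the scope the app needs right away
# (opens a browser; the registration needs a localhost redirect URI).
first = app.acquire_token_interactive(scopes=["User.Read"])

# Later, when a feature needs a more invasive scope, request it then;
# the user is prompted for consent only at that point.
accounts = app.get_accounts()
more = app.acquire_token_silent(["Mail.Read"], account=accounts[0]) if accounts else None
if not more:
    more = app.acquire_token_interactive(scopes=["Mail.Read"])
```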
With Microsoft identity platform, expand your reach to these kinds of users:
The following diagram shows the Microsoft identity experience at a high level, i
### App registration experience
-The Azure portal **[App registrations](https://go.microsoft.com/fwlink/?linkid=2083908)** experience is the one portal experience for managing all applications you’ve integrated with Microsoft identity platform. If you have been using the Application Registration Portal, start using the Azure portal app registration experience instead.
+The Azure portal **[App registrations](https://go.microsoft.com/fwlink/?linkid=2083908)** experience is the one portal experience for managing all applications you've integrated with Microsoft identity platform. If you have been using the Application Registration Portal, start using the Azure portal app registration experience instead.
-For integration with Azure AD B2C (when authenticating social or local identities), you’ll need to register your application in an Azure AD B2C tenant. This experience is also part of the Azure portal.
+For integration with Azure AD B2C (when authenticating social or local identities), you'll need to register your application in an Azure AD B2C tenant. This experience is also part of the Azure portal.
Use the [Application API](/graph/api/resources/application) to programmatically configure your applications integrated with Microsoft identity platform for authenticating any Microsoft identity.
active-directory Active Directory Devhowto Adal Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-devhowto-adal-error-handling.md
Title: ADAL client app error handling best practices | Azure
+ Title: ADAL client app error handling best practices
description: Provides error handling guidance and best practices for ADAL client applications.
active-directory App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/app-types.md
Title: Application types in v1.0 | Azure
+ Title: Application types in v1.0
description: Describes the types of apps and scenarios supported by the Azure Active Directory v2.0 endpoint.
active-directory Azure Ad Endpoint Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-endpoint-comparison.md
Title: Why update to Microsoft identity platform (v2.0) | Azure
+ Title: Why update to Microsoft identity platform (v2.0)
description: Know the differences between the Microsoft identity platform (v2.0) endpoint and the Azure Active Directory (Azure AD) v1.0 endpoint, and learn the benefits of updating to v2.0.
active-directory V1 Authentication Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-authentication-scenarios.md
Title: Azure AD for developers (v1.0) | Azure
+ Title: Azure AD for developers (v1.0)
description: Learn authentication basics for Azure AD for developers (v1.0) such as the app model, API, provisioning, and the most common authentication scenarios. documentationcenter: dev-center-name
active-directory Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/videos.md
Title: Azure ADAL to MSAL migration videos | Azure
+ Title: Azure ADAL to MSAL migration videos
description: Videos that help you migrate from the Azure Active Directory developer platform to the Microsoft identity platform -+ Last updated 02/12/2020
active-directory Multi Service Web App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-app.md
Title: Tutorial - Web app accesses Microsoft Graph as the app| Azure
+ Title: Tutorial - Web app accesses Microsoft Graph as the app
description: In this tutorial, you learn how to access data in Microsoft Graph by using managed identities.
active-directory Request Custom Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/request-custom-claims.md
Title: Request custom claims (MSAL iOS/macOS) | Azure
+ Title: Request custom claims (MSAL iOS/macOS)
description: Learn how to request custom claims.
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Title: Microsoft identity platform overview - Azure
+ Title: Microsoft identity platform overview
description: Learn about the components of the Microsoft identity platform and how they can help you build identity and access management (IAM) support into your applications.
There are several components that make up the Microsoft identity platform:
- **Application configuration API and PowerShell**: Programmatic configuration of your applications through the Microsoft Graph API and PowerShell so you can automate your DevOps tasks. - **Developer content**: Technical documentation including quickstarts, tutorials, how-to guides, and code samples.
-For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You donΓÇÖt need to implement such functionality yourself: applications integrated with the Microsoft identity platform natively take advantage of such innovations.
+For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You don't need to implement such functionality yourself: applications integrated with the Microsoft identity platform natively take advantage of such innovations.
With the Microsoft identity platform, you can write code once and reach any user. You can build an app once and have it work across many platforms, or build an app that functions as a client as well as a resource application (API).
active-directory Conditional Access Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/conditional-access-exclusion.md
Title: Manage users excluded from Conditional Access policies - Azure AD
+ Title: Manage users excluded from Conditional Access policies
description: Learn how to use Azure Active Directory (Azure AD) access reviews to manage users that have been excluded from Conditional Access policies documentationcenter: ''
active-directory How To Connect Fed Sha256 Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-sha256-guidance.md
Title: Change signature hash algorithm for Microsoft 365 relying party trust - Azure
+ Title: Change signature hash algorithm for Microsoft 365 relying party trust
description: This page provides guidelines for changing SHA algorithm for federation trust with Microsoft 365. keywords: SHA1,SHA256,M365,federation,aadconnect,adfs,ad fs,change sha,federation trust,relying party trust
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md
Title: Federating multiple Azure AD with single AD FS - Azure
+ Title: Federating multiple Azure AD with single AD FS
description: In this document, you will learn how to federate multiple Azure AD with a single AD FS. keywords: federate, ADFS, AD FS, multiple tenants, single AD FS, one ADFS, multi-tenant federation, multi-forest adfs, aad connect, federation, cross-tenant federation
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
-While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having manage any credetials.
+While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
The following video shows how you can use managed identities:</br>
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
Title: Tutorial`:` Use a managed identity to access Azure Key Vault - Linux - Azure AD
+ Title: "Tutorial: Use a managed identity to access Azure Key Vault - Linux"
description: A tutorial that walks you through the process of using a Linux VM system-assigned managed identity to access Azure Resource Manager. documentationcenter: ''
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Title: Use managed identities from a virtual machine to access Cosmos DB | Microsoft Docs
+ Title: Use managed identities from a virtual machine to access Cosmos DB
description: Learn how to use managed identities with Windows VMs using the Azure portal, CLI, PowerShell, Azure Resource Manager template
Depending on your API version, you have to take [different steps](qs-configure-t
```json "variables": {
- "identityName": "my-user-assigned"
-
- },
+ "identityName": "my-user-assigned"
+
+ },
``` Under the resources element, add the following entry to assign a user-assigned managed identity to your VM. Be sure to replace ```<identityName>``` with the name of the user-assigned managed identity you created.
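The template entry referenced above isn't reproduced in this change summary. If you'd rather attach the identity from the command line than edit the template, a rough sketch along these lines should work; the resource group and VM name are placeholders, and the identity name is taken from the `identityName` variable shown above.

```azurecli
# Assign an existing user-assigned managed identity to a VM (resource group and VM name are placeholders)
az vm identity assign \
  --resource-group myResourceGroup \
  --name myVM \
  --identities my-user-assigned
```

The template route is still preferable when the identity assignment has to be part of a repeatable deployment; the CLI call is a one-off equivalent.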
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
az aks nodepool add \
--cluster-name myAKSCluster \ --resource-group myResourceGroup \ --name myNodepool \
- --enable-custom-ca-trust
+ --enable-custom-ca-trust \
+ --os-type Linux
``` ## Configure an existing nodepool to use a custom CA
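The steps for an existing node pool aren't shown in this excerpt. As a sketch, the update presumably mirrors the `az aks nodepool add` call above; the `--enable-custom-ca-trust` flag on `az aks nodepool update` is an assumption here, so confirm it against your installed `aks-preview` extension before relying on it.

```azurecli
# Sketch: enable custom CA trust on an existing node pool (flag assumed to match the add command)
az aks nodepool update \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name myNodepool \
    --enable-custom-ca-trust
```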
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
This article assumes that you have an existing AKS cluster. If you need an AKS c
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-The AKS cluster cluster identity needs permission to manage network resources if you use an existing subnet or resource group. For information see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][use-kubenet] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][advanced-networking]. If you are configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the the AKS cluster identity also has read access to that subnet.
+The AKS cluster identity needs permission to manage network resources if you use an existing subnet or resource group. For information, see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][use-kubenet] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][advanced-networking]. If you are configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the AKS cluster identity also has read access to that subnet.
For more information on permissions, see [Delegate AKS access to other Azure resources][aks-sp].
internal-app LoadBalancer 10.0.184.168 10.240.0.25 80:30225/TCP 4m
For more information on configuring your load balancer in a different subnet, see [Specify a different subnet][different-subnet]
+## Connect Azure Private Link service to internal load balancer (Preview)
+
+To attach an Azure Private Link Service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotations, as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: internal-app
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+ service.beta.kubernetes.io/azure-pls-create: "true"
+spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: internal-app
+```
+
+Deploy the internal load balancer using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+```console
+kubectl apply -f internal-lb-pls.yaml
+```
+
+An Azure load balancer is created in the node resource group and connected to the same virtual network as the AKS cluster.
+
+When you view the service details, the IP address of the internal load balancer is shown in the *EXTERNAL-IP* column. In this context, *External* is in relation to the external interface of the load balancer, not that it receives a public, external IP address. It may take a minute or two for the IP address to change from *\<pending\>* to an actual internal IP address, as shown in the following example:
+
+```
+$ kubectl get service internal-app
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+internal-app LoadBalancer 10.125.17.53 10.125.0.66 80:30430/TCP 64m
+```
+
+Additionally, a Private Link Service object is created that connects to the frontend IP configuration of the load balancer associated with the Kubernetes service. You can retrieve the details of the Private Link Service object as shown in the following example:
+```
+$ AKS_MC_RG=$(az aks show -g myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
+$ az network private-link-service list -g ${AKS_MC_RG} --query "[].{Name:name,Alias:alias}" -o table
+
+Name Alias
+-- -
+pls-xyz pls-xyz.abc123-defg-4hij-56kl-789mnop.eastus2.azure.privatelinkservice
+
+```
+
+### Create a Private Endpoint to the Private Link Service
+
+A Private Endpoint allows you to privately connect to your Kubernetes service object via the Private Link Service created above. To do so, follow the example shown below:
+
+```azurecli
+$ AKS_PLS_ID=$(az network private-link-service list -g ${AKS_MC_RG} --query "[].id" -o tsv)
+$ az network private-endpoint create \
+ -g myOtherResourceGroup \
+ --name myAKSServicePE \
+ --vnet-name myOtherVNET \
+ --subnet pe-subnet \
+ --private-connection-resource-id ${AKS_PLS_ID} \
+ --connection-name connectToMyK8sService
+```
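As an optional follow-up that isn't part of the original steps, you can confirm that the private endpoint's connection to the Private Link Service was approved. The JMESPath query below assumes the standard private endpoint resource shape returned by the CLI.

```azurecli
# Check the connection state of the new private endpoint (query path assumed)
az network private-endpoint show \
  -g myOtherResourceGroup \
  --name myAKSServicePE \
  --query "privateLinkServiceConnections[0].privateLinkServiceConnectionState.status" \
  -o tsv
```

A status of `Approved` means traffic sent to the private endpoint's IP address in `myOtherVNET` reaches the internal load balancer, and from there the `internal-app` service.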
+ ## Use private networks When you create your AKS cluster, you can specify advanced networking settings. This approach lets you deploy the cluster into an existing Azure virtual network and subnets. One scenario is to deploy your AKS cluster into a private network connected to your on-premises environment and run services only accessible internally. For more information, see configure your own virtual network subnets with [Kubenet][use-kubenet] or [Azure CNI][advanced-networking].
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
Access policies determine which identities can use the authorization that the ac
### Process flow for creating authorizations
-The following image shows the process flow for creating an authorization in API Management using the grant type authorization code. For public preview no API documentation is available. Please see [this](https://aka.ms/apimauthorizations/postmancollection) Postman collection.
+The following image shows the process flow for creating an authorization in API Management using the authorization code grant type. For the public preview, no API documentation is available.
:::image type="content" source="media/authorizations-overview/get-token.svg" alt-text="Process flow for creating authorizations" border="false":::
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
For example, insert the policy fragment named *ForwardContext* in the inbound po
## Manage policy fragments
-After creating a policy fragment, you can view and update policy properties, or delete the policy at any time.
+After creating a policy fragment, you can view and update its properties, or delete the policy fragment at any time.
-**To view properties of a fragment:**
+**To view properties of a policy fragment:**
1. In the left navigation of your API Management instance, under **APIs**, select **Policy fragments**. Select the name of your fragment. 1. On the **Overview** page, review the **Policy document references** to see the policy definitions that include the fragment.
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/03/2022
page lists the **compliance domains** and **security controls** for Azure App Se
assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard. +
+## Release notes
+
+### June 2022
+
+- Deprecation of policy "API App should only be accessible over HTTPS"
+- Rename of policy "Web Application should only be accessible over HTTPS" to "App Service apps should only be accessible over HTTPS"
+- Update scope of policy "App Service apps should only be accessible over HTTPS" to include all app types except Function apps
+- Update scope of policy "App Service apps should only be accessible over HTTPS" to include slots
+- Update scope of policy "Function apps should only be accessible over HTTPS" to include slots
+- Update logic of policy "App Service apps should use a SKU that supports private link" to include checks on App Service plan tier or name so that the policy supports Terraform deployments
+- Update list of supported SKUs of policy "App Service apps should use a SKU that supports private link" to include the Basic and Standard tiers
## Next steps
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-key-vault-common-errors.md
# Common key vault errors in Azure Application Gateway
-Application Gateway enables customers to securely store TLS certificates in Azure Key Vault. When using a Key Vault resource, it is important that the gateway always has access to the linked key vault. If your Application Gateway is unable to fetch the certificate, the associated HTTPS listeners will be placed in a disabled state. [Learn more](../application-gateway/disabled-listeners.md).
+Application Gateway enables customers to securely store TLS certificates in Azure Key Vault. When using a key vault resource, it is important that the gateway always has access to the linked key vault. If your Application Gateway is unable to fetch the certificate, the associated HTTPS listeners will be placed in a disabled state. [Learn more](../application-gateway/disabled-listeners.md).
-This article helps you understand the details of key vault error codes you might encounter, including what is causing these errors. This article also contains steps to resolve such misconfigurations.
+This article helps you understand the details of the error codes and the steps to resolve such key vault misconfigurations.
> [!TIP] > Use a secret identifier that doesn't specify a version. This way, Azure Application Gateway will automatically rotate the certificate, if a newer version is available in Azure Key Vault. An example of a secret URI without a version is: `https://myvault.vault.azure.net/secrets/mysecret/`. ## List of error codes and their details
-The following sections cover various errors you might encounter. You can find the details in Azure Advisor, and use this troubleshooting article to fix the problems. For more information, see [Create Azure Advisor alerts on new recommendations by using the Azure portal](../advisor/advisor-alerts-portal.md).
+The following sections describe the various errors you might encounter. You can verify whether your gateway has any such problem by visiting [**Azure Advisor**](./key-vault-certs.md#investigating-and-resolving-key-vault-errors) for your account, and then use this troubleshooting article to fix the problem. We recommend configuring Azure Advisor alerts to stay informed when a key vault problem is detected for your gateway.
> [!NOTE] > Azure Application Gateway generates logs for key vault diagnostics every four hours. If the diagnostic continues to show the error after you have fixed the configuration, you might have to wait for the logs to be refreshed.
The following sections cover various errors you might encounter. You can find th
[comment]: # (Error Code 1) ### Error code: UserAssignedIdentityDoesNotHaveGetPermissionOnKeyVault
-**Description:** The associated user-assigned managed identity doesn't have the "Get" permission.
+**Description:** The associated user-assigned managed identity doesn't have the required permission.
-**Resolution:** Configure the access policy of Key Vault to grant the user-assigned managed identity this permission on secrets.
-1. Go to the linked key vault in the Azure portal.
-1. Open the **Access policies** pane.
-1. For **Permission model**, select **Vault access policy**.
-1. Under **Secret Management Operations**, select the **Get** permission.
-1. Select **Save**.
+**Resolution:** Configure the access policies of your key vault to grant the user-assigned managed identity the required permission on secrets. You can do so in either of the following ways (a CLI sketch follows each set of portal steps):
+
+ **Vault access policy**
+ 1. Go to the linked key vault in the Azure portal.
+ 1. Open the **Access policies** blade.
+ 1. For **Permission model**, select **Vault access policy**.
+ 1. Under **Secret Management Operations**, select the **Get** permission.
+ 1. Select **Save**.
:::image type="content" source="./media/application-gateway-key-vault-common-errors/no-get-permssion-for-managed-identity.png " alt-text=" Screenshot that shows how to resolve the Get permission error."::: For more information, see [Assign a Key Vault access policy by using the Azure portal](../key-vault/general/assign-access-policy-portal.md).
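If you prefer the Azure CLI to the portal steps above, the following sketch grants the same **Get** permission on secrets; the identity name and vault name are placeholders.

```azurecli
# Look up the principal ID of the user-assigned managed identity (name is a placeholder)
principalId=$(az identity show \
  --resource-group myResourceGroup \
  --name myAppGatewayIdentity \
  --query principalId -o tsv)

# Grant that identity Get permission on secrets in the linked key vault
az keyvault set-policy \
  --name myvault \
  --object-id $principalId \
  --secret-permissions get
```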
+ **Azure role-based access control**
+ 1. Go to the linked key vault in the Azure portal.
+ 1. Open the **Access policies** blade.
+ 1. For **Permission model**, select **Azure role-based access control**.
+ 1. Next, go to the **Access control (IAM)** blade to configure permissions.
+ 1. Select **Add role assignment** for your managed identity by choosing the following:<br>
+ a. **Role**: Key Vault Secrets User<br>
+ b. **Assign access to**: Managed identity<br>
+ c. **Members**: select the user-assigned managed identity which you've associated with your application gateway.<br>
+ 1. Select **Review + assign**.
+
+For more information, see [Azure role-based access control in Key Vault](../key-vault/general/rbac-guide.md).
+
+> [!NOTE]
+> Portal support for adding a new key vault-based certificate is currently not available when using **Azure role-based access control**. You can accomplish it by using an ARM template, the Azure CLI, or PowerShell. Visit [this page](./key-vault-certs.md#key-vault-azure-role-based-access-control-permission-model) for guidance.
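Because the portal can't yet add a key vault-based certificate under the Azure RBAC model, a CLI sketch of the equivalent role assignment may be useful; the vault and identity names below are placeholders.

```azurecli
# Resolve the IDs involved (identity and vault names are placeholders)
principalId=$(az identity show -g myResourceGroup -n myAppGatewayIdentity --query principalId -o tsv)
vaultId=$(az keyvault show --name myvault --query id -o tsv)

# Grant the managed identity the Key Vault Secrets User role on the vault
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --scope $vaultId
```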
+ [comment]: # (Error Code 2) ### Error code: SecretDisabled
On the other hand, if a certificate object is permanently deleted, you will need
**Description:** The associated user-assigned managed identity has been deleted.
-**Resolution:** To use the identity again:
-1. Re-create a managed identity with the same name that was used previously, and under the same resource group. Resource activity logs contain more details.
-1. After you create the identity, go to **Application Gateway - Access Control (IAM)**. Assign the identity the **Reader** role, at a minimum.
-1. Finally, go to the desired Key Vault resource, and set its access policies to grant **Get** secret permissions for this new managed identity.
-
-For more information, see [How integration works](./key-vault-certs.md#how-integration-works).
+**Resolution:** Create a new managed identity and use it with the key vault. (A CLI sketch of these steps follows the list.)
+1. Re-create a managed identity with the same name that was previously used, in the same resource group. (**TIP**: Refer to the resource Activity Log for naming details.)
+1. Go to the desired key vault resource, and set its access policies to grant this new managed identity the required permission. You can follow the same steps as mentioned under [UserAssignedIdentityDoesNotHaveGetPermissionOnKeyVault](./application-gateway-key-vault-common-errors.md#error-code-userassignedidentitydoesnothavegetpermissiononkeyvault).
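A CLI sketch of the two steps above, assuming the vault uses the Vault access policy permission model; the identity and vault names are placeholders.

```azurecli
# Step 1: re-create the user-assigned managed identity with its previous name, in its previous resource group
az identity create --resource-group myResourceGroup --name myAppGatewayIdentity

# Step 2: grant the new identity Get permission on secrets in the linked key vault
principalId=$(az identity show -g myResourceGroup -n myAppGatewayIdentity --query principalId -o tsv)
az keyvault set-policy --name myvault --object-id $principalId --secret-permissions get
```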
[comment]: # (Error Code 5) ### Error code: KeyVaultHasRestrictedAccess
Select **Managed deleted vaults**. From here, you can find the deleted Key Vault
These troubleshooting articles might be helpful as you continue to use Application Gateway:
+- [Understanding and fixing disabled listeners](disabled-listeners.md)
- [Azure Application Gateway Resource Health overview](resource-health-overview.md)-- [Troubleshoot Azure Application Gateway session affinity issues](how-to-troubleshoot-application-gateway-session-affinity-issues.md)+
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Title: Layouts - Form Recognizer
-description: Learn concepts related to Layout API analysis with Form Recognizer APIΓÇöusage and limits.
+description: Learn concepts related to the Layout API with Form Recognizer REST API usage and limits.
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
| Layout | ✓ | ✓ | ✓ | ✓ | ✓ | **Supported paragraph roles**:
+Paragraph roles are best used with unstructured documents, structured documents, and forms. Roles help you analyze the structure of the extracted content for better semantic search and analysis.
* title * sectionHeading
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
* pageFooter * pageNumber
-For a richer semantic analysis, paragraph roles are best used with unstructured documents to better understand the layout of the extracted content.
- ## Development options The following tools are supported by Form Recognizer v2.1:
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Try extracting text from forms and documents using the Form Recognizer Studio. Y
### Form Recognizer Studio (preview) > [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API. The latest service preview is currently not enabled for analyzing Microsoft Word, Excel, PowerPoint, and HTML file formats using the Form Recognizer Studio.
+> Currently, Form Recognizer Studio doesn't support Microsoft Word, Excel, PowerPoint, and HTML file formats in the Read preview.
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
Try extracting text from forms and documents using the Form Recognizer Studio. Y
## Input requirements
-* Supported file formats: These include JPEG/JPG, PNG, BMP, TIFF, PDF (text-embedded or scanned). Additionally, Microsoft Word, Excel, PowerPoint, and HTML files are supported with the Read API in **2022-06-30-preview**.
+* Supported file formats: These include JPEG/JPG, PNG, BMP, TIFF, PDF (text-embedded or scanned). Additionally, the newest API version `2022-06-30-preview` supports Microsoft Word (DOCX), Excel (XLS), PowerPoint (PPT), and HTML files.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier. * Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. Get started with exploring the pre-trained models with sample documents or your own. Create projects to build custom template models and reference the models in your applications using the [Python SDK preview](try-v3-python-sdk.md) and other quickstarts. ## Prerequisites for new users
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports. * [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-After you've completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com). In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
+After you've completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com).
+
+In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
+
+ :::image border="true" type="content" source="../media/quickstarts/form-recognizer-general-document-demo-preview3.gif" alt-text="Selecting the General Document API to analyze a document in the Form Recognizer Studio.":::
1. Select a Form Recognizer service feature from the Studio home page.
-1. This is a one-time step unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
+1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
1. Select the Analyze command to run analysis on the sample document or try your document by using the Add command.
-1. Observe the highlighted extracted content in the document view. Hover your move over the keys and values to see details.
- 1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
-1. Show and hide the text, tables, and selection marks layers to focus on each one of them at a time.
+1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
-1. In the output section's Result tab, browse the JSON output to understand the service response format. Copy and download to jumpstart integration.
+1. In the output section's Result tab, browse the JSON output to understand the service response format.
+1. In the Code tab, browse the sample code for integration. Copy and download to get started.
## Additional prerequisites for custom projects
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
### Configure CORS
-[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
+[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
-1. Select the CORS blade for the storage account.
+1. Select the CORS tab for the storage account.
:::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
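If you'd rather configure CORS from the Azure CLI than the portal, a sketch along these lines should work. The storage account name is a placeholder, and the allowed origin assumes the Form Recognizer Studio URL used elsewhere in this article.

```azurecli
# Allow the Form Recognizer Studio origin to call Blob storage on this account (account name is a placeholder)
az storage cors add \
  --account-name mystorageaccount \
  --services b \
  --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
  --origins "https://formrecognizer.appliedai.azure.com" \
  --allowed-headers "*" \
  --exposed-headers "*" \
  --max-age 120
```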
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Before you run the cURL command, make the following changes:
#### POST request ```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i -X POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
``` #### Reference table
After you've called the [**Analyze document**](https://westus.dev.cognitive.micr
```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
```
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
The following customers and partners have adopted Form Recognizer across a wide
||-|-| | **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) | | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) is operates under the umbrella of Arkas Holding, Turkey's leading holding institution and operating in 23 countries. During the COVID-19 crisis, Arkas Logistics has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
+|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Turkey's leading holding institution, operating in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA)software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) | |**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
The following customers and partners have adopted Form Recognizer across a wide
|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. GEP combined their AI solution with Azure Form Recognizer to automate the processing of 4,000 invoices a day for a client saving them tens of thousands of hours of manual effort. This collaborative effort improved accuracy, controls, and compliance on a global scale." Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**HCA Healthcare**| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure Form Recognizer to simplify and improve the patient onboarding experience and reducing administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)| |**Icertis**| [**Icertis**](https://www.icertis.com/), is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. | [Blog](https://cloudblogs.microsoft.com/industry-blog/en-in/unicorn/2022/01/12/how-icertis-built-a-contract-management-solution-using-azure-form-recognizer/)|
-|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. Instabase then brings this data into business workflows as organized information. The platform provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. Instabase applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
+|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. The application platform then brings this data into business workflows as organized information. The platform also provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. The applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and making it accessible and actionable for asset-owner clients. Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
+|**Old Mutual**| [**Old Mutual**](https://www.oldmutual.co.za/) is Africa's leading financial services group with a comprehensive range of investment capabilities. They're the industry leader in retirement fund solutions, investments, asset management, group risk benefits, insurance, and multi-fund management. The Old Mutual team used Microsoft Natural Language Processing and Optical Character Recognition to provide the basis for automating key customer transactions received via emails. It also offered an opportunity to identify incomplete customer requests in order to nudge customers to the correct digital channels. Old Mutual's extensible solution technology was further developed as a microservice to be consumed by any enterprise application through a secure API management layer. | [Customer story](https://customers.microsoft.com/en-us/story/1507561807660098567-old-mutual-banking-capital-markets-azure-en-south-africa)|
|**Standard Bank**| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Form Recognizer to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)| | **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy." Matt Dallahan, Senior Vice President of Product Management and Strategy | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure Cognitive Services and created a powerful AI solution that help firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
-|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the Zelros platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the Zelros platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
+|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (
Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [Trusted launch](../virtual-machines/trusted-launch.md) for their virtual machines (VMs). When the VM is Secure Boot and vTPM enabled with the guest attestation extension installed, vTPM measurements are submitted to Azure Attestation periodically for monitoring of boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through Alerts and Recommendations.
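For reference, a minimal sketch of creating a VM with Trusted launch, Secure Boot, and vTPM enabled from the Azure CLI follows; the resource group, VM name, and image alias are placeholders, and installing the guest attestation extension is a separate step not shown here.

```azurecli
# Sketch: create a VM with Trusted launch, Secure Boot, and vTPM enabled (names and image are placeholders)
az vm create \
  --resource-group myResourceGroup \
  --name myTrustedVM \
  --image Ubuntu2204 \
  --security-type TrustedLaunch \
  --enable-secure-boot true \
  --enable-vtpm true
```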
-## Azure Attestation can run in a TEE
+## Azure Attestation runs in a TEE
Azure Attestation is critical to Confidential Computing scenarios, as it performs the following actions:
Azure Attestation is critical to Confidential Computing scenarios, as it perform
- Manages and stores tenant-specific policies. - Generates and signs a token that is used by relying parties to interact with the enclave.
-Azure Attestation is built to run in two types of environments:
-- Azure Attestation running in an SGX enabled TEE.-- Azure Attestation running in a non-TEE.-
-Azure Attestation customers have expressed a requirement for Microsoft to be operationally out of trusted computing base (TCB). This is to prevent Microsoft entities such as VM admins, host admins, and Microsoft developers from modifying attestation requests, policies, and Azure Attestation-issued tokens. Azure Attestation is also built to run in TEE, where features of Azure Attestation like quote validation, token generation, and token signing are moved into an SGX enclave.
+To keep Microsoft operationally out of the trusted computing base (TCB), critical operations of Azure Attestation, like quote validation, token generation, policy evaluation, and token signing, are moved into an SGX enclave.
## Why use Azure Attestation
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
To help troubleshoot issues with your runbooks running on a hybrid runbook worke
## Next steps
+* For more information on Hybrid Runbook Worker, see [Automation Hybrid Runbook Worker](automation-hybrid-runbook-worker.md).
* If your runbooks aren't completing successfully, review the troubleshooting guide for [runbook execution failures](troubleshoot/hybrid-runbook-worker.md#runbook-execution-fails). * For more information on PowerShell, including language reference and learning modules, see [PowerShell Docs](/powershell/scripting/overview). * Learn about [using Azure Policy to manage runbook execution](enforce-job-execution-hybrid-worker.md) with Hybrid Runbook Workers.
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Title: Azure Automation Hybrid Runbook Worker overview
-description: This article provides an overview of the Hybrid Runbook Worker, which you can use to run runbooks on machines in your local datacenter or cloud provider.
+description: Learn about the Hybrid Runbook Worker, and how to install it and run runbooks on machines in your local datacenter or cloud provider.
Last updated 11/11/2021
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: - The latest releases
+- New features
+- Improvements to existing features
- Known issues - Bug fixes + This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md
Learn more about backup to URL here:
RESTORE DATABASE <database name> FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak' WITH MOVE 'Test' to '/var/opt/mssql/data/<file name>.mdf' ,MOVE 'Test_log' to '/var/opt/mssql/data/<file name>.ldf'
- ,RECOVERY
- ,REPLACE
- ,STATS = 5;
+ ,RECOVERY;
GO ```
Prepare and run the RESTORE command to restore the backup file to the Azure SQL
RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/<file name>.bak' WITH MOVE '<database name>' to '/var/opt/mssql/data/<file name>.mdf' ,MOVE '<database name>' to '/var/opt/mssql/data/<file name>_log.ldf'
-,RECOVERY
-,REPLACE
-,STATS = 5;
+,RECOVERY;
GO ```
Example:
RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/test.bak' WITH MOVE 'test' to '/var/opt/mssql/data/test.mdf' ,MOVE 'test' to '/var/opt/mssql/data/test_log.ldf'
-,RECOVERY
-,REPLACE
-,STATS = 5;
+,RECOVERY;
GO ```
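Before the `FROM DISK` restore shown above can find the backup, the `.bak` file has to be inside the SQL Managed Instance pod. One way to get it there (not shown in this excerpt) is `kubectl cp`; the namespace, pod, and container names below are assumptions, so substitute the values from your own deployment.

```console
# Copy the backup file into the pod's data directory (namespace, pod, and container names are placeholders)
kubectl cp ./test.bak arc/sql-instance-0:/var/opt/mssql/data/test.bak -c arc-sqlmi
```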
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
description: Understand the default Redis configuration for Azure Cache for Redi
Previously updated : 03/22/2022 Last updated : 06/07/2022 -+
Use the **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemo
For more information about `maxmemory` policies, see [Eviction policies](https://redis.io/topics/lru-cache#eviction-policies).
-The **maxmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
+The **maxmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
-The **maxfragmentationmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
+The **maxfragmentationmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
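Besides the portal, the reservations can also be set from the Azure CLI. The sketch below follows the documented `az redis update --set "redisConfiguration.<key>"=<value>` pattern; the cache name, resource group, and 200-MB values are placeholders, and the exact configuration key names should be confirmed against your CLI version.

```azurecli
# Reserve 200 MB per instance for non-cache operations and 200 MB for fragmentation (values are placeholders)
az redis update \
  --name myCache \
  --resource-group myResourceGroup \
  --set "redisConfiguration.maxmemory-reserved"="200" "redisConfiguration.maxfragmentationmemory-reserved"="200"
```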
The settings in the **Administration** section allow you to perform the followin
### Import/Export
-Import/Export is an Azure Cache for Redis data management operation, which allows you to import and export data in the cache by importing and exporting an Azure Cache for Redis Database (RDB) snapshot from a premium cache to a page blob in an Azure Storage Account. Import/Export enables you to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
+Import/Export is an Azure Cache for Redis data management operation that allows you to import and export data in the cache. You can import and export an Azure Cache for Redis Database (RDB) snapshot from a premium cache to a page blob in an Azure Storage Account. Use Import/Export to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
Import can be used to bring Redis compatible RDB files from any Redis server running in any cloud or environment, including Redis running on Linux, Windows, or any cloud provider such as Amazon Web Services and others. Importing data is an easy way to create a cache with pre-populated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory, and then inserts the keys into the cache.
-Export allows you to export the data stored in Azure Cache for Redis to Redis compatible RDB files. You can use this feature to move data from one Azure Cache for Redis instance to another or to another Redis server. During the export process, a temporary file is created on the VM that hosts the Azure Cache for Redis server instance, and the file is uploaded to the designated storage account. When the export operation completes with either a status of success or failure, the temporary file is deleted.
+Export allows you to export the data stored in Azure Cache for Redis to Redis compatible RDB files. You can use this feature to move data from one Azure Cache for Redis instance to another or to another Redis server. During the export process, a temporary file is created on the VM that hosts the Azure Cache for Redis server instance. The temporary file is uploaded to the designated storage account. When the export operation completes with either a status of success or failure, the temporary file is deleted.
> [!IMPORTANT] > Import/Export is only available for Premium tier caches. For more information and instructions, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
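If you automate exports, the same operation can be started from the Azure CLI. A minimal sketch, assuming a hypothetical Premium cache named `myPremiumCache` and a container SAS URL that you generate yourself:

```azurecli
# Export the cache contents as RDB blobs, prefixed with "nightly-backup", into an existing container.
az redis export --name myPremiumCache --resource-group myResourceGroup --prefix nightly-backup --container "https://mystorage.blob.core.windows.net/cachesaves?<sas-token>"
```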
New Azure Cache for Redis instances are configured with the following default Re
| | | | | `databases` |16 |The default number of databases is 16 but you can configure a different number based on the pricing tier.<sup>1</sup> The default database is DB 0, you can select a different one on a per-connection basis using `connection.GetDatabase(dbid)` where `dbid` is a number between `0` and `databases - 1`. | | `maxclients` |Depends on the pricing tier<sup>2</sup> |This value is the maximum number of connected clients allowed at the same time. Once the limit is reached Redis closes all the new connections, returning a 'max number of clients reached' error. |
-| `maxmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
-| `maxfragmentationmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
+| `maxmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they're reevaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
+| `maxfragmentationmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they're reevaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
| `maxmemory-policy` |`volatile-lru` | Maxmemory policy is the setting used by the Redis server to select what to remove when `maxmemory` (the size of the cache that you selected when you created the cache) is reached. With Azure Cache for Redis, the default setting is `volatile-lru`. This setting removes the keys with an expiration set using an LRU algorithm. This setting can be configured in the Azure portal. For more information, see [Memory policies](#memory-policies). | | `maxmemory-samples` |3 |To save memory, LRU and minimal TTL algorithms are approximated algorithms instead of precise algorithms. By default Redis checks three keys and picks the one that was used less recently. | | `lua-time-limit` |5,000 |Max execution time of a Lua script in milliseconds. If the maximum execution time is reached, Redis logs that a script is still in execution after the maximum allowed time, and starts to reply to queries with an error. |
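The `maxmemory-policy` setting can also be changed outside the portal. A minimal Azure CLI sketch, using hypothetical cache and resource group names:

```azurecli
# Switch the eviction policy from the default volatile-lru to allkeys-lru.
az redis update --name myPremiumCache --resource-group myResourceGroup --set "redisConfiguration.maxmemory-policy"="allkeys-lru"
```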
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Previously updated : 07/31/2017 Last updated : 06/07/2022
This article provides a guide for importing and exporting data with Azure Cache
> [!IMPORTANT] > Import/Export is only available for [Premium tier](cache-overview.md#service-tiers) caches.
->
->
## Import
Use import to bring Redis compatible RDB files from any Redis server running in
1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**. ![Import data](./media/cache-how-to-import-export-data/cache-import-data.png)+ 2. Select **Choose Blob(s)** and select the storage account that contains the data to import. ![Choose storage account](./media/cache-how-to-import-export-data/cache-import-choose-storage-account.png)+ 3. Select the container that contains the data to import. ![Choose container](./media/cache-how-to-import-export-data/cache-import-choose-container.png)+ 4. Select one or more blobs to import by selecting the area to the left of the blob name, and then **Select**. ![Choose blobs](./media/cache-how-to-import-export-data/cache-import-choose-blobs.png)+ 5. Select **Import** to begin the import process. > [!IMPORTANT]
Export allows you to export the data stored in Azure Cache for Redis to Redis co
> ![Storage account](./media/cache-how-to-import-export-data/cache-export-data-choose-account.png)+ 3. Choose the blob container you want, then **Select**. To use a new container, select **Add Container** to add it first and then select it from the list. ![On Containers for contoso55, the + Container option is highlighted. There is one container in the list, cachesaves, and it is selected and highlighted. The Selection option is selected and highlighted.](./media/cache-how-to-import-export-data/cache-export-data-container.png)+ 4. Type a **Blob name prefix** and select **Export** to start the export process. The blob name prefix is used to prefix the names of files generated by this export operation. ![Export](./media/cache-how-to-import-export-data/cache-export-data.png)
Import/Export is available only in the premium pricing tier.
### Can I import data from any Redis server?
-Yes, you can importing data exported from Azure Cache for Redis instances, and you can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To do import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. For example, you might want to export the data from your production cache and import it into a cache that is used as part of a staging environment for testing or migration.
+Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. For example, you might want to export the data from your production cache and import it into a cache that is used as part of a staging environment for testing or migration.
> [!IMPORTANT] > To successfully import data exported from Redis servers other than Azure Cache for Redis when using a page blob, the page blob size must be aligned on a 512 byte boundary. For sample code to perform any required byte padding, see [Sample page blob upload](https://github.com/JimRoberts-MS/SamplePageBlobUpload).
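As a sketch of that upload-then-import flow with the Azure CLI, the following commands copy a local RDB file into a block blob (which avoids the page blob alignment requirement) and then import it. The storage account, container, cache names, and SAS token are placeholders:

```azurecli
# Upload a local dump.rdb as a block blob, then import it into the premium cache.
az storage blob upload --account-name mystorage --container-name cachesaves --name dump.rdb --file ./dump.rdb --auth-mode login
az redis import --name myPremiumCache --resource-group myResourceGroup --files "https://mystorage.blob.core.windows.net/cachesaves/dump.rdb?<sas-token>"
```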
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Title: Configure data persistence - Premium Azure Cache for Redis description: Learn how to configure and manage data persistence for your Premium tier Azure Cache for Redis instances - Last updated 05/17/2022+ # Configure data persistence for a Premium Azure Cache for Redis instance
Azure Cache for Redis offers Redis persistence using the Redis database (RDB) an
- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. - **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
-Azure Cache for Redis persistence features are intended to be used to restore data after data loss, not importing it to a new cache. You cannot import from AOF page blob backups to a new cache. To export data for importing back to a new cache, use the export RDB feature or automatic recurring RDB export. For more information on importing to a new cache, see [Import](cache-how-to-import-export-data.md#import).
+Azure Cache for Redis persistence features are intended to be used to restore data after data loss, not to import data to a new cache. You can't import from AOF page blob backups to a new cache. To export data for importing back to a new cache, use the export RDB feature or automatic recurring RDB export. For more information on importing to a new cache, see [Import](cache-how-to-import-export-data.md#import).
> [!NOTE] > Importing from AOF page blob backups to a new cache is not a supported option.
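If you create caches from scripts, RDB persistence can be enabled at creation time by passing `redisConfiguration` settings to the Azure CLI. A minimal sketch, assuming a hypothetical configuration file and placeholder names; supply your own storage connection string:

```azurecli
# rdb-config.json (hypothetical file) contains:
# { "rdb-backup-enabled": "true", "rdb-backup-frequency": "60", "rdb-storage-connection-string": "<storage-connection-string>" }
az redis create --name myPremiumCache --resource-group myResourceGroup --location eastus --sku Premium --vm-size p1 --redis-configuration @rdb-config.json
```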
Persistence writes Redis data into an Azure Storage account that you own and man
> [!NOTE] > > Azure Storage automatically encrypts data when it is persisted. You can use your own keys for the encryption. For more information, see [Customer-managed keys with Azure Key Vault](../storage/common/storage-service-encryption.md).
->
->
## Set up data persistence
Persistence writes Redis data into an Azure Storage account that you own and man
:::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Create resource.":::
-2. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
+2. On the **Create a resource** page, select **Databases** and then select **Azure Cache for Redis**.
:::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
Previously updated : 08/11/2020 Last updated : 06/07/2022+ # Enable zone redundancy for Azure Cache for Redis+ In this article, you'll learn how to configure a zone-redundant Azure Cache instance using the Azure portal. Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/availability.md) and are highly available, they're susceptible to datacenter-level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../availability-zones/az-overview.md). It provides higher resilience and availability.
Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in r
> Data transfer between Azure Availability Zones will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/). ## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
## Create a cache+ To create a cache, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.
To create a cache, follow these steps:
1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**. :::image type="content" source="media/cache-create/new-cache-menu.png" alt-text="Select Azure Cache for Redis.":::
-
+ 1. On the **Basics** page, configure the settings for your new cache.
-
+ | Setting | Suggested value | Description | | | - | -- |
- | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
- | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
+ | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
+ | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
| **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. | | **Cache type** | Select a [Premium or Enterprise tier](https://azure.microsoft.com/pricing/details/cache/) cache. | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-
+ 1. On the **Advanced** page, for a Premium tier cache, choose **Replica count**.
-
+ :::image type="content" source="media/cache-how-to-multi-replicas/create-multi-replicas.png" alt-text="Replica count":::
-1. Select **Availability zones**.
-
+1. Select **Availability zones**.
+ :::image type="content" source="media/cache-how-to-zone-redundancy/create-zones.png" alt-text="Availability zones"::: 1. Configure your settings for clustering and/or RDB persistence.
To create a cache, follow these steps:
> Zone redundancy doesn't support AOF persistence or work with geo-replication currently. >
-1. Select **Create**.
-
- It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
-
+1. Select **Create**.
+
+ It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+ > [!NOTE]
- > Availability zones can't be changed or enabled after a cache is created.
- >
+ > Availability zones can't be changed or enabled after a cache is created.
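The same zone-redundant Premium cache can be created from the Azure CLI instead of the portal. A minimal sketch with placeholder names; the zone list and replica count are example values (one primary plus two replicas spread across three zones):

```azurecli
# Create a Premium cache with nodes spread across availability zones 1, 2, and 3.
az redis create --name myZoneRedundantCache --resource-group myResourceGroup --location eastus2 --sku Premium --vm-size p1 --zones 1 2 3 --replicas-per-master 2
```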
## Zone Redundancy FAQ
Zone redundancy is available only in Azure regions that have Availability Zones.
### Why can't I select all three zones during cache create?
-A Premium cache has one primary and one replica nodes by default. To configure zone redundancy for more than two Availability Zones, you need to add [more replicas](cache-how-to-multi-replicas.md) to the cache you're creating.
+A Premium cache has one primary and one replica node by default. To configure zone redundancy for more than two Availability Zones, you need to add [more replicas](cache-how-to-multi-replicas.md) to the cache you're creating.
### Can I update my existing Premium cache to use zone redundancy?
-No, this is not supported currently.
+No, this isn't supported currently.
### How much does it cost to replicate my data across Azure Availability Zones?
-When using zone redundancy, configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+When using zone redundancy configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
## Next Steps+ Learn more about Azure Cache for Redis features.
-> [!div class="nextstepaction"]
-> [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
Title: Create a JavaScript function using Visual Studio Code - Azure Functions description: Learn how to create a JavaScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/18/2021 Last updated : 06/07/2022 adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
Before you get started, make sure you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version.
++ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/). Use the `node --version` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | [Cognitive
-| [Cognitive
+| [Cognitive
| [Cognitive | [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | | | | [Cognitive
-| [Cognitive
+| [Cognitive
| [Cognitive | [Cognitive | [Cognitive
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
Response:
} ] ```
-For more information, see [public documentation](../cognitive-services/Face/index.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
+For more information, see [public documentation](../cognitive-services/computer-vision/index-identity.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
## Text Analytics
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
For AI and machine learning services availability in Azure Government, see [Prod
- Configure encryption at rest of content in Cognitive Services Custom Vision [using customer-managed keys in Azure Key Vault](../cognitive-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-### [Cognitive
+### [Cognitive
- Configure encryption at rest of content in the Face service by [using customer-managed keys in Azure Key Vault](../cognitive-services/face/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md
Title: How to secure an application in Microsoft Azure Maps with SAS token
+ Title: How to secure an Azure Maps application with a SAS token
-description: This article describes how to configure an application to be secured with SAS token authentication.
+description: Create an Azure Maps account secured with SAS token authentication.
Previously updated : 01/05/2022 Last updated : 06/08/2022
-custom.ms: subject-rbac-steps
+
-# Secure an application with SAS token
+# Secure an Azure Maps account with a SAS token
-This article describes how to create an Azure Maps account with a SAS token that can be used to call the Azure Maps REST API.
+This article describes how to create an Azure Maps account with a securely stored SAS token you can use to call the Azure Maps REST API.
## Prerequisites
-This scenario assumes:
+- An Azure subscription. If you don't already have an Azure account, [sign up for a free one](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- **Owner** role permission on the Azure subscription. You need the **Owner** permissions to:
-- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.-- The current user must have subscription `Owner` role permissions on the Azure subscription to create an [Azure Key Vault](../key-vault/general/basic-concepts.md), user-assigned managed identity, assign the managed identity a role, and create an Azure Maps account.-- Azure CLI is installed to deploy the resources. Read more on [How to install the Azure CLI](/cli/azure/install-azure-cli).-- The current user is signed-in to Azure CLI with an active Azure subscription using `az login`.
+ - Create a key vault in [Azure Key Vault](../key-vault/general/basic-concepts.md).
+ - Create a user-assigned managed identity.
+ - Assign the managed identity a role.
+ - Create an Azure Maps account.
-## Scenario: SAS token
+- [Azure CLI installed](/cli/azure/install-azure-cli) to deploy the resources.
-Applications that use SAS token authentication should store the keys in a secure store. A SAS token is a credential that grants the level of access specified during its creation to anyone who holds it, until the token expires or access is revoked. This scenario describes how to safely store your SAS token as a secret in Azure Key Vault and distribute the SAS token into a public client. Events in an applicationΓÇÖs lifecycle may generate new SAS tokens without interrupting active connections using existing tokens. To understand how to configure Azure Key Vault, see the [Azure Key Vault developer's guide](../key-vault/general/developers-guide.md).
+## Example scenario: SAS token secure storage
-The following sample scenario will perform the steps outlined below with two Azure Resource Manager (ARM) template deployments:
+A SAS token credential grants the access level it specifies to anyone who holds it, until the token expires or access is revoked. Applications that use SAS token authentication should store the keys securely.
-- Create an Azure Key Vault.-- Create a user-assigned managed identity.-- Assign Azure RBAC `Azure Maps Data Reader` role to the user-assigned managed identity.-- Create a map account with a CORS configuration and attach the user-assigned managed identity.-- Create and save a SAS token into the Azure Key Vault-- Retrieve the SAS token secret from Azure Key Vault.-- Create an Azure Maps REST API request using the SAS token.
+This scenario safely stores a SAS token as a secret in Key Vault, and distributes the token into a public client. Application lifecycle events can generate new SAS tokens without interrupting active connections that use existing tokens.
-When completed, you should see output from Azure Maps `Search Address (Non-Batch)` REST API results on PowerShell with Azure CLI. The Azure resources will be deployed with permissions to connect to the Azure Maps account with controls for maximum rate limit, allowed regions, `localhost` configured CORS policy, and Azure RBAC.
+For more information about configuring Key Vault, see the [Azure Key Vault developer's guide](../key-vault/general/developers-guide.md).
-### Azure resource deployment with Azure CLI
+The following example scenario uses two Azure Resource Manager (ARM) template deployments to do the following steps:
-The following steps describe how to create and configure an Azure Maps account with SAS token authentication. The Azure CLI is assumed to be running in a PowerShell instance.
+1. Create a key vault.
+1. Create a user-assigned managed identity.
+1. Assign Azure role-based access control (RBAC) **Azure Maps Data Reader** role to the user-assigned managed identity.
+1. Create an Azure Maps account with a [Cross Origin Resource Sharing (CORS) configuration](azure-maps-authentication.md#cross-origin-resource-sharing-cors), and attach the user-assigned managed identity.
+1. Create and save a SAS token in the Azure key vault.
+1. Retrieve the SAS token secret from the key vault.
+1. Create an Azure Maps REST API request that uses the SAS token.
-1. Register Key Vault, Managed Identities, and Azure Maps for your subscription
+When you finish, you should see Azure Maps `Search Address (Non-Batch)` REST API results on PowerShell with Azure CLI. The Azure resources deploy with permissions to connect to the Azure Maps account. There are controls for maximum rate limit, allowed regions, a CORS policy configured for `localhost`, and Azure RBAC.
- ```azurecli
- az provider register --namespace Microsoft.KeyVault
- az provider register --namespace Microsoft.ManagedIdentity
- az provider register --namespace Microsoft.Maps
- ```
+## Azure resource deployment with Azure CLI
+
+The following steps describe how to create and configure an Azure Maps account with SAS token authentication. In this example, Azure CLI runs in a PowerShell instance.
+
+1. Sign in to your Azure subscription with `az login`.
+
+1. Register Key Vault, Managed Identities, and Azure Maps for your subscription.
+
+ ```azurecli
+ az provider register --namespace Microsoft.KeyVault
+ az provider register --namespace Microsoft.ManagedIdentity
+ az provider register --namespace Microsoft.Maps
+ ```
-1. Retrieve your Azure AD object ID
+1. Retrieve your Azure Active Directory (Azure AD) object ID.
```azurecli $id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id") ```
-1. Create a template file `prereq.azuredeploy.json` with the following content.
+1. Create a template file named *prereq.azuredeploy.json* with the following content:
```json {
The following steps describe how to create and configure an Azure Maps account w
"objectId": { "type": "string", "metadata": {
- "description": "Specifies the object ID of a user, service principal or security group in the Azure Active Directory tenant for the vault. The object ID must be unique for the set of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets."
+ "description": "Specifies the object ID of a user, service principal, or security group in the Azure AD tenant for the vault. The object ID must be unique for the set of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets."
} }, "secretsPermissions": {
The following steps describe how to create and configure an Azure Maps account w
```
-1. Deploy the prerequisite resources. Make sure to pick the location where the Azure Maps accounts is enabled.
+1. Deploy the prerequisite resources by using the template you created in the previous step. Supply your own value for `<group-name>`. Make sure to use the same `location` as the Azure Maps account.
- ```azurecli
- az group create --name {group-name} --location "East US"
- $outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
- ```
+ ```azurecli
+ az group create --name <group-name> --location "East US"
+ $outputs = $(az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+ ```
-1. Create a template file `azuredeploy.json` to provision the Map account, role assignment, and SAS token.
+1. Create a template file *azuredeploy.json* to provision the Azure Maps account, role assignment, and SAS token.
```json {
The following steps describe how to create and configure an Azure Maps account w
"type": "string", "defaultValue": "[guid(resourceGroup().id)]", "metadata": {
- "description": "Input string for new GUID associated with assigning built in role types"
+ "description": "Input string for new GUID associated with assigning built in role types."
} }, "startDateTime": { "type": "string", "defaultValue": "[utcNow('u')]", "metadata": {
- "description": "Current Universal DateTime in ISO 8601 'u' format to be used as start of the SAS token."
+ "description": "Current Universal DateTime in ISO 8601 'u' format to use as the start of the SAS token."
} }, "duration" : { "type": "string", "defaultValue": "P1Y", "metadata": {
- "description": "The duration of the SAS token, P1Y is maximum, ISO 8601 format is expected."
+ "description": "The duration of the SAS token. P1Y is maximum, ISO 8601 format is expected."
} }, "maxRatePerSecond": {
The following steps describe how to create and configure an Azure Maps account w
"defaultValue": [], "maxLength": 10, "metadata": {
- "description": "The specified application's web host header origins (example: https://www.azure.com) which the Maps account allows for Cross Origin Resource Sharing (CORS)."
+ "description": "The specified application's web host header origins (example: https://www.azure.com) which the Azure Maps account allows for CORS."
} }, "allowedRegions": { "type": "array", "defaultValue": [], "metadata": {
- "description": "The specified SAS token allowed locations which the token may be used."
+ "description": "The specified SAS token allowed locations where the token may be used."
} } },
The following steps describe how to create and configure an Azure Maps account w
} ```
-1. Deploy the template using ID parameters from the Azure Key Vault and managed identity resources created in the previous step. Note that when creating the SAS token, the `allowedRegions` parameter is set to `eastus`, `westus2`, and `westcentralus`. We use these locations because we plan to make HTTP requests to the `us.atlas.microsoft.com` endpoint.
+1. Deploy the template with the ID parameters from the Key Vault and managed identity resources you created in the previous step. Supply your own value for `<group-name>`. When creating the SAS token, you set the `allowedRegions` parameter to `eastus`, `westus2`, and `westcentralus`. You can then use these locations to make HTTP requests to the `us.atlas.microsoft.com` endpoint.
- > [!IMPORTANT]
- > We save the SAS token into the Azure Key Vault to prevent its credentials from appearing in the Azure deployment logs. The Azure Key Vault SAS token secret's `tags` also contain the start, expiry, and signing key name to help understand when the SAS token will expire.
+ > [!IMPORTANT]
+ > You save the SAS token in the key vault to prevent its credentials from appearing in the Azure deployment logs. The SAS token secret's `tags` also contain the start, expiry, and signing key name, to show when the SAS token will expire.
- ```azurecli
- az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
- ```
+ ```azurecli
+ az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+ ```
-1. Locate, then save a copy of the single SAS token secret from Azure Key Vault.
+1. Locate and save a copy of the single SAS token secret from Key Vault.
- ```azurecli
- $secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
- $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
- ```
+ ```azurecli
+ $secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
+ $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
+ ```
-1. Test the SAS Token by making a request to an Azure Maps endpoint. We specify the `us.atlas.microsoft.com` to ensure that our request will be routed to the US geography because our SAS Token has allowed regions within the geography.
+1. Test the SAS token by making a request to an Azure Maps endpoint. This example specifies the `us.atlas.microsoft.com` endpoint to ensure your request routes to the US geography. Your SAS token allows regions within the US geography.
```azurecli
- az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+ az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
```
-## Complete example
+## Complete script example
-In the current directory of the PowerShell session you should have:
+To run the complete example, the following template files must be in the same directory as the current PowerShell session:
-- `prereq.azuredeploy.json` This creates the Key Vault and managed identity.-- `azuredeploy.json` This creates the Azure Maps account and configures the role assignment and managed identity, then stores the SAS Token into the Azure Key Vault.
+- *prereq.azuredeploy.json* to create the key vault and managed identity.
+- *azuredeploy.json* to create the Azure Maps account, configure the role assignment and managed identity, and store the SAS token in the key vault.
```powershell az login
az provider register --namespace Microsoft.ManagedIdentity
az provider register --namespace Microsoft.Maps $id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id")
-az group create --name {group-name} --location "East US"
-$outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
-az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+az group create --name <group-name> --location "East US"
+$outputs = $(az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
$secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv) $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
-az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+```
+
+## Real-world example
+
+You can run requests to Azure Maps APIs from most clients, like C#, Java, or JavaScript. [Postman](https://learning.postman.com/docs/sending-requests/generate-code-snippets) converts an API request into a basic client code snippet in almost any programming language or framework you choose. You can use this generated code snippet in your front-end applications.
+
+The following small JavaScript code example shows how you could use your SAS token with the JavaScript [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#supplying_request_options) to get and return Azure Maps information. The example uses the Azure Maps [Get Search Address](/rest/api/maps/search/get-search-address) API version 1.0. Supply your own value for `<your SAS token>`.
+
+For this sample to work, make sure to run it from within the same origin as the `allowedOrigins` for the API call. For example, if you provide `https://contoso.com` as the `allowedOrigins` in the API call, the HTML page that hosts the JavaScript script should be `https://contoso.com`.
+
+```javascript
+async function getData(url = 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052') {
+ const response = await fetch(url, {
+ method: 'GET',
+ mode: 'cors',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'Authorization': 'jwt-sas <your SAS token>',
+ }
+ });
+ return response.json(); // parses JSON response into native JavaScript objects
+}
+
+getData('https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052')
+  .then(data => {
+    console.log(data); // JSON data parsed by the `response.json()` call
+ });
``` ## Clean up resources
az group delete --name {group-name}
## Next steps
-For more detailed examples:
+Deploy a quickstart ARM template to create an Azure Maps account that uses a SAS token:
+> [!div class="nextstepaction"]
+> [Create an Azure Maps account](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.maps/maps-use-sas)
+
+For more detailed examples, see:
> [!div class="nextstepaction"] > [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md)
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
To enable telemetry collection with Application Insights, only the Application s
|App setting name | Definition | Value | |--|:|-:| |ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux |
-|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to insure optimal performance. | `disabled` or `recommended`. |
+|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to ensure optimal performance. | `disabled` or `recommended`. |
|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with Application Insights SDK. Loads the extension side-by-side with the SDK and uses it to send telemetry (disables the Application Insights SDK). |`1`|
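You can also apply these app settings without the portal. A minimal Azure CLI sketch, assuming a hypothetical Windows web app named `myWebApp` (so `~2`) and a connection string you supply:

```azurecli
# Enable the Application Insights App Service integration in recommended mode.
az webapp config appsettings set --name myWebApp --resource-group myResourceGroup --settings ApplicationInsightsAgent_EXTENSION_VERSION="~2" XDT_MicrosoftApplicationInsights_Mode="recommended" APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>"
```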
To enable telemetry collection with Application Insights, only the Application s
### Upgrade from versions 2.8.9 and up
-Upgrading from version 2.8.9 happens automatically, without any additional actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
+Upgrading from version 2.8.9 happens automatically, without any extra actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`. ### Upgrade from versions 1.0.0 - 2.6.5
Below is our step-by-step troubleshooting guide for extension/agent based monito
If a similar value isn't present, it means the application isn't currently running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
- - Confirm that `IKeyExists` is `true`
- If it is `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey guid to your application settings.
+ - Confirm that `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
- - In case your application refers to any Application Insights packages, for example if you've previously instrumented (or attempted to instrument) your app with the [ASP.NET Core SDK](./asp-net-core.md), enabling the App Service integration may not take effect and the data may not appear in Application Insights. To fix the issue, in portal turn on "Interop with Application Insights SDK" and you'll start seeing the data in Application Insights.
+ - If your application refers to any Application Insights packages, enabling the App Service integration may not take effect and the data may not appear in Application Insights. An example would be if you've previously instrumented, or attempted to instrument, your app with the [ASP.NET Core SDK](./asp-net-core.md). To fix the issue, in portal turn on "Interop with Application Insights SDK" and you'll start seeing the data in Application Insights.
- > [!IMPORTANT] > This functionality is in preview
Below is our step-by-step troubleshooting guide for extension/agent based monito
# [Linux](#tab/linux)
-1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
-2. Navigate to */home\LogFiles\ApplicationInsights\status* and open *status_557de146e7fa_27_1.json*.
-
- Confirm that `AppAlreadyInstrumented` is set to false, `AiHostingStartupLoaded` to true and `IKeyExists` to true.
-
- Below is an example of the JSON file:
-
- ```json
- "AppType":".NETCoreApp,Version=v6.0",
-
- "MachineName":"557de146e7fa",
-
- "PID":"27",
-
- "AppDomainId":"1",
-
- "AppDomainName":"dotnet6demo",
-
- "InstrumentationEngineLoaded":false,
-
- "InstrumentationEngineExtensionLoaded":false,
-
- "HostingStartupBootstrapperLoaded":true,
-
- "AppAlreadyInstrumented":false,
-
- "AppDiagnosticSourceAssembly":"System.Diagnostics.DiagnosticSource, Version=6.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51",
-
- "AiHostingStartupLoaded":true,
-
- "IKeyExists":true,
-
- "IKey":"00000000-0000-0000-0000-000000000000",
-
- "ConnectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus-0.in.applicationinsights.azure.com/"
-
- ```
-
- If `AppAlreadyInstrumented` is true this indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off.
+1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
+1. Browse to `https://<your site name>.scm.azurewebsites.net/ApplicationInsights`.
+1. Within this site, confirm:
+ * The status source exists and looks like: `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`
+ * `Auto-Instrumentation enabled successfully`, is displayed. If a similar value isn't present, it means the application isn't running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
+ * `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
+ ##### No Data
Below is our step-by-step troubleshooting guide for extension/agent based monito
-#### Default website deployed with web apps does not support automatic client-side monitoring
+#### Default website deployed with web apps doesn't support automatic client-side monitoring
-When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads a ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
+When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This behavior allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
If you wish to test out codeless server and client-side monitoring for ASP.NET Core in an Azure App Services web app, we recommend following the official guides for [creating an ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md). Then use the instructions in the current article to enable monitoring. [!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)]
-### PHP and WordPress are not supported
+### PHP and WordPress aren't supported
PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. However, manually instrumenting client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your web pages can be accomplished by using the [JavaScript SDK](./javascript.md).
The table below provides a more detailed explanation of what these values mean,
|Problem Value |Explanation |Fix | |- |-||
-| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio may add references to `Microsoft.ApplicationInsights`. |
+| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio reference `Microsoft.ApplicationInsights`. |
|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of Microsoft.ApplicationsInsights dll in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). |
-|`IKeyExists:false`|This value indicates that the instrumentation key is not present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings. |
+|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: the value may have been accidentally removed, or you forgot to set it in the automation script. | Make sure the setting is present in the App Service application settings. |
## Release notes
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-failure-diagnostics.md
Title: Smart Detection - failure anomalies, in Application Insights | Microsoft Docs
+ Title: Smart Detection of Failure Anomalies in Application Insights | Microsoft Docs
description: Alerts you to unusual changes in the rate of failed requests to your web app, and provides diagnostic analysis. No configuration is needed. Last updated 12/18/2018
This feature works for any web app, hosted in the cloud or on your own servers, that generates application request or dependency data. For example, if you have a worker role that calls [TrackRequest()](./api-custom-events-metrics.md#trackrequest) or [TrackDependency()](./api-custom-events-metrics.md#trackdependency).
-After setting up [Application Insights for your project](./app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of failure anomalies takes 24 hours to learn the normal behavior of your app, before it is switched on and can send alerts.
+After setting up [Application Insights for your project](./app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of Failure Anomalies takes 24 hours to learn the normal behavior of your app, before it is switched on and can send alerts.
Here's a sample alert:
Click the alert to configure it.
:::image type="content" source="./media/proactive-failure-diagnostics/032.png" alt-text="Rule configuration screen." lightbox="./media/proactive-failure-diagnostics/032.png":::
-Notice that you can disable or delete a Failure Anomalies alert rule, but you can't create another one on the same Application Insights resource.
+## Delete alerts
+
+You can disable or delete a Failure Anomalies alert rule, but once deleted you can't create another one for the same Application Insights resource.
+
+Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically. You can do so manually on the Alert rules page or with the following Azure CLI command:
+
+```azurecli
+az resource delete --ids <Resource ID of Failure Anomalies alert rule>
+```
## Example of Failure Anomalies alert webhook payload
Click **Alerts** in the Application Insights resource page to get to the most re
:::image type="content" source="./media/proactive-failure-diagnostics/070.png" alt-text="Alerts summary." lightbox="./media/proactive-failure-diagnostics/070.png"::: ## What's the difference ...
-Smart Detection of failure anomalies complements other similar but distinct features of Application Insights.
+Smart Detection of Failure Anomalies complements other similar but distinct features of Application Insights.
-* [metric alerts](../alerts/alerts-log.md) are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates, page load times, and so on. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of failure anomalies covers a small range of critical metrics (currently only failed request rate), designed to notify you in near real-time manner once your web app's failed request rate increases compared to web app's normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response changes in the behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues.
+* [metric alerts](../alerts/alerts-log.md) are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates, page load times, and so on. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of Failure Anomalies covers a small range of critical metrics (currently only failed request rate), designed to notify you in a near real-time manner once your web app's failed request rate increases compared to the web app's normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response to changes in behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues.
-* [Smart Detection of performance anomalies](proactive-performance-diagnostics.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of failure anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for failure anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected.
+* [Smart Detection of performance anomalies](proactive-performance-diagnostics.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of Failure Anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for Failure Anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected.
## If you receive a Smart Detection alert *Why have I received this alert?*
These diagnostic tools help you inspect the data from your app:
Smart detections are automatic. But maybe you'd like to set up some more alerts? * [Manually configured metric alerts](../alerts/alerts-log.md)
-* [Availability web tests](./monitor-web-app-availability.md)
+* [Availability web tests](./monitor-web-app-availability.md)
azure-monitor Change Analysis Custom Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-custom-filters.md
Last updated 05/09/2022
+ms.reviewer: cawa
# Navigate to a change using custom filters in Change Analysis
azure-monitor Change Analysis Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-powershell.md
ms.devlang: azurepowershell
Last updated 04/11/2022
+ms.reviewer: cawa
# Azure PowerShell for Change Analysis in Azure Monitor (preview)
azure-monitor Change Analysis Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-query.md
ms.contributor: cawa
Last updated 05/12/2022
+ms.reviewer: cawa
# Pin and share a Change Analysis query to the Azure dashboard
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa
Last updated 05/20/2022 -+ # Use Change Analysis in Azure Monitor (preview)
azure-monitor Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cli-samples.md
- Title: Azure Monitor CLI samples
-description: Sample CLI commands for Azure Monitor features. Azure Monitor is a Microsoft Azure service, which allows you to send alert notifications, call web URLs based on values of configured telemetry data, and autoScale Cloud Services, Virtual Machines, and Web Apps.
--- Previously updated : 05/16/2018 ----
-# Azure Monitor CLI samples
-This article shows you sample command-line interface (CLI) commands to help you access Azure Monitor features. Azure Monitor allows you to AutoScale Cloud Services, Virtual Machines, and Web Apps and to send alert notifications or call web URLs based on values of configured telemetry data.
-
-## Prerequisites
-
-If you haven't already installed the Azure CLI, follow the instructions for [Install the Azure CLI](/cli/azure/install-azure-cli). You can also use [Azure Cloud Shell](/azure/cloud-shell) to run the CLI as an interactive experience in your browser. See a full reference of all available commands in the [Azure Monitor CLI reference](/cli/azure/monitor).
-
-## Log in to Azure
-The first step is to log in to your Azure account.
-
-```azurecli
-az login
-```
-
-After running this command, you have to sign in via the instructions on the screen. All commands work in the context of your default subscription.
-
-List the details of your current subscription.
-
-```azurecli
-az account show
-```
-
-Change working context to a different subscription.
-
-```azurecli
-az account set -s <Subscription ID or name>
-```
-
-View a list of all supported Azure Monitor commands.
-
-```azurecli
-az monitor -h
-```
-
-## View activity log
-
-View a list of activity log events.
-
-```azurecli
-az monitor activity-log list
-```
-
-View all available options.
-
-```azurecli
-az monitor activity-log list -h
-```
-
-List logs by a resourceGroup.
-
-```azurecli
-az monitor activity-log list --resource-group <group name>
-```
-
-List logs by caller.
-
-```azurecli
-az monitor activity-log list --caller myname@company.com
-```
-
-List logs by caller on a resource type, within a date range.
-
-```azurecli
-az monitor activity-log list --resource-provider Microsoft.Web \
- --caller myname@company.com \
- --start-time 2016-03-08T00:00:00Z \
- --end-time 2016-03-16T00:00:00Z
-```
-
-## Work with alerts
-> [!NOTE]
-> Only alerts (classic) is supported in CLI at this time.
-
-### Get alert (classic) rules in a resource group
-
-```azurecli
-az monitor activity-log alert list --resource-group <group name>
-az monitor activity-log alert show --resource-group <group name> --name <alert name>
-```
-
-### Create a metric alert (classic) rule
-
-```azurecli
-az monitor alert create --name <alert name> --resource-group <group name> \
- --action email <email1 email2 ...> \
- --action webhook <URI> \
- --target <target object ID> \
- --condition "<METRIC> {>,>=,<,<=} <THRESHOLD> {avg,min,max,total,last} ##h##m##s"
-```
-
-### Delete an alert (classic) rule
-
-```azurecli
-az monitor alert delete --name <alert name> --resource-group <group name>
-```
-
-## Log profiles
-
-Use the information in this section to work with log profiles.
-
-### Get a log profile
-
-```azurecli
-az monitor log-profiles list
-az monitor log-profiles show --name <profile name>
-```
-
-### Add a log profile with retention
-
-```azurecli
-az monitor log-profiles create --name <profile name> --location <location of profile> \
- --locations <locations to monitor activity in: location1 location2 ...> \
- --categories <categoryName1 categoryName2 ...> \
- --days <# days to retain> \
- --enabled true \
- --storage-account-id <storage account ID to store the logs in>
-```
-
-### Add a log profile with retention and EventHub
-
-```azurecli
-az monitor log-profiles create --name <profile name> --location <location of profile> \
- --locations <locations to monitor activity in: location1 location2 ...> \
- --categories <categoryName1 categoryName2 ...> \
- --days <# days to retain> \
- --enabled true
- --storage-account-id <storage account ID to store the logs in>
- --service-bus-rule-id <service bus rule ID to stream to>
-```
-
-### Remove a log profile
-
-```azurecli
-az monitor log-profiles delete --name <profile name>
-```
-
-## Diagnostics
-
-Use the information in this section to work with diagnostic settings.
-
-### Get a diagnostic setting
-
-```azurecli
-az monitor diagnostic-settings list --resource <target resource ID>
-```
-
-### Create a diagnostic setting
-
-```azurecli
-az monitor diagnostic-settings create --name <diagnostic name> \
- --storage-account <storage account ID> \
- --resource <target resource object ID> \
- --logs '[
- {
- "category": <category name>,
- "enabled": true,
- "retentionPolicy": {
- "days": <# days to retain>,
- "enabled": true
- }
- }]'
-```
-
-### Delete a diagnostic setting
-
-```azurecli
-az monitor diagnostic-settings delete --name <diagnostic name> \
- --resource <target resource ID>
-```
-
-## Autoscale
-
-Use the information in this section to work with autoscale settings. You need to modify these examples.
-
-### Get autoscale settings for a resource group
-
-```azurecli
-az monitor autoscale list --resource-group <group name>
-```
-
-### Get autoscale settings by name in a resource group
-
-```azurecli
-az monitor autoscale show --name <settings name> --resource-group <group name>
-```
-
-### Set autoscale settings
-
-```azurecli
-az monitor autoscale create --name <settings name> --resource-group <group name> \
- --count <# instances> \
- --resource <target resource ID>
-```
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
Title: Configure Container insights agent data collection | Microsoft Docs
description: This article describes how you can configure the Container insights agent to control stdout/stderr and environment variables log collection. Last updated 10/09/2020+ # Configure agent data collection for Container insights
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
Title: Kubernetes monitoring with Container insights | Microsoft Docs
description: This article describes how you can view and analyze the performance of a Kubernetes cluster with Container insights. Last updated 03/26/2020+ # Monitor your Kubernetes cluster performance with Container insights
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Title: Monitoring cost for Container insights | Microsoft Docs
description: This article describes the monitoring cost for metrics & inventory data collected by Container insights to help customers manage their usage and associated costs. Last updated 05/29/2020+ # Understand monitoring costs for Container insights
azure-monitor Container Insights Deployment Hpa Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-deployment-hpa-metrics.md
Title: Deployment & HPA metrics with Container insights | Microsoft Docs
description: This article describes what deployment & HPA (Horizontal pod autoscaler) metrics are collected with Container insights. Last updated 08/09/2020+ # Deployment & HPA metrics with Container insights
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Title: Enable AKS Monitoring Addon using Azure Policy
description: Describes how to enable AKS Monitoring Addon using Azure Custom Policy. Last updated 02/04/2021+ # Enable AKS monitoring addon using Azure Policy
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.+ # Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS)
Last updated 05/24/2022 + # Enable monitoring of Azure Kubernetes Service (AKS) cluster already deployed
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
Last updated 05/24/2022 ms.devlang: azurecli+ # Enable monitoring of a new Azure Kubernetes Service (AKS) cluster
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Title: Configure GPU monitoring with Container insights
description: This article describes how you can configure monitoring Kubernetes clusters with NVIDIA and AMD GPU enabled nodes with Container insights. Last updated 05/24/2022+ # Configure GPU monitoring with Container insights
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
Title: Configure Hybrid Kubernetes clusters with Container insights | Microsoft
description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environment. Last updated 06/30/2020+ # Configure hybrid Kubernetes clusters with Container insights
azure-monitor Container Insights Livedata Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-deployments.md
description: This article describes the real-time view of Kubernetes Deployments
Last updated 10/15/2019 + # How to view Deployments (preview) in real-time
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
description: This article describes the real-time view of metrics without using
Last updated 05/24/2022 + # How to view metrics in real-time
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
description: This article describes the real-time view of Kubernetes logs, event
Last updated 05/24/2022 + # How to view Kubernetes logs, events, and pod metrics in real-time
azure-monitor Container Insights Livedata Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-setup.md
description: This article describes how to set up the real-time view of containe
Last updated 05/24/2022 + # How to configure Live Data in Container insights
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
Title: Log alerts from Container insights | Microsoft Docs
description: This article describes how to create custom log alerts for memory and CPU utilization from Container insights. Last updated 07/29/2021+
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: How to query logs from Container insights
description: Container insights collects metrics and log data and this article describes the records and includes sample queries. Last updated 07/19/2021+
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Title: Configure ContainerLogv2 schema (preview) for Container Insights
description: Switch your ContainerLog table to the ContainerLogv2 schema - Last updated 05/11/2022+ # Enable ContainerLogV2 schema (preview)
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Title: How to manage the Container insights agent | Microsoft Docs
description: This article describes managing the most common maintenance tasks with the containerized Log Analytics agent used by Container insights. Last updated 07/21/2020-+ # How to manage the Container insights agent
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Metric alerts from Container insights
description: This article reviews the recommended metric alerts available from Container insights in public preview. Last updated 05/24/2022-+ # Recommended metric alerts (preview) from Container insights
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Title: Enable Container insights
description: This article describes how to enable and configure Container insights so that you can understand how your container is performing and what performance-related issues have been identified. Last updated 05/24/2022+ # Enable Container insights
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
description: This article describes how you can stop monitoring of your hybrid K
Last updated 05/24/2022 -+ # How to stop monitoring your hybrid cluster
azure-monitor Container Insights Optout Openshift V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v3.md
description: This article describes how you can stop monitoring of your Azure Re
Last updated 05/24/2022 +
azure-monitor Container Insights Optout Openshift V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v4.md
Title: How to stop monitoring your Azure and Red Hat OpenShift v4 cluster | Micr
description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift and Red Hat OpenShift version 4 cluster with Container insights. Last updated 05/24/2022+
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
Last updated 05/24/2022 ms.devlang: azurecli+
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
description: This article describes Container insights that monitors AKS Contain
Last updated 09/08/2020-+ # Container insights overview
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Title: Configure PV monitoring with Container insights | Microsoft Docs
description: This article describes how you can configure monitoring Kubernetes clusters with persistent volumes with Container insights. Last updated 05/24/2022+ # Configure PV monitoring with Container insights
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Title: Configure Container insights Prometheus Integration | Microsoft Docs
description: This article describes how you can configure the Container insights agent to scrape metrics from Prometheus with your Kubernetes cluster. Last updated 04/22/2020+ # Configure scraping of Prometheus metrics with Container insights
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
description: Describes the region mappings supported between Container insights,
Last updated 05/27/2022 + # Region mappings supported by Container insights
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
Title: Reports in Container insights
description: Describes reports available to analyze data collected by Container insights. Last updated 05/24/2022+ # Reports in Container insights
azure-monitor Container Insights Transition Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-hybrid.md
description: "Learn how to migrate from using script-based hybrid monitoring solutions to Container Insights on Azure Arc-enabled Kubernetes clusters"+ # Transition to using Container Insights on Azure Arc-enabled Kubernetes
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
description: "Learn how to migrate from using the legacy OMS solution to monitoring your containers using Container Insights"+ # Transition from the Container Monitoring Solution to using Container Insights
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
Title: How to Troubleshoot Container insights | Microsoft Docs
description: This article describes how you can troubleshoot and resolve issues with Container insights. Last updated 05/24/2022+
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-update-metrics.md
description: This article describes how you update Container insights to enable
Last updated 10/09/2020 +
azure-monitor Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/containers.md
Last updated 07/06/2020+
azure-monitor Resource Manager Container Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/resource-manager-container-insights.md
 Title: Resource Manager template samples for Container insights description: Sample Azure Resource Manager templates to deploy and configure Container insights. - Last updated 05/05/2022+
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
description: Describes specific steps for using Azure Monitor to enable Continuo
Previously updated : 10/12/2018 Last updated : 06/07/2022
azure-monitor Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/powershell-samples.md
- Title: Azure Monitor PowerShell samples
-description: Use PowerShell to access Azure Monitor features such as autoscale, alerts, webhooks and searching Activity logs.
--- Previously updated : 2/14/2018 ----
-# Azure Monitor PowerShell samples
-This article shows you sample PowerShell commands to help you access Azure Monitor features.
-
-> [!NOTE]
-> Azure Monitor is the new name for what was called "Azure Insights" until Sept 25th, 2016. However, the namespaces and thus the following commands still contain the word *insights*.
--
-## Set up PowerShell
-If you haven't already, set up PowerShell to run on your computer. For more information, see [How to Install and Configure PowerShell](/powershell/azure/).
-
-## Examples in this article
-The examples in the article illustrate how you can use Azure Monitor cmdlets. You can also review the entire list of Azure Monitor PowerShell cmdlets at [Azure Monitor (Insights) Cmdlets](/powershell/module/az.applicationinsights).
-
-## Sign in and use subscriptions
-First, log in to your Azure subscription.
-
-```powershell
-Connect-AzAccount
-```
-
-You'll see a sign in screen. Once you sign in your Account, TenantID, and default Subscription ID are displayed. All the Azure cmdlets work in the context of your default subscription. To view the list of subscriptions you have access to, use the following command:
-
-```powershell
-Get-AzSubscription
-```
-
-To see your working context (which subscription your commands are run against), use the following command:
-
-```powershell
-Get-AzContext
-```
-To change your working context to a different subscription, use the following command:
-
-```powershell
-Set-AzContext -SubscriptionId <subscriptionid>
-```
--
-## Retrieve Activity log
-Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet. The following are some common examples. The Activity Log holds the last 90 days of operations. Using dates before this time results in an error message.
-
-See what the current date/time are to verify what times to use in the commands below:
-```powershell
-Get-Date
-```
-
-Get log entries from this time/date to present:
-
-```powershell
-Get-AzLog -StartTime 2019-03-01T10:30
-```
-
-Get log entries between a time/date range:
-
-```powershell
-Get-AzLog -StartTime 2019-01-01T10:30 -EndTime 2019-01-01T11:30
-```
-
-Get log entries from a specific resource group:
-
-```powershell
-Get-AzLog -ResourceGroup 'myrg1'
-```
-
-Get log entries from a specific resource provider between a time/date range:
-
-```powershell
-Get-AzLog -ResourceProvider 'Microsoft.Web' -StartTime 2015-01-01T10:30 -EndTime 2015-01-01T11:30
-```
-
-Get all log entries with a specific caller:
-
-```powershell
-Get-AzLog -Caller 'myname@company.com'
-```
-
-The following command retrieves the last 1000 events from the activity log:
-
-```powershell
-Get-AzLog -MaxRecord 1000
-```
-
-`Get-AzLog` supports many other parameters. See the `Get-AzLog` reference for more information.
-
-> [!NOTE]
-> `Get-AzLog` only provides 15 days of history. Using the **-MaxRecords** parameter allows you to query the last N events, beyond 15 days. To access events older than 15 days, use the REST API or SDK (C# sample using the SDK). If you do not include **StartTime**, then the default value is **EndTime** minus one hour. If you do not include **EndTime**, then the default value is current time. All times are in UTC.
->
->
-
-## Retrieve alerts history
-To view all alert events, you can query the Azure Resource Manager logs using the following examples.
-
-```powershell
-Get-AzLog -Caller "Microsoft.Insights/alertRules" -DetailedOutput -StartTime 2015-03-01
-```
-
-To view the history for a specific alert rule, you can use the `Get-AzAlertHistory` cmdlet, passing in the resource ID of the alert rule.
-
-```powershell
-Get-AzAlertHistory -ResourceId /subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/alertrules/myalert -StartTime 2016-03-1 -Status Activated
-```
-
-The `Get-AzAlertHistory` cmdlet supports various parameters. More information, see [Get-AlertHistory](/previous-versions/azure/mt282453(v=azure.100)).
-
-## Retrieve information on alert rules
-All of the following commands act on a Resource Group named "montest".
-
-View all the properties of the alert rule:
-
-```powershell
-Get-AzAlertRule -Name simpletestCPU -ResourceGroup montest -DetailedOutput
-```
-
-Retrieve all alerts on a resource group:
-
-```powershell
-Get-AzAlertRule -ResourceGroup montest
-```
-
-Retrieve all alert rules set for a target resource. For example, all alert rules set on a VM.
-
-```powershell
-Get-AzAlertRule -ResourceGroup montest -TargetResourceId /subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig
-```
-
-`Get-AzAlertRule` supports other parameters. See [Get-AlertRule](/previous-versions/azure/mt282459(v=azure.100)) for more information.
-
-## Create metric alerts
-You can use the `Add-AlertRule` cmdlet to create, update, or disable an alert rule.
-
-You can create email and webhook properties using `New-AzAlertRuleEmail` and `New-AzAlertRuleWebhook`, respectively. In the Alert rule cmdlet, assign these properties as actions to the **Actions** property of the Alert Rule.
-
-The following table describes the parameters and values used to create an alert using a metric.
-
-| parameter | value |
-| | |
-| Name |simpletestdiskwrite |
-| Location of this alert rule |East US |
-| ResourceGroup |montest |
-| TargetResourceId |/subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig |
-| MetricName of the alert that is created |\PhysicalDisk(_Total)\Disk Writes/sec. See the `Get-MetricDefinitions` cmdlet about how to retrieve the exact metric names |
-| operator |GreaterThan |
-| Threshold value (count/sec in for this metric) |1 |
-| WindowSize (hh:mm:ss format) |00:05:00 |
-| aggregator (statistic of the metric, which uses Average count, in this case) |Average |
-| custom emails (string array) |'foo@example.com','bar@example.com' |
-| send email to owners, contributors and readers |-SendToServiceOwners |
-
-Create an Email action
-
-```powershell
-$actionEmail = New-AzAlertRuleEmail -CustomEmail myname@company.com
-```
-
-Create a Webhook action
-
-```powershell
-$actionWebhook = New-AzAlertRuleWebhook -ServiceUri https://example.com?token=mytoken
-```
-
-Create the alert rule on the CPU% metric on a classic VM
-
-```powershell
-Add-AzMetricAlertRule -Name vmcpu_gt_1 -Location "East US" -ResourceGroup myrg1 -TargetResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.ClassicCompute/virtualMachines/my_vm1 -MetricName "Percentage CPU" -Operator GreaterThan -Threshold 1 -WindowSize 00:05:00 -TimeAggregationOperator Average -Action $actionEmail, $actionWebhook -Description "alert on CPU > 1%"
-```
-
-Retrieve the alert rule
-
-```powershell
-Get-AzAlertRule -Name vmcpu_gt_1 -ResourceGroup myrg1 -DetailedOutput
-```
-
-The Add alert cmdlet also updates the rule if an alert rule already exists for the given properties. To disable an alert rule, include the parameter **-DisableRule**.
-
-## Get a list of available metrics for alerts
-You can use the `Get-AzMetricDefinition` cmdlet to view the list of all metrics for a specific resource.
-
-```powershell
-Get-AzMetricDefinition -ResourceId <resource_id>
-```
-
-The following example generates a table with the metric Name and the Unit for it.
-
-```powershell
-Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
-```
-
-A full list of available options for `Get-AzMetricDefinition` is available at [Get-MetricDefinitions](/previous-versions/azure/mt282458(v=azure.100)).
-
-## Create and manage Activity Log alerts
-You can use the `Set-AzActivityLogAlert` cmdlet to set an Activity Log alert. An Activity Log alert requires that you first define your conditions as a dictionary of conditions, then create an alert that uses those conditions.
-
-```powershell
-
-$condition1 = New-AzActivityLogAlertCondition -Field 'category' -Equal 'Administrative'
-$condition2 = New-AzActivityLogAlertCondition -Field 'operationName' -Equal 'Microsoft.Compute/virtualMachines/write'
-$additionalWebhookProperties = New-Object "System.Collections.Generic.Dictionary``2[System.String,System.String]"
-$additionalWebhookProperties.Add('customProperty', 'someValue')
-$actionGrp1 = New-AzActionGroup -ActionGroupId '/subscriptions/<subid>/providers/Microsoft.Insights/actiongr1' -WebhookProperty $additionalWebhookProperties
-Set-AzActivityLogAlert -Location 'Global' -Name 'alert on VM create' -ResourceGroupName 'myResourceGroup' -Scope '/subscriptions/<subid>' -Action $actionGrp1 -Condition $condition1, $condition2
-
-```
-
-The additional webhook properties are optional. You can get back the contents of an Activity Log Alert using `Get-AzActivityLogAlert`.
-
-## Create and manage AutoScale settings
-
-> [!NOTE]
-> For Cloud Services (Microsoft.ClassicCompute), autoscale supports a time grain of 5 minutes (PT5M). For the other services autoscale supports a time grain of minimum of 1 minute (PT1M)
-
-A resource (a Web app, VM, Cloud Service, or Virtual Machine Scale Set) can have only one autoscale setting configured for it.
-However, each autoscale setting can have multiple profiles. For example, one for a performance-based scale profile and a second one for a schedule-based profile. Each profile can have multiple rules configured on it. For more information about Autoscale, see [How to Autoscale an Application](../cloud-services/cloud-services-how-to-scale-portal.md).
-
-Here are the steps to use:
-
-1. Create rule(s).
-2. Create profile(s) mapping the rules that you created previously to the profiles.
-3. Optional: Create notifications for autoscale by configuring webhook and email properties.
-4. Create an autoscale setting with a name on the target resource by mapping the profiles and notifications that you created in the previous steps.
-
-The following examples show you how you can create an Autoscale setting for a Virtual Machine Scale Set for a Windows operating system based by using the CPU utilization metric.
-
-First, create a rule to scale out, with an instance count increase.
-
-```powershell
-$rule1 = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/s1/resourceGroups/big2/providers/Microsoft.Compute/virtualMachineScaleSets/big2 -Operator GreaterThan -MetricStatistic Average -Threshold 60 -TimeGrain 00:01:00 -TimeWindow 00:10:00 -ScaleActionCooldown 00:10:00 -ScaleActionDirection Increase -ScaleActionValue 1
-```
-
-Next, create a rule to scale in, with an instance count decrease.
-
-```powershell
-$rule2 = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/s1/resourceGroups/big2/providers/Microsoft.Compute/virtualMachineScaleSets/big2 -Operator GreaterThan -MetricStatistic Average -Threshold 30 -TimeGrain 00:01:00 -TimeWindow 00:10:00 -ScaleActionCooldown 00:10:00 -ScaleActionDirection Decrease -ScaleActionValue 1
-```
-
-Then, create a profile for the rules.
-
-```powershell
-$profile1 = New-AzAutoscaleProfile -DefaultCapacity 2 -MaximumCapacity 10 -MinimumCapacity 2 -Rules $rule1,$rule2 -Name "My_Profile"
-```
-
-Create a webhook property.
-
-```powershell
-$webhook_scale = New-AzAutoscaleWebhook -ServiceUri "https://example.com?mytoken=mytokenvalue"
-```
-
-Create the notification property for the autoscale setting, including email and the webhook that you created previously.
-
-```powershell
-$notification1= New-AzAutoscaleNotification -CustomEmails ashwink@microsoft.com -SendEmailToSubscriptionAdministrators SendEmailToSubscriptionCoAdministrators -Webhooks $webhook_scale
-```
-
-Finally, create the autoscale setting to add the profile that you created previously.
-
-```powershell
-Add-AzAutoscaleSetting -Location "East US" -Name "MyScaleVMSSSetting" -ResourceGroup big2 -TargetResourceId /subscriptions/s1/resourceGroups/big2/providers/Microsoft.Compute/virtualMachineScaleSets/big2 -AutoscaleProfiles $profile1 -Notifications $notification1
-```
-
-For more information about managing Autoscale settings, see [Get-AutoscaleSetting](/previous-versions/azure/mt282461(v=azure.100)).
-
-## Autoscale history
-The following example shows you how you can view recent autoscale and alert events. Use the activity log search to view the autoscale history.
-
-```powershell
-Get-AzLog -Caller "Microsoft.Insights/autoscaleSettings" -DetailedOutput -StartTime 2015-03-01
-```
-
-You can use the `Get-AzAutoScaleHistory` cmdlet to retrieve AutoScale history.
-
-```powershell
-Get-AzAutoScaleHistory -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/microsoft.insights/autoscalesettings/myScaleSetting -StartTime 2016-03-15 -DetailedOutput
-```
-
-For more information, see [Get-AutoscaleHistory](/previous-versions/azure/mt282464(v=azure.100)).
-
-### View details for an autoscale setting
-You can use the `Get-Autoscalesetting` cmdlet to retrieve more information about the autoscale setting.
-
-The following example shows details about all autoscale settings in the resource group 'myrg1'.
-
-```powershell
-Get-AzAutoscalesetting -ResourceGroup myrg1 -DetailedOutput
-```
-
-The following example shows details about all autoscale settings in the resource group 'myrg1' and specifically the autoscale setting named 'MyScaleVMSSSetting'.
-
-```powershell
-Get-AzAutoscalesetting -ResourceGroup myrg1 -Name MyScaleVMSSSetting -DetailedOutput
-```
-
-### Remove an autoscale setting
-You can use the `Remove-Autoscalesetting` cmdlet to delete an autoscale setting.
-
-```powershell
-Remove-AzAutoscalesetting -ResourceGroup myrg1 -Name MyScaleVMSSSetting
-```
-
-## Manage log profiles for activity log
-You can create a *log profile* and export data from your activity log to a storage account and you can configure data retention for it. Optionally, you can also stream the data to your Event Hub. This feature is currently in Preview and you can only create one log profile per subscription. You can use the following cmdlets with your current subscription to create and manage log profiles. You can also choose a particular subscription. Although PowerShell defaults to the current subscription, you can always change that using `Set-AzContext`. You can configure activity log to route data to any storage account or Event Hub within that subscription. Data is written as blob files in JSON format.
-
-### Get a log profile
-To fetch your existing log profiles, use the `Get-AzLogProfile` cmdlet.
-
-### Add a log profile without data retention
-```powershell
-Add-AzLogProfile -Name my_log_profile_s1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -Location global,westus,eastus,northeurope,westeurope,eastasia,southeastasia,japaneast,japanwest,northcentralus,southcentralus,eastus2,centralus,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia
-```
-
-### Remove a log profile
-```powershell
-Remove-AzLogProfile -name my_log_profile_s1
-```
-
-### Add a log profile with data retention
-You can specify the **-RetentionInDays** property with the number of days, as a positive integer, where the data is retained.
-
-```powershell
-Add-AzLogProfile -Name my_log_profile_s1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -Location global,westus,eastus,northeurope,westeurope,eastasia,southeastasia,japaneast,japanwest,northcentralus,southcentralus,eastus2,centralus,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia -RetentionInDays 90
-```
-
-### Add log profile with retention and EventHub
-In addition to routing your data to storage account, you can also stream it to an Event Hub. In this preview release the storage account configuration is mandatory but Event Hub configuration is optional.
-
-```powershell
-Add-AzLogProfile -Name my_log_profile_s1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -serviceBusRuleId /subscriptions/s1/resourceGroups/Default-ServiceBus-EastUS/providers/Microsoft.ServiceBus/namespaces/mytestSB/authorizationrules/RootManageSharedAccessKey -Location global,westus,eastus,northeurope,westeurope,eastasia,southeastasia,japaneast,japanwest,northcentralus,southcentralus,eastus2,centralus,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia -RetentionInDays 90
-```
-
-## Configure diagnostics logs
-Many Azure services provide additional logs and telemetry that can do one or more of the following:
-
-The operation can only be performed at a resource level. The storage account or event hub should be present in the same region as the target resource where the diagnostics setting is configured.
-
-### Get diagnostic setting
-```powershell
-Get-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp
-```
-
-Disable diagnostic setting
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp -StorageAccountId /subscriptions/s1/resourceGroups/Default-Storage-WestUS/providers/Microsoft.Storage/storageAccounts/mystorageaccount -Enable $false
-```
-
-Enable diagnostic setting without retention
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp -StorageAccountId /subscriptions/s1/resourceGroups/Default-Storage-WestUS/providers/Microsoft.Storage/storageAccounts/mystorageaccount -Enable $true
-```
-
-Enable diagnostic setting with retention
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp -StorageAccountId /subscriptions/s1/resourceGroups/Default-Storage-WestUS/providers/Microsoft.Storage/storageAccounts/mystorageaccount -Enable $true -RetentionEnabled $true -RetentionInDays 90
-```
-
-Enable diagnostic setting with retention for a specific log category
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/insights-integration/providers/Microsoft.Network/networkSecurityGroups/viruela1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/sakteststorage -Categories NetworkSecurityGroupEvent -Enable $true -RetentionEnabled $true -RetentionInDays 90
-```
-
-Enable diagnostic setting for Event Hubs
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/insights-integration/providers/Microsoft.Network/networkSecurityGroups/viruela1 -serviceBusRuleId /subscriptions/s1/resourceGroups/Default-ServiceBus-EastUS/providers/Microsoft.ServiceBus/namespaces/mytestSB/authorizationrules/RootManageSharedAccessKey -Enable $true
-```
-
-Enable diagnostic setting for Log Analytics
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/insights-integration/providers/Microsoft.Network/networkSecurityGroups/viruela1 -WorkspaceId /subscriptions/s1/resourceGroups/insights-integration/providers/microsoft.operationalinsights/workspaces/myWorkspace -Enabled $true
-
-```
-
-Note that the WorkspaceId property takes the *resource ID* of the workspace. You can obtain the resource ID of your Log Analytics workspace using the following command:
-
-```powershell
-(Get-AzOperationalInsightsWorkspace).ResourceId
-
-```
-
-These commands can be combined to send data to multiple destinations.
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
## Application Insights ## Next Steps - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
+- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
azure-monitor Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md
description: Describes recent terminology changes made to Azure monitoring servi
Previously updated : 10/08/2019 Last updated : 06/07/2022
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-security.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Last updated 06/02/2021+
azure-monitor Resource Manager Vminsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/resource-manager-vminsights.md
Last updated 05/18/2020+
azure-monitor Service Map Scom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map-scom.md
Last updated 07/12/2019+
azure-monitor Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map.md
Last updated 07/24/2019+
azure-monitor Tutorial Monitor Vm Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert.md
Last updated 11/04/2021+ # Tutorial: Create alert when Azure virtual machine is unavailable
azure-monitor Tutorial Monitor Vm Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-enable.md
Last updated 11/04/2021+ # Tutorial: Enable monitoring for Azure virtual machine
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
Last updated 11/08/2021+ # Tutorial: Collect guest logs and metrics from Azure virtual machine
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-workbooks.md
description: Simplify complex reporting with predefined and custom parameterized
Previously updated : 03/12/2020 Last updated : 05/27/2022
VM insights includes the following workbooks. You can use these workbooks or use
| Failed Connections | Display the count of failed connections on your monitored VMs, the failure trend, and if the percentage of failures is increasing over time. | | Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. | | TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. |
-| Traffic Comparison | This workbooks lets you compare network traffic trends for a single machine or a group of machines. |
+| Traffic Comparison | This workbook lets you compare network traffic trends for a single machine or a group of machines. |
## Creating a new workbook A workbook is made up of sections consisting of independently editable charts, tables, text, and input controls. To better understand workbooks, let's start by opening a template and walk through creating a custom workbook.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to the **Monitor** menu in the Azure portal.
-2. Select **Virtual Machines**.
+2. Select a virtual machine.
-3. From the list, select a VM.
+3. On the VM insights page, select the **Performance** or **Maps** tab, and then select **View Workbooks** from the link on the page. From the drop-down list, select **Go to Gallery**.
-4. On the VM page, in the **Monitoring** section, select **Insights**.
-
-5. On the VM insights page, select **Performance** or **Maps** tab and then select **View Workbooks** from the link on the page. From the drop-down list, select **Go to Gallery**.
-
- ![Screenshot of workbook drop-down list](media/vminsights-workbooks/workbook-dropdown-gallery-01.png)
+ :::image type="content" source="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" alt-text="Screenshot of workbook drop-down list in V M insights.":::
This launches the workbook gallery with a number of prebuilt workbooks to help you get started. 7. Create a new workbook by selecting **New**.
- ![Screenshot of workbook gallery](media/vminsights-workbooks/workbook-gallery-01.png)
## Editing workbook sections
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
Title: Azure Video Indexer scenes, shots, and keyframes description: This topic gives an overview of the Azure Video Indexer scenes, shots, and keyframes. Previously updated : 07/05/2019 Last updated : 06/07/2022
To extract high-resolution keyframes for your video, you must first upload and i
#### With the Azure Video Indexer website
-To extract keyframes using the Azure Video Indexer website, upload and index your video. Once the indexing job is complete, click on the **Download** button and select **Artifacts (ZIP)**. This will download the artifacts folder to your computer.
+To extract keyframes using the Azure Video Indexer website, upload and index your video. Once the indexing job is complete, click the **Download** button and select **Artifacts (ZIP)**. This will download the artifacts folder to your computer (make sure to review the warning regarding artifacts below). Unzip and open the folder. In the *_KeyframeThumbnail* folder, you will find all of the keyframes that were extracted from your video.
![Screenshot that shows the "Download" drop-down with "Artifacts" selected.](./media/scenes-shots-keyframes/extracting-keyframes2.png)
-Unzip and open the folder. In the *_KeyframeThumbnail* folder, and you will find all of the keyframes that were extracted from your video.
#### With the Azure Video Indexer API
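As a rough sketch of that flow in PowerShell: the account ID, video ID, and access token below are placeholders, and the `ArtifactUrl` route and `type` value are assumptions based on the Get Video Artifact Download URL operation listed in the API portal, so confirm them there before use.

```powershell
# Minimal sketch (placeholder values): request a download URL for the keyframe
# thumbnails artifact, then download the returned ZIP with a second GET request.
$location  = "trial"                   # or the Azure region of a paid account
$accountId = "<account-id>"
$videoId   = "<video-id>"
$token     = "<account-access-token>"

$artifactUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/ArtifactUrl?type=KeyframesThumbnails&accessToken=$token"
$downloadUrl = Invoke-RestMethod -Method Get -Uri $artifactUri

# The response is a short-lived URL to the artifact; save it locally as a ZIP.
Invoke-WebRequest -Uri $downloadUrl -OutFile ".\keyframes.zip"
```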
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
-> [!TIP]
-> The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
- To visually examine the video's insights, press the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website. ![Screenshot of the Insights tab in Azure Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
-When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false` to save time and reduce response length.
+When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false`.
+ This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-insights). > [!NOTE] > All the access tokens in Azure Video Indexer expire in one hour.
-## Get the insights
+## Get the insights using the website
To get insights produced on the website or the Azure portal: 1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in. 1. Find a video whose output you want to examine. 1. Press **Play**.
-1. Select the **Insights** tab to get summarized insights. Or select the **Timeline** tab to filter the relevant insights.
-1. Download artifacts and what's in them.
+1. Choose the **Insights** tab.
+2. Select which insights you want to view (under the **View** drop-down).
+3. Go to the **Timeline** tab to see timestamped transcript lines.
+4. Select **Download** > **Insights (JSON)** to get the insights output file.
+5. If you want to download artifacts, be aware of the following:
+
+ [!INCLUDE [artifacts](./includes/artifacts.md)]
For more information, see [View and edit video insights](video-indexer-view-edit.md).
-To get insights produced by the API:
+## Get insights produced by the API
+
+To retrieve the JSON file (OCR, face, keyframe, etc.) or an artifact type, call the [Get Video Index API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
+
+This API returns only a URL that links to the specific resource type you request. An additional GET request must be made to that URL to retrieve the artifact itself (a minimal request sketch follows the artifact type lists below). The file types for each artifact type vary depending on the artifact:
+
+### JSON
+
+* OCR
+* Faces
+* VisualContentModeration
+* LanguageDetection
+* MultiLanguageDetection
+* Metadata
+* Emotions
+* TextualContentModeration
+* AudioEffects
+* ObservedPeople
+* Labels
-- To retrieve the JSON file, call the [Get Video Index API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).-- If you're interested in specific artifacts, call the [Get Video Artifact Download URL API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
+### Zip file containing JPG images
- In the API call, specify the requested artifact type (for example, OCR, face, or keyframe).
+* KeyframesThumbnails
+* FacesThumbnails
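To make the call described above concrete, here's a minimal PowerShell sketch that retrieves the insights JSON. The location, account ID, video ID, and access token are placeholders; the `Index` route with `includeSummarizedInsights=false` follows the Get Video Index operation referenced earlier, and the property names at the end assume the output structure documented below.

```powershell
# Minimal sketch (placeholder values): retrieve the full insights JSON for a video.
$location  = "trial"                   # or the Azure region of a paid account
$accountId = "<account-id>"
$videoId   = "<video-id>"
$token     = "<account-access-token>"  # access tokens expire after one hour

$indexUri = "https://api.videoindexer.ai/$location/Accounts/$accountId/Videos/$videoId/Index?accessToken=$token&includeSummarizedInsights=false"
$index = Invoke-RestMethod -Method Get -Uri $indexUri

# Inspect a couple of insight types from the first video in the returned playlist.
$index.videos[0].insights.transcript | Select-Object -First 5
$index.videos[0].insights.labels     | Select-Object -First 5
```

Artifact types such as `KeyframesThumbnails` resolve to a download URL instead of inline JSON; fetch that URL with a second GET request, as described above.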
## Root elements of the insights
To get insights produced by the API:
|`isEditable`|Indicates whether the current user is authorized to edit the playlist.| |`isBase`|Indicates whether the playlist is a base playlist (a video) or a playlist made of other videos (derived).| |`durationInSeconds`|The total duration of the playlist.|
-|`summarizedInsights`|Contains one [summarized insight](#summarizedinsights).
+|`summarizedInsights`|Contains one [summarized insight](#summary-of-the-insights).
|`videos`|A list of [videos](#videos) that construct the playlist.<br/>If this playlist is constructed of time ranges of other videos (derived), the videos in this list will contain only data from the included time ranges.| ```json
To get insights produced by the API:
} ```
-## summarizedInsights
+## Summary of the insights
This section shows a summary of the insights.
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
Title: View and edit Azure Video Indexer insights description: This article demonstrates how to view and edit Azure Video Indexer insights.- Previously updated : 05/15/2019 Last updated : 06/07/2022
This topic shows you how to view and edit the Azure Video Indexer insights of a
2. Find a video from which you want to create your Azure Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md). 3. Press **Play**.
- The page shows the video's summarized insights.
+ The page shows the video's insights.
![Insights](./media/video-indexer-view-edit/video-indexer-summarized-insights.png)-
-4. View the summarized insights of the video.
+4. View the insights of the video.
 Summarized insights show an aggregated view of the data: faces, keywords, sentiments. For example, you can see the faces of people, the time ranges in which each face appears, and the percentage of time each face is shown.
+ [!INCLUDE [insights](./includes/insights.md)]
+
+ Select the **Timeline** tab to see transcripts with timelines and other information that you can choose from the **View** drop-down.
+ The player and the insights are synchronized. For example, if you click a keyword or the transcript line, the player brings you to that moment in the video. You can achieve the player/insights view and synchronization in your application. For more information, see [Embed Azure Indexer widgets into your application](video-indexer-embed-widgets.md).
+ If you want to download artifact files, be aware of the following:
+
+ [!INCLUDE [artifacts](./includes/artifacts.md)]
+
+ For more information, see [Insights output](video-indexer-output-json-v2.md).
+
## Next steps [Use your videos' deep insights](use-editor-create-project.md)
azure-web-pubsub Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview.md
Last updated 11/08/2021
-# What is Azure Web PubSub service?
+# What is Azure Web PubSub service?
The Azure Web PubSub Service helps you easily build real-time messaging web applications using WebSockets and the publish-subscribe pattern. This real-time functionality allows publishing content updates between the server and connected clients (for example, a single-page web application or mobile application). Clients don't need to poll for the latest updates or submit new HTTP requests for them.
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql.md
Title: Back up Azure Database for PostgreSQL description: Learn about Azure Database for PostgreSQL backup with long-term retention Previously updated : 02/25/2022 Last updated : 06/07/2022
You can configure backup on multiple databases across multiple Azure PostgreSQL
1. **Select Azure PostgreSQL databases to back up**: Choose one of the Azure PostgreSQL servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server. >[!Note]
- >You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
+ >- You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
+ >- Backup of Azure PostgreSQL servers with Private endpoint enabled is currently not supported.
:::image type="content" source="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-inline.png" alt-text="Screenshot showing the option to select an Azure PostgreSQL database." lightbox="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-expanded.png":::
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 05/06/2022 Last updated : 06/08/2022
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features: - Supports *Multiple Backups Per Day* (in preview).-- Instant Restore tier is zonally redundant using Zone-redundant storage (ZRS) resiliency. See the [pricing details for Enhanced policy storage here](https://azure.microsoft.com/pricing/details/managed-disks/).
+- Instant Restore tier is zonally redundant using Zone-redundant storage (ZRS) resiliency. See the [pricing details for Managed Disk Snapshots](https://azure.microsoft.com/pricing/details/managed-disks/).
:::image type="content" source="./media/backup-azure-vms-enhanced-policy/enhanced-backup-policy-settings.png" alt-text="Screenshot showing the enhanced backup policy options.":::
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Title: Multi-user authorization using Resource Guard description: An overview of Multi-user authorization using Resource Guard. Previously updated : 05/05/2022 Last updated : 06/08/2022
The following table lists the operations defined as critical operations and can
Disable soft delete | Mandatory
Disable MUA protection | Mandatory
-Modify backup policy | Optional: Can be excluded
-Modify protection | Optional: Can be excluded
-Stop protection | Optional: Can be excluded
+Modify backup policy (reduced retention) | Optional: Can be excluded
+Modify protection (reduced retention) | Optional: Can be excluded
+Stop protection with delete data | Optional: Can be excluded
Change MARS security PIN | Optional: Can be excluded ### Concepts and process
certification How To Indirectly Connected Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-indirectly-connected-devices.md
# Mandatory fields. Title: Certifing device bundles and indirectly connected devices
+ Title: Certify bundled or indirectly connected devices
-description: See how to submit an indirectly connected device for certification.
+description: Learn how to submit a bundled or indirectly connected device for Azure Certified Device certification. See how to configure dependencies and components.
Previously updated : 02/23/2021 Last updated : 06/07/2022 -+ # Optional fields. Don't forget to remove # if you need a field.
-#
# # # Device bundles and indirectly connected devices
-To support devices that interact with Azure through a device, SaaS or PaaS offerings, our submission portal (https://certify.azure.com/), and device catalog (https://devicecatalog.azure.com) enable concepts of bundling and dependencies to promote and enable these device combinations access to our Azure Certified Device program.
+Many devices interact with Azure indirectly. Some communicate through another device, such as a gateway. Others connect through software as a service (SaaS) or platform as a service (PaaS) offerings.
+
+The [submission portal](https://certify.azure.com/) and [device catalog](https://devicecatalog.azure.com) offer support for indirectly connected devices:
+
+- By listing dependencies in the portal, you can specify that your device needs another device or service to connect to Azure.
+- By adding components, you can indicate that your device is part of a bundle.
+
+This functionality gives indirectly connected devices access to the Azure Certified Device program.
-Depending on your product line and services offered, your situation may require a combination of these steps:
+Depending on your product line and the services that you offer or use, your situation might require a combination of dependencies and bundling. The Azure Edge Certification Portal provides a way for you to list dependencies and additional components.
-![Create project dependencies](./media/indirect-connected-device/picture-1.png )
## Sensors and indirect devices
-Many sensors require a device to connect to Azure. In addition, you may have multiple compatible devices that will work with the sensor device. **To accommodate these scenarios, you must first certify the device(s) before certifying the sensor that will pass information through them.**
-Example matrix of submission combinations
-![Submission example](./media/indirect-connected-device/picture-2.png )
+Many sensors require a device to connect to Azure. In addition, you might have multiple compatible devices that work with the sensor. **To accommodate these scenarios, certify the devices before you certify the sensor that passes information through them.**
+
+The following matrix provides some examples of submission combinations:
++
+To certify a sensor that requires a separate device:
+
+1. Go to the [Azure Certified Device portal](https://certify.azure.com) to certify the device and publish it to the Azure Certified Device catalog. If you have multiple, compatible pass-through devices, as in the earlier example, submit them separately for certification and catalog publication.
-To certify your sensor, which requires a separate device:
-1. First, [certify the device](https://certify.azure.com) and publish to the Azure Certified Device Catalog
- - If you have multiple, compatible passthrough devices (as in the example above), Submit them separately for certification and publish to the catalog as well
-2. With the sensor connected through the device, submit the sensor for certification
- * In the ΓÇ£DependenciesΓÇ¥ tab of the ΓÇ£Device detailsΓÇ¥ section, set the following values
- * Dependency type = ΓÇ£Hardware gatewayΓÇ¥
- * Dependency URL = ΓÇ£URL link to the device on the device catalogΓÇ¥
- * Used during testing = ΓÇ£YesΓÇ¥
- * Add any Customer-facing comments that should be provided to a user who sees the product description in the device catalog. (example: ΓÇ£Series 100 devices are required for sensors to connect to AzureΓÇ¥)
+1. With the sensor connected through the device, submit the sensor for certification. In the **Dependencies** tab of the **Device details** section, set the following values:
-3. If you have more devices you would like added as optional for this device, you can select ΓÇ£+ Add additional dependencyΓÇ¥. Then follow the same guidance and note that it was not used during testing. In the Customer-facing comments, ensure your customers are aware that other devices are associated with this sensor are available (as an alternative to the device that was used during testing).
+ - **Dependency type**: Select **Hardware gateway**.
+ - **Dependency URL**: Enter the URL of the device in the device catalog.
+ - **Used during testing**: Select **Yes**.
+ - **Customer-facing comments**: Enter any comments that you'd like to provide to a user who sees the product description in the device catalog. For example, you might enter **Series 100 devices are required for sensors to connect to Azure**.
-![Alt text](./media/indirect-connected-device/picture-3.png "Hardware dependency type")
+1. If you'd like to add more devices as optional for this device:
+
+ 1. Select **Add additional dependency**.
+ 1. Enter **Dependency type** and **Dependency URL** values.
+ 1. For **Used during testing**, select **No**.
+ 1. For **Customer-facing comments**, enter a comment that informs your customers that other devices are available as alternatives to the device that was used during testing.
+ ## PaaS and SaaS offerings
-As part of your product portfolio, you may have devices that you certify, but your device also requires other services from your company or other third-party companies. To add this dependency, follow these steps:
-1. Start the submission process for your device
-2. In the ΓÇ£DependenciesΓÇ¥ tab, set the following values
- - Dependency type = ΓÇ£Software serviceΓÇ¥
- - Service name = ΓÇ£[your product name]ΓÇ¥
- - Dependency URL = ΓÇ£URL link to a product page that describes the serviceΓÇ¥
- - Add any customer facing comments that should be provided to a user who sees the product description in the Azure Certified Device Catalog
-3. If you have other software, services or hardware dependencies you would like added as optional for this device, you can select ΓÇ£+ Add additional dependencyΓÇ¥ and follow the same guidance.
-![Software dependency type](./media/indirect-connected-device/picture-4.png )
+As part of your product portfolio, you might certify a device that requires services from your company or third-party companies. To add this type of dependency:
+
+1. Go to the [Azure Certified Device portal](https://certify.azure.com) and start the submission process for your device.
+
+1. In the **Dependencies** tab, enter the following values:
+
+ - **Dependency type**: Select **Software service**.
+ - **Service name**: Enter the name of your product.
+ - **Dependency URL**: Enter the URL of a product page that describes the service.
+ - **Customer-facing comments**: Enter any comments that you'd like to provide to a user who sees the product description in the Azure Certified Device catalog.
+
+1. If you have other software, services, or hardware dependencies that you'd like to add as optional for this device, select **Add additional dependency** and enter the required information.
+ ## Bundled products
-Bundled product listings are simply the successful certification of a device with another components that will be sold as part of the bundle in one product listing. You have the ability to submit a device that includes extra components such as a temperature sensor and a camera sensor (#1) or you could submit a touch sensor that includes a passthrough device (#2). Through the ΓÇ£ComponentΓÇ¥ feature, you have the ability to add multiple components to your listing.
-If you intend to do this, you format the product listing image to indicate this product comes with other components. In addition, if your bundle requires additional services to certify, you will need to identify those through the services dependency.
-Example matrix of bundled products
+With bundled product listings, a device is successfully certified in the Azure Certified Device program with other components. The device and the components are then sold together under one product listing.
+
+The following matrix provides some examples of bundled products. You can submit a device that includes extra components such as a temperature sensor and a camera sensor, as in submission example 1. You can also submit a touch sensor that includes a pass-through device, as in submission example 2.
-![Bundle submission example](./media/indirect-connected-device/picture-5.png )
-For a more detailed description on how to use the component functionality in the Azure Certified Device portal, see our [help documentation](./how-to-using-the-components-feature.md).
+Use the component feature to add multiple components to your listing. Format the product listing image to indicate that your product comes with other components. If your bundle requires additional services for certification, identify those services through service dependencies.
-If a device is a passthrough device with a separate sensor in the same product, create one component to reflect the passthrough device, and another component to reflect the sensor. Components can be added to your project in the Product details tab of the Device details section:
+For a more detailed description of how to use the component functionality in the Azure Certified Device portal, see [Add components on the portal](./how-to-using-the-components-feature.md).
-![Adding components](./media/indirect-connected-device/picture-6.png )
+If a device is a pass-through device with a separate sensor in the same product, create one component to reflect the pass-through device, and another component to reflect the sensor. As the following screenshot shows, you can add components to your project in the **Product details** tab of the **Device details** section:
-For the passthrough device, set the Component type as a Customer Ready Product, and fill in the other fields as relevant for your product. Example:
-![Component details](./media/indirect-connected-device/picture-7.png )
+Configure the pass-through device first. For **Component type**, select **Customer Ready Product**. Enter the other values, as relevant for your product. The following screenshot provides an example:
-For the sensor, add a second component, setting the Component type as Peripheral and Attachment method as Discrete. Example:
-![Second component details](./media/indirect-connected-device/picture-8.png )
+For the sensor, add a second component. For **Component type**, select **Peripheral**. For **Attachment method**, select **Discrete**. The following screenshot provides an example:
-Once the Sensor component has been created, Edit the details, navigate to the Sensors tab, and then add the sensor details. Example:
-![Sensor details](./media/indirect-connected-device/picture-9.png )
+After you've created the sensor component, enter its information. Then go to the **Sensors** tab and enter detailed sensor information, as the following screenshot shows.
-Complete your projects details and Submit your device for certification as normal.
+Complete the rest of your project's details, and then submit your device for certification as usual.
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md
This article describes the available virtual machine sizes for Cloud Services (e
|[G](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#g-series) | 180-240* | |[H](../virtual-machines/h-series.md) | 290 - 300* | + >[!NOTE] > ACUs marked with a * use Intel® Turbo technology to increase CPU frequency and provide a performance boost. The amount of the boost can vary based on the VM size, workload, and other workloads running on the same host. + ## Configure sizes for Cloud Services (extended support) You can specify the virtual machine size of a role instance as part of the service model in the service definition file. The size of the role determines the number of CPU cores, memory capacity and the local file system size.
For example, setting the web role instance size to `Standard_D2`:
```xml
<WorkerRole name="Worker1" vmsize="Standard_D2">
</WorkerRole>
```
+>[!IMPORTANT]
+> Microsoft Azure has introduced newer generations of high-performance computing (HPC), general purpose, and memory-optimized virtual machines (VMs). For this reason, we recommend that you migrate workloads from the original H-series and H-series Promo VMs to our newer offerings by August 31, 2022. Azure [HC](../virtual-machines/hc-series.md), [HBv2](../virtual-machines/hbv2-series.md), [HBv3](../virtual-machines/hbv3-series.md), [Dv4](../virtual-machines/dv4-dsv4-series.md), [Dav4](../virtual-machines/dav4-dasv4-series.md), [Ev4](../virtual-machines/ev4-esv4-series.md), and [Eav4](../virtual-machines/eav4-easv4-series.md) VMs have greater memory bandwidth, improved networking capabilities, and better cost and performance across various HPC workloads.
## Change the size of an existing role
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-sizes-specs.md
In addition to the substantial CPU power, the H-series offers diverse options fo
\*RDMA capable
+>[!IMPORTANT]
+> Microsoft Azure has introduced newer generations of high-performance computing (HPC), general purpose, and memory-optimized virtual machines (VMs). For this reason, we recommend that you migrate workloads from the original H-series and H-series Promo VMs to our newer offerings by August 31, 2022. Azure [HC](../virtual-machines/hc-series.md), [HBv2](../virtual-machines/hbv2-series.md), [HBv3](../virtual-machines/hbv3-series.md), [Dv4](../virtual-machines/dv4-dsv4-series.md), [Dav4](../virtual-machines/dav4-dasv4-series.md), [Ev4](../virtual-machines/ev4-esv4-series.md), and [Eav4](../virtual-machines/eav4-easv4-series.md) VMs have greater memory bandwidth, improved networking capabilities, and better cost and performance across various HPC workloads.
+
+ On August 31, 2022, we're retiring the following H-series Azure VM sizes:
+
+- H8
+- H8m
+- H16
+- H16r
+- H16m
+- H16mr
+- H8 Promo
+- H8m Promo
+- H16 Promo
+- H16r Promo
+- H16m Promo
+- H16mr Promo
+ ## Configure sizes for Cloud Services You can specify the Virtual Machine size of a role instance as part of the service model described by the [service definition file](cloud-services-model-and-package.md#csdef). The size of the role determines the number of CPU cores, the memory capacity, and the local file system size that is allocated to a running instance. Choose the role size based on your application's resource requirement.
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
Title: Best practices for using the Anomaly Detector Multivariate API
+ Title: Best practices for using the Multivariate Anomaly Detector API
description: Best practices for using the Anomaly Detector Multivariate API's to apply anomaly detection to your time series data.
Previously updated : 04/01/2021 Last updated : 06/07/2022 keywords: anomaly detection, machine learning, algorithms
-# Best practices for using the Anomaly Detector multivariate API
+# Best practices for using the Multivariate Anomaly Detector API
This article provides guidance on recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs. In this tutorial, you'll:
Follow the instructions in this section to avoid errors while using MVAD. If you
## Data engineering
-Now you're able to run the your code with MVAD APIs without any error. What could be done to improve your model accuracy?
+Now you're able to run your code with MVAD APIs without any error. What could be done to improve your model accuracy?
### Data quality
-* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It is hard for the model to learn these types of patterns if the training data is full of anomalies. An empirical threshold of abnormal rate is **1%** and below for good accuracy.
-* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learnt as normal patterns. That may result in real (not missing) data points being detected as anomalies.
- However, there are cases when a high missing ratio is acceptable. For example, if you have two variables (time series) in a group using `Outer` mode to align their timestamps. One of them has one-minute granularity, the other one has hourly granularity. Then the hourly variable by nature has at least 59 / 60 = 98.33% missing data points. In such cases, it's fine to fill the hourly variable using the only value available (not missing) if it typically does not fluctuate too much.
+* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It's hard for the model to learn these types of patterns if the training data is full of anomalies. An empirical threshold of abnormal rate is **1%** and below for good accuracy.
+* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learned as normal patterns. That may result in real (not missing) data points being detected as anomalies.
+ ### Data quantity * The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **15,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, in cases when you're not able to accrue that much data, we still encourage you to experiment with less data and see if the compromised accuracy is still acceptable. * Every time you call the inference API, you need to ensure that the source data file contains just enough data points. That is normally `slidingWindow` + the number of data points that **really** need inference results. For example, in a streaming case where you want to run inference on **ONE** new timestamp at a time, the data file could contain only the leading `slidingWindow` plus **ONE** data point; then you could move on and create another zip file with the same number of data points (`slidingWindow` + 1) but shifted ONE step to the "right", and submit it for another inference job.
- Anything beyond that or "before" the leading sliding window will not impact the inference result at all and may only cause performance downgrade.Anything below that may lead to an `NotEnoughInput` error.
+ Anything beyond that or "before" the leading sliding window won't impact the inference result at all and may only degrade performance. Anything below that may lead to a `NotEnoughInput` error.
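To make the arithmetic concrete, here's an illustrative sketch (the file and column names are hypothetical) that keeps only the trailing `slidingWindow` + *n* rows needed for a streaming inference call:

```python
# Illustrative sketch only: computes the minimal number of rows an inference
# data file needs when you want results for the latest `n_targets` timestamps.
# File and column names are hypothetical.
import pandas as pd

sliding_window = 1440          # value used when the model was trained
n_targets = 1                  # timestamps you actually want inference results for

required_rows = sliding_window + n_targets

# Keep only the trailing window needed for this inference call; anything older
# wouldn't change the result and only slows the job down.
df = pd.read_csv("variable_1.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp").tail(required_rows)

if len(df) < required_rows:
    raise ValueError(f"NotEnoughInput risk: need {required_rows} rows, have {len(df)}")

df.to_csv("variable_1_window.csv", index=False)
```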
### Timestamp round-up
-In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here is a simple example.
+In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here's a simple example.
*Variable-1*
In a group of variables (time series), each variable may be collected from an in
| 12:01:34 | 1.7 | | 12:02:04 | 2.0 |
-We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors are not sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD will take into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
+We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors aren't sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD will take into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table will be
Let's see what happens if they're not pre-processed. If we set `alignMode` to be
| 12:02:04 | `nan` | 2.0 | | 12:02:08 | 1.3 | `nan` |
-`nan` indicates missing values. Obviously, the merged table is not what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model cannot extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table will be empty as there is no common timestamp in variable 1 and variable 2.
+`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table will be empty as there's no common timestamp in variable 1 and variable 2.
Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are
Now the merged table is more reasonable.
Values of different variables at close timestamps are well aligned, and the MVAD model can now extract correlation information.
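One possible way to do this pre-processing is sketched below with pandas; the file and column names are hypothetical, and the 30-second frequency matches the example above.

```python
# Illustrative pre-processing sketch: round timestamps to the nearest 30 seconds
# so that variables collected from different sensors align on common timestamps.
import pandas as pd

for name in ["variable_1", "variable_2"]:
    df = pd.read_csv(f"{name}.csv")
    df["timestamp"] = pd.to_datetime(df["timestamp"]).dt.round("30s")
    # If rounding produces duplicate timestamps, keep one value per timestamp.
    df = df.groupby("timestamp", as_index=False).agg({"value": "mean"})
    df.to_csv(f"{name}_rounded.csv", index=False)
```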
+### Limitations
+
+There are some limitations in both the training and inference APIs. You should be aware of these limitations to avoid errors.
+
+#### General limitations
+* Sliding window: 28-2880 timestamps; the default is 300. For periodic data, set the sliding window to the length of 2-4 cycles.
+* API calls: At most 20 API calls per minute.
+* Variable numbers: For training and asynchronous inference, at most 301 variables.
+
+#### Training limitations
+* Timestamps: At most 1,000,000. Too few timestamps may decrease model quality. We recommend using more than 15,000 timestamps.
+* Granularity: The minimum granularity is `per_second`.
+
+#### Asynchronous inference limitations
+* Timestamps: At most 20,000; at least one sliding window length.
+
+#### Synchronous inference limitations
+* Timestamps: At most 2,880; at least one sliding window length.
+* Detecting timestamps: From 1 to 10.
+
+## Model quality
+
+### How to deal with false positive and false negative in real scenarios?
+We provide `severity`, which indicates the significance of anomalies. False positives may be filtered out by setting a threshold on the severity. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases, a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data, and anomalies may bias the model. Thus, proper data cleaning may help reduce false negatives.
+
+### How to estimate which model is best to use according to training loss and validation loss?
+Generally speaking, it's hard to decide which model is the best without a labeled dataset. However, we can use the training and validation losses to get a rough estimate and discard bad models. First, observe whether the training losses converge. Divergent losses often indicate poor model quality. Second, loss values may help identify whether underfitting or overfitting occurs. Models that are underfitting or overfitting may not have the desired performance. Third, although the definition of the loss function doesn't reflect the detection performance directly, loss values may be an auxiliary tool to estimate model quality. A low loss value is a necessary condition for a good model; thus, we may discard models with high loss values.
++ ## Common pitfalls Apart from the [error code table](./troubleshoot.md), we've learned from customers like you some common pitfalls while using MVAD APIs. This table will help you to avoid these issues. | Pitfall | Consequence |Explanation and solution | | | -- | -- |
-| Timestamps in training data and/or inference data were not rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results are not as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
+| Timestamps in training data and/or inference data weren't rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results aren't as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
| Too many anomalous data points in the training data | Model accuracy is impacted negatively because it treats anomalous data points as normal patterns during training. | Empirically, keeping the abnormal rate at or below **1%** will help. | | Too little training data | Model accuracy is compromised. | Empirically, training an MVAD model requires 15,000 or more data points (timestamps) per variable to maintain good accuracy.|
-| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noises. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
+| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that aren't severe and (optionally) use grouping to check the duration of the anomalies to suppress random noises. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
| Sub-folders are zipped into the data file for training or inference. | The csv data files inside sub-folders are ignored during training and/or inference. | No sub-folders are allowed in the zip file. Please refer to [Folder structure](#folder-structure) for details. | | Too much data in the inference data file: for example, compressing all historical data in the inference data zip file | You may not see any errors but you'll experience degraded performance when you try to upload the zip file to Azure Blob as well as when you try to run inference. | Please refer to [Data quantity](#data-quantity) for details. |
-| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You will get a "resource not found" error while calling the MVAD APIs. | During preview stage, MVAD is available on limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request for specific regions. |
+| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You'll get a "resource not found" error while calling the MVAD APIs. | During preview stage, MVAD is available on limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request for specific regions. |
## FAQ
Apart from the [error code table](./troubleshoot.md), we've learned from custome
Let's use two examples to learn how MVAD's sliding window works. Suppose you have set `slidingWindow` = 1,440, and your input data is at one-minute granularity.
-* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps. Because, MVAD will take the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow` or 1,440 in this case. 1,440 = 60 * 24, so your input data must start from at latest "2021-01-01T00:00:00Z".
+* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps, because MVAD takes the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow`, or 1,440 in this case. 1,440 = 60 * 24, so your input data must start no later than "2021-01-01T00:00:00Z".
* **Batch scenario**: You have multiple target data points to predict. Your `endTime` will be greater than your `startTime`. Inference in such scenarios is performed in a "moving window" manner. For example, MVAD will use data from `2021-01-01T00:00:00Z` to `2021-01-01T23:59:00Z` (inclusive) to determine whether data at `2021-01-02T00:00:00Z` is anomalous. Then it moves forward and uses data from `2021-01-01T00:01:00Z` to `2021-01-02T00:00:00Z` (inclusive) to determine whether data at `2021-01-02T00:01:00Z` is anomalous. It moves on in the same manner (taking 1,440 data points to compare) until the last timestamp specified by `endTime` (or the actual latest timestamp). Therefore, your inference data source must contain data starting from `startTime` - `slidingWindow`, and ideally contain a total of `slidingWindow` + (`endTime` - `startTime`) data points.
-### Why only accepting zip files for training and inference?
+### Why does the service only accept zip files for training and inference when sending data asynchronously?
-We use zip files because in batch scenarios, we expect the size of both training and inference data would be very large and cannot be put in the HTTP request body. This allows users to perform batch inference on historical data either for model validation or data analysis.
+We use zip files because, in batch scenarios, we expect the training and inference data to be very large and not fit in the HTTP request body. Zip files allow users to perform batch inference on historical data, either for model validation or for data analysis.
However, this might be somewhat inconvenient for streaming inference and for high frequency data. We have a plan to add a new API specifically designed for streaming inference that users can pass data in the request body. ### What's the difference between `severity` and `score`?
-Normally we recommend you use `severity` as the filter to sift out 'anomalies' that are not so important to your business. Depending on your scenario and data pattern, those anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes.
+Normally, we recommend that you use `severity` as the filter to sift out 'anomalies' that aren't so important to your business. Depending on your scenario and data pattern, those anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes.
In cases where you need more sophisticated rules than thresholds against `severity` or the duration of continuous high `severity` values, you may want to use `score` to build more powerful filters. Understanding how MVAD uses `score` to determine anomalies may help:
-We consider whether a data point is anomalous from both global and local perspective. If `score` at a timestamp is higher than a certain threshold, then the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher in a segment, it is also marked as an anomaly.
+We consider whether a data point is anomalous from both a global and a local perspective. If `score` at a timestamp is higher than a certain threshold, then the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher in a segment, it's also marked as an anomaly.
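To make the filtering concrete, here's a small illustrative sketch. The threshold is an arbitrary example, and `results` stands in for the simplified per-timestamp records (with `isAnomaly` and `severity` fields) that the inference response carries.

```python
# Illustrative post-processing sketch; the threshold is an arbitrary example,
# and `results` is a simplified stand-in for the per-timestamp inference records.
results = [
    {"timestamp": "2021-01-02T00:00:00Z", "isAnomaly": True,  "severity": 0.52, "score": 0.91},
    {"timestamp": "2021-01-02T00:01:00Z", "isAnomaly": True,  "severity": 0.08, "score": 0.40},
    {"timestamp": "2021-01-02T00:02:00Z", "isAnomaly": False, "severity": 0.00, "score": 0.12},
]

SEVERITY_THRESHOLD = 0.30  # tune per scenario

# Keep only anomalies whose severity clears the threshold; points flagged by
# isAnomaly alone but with low severity are treated as noise.
anomalies = [r for r in results if r["isAnomaly"] and r["severity"] >= SEVERITY_THRESHOLD]
print(anomalies)
```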
+ ## Next steps * [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md).
-* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
+* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/build-enrollment-app.md
+
+ Title: Build a React app to add users to a Face service
+
+description: Learn how to set up your development environment and deploy a Face app to get consent from customers.
++++++ Last updated : 11/17/2020+++
+# Build a React app to add users to a Face service
+
+This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users to a face recognition service and for acquiring high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or a personalization kiosk based on users' face data.
+
+When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
+
+The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you created to connect your application to Face API.
+ * For local development and testing only, you can store the API key and endpoint in environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables.
+
+### Important security considerations
+* For local development and initial limited testing, it's acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely, which likely involves using an intermediate service to validate a user token generated during login.
+* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
+* As a best practice, consider having separate API keys for development and production.
+
+## Set up the development environment
+
+#### [Android](#tab/android)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select your development OS and **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/).
+1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+
+#### [iOS](#tab/ios)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select **macOS** as your development OS and **iOS** as the target OS. Complete the section **Installing dependencies**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/). You will also need to download Xcode.
+1. Retrieve your FaceAPI endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+++
+## Create a user add experience
+
+Now that you have set up the sample app, you can tailor it to your own needs.
+
+For example, you may want to add situation-specific information on your consent page:
+
+> [!div class="mx-imgBorder"]
+> ![app consent page](../media/enrollment-app/1-consent-1.jpg)
+
+Many face recognition issues are caused by low-quality reference images. Some factors that can degrade model performance are:
+* Face size (faces that are distant from the camera)
+* Face orientation (faces turned or tilted away from camera)
+* Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
+* Occlusion (partially hidden or obstructed faces, including accessories like hats or thick-rimmed glasses)
+* Blur (such as by rapid face movement when the photograph was taken).
+
+The service provides image quality checks to help you decide whether an image is of sufficient quality, based on the factors above, to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality, show user interface messages that help the user capture a higher-quality image, select the highest-quality frames, and add the detected face to the Face API service.
++
+> [!div class="mx-imgBorder"]
+> ![app image capture instruction page](../media/enrollment-app/4-instruction.jpg)
+
+Notice that the app also offers functionality for deleting the user's information and the option to add them again.
+
+> [!div class="mx-imgBorder"]
+> ![profile management page](../media/enrollment-app/10-manage-2.jpg)
+
+To extend the app's functionality to cover the full experience, read the [overview](../enrollment-overview.md) for additional features to implement and best practices.
+
+## Deploy the app
+
+#### [Android](#tab/android)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../cognitive-services-security.md?tabs=command-line%2ccsharp).
+
+When you're ready to release your app for production, you'll generate a release-ready APK file, which is the package file format for Android apps. This APK file must be signed with a private key. With this release build, you can begin distributing the app to your devices directly.
+
+Follow the <a href="https://developer.android.com/studio/publish/preparing#publishing-build" title="Prepare for release" target="_blank">Prepare for release <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn how to generate a private key, sign your application, and generate a release APK.
+
+Once you've created a signed APK, see the <a href="https://developer.android.com/studio/publish" title="Publish your app" target="_blank">Publish your app <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn more about how to release your app.
+
+#### [iOS](#tab/ios)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../cognitive-services-security.md?tabs=command-line%2ccsharp). To prepare for distribution, you will need to create an app icon, a launch screen, and configure deployment info settings. Follow the [documentation from Xcode](https://developer.apple.com/documentation/Xcode/preparing_your_app_for_distribution) to prepare your app for distribution.
+
+When you're ready to release your app for production, you'll build an archive of your app. Follow the [Xcode documentation](https://developer.apple.com/documentation/Xcode/distributing_your_app_for_beta_testing_and_releases) on how to create an archive build and options for distributing your app.
+++
+## Next steps
+
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
keywords: on-premises, OCR, Docker, container
Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run Computer Vision containers.
-The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md).
+The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
## What's new The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you are an existing customer, please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Computer Vision can detect human faces within an image and generate rectangle coordinates for each detected face. > [!NOTE]
-> This feature is also offered by the Azure [Face](../face/index.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
+> This feature is also offered by the Azure [Face](./index-identity.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
## Face detection examples
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
+
+ Title: "Face detection and attributes concepts"
+
+description: Learn more about face detection; face detection is the action of locating human faces in an image and optionally returning different kinds of face-related data.
+++++++ Last updated : 10/27/2021+++
+# Face detection and attributes
+
+This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.
+
+You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
+
+## Face rectangle
+
+Each detected face corresponds to a `faceRectangle` field in the response. This is a set of pixel coordinates for the left, top, width, and height of the detected face. Using these coordinates, you can get the location and size of the face. In the API response, faces are listed in size order from largest to smallest.
+
+## Face ID
+
+The face ID is a unique identifier string for each detected face in an image. You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+
+## Face landmarks
+
+Face landmarks are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. By default, there are 27 predefined landmark points. The following figure shows all 27 points:
+
+![A face diagram with all 27 landmarks labeled](./media/landmarks.1.jpg)
+
+The coordinates of the points are returned in units of pixels.
+
+The Detection_03 model currently has the most accurate landmark detection. The eye and pupil landmarks it returns are precise enough to enable gaze tracking of the face.
+
+## Attributes
+
+Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
+
+* **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
+* **Age**. The estimated age in years of a particular face.
+* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
+* **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
+* **Exposure**. The exposure of the face in the image. This attribute returns a value between zero and one and an informal rating of underExposure, goodExposure, or overExposure.
+* **Facial hair**. The estimated facial hair presence and the length for the given face.
+* **Gender**. The estimated gender of the given face. Possible values are male, female, and genderless.
+* **Glasses**. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and Swimming Goggles.
+* **Hair**. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
+* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, which are defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The order of three angles is roll-yaw-pitch, and each angle's value range is from -180 degrees to 180 degrees. 3D orientation of the face is estimated by the roll, yaw, and pitch angles in order. See the following diagram for angle mappings:
+
+ ![A head with the pitch, roll, and yaw axes labeled](./media/headpose.1.jpg)
+
+ For more details on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
+* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
+* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
+* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
+* **Occlusion**. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
+* **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
+* **QualityForRecognition**. The overall image quality, which indicates whether the image being used in detection is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
+ >[!NOTE]
+ > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
+
+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.
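To illustrate how attributes are requested, here's a hedged sketch (the endpoint, key, and image URL are placeholders, and the attribute list is just a sample) that asks the Detect call for head pose, mask, and quality information using the model combination noted above:

```python
# Illustrative sketch: request a subset of face attributes from Face - Detect.
# QualityForRecognition requires detection_01 or detection_03 together with
# recognition_03 or recognition_04, as noted above. Endpoint and key are placeholders.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-face-api-key>"

resp = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "returnFaceId": "true",
        "returnFaceAttributes": "headPose,mask,qualityForRecognition",
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/portrait.jpg"},
)
resp.raise_for_status()

for face in resp.json():
    quality = face["faceAttributes"]["qualityForRecognition"]
    # Only enroll faces rated "high"; use "medium" or better for identification.
    print(face["faceId"], quality)
```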
+
+## Input data
+
+Use the following tips to make sure that your input images give the most accurate detection results:
+
+* The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
+* The image file size should be no larger than 6 MB.
+* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they are larger than the minimum detectable face size.
+* The maximum detectable face size is 4096 x 4096 pixels.
+* Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
+* Some faces might not be recognized because of technical challenges, such as:
+ * Images with extreme lighting, for example, severe backlighting.
+ * Obstructions that block one or both eyes.
+ * Differences in hair type or facial hair.
+ * Changes in facial appearance because of age.
+ * Extreme facial expressions.
+
+### Input data with orientation information
+
+Some input images with JPEG format might contain orientation information in Exchangeable image file format (Exif) metadata. If Exif orientation is available, images will be automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face will be estimated based on the rotated image.
+
+To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most image visualization tools auto-rotate the image according to its Exif orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
+
+![Two face images with and without rotation](./media/image-rotation.png)
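As an illustration of applying the rotation yourself, here's a small Pillow sketch; the file name and rectangle coordinates are placeholders standing in for values from the Detect response.

```python
# Illustrative sketch: rotate an image according to its Exif orientation with
# Pillow before drawing a face rectangle, so the rectangle lines up with the
# rotated image that the service analyzed. Coordinates are placeholders.
from PIL import Image, ImageDraw, ImageOps

image = ImageOps.exif_transpose(Image.open("photo.jpg"))

left, top, width, height = 120, 80, 200, 200  # faceRectangle values from the Detect response
ImageDraw.Draw(image).rectangle([left, top, left + width, top + height], outline="red", width=3)
image.save("photo-annotated.jpg")
```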
+
+### Video input
+
+If you're detecting faces from a video feed, you may be able to improve performance by adjusting certain settings on your video camera:
+
+* **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
+* **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.
+* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This will result in clearer video frames.
+
+ >[!NOTE]
+ > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use.
+
+## Next steps
+
+Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image.
+
+* [Call the detect API](./how-to/identity-detect-faces.md)
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
+
+ Title: "Face recognition concepts"
+
+description: Learn the concept of Face recognition, its related operations, and the underlying data structures.
+++++++ Last updated : 10/27/2021+++
+# Face recognition concepts
+
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, Face recognition refers to the method of verifying or identifying an individual by their face.
+
+Verification is one-to-one matching that takes two faces and returns whether they are the same face, and identification is one-to-many matching that takes a single face as input and returns a set of matching candidates. Face recognition is important in implementing the identity verification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
+
+## Related data structures
+
+The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription, but name fields may be duplicated.
+
+|Name|Description|
+|:--|:--|
+|DetectedFace| This single face representation is retrieved by the [face detection](./how-to/identity-detect-faces.md) operation. Its ID expires 24 hours after it's created.|
+|PersistedFace| When DetectedFace objects are added to a group, such as FaceList or Person, they become PersistedFace objects. They can be [retrieved](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c) at any time and don't expire.|
+|[FaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b) or [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc)| This data structure is an assorted list of PersistedFace objects. A FaceList has a unique ID, a name string, and optionally a user data string.|
+|[Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c)| This data structure is a list of PersistedFace objects that belong to the same person. It has a unique ID, a name string, and optionally a user data string.|
+|[PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d)| This data structure is an assorted list of Person objects. It has a unique ID, a name string, and optionally a user data string. A PersonGroup must be [trained](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) before it can be used in recognition operations.|
+|PersonDirectory | This data structure is like **LargePersonGroup** but offers additional storage capacity and other added features. For more information, see [Use the PersonDirectory structure](./how-to/use-persondirectory.md).|
+
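+As an illustrative sketch (not part of the original article), the following C# snippet uses the .NET Face client library to create a FaceList and persist a face from an image URL. It assumes an authenticated `IFaceClient` named `faceClient`; the list ID and image URL are placeholders.
+
+```csharp
+// Create a FaceList and add a face to it from an image URL.
+string faceListId = "my-face-list";
+await faceClient.FaceList.CreateAsync(faceListId, name: "My face list");
+
+// The returned PersistedFace has an ID that doesn't expire.
+PersistedFace persistedFace = await faceClient.FaceList.AddFaceFromUrlAsync(
+    faceListId, "https://example.com/images/face1.jpg");
+
+Console.WriteLine($"Persisted face ID: {persistedFace.PersistedFaceId}");
+```
+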
+## Recognition operations
+
+This section details how the underlying operations use the above data structures to identify and verify a face.
+
+### PersonGroup creation and training
+
+You need to create a [PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) to store the set of people to match against. PersonGroups hold [Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) objects, which each represent an individual person and hold a set of face data belonging to that person.
+
+The [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) operation prepares the data set to be used in face data comparisons.
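+
+A minimal sketch with the .NET Face client library, assuming an authenticated `IFaceClient` named `faceClient`; the group ID, person name, and image URL are placeholder values:
+
+```csharp
+string personGroupId = "my-person-group";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My person group");
+
+// Create a Person and add a face image to it.
+Person person = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "Anna");
+await faceClient.PersonGroupPerson.AddFaceFromUrlAsync(
+    personGroupId, person.PersonId, "https://example.com/images/anna1.jpg");
+
+// Train the group, then poll until training finishes.
+await faceClient.PersonGroup.TrainAsync(personGroupId);
+TrainingStatus trainingStatus;
+do
+{
+    await Task.Delay(1000);
+    trainingStatus = await faceClient.PersonGroup.GetTrainingStatusAsync(personGroupId);
+} while (trainingStatus.Status == TrainingStatusType.Running);
+```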
+
+### Identification
+
+The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
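+
+For example, a hedged sketch with the .NET client library, assuming a detected face ID (`detectedFaceId`) and the trained group from the previous step:
+
+```csharp
+// Identify the detected face against the trained PersonGroup.
+IList<IdentifyResult> identifyResults = await faceClient.Face.IdentifyAsync(
+    new List<Guid> { detectedFaceId }, personGroupId);
+
+foreach (IdentifyResult identifyResult in identifyResults)
+{
+    foreach (IdentifyCandidate candidate in identifyResult.Candidates)
+    {
+        Console.WriteLine(
+            $"Face {identifyResult.FaceId} may belong to person {candidate.PersonId} " +
+            $"(confidence {candidate.Confidence}).");
+    }
+}
+```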
+
+### Verification
+
+The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that same person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. However, you can optionally pass in the PersonGroup to which the candidate Person belongs to improve the API performance.
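+
+A corresponding sketch for verification, again assuming an authenticated `IFaceClient`, a detected face ID, and a candidate person ID from an earlier Identify call:
+
+```csharp
+// Verify that the detected face belongs to the candidate person.
+VerifyResult verifyResult = await faceClient.Face.VerifyFaceToPersonAsync(
+    detectedFaceId, candidatePersonId, personGroupId);
+
+Console.WriteLine(verifyResult.IsIdentical
+    ? $"The faces match (confidence {verifyResult.Confidence})."
+    : $"The faces don't match (confidence {verifyResult.Confidence}).");
+```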
+
+## Input data
+
+Use the following tips to ensure that your input images give the most accurate recognition results:
+
+* The supported input image formats are JPEG, PNG, GIF (the first frame), and BMP.
+* Image file size should be no larger than 6 MB.
+* When you create Person objects, use photos taken from different angles and in different lighting conditions.
+* Some faces might not be recognized because of technical challenges, such as:
+ * Images with extreme lighting, for example, severe backlighting.
+ * Obstructions that block one or both eyes.
+ * Differences in hair type or facial hair.
+ * Changes in facial appearance because of age.
+ * Extreme facial expressions.
+* You can use the `qualityForRecognition` attribute returned by the [face detection](./how-to/identity-detect-faces.md) operation (when you use applicable detection models) as a general guideline of whether the image is likely of sufficient quality to attempt face recognition. Only "high"-quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
+
+## Next steps
+
+Now that you're familiar with face recognition concepts, learn how to write a script that identifies faces against a trained PersonGroup.
+
+* [Face client library quickstart](./quickstarts-sdk/identity-client-library.md)
cognitive-services Enrollment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/enrollment-overview.md
+
+ Title: Best practices for adding users to a Face service
+
+description: Learn about the process of Face enrollment to register users in a face recognition service.
+ Last updated : 09/27/2021
+# Best practices for adding users to a Face service
+
+In order to use the Cognitive Services Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and provides example logic for creating high-quality enrollments that optimize recognition accuracy.
+
+## Meaningful consent
+
+One of the key purposes of an enrollment application for facial recognition is to give users the opportunity to consent to the use of images of their face for specific purposes, such as access to a worksite. Because facial recognition technologies may be perceived as collecting sensitive personal data, it's especially important to ask for consent in a way that is both transparent and respectful. Consent is meaningful to users when it empowers them to make the decision that they feel is best for them.
+
+Based on Microsoft user research, Microsoft's Responsible AI principles, and [external research](ftp://ftp.cs.washington.edu/tr/2000/12/UW-CSE-00-12-02.pdf), we have found that consent is meaningful when it offers the following to users enrolling in the technology:
+
+* Awareness: Users should have no doubt when they are being asked to provide their face template or enrollment photos.
+* Understanding: Users should be able to accurately describe in their own words what they are being asked for, by whom, to what end, and with what assurances.
+* Freedom of choice: Users should not feel coerced or manipulated when choosing whether to consent and enroll in facial recognition.
+* Control: Users should be able to revoke their consent and delete their data at any time.
+
+This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario.
+
+> [!NOTE]
+> It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices.
+
+## Application development
+
+Before you design an enrollment flow, think about how the application you're building can uphold the promises you make to users about how their data is protected. The following recommendations can help you build an enrollment experience that includes responsible approaches to securing personal data, managing users' privacy, and ensuring that the application is accessible to all users.
+
+|Category | Recommendations |
+|||
+|Hardware | Consider the camera quality of the enrollment device. |
+|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
+|Security | Cognitive Services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Cognitive Services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. |
+|User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. |
+|Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. |
+
+## Next steps
+
+Follow the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide to get started with a sample enrollment app. Then customize it or write your own app to suit the needs of your product.
cognitive-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/add-faces.md
+
+ Title: "Example: Add faces to a PersonGroup - Face"
+
+description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure Cognitive Services Face service.
+ Last updated : 04/10/2019
+ms.devlang: csharp
+++
+# Add faces to a PersonGroup
+
+This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure Cognitive Services Face .NET client library.
+
+## Step 1: Initialization
+
+The following code declares several variables and implements a helper function to schedule the face add requests:
+
+- `PersonCount` is the total number of persons.
+- `CallLimitPerSecond` is the maximum calls per second according to the subscription tier.
+- `_timeStampQueue` is a Queue to record the request timestamps.
+- `await WaitCallLimitPerSecondAsync()` waits until it's valid to send the next request.
+
+```csharp
+const int PersonCount = 10000;
+const int CallLimitPerSecond = 10;
+static Queue<DateTime> _timeStampQueue = new Queue<DateTime>(CallLimitPerSecond);
+// A SemaphoreSlim guards the queue instead of Monitor, because the lock is held across an await.
+static SemaphoreSlim _queueLock = new SemaphoreSlim(1, 1);
+
+static async Task WaitCallLimitPerSecondAsync()
+{
+    await _queueLock.WaitAsync();
+    try
+    {
+        if (_timeStampQueue.Count >= CallLimitPerSecond)
+        {
+            // If the oldest request in the sliding window is less than one second old,
+            // delay until a full second has passed since it was sent.
+            TimeSpan timeInterval = DateTime.UtcNow - _timeStampQueue.Peek();
+            if (timeInterval < TimeSpan.FromSeconds(1))
+            {
+                await Task.Delay(TimeSpan.FromSeconds(1) - timeInterval);
+            }
+            _timeStampQueue.Dequeue();
+        }
+        // Record the timestamp of the request that's about to be sent.
+        _timeStampQueue.Enqueue(DateTime.UtcNow);
+    }
+    finally
+    {
+        _queueLock.Release();
+    }
+}
+```
+
+## Step 2: Authorize the API call
+
+When you use a client library, you must pass your key to the constructor of the **FaceClient** class and set the **Endpoint** property to your resource's endpoint. For example:
+
+```csharp
+private readonly IFaceClient faceClient = new FaceClient(
+    new ApiKeyServiceClientCredentials("<SubscriptionKey>"),
+    new System.Net.Http.DelegatingHandler[] { })
+{
+    // Replace with your resource's endpoint, for example https://<your-resource-name>.cognitiveservices.azure.com
+    Endpoint = "<Endpoint>"
+};
+```
+
+To get the key, go to the Azure Marketplace from the Azure portal. For more information, see [Subscriptions](https://www.microsoft.com/cognitive-services/sign-up).
+
+## Step 3: Create the PersonGroup
+
+A PersonGroup named "MyPersonGroup" is created to hold the persons.
+The request time is enqueued to `_timeStampQueue` so that it counts toward the overall rate-limit tracking.
+
+```csharp
+const string personGroupId = "mypersongroupid";
+const string personGroupName = "MyPersonGroup";
+_timeStampQueue.Enqueue(DateTime.UtcNow);
+await faceClient.PersonGroup.CreateAsync(personGroupId, personGroupName);
+```
+
+## Step 4: Create the persons for the PersonGroup
+
+Persons are created concurrently, and `await WaitCallLimitPerSecondAsync()` is also applied to avoid exceeding the call limit.
+
+```csharp
+Person[] persons = new Person[PersonCount];
+
+// Use Task.WhenAll instead of Parallel.For with an async lambda, so that every
+// person-creation call is awaited before the next step runs.
+await Task.WhenAll(Enumerable.Range(0, PersonCount).Select(async i =>
+{
+    await WaitCallLimitPerSecondAsync();
+
+    string personName = $"PersonName#{i}";
+    persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
+}));
+```
+
+## Step 5: Add faces to the persons
+
+Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially.
+Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation.
+
+```csharp
+// Add faces for different persons concurrently; faces for one person are added sequentially.
+await Task.WhenAll(Enumerable.Range(0, PersonCount).Select(async i =>
+{
+    Guid personId = persons[i].PersonId;
+    string personImageDir = @"/path/to/person/i/images";
+
+    foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
+    {
+        await WaitCallLimitPerSecondAsync();
+
+        using (Stream stream = File.OpenRead(imagePath))
+        {
+            await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
+        }
+    }
+}));
+```
+
+## Summary
+
+In this guide, you learned the process of creating a PersonGroup with a massive number of persons and faces. Several reminders:
+
+- This strategy also applies to FaceLists and LargePersonGroups.
+- Faces added to or deleted from different FaceLists, or from different persons in a LargePersonGroup, can be processed concurrently.
+- Faces added to or deleted from one specific FaceList or person in a LargePersonGroup are processed sequentially.
+- For simplicity, exception handling is omitted in this guide. To make the sample more robust, apply an appropriate retry policy, as shown in the sketch below.
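+
+The following is a minimal retry sketch, not the official guidance: it assumes the .NET client library's `APIErrorException` and retries on throttling (HTTP 429) and server errors with exponential back-off. Adjust the status codes, attempt count, and delays to your own policy.
+
+```csharp
+static async Task<T> RunWithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
+{
+    for (int attempt = 1; ; attempt++)
+    {
+        try
+        {
+            return await operation();
+        }
+        catch (APIErrorException ex) when (attempt < maxAttempts &&
+            ((int)ex.Response.StatusCode == 429 || (int)ex.Response.StatusCode >= 500))
+        {
+            // Exponential back-off: 1 s, 2 s, 4 s, ...
+            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
+        }
+    }
+}
+
+// Example usage for a face-add call:
+// await RunWithRetryAsync(() =>
+//     faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream));
+```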
+
+The following features were explained and demonstrated:
+
+- Create PersonGroups by using the [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) API.
+- Create persons by using the [PersonGroup Person - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) API.
+- Add faces to persons by using the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API.
+
+## Next steps
+
+In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
+
+- [Use the PersonDirectory structure](use-persondirectory.md)
cognitive-services Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/analyze-video.md
+
+ Title: Analyze videos in near real time - Computer Vision
+
+description: Learn how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API.
+ Last updated : 09/09/2019
+ms.devlang: csharp
+++
+# Analyze videos in near real time
+
+This article demonstrates how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API. The basic elements of such an analysis are:
+
+- Acquiring frames from a video source.
+- Selecting which frames to analyze.
+- Submitting these frames to the API.
+- Consuming each analysis result that's returned from the API call.
+
+The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+
+## Approaches to running near real-time analysis
+
+You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
+
+### Design an infinite loop
+
+The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, you grab a frame, analyze it, and then consume the result:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+ }
+}
+```
+
+If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
+
+### Allow the API calls to run in parallel
+
+Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ var t = Task.Run(async () =>
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+        });
+ }
+}
+```
+
+With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
+* It sacrifices some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might be returned in the wrong order.
+* It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe.
+* Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
+
+### Design a producer-consumer system
+
+For your final approach, designing a "producer-consumer" system, you build a producer thread that looks similar to your previously mentioned infinite loop. However, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
+
+```csharp
+// Queue that will contain the API call tasks.
+var taskQueue = new BlockingCollection<Task<ResultWrapper>>();
+
+// Producer thread.
+while (true)
+{
+ // Grab a frame.
+ Frame f = GrabFrame();
+
+ // Decide whether to analyze the frame.
+ if (ShouldAnalyze(f))
+ {
+ // Start a task that will run in parallel with this thread.
+ var analysisTask = Task.Run(async () =>
+ {
+ // Put the frame, and the result/exception into a wrapper object.
+ var output = new ResultWrapper(f);
+ try
+ {
+ output.Analysis = await Analyze(f);
+ }
+ catch (Exception e)
+ {
+ output.Exception = e;
+ }
+ return output;
+        });
+
+ // Push the task onto the queue.
+ taskQueue.Add(analysisTask);
+ }
+}
+```
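+
+The `ResultWrapper` type referenced above isn't defined in this article. A minimal sketch of such a wrapper, which may differ from the class in the sample library, could look like this:
+
+```csharp
+// Pairs a frame with either its analysis result or the exception that occurred.
+class ResultWrapper
+{
+    public ResultWrapper(Frame frame)
+    {
+        Frame = frame;
+    }
+
+    public Frame Frame { get; }
+    public AnalysisResult Analysis { get; set; }
+    public Exception Exception { get; set; }
+}
+```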
+
+You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
+
+```csharp
+// Consumer thread.
+while (true)
+{
+ // Get the oldest task.
+ Task<ResultWrapper> analysisTask = taskQueue.Take();
+
+ // Wait until the task is completed.
+ var output = await analysisTask;
+
+ // Consume the exception or result.
+ if (output.Exception != null)
+ {
+ throw output.Exception;
+ }
+ else
+ {
+ ConsumeResult(output.Analysis);
+ }
+}
+```
+
+## Implement the solution
+
+### Get started quickly
+
+To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+
+The library contains the `FrameGrabber` class, which implements the previously discussed producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
+
+To illustrate some of the possibilities, we've provided two sample apps that use the library.
+
+The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is reproduced in the following code:
+
+```csharp
+using System;
+using System.Linq;
+using System.Threading.Tasks;
+using Microsoft.Azure.CognitiveServices.Vision.Face;
+using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
+using VideoFrameAnalyzer;
+
+namespace BasicConsoleSample
+{
+ internal class Program
+ {
+ const string ApiKey = "<your API key>";
+ const string Endpoint = "https://<your API region>.api.cognitive.microsoft.com";
+
+ private static async Task Main(string[] args)
+ {
+ // Create grabber.
+ FrameGrabber<DetectedFace[]> grabber = new FrameGrabber<DetectedFace[]>();
+
+ // Create Face Client.
+ FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(ApiKey))
+ {
+ Endpoint = Endpoint
+ };
+
+ // Set up a listener for when we acquire a new frame.
+ grabber.NewFrameProvided += (s, e) =>
+ {
+ Console.WriteLine($"New frame acquired at {e.Frame.Metadata.Timestamp}");
+ };
+
+ // Set up a Face API call.
+ grabber.AnalysisFunction = async frame =>
+ {
+ Console.WriteLine($"Submitting frame acquired at {frame.Metadata.Timestamp}");
+ // Encode image and submit to Face service.
+ return (await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"))).ToArray();
+ };
+
+ // Set up a listener for when we receive a new result from an API call.
+ grabber.NewResultAvailable += (s, e) =>
+ {
+ if (e.TimedOut)
+ Console.WriteLine("API call timed out.");
+ else if (e.Exception != null)
+ Console.WriteLine("API call threw an exception.");
+ else
+ Console.WriteLine($"New result received for frame acquired at {e.Frame.Metadata.Timestamp}. {e.Analysis.Length} faces detected");
+ };
+
+ // Tell grabber when to call the API.
+ // See also TriggerAnalysisOnPredicate
+ grabber.TriggerAnalysisOnInterval(TimeSpan.FromMilliseconds(3000));
+
+ // Start running in the background.
+ await grabber.StartProcessingCameraAsync();
+
+ // Wait for key press to stop.
+ Console.WriteLine("Press any key to stop...");
+ Console.ReadKey();
+
+ // Stop, blocking until done.
+ await grabber.StopProcessingAsync();
+ }
+ }
+}
+```
+
+The second sample app is a bit more interesting. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
+
+In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure Cognitive Services.
+
+By using this approach, you can visualize the detected face immediately. You can then update the emotions later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Cognitive Services APIs can be used to augment this processing with more advanced analysis when necessary.
+
+![The LiveCameraSample app displaying an image with tags](../../Video/Images/FramebyFrame.jpg)
+
+### Integrate the samples into your codebase
+
+To get started with this sample, do the following:
+
+1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
+2. Create resources for Computer Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
+ - [Computer Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
+ - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
+ After the resources are deployed, click **Go to resource** to collect your key and endpoint for each resource.
+3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
+4. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
+ - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
+ - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The keys are persisted across sessions as user data.
+
+When you're ready to integrate the samples, reference the VideoFrameAnalyzer library from your own projects.
+
+The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure Cognitive Services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure Cognitive Services.
+
+## Summary
+
+In this article, you learned how to run near real-time analysis on live video streams by using the Face and Computer Vision services. You also learned how you can use our sample code to get started.
+
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
+
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
+
+ Title: Call the Image Analysis API
+
+description: Learn how to call the Image Analysis API and configure its behavior.
+ Last updated : 04/11/2022
+# Call the Image Analysis API
+
+This article demonstrates how to call the Image Analysis API to return information about an image's visual features. It also shows you how to parse the returned information using the client SDKs or REST API.
+
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
+
+## Submit data to the service
+
+The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
+
+#### [REST](#tab/rest)
+
+When analyzing a local image, you put the binary image data in the HTTP request body. For a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
+
+#### [C#](#tab/csharp)
+
+In your main class, save a reference to the URL of the image you want to analyze.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze_url)]
+
+#### [Java](#tab/java)
+
+In your main class, save a reference to the URL of the image you want to analyze.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_urlimage)]
+
+#### [JavaScript](#tab/javascript)
+
+In your main function, save a reference to the URL of the image you want to analyze.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_describe_image)]
+
+#### [Python](#tab/python)
+
+Save a reference to the URL of the image you want to analyze.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_remoteimage)]
++++
+## Determine how to process the data
+
+### Select visual features
+
+The Analyze API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The examples below add all of the available visual features, but for practical usage you'll likely only need one or two.
+
+#### [REST](#tab/rest)
+
+You can specify which features you want to use by setting the URL query parameters of the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need.
+
+|URL parameter | Value | Description|
+|||--|
+|`visualFeatures`|`Adult` | detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content ("racy" content) is also detected.|
+|`visualFeatures`|`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
+|`visualFeatures`|`Categories` | categorizes image content according to a taxonomy defined in documentation. This value is the default value of `visualFeatures`.|
+|`visualFeatures`|`Color` | determines the accent color, dominant color, and whether an image is black&white.|
+|`visualFeatures`|`Description` | describes the image content with a complete sentence in supported languages.|
+|`visualFeatures`|`Faces` | detects if faces are present. If present, generates coordinates, gender, and age.|
+|`visualFeatures`|`ImageType` | detects if image is clip art or a line drawing.|
+|`visualFeatures`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`visualFeatures`|`Tags` | tags the image with a detailed list of words related to the image content.|
+|`details`| `Celebrities` | identifies celebrities if detected in the image.|
+|`details`|`Landmarks` |identifies landmarks if detected in the image.|
+
+A populated URL might look like this:
+
+`https://{endpoint}/vision/v3.2/analyze?visualFeatures=Description,Tags&details=Celebrities`
+
+#### [C#](#tab/csharp)
+
+Define your new method for image analysis. Add the code below, which specifies visual features you'd like to extract in your analysis. See the **[VisualFeatureTypes](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes)** enum for a complete list.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_visualfeatures)]
++
+#### [Java](#tab/java)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes) enum for a complete list.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_features_remote)]
+
+#### [JavaScript](#tab/javascript)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/javascript/api/@azure/cognitiveservices-computervision/visualfeaturetypes) enum for a complete list.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_features_remote)]
+
+#### [Python](#tab/python)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.models.visualfeaturetypes) enum for a complete list.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_features_remote)]
+++++
+### Specify languages
+
+You can also specify the language of the returned data.
+
+#### [REST](#tab/rest)
+
+The following URL query parameter specifies the language. The default value is `en`.
+
+|URL parameter | Value | Description|
+|||--|
+|`language`|`en` | English|
+|`language`|`es` | Spanish|
+|`language`|`ja` | Japanese|
+|`language`|`pt` | Portuguese|
+|`language`|`zh` | Simplified Chinese|
+
+A populated URL might look like this:
+
+`https://{endpoint}/vision/v3.2/analyze?visualFeatures=Description,Tags&details=Celebrities&language=en`
+
+#### [C#](#tab/csharp)
+
+Use the *language* parameter of [AnalyzeImageAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclientextensions.analyzeimageasync#microsoft-azure-cognitiveservices-vision-computervision-computervisionclientextensions-analyzeimageasync(microsoft-azure-cognitiveservices-vision-computervision-icomputervisionclient-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-visualfeaturetypes))))-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-details))))-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-descriptionexclude))))-system-string-system-threading-cancellationtoken)) call to specify a language. A method call that specifies a language might look like the following.
+
+```csharp
+ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features, language: "en");
+```
+
+#### [Java](#tab/java)
+
+Use the [AnalyzeImageOptionalParameter](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.analyzeimageoptionalparameter) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
++
+```java
+ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
+ .withVisualFeatures(featuresToExtractFromLocalImage)
+ .language("en")
+ .execute();
+```
+
+#### [JavaScript](#tab/javascript)
+
+Use the **language** property of the [ComputerVisionClientAnalyzeImageOptionalParams](/javascript/api/@azure/cognitiveservices-computervision/computervisionclientanalyzeimageoptionalparams) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
+
+```javascript
+const result = (await computerVisionClient.analyzeImage(imageURL,{visualFeatures: features, language: 'en'}));
+```
+
+#### [Python](#tab/python)
+
+Use the *language* parameter of your [analyze_image](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin#azure-cognitiveservices-vision-computervision-operations-computervisionclientoperationsmixin-analyze-image) call to specify a language. A method call that specifies a language might look like the following.
+
+```python
+results_remote = computervision_client.analyze_image(remote_image_url , remote_image_features, remote_image_details, 'en')
+```
++++
+## Get results from the service
+
+This section shows you how to parse the results of the API call. It includes the API call itself.
+
+> [!NOTE]
+> **Scoped API calls**
+>
+> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://{endpoint}/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
+
+#### [REST](#tab/rest)
+
+The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
+
+```json
+{
+ "tags":[
+ {
+ "name":"outdoor",
+ "score":0.976
+ },
+ {
+ "name":"bird",
+ "score":0.95
+ }
+ ],
+ "description":{
+ "tags":[
+ "outdoor",
+ "bird"
+ ],
+ "captions":[
+ {
+ "text":"partridge in a pear tree",
+ "confidence":0.96
+ }
+ ]
+ }
+}
+```
+
+See the following table for explanations of the fields in this example:
+
+Field | Type | Content
+|||
+Tags | `object` | The top-level object for an array of tags.
+tags[].Name | `string` | The keyword from the tags classifier.
+tags[].Score | `number` | The confidence score, between 0 and 1.
+description | `object` | The top-level object for an image description.
+description.tags[] | `string` | The list of tags. If there is insufficient confidence in the ability to produce a caption, the tags might be the only information available to the caller.
+description.captions[].text | `string` | A phrase describing the image.
+description.captions[].confidence | `number` | The confidence score for the phrase.
+
+### Error codes
+
+See the following list of possible errors and their causes:
+
+* 400
+ * `InvalidImageUrl` - Image URL is badly formatted or not accessible.
+ * `InvalidImageFormat` - Input data is not a valid image.
+ * `InvalidImageSize` - Input image is too large.
+ * `NotSupportedVisualFeature` - Specified feature type isn't valid.
+ * `NotSupportedImage` - Unsupported image, for example child pornography.
+ * `InvalidDetails` - Unsupported `detail` parameter value.
+ * `NotSupportedLanguage` - The requested operation isn't supported in the language specified.
+ * `BadArgument` - More details are provided in the error message.
+* 415 - Unsupported media type error. The Content-Type isn't in the allowed types:
+ * For an image URL, Content-Type should be `application/json`
+  * For binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data`
+* 500
+ * `FailedToProcess`
+ * `Timeout` - Image processing timed out.
+ * `InternalServerError`
++
+#### [C#](#tab/csharp)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze)]
+
+#### [Java](#tab/java)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_analyze)]
+
+#### [JavaScript](#tab/javascript)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_analyze)]
+
+#### [Python](#tab/python)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_analyze)]
++++
+> [!TIP]
+> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker).
++
+## Next steps
+
+* Explore the [concept articles](../concept-object-detection.md) to learn more about each feature.
+* See the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn more about the API functionality.
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
+
+ Title: How to call the Read API
+
+description: Learn how to call the Read API and configure its behavior in detail.
+ Last updated : 02/05/2022
+# Call the Read API
+
+In this guide, you'll learn how to call the Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
+
+## Determine how to process the data (optional)
+
+### Specify the OCR model
+
+By default, the service will use the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
+
+When using the Read operation, use the following values for the optional `model-version` parameter.
+
+|Value| Model used |
+|:--|:-|
+| Not provided | Latest GA model |
+| latest | Latest GA model|
+| [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text, along with several quality and performance enhancements |
+| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic, and related languages. For handwritten text, adds support for Japanese and Korean. |
+| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
+| 2021-04-12 | 2021 GA model |
+
+### Input language
+
+By default, the service extracts all text from your images or documents including mixed languages. The [Read operation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) has an optional request parameter for language. Only provide a language code if you want to force the document to be processed as that specific language. Otherwise, the service may return incomplete and incorrect text.
+
+### Natural reading order output (Latin languages only)
+
+By default, the service outputs the text lines in left-to-right order. Optionally, with the `readingOrder` request parameter, use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
++
+### Select page(s) or page ranges for text extraction
+
+By default, the service extracts text from all pages in the documents. Optionally, use the `pages` request parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
++
+## Submit data to the service
+
+You submit either a local image or a remote image to the Read API. For local, you put the binary image data in the HTTP request body. For remote, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
+
+The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
+
+`https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
+
+The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.
+
+|Response header| Example value |
+|:--|:-|
+|Operation-Location | `https://cognitiveservice/vision/v3.2/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
+
+> [!NOTE]
+> **Billing**
+>
+> The [Computer Vision pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) page includes the pricing tier for Read. Each analyzed image or page is one transaction. If you call the operation with a PDF or TIFF document containing 100 pages, the Read operation will count it as 100 transactions and you will be billed for 100 transactions. If you made 50 calls to the operation and each call submitted a document with 100 pages, you will be billed for 50 X 100 = 5000 transactions.
++
+## Get results from the service
+
+The second step is to call [Get Read Results](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
+
+`https://{endpoint}/vision/v3.2/read/analyzeResults/{operationId}`
+
+It returns a JSON response that contains a **status** field with the following possible values.
+
+|Value | Meaning |
+|:--|:-|
+| `notStarted`| The operation has not started. |
+| `running`| The operation is being processed. |
+| `failed`| The operation has failed. |
+| `succeeded`| The operation has succeeded. |
+
+You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate.
+
+> [!NOTE]
+> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS), which can be increased upon request. Note your Azure resource identifier and region, and open an Azure support ticket or contact your account team to request a higher request per second (RPS) rate.
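+
+A minimal sketch of the submit-and-poll pattern using `HttpClient`; the endpoint, key, and image URL are placeholders, and the client SDKs wrap this flow for you:
+
+```csharp
+using System;
+using System.Linq;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+var endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
+var key = "<your-key>";
+
+using var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
+
+// Step 1: submit the image and read the Operation-Location header.
+var body = new StringContent("{\"url\":\"https://example.com/images/test.jpg\"}",
+    Encoding.UTF8, "application/json");
+HttpResponseMessage submit = await client.PostAsync($"{endpoint}/vision/v3.2/read/analyze", body);
+submit.EnsureSuccessStatusCode();
+string operationLocation = submit.Headers.GetValues("Operation-Location").First();
+
+// Step 2: poll the operation until it leaves the notStarted/running states.
+string resultJson;
+do
+{
+    await Task.Delay(1000);   // 1-2 seconds between polls to stay under the rate limit
+    resultJson = await client.GetStringAsync(operationLocation);
+} while (resultJson.Contains("\"notStarted\"") || resultJson.Contains("\"running\""));
+
+Console.WriteLine(resultJson);
+```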
+
+When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
+
+> [!NOTE]
+> The data submitted to the `Read` operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
+
+### Sample JSON output
+
+See the following example of a successful JSON response:
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
+ "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
+ "analyzeResult": {
+ "version": "3.2",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 2.1243,
+ "width": 502,
+ "height": 252,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 58,
+ 42,
+ 314,
+ 59,
+ 311,
+ 123,
+ 56,
+ 121
+ ],
+ "text": "Tabs vs",
+ "appearance": {
+ "style": {
+ "name": "handwriting",
+ "confidence": 0.96
+ }
+ },
+ "words": [
+ {
+ "boundingBox": [
+ 68,
+ 44,
+ 225,
+ 59,
+ 224,
+ 122,
+ 66,
+ 123
+ ],
+ "text": "Tabs",
+ "confidence": 0.933
+ },
+ {
+ "boundingBox": [
+ 241,
+ 61,
+ 314,
+ 72,
+ 314,
+ 123,
+ 239,
+ 122
+ ],
+ "text": "vs",
+ "confidence": 0.977
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Handwritten classification for text lines (Latin languages only)
+
+The response classifies whether each text line is in a handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
++
+## Next steps
+
+- Get started with the [OCR (Read) REST API or client library quickstarts](../quickstarts-sdk/client-library.md).
+- Learn about the [Read 3.2 REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/find-similar-faces.md
+
+ Title: "Find similar faces"
+
+description: Use the Face service to find similar faces (face search by image).
+ Last updated : 05/05/2022
+# Find similar faces
+
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
+
+## Set up sample URL
+
+This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path.
+
+```
+"https://csdx.blob.core.windows.net/resources/Face/media/"
+```
+
+## Detect faces for comparison
+
+You need to detect faces in images before you can compare them. In this guide, the following remote image, called *findsimilar.jpg*, will be used as the source:
+
+![Photo of a man who is smiling.](../media/quickstarts/find-similar.jpg)
+
+#### [C#](#tab/csharp)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)]
+
+The following code uses the above method to get face data from a series of images.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)]
++
+#### [JavaScript](#tab/javascript)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
++
+The following code uses the above method to get face data from a series of images.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your key and endpoint where appropriate. Then run the command to detect one of the target faces.
++
+Find the `"faceId"` value in the JSON response and save it to a temporary location. Then, call the above command again for these other image URLs, and save their face IDs as well. You'll use these IDs as the target group of faces from which to find a similar face.
++
+Finally, detect the single source face that you'll use for matching, and save its ID. Keep this ID separate from the others.
++++
+## Find and print matches
+
+In this guide, the face detected in the *Family1-Dad1.jpg* image should be returned as the face that's similar to the source image face.
+
+![Photo of a man who is smiling; this is the same person as the previous image.](../media/quickstarts/family-1-dad-1.jpg)
+
+#### [C#](#tab/csharp)
+
+The following code calls the Find Similar API on the saved list of faces.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)]
+
+The following code prints the match details to the console:
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)]
+
+#### [JavaScript](#tab/javascript)
+
+The following method takes a set of target faces and a single source face. Then, it compares them and finds all the target faces that are similar to the source face. Finally, it prints the match details to the console.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your key and endpoint where appropriate.
++
+Paste in the following JSON content for the `body` value:
++
+Copy the source face ID value into the `"faceId"` field, and copy the other face IDs, separated by commas, into the `"faceIds"` array.
+
+Run the command, and the returned JSON should show the correct face ID as a similar match.
+++
+## Next steps
+
+In this guide, you learned how to call the Find Similar API to do a face search by similarity in a larger group of faces. Next, learn more about the different recognition models available for face comparison operations.
+
+* [Specify a face recognition model](specify-recognition-model.md)
cognitive-services Identity Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-analyze-video.md
+
+ Title: "Example: Real-time video analysis - Face"
+
+description: Use the Face service to perform near-real-time analysis on frames taken from a live video stream.
+ Last updated : 03/01/2018
+ms.devlang: csharp
+++
+# Example: How to Analyze Videos in Real-time
+
+This guide will demonstrate how to perform near-real-time analysis on frames taken from a live video stream. The basic components in such a system are:
+
+- Acquire frames from a video source
+- Select which frames to analyze
+- Submit these frames to the API
+- Consume each analysis result that is returned from the API call
+
+These samples are written in C# and the code can be found on GitHub here: [https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/).
+
+## The Approach
+
+There are multiple ways to solve the problem of running near-real-time analysis on video streams. We will start by outlining three approaches in increasing levels of sophistication.
+
+### A Simple Approach
+
+The simplest design for a near-real-time analysis system is an infinite loop, where each iteration grabs a frame, analyzes it, and then consumes the result:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+ }
+}
+```
+
+If our analysis consisted of a lightweight client-side algorithm, this approach would be suitable. However, when analysis happens in the cloud, the latency involved means that an API call might take several seconds. During this time, we are not capturing images, and our thread is essentially doing nothing. Our maximum frame-rate is limited by the latency of the API calls.
+
+### Parallelizing API Calls
+
+While a simple single-threaded loop makes sense for a lightweight client-side algorithm, it doesn't fit well with the latency involved in cloud API calls. The solution to this problem is to allow the long-running API calls to execute in parallel with the frame-grabbing. In C#, we could achieve this using Task-based parallelism, for example:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ var t = Task.Run(async () =>
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+        });
+ }
+}
+```
+
+This code launches each analysis in a separate Task, which can run in the background while we continue grabbing new frames. With this method we avoid blocking the main thread while waiting for an API call to return, but we lose some of the guarantees that the simple version provided. Multiple API calls might occur in parallel, and the results might get returned in the wrong order. This could also cause multiple threads to enter the ConsumeResult() function simultaneously, which could be dangerous if the function is not thread-safe. Finally, this simple code does not keep track of the Tasks that get created, so exceptions silently disappear. Therefore, the final step is to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order.
+
+### A Producer-Consumer Design
+
+In our final "producer-consumer" system, we have a producer thread that looks similar to our previous infinite loop. However, instead of consuming analysis results as soon as they are available, the producer simply puts the tasks into a queue to keep track of them.
+
+```csharp
+// Queue that will contain the API call tasks.
+var taskQueue = new BlockingCollection<Task<ResultWrapper>>();
+
+// Producer thread.
+while (true)
+{
+ // Grab a frame.
+ Frame f = GrabFrame();
+
+ // Decide whether to analyze the frame.
+ if (ShouldAnalyze(f))
+ {
+ // Start a task that will run in parallel with this thread.
+ var analysisTask = Task.Run(async () =>
+ {
+ // Put the frame, and the result/exception into a wrapper object.
+ var output = new ResultWrapper(f);
+ try
+ {
+ output.Analysis = await Analyze(f);
+ }
+ catch (Exception e)
+ {
+ output.Exception = e;
+ }
+ return output;
+        });
+
+ // Push the task onto the queue.
+ taskQueue.Add(analysisTask);
+ }
+}
+```
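+
+The snippets above assume a small `ResultWrapper` helper that carries a frame together with either its analysis result or the exception the API call threw. A minimal sketch of such a type might look like the following; the `Frame` and `AnalysisResult` types are the same placeholders used in the pseudocode above, and the GitHub sample defines its own version.
+
+```csharp
+class ResultWrapper
+{
+    public ResultWrapper(Frame frame)
+    {
+        Frame = frame;
+    }
+
+    // The frame that was submitted for analysis.
+    public Frame Frame { get; }
+
+    // Set when the API call succeeds.
+    public AnalysisResult Analysis { get; set; }
+
+    // Set when the API call throws, so the consumer can rethrow it later.
+    public Exception Exception { get; set; }
+}
+```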
+
+We also have a consumer thread that takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, we can guarantee that results get consumed one at a time, in the correct order, without limiting the maximum frame-rate of the system.
+
+```csharp
+// Consumer thread.
+while (true)
+{
+ // Get the oldest task.
+ Task<ResultWrapper> analysisTask = taskQueue.Take();
+
+ // Await until the task is completed.
+ var output = await analysisTask;
+
+ // Consume the exception or result.
+ if (output.Exception != null)
+ {
+ throw output.Exception;
+ }
+ else
+ {
+ ConsumeResult(output.Analysis);
+ }
+}
+```
+
+## Implementing the Solution
+
+### Getting Started
+
+To get your app up and running as quickly as possible, you will use a flexible implementation of the system described above. To access the code, go to [https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis).
+
+The library contains the class FrameGrabber, which implements the producer-consumer system discussed above to process video frames from a webcam. The user can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired or a new analysis result is available.
+
+To illustrate some of the possibilities, there are two sample apps that use the library. The first is a simple console app, and a simplified version of it is reproduced below. It grabs frames from the default webcam, and submits them to the Face service for face detection.
++
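+
+As a rough, illustrative sketch of the console sample's shape, the code looks something like the following. The member names (`AnalysisFunction`, `NewResultAvailable`, `TriggerAnalysisOnInterval`, `StartProcessingCameraAsync`, `StopProcessingAsync`) are assumed here from the sample library's design; check [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs) for the exact code.
+
+```csharp
+// Illustrative sketch only; see BasicConsoleSample/Program.cs in the repository for the real code.
+var faceClient = new FaceClient(new ApiKeyServiceClientCredentials("<your Face key>"))
+{
+    Endpoint = "<your Face endpoint>"
+};
+
+// The grabber implements the producer-consumer pattern described above.
+var grabber = new FrameGrabber<DetectedFace[]>();
+
+// Tell the grabber which API call to make for each analyzed frame.
+grabber.AnalysisFunction = async frame =>
+{
+    var faces = await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"));
+    return faces.ToArray();
+};
+
+// Consume each result as it becomes available.
+grabber.NewResultAvailable += (s, e) =>
+    Console.WriteLine($"Detected {e.Analysis?.Length ?? 0} face(s).");
+
+// Analyze roughly one frame every three seconds, then start the default camera.
+grabber.TriggerAnalysisOnInterval(TimeSpan.FromSeconds(3));
+await grabber.StartProcessingCameraAsync();
+
+Console.ReadKey();
+await grabber.StopProcessingAsync();
+```
+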
+The second sample app is more interesting: it lets you choose which API to call on the video frames. On the left side, the app shows a preview of the live video; on the right side, it shows the most recent API result overlaid on the corresponding frame.
+
+In most modes, there will be a visible delay between the live video on the left, and the visualized analysis on the right. This delay is the time taken to make the API call. One exception is the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer using OpenCV, before submitting any images to Cognitive Services. This way, we can visualize the detected face immediately and then update the emotions once the API call returns. This is an example of a "hybrid" approach, where the client can perform some simple processing, and Cognitive Services APIs can augment this with more advanced analysis when necessary.
+
+![HowToAnalyzeVideo](../../Video/Images/FramebyFrame.jpg)
+
+### Integrating into your codebase
+
+To get started with this sample, follow these steps:
+
+1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
+2. Create resources for Computer Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
+ - [Computer Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
+ - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
+ After the resources are deployed, click **Go to resource** to collect your key and endpoint for each resource.
+3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
+4. Open the sample in Visual Studio, and build and run the sample applications:
+ - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
+ - For LiveCameraSample, the keys should be entered into the Settings pane of the app. They will be persisted across sessions as user data.
+
+
+When you're ready to integrate, **reference the VideoFrameAnalyzer library from your own projects.**
+
+## Summary
+
+In this guide, you learned how to run near-real-time analysis on live video streams using the Face, Computer Vision, and Emotion APIs, and how to use our sample code to get started.
+
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) or, for broader API feedback, on our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
+
+## Related Topics
+- [Call the detect API](identity-detect-faces.md)
cognitive-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-detect-faces.md
+
+ Title: "Call the Detect API - Face"
+
+description: This guide demonstrates how to use face detection to extract attributes like age, emotion, or head pose from a given image.
+++++++ Last updated : 08/04/2021+
+ms.devlang: csharp
+++
+# Call the Detect API
+
+This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
+
+The code snippets in this guide are written in C# by using the Azure Cognitive Services Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
++
+## Setup
+
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, with a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
+
+## Submit data to the service
+
+To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
++
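+
+For example, a minimal detection call with the .NET client library might look like the following sketch. The image URL is a placeholder, and `detection_03` is just one of the available detection models.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+
+// Detect faces in the image and return their IDs.
+IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl, returnFaceId: true, detectionModel: "detection_03");
+```
+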
+You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for their unique IDs and a rectangle that gives the pixel coordinates of the face. This way, you can tell which face ID maps to which face in the original image.
++
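+
+As an illustrative sketch, you might print each face ID together with its bounding rectangle:
+
+```csharp
+foreach (DetectedFace face in detectedFaces)
+{
+    FaceRectangle rect = face.FaceRectangle;
+    Console.WriteLine(
+        $"Face {face.FaceId} at ({rect.Left}, {rect.Top}), size {rect.Width}x{rect.Height}.");
+}
+```
+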
+For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of the head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
+
+## Determine how to process the data
+
+This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query only for the features you need, because each additional feature takes more time to process.
+
+### Get face landmarks
+
+[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
++
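+
+A sketch of that call, reusing the placeholder image URL from the previous example, might look like this:
+
+```csharp
+// Request landmark data along with the face rectangle.
+var facesWithLandmarks = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl,
+    returnFaceId: true,
+    returnFaceLandmarks: true,
+    detectionModel: DetectionModel.Detection01);
+```
+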
+### Get face attributes
+
+Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section.
+
+To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
+++
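+
+For example, the following sketch requests the age, emotion, and head pose attributes:
+
+```csharp
+// Choose only the attributes you need; each extra attribute adds processing time.
+var requestedAttributes = new List<FaceAttributeType>
+{
+    FaceAttributeType.Age,
+    FaceAttributeType.Emotion,
+    FaceAttributeType.HeadPose
+};
+
+var facesWithAttributes = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl,
+    returnFaceId: true,
+    returnFaceAttributes: requestedAttributes,
+    detectionModel: DetectionModel.Detection01);
+```
+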
+## Get results from the service
+
+### Face landmark results
+
+The following code demonstrates how you might retrieve the locations of the nose and pupils:
++
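+
+As a rough sketch of that lookup, using the `facesWithLandmarks` result from the earlier detection call:
+
+```csharp
+foreach (var face in facesWithLandmarks)
+{
+    FaceLandmarks landmarks = face.FaceLandmarks;
+    Console.WriteLine($"Nose tip: ({landmarks.NoseTip.X}, {landmarks.NoseTip.Y})");
+    Console.WriteLine($"Left pupil: ({landmarks.PupilLeft.X}, {landmarks.PupilLeft.Y})");
+    Console.WriteLine($"Right pupil: ({landmarks.PupilRight.X}, {landmarks.PupilRight.Y})");
+}
+```
+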
+You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:
++
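+
+A simplified version of that calculation might look like the following sketch. It takes the first detected face and averages the pupil and mouth-corner landmarks:
+
+```csharp
+FaceLandmarks marks = facesWithLandmarks[0].FaceLandmarks;
+
+// Center point between the two pupils.
+double eyesCenterX = (marks.PupilLeft.X + marks.PupilRight.X) / 2;
+double eyesCenterY = (marks.PupilLeft.Y + marks.PupilRight.Y) / 2;
+
+// Center of the mouth, approximated from its left and right corners.
+double mouthCenterX = (marks.MouthLeft.X + marks.MouthRight.X) / 2;
+double mouthCenterY = (marks.MouthLeft.Y + marks.MouthRight.Y) / 2;
+
+// Direction vector from the mouth center to the eye center.
+double directionX = eyesCenterX - mouthCenterX;
+double directionY = eyesCenterY - mouthCenterY;
+Console.WriteLine($"Face direction vector: ({directionX}, {directionY})");
+```
+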
+When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
++
+### Face attribute results
+
+The following code shows how you might retrieve the face attribute data that you requested in the original call.
++
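+
+A sketch of reading back a few of those attributes, using the `facesWithAttributes` result from the earlier example, might look like this:
+
+```csharp
+foreach (var face in facesWithAttributes)
+{
+    FaceAttributes attributes = face.FaceAttributes;
+    Console.WriteLine($"Age: {attributes.Age}");
+    Console.WriteLine($"Happiness: {attributes.Emotion.Happiness}");
+    Console.WriteLine(
+        $"Head pose (pitch/roll/yaw): {attributes.HeadPose.Pitch}/{attributes.HeadPose.Roll}/{attributes.HeadPose.Yaw}");
+}
+```
+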
+To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide.
+
+## Next steps
+
+In this guide, you learned how to use the various functionalities of face detection and analysis. Next, integrate these features into an app to add face data from users.
+
+- [Tutorial: Add users to a Face service](../enrollment-overview.md)
+
+## Related articles
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-face-data.md
+
+ Title: "Migrate your face data across subscriptions - Face"
+
+description: This guide shows you how to migrate your stored face data from one Face subscription to another.
+++++++ Last updated : 02/22/2021+
+ms.devlang: csharp
+++
+# Migrate your face data to a different Face subscription
+
+This guide shows you how to move face data, such as a saved PersonGroup object with faces, to a different Azure Cognitive Services Face subscription. To move the data, you use the Snapshot feature. This way you avoid having to repeatedly build and train a PersonGroup or FaceList object when you move or expand your operations. For example, perhaps you created a PersonGroup object with a free subscription and now want to migrate it to your paid subscription. Or you might need to sync face data across subscriptions in different regions for a large enterprise operation.
+
+This same migration strategy also applies to LargePersonGroup and LargeFaceList objects. If you aren't familiar with the concepts in this guide, see their definitions in the [Face recognition concepts](../concept-face-recognition.md) guide. This guide uses the Face .NET client library with C#.
+
+> [!WARNING]
+> The Snapshot feature might move your data outside the geographic region you originally selected. Data might move to West US, West Europe, and Southeast Asia regions.
+
+## Prerequisites
+
+You need the following items:
+
+- Two Face keys: one for the subscription that holds the existing data, and one for the subscription you'll migrate to. To subscribe to the Face service and get your key, follow the instructions in [Create a Cognitive Services account](../../cognitive-services-apis-create-account.md).
+- The Face subscription ID string that corresponds to the target subscription. To find it, select **Overview** in the Azure portal.
+- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
+
+## Create the Visual Studio project
+
+This guide uses a simple console app to run the face data migration. For a full implementation, see the [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) on GitHub.
+
+1. In Visual Studio, create a new Console app .NET Framework project. Name it **FaceApiSnapshotSample**.
+1. Get the required NuGet packages. Right-click your project in the Solution Explorer, and select **Manage NuGet Packages**. Select the **Browse** tab, and select **Include prerelease**. Find and install the following package:
+ - [Microsoft.Azure.CognitiveServices.Vision.Face 2.3.0-preview](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/2.2.0-preview)
+
+## Create face clients
+
+In the **Main** method in *Program.cs*, create two [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) instances for your source and target subscriptions. This example uses a Face subscription in the East Asia region as the source and a West US subscription as the target. This example demonstrates how to migrate data from one Azure region to another.
++
+```csharp
+var FaceClientEastAsia = new FaceClient(new ApiKeyServiceClientCredentials("<East Asia Key>"))
+ {
+        Endpoint = "https://southeastasia.api.cognitive.microsoft.com/"
+ };
+
+var FaceClientWestUS = new FaceClient(new ApiKeyServiceClientCredentials("<West US Key>"))
+ {
+ Endpoint = "https://westus.api.cognitive.microsoft.com/"
+ };
+```
+
+Fill in the key values and endpoint URLs for your source and target subscriptions.
++
+## Prepare a PersonGroup for migration
+
+You need the ID of the PersonGroup in your source subscription to migrate it to the target subscription. Use the [PersonGroupOperationsExtensions.ListAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperationsextensions.listasync) method to retrieve a list of your PersonGroup objects. Then get the [PersonGroup.PersonGroupId](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.persongroup.persongroupid#Microsoft_Azure_CognitiveServices_Vision_Face_Models_PersonGroup_PersonGroupId) property. This process looks different based on what PersonGroup objects you have. In this guide, the source PersonGroup ID is stored in `personGroupId`.
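+
+For example, if the group you want to migrate is the only one in the source subscription, a sketch of that lookup might be:
+
+```csharp
+// List the PersonGroups in the source subscription and pick the one to migrate.
+var personGroups = await FaceClientEastAsia.PersonGroup.ListAsync();
+var personGroupId = personGroups.First().PersonGroupId;
+```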
+
+> [!NOTE]
+> The [sample code](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) creates and trains a new PersonGroup to migrate. In most cases, you should already have a PersonGroup to use.
+
+## Take a snapshot of a PersonGroup
+
+A snapshot is temporary remote storage for certain Face data types. It functions as a kind of clipboard to copy data from one subscription to another. First, you take a snapshot of the data in the source subscription. Then you apply it to a new data object in the target subscription.
+
+Use the source subscription's FaceClient instance to take a snapshot of the PersonGroup. Use [TakeAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperationsextensions.takeasync) with the PersonGroup ID and the target subscription's ID. If you have multiple target subscriptions, add them as array entries in the third parameter.
+
+```csharp
+var takeSnapshotResult = await FaceClientEastAsia.Snapshot.TakeAsync(
+ SnapshotObjectType.PersonGroup,
+ personGroupId,
+ new[] { "<Azure West US Subscription ID>" /* Put other IDs here, if multiple target subscriptions wanted */ });
+```
+
+> [!NOTE]
+> The process of taking and applying snapshots doesn't disrupt any regular calls to the source or target PersonGroups or FaceLists. Don't make simultaneous calls that change the source object, such as [FaceList management calls](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.facelistoperations) or the [PersonGroup Train](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperations) call, for example. The snapshot operation might run before or after those operations or might encounter errors.
+
+## Retrieve the snapshot ID
+
+The method used to take snapshots is asynchronous, so you must wait for its completion. Snapshot operations can't be canceled. In this code, the `WaitForOperation` method monitors the asynchronous call. It checks the status every 100 ms. After the operation finishes, retrieve an operation ID by parsing the `OperationLocation` field.
+
+```csharp
+var takeOperationId = Guid.Parse(takeSnapshotResult.OperationLocation.Split('/')[2]);
+var operationStatus = await WaitForOperation(FaceClientEastAsia, takeOperationId);
+```
+
+A typical `OperationLocation` value looks like this:
+
+```csharp
+"/operations/a63a3bdd-a1db-4d05-87b8-dbad6850062a"
+```
+
+The `WaitForOperation` helper method is here:
+
+```csharp
+/// <summary>
+/// Waits for the take/apply operation to complete and returns the final operation status.
+/// </summary>
+/// <returns>The final operation status.</returns>
+private static async Task<OperationStatus> WaitForOperation(IFaceClient client, Guid operationId)
+{
+ OperationStatus operationStatus = null;
+ do
+ {
+ if (operationStatus != null)
+ {
+ Thread.Sleep(TimeSpan.FromMilliseconds(100));
+ }
+
+ // Get the status of the operation.
+ operationStatus = await client.Snapshot.GetOperationStatusAsync(operationId);
+
+ Console.WriteLine($"Operation Status: {operationStatus.Status}");
+ }
+ while (operationStatus.Status != OperationStatusType.Succeeded
+ && operationStatus.Status != OperationStatusType.Failed);
+
+ return operationStatus;
+}
+```
+
+After the operation status shows `Succeeded`, get the snapshot ID by parsing the `ResourceLocation` field of the returned OperationStatus instance.
+
+```csharp
+var snapshotId = Guid.Parse(operationStatus.ResourceLocation.Split('/')[2]);
+```
+
+A typical `resourceLocation` value looks like this:
+
+```csharp
+"/snapshots/e58b3f08-1e8b-4165-81df-aa9858f233dc"
+```
+
+## Apply a snapshot to a target subscription
+
+Next, create the new PersonGroup in the target subscription by using a randomly generated ID. Then use the target subscription's FaceClient instance to apply the snapshot to this PersonGroup. Pass in the snapshot ID and the new PersonGroup ID.
+
+```csharp
+var newPersonGroupId = Guid.NewGuid().ToString();
+var applySnapshotResult = await FaceClientWestUS.Snapshot.ApplyAsync(snapshotId, newPersonGroupId);
+```
++
+> [!NOTE]
+> A Snapshot object is valid for only 48 hours. Only take a snapshot if you intend to use it for data migration soon after.
+
+A snapshot apply request returns another operation ID. To get this ID, parse the `OperationLocation` field of the returned applySnapshotResult instance.
+
+```csharp
+var applyOperationId = Guid.Parse(applySnapshotResult.OperationLocation.Split('/')[2]);
+```
+
+The snapshot application process is also asynchronous, so again use `WaitForOperation` to wait for it to finish.
+
+```csharp
+operationStatus = await WaitForOperation(FaceClientWestUS, applyOperationId);
+```
+
+## Test the data migration
+
+After you apply the snapshot, the new PersonGroup in the target subscription populates with the original face data. By default, training results are also copied. The new PersonGroup is ready for face identification calls without needing retraining.
+
+To test the data migration, run the following operations and compare the results they print to the console:
+
+```csharp
+await DisplayPersonGroup(FaceClientEastAsia, personGroupId);
+await IdentifyInPersonGroup(FaceClientEastAsia, personGroupId);
+
+await DisplayPersonGroup(FaceClientWestUS, newPersonGroupId);
+// No need to retrain the PersonGroup before identification,
+// training results are copied by snapshot as well.
+await IdentifyInPersonGroup(FaceClientWestUS, newPersonGroupId);
+```
+
+Use the following helper methods:
+
+```csharp
+private static async Task DisplayPersonGroup(IFaceClient client, string personGroupId)
+{
+ var personGroup = await client.PersonGroup.GetAsync(personGroupId);
+ Console.WriteLine("PersonGroup:");
+ Console.WriteLine(JsonConvert.SerializeObject(personGroup));
+
+ // List persons.
+ var persons = await client.PersonGroupPerson.ListAsync(personGroupId);
+
+ foreach (var person in persons)
+ {
+ Console.WriteLine(JsonConvert.SerializeObject(person));
+ }
+
+ Console.WriteLine();
+}
+```
+
+```csharp
+private static async Task IdentifyInPersonGroup(IFaceClient client, string personGroupId)
+{
+ using (var fileStream = new FileStream("data\\PersonGroup\\Daughter\\Daughter1.jpg", FileMode.Open, FileAccess.Read))
+ {
+ var detectedFaces = await client.Face.DetectWithStreamAsync(fileStream);
+
+ var result = await client.Face.IdentifyAsync(detectedFaces.Select(face => face.FaceId.Value).ToList(), personGroupId);
+ Console.WriteLine("Test identify against PersonGroup");
+ Console.WriteLine(JsonConvert.SerializeObject(result));
+ Console.WriteLine();
+ }
+}
+```
+
+Now you can use the new PersonGroup in the target subscription.
+
+To update the target PersonGroup again in the future, create a new PersonGroup to receive the snapshot. To do this, follow the steps in this guide. A single PersonGroup object can have a snapshot applied to it only one time.
+
+## Clean up resources
+
+After you finish migrating face data, manually delete the snapshot object.
+
+```csharp
+await FaceClientEastAsia.Snapshot.DeleteAsync(snapshotId);
+```
+
+## Next steps
+
+Next, see the relevant API reference documentation, explore a sample app that uses the Snapshot feature, or follow a how-to guide to start using the other API operations mentioned here:
+
+- [Snapshot reference documentation (.NET SDK)](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperations)
+- [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample)
+- [Add faces](add-faces.md)
+- [Call the detect API](identity-detect-faces.md)
cognitive-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/mitigate-latency.md
+
+ Title: How to mitigate latency when using the Face service
+
+description: Learn how to mitigate latency when using the Face service.
+++++ Last updated : 1/5/2021+
+ms.devlang: csharp
+++
+# How to: mitigate latency when using the Face service
+
+You may encounter latency when using the Face service. Latency refers to any kind of delay that occurs when communicating over a network. In general, possible causes of latency include:
+- The physical distance each packet must travel from source to destination.
+- Problems with the transmission medium.
+- Errors in routers or switches along the transmission path.
+- The time required by antivirus applications, firewalls, and other security mechanisms to inspect packets.
+- Malfunctions in client or server applications.
+
+This article talks about possible causes of latency that are specific to using Azure Cognitive Services, and how you can mitigate them.
+
+> [!NOTE]
+> Azure Cognitive Services does not provide any Service Level Agreement (SLA) regarding latency.
+
+## Possible causes of latency
+
+### Slow connection between the Cognitive Service and a remote URL
+
+Some Azure services provide methods that obtain data from a remote URL that you provide. For example, when you call the [DetectWithUrlAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithUrlAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_String_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can specify the URL of an image in which the service tries to detect faces.
+
+```csharp
+var faces = await client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
+```
+
+The Face service must then download the image from the remote server. If the connection from the Face service to the remote server is slow, that will affect the response time of the Detect method.
+
+To mitigate this situation, consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
+
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+```
+
+### Large upload size
+
+Some Azure services provide methods that obtain data from a file that you upload. For example, when you call the [DetectWithStreamAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithStreamAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_IO_Stream_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can upload an image in which the service tries to detect faces.
+
+```csharp
+using FileStream fs = File.OpenRead(@"C:\images\face.jpg");
+System.Collections.Generic.IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(fs, detectionModel: DetectionModel.Detection02);
+```
+
+If the file to upload is large, that will impact the response time of the `DetectWithStreamAsync` method, for the following reasons:
+- It takes longer to upload the file.
+- It takes the service longer to process the file, in proportion to the file size.
+
+Mitigations:
+- Consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+```
+- Consider uploading a smaller file.
+ - See the guidelines regarding [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
+ - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
+ - For face recognition, reducing the face size to 200x200 pixels doesn't affect the accuracy of the recognition model.
+ - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
+ - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
+```csharp
+var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
+var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg");
+Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
+IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result);
+```
+
+### Slow connection between your compute resource and the Face service
+
+If your computer has a slow connection to the Face service, this will affect the response time of service methods.
+
+Mitigations:
+- When you create your Face subscription, make sure to choose the region closest to where your application is hosted.
+- If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.
+- If longer latencies affect the user experience, choose a timeout threshold (for example, a maximum of 5 seconds) before retrying the API call, as shown in the sketch after this list.
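+
+The following sketch shows one way to combine a timeout with a retry loop. The helper name and parameter values are placeholders rather than part of the Face SDK:
+
+```csharp
+static async Task<IList<DetectedFace>> DetectWithTimeoutAsync(
+    IFaceClient client, string imageUrl, TimeSpan timeout, int maxAttempts = 3)
+{
+    for (int attempt = 1; attempt <= maxAttempts; attempt++)
+    {
+        Task<IList<DetectedFace>> detectTask = client.Face.DetectWithUrlAsync(imageUrl);
+        Task completed = await Task.WhenAny(detectTask, Task.Delay(timeout));
+
+        if (completed == detectTask)
+        {
+            return await detectTask;
+        }
+        // Timed out; the abandoned call keeps running, but we retry with a fresh request.
+    }
+
+    throw new TimeoutException($"Face detection didn't complete within {timeout} after {maxAttempts} attempts.");
+}
+```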
+
+## Next steps
+
+In this guide, you learned how to mitigate latency when using the Face service. Next, learn how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively.
+
+> [!div class="nextstepaction"]
+> [Example: Use the large-scale feature](use-large-scale.md)
+
+## Related topics
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-detection-model.md
+
+ Title: How to specify a detection model - Face
+
+description: This article will show you how to choose which face detection model to use with your Azure Face application.
+++++++ Last updated : 03/05/2021+
+ms.devlang: csharp
+++
+# Specify a face detection model
+
+This guide shows you how to specify a face detection model for the Azure Face service.
+
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers have the option to specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
+
+Read on to learn how to specify the face detection model in certain face operations. The Face service uses face detection whenever it converts an image of a face into some other form of data.
+
+If you aren't sure whether you should use the latest model, skip to the [Evaluate different models](#evaluate-different-models) section to evaluate the new model and compare results using your current data set.
+
+## Prerequisites
+
+You should be familiar with the concept of AI face detection. If you aren't, see the face detection conceptual guide or how-to guide:
+
+* [Face detection concepts](../concept-face-detection.md)
+* [Call the detect API](identity-detect-faces.md)
+
+## Detect faces with specified model
+
+Face detection finds the bounding-box locations of human faces and identifies their visual landmarks. It extracts the face's features and stores them for later use in [recognition](../concept-face-recognition.md) operations.
+
+When you use the [Face - Detect] API, you can assign the model version with the `detectionModel` parameter. The available values are:
+
+* `detection_01`
+* `detection_02`
+* `detection_03`
+
+A request URL for the [Face - Detect] REST API will look like this:
+
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
+
+If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+var faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, false, false, recognitionModel: "recognition_04", detectionModel: "detection_03");
+```
+
+## Add face to Person with specified model
+
+The Face service can extract face data from an image and associate it with a **Person** object through the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API. In this API call, you can specify the detection model in the same way as in [Face - Detect].
+
+See the following code example for the .NET client library.
+
+```csharp
+// Create a PersonGroup and add a person with face detected by "detection_03" model
+string personGroupId = "mypersongroupid";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+
+string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
+
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+await faceClient.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
+```
+
+This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+
+> [!NOTE]
+> You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Face - Identify] API, for example).
+
+## Add face to FaceList with specified model
+
+You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library.
+
+```csharp
+await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+await faceClient.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
+```
+
+This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+
+> [!NOTE]
+> You don't need to use the same detection model for all faces in a **FaceList** object, and you don't need to use the same detection model when detecting new faces to compare with a **FaceList** object.
+
+## Evaluate different models
+
+The different face detection models are optimized for different tasks. See the following table for an overview of the differences.
+
+|**detection_01** |**detection_02** |**detection_03**
+||||
+|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations.
+|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations.
+|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call.
+|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call.
+
+The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
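+
+For example, a quick comparison over a single test image might look like the following sketch; extend it across your own dataset to get a meaningful comparison:
+
+```csharp
+string[] detectionModels = { "detection_01", "detection_02", "detection_03" };
+string testImageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+
+foreach (string model in detectionModels)
+{
+    var faces = await faceClient.Face.DetectWithUrlAsync(testImageUrl, false, false, detectionModel: model);
+    Console.WriteLine($"{model}: detected {faces.Count} face(s).");
+}
+```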
+
+## Next steps
+
+In this article, you learned how to specify the detection model to use with different Face APIs. Next, follow a quickstart to get started with face detection and analysis.
+
+* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
+* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
+
+[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
+[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
+[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
+[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
+[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
+[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
+[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
+[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
+[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
+[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
+[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
+[FaceList - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-recognition-model.md
+
+ Title: How to specify a recognition model - Face
+
+description: This article will show you how to choose which recognition model to use with your Azure Face application.
++++++ Last updated : 03/05/2021+
+ms.devlang: csharp
+++
+# Specify a face recognition model
+
+This guide shows you how to specify a face recognition model for face detection, identification and similarity search using the Azure Face service.
+
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face recognition model they'd like to use. They can choose the model that best fits their use case.
+
+The Azure Face service has four recognition models available. The models _recognition_01_ (published 2017), _recognition_02_ (published 2019), and _recognition_03_ (published 2020) are continually supported to ensure backwards compatibility for customers using FaceLists or **PersonGroup**s created with these models. A **FaceList** or **PersonGroup** will always use the recognition model it was created with, and new faces will become associated with this model when they're added. This can't be changed after creation and customers will need to use the corresponding recognition model with the corresponding **FaceList** or **PersonGroup**.
+
+You can move to later recognition models at your own convenience; however, you'll need to create new FaceLists and PersonGroups with the recognition model of your choice.
+
+The _recognition_04_ model (published 2021) is the most accurate model currently available. If you're a new customer, we recommend using this model. _Recognition_04_ will provide improved accuracy for both similarity comparisons and person-matching comparisons. _Recognition_04_ improves recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). Now you can build safe and seamless user experiences that use the latest _detection_03_ model to detect whether an enrolled user is wearing a face cover. Then you can use the latest _recognition_04_ model to recognize their identity. Each model operates independently of the others, and a confidence threshold set for one model isn't meant to be compared across the other recognition models.
+
+Read on to learn how to specify a selected model in different Face operations while avoiding model conflicts. If you're an advanced user and would like to determine whether you should switch to the latest model, skip to the [Evaluate different models](#evaluate-different-models) section. You can evaluate the new model and compare results using your current data set.
++
+## Prerequisites
+
+You should be familiar with the concepts of AI face detection and identification. If you aren't, see these guides first:
+
+* [Face detection concepts](../concept-face-detection.md)
+* [Face recognition concepts](../concept-face-recognition.md)
+* [Call the detect API](identity-detect-faces.md)
+
+## Detect faces with specified model
+
+Face detection identifies the visual landmarks of human faces and finds their bounding-box locations. It also extracts the face's features and stores them for use in identification. All of this information forms the representation of one face.
+
+The recognition model is used when the face features are extracted, so you can specify a model version when performing the Detect operation.
+
+When using the [Face - Detect] API, assign the model version with the `recognitionModel` parameter. The available values are:
+* recognition_01
+* recognition_02
+* recognition_03
+* recognition_04
++
+Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in the response. A request URL for the [Face - Detect] REST API will look like this:
+
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>`
+
+If you're using the client library, you can assign the value for `recognitionModel` by passing a string representing the version. If you leave it unassigned, a default model version of `recognition_01` will be used. See the following code example for the .NET client library.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+var faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, true, true, recognitionModel: "recognition_01", returnRecognitionModel: true);
+```
+
+## Identify faces with specified model
+
+The Face service can extract face data from an image and associate it with a **Person** object (through the [Add face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API call, for example), and multiple **Person** objects can be stored together in a **PersonGroup**. Then, a new face can be compared against a **PersonGroup** (with the [Face - Identify] call), and the matching person within that group can be identified.
+
+A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([PersonGroup - Create] or [LargePersonGroup - Create]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [PersonGroup - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+
+See the following code example for the .NET client library.
+
+```csharp
+// Create an empty PersonGroup with "recognition_04" model
+string personGroupId = "mypersongroupid";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+```
+
+In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features.
+
+Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Face - Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
+
+There is no change in the [Face - Identify] API; you only need to specify the model version in detection.
+
+## Find similar faces with specified model
+
+You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [FaceList - Create] API or [LargeFaceList - Create]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+
+See the following code example for the .NET client library.
+
+```csharp
+await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+```
+
+This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
+
+There is no change in the [Face - Find Similar] API; you only specify the model version in detection.
+
+## Verify faces with specified model
+
+The [Face - Verify] API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
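+
+For example, a verification sketch with the .NET client library might look like this, where `faceId1` and `faceId2` are assumed to come from Detect calls that used the same recognition model:
+
+```csharp
+// Both face IDs must come from Detect calls that used the same recognition model.
+var verifyResult = await faceClient.Face.VerifyFaceToFaceAsync(faceId1, faceId2);
+Console.WriteLine($"Same person: {verifyResult.IsIdentical} (confidence {verifyResult.Confidence}).");
+```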
+
+## Evaluate different models
+
+If you'd like to compare the performances of different recognition models on your own data, you'll need to:
+1. Create four PersonGroups using _recognition_01_, _recognition_02_, _recognition_03_, and _recognition_04_ respectively.
+1. Use your image data to detect faces and register them to **Person**s within these four **PersonGroup**s.
+1. Train your PersonGroups using the PersonGroup - Train API.
+1. Test with Face - Identify on all four **PersonGroup**s and compare the results.
++
+If you normally specify a confidence threshold (a value between zero and one that determines how confident the model must be to identify a face), you may need to use different thresholds for different models. A threshold for one model isn't meant to be shared with another and won't necessarily produce the same results.
+
+## Next steps
+
+In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started with face detection.
+
+* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
+* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
+
+[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
+[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
+[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
+[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
+[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
+[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
+[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
+[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
+[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
+[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
+[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-headpose.md
+
+ Title: Use the HeadPose attribute
+
+description: Learn how to use the HeadPose attribute to automatically rotate the face rectangle or detect head gestures in a video feed.
++++++ Last updated : 02/23/2021+
+ms.devlang: csharp
+++
+# Use the HeadPose attribute
+
+In this guide, you'll see how you can use the HeadPose attribute of a detected face to enable some key scenarios.
+
+## Rotate the face rectangle
+
+The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop.
+
+The [Cognitive Services Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) sample app uses the HeadPose attribute to rotate its detected face rectangles.
+
+### Explore the sample code
+
+You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
+
+```csharp
+/// <summary>
+/// Calculate the rendering face rectangle
+/// </summary>
+/// <param name="faces">Detected face from service</param>
+/// <param name="maxSize">Image rendering size</param>
+/// <param name="imageInfo">Image width and height</param>
+/// <returns>Face structure for rendering</returns>
+public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<DetectedFace> faces, int maxSize, Tuple<int, int> imageInfo)
+{
+ var imageWidth = imageInfo.Item1;
+ var imageHeight = imageInfo.Item2;
+ var ratio = (float)imageWidth / imageHeight;
+ int uiWidth = 0;
+ int uiHeight = 0;
+ if (ratio > 1.0)
+ {
+ uiWidth = maxSize;
+ uiHeight = (int)(maxSize / ratio);
+ }
+ else
+ {
+ uiHeight = maxSize;
+ uiWidth = (int)(ratio * uiHeight);
+ }
+
+ var uiXOffset = (maxSize - uiWidth) / 2;
+ var uiYOffset = (maxSize - uiHeight) / 2;
+ var scale = (float)uiWidth / imageWidth;
+
+ foreach (var face in faces)
+ {
+ var left = (int)(face.FaceRectangle.Left * scale + uiXOffset);
+ var top = (int)(face.FaceRectangle.Top * scale + uiYOffset);
+
+ // Angle of face rectangles, default value is 0 (not rotated).
+ double faceAngle = 0;
+
+ // If head pose attributes have been obtained, re-calculate the left & top (X & Y) positions.
+ if (face.FaceAttributes?.HeadPose != null)
+ {
+ // Head pose's roll value acts directly as the face angle.
+ faceAngle = face.FaceAttributes.HeadPose.Roll;
+ var angleToPi = Math.Abs((faceAngle / 180) * Math.PI);
+
+ // _____ | / \ |
+ // |____| => |/ /|
+ // | \ / |
+ // Re-calculate the face rectangle's left & top (X & Y) positions.
+ var newLeft = face.FaceRectangle.Left +
+ face.FaceRectangle.Width / 2 -
+ (face.FaceRectangle.Width * Math.Sin(angleToPi) + face.FaceRectangle.Height * Math.Cos(angleToPi)) / 2;
+
+ var newTop = face.FaceRectangle.Top +
+ face.FaceRectangle.Height / 2 -
+ (face.FaceRectangle.Height * Math.Sin(angleToPi) + face.FaceRectangle.Width * Math.Cos(angleToPi)) / 2;
+
+ left = (int)(newLeft * scale + uiXOffset);
+ top = (int)(newTop * scale + uiYOffset);
+ }
+
+ yield return new Face()
+ {
+ FaceId = face.FaceId?.ToString(),
+ Left = left,
+ Top = top,
+ OriginalLeft = (int)(face.FaceRectangle.Left * scale + uiXOffset),
+ OriginalTop = (int)(face.FaceRectangle.Top * scale + uiYOffset),
+ Height = (int)(face.FaceRectangle.Height * scale),
+ Width = (int)(face.FaceRectangle.Width * scale),
+ FaceAngle = faceAngle,
+ };
+ }
+}
+```
+
+### Display the updated rectangle
+
+From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data:
+
+```xaml
+ <DataTemplate>
+ <Rectangle Width="{Binding Width}" Height="{Binding Height}" Stroke="#FF26B8F4" StrokeThickness="1">
+ <Rectangle.LayoutTransform>
+ <RotateTransform Angle="{Binding FaceAngle}"/>
+ </Rectangle.LayoutTransform>
+ </Rectangle>
+</DataTemplate>
+```
+
+## Detect head gestures
+
+You can detect head gestures like nodding and head shaking by tracking HeadPose changes in real time. You can use this feature as a custom liveness detector.
+
+Liveness detection is the task of determining that a subject is a real person and not an image or video representation. A head gesture detector can serve as one way to help verify liveness, because a static image can't reproduce head movement.
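+
+The following is a minimal sketch of one possible approach, not the linked sample's implementation: it polls the detection API for the HeadPose attribute and treats a large pitch swing within a short window as a nod. The `GetCameraFrameAsync` helper and the thresholds are hypothetical.
+
+```csharp
+// Hypothetical nod detector: track the HeadPose pitch over a short sliding window.
+// Assumes an IFaceClient (faceClient), System.Linq, and a GetCameraFrameAsync() helper that returns a camera frame as a Stream.
+var headPoseAttribute = new List<FaceAttributeType> { FaceAttributeType.HeadPose };
+var pitchWindow = new Queue<double>();
+
+for (int i = 0; i < 30; i++)
+{
+    using (Stream frame = await GetCameraFrameAsync())
+    {
+        var faces = await faceClient.Face.DetectWithStreamAsync(frame, returnFaceAttributes: headPoseAttribute);
+        if (faces.Count > 0)
+        {
+            pitchWindow.Enqueue(faces[0].FaceAttributes.HeadPose.Pitch);
+            if (pitchWindow.Count > 10) { pitchWindow.Dequeue(); }
+        }
+    }
+
+    // A pitch swing of more than roughly 15 degrees within the window is treated as a nod (illustrative threshold).
+    if (pitchWindow.Count > 1 && pitchWindow.Max() - pitchWindow.Min() > 15)
+    {
+        Console.WriteLine("Nod detected.");
+        break;
+    }
+}
+```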
+
+> [!CAUTION]
+> To detect head gestures in real time, you'll need to call the Face API at a high rate (more than once per second). If you have a free-tier (f0) subscription, this will not be possible. If you have a paid-tier subscription, make sure you've calculated the costs of making rapid API calls for head gesture detection.
+
+See the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceAPIHeadPoseSample) on GitHub for a working example of head gesture detection.
+
+## Next steps
+
+See the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
cognitive-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-large-scale.md
+
+ Title: "Example: Use the Large-Scale feature - Face"
+
+description: This guide is an article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects.
+++++++ Last updated : 05/01/2019+
+ms.devlang: csharp
+++
+# Example: Use the large-scale feature
+
+This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
+
+LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
+
+The samples are written in C# by using the Azure Cognitive Services Face client library.
+
+> [!NOTE]
+> To enable Face search performance for Identification and FindSimilar at large scale, introduce a Train operation to preprocess the LargeFaceList and LargePersonGroup. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, it's possible to perform Identification and FindSimilar if a successful training operation was completed before. The drawback is that newly added persons and faces don't appear in the results until the next training operation completes.
+
+## Step 1: Initialize the client object
+
+When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. For example:
+
+```csharp
+string SubscriptionKey = "<Key>";
+// Use your own subscription endpoint corresponding to the key.
+string SubscriptionEndpoint = "https://westus.api.cognitive.microsoft.com";
+private readonly IFaceClient faceClient = new FaceClient(
+ new ApiKeyServiceClientCredentials(SubscriptionKey),
+ new System.Net.Http.DelegatingHandler[] { });
+faceClient.Endpoint = SubscriptionEndpoint;
+```
+
+To get the key with its corresponding endpoint, go to the Azure Marketplace from the Azure portal.
+For more information, see [Subscriptions](https://azure.microsoft.com/services/cognitive-services/directory/vision/).
+
+## Step 2: Code migration
+
+This section focuses on how to migrate PersonGroup or FaceList implementation to LargePersonGroup or LargeFaceList. Although LargePersonGroup or LargeFaceList differs from PersonGroup or FaceList in design and internal implementation, the API interfaces are similar for backward compatibility.
+
+Data migration isn't supported. You re-create the LargePersonGroup or LargeFaceList instead.
+
+### Migrate a PersonGroup to a LargePersonGroup
+
+Migration from a PersonGroup to a LargePersonGroup is simple. They share exactly the same group-level operations.
+
+For PersonGroup- or person-related implementation, it's necessary to change only the API paths or SDK class/module to LargePersonGroup and LargePersonGroup Person.
+
+Add all of the faces and persons from the PersonGroup to the new LargePersonGroup. For more information, see [Add faces](add-faces.md).
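+
+The following is a minimal sketch of that re-enrollment, assuming your original enrollment images are still available on disk with one folder of `.jpg` images per person. The IDs, display names, and folder layout are placeholders.
+
+```csharp
+// Hypothetical re-enrollment: existing PersonGroup data can't be copied directly,
+// so persons and faces are re-added to the new LargePersonGroup from the source images.
+const string LargePersonGroupId = "mylargepersongroupid_001";
+const string EnrollmentImageDir = @"/path/to/enrollment/images"; // One subfolder per person.
+
+await faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, "MyLargePersonGroupDisplayName");
+
+foreach (var personDir in Directory.GetDirectories(EnrollmentImageDir))
+{
+    // The folder name is used as the person's display name.
+    var person = await faceClient.LargePersonGroupPerson.CreateAsync(
+        LargePersonGroupId, Path.GetFileName(personDir));
+
+    foreach (var imagePath in Directory.GetFiles(personDir, "*.jpg"))
+    {
+        using (Stream stream = File.OpenRead(imagePath))
+        {
+            await faceClient.LargePersonGroupPerson.AddFaceFromStreamAsync(
+                LargePersonGroupId, person.PersonId, stream);
+        }
+    }
+}
+
+// Train the LargePersonGroup so that the new persons become searchable with Identify.
+await faceClient.LargePersonGroup.TrainAsync(LargePersonGroupId);
+```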
+
+### Migrate a FaceList to a LargeFaceList
+
+| FaceList APIs | LargeFaceList APIs |
+|:--:|:--:|
+| Create | Create |
+| Delete | Delete |
+| Get | Get |
+| List | List |
+| Update | Update |
+| - | Train |
+| - | Get Training Status |
+
+The preceding table is a comparison of list-level operations between FaceList and LargeFaceList. As is shown, LargeFaceList comes with new operations, Train and Get Training Status, when compared with FaceList. Training the LargeFaceList is a precondition of the
+[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for FaceList. The following snippet is a helper function to wait for the training of a LargeFaceList:
+
+```csharp
+/// <summary>
+/// Helper function to train LargeFaceList and wait for finish.
+/// </summary>
+/// <remarks>
+/// The time interval can be adjusted considering the following factors:
+/// - The training time which depends on the capacity of the LargeFaceList.
+/// - The acceptable latency for getting the training status.
+/// - The call frequency and cost.
+///
+/// Estimated training time for LargeFaceList in different scale:
+/// - 1,000 faces cost about 1 to 2 seconds.
+/// - 10,000 faces cost about 5 to 10 seconds.
+/// - 100,000 faces cost about 1 to 2 minutes.
+/// - 1,000,000 faces cost about 10 to 30 minutes.
+/// </remarks>
+/// <param name="largeFaceListId">The Id of the LargeFaceList for training.</param>
+/// <param name="timeIntervalInMilliseconds">The time interval for getting training status in milliseconds.</param>
+/// <returns>A task of waiting for LargeFaceList training finish.</returns>
+private static async Task TrainLargeFaceList(
+ string largeFaceListId,
+ int timeIntervalInMilliseconds = 1000)
+{
+ // Trigger a train call.
+ await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
+
+ // Wait for training to finish.
+ while (true)
+ {
+ await Task.Delay(timeIntervalInMilliseconds);
+ var status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);
+
+ if (status.Status == TrainingStatusType.Running)
+ {
+ continue;
+ }
+ else if (status.Status == TrainingStatusType.Succeeded)
+ {
+ break;
+ }
+ else
+ {
+ throw new Exception("The train operation failed.");
+ }
+ }
+}
+```
+
+Previously, a typical use of FaceList with added faces and FindSimilar looked like the following:
+
+```csharp
+// Create a FaceList.
+const string FaceListId = "myfacelistid_001";
+const string FaceListName = "MyFaceListDisplayName";
+const string ImageDir = @"/path/to/FaceList/images";
+faceClient.FaceList.CreateAsync(FaceListId, FaceListName).Wait();
+
+// Add Faces to the FaceList.
+Parallel.ForEach(
+ Directory.GetFiles(ImageDir, "*.jpg"),
+ async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.FaceList.AddFaceFromStreamAsync(FaceListId, stream);
+ }
+ });
+
+// Perform FindSimilar.
+const string QueryImagePath = @"/path/to/query/image";
+var results = new List<SimilarPersistedFace[]>();
+using (Stream stream = File.OpenRead(QueryImagePath))
+{
+ var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+ foreach (var face in faces)
+ {
+ results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, FaceListId, 20));
+ }
+}
+```
+
+When you migrate it to a LargeFaceList, the code becomes the following:
+
+```csharp
+// Create a LargeFaceList.
+const string LargeFaceListId = "mylargefacelistid_001";
+const string LargeFaceListName = "MyLargeFaceListDisplayName";
+const string ImageDir = @"/path/to/FaceList/images";
+faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName).Wait();
+
+// Add Faces to the LargeFaceList.
+Parallel.ForEach(
+ Directory.GetFiles(ImageDir, "*.jpg"),
+ async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.LargeFaceList.AddFaceFromStreamAsync(LargeFaceListId, stream);
+ }
+ });
+
+// Train() is a new operation that's required for LargeFaceList.
+// You must call it before FindSimilarAsync() so that the newly added faces are searchable.
+await TrainLargeFaceList(LargeFaceListId);
+
+// Perform FindSimilar.
+const string QueryImagePath = @"/path/to/query/image";
+var results = new List<SimilarPersistedFace[]>();
+using (Stream stream = File.OpenRead(QueryImagePath))
+{
+ var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+ foreach (var face in faces)
+ {
+ results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, largeFaceListId: LargeFaceListId));
+ }
+}
+```
+
+As previously shown, the data management and the FindSimilar part are almost the same. The only exception is that a fresh preprocessing Train operation must complete in the LargeFaceList before FindSimilar works.
+
+## Step 3: Train suggestions
+
+Although the Train operation speeds up [FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)
+and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), training takes time, especially at large scale. The following table lists the estimated training time at different scales.
+
+| Scale for faces or persons | Estimated training time |
+|:--:|:--:|
+| 1,000 | 1-2 sec |
+| 10,000 | 5-10 sec |
+| 100,000 | 1-2 min |
+| 1,000,000 | 10-30 min |
+
+To better utilize the large-scale feature, we recommend the following strategies.
+
+### Step 3.1: Customize time interval
+
+As shown in `TrainLargeFaceList()`, a time interval in milliseconds controls how often the training status is polled. For a LargeFaceList with more faces, a larger interval reduces the number of calls and the cost. Customize the time interval according to the expected capacity of the LargeFaceList.
+
+The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
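+
+For example, with the `TrainLargeFaceList` helper shown earlier, polling a very large LargeFaceList once per minute might look like the following sketch (the list ID is a placeholder):
+
+```csharp
+// Poll the training status once per minute for a LargeFaceList with around 1,000,000 faces.
+await TrainLargeFaceList("mylargefacelistid_001", timeIntervalInMilliseconds: 60 * 1000);
+```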
+
+### Step 3.2: Small-scale buffer
+
+Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
+
+To mitigate this problem, use an extra small-scale LargePersonGroup or LargeFaceList as a buffer only for the newly added entries. Because of its smaller size, this buffer takes a shorter time to train, so the new entries become searchable almost immediately. Use this buffer together with the master LargePersonGroup or LargeFaceList by running the master training on a sparser schedule, for example at midnight or once a day.
+
+An example workflow:
+
+1. Create a master LargePersonGroup or LargeFaceList, which is the master collection. Create a buffer LargePersonGroup or LargeFaceList, which is the buffer collection. The buffer collection is only for newly added persons or faces.
+1. Add new persons or faces to both the master collection and the buffer collection.
+1. Only train the buffer collection with a short time interval to ensure that the newly added entries take effect.
+1. Call Identification or FindSimilar against both the master collection and the buffer collection, and then merge the results (see the sketch after this list).
+1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection.
+1. Delete the old buffer collection after the Train operation finishes on the master collection.
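+
+The following sketch shows what the merge in step 4 might look like with the .NET client library. The face IDs (`faceIds`) and the two collection IDs (`masterGroupId`, `bufferGroupId`) are placeholders.
+
+```csharp
+// Hypothetical merge of Identify results from the master and buffer LargePersonGroups.
+// Requires System.Linq; faceIds is an IList<Guid> from a prior Detect call.
+var masterResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: masterGroupId);
+var bufferResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: bufferGroupId);
+
+// For each query face, keep the candidate with the highest confidence across both collections.
+var bestCandidates = masterResults.Concat(bufferResults)
+    .GroupBy(result => result.FaceId)
+    .ToDictionary(
+        group => group.Key,
+        group => group.SelectMany(result => result.Candidates)
+                      .OrderByDescending(candidate => candidate.Confidence)
+                      .FirstOrDefault());
+```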
+
+### Step 3.3: Standalone training
+
+If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
+
+Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a LargePersonGroup by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is:
+
+```csharp
+private static void Main()
+{
+ // Create a LargePersonGroup.
+ const string LargePersonGroupId = "mylargepersongroupid_001";
+ const string LargePersonGroupName = "MyLargePersonGroupDisplayName";
+ faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, LargePersonGroupName).Wait();
+
+ // Set up standalone training at regular intervals.
+ const int TimeIntervalForStatus = 1000 * 60; // 1-minute interval for getting training status.
+ const double TimeIntervalForTrain = 1000 * 60 * 60; // 1-hour interval for training.
+ var trainTimer = new Timer(TimeIntervalForTrain);
+ trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed(LargePersonGroupId, TimeIntervalForStatus);
+ trainTimer.AutoReset = true;
+ trainTimer.Enabled = true;
+
+ // Other operations like creating persons, adding faces, and identification, except for Train.
+ // ...
+}
+
+private static void TrainTimerOnElapsed(string largePersonGroupId, int timeIntervalInMilliseconds)
+{
+ TrainLargePersonGroup(largePersonGroupId, timeIntervalInMilliseconds).Wait();
+}
+```
+
+For more information about data management and identification-related implementations, see [Add faces](add-faces.md).
+
+## Summary
+
+In this guide, you learned how to migrate the existing PersonGroup or FaceList code, not data, to the LargePersonGroup or LargeFaceList:
+
+- LargePersonGroup and LargeFaceList work similarly to PersonGroup and FaceList, except that LargeFaceList requires the Train operation, which FaceList doesn't.
+- Choose an appropriate Train strategy for dynamic data updates in large-scale data sets.
+
+## Next steps
+
+Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup.
+
+- [Add faces](add-faces.md)
+- [Face client library quickstart](../quickstarts-sdk/identity-client-library.md)
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-persondirectory.md
+
+ Title: "Example: Use the PersonDirectory structure - Face"
+
+description: Learn how to use the PersonDirectory data structure to store face and person data at greater capacity and with other new features.
+++++++ Last updated : 04/22/2021+
+ms.devlang: csharp
+++
+# Use the PersonDirectory structure
+
+To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory.
+
+Currently, the Face API offers the **LargePersonGroup** structure, which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+
+Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically.
+
+## Prerequisites
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below.
+ * You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Add Persons to the PersonDirectory
+**Persons** are the base enrollment units in the **PersonDirectory**. Once you add a **Person** to the directory, you can add up to 248 face images to that **Person**, per recognition model. Then you can identify faces against them using varying scopes.
+
+### Create the Person
+To create a **Person**, you need to call the **CreatePerson** API and provide a name or userData property value.
+
+```csharp
+using Newtonsoft.Json;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+using System.Threading.Tasks;
+
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var addPersonUri = "https://{endpoint}/face/v1.0-preview/persons";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example Person");
+body.Add("userData", "User defined data");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(addPersonUri, content);
+}
+```
+
+The CreatePerson call will return a generated ID for the **Person** and an operation location. The **Person** data will be processed asynchronously, so you use the operation location to fetch the results.
+
+### Wait for asynchronous operation completion
+You'll need to query the async operation status using the returned operation location string to check the progress.
+
+First, you should define a data model like the following to handle the status response.
+
+```csharp
+using System.Runtime.Serialization;
+
+[Serializable]
+public class AsyncStatus
+{
+ [DataMember(Name = "status")]
+ public string Status { get; set; }
+
+ [DataMember(Name = "createdTime")]
+ public DateTime CreatedTime { get; set; }
+
+ [DataMember(Name = "lastActionTime")]
+ public DateTime? LastActionTime { get; set; }
+
+ [DataMember(Name = "finishedTime", EmitDefaultValue = false)]
+ public DateTime? FinishedTime { get; set; }
+
+ [DataMember(Name = "resourceLocation", EmitDefaultValue = false)]
+ public string ResourceLocation { get; set; }
+
+ [DataMember(Name = "message", EmitDefaultValue = false)]
+ public string Message { get; set; }
+}
+```
+
+Using the HttpResponseMessage from above, you can then poll the URL and wait for results.
+
+```csharp
+string operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+
+// Stopwatch requires the System.Diagnostics namespace.
+Stopwatch s = Stopwatch.StartNew();
+string status = "notstarted";
+do
+{
+ // Wait between polls while the operation is still running.
+ if (status == "running")
+ {
+ await Task.Delay(500);
+ }
+
+ var operationResponseMessage = await client.GetAsync(operationLocation);
+
+ var asyncOperationObj = JsonConvert.DeserializeObject<AsyncStatus>(await operationResponseMessage.Content.ReadAsStringAsync());
+ status = asyncOperationObj.Status;
+
+} while ((status == "running" || status == "notstarted") && s.Elapsed < TimeSpan.FromSeconds(30));
+```
++
+Once the status returns as "succeeded", the **Person** object is considered added to the directory.
+
+> [!NOTE]
+> The asynchronous operation from the Create **Person** call doesn't have to show a "succeeded" status before faces can be added to the **Person**, but it does need to complete before the **Person** can be added to a **DynamicPersonGroup** (see the Create and update a **DynamicPersonGroup** section below) or compared during an Identify call. Verify calls work immediately after faces are successfully added to the **Person**.
++
+### Add faces to Persons
+
+Once you have the **Person** ID from the Create Person call, you can add up to 248 face images to a **Person** per recognition model. Specify the recognition model (and optionally the detection model) to use in the call, as data under each recognition model will be processed separately inside the **PersonDirectory**.
+
+The currently supported recognition models are:
+* `Recognition_02`
+* `Recognition_03`
+* `Recognition_04`
+
+Additionally, if the image contains multiple faces, you'll need to specify the rectangle bounding box for the face that is the intended target. The following code adds faces to a **Person** object.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+// Optional query strings for more fine grained face control
+var queryString = "userData={userDefinedData}&targetFace={left,top,width,height}&detectionModel={detectionModel}";
+var uri = "https://{endpoint}/face/v1.0-preview/persons/{personId}/recognitionModels/{recognitionModel}/persistedFaces?" + queryString;
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("url", "{image url}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+After the Add Faces call, the face data will be processed asynchronously, and you'll need to wait for the success of the operation in the same manner as before.
+
+When the operation for the face addition finishes, the data will be ready for use in Identify calls.
+
+## Create and update a **DynamicPersonGroup**
+
+**DynamicPersonGroups** are collections of references to **Person** objects within a **PersonDirectory**; they're used to create subsets of the directory. A common use is when you want to get fewer false positives and increased accuracy in an Identify operation by limiting the scope to just the **Person** objects you expect to match. Practical use cases include directories for specific building access among a larger campus or organization. The organization directory may contain 5 million individuals, but you only need to search a specific 800 people for a particular building, so you would create a **DynamicPersonGroup** containing those specific individuals.
+
+If you've used a **PersonGroup** before, take note of two major differences:
+* Each **Person** inside a **DynamicPersonGroup** is a reference to the actual **Person** in the **PersonDirectory**, meaning that it's not necessary to recreate a **Person** in each group.
+* As mentioned in previous sections, there is no need to make Train calls, as the face data is processed at the Directory level automatically.
+
+### Create the group
+
+To create a **DynamicPersonGroup**, you need to provide a group ID with alphanumeric or dash characters. This ID will function as the unique identifier for all usage purposes of the group.
+
+There are two ways to initialize a group collection. You can create an empty group initially, and populate it later:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example DynamicPersonGroup");
+body.Add("userData", "User defined data");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PutAsync(uri, content);
+}
+```
+
+This process is immediate and there is no need to wait for any asynchronous operations to succeed.
+
+Alternatively, you can create it with a set of **Person** IDs so that it contains those references from the beginning, by providing the set in the _addPersonIds_ argument:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example DynamicPersonGroup");
+body.Add("userData", "User defined data");
+body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PutAsync(uri, content);
+
+ // Async operation location to query the completion status from
+ var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+}
+```
+
+> [!NOTE]
+> As soon as the call returns, the created **DynamicPersonGroup** will be ready to use in an Identify call, with any **Person** references provided in the process. The completion status of the returned operation ID, on the other hand, indicates the update status of the person-to-group relationship.
+
+### Update the DynamicPersonGroup
+
+After the initial creation, you can add and remove **Person** references from the **DynamicPersonGroup** with the Update Dynamic Person Group API. To add **Person** objects to the group, list the **Person** IDs in the _addPersonIds_ argument. To remove **Person** objects, list them in the _removePersonIds_ argument. Both adding and removing can be performed in a single call:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example Dynamic Person Group updated");
+body.Add("userData", "User defined data updated");
+body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("removePersonIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PatchAsync(uri, content);
+
+ // Async operation location to query the completion status from
+ var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+}
+```
+
+Once the call returns, the updates to the collection will be reflected when the group is queried. As with the creation API, the returned operation indicates the update status of person-to-group relationship for any **Person** that's involved in the update. You don't need to wait for the completion of the operation before making further Update calls to the group.
+
+## Identify faces in a PersonDirectory
+
+The most common way to use face data in a **PersonDirectory** is to compare the enrolled **Person** objects against a given face and identify the most likely candidate it belongs to. Multiple faces can be provided in the request, and each will receive its own set of comparison results in the response.
+
+In **PersonDirectory**, there are three types of scopes each face can be identified against:
+
+### Scenario 1: Identify against a DynamicPersonGroup
+
+Specifying the _dynamicPersonGroupId_ property in the request compares the face against every **Person** referenced in the group. Only a single **DynamicPersonGroup** can be identified against in a call.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+// Optional query strings for more fine grained face control
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("dynamicPersonGroupId", "{dynamicPersonGroupIdToIdentifyIn}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+### Scenario 2: Identify against a specific list of persons
+
+You can also specify a list of **Person** IDs in the _personIds_ property to compare the face against each of them.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+### Scenario 3: Identify against the entire **PersonDirectory**
+
+Providing a single asterisk in the _personIds_ property in the request compares the face against every single **Person** enrolled in the **PersonDirectory**.
+
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"*"});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+For all three scenarios, the identification only compares the incoming face against faces whose AddPersonFace call has returned with a "succeeded" response.
+
+## Verify faces against persons in the **PersonDirectory**
+
+With a face ID returned from a detection call, you can verify if the face belongs to a specific **Person** enrolled inside the **PersonDirectory**. Specify the **Person** using the _personId_ property.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/verify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceId", "{guid1}");
+body.Add("personId", "{guid1}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+The response will contain a Boolean value indicating whether the service considers the new face to belong to the same **Person**, and a confidence score for the prediction.
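+
+As a rough sketch, you could read those two fields from the response like this (`isIdentical` and `confidence` are the property names in the Verify response):
+
+```csharp
+// Read the verification result from the HTTP response.
+var resultJson = await response.Content.ReadAsStringAsync();
+var result = JsonConvert.DeserializeObject<Dictionary<string, object>>(resultJson);
+
+bool isIdentical = Convert.ToBoolean(result["isIdentical"]);
+double confidence = Convert.ToDouble(result["confidence"]);
+```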
+
+## Next steps
+
+In this guide, you learned how to use the **PersonDirectory** structure to store face and person data for your Face app. Next, learn the best practices for adding your users' face data.
+
+* [Best practices for adding users](../enrollment-overview.md)
cognitive-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-api-reference.md
+
+ Title: API Reference - Face
+
+description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs.
+++++++ Last updated : 02/17/2021+++
+# Face API reference list
+
+Azure Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories:
+
+- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
cognitive-services Identity Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-encrypt-data-at-rest.md
+
+ Title: Face service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Face, and how to enable and manage CMK.
++++++ Last updated : 08/28/2020++
+#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
++
+# Face service encryption of data at rest
+
+The Face service automatically encrypts your data when persisted to the cloud. The Face service encryption protects your data and helps you to meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Face Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Face service, you will need to create a new Face resource and select E0 as the Pricing Tier. Once your Face resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
You can use Computer Vision Spatial Analysis to ingest streaming video from came
<!--This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
* The [conceptual articles](tbd) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.-->
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/language-support.md
The Computer Vision [Read API](./overview-ocr.md#read-api) supports many languag
> > `Read` OCR's deep-learning-based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-See [How to specify the `Read` model](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
+See [How to specify the `Read` model](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
### Handwritten text
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Japanese |`ja`|✅ | ✅| ✅|||||| |✅|✅| |Kazakh |`kk`| | ✅| |||||| ||| |Korean |`ko`| | ✅| |||||| |||
-|Lithuanian |`It`| | ✅| |||||| |||
-|Latvian |`Iv`| | ✅| |||||| |||
+|Lithuanian |`lt`| | ✅| |||||| |||
+|Latvian |`lv`| | ✅| |||||| |||
|Macedonian |`mk`| | ✅| |||||| ||| |Malay Malaysia |`ms`| | ✅| |||||| ||| |Norwegian (Bokmal) |`nb`| | ✅| |||||| |||
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Polish |`pl`| | ✅| |||||| ||| |Dari |`prs`| | ✅| |||||| ||| | Portuguese-Brazil|`pt-BR`| | ✅| |||||| |||
-| Portuguese-Portugal |`pt`/`pt-PT`|✅ | ✅| ✅|||||| |✅|✅|
+| Portuguese-Portugal |`pt`|✅ | ✅| ✅|||||| |✅|✅|
+| Portuguese-Portugal |`pt-PT`| | ✅| |||||| |||
|Romanian |`ro`| | ✅| |||||| ||| |Russian |`ru`| | ✅| |||||| ||| |Slovak |`sk`| | ✅| |||||| |||
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Turkish |`tr`| | ✅| |||||| ||| |Ukrainian |`uk`| | ✅| |||||| ||| |Vietnamese |`vi`| | ✅| |||||| |||
-|Chinese Simplified |`zh`/ `zh-Hans`|✅ | ✅| ✅|||||| |✅|✅|
+|Chinese Simplified |`zh`|✅ | ✅| ✅|||||| |✅|✅|
+|Chinese Simplified |`zh-Hans`| | ✅| |||||| |||
|Chinese Traditional |`zh-Hant`| | ✅| |||||| |||
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
+
+ Title: What is the Azure Face service?
+
+description: The Azure Face service provides AI algorithms that you use to detect, recognize, and analyze human faces in images.
++++++ Last updated : 02/28/2022++
+keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
+#Customer intent: As the developer of an app that deals with images of humans, I want to learn what the Face service does so I can determine if I should use its features.
++
+# What is the Azure Face service?
+
+> [!WARNING]
+> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
+
+The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
+
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
+
+## Example use cases
+
+**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+
+**Touchless access control**: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
+
+**Face redaction**: Redact or blur detected faces of people recorded in a video to protect their privacy.
++
+## Face detection and analysis
+
+Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data. This is used in later operations to identify or verify faces.
+
+Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users to take off their sunglasses if they're wearing sunglasses.
+
+> [!NOTE]
+> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to use other Face operations like Identify, Verify, Find Similar, or Face grouping, you should use this service instead.
+
+For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
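+
+For illustration, a minimal detection call with the .NET client library might look like the following sketch (the image URL is a placeholder, and `faceClient` is assumed to be an authenticated client):
+
+```csharp
+// Detect faces and request a couple of optional attributes.
+var attributes = new List<FaceAttributeType> { FaceAttributeType.HeadPose, FaceAttributeType.Glasses };
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    "https://example.com/group-photo.jpg", returnFaceAttributes: attributes);
+
+foreach (DetectedFace face in faces)
+{
+    Console.WriteLine($"Face {face.FaceId} at ({face.FaceRectangle.Left}, {face.FaceRectangle.Top})");
+}
+```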
++
+## Identity verification
+
+Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
+
+### Identification
+
+Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
+
+The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
+
+![A grid with three columns for different people, each with three rows of face images](./media/person.group.clare.jpg)
+
+After you create and train a group, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+
+### Verification
+
+The verification operation answers the question, "Do these two faces belong to the same person?".
+
+Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify that they're the same individual. Verification can be used for Identity Verification, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID.
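+
+With the .NET client library, a face-to-face verification call might look like the following sketch (the two face IDs are assumed to come from earlier detection calls):
+
+```csharp
+// Compare two detected faces; the result includes a Boolean and a confidence score.
+VerifyResult verification = await faceClient.Face.VerifyFaceToFaceAsync(faceId1, faceId2);
+Console.WriteLine($"Same person: {verification.IsIdentical} (confidence {verification.Confidence})");
+```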
+
+For more information about identity verification, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
++
+## Find similar faces
+
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
+
+The following example shows the target face:
+
+![A woman smiling](./media/FaceFindSimilar.QueryFace.jpg)
+
+And these images are the candidate faces:
+
+![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg)
+
+To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) reference documentation.
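+
+For illustration, a Find Similar call against a LargeFaceList might look like the following sketch. The target face ID and list ID are placeholders, and the list is assumed to be populated and trained.
+
+```csharp
+// Return up to four similar faces without filtering for the same person.
+IList<SimilarFace> similarFaces = await faceClient.Face.FindSimilarAsync(
+    targetFaceId,
+    largeFaceListId: "mylargefacelistid_001",
+    maxNumOfCandidatesReturned: 4,
+    mode: FindSimilarMatchMode.MatchFace); // Use FindSimilarMatchMode.MatchPerson to filter by person.
+```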
+
+## Group faces
+
+The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
+
+All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
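+
+As a sketch, grouping a set of detected face IDs with the .NET client library could look like this (`faceIds` is assumed to come from earlier detection calls):
+
+```csharp
+// Divide unknown faces into groups of faces that likely belong to the same person.
+GroupResult grouping = await faceClient.Face.GroupAsync(faceIds);
+Console.WriteLine($"{grouping.Groups.Count} groups found; {grouping.MessyGroup.Count} faces had no match.");
+```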
+
+## Data privacy and security
+
+As with all of the Cognitive Services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. For more information, see the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center.
+
+## Next steps
+
+Follow a quickstart to code the basic components of a face recognition app in the language of your choice.
+
+- [Client library quickstart](quickstarts-sdk/identity-client-library.md).
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
You can use Image Analysis through a client library SDK or by calling the [REST
This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
Generate a description of an entire image in human-readable language, using comp
### Detect faces
-Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](../face/index.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
+Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](./index-identity.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
### Detect image types
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Optical character recognition (OCR) allows you to extract printed or handwritten
This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/call-read-api.md) contain instructions for using the service in more specific or customized ways.
-<!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.
+* The [how-to guides](./how-to/call-read-api.md) contain instructions for using the service in more specific or customized ways.
+<!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features.
* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. --> ## Read API
OCR for print text includes support for English, French, German, Italian, Portug
OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, Spanish languages.
-See [How to specify the model version](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the preview languages and features. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
+See [How to specify the model version](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the preview languages and features. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
## Key features
The Read API includes the following features.
* Handwriting classification for text lines (Latin only) * Available as Distroless Docker container for on-premises deployment
-Learn [how to use the OCR features](./vision-api-how-to-topics/call-read-api.md).
+Learn [how to use the OCR features](./how-to/call-read-api.md).
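As a quick orientation, here's a minimal sketch of calling the Read API from Python with the `azure-cognitiveservices-vision-computervision` client library; the endpoint, key, and image URL are placeholders for your own resource values.

```python
# Minimal Read API sketch (assumes the azure-cognitiveservices-vision-computervision package).
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
image_url = "https://<path-to-an-image-with-text>"  # placeholder

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Submit the image; the Read API is asynchronous and returns an Operation-Location header.
read_response = client.read(image_url, raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

# Poll until the operation finishes.
while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.running, OperationStatusCodes.not_started):
        break
    time.sleep(1)

# Print each recognized line of text.
if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```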
## Use the cloud API or deploy on-premises The Read 3.x cloud APIs are the preferred option for most customers because of ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Azure's Computer Vision service gives you access to advanced algorithms that pro
||| | [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.| |[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library.md) to get started.|
+| [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. |
| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.| ## Computer Vision for digital asset management
cognitive-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md
+
+ Title: 'Quickstart: Use the Face client library'
+
+description: The Face API offers client libraries that make it easy to detect, find similar, identify, verify, and more.
+++
+zone_pivot_groups: programming-languages-set-face
+++ Last updated : 09/27/2021+
+ms.devlang: csharp, golang, javascript, python
+
+keywords: face search by image, facial recognition search, facial recognition, face recognition app
++
+# Quickstart: Use the Face client library
++++++++++++
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Title: What's new in Computer Vision?
-description: This article contains news about Computer Vision.
+description: Stay up to date on recent releases and updates to Azure Computer Vision.
Previously updated : 05/02/2022 Last updated : 05/25/2022 # What's new in Computer Vision
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
## May 2022
Computer Vision's [OCR (Read) API](overview-ocr.md) latest model with [164 suppo
* Performance and latency improvements. * Available as [cloud service](overview-ocr.md#read-api) and [Docker container](computer-vision-how-to-install-containers.md).
-See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
> [!div class="nextstepaction"] > [Get Started with the Read API](./quickstarts-sdk/client-library.md)
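For illustration, a hedged REST sketch of pinning the GA Read model follows; the `2022-04-30` model-version string and the resource values are assumptions to verify against the how-to guide.

```python
# Sketch: call the Read 3.2 REST API and pin a model version via the model-version query parameter.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/vision/v3.2/read/analyze",
    params={"model-version": "2022-04-30"},  # assumed GA model version; check the how-to guide
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<path-to-an-image-with-text>"},  # placeholder
)
response.raise_for_status()

# The analyze call is asynchronous; poll the returned Operation-Location URL until it succeeds.
operation_url = response.headers["Operation-Location"]
result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
print(result["status"])
```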
Computer Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages
* Enhancements including better support for extracting handwritten dates, amounts, names, and single character boxes. * General performance and AI quality improvements
-See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
> [!div class="nextstepaction"] > [Get Started with the Read API](./quickstarts-sdk/client-library.md)
+### New Quality Attribute in Detection_01 and Detection_03
+* To help system builders and their customers capture high quality images which are necessary for high quality outputs from Face API, we're introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concept-face-detection.md) and see how to use it with [QuickStart](./quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio).
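A minimal sketch of requesting the attribute over the Face REST API follows; the endpoint, key, and image URL are placeholders.

```python
# Sketch: request qualityForRecognition from Face - Detect with detection_03 + recognition_04.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceId": "true",
        "returnFaceAttributes": "qualityForRecognition",
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<path-to-a-portrait-image>"},  # placeholder
)
response.raise_for_status()

for face in response.json():
    quality = face["faceAttributes"]["qualityForRecognition"]  # "low", "medium", or "high"
    print(face["faceId"], quality)
```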
+ ## September 2021
Computer Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages
* Enhancements for processing digital PDFs and Machine Readable Zone (MRZ) text in identity documents. * General performance and AI quality improvements
-See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
> [!div class="nextstepaction"] > [Get Started with the Read API](./quickstarts-sdk/client-library.md)
See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-h
The [latest version (v3.2)](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
+## July 2021
+
+### New HeadPose and Landmarks improvements for Detection_03
+
+* The Detection_03 model has been updated to support facial landmarks.
+* The landmarks feature in Detection_03 is much more precise, especially in the eyeball landmarks which are crucial for gaze tracking.
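A rough sketch of requesting landmarks and head pose with `detection_03` over the Face REST API follows; the resource values are placeholders, and `headPose` is assumed to be requested through `returnFaceAttributes` as with other detection models.

```python
# Sketch: request facial landmarks and headPose from Face - Detect with detection_03.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",
        "returnFaceLandmarks": "true",
        "returnFaceAttributes": "headPose",
        "returnFaceId": "false",
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<path-to-a-portrait-image>"},  # placeholder
)
response.raise_for_status()

for face in response.json():
    # Landmarks such as pupilLeft/pupilRight are useful inputs for gaze tracking.
    print(face["faceLandmarks"]["pupilLeft"], face["faceAttributes"]["headPose"])
```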
+ ## May 2021 ### Spatial Analysis container update
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
The Computer Vision API v3.2 is now generally available with the following updates:
-* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./vision-api-how-to-topics/howtocallvisionapi.md) and [overview](./overview-image-analysis.md) to learn more.
-* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./vision-api-how-to-topics/howtocallvisionapi.md) and [overview](./overview-image-analysis.md) to learn more.
+* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
+* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages. * [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment. > [!div class="nextstepaction"] > [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
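For reference, a minimal Python SDK sketch of the Analyze call that exercises the tagging and content moderation models mentioned above (endpoint, key, and image URL are placeholders):

```python
# Sketch: call Image Analysis (v3.2 Analyze) for tags and adult-content flags with the Python SDK.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
image_url = "https://<path-to-an-image>"  # placeholder

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))
analysis = client.analyze_image(
    image_url,
    visual_features=[VisualFeatureTypes.tags, VisualFeatureTypes.adult],
)

# Print the generated tags and the content moderation flags.
for tag in analysis.tags:
    print(f"{tag.name}: {tag.confidence:.2f}")
print("adult:", analysis.adult.is_adult_content, "racy:", analysis.adult.is_racy_content)
```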
+### PersonDirectory data structure
+
+* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md).
+ ## March 2021 ### Computer Vision 3.2 Public Preview update
The Computer Vision Read API v3.2 public preview, available as cloud service and
* Extract text only for selected pages for a multi-page document. * Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premise deployment.
-See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
> [!div class="nextstepaction"] > [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) +
+### New Face API detection model
+* The new Detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+### New detectable Face attributes
+* The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+### New Face API Recognition Model
+* The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). Note that we recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See [Specify a face recognition model](./how-to/specify-recognition-model.md) for more details.
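A hedged sketch that detects two faces with `detection_03`/`recognition_04` and compares them with Face - Verify follows; all resource values and image URLs are placeholders.

```python
# Sketch: detect two faces with detection_03/recognition_04, then compare them with Face - Verify.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
detect_params = {
    "detectionModel": "detection_03",
    "recognitionModel": "recognition_04",
    "returnFaceId": "true",
}

def detect_first_face_id(image_url: str) -> str:
    """Return the faceId of the first face found in the image."""
    response = requests.post(
        f"{endpoint}/face/v1.0/detect",
        params=detect_params, headers=headers, json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()[0]["faceId"]

face_id_1 = detect_first_face_id("https://<image-of-person-1>")  # placeholder URLs
face_id_2 = detect_first_face_id("https://<image-of-person-2>")

verify = requests.post(
    f"{endpoint}/face/v1.0/verify",
    headers=headers, json={"faceId1": face_id_1, "faceId2": face_id_2},
)
verify.raise_for_status()
print(verify.json())  # e.g. {"isIdentical": true, "confidence": 0.9}
```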
+ ## January 2021 ### Spatial Analysis container update
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
* Added support for auto recalibration (by default disabled) via the `enable_recalibration` parameter, please refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details * Camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
+### Mitigate latency
+* The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](./how-to/mitigate-latency.md).
+
+## December 2020
+### Customer configuration for Face ID storage
+* While the Face Service does not store customer images, the extracted face feature(s) will be stored on the server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time range for cached Face IDs is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
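A minimal sketch of setting the parameter on a detect call, assuming placeholder resource values:

```python
# Sketch: shorten the Face ID cache window with the faceIdTimeToLive parameter on Face - Detect.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "returnFaceId": "true",
        "faceIdTimeToLive": "120",  # cache the returned faceIds for 120 seconds (60-86400 allowed)
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<path-to-a-portrait-image>"},  # placeholder
)
response.raise_for_status()
print([face["faceId"] for face in response.json()])
```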
+
+## November 2020
+### Sample Face enrollment app
+* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+ ## October 2020 ### Computer Vision API v3.1 GA
The Computer Vision Read API v3.1 public preview adds these capabilities:
* This preview version of the Read API supports English, Dutch, French, German, Italian, Japanese, Portuguese, Simplified Chinese, and Spanish languages.
-See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
> [!div class="nextstepaction"] > [Learn more about Read API v3.1 Public Preview 2](https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005)
+## August 2020
+### Customer-managed encryption of data at rest
+* The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage your subscription with your own keys called customer-managed keys (CMK). More details can be found at [Customer-managed keys](./identity-encrypt-data-at-rest.md).
+ ## July 2020 ### Read API v3.1 Public Preview with OCR for Simplified Chinese
The Computer Vision Read API v3.1 public preview adds support for Simplified Chi
* This preview version of the Read API supports English, Dutch, French, German, Italian, Portuguese, Simplified Chinese, and Spanish languages.
-See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
> [!div class="nextstepaction"] > [Learn more about Read API v3.1 Public Preview 1](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005)
Computer Vision API v3.0 entered General Availability, with updates to the Read
See the [OCR overview](overview-ocr.md) to learn more.
+## April 2020
+### New Face API Recognition Model
+* The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](./how-to/specify-recognition-model.md).
+ ## March 2020 * TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
You now can use version 3.0 of the Read API to extract printed or handwritten te
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/REST/CSharp-hand-text.md?tabs=version-3) to get started using the 3.0 API. +
+## June 2019
+
+### New Face API detection model
+* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](how-to/specify-detection-model.md).
+
+## April 2019
+
+### Improved attribute accuracy
+* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value now enabled. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
+### Improved processing speeds
+* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
+
+## March 2019
+
+### New Face API recognition model
+* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](how-to/specify-recognition-model.md).
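For illustration, a minimal sketch of creating a PersonGroup pinned to `recognition_02` follows; the resource values and group ID are placeholders.

```python
# Sketch: create a PersonGroup pinned to recognition_02 via the recognitionModel body field.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder
person_group_id = "my-person-group"  # placeholder; lowercase letters, digits, -, _

response = requests.put(
    f"{endpoint}/face/v1.0/persongroups/{person_group_id}",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={
        "name": "My person group",
        "userData": "optional metadata",
        "recognitionModel": "recognition_02",  # faces added later are processed with this model
    },
)
response.raise_for_status()  # 200 OK with an empty body on success
```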
+
+## January 2019
+
+### Face Snapshot feature
+* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get). More details in [How to Migrate your face data to a different Face subscription](how-to/migrate-face-data.md).
+
+## October 2018
+
+### API messages
+* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
+
+## May 2018
+
+### Improved attribute accuracy
+* Improved `gender` attribute significantly and also improved `age`, `glasses`, `facialHair`, `hair`, `makeup` attributes. Use them through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+### Increased file size limit
+* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
+
+## March 2018
+
+### New data structure
+* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to use the large-scale feature](how-to/use-large-scale.md).
+* Increased the [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter range from [1, 5] to [1, 100], with a default of 10.
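A hedged sketch of an Identify call that uses the expanded candidate range follows; the group ID and faceIds are placeholders, and the group is assumed to be already trained.

```python
# Sketch: identify detected faces against a LargePersonGroup, asking for up to 10 candidates each.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/face/v1.0/identify",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={
        "largePersonGroupId": "my-large-person-group",  # placeholder; must already be trained
        "faceIds": ["<faceId-from-a-detect-call>"],     # placeholder faceId(s)
        "maxNumOfCandidatesReturned": 10,               # now accepts 1-100; default is 10
        "confidenceThreshold": 0.7,
    },
)
response.raise_for_status()
for result in response.json():
    print(result["faceId"], result["candidates"])
```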
+
+## May 2017
+
+### New detectable Face attributes
+* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
+* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.
+
+## March 2017
+
+### New detectable Face attribute
+* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+### Fixed issues
+* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+* The detectable face size is set to ensure it is strictly between 36x36 and 4096x4096 pixels.
+
+## November 2016
+### New subscription tier
+* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
+
+## October 2016
+### API messages
+* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+
+## July 2016
+### New features
+* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
+* Added an optional `mode` parameter that enables selection of two working modes, `matchPerson` and `matchFace`, in [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237); the default is `matchPerson`.
+* Added an optional `confidenceThreshold` parameter that lets the user set the threshold for whether one face belongs to a Person object in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+* Added optional `start` and `top` parameters in [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) to let the user specify the starting point and the number of PersonGroups to list.
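A minimal Find Similar sketch in `matchFace` mode, assuming placeholder faceIds obtained from earlier detect calls:

```python
# Sketch: Face - Find Similar in matchFace mode against a small set of candidate faceIds.
import requests

endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/face/v1.0/findsimilars",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={
        "faceId": "<query-faceId-from-a-detect-call>",                 # placeholder
        "faceIds": ["<candidate-faceId-1>", "<candidate-faceId-2>"],   # placeholder candidates
        "maxNumOfCandidatesReturned": 10,
        "mode": "matchFace",  # ignore person identity and rank by facial similarity only
    },
)
response.raise_for_status()
for match in response.json():
    print(match["faceId"], match["confidence"])
```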
+
+## V1.0 changes from V0
+
+* Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
+ [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+* Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected.
+* Deprecated the PersonGroup and Person data in Face V0. Those data cannot be accessed with the Face V1.0 service.
+* Deprecated the V0 endpoint of Face API on June 30, 2016.
++ ## Cognitive Service updates [Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/ReleaseNotes.md
- Title: What's new in Azure Face service?-
-description: Stay up to date on recent releases and updates to the Azure Face service.
------- Previously updated : 09/27/2021----
-# What's new in Azure Face service?
-
-The Azure Face service is updated on an ongoing basis. Use this article to stay up to date with new features, enhancements, fixes, and documentation updates.
-
-## February 2022
-
-### New Quality Attribute in Detection_01 and Detection_03
-* To help system builders and their customers capture high quality images which are necessary for high quality outputs from Face API, we're introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concepts/face-detection.md) and see how to use it with [QuickStart](./quickstarts/client-libraries.md?pivots=programming-language-csharp&tabs=visual-studio).
--
-## July 2021
-
-### New HeadPose and Landmarks improvements for Detection_03
-
-* The Detection_03 model has been updated to support facial landmarks.
-* The landmarks feature in Detection_03 is much more precise, especially in the eyeball landmarks which are crucial for gaze tracking.
--
-## April 2021
-
-### PersonDirectory data structure
-
-* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
-* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](Face-API-How-to-Topics/use-persondirectory.md).
--
-## February 2021
-
-### New Face API detection model
-* The new Detection 03 model is the most accurate detection model currently available. If you're a new a customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./face-api-how-to-topics/specify-detection-model.md) for more details.
-### New detectable Face attributes
-* The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./face-api-how-to-topics/specify-detection-model.md) for more details.
-### New Face API Recognition Model
-* The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). Note that we recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See [Specify a face recognition model](./face-api-how-to-topics/specify-recognition-model.md) for more details.
--
-## January 2021
-### Mitigate latency
-* The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](./face-api-how-to-topics/how-to-mitigate-latency.md).
-
-## December 2020
-### Customer configuration for Face ID storage
-* While the Face Service does not store customer images, the extracted face feature(s) will be stored on server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time ranges for Face IDs being cached is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
-
-## November 2020
-### Sample Face enrollment app
-* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
-
-## August 2020
-### Customer-managed encryption of data at rest
-* The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage your subscription with your own keys called customer-managed keys (CMK). More details can be found at [Customer-managed keys](./encrypt-data-at-rest.md).
-
-## April 2020
-### New Face API Recognition Model
-* The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](./face-api-how-to-topics/specify-recognition-model.md).
-
-## June 2019
-
-### New Face API detection model
-* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](Face-API-How-to-Topics/specify-detection-model.md).
-
-## April 2019
-
-### Improved attribute accuracy
-* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Improved processing speeds
-* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
-
-## March 2019
-
-### New Face API recognition model
-* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](Face-API-How-to-Topics/specify-recognition-model.md).
-
-## January 2019
-
-### Face Snapshot feature
-* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get). More details in [How to Migrate your face data to a different Face subscription](Face-API-How-to-Topics/how-to-migrate-face-data.md).
-
-## October 2018
-
-### API messages
-* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
-
-## May 2018
-
-### Improved attribute accuracy
-* Improved `gender` attribute significantly and also improved `age`, `glasses`, `facialHair`, `hair`, `makeup` attributes. Use them through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Increased file size limit
-* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
-
-## March 2018
-
-### New data structure
-* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to use the large-scale feature](Face-API-How-to-Topics/how-to-use-large-scale.md).
-* Increased [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter from [1, 5] to [1, 100] and default to 10.
-
-## May 2017
-
-### New detectable Face attributes
-* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
-* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.
-
-## March 2017
-
-### New detectable Face attribute
-* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Fixed issues
-* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
-* The detectable face size is set to ensure it is strictly between 36x36 to 4096x4096 pixels.
-
-## November 2016
-### New subscription tier
-* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
-
-## October 2016
-### API messages
-* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
-
-## July 2016
-### New features
-* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
-* Added optional `mode` parameter enabling selection of two working modes: `matchPerson` and `matchFace` in [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and default is `matchPerson`.
-* Added optional `confidenceThreshold` parameter for user to set the threshold of whether one face belongs to a Person object in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Added optional `start` and `top` parameters in [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) to enable user to specify the start point and the total PersonGroups number to list.
-
-## V1.0 changes from V0
-
-* Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
- [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
-* Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected.
-* Deprecated the PersonGroup and Person data in Face V0. Those data cannot be accessed with the Face V1.0 service.
-* Deprecated the V0 endpoint of Face API on June 30, 2016.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
The issues are divided into three types. Refer to the following tables to check
**Auto-rejected**
-Data with these errors will not be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
+Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
| Category | Name | Description | | | -- | |
Unresolved errors listed in the next table affect the quality of training, but d
After you validate your data files, you can use them to build your Custom Neural Voice model.
-1. On the **Train model** tab, select **Train model** to create a voice model with the data you've uploaded.
+1. On the **Train model** tab, select **Train a new model** to create a voice model with the data you've uploaded.
-1. Select the neural training method for your model and target language. By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language for your voice model. For more information, see [language support for Custom Neural Voice](language-support.md#custom-neural-voice). Also see information about [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for neural training.
+1. Select the neural training method for your model and target language.
+
+ By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language for your voice model. For more information, see [language support for Custom Neural Voice](language-support.md#custom-neural-voice). Also see information about [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for neural training.
1. Choose the data you want to use for training, and specify a speaker file.
After you validate your data files, you can use them to build your Custom Neural
>- To create a custom neural voice, select at least 300 utterances. >- To train a neural voice, you must specify a voice talent profile. This profile must provide the audio consent file of the voice talent, acknowledging to use his or her speech data to train a custom neural voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access](https://aka.ms/customneural).
-1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
+1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script, including up to 100 utterances. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
``` This is the waistline, and it's falling.
After you validate your data files, you can use them to build your Custom Neural
> [!NOTE] > Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
- The **Train model** table displays a new entry that corresponds to this newly created model. The table also displays the status: processing, succeeded, or failed. The status reflects the process of converting your data to a voice model, as shown in this table:
+ The **Train model** table displays a new entry that corresponds to this newly created model.
+
+ When the model is training, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.
+
+ :::image type="content" source="media/custom-voice/cnv-cancel-training.png" alt-text="Screenshot that shows how to cancel training for a model.":::
+
+ The table displays the status: processing, succeeded, failed, and canceled. The status reflects the process of converting your data to a voice model, as shown in this table:
| State | Meaning | | -- | - | | Processing | Your voice model is being created. | | Succeeded | Your voice model has been created and can be deployed. | | Failed | Your voice model has failed in training. The cause of the failure might be, for example, unseen data problems or network issues. |
+ | Canceled | The training for your voice model was canceled. |
Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice. > [!NOTE]
- > Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
+ > Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
1. After you finish training the model successfully, you can review the model details.
The quality of the voice depends on many factors, such as:
- The accuracy of the transcript file. - How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
+### Rename your model
+
+If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
++
+Enter the new name in the **Clone voice model** window, and then select **Submit**. The text 'Neural' is automatically added as a suffix to your new model name.
++
+### Test your voice model
+
+After you've trained your voice model, you can test the model on the model details page. Select **DefaultTests** under **Testing** to listen to the sample audio files. The default test samples include 100 sample audio files generated automatically during training to help you test the model. In addition to these 100 files provided by default, your own test script (at most 100 utterances) provided during training is also added to the **DefaultTests** set. You're not charged for testing with **DefaultTests**.
++
+If you want to upload your own test scripts to further test your model, select **Add test scripts** to upload your own test script.
++
+Before uploading a test script, check the [test script requirements](#train-your-custom-neural-voice-model). You'll be charged for the additional testing with batch synthesis, based on the number of billable characters. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+In the **Add test scripts** window, select **Browse for a file** to choose your own script, and then select **Add** to upload it.
++ For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext). > [!NOTE]
-> Custom Neural Voice training is only available in the three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from the three regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
+> Custom Neural Voice training is only available in some regions. However, you can easily copy a neural voice model from these regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
## Next steps
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
Cognitive Services for Big Data can use services from any region in the world, a
|Service Name|Service Description|
|:--|:|
|[Computer Vision](../computer-vision/index.yml "Computer Vision")| The Computer Vision service provides you with access to advanced algorithms for processing images and returning information. |
-|[Face](../face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. |
+|[Face](../computer-vision/index-identity.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. |
### Speech
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/best-practices.md
+
+ Title: Project best practices
+description: Best practices for Question Answering
+++++
+recommendations: false
Last updated : 06/03/2022++
+# Project best practices
+
+The following list of QnA pairs represents a project (knowledge base) and is used to highlight best practices when authoring in custom question answering.
+
+|Question |Answer |
+|-|-|
+|I want to buy a car. |There are three options for buying a car. |
+|I want to purchase software license. |Software licenses can be purchased online at no cost. |
+|How to get access to WPA? |WPA can be accessed via the company portal. |
+|What is the price of Microsoft stock?|$200. |
+|How do I buy Microsoft Services? |Microsoft services can be bought online. |
+|I want to sell car. |Please send car pictures and documents. |
+|How do I get an identification card? |Apply via company portal to get an identification card.|
+|How do I use WPA? |WPA is easy to use with the provided manual. |
+|What is the utility of WPA? |WPA provides a secure way to access company resources. |
+
+## When should you add alternate questions to a QnA?
+
+- Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to questions in the knowledge base. For example, consider the following question answer pair:
+
+   **Question: "What is the price of Microsoft Stock?"**
+
+   **Answer: "$200".**
+
+ The service can return expected responses for semantically similar queries such as:
+
+ "How much is Microsoft stock worth?"
+
+ "How much is Microsoft's share value?"
+
+ "How much does a Microsoft share cost?"
+
+ "What is the market value of Microsoft stock?"
+
+ "What is the market value of a Microsoft share?"
+
+   However, the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
+
+- There are certain scenarios that require the customer to add an alternate question. When a query doesn't return the correct answer despite the answer being present in the knowledge base, we advise adding that query as an alternate question to the intended QnA pair.
+
+## How many alternate questions per QnA is optimal?
+
+- Users can add up to 10 alternate questions depending on their scenario. Alternate questions beyond the first 10 aren't considered by our core ranker. However, they are evaluated in the other processing layers, resulting in better output overall. All the alternate questions will be considered in the preprocessing step to look for an exact match.
+
+- Semantic understanding in question answering should be able to take care of similar alternate questions.
+
+- The return on investment will start diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all intents for the answer are captured by these 10 questions. For the knowledge base above, in QnA #1, adding alternate questions such as "How can I buy a car?" and "I wanna buy a car." isn't required. Whereas adding alternate questions such as "How to purchase a car." and "What are the options for buying a vehicle?" can be useful.
+
+## When to add synonyms to a knowledge base
+
+- Question answering provides the flexibility to use synonyms at the knowledge base level, unlike QnA Maker where synonyms are shared across knowledge bases for the entire service.
+
+- For better relevance, the customer needs to provide a list of acronyms that the end user intends to use interchangeably. For instance, the following is a list of acceptable acronyms:
+
+   MSFT – Microsoft
+
+   ID – Identification
+
+   ETA – Estimated time of Arrival
+
+- Apart from acronyms, if you think your words are similar in the context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the knowledge base has questions on "fixing audio for car X", then we need to add "X" and "car" as synonyms.
+
+- The transformer-based model already takes care of most of the common synonym cases, such as Purchase – Buy, Sell – Auction, and Price – Value. For example, consider the following QnA pair: Q: "What is the price of Microsoft Stock?" A: "$200".
+
+If we receive user queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", "stock value", and so on, they should return the correct answer even though these queries contain words like "share", "value", and "worth" that aren't originally present in the knowledge base.
+
+## How are lowercase/uppercase characters treated?
+
+Question answering takes casing into account, but it's intelligent enough to understand when casing should be ignored. You shouldn't see any perceivable difference due to incorrect casing.
+
+## How are QnAs prioritized for multi-turn questions?
+
+When a KB has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other QnAs, for the next query we give slight preference to all the child QnAs, sibling QnAs, and grandchild QnAs, in that order. Along with any query, the [Question Answering API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a "context" object with the property "previousQnAId", which denotes the last top answer. Based on this previous QnA ID, all the related QnAs are boosted.
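+
+For illustration, here's a minimal sketch of a request body that passes this context to the get-answers operation; the question text and the ID value are placeholders, and `previousQnAId` holds the ID of the previous top answer returned by the service:
+
+```json
+{
+  "question": "How do I use it?",
+  "context": {
+    "previousQnAId": 8
+  }
+}
+```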
+
+## How are accents treated?
+
+Accents are supported for all major European languages. If the query has an incorrect accent, the confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by leveraging fuzzy search.
+
+## How is punctuation in a user query treated?
+
+Punctuation is ignored in a user query before it's sent to the ranking stack. Ideally, it shouldn't impact the relevance scores. The following punctuation characters are ignored: ,?:;\"'(){}[]-+。./!*؟
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Question Answering](../quickstart/sdk.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
* [Anomaly Detector (Preview)](./anomaly-detector/index.yml) * [Custom Vision](./custom-vision-service/index.yml)
-* [Face](./face/index.yml)
+* [Face](./computer-vision/index-identity.yml)
* [Personalizer](./personalizer/index.yml) ## Vision
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
See the tables below to learn about the services offered within those categories
|:--|:|--|
|[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information.| [Computer Vision quickstart](./computer-vision/quickstarts-sdk/client-library.md)|
|[Custom Vision](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. | [Custom Vision quickstart](./custom-vision-service/getting-started-build-a-classifier.md)|
-|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| [Face quickstart](./face/quickstarts/client-libraries.md)|
+|[Face](./computer-vision/index-identity.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| [Face quickstart](./face/quickstarts/client-libraries.md)|
## Speech APIs
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
-The Network Diagnostics Tool enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service to ensure a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://acs-network-diagnostic-tool.azurewebsites.net/). Users can quickly run a test, by pressing the start test button. The tool performs diagnostics on the network, devices, and call quality. The results of the diagnostics are directly provided through the tools UI. No sign-in required to use the tool.
+The **Network Diagnostics Tool** enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service and getting a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://azurecommdiagnostics.net/). Users can quickly run a test by pressing the start test button. The tool performs diagnostics on the network, devices, and call quality. The results of the diagnostics are provided directly through the tool's UI. No sign-in is required to use the tool. After the test, a GUID is presented, which you can provide to our support team for further help.
![Network Diagnostic Tool home screen](../media/network-diagnostic-tool.png)
If you are looking to build your own Network Diagnostic Tool or to perform deepe
## Privacy

When a user runs a network diagnostic, the tool collects and stores service and client telemetry data to verify your network conditions and ensure that they're compatible with Azure Communication Services. The telemetry collected doesn't contain personally identifiable information. The test utilizes both audio and video collected through your device for this verification. The audio and video used for the test aren't stored.
+
+## Support
+
+The test provides a **unique identifier** for your test, which you can give to our support team for further help. For more information, see [help and support options](../../support.md).
## Next Steps
communication-services Messaging Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/messaging-policy.md
# Azure Communication Services Messaging Policy
-Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams and Skype. Integrate SMS messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements and industry standards to get started.
+Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams, Skype, and Exchange. You can easily integrate SMS and email messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements and industry standards to get started.
We know that messaging requirements can seem daunting to learn, but they're as easy as remembering "COMS":
We developed this messaging policy to help you satisfy regulatory requirements a
### What is consent?
-Consent is an agreement between you and the message recipient that allows you to send automated messages to them. You must obtain consent before sending the first message, and you should make clear to the recipient that they're agreeing to receive messages from you. This procedure is known as receiving ΓÇ£prior express consentΓÇ¥ from the individual you intend to message.
+Consent is an agreement between you and the message recipient that allows you to send application to person (A2P) messages to them. You must obtain consent before sending the first message, and you should make clear to the recipient that they're agreeing to receive messages from you. This procedure is known as receiving "prior express consent" from the individual you intend to message.
-The messages that you send must be the same type of messages that the recipient agreed to receive and should only be sent to the number that the recipient provided to you. If you intend to send informational messages, such as appointment reminders or alerts, then consent can be either written or oral. If you intend to send promotional messages, such as sales or marketing messages that promote a product or service, then consent must be written.
+The messages that you send must be the same type of messages that the recipient agreed to receive and should only be sent to the number or email address that the recipient provided to you. If you intend to send informational messages, such as appointment reminders or alerts, then consent can be either written or oral. If you intend to send promotional messages, such as sales or marketing messages that promote a product or service, then consent must be written.
### How do you obtain consent? Consent can be obtained in a variety of ways, such as: -- When a user enters their telephone number into a website,
+- When a user enters their telephone number or email address into a website,
- When a user initiates a text message exchange, or - When a user sends a sign-up keyword to your phone number.
Regardless of how consent is obtained, you and your customers must ensure that t
- Provide a "Call to Action" before obtaining consent. You and your customers should provide potential message recipients with a "call to action" that invites them to opt-in to your messaging program. The call to action should include, at a minimum: (1) the identity of the message sender, (2) clear opt-in instructions, (3) opt-out instructions, and (4) any associated messaging fees.
- Consent isn't transferable or assignable. Any consent that an individual provides to you cannot be transferred or sold to an unaffiliated third party. If you collect an individual's consent for a third party, then you must clearly identify the third party to the individual. You must also state that the consent you obtained applies only to communications from the third party.
-- Consent is limited in purpose. An individual who provides their number for a particular purpose consents to receive communications only for that specific purpose and from that specific message sender. Before obtaining consent, you should clearly notify the intended message recipient if you'll send recurring messages or messages from an affiliate.
+- Consent is limited in purpose. An individual who provides their number or an email address for a particular purpose consents to receive communications only for that specific purpose and from that specific message sender. Before obtaining consent, you should clearly notify the intended message recipient if you'll send recurring messages or messages from an affiliate.
### Consent best practices:
In addition to the messaging requirements discussed above, you may want to imple
- Detailed "Call to Action" information. To ensure that you obtain appropriate consent, provide - The name or description of your messaging program or product
- - The number(s) from which recipients will receive messages, and
+ - The number(s) or email address(es) from which recipients will receive messages, and
- Any applicable terms and conditions before an individual opts-in to receiving messages from you. - Accurate records of consent. You should retain records of any consent that an individual provides to you for at least four years. Records of consent can include: - Timestamps
Message recipients may revoke consent and opt-out of receiving future messages t
Ensure that message recipients can opt-out of future messages at any time. You must also offer multiple opt-out options. After a message recipient opts-out, you should not send additional messages unless the individual provides renewed consent.
-One of the most common opt-out mechanisms is to include a ΓÇ£STOPΓÇ¥ keyword in the initial message of every new conversation. Be prepared to remove customers that reply with a lowercase ΓÇ£stopΓÇ¥ or other common keywords, such as ΓÇ£unsubscribeΓÇ¥ or ΓÇ£cancel.ΓÇ¥ After an individual revokes consent, you should remove them from all recurring messaging campaigns unless they expressly elect to continue receiving messages from a particular program.
+One of the most common opt-out mechanisms in SMS applications is to include a "STOP" keyword in the initial message of every new conversation. Be prepared to remove customers that reply with a lowercase "stop" or other common keywords, such as "unsubscribe" or "cancel."
+
+For email, the most common opt-out mechanism is to embed an unsubscribe link in every email sent to the customer. If the customer selects the unsubscribe link, you should be prepared to remove that customer's email address(es) from your communication list.
+
+After an individual revokes consent, you should remove them from all recurring messaging campaigns unless they expressly elect to continue receiving messages from a particular program.
### Opt-out best practices:
-In addition to keywords, other common opt-out mechanisms include providing customers with a designated opt-out e-mail address, the phone number of customer support staff, or a link to unsubscribe on your webpage.
+In addition to keywords, other common opt-out mechanisms include providing customers with a designated opt-out e-mail address, the phone number of customer support staff, or a link to unsubscribe embedded in an email message you sent or available on your webpage.
-### How we handle opt-out requests:
+### How we handle opt-out requests for SMS
If an individual requests to opt-out of future messages on an Azure Communication Services toll-free number, then all further traffic from that number will be automatically stopped. However, you must still ensure that you do not send additional messages for that messaging campaign from new or different numbers. If you have separately obtained express consent for a different messaging campaign, then you may continue to send messages from a different number for that campaign. Check out our FAQ section to learn more on [Opt-out handling](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/communication-services/concepts/sms/sms-faq.md#how-can-i-receive-messages-using-azure-communication-services)
+### How we handle opt-out requests for email
+
+If an individual uses the unsubscribe UI page to opt out of future messages sent through Azure Communication Services, you must add that recipient's email address to the suppression list, which is used to filter recipients during the send-mail process.
+ ## Message content ### Adult content:
We reserve the right to modify the list of prohibited message content at any tim
## Spoofing
-Spoofing is the act of causing a misleading or inaccurate originating number to display on a message recipientΓÇÖs device. We strongly discourage you and any service provider that you use from sending spoofed messages. Spoofing shields the identity of the message sender and prevents message recipients from easily opting out of unwanted communications. We also require that you abide by all applicable spoofing laws.
+Spoofing is the act of causing a misleading or inaccurate originating number or email address to display on a message recipient's device. We strongly discourage you and any service provider that you use from sending spoofed messages. Spoofing shields the identity of the message sender and prevents message recipients from easily opting out of unwanted communications. We also require that you abide by all applicable spoofing laws.
## Final thoughts
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The SBC makes a DNS query to resolve sip.pstnhub.microsoft.com. Based on the SBC
## Media traffic: IP and Port ranges
-The media traffic flows to and from a separate service called Media Processor. At the moment of publishing, Media Processor for Communication Services can use any Azure IP address.
-Download [the full list of addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+The media traffic flows to and from a separate service in the Microsoft Cloud called Media Processor. The IP address range for media traffic:
+- `20.202.0.0/16 (IP addresses from 20.202.0.1 to 20.202.255.254)`
-### Port range
-The port range of the Media Processors is shown in the following table:
+### Port ranges
+The port ranges of the Media Processors are shown in the following table:
|Traffic|From|To|Source port|Destination port|
|: |: |: |: |: |
The port range of the Media Processors is shown in the following table:
## Media traffic: Media processors geography
-The media traffic flows via components called media processors. Media processors are placed in the same datacenters as SIP proxies:
+Media Processors are placed in the same datacenters as SIP proxies:
- NOAM (US South Central, two in US West and US East datacenters) - Europe (UK South, France Central, Amsterdam and Dublin datacenters) - Asia (Singapore datacenter)
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration Previously updated : 06/01/2022 Last updated : 06/08/2022 tags: connectors
The SQL Server connector has different versions, based on [logic app type and ho
|--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). | | **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For operations, managed connector limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). For ISE-versioned limits, review the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed connector's message limits. |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The built-in version differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. The action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). <br><br>The built-in connector differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. This action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For built-in connector operations, limits, and other information, review the [SQL Server built-in connector reference](#built-in-connector-operations). |
||||
+## Limitations
+
+For more information, review the [SQL Server managed connector reference](/connectors/sql/) or the [SQL Server built-in connector reference](#built-in-connector-operations).
+ ## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The SQL Server connector has different versions, based on [logic app type and ho
You can use the SQL Server built-in connector, which requires a connection string. To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
-For other connector requirements, review [SQL Server connector reference](/connectors/sql/).
-
-## Limitations
-
-For more information, review the [SQL Server connector reference](/connectors/sql/).
+For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
<a name="add-sql-trigger"></a>
When you call a stored procedure by using the SQL Server connector, the returned
1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want.
+<a name="built-in-connector-operations"></a>
+
+## Built-in connector operations
++
+### Actions
+
+The SQL Server built-in connector has a single action.
+
+#### Execute Query
+
+Operation ID: `executeQuery`
+
+Runs a query against a SQL database.
+
+##### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Query** | `query` | True | Dynamic | The body for your query |
+| **Query Parameters** | `queryParameters` | False | Objects | The parameters for your query |
+||||||
+
+##### Returns
+
+The outputs from this operation are dynamic.
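+
+As a rough illustration, the action's inputs might be shaped like the following sketch. Only the `query` and `queryParameters` keys come from the table above; the table, column, and parameter values are hypothetical placeholders, and the exact shape of `queryParameters` can vary, so check the designer or the connector reference for your scenario.
+
+```json
+{
+  "query": "SELECT * FROM SalesLT.Customer WHERE CustomerID = @CustomerID",
+  "queryParameters": {
+    "CustomerID": "<CUSTOMER_ID>"
+  }
+}
+```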
+
+## Built-in connector app settings
+
+The SQL Server built-in connector includes app settings on your Standard logic app resource that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the default timeout value for connector operations. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
+ ## Troubleshoot problems <a name="connection-problems"></a>
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
Previously updated : 05/15/2022 Last updated : 06/07/2022
Azure Container Apps allows you to bind one or more custom domains to a containe
- Every domain name must be associated with a domain certificate.
- Certificates are applied to the container app environment and are bound to individual container apps. You must have role-based access to the environment to add certificates.
- [SNI domain certificates](https://wikipedia.org/wiki/Server_Name_Indication) are required.
+- Ingress must be enabled for the container app.
## Add a custom domain and certificate
-> [!NOTE]
-> If you are using a new certificate, you must have an existing [SNI domain certificate](https://wikipedia.org/wiki/Server_Name_Indication) file available to upload to Azure.
+> [!IMPORTANT]
+> If you are using a new certificate, you must have an existing [SNI domain certificate](https://wikipedia.org/wiki/Server_Name_Indication) file available to upload to Azure.
1. Navigate to your container app in the [Azure portal](https://portal.azure.com)
+1. Verify that your app has ingress enabled by selecting **Ingress** in the *Settings* section. If ingress is not enabled, enable it with these steps:
+
+ 1. Set *HTTP Ingress* to **Enabled**.
+ 1. Select the desired *Ingress traffic* setting.
+ 1. Enter the *Target port*.
+ 1. Select **Save**.
+ 1. Under the *Settings* section, select **Custom domains**. 1. Select the **Add custom domain** button.
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 05/10/2022 Last updated : 06/07/2022 # Dapr integration with Azure Container Apps
-The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable APIs that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once enabled in Container Apps, Dapr exposes its HTTP and gRPC APIs via a sidecar: a process that runs in tandem with each of your Container Apps.
+The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable APIs that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once Dapr is enabled in Container Apps, it exposes its HTTP and gRPC APIs via a sidecar: a process that runs in tandem with each of your Container Apps.
Dapr APIs, also referred to as building blocks, are built on best practice industry standards, that:
The following Pub/sub example demonstrates how Dapr works alongside your contain
| -- | - | -- | | 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring Dapr settings. Dapr settings apply across all revisions of a given container app. | | 2 | Dapr sidecar | Fully managed Dapr APIs are exposed to your container app via the Dapr sidecar. These APIs are available through HTTP and gRPC protocols. By default, the sidecar runs on port 3500 in Container Apps. |
-| 3 | Dapr component | Dapr components can be shared by multiple container apps. Using scopes, the Dapr sidecar will determine which components to load for a given container app at runtime. |
+| 3 | Dapr component | Dapr components can be shared by multiple container apps. The Dapr sidecar uses scopes to determine which components to load for a given container app at runtime. |
### Enable Dapr
-You can define the Dapr configuration for a container app through the Azure CLI or using Infrastructure as Code templates like bicep or ARM. With the following settings, you enable Dapr on your app:
+You can define the Dapr configuration for a container app through the Azure CLI or by using Infrastructure as Code templates such as a Bicep or an Azure Resource Manager (ARM) template. You can enable Dapr in your app with the following settings:
-| Field | Description |
-| -- | -- |
-| `--enable-dapr` / `enabled` | Enables Dapr on the container app. |
-| `--dapr-app-port` / `appPort` | Identifies which port your application is listening. |
-| `--dapr-app-protocol` / `appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
-| `--dapr-app-id` / `appId` | The unique ID of the application. Used for service discovery, state encapsulation, and the pub/sub consumer ID. |
+| CLI Parameter | Template field | Description |
+| -- | -- | -- |
+| `--enable-dapr` | `dapr.enabled` | Enables Dapr on the container app. |
+| `--dapr-app-port` | `dapr.appPort` | Identifies which port your application is listening on. |
+| `--dapr-app-protocol` | `dapr.appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
+| `--dapr-app-id` | `dapr.appId` | The unique ID of the application. Used for service discovery, state encapsulation, and the pub/sub consumer ID. |
-Since Dapr settings are considered application-scope changes, new revisions aren't created when you change Dapr settings. However, when changing a Dapr setting, the container app instance and revisions are automatically restarted.
+The following example shows how to define a Dapr configuration in a template by adding it to the `properties.configuration` section of your container app resource declaration.
+
+# [Bicep](#tab/bicep1)
+
+```bicep
+ dapr: {
+ enabled: true
+ appId: 'nodeapp'
+ appProtocol: 'http'
+ appPort: 3000
+ }
+```
+
+# [ARM](#tab/arm1)
+
+```json
+ "dapr": {
+ "enabled": true,
+ "appId": "nodeapp",
+ "appProtocol": "http",
+ "appPort": 3000
+ }
+
+```
+++
+Since Dapr settings are considered application-scope changes, new revisions aren't created when you change Dapr settings. However, when you change Dapr settings, the container app revisions and replicas are automatically restarted.
### Configure Dapr components
Once Dapr is enabled on your container app, you're able to plug in and use the [
- Can be easily modified to point to any one of the component implementations. - Can reference secure configuration values using Container Apps secrets.
-Based on your needs, you can "plug in" certain Dapr component types like state stores, pub/sub brokers, and more. In the examples below, you will find the various schemas available for defining a Dapr component in Azure Container Apps. The Container Apps manifests differ sightly from the Dapr OSS manifests in order to simplify the component creation experience.
+Based on your needs, you can "plug in" certain Dapr component types like state stores, pub/sub brokers, and more. In the examples below, you'll find the various schemas available for defining a Dapr component in Azure Container Apps. The Container Apps manifests differ slightly from the Dapr OSS manifests in order to simplify the component creation experience.
> [!NOTE] > By default, all Dapr-enabled container apps within the same environment will load the full set of deployed components. By adding scopes to a component, you tell the Dapr sidecars for each respective container app which components to load at runtime. Using scopes is recommended for production workloads. # [YAML](#tab/yaml)
-When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. When configuring multiple components, you will need to create a separate YAML file and run the Azure CLI command for each component.
+When defining a Dapr component via YAML, you'll pass your component manifest into the Azure CLI. When configuring multiple components, you'll need to create a separate YAML file and run the Azure CLI command for each component.
For example, deploy a `pubsub.yaml` component using the following command:
For example, deploy a `pubsub.yaml` component using the following command:
az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml" ```
-The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`.
+The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`.
```yaml # pubsub.yaml for Azure Service Bus component
The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with ap
This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
-The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
```bicep resource daprComponent 'daprComponents@2022-03-01' = {
resource daprComponent 'daprComponents@2022-03-01' = {
A Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
-This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
```json {
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Once your Azure Blob Storage account is created, you'll create a template where
### Create Azure Resource Manager (ARM) template
-Create an ARM template to deploy a Container Apps environment including:
+Create an ARM template to deploy a Container Apps environment that includes:
* the associated Log Analytics workspace
-* Application Insights resource for distributed tracing
+* the Application Insights resource for distributed tracing
* a dapr component for the state store
-* two dapr-enabled container apps
+* the two dapr-enabled container apps
Save the following file as _hello-world.json_:
Save the following file as _hello-world.json_:
"managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]", "configuration": { "ingress": {
- "external": true,
+ "external": false,
"targetPort": 3000 }, "dapr": {
Save the following file as _hello-world.json_:
{ "image": "dapriosamples/hello-k8s-node:latest", "name": "hello-k8s-node",
+ "env": [
+ {
+ "name": "APP_PORT",
+ "value": "3000"
+ }
+ ],
"resources": { "cpu": 0.5, "memory": "1.0Gi"
Save the following file as _hello-world.json_:
### Create Azure Bicep templates
-Create a bicep template to deploy a Container Apps environment including:
+Create a bicep template to deploy a Container Apps environment that includes:
* the associated Log Analytics workspace
-* Application Insights resource for distributed tracing
+* the Application Insights resource for distributed tracing
* a dapr component for the state store * the two dapr-enabled container apps
resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
managedEnvironmentId: environment.id configuration: { ingress: {
- external: true
+ external: false
targetPort: 3000 } dapr: {
resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
{ image: 'dapriosamples/hello-k8s-node:latest' name: 'hello-k8s-node'
+ env: [
+ {
+ name: 'APP_PORT'
+ value: '3000'
+ }
+ ]
resources: { cpu: json('0.5') memory: '1.0Gi'
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
az containerapp env dapr-component set `
-Your state store is configured using the Dapr component described in *statestore.yaml*. The component is scoped to a container app named `nodeapp` and is not available to other container apps.
+Your state store is configured using the Dapr component described in *statestore.yaml*. The component is scoped to a container app named `nodeapp` and isn't available to other container apps.
## Deploy the service application (HTTP web server)
az containerapp create \
--environment $CONTAINERAPPS_ENVIRONMENT \ --image dapriosamples/hello-k8s-node:latest \ --target-port 3000 \
- --ingress 'external' \
+ --ingress 'internal' \
--min-replicas 1 \ --max-replicas 1 \ --enable-dapr \
+ --dapr-app-id nodeapp \
--dapr-app-port 3000 \
- --dapr-app-id nodeapp
+ --env-vars 'APP_PORT=3000'
``` # [PowerShell](#tab/powershell)
az containerapp create `
--environment $CONTAINERAPPS_ENVIRONMENT ` --image dapriosamples/hello-k8s-node:latest ` --target-port 3000 `
- --ingress 'external' `
+ --ingress 'internal' `
--min-replicas 1 ` --max-replicas 1 ` --enable-dapr `
+ --dapr-app-id nodeapp `
--dapr-app-port 3000 `
- --dapr-app-id nodeapp
+ --env-vars 'APP_PORT=3000'
```
az containerapp create `
This command deploys: * the service (Node) app server on `--target-port 3000` (the app port)
-* its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
+* its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
## Deploy the client application (headless client)
az containerapp create `
-This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless there is no `--target-port` to start a server, nor is there a need to enable ingress.
+This command deploys `pythonapp`, which also runs with a Dapr sidecar that's used to look up and securely call the Dapr sidecar for `nodeapp`. Because this app is headless, there's no `--target-port` to start a server, nor is there a need to enable ingress.
## Verify the result
You can confirm that the services are working correctly by viewing data in your
### View Logs
-Data logged via a container app are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or with the CLI. Wait a few minutes for the analytics to arrive for the first time before you are able to query the logged data.
+Data logged via a container app are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or with the CLI. Wait a few minutes for the analytics to arrive for the first time before you're able to query the logged data.
Use the following CLI command to view logs on the command line.
nodeapp Got a new order! Order ID: 63 PrimaryResult 2021-10-22
## Clean up resources
-Once you are done, run the following command to delete your resource group along with all the resources you created in this tutorial.
+Once you're done, run the following command to delete your resource group along with all the resources you created in this tutorial.
# [Bash](#tab/bash)
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Title: Manage revisions in Azure Container Apps
-description: Manage revisions and traffic splitting in Azure Container Apps.
+description: Manage revisions and traffic splitting in Azure Container Apps.
Previously updated : 11/02/2021 Last updated : 06/07/2022 -
-# Manage revisions Azure Container Apps
+# Manage revisions in Azure Container Apps
-Supporting multiple revisions in Azure Container Apps allows you to manage the versioning and amount of [traffic sent to each revision](#traffic-splitting). Use the following commands to control of how your container app manages revisions.
+Supporting multiple revisions in Azure Container Apps allows you to manage the versioning of your container app. With this feature, you can activate and deactivate revisions, and control the amount of [traffic sent to each revision](#traffic-splitting). To learn more about revisions, see [Revisions in Azure Container Apps](revisions.md).
-## List
+A revision is created when you first deploy your application. New revisions are created when you [update](#updating-your-container-app) your application with [revision-scope changes](revisions.md#revision-scope-changes). You can also update your container app based on a specific revision.
-List all revisions associated with your container app with `az containerapp revision list`.
+
+This article describes the commands you can use to manage your container app's revisions. For more information about Container Apps commands, see [`az containerapp`](/cli/azure/containerapp). For more information about commands to manage revisions, see [`az containerapp revision`](/cli/azure/containerapp/revision).
++
+## Updating your container app
+
+To update a container app, use the `az containerapp update` command. With this command you can modify environment variables, compute resources, scale parameters, and deploy a different image. If your container app update includes [revision-scope changes](revisions.md#revision-scope-changes), a new revision will be generated.
+
+You may also use a YAML file to define these and other configuration options and parameters. For more information about this command, see [`az containerapp update`](/cli/azure/containerapp#az-containerapp-update).
+
+This example updates the container image. (Replace the \<placeholders\> with your values.)
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp update \
+ --name <APPLICATION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --image <IMAGE_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp update `
+ --name <APPLICATION_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --image <IMAGE_NAME>
+```
+++
+You can also update your container app with the [Revision copy](#revision-copy) command.
+
+## Revision list
+
+List all revisions associated with your container app with `az containerapp revision list`. For more information about this command, see [`az containerapp revision list`](/cli/azure/containerapp/revision#az-containerapp-revision-list).
# [Bash](#tab/bash)
az containerapp revision list `
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision show
-## Show
+Show details about a specific revision by using `az containerapp revision show`. For more information about this command, see [`az containerapp revision show`](/cli/azure/containerapp/revision#az-containerapp-revision-show).
-Show details about a specific revision by using `az containerapp revision show`.
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision show \ --name <REVISION_NAME> \
- --app <CONTAINER_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision show \
```azurecli az containerapp revision show ` --name <REVISION_NAME> `
- --app <CONTAINER_APP_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision copy
+
+To create a new revision based on an existing revision, use the `az containerapp revision copy` command. Container Apps will use the configuration of the existing revision, which you may then modify.
-## Update
+With this command, you can modify environment variables, compute resources, scale parameters, and deploy a different image. You may also use a YAML file to define these and other configuration options and parameters. For more information regarding this command, see [`az containerapp revision copy`](/cli/azure/containerapp/revision#az-containerapp-revision-copy).
-To update a container app, use `az containerapp update`.
+This example copies the latest revision and sets the compute resource parameters. (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli
-az containerapp update \
+az containerapp revision copy \
--name <APPLICATION_NAME> \ --resource-group <RESOURCE_GROUP_NAME> \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld
+ --cpu 0.75 \
+ --memory 1.5Gi
``` # [PowerShell](#tab/powershell) ```azurecli
-az containerapp update `
+az containerapp revision copy `
--name <APPLICATION_NAME> ` --resource-group <RESOURCE_GROUP_NAME> `
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld
+ --cpu 0.75 `
+ --memory 1.5Gi
```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision activate
-## Activate
+Activate a revision by using `az containerapp revision activate`. For more information about this command, see [`az containerapp revision activate`](/cli/azure/containerapp/revision#az-containerapp-revision-activate).
-Activate a revision by using `az containerapp revision activate`.
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision activate \ --revision <REVISION_NAME> \
- --name <CONTAINER_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision activate \
```poweshell az containerapp revision activate ` --revision <REVISION_NAME> `
- --name <CONTAINER_APP_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision deactivate
-## Deactivate
+Deactivate revisions that are no longer in use with `az containerapp revision deactivate`. Deactivation stops all running replicas of a revision. For more information, see [`az containerapp revision deactivate`](/cli/azure/containerapp/revision#az-containerapp-revision-deactivate).
-Deactivate revisions that are no longer in use with `az containerapp revision deactivate`. Deactivation stops all running replicas of a revision.
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision deactivate \ --revision <REVISION_NAME> \
- --name <CONTAINER_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision deactivate \
```azurecli az containerapp revision deactivate ` --revision <REVISION_NAME> `
- --name <CONTAINER_APP_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision restart
+
+This command restarts a revision. For more information about this command, see [`az containerapp revision restart`](/cli/azure/containerapp/revision#az-containerapp-revision-restart).
-## Restart
+When you modify secrets in your container app, you'll need to restart the active revisions so they can access the secrets.
-All existing container apps revisions will not have access to this secret until they are restarted
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision restart \ --revision <REVISION_NAME> \
- --name <APPLICATION_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision restart \
```azurecli az containerapp revision restart ` --revision <REVISION_NAME> `
- --name <APPLICATION_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision set mode
-## Set active revision mode
+The revision mode controls whether only a single revision or multiple revisions of your container app can be simultaneously active. To set your container app to support [single revision mode](revisions.md#single-revision-mode) or [multiple revision mode](revisions.md#multiple-revision-mode), use the `az containerapp revision set-mode` command.
-Configure whether or not your container app supports multiple active revisions.
+The default setting is *single revision mode*. For more information about this command, see [`az containerapp revision set-mode`](/cli/azure/containerapp/revision#az-containerapp-revision-set-mode).
-The `activeRevisionsMode` property accepts two values:
+The mode values are `single` or `multiple`. Changing the revision mode doesn't create a new revision.
-- `multiple`: Configures the container app to allow more than one active revision.
+Example: (Replace the \<placeholders\> with your values.)
-- `single`: Automatically deactivates all other revisions when a revision is activated. Enabling `single` mode makes it so that when you create a revision-scope change and a new revision is created, any other revisions are automatically deactivated.
+# [Bash](#tab/bash)
-```json
-{
- ...
- "resources": [
- {
- ...
- "properties": {
- "configuration": {
- "activeRevisionsMode": "multiple"
- }
- }
- }]
-}
+```azurecli
+az containerapp revision set-mode \
+ --name <APPLICATION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --mode single
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision set-mode `
+ --name <APPLICATION_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --mode single
+```
+++
+## Revision labels
+
+Labels provide a unique URL that you can use to direct traffic to a revision. You can move a label between revisions to reroute traffic directed to the label's URL to a different revision. For more information about revision labels, see [Revision Labels](revisions.md#revision-labels).
+
+You can add a label to or remove a label from a revision. For more information about the label commands, see [`az containerapp revision label`](/cli/azure/containerapp/revision/label).
+
+### Revision label add
+
+To add a label to a revision, use the [`az containerapp revision label add`](/cli/azure/containerapp/revision/label#az-containerapp-revision-label-add) command.
+
+You can only assign a label to one revision at a time, and a revision can only be assigned one label. If the revision you specify has a label, the add command will replace the existing label.
+
+This example adds a label to a revision: (Replace the \<placeholders\> with your values.)
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision label add \
+ --revision <REVISION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --label <LABEL_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision label add `
+ --revision <REVISION_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --label <LABEL_NAME>
```
-The following configuration fragment shows how to set the `activeRevisionsMode` property. Changes made to this property require the context of the container app's full ARM template.
++
+### Revision label remove
+
+To remove a label from a revision, use the [`az containerapp revision label remove`](/cli/azure/containerapp/revision/label#az-containerapp-revision-label-remove) command.
+
+This example removes a label from a revision: (Replace the \<placeholders\> with your values.)
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision label remove \
+ --revision <REVISION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --label <LABEL_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision label remove `
+ --revision <REVISION_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --label <LABEL_NAME>
+```
++
## Traffic splitting

By assigning percentage values, you can decide how to balance traffic among different revisions. Traffic splitting rules are assigned by setting weights to different revisions.
-The following example shows how to split traffic between three revisions.
+The following example shows how to split traffic between three revisions.
```json {
Each revision gets traffic based on the following rules:
- 30% of the requests go to REVISION2
- 20% of the requests go to the latest revision
-The sum total of all revision weights must equal 100.
+The sum of all revision weights must equal 100.
-In this example, replace the `<REVISION*_NAME>` placeholders with revision names in your container app. You access revision names via the [list](#list) command.
+In this example, replace the `<REVISION*_NAME>` placeholders with revision names in your container app. You access revision names via the [revision list](#revision-list) command.
## Next steps
-> [!div class="nextstepaction"]
-> [Get started](get-started.md)
+* [Revisions in Azure Container Apps](revisions.md)
+* [Application lifecycle management in Azure Container Apps](application-lifecycle-management.md)
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
Budget alerts notify you when spending, based on usage or cost, reaches or excee
In the Azure portal, budgets are defined by cost. Using the Azure Consumption API, budgets are defined by cost or by consumption usage. Budget alerts support both cost-based and usage-based budgets. Budget alerts are generated automatically whenever the budget alert conditions are met. You can view all cost alerts in the Azure portal. Whenever an alert is generated, it's shown in cost alerts. An alert email is also sent to the people in the alert recipients list of the budget.
-If you have an Enterprise Agreement, you can [Create and edit budgets with PowerShell](tutorial-acm-create-budgets.md#create-and-edit-budgets-with-powershell). However, we recommend that you use REST APIs to create and edit budgets because CLI commands might not support the latest version of the APIs.
+If you have an Enterprise Agreement, you can [Create and edit budgets with PowerShell](tutorial-acm-create-budgets.md#create-and-edit-budgets-with-powershell). Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically.
You can use the Budget API to send email alerts in a different language. For more information, see [Supported locales for budget alert emails](manage-automation.md#supported-locales-for-budget-alert-emails).
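As a minimal sketch of the REST approach, the following `az rest` call creates a monthly cost budget at subscription scope. The request body shape and the `api-version` value are assumptions based on the Consumption Budgets API pattern; check the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) reference for the current schema before using it.

```azurecli
# Sketch: create or update a monthly cost budget at subscription scope.
# The api-version and body schema are assumptions; verify against the Budgets REST API reference.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Consumption/budgets/<BUDGET_NAME>?api-version=2021-10-01" \
  --body '{
    "properties": {
      "category": "Cost",
      "amount": 100,
      "timeGrain": "Monthly",
      "timePeriod": {
        "startDate": "2022-07-01T00:00:00Z",
        "endDate": "2023-06-30T00:00:00Z"
      },
      "notifications": {
        "Actual_GreaterThan_80_Percent": {
          "enabled": true,
          "operator": "GreaterThan",
          "threshold": 80,
          "contactEmails": [ "<EMAIL_ADDRESS>" ]
        }
      }
    }
  }'
```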
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Budget integration with action groups works for action groups which have enabled
If you're an EA customer, you can create and edit budgets programmatically using the Azure PowerShell module. However, we recommend that you use REST APIs to create and edit budgets because CLI commands might not support the latest version of the APIs. > [!NOTE]
-> Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically because PowerShell and CLI aren't yet supported.
+> Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically.
To download the latest version of Azure PowerShell, run the following command:
cost-management-billing Ea Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md
Previously updated : 06/07/2022 Last updated : 06/08/2022
Some third-party reseller services available on Azure Marketplace now consume yo
### Partners > [!NOTE]
-> The Azure Marketplace price list feature in the EA portal is retired. The same feature is available in the Azure portal.
+> The Azure Marketplace price list feature in the EA portal is retired.
LSPs can download an Azure Marketplace price list from the price sheet page in the Azure Enterprise portal. Select the **Marketplace Price list** link in the upper right. Azure Marketplace price list shows all available services and their prices.
The following services are billed hourly under an Enterprise Agreement instead o
### Azure RemoteApp
-If you have an Enterprise Agreement, you pay for Azure RemoteApp based on your Enterprise Agreement price level. There aren't additional charges. The standard price includes an initial 40 hours. The unlimited price covers an initial 80 hours. RemoteApp stops emitting usage over 80 hours.
+If you have an Enterprise Agreement, you pay for Azure RemoteApp based on your Enterprise Agreement price level. There aren't extra charges. The standard price includes an initial 40 hours. The unlimited price covers an initial 80 hours. RemoteApp stops emitting usage over 80 hours.
## Next steps
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 04/26/2022 Last updated : 06/08/2022 # View and download your Microsoft Azure invoice
-You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Invoices are sent to the person set to receive invoices for the enrollment.
+You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. Invoices are sent to the person set to receive invoices for the enrollment.
-## When invoices are generated
+If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Direct EA administrators can [Download or view their Azure billing invoice](../manage/direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice). Indirect EA administrators can use the information at [Azure Enterprise enrollment invoices](../manage/ea-portal-enrollment-invoices.md) to download their invoice.
-An invoice is generated based on your billing account type. Invoices are created for Microsoft Online Service Program (MOSP) also called pay-as-you-go, Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) billing accounts. Invoices are also generated for Enterprise Agreement (EA) billing accounts. However, invoices for EA billing accounts aren't shown in the Azure portal.
+## Where invoices are generated
+
+An invoice is generated based on your billing account type. Invoices are created for Microsoft Online Service Program (MOSP, also called pay-as-you-go), Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) billing accounts. Invoices are also generated for Enterprise Agreement (EA) billing accounts.
To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](../manage/view-all-accounts.md).
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md
Previously updated : 09/09/2021 Last updated : 06/07/2022 # Copy data from DB2 using Azure Data Factory or Synapse Analytics
Typical properties inside the connection string:
| certificateCommonName | When you use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption, you must enter a value for Certificate common name. | No | > [!TIP]
-> If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is a needed package is not created for the user. By default, the service will try to create the package under the collection named as the user you used to connect to the DB2. Specify the package collection property to indicate under where you want the service to create the needed packages when querying the database.
+> If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is that a needed package wasn't created for the user. By default, the service tries to create the package under a collection named after the user you used to connect to DB2. Specify the package collection property to indicate where you want the service to create the needed packages when querying the database. If you can't determine the package collection name, try setting `packageCollection=NULLID`.
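As a hedged illustration of the tip above, the fragment below shows where `packageCollection` might be set in the linked service connection string. Treat the exact property names and JSON shape as assumptions and confirm them against the example that follows.

```json
{
    "name": "Db2LinkedService",
    "properties": {
        "type": "Db2",
        "typeProperties": {
            "connectionString": "server=<server>:<port>;database=<database>;authenticationType=Basic;username=<username>;password=<password>;packageCollection=NULLID"
        }
    }
}
```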
**Example:**
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 04/01/2022 Last updated : 06/07/2022
The following sections provide details about properties you can use to define Da
## Linked service properties
-> [!Important]
-> Due to Azure service security and compliance request, system-assigned managed identity authentication is no longer available in REST connector for both Copy and Mapping data flow. You are recommended to migrate existing linked services that use system-managed identity authentication to user-assigned managed identity authentication or other authentication types. Please make sure the migration to be done by **September 15, 2022**. For more detailed steps about how to create, manage user-assigned managed identities, refer to [this](data-factory-service-identity.md#user-assigned-managed-identity).
- The following properties are supported for the REST linked service: | Property | Description | Required |
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-lookup-activity.md
Previously updated : 04/06/2022 Last updated : 05/31/2022 # Lookup activity in Azure Data Factory and Azure Synapse Analytics
databox-online Azure Stack Edge Gpu 2205 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2205-release-notes.md
The following table provides a summary of known issues carried over from the pre
|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> - Connect to the Windows VM using remote desktop protocol (RDP). <br> - Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> - If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> - While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> - After you kill the process, the process starts running again with the newer version. <br> - Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> - [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
-|**24.**|GPU VMs |Prior to this release, GPU VM lifecycle wasn't managed in the update flow. Hence, when updating to 2103 release, GPU VMs aren't stopped automatically during the update. You'll need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update, are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` before the update, are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in stopped state, higher the chances that Kubernetes will take over the GPUs. |
-|**25.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
-|**26.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
## Next steps
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 03/23/2022 Last updated : 06/07/2022 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest update
-The current update is Update 2203. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
+The current update is Update 2205. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
-- Device software version - **2.2.1902.4561**
+- Device software version - **2.2.1983.5094**
- Kubernetes server version - **v1.21.7** - IoT Edge version: **0.1.0-beta15**-- Azure Arc version: **1.5.3**-- GPU driver version: **470.57.02**-- CUDA version: **11.4**
+- Azure Arc version: **1.6.6**
+- GPU driver version: **510.47.03**
+- CUDA version: **11.6**
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2203-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2205-release-notes.md).
-**To apply 2203 update, your device must be running 2106 or later.**
+**To apply 2205 update, your device must be running 2106 or later.**
- If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*. -- You can update to 2106 from an older version and then install 2203.
+- You can update to 2106 from an older version and then install 2205.
### Updates for a single-node vs two-node
Do the following steps to download the update from the Microsoft Update Catalog.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
-4. Select **Download**. There are two packages to download for the update. The first package will have two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has two files for the Kubernetes updates (*Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+4. Select **Download**. There are two packages to download for the update. The first package has two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*), and the second package has three files for the Kubernetes updates (*Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe*). Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
### Install the update or the hotfix
This procedure takes around 20 minutes to complete. Perform the following steps
6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2203**.
-7. You will now update the Kubernetes software version. Select the remaining two Kubernetes files together (file with the *Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe* suffix) and repeat the above steps to apply update.
+7. You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (*Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe*) and repeat the above steps to apply the update.
![Screenshot of files selected for the Kubernetes update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-7.png)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
|--|--|:-:|--| | **Attempt to create a new Linux namespace from a container detected (Preview)**<br>(K8S.NODE_NamespaceCreation) | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium | | **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
-| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
+| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | | **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
-| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
+| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alertΓÇÖs extended properties. | Execution | Medium |
-| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
-| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container detected execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
-| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
+| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
+| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
+| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container or directly on a Kubernetes node, has detected execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
-| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low |
+| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium | | **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the clusterΓÇÖs DNS server and poison it. | Lateral Movement | Low | | **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
-| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
-| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
-| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low |
-| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container detected suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
-| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium |
+| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
+| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
+| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
+| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
+| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium |
| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
-| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High |
+| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low | | **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
-| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium |
-| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
-| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
+| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised system. | Execution | Medium |
+| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
+| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Analysis of processes running within a container or directly on a Kubernetes node has detected that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration doesn't use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon by anyone with access to the relevant port. | Execution, Exploitation | Medium |
| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium | | **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High | | **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium | | **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
-| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[2](#footnote2)</sup> | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
+| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[2](#footnote2)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low | | **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low | | **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
-| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
-| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
+| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
+| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicate that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
+| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isnΓÇÖt among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low | | **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
-| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
-| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
-| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container indicates a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
-| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container detected the download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
-| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
-| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
-| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
-| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container detected common files as a way to obfuscate their actions or for persistence. | Persistence | Medium |
-| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container detected the initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
-| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
+| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
+| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
+| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
+| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container or directly on a Kubernetes node has detected that a possible known credential access tool was running on the container, as identified by the specified process and command line history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
+| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container or directly on a Kubernetes node, has detected download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
+| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
+| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container or directly on a Kubernetes node has detected possible removal of files that track user activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
+| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
+| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container or directly on a Kubernetes node has detected the overriding of common files as a way to obfuscate actions or to maintain persistence. | Persistence | Medium |
+| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
+| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
-| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
+| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
| **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low | | **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low |
-| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
-| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
+| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of processes running within a container or directly on a Kubernetes node has detected the use of a screen capture tool. This isn't a common usage scenario for containers and could be part of an attacker's attempt to access private data. | Collection | Low |
+| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
+| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
| **SSH server is running inside a container (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
-| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container detected suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
-| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of host/device data detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
+| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
+| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium | | **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
-| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
-| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational |
-| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
-| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
+| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
+| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container or directly on a Kubernetes node has detected a suspicious attempt to access encrypted user passwords. | Persistence | Informational |
+| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
+| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container or directly on a Kubernetes node has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, isn't supported for GKE clusters.
defender-for-cloud Quickstart Enable Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-defender-for-cosmos.md
Title: Enable Microsoft Defender for Azure Cosmos DB
description: Learn how to enable Microsoft Defender for Azure Cosmos DB's enhanced security features. Previously updated : 02/28/2022 Last updated : 06/07/2022 # Quickstart: Enable Microsoft Defender for Azure Cosmos DB
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
### [ARM template](#tab/arm-template)
-Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/?term=cosmosdb-advanced-threat-protection-create-account).
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment.-- Previously updated : 04/28/2022 Last updated : 06/08/2022
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | Preview | Γ£ô (Preview) | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Γ£ô (Preview) | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - |
| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors. Previously updated : 11/09/2021 Last updated : 06/06/2022
If you forgot your password, select the **Recover Password** option. See [Passwo
## Activate the on-premises management console
-After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file.
+After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file. Activation files on the on-premises management console enforce the number of committed devices configured for your subscription and Defender for IoT plan. For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
-To activate the on-premises management console:
+**To activate the on-premises management console**:
1. Sign in to the on-premises management console.
After initial activation, the number of monitored devices might exceed the numbe
If this warning appears, you need to upload a [new activation file](#activate-the-on-premises-management-console).
-### Activate an expired license (versions under 10.0)
+### Activation expirations
+
+After activating an on-premises management console, you'll need to apply new activation files on both the on-premises management console and connected sensors as follows:
+
+|Location |Activation process |
+|||
+|**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#update-committed-devices-in-a-subscription) in your subscription. |
+|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](how-to-manage-individual-sensors.md#download-a-new-activation-file-for-version-221x-or-higher) from a legacy version to version 22.2.x. |
+| **Locally-managed** | Apply a new activation file to locally-managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+
+For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
+
+### Activate expired licenses from versions earlier than 10.0
If you're using a version earlier than 10.0, your license might expire and the following alert will appear: :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="Screenshot that shows the License has expired alert.":::
-To activate your license:
+**To activate your license**:
1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Title: Activate and set up your sensor description: This article describes how to sign in and activate a sensor console. Previously updated : 11/09/2021 Last updated : 06/06/2022
You might need to refresh your screen after uploading the CA-signed certificate.
For information about uploading a new certificate, supported certificate parameters, and working with CLI certificate commands, see [Manage individual sensors](how-to-manage-individual-sensors.md).
+### Activation expirations
+
+After activating a sensor, you'll need to apply new activation files as follows:
+
+|Location |Activation process |
+|||
+|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](how-to-manage-individual-sensors.md#download-a-new-activation-file-for-version-221x-or-higher) from a legacy version to version 22.2.x. |
+| **Locally-managed** | Apply a new activation file to locally-managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+
+For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
+ ### Activate an expired license (versions under 10.0)
digital-twins How To Monitor Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-diagnostics.md
Here are the field and property descriptions for API logs.
| `ResultDescription` | String | Additional details about the event | | `DurationMs` | String | How long it took to perform the event in milliseconds | | `CallerIpAddress` | String | A masked source IP address for the event |
-| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `CorrelationId` | Guid | Unique identifier for the event |
| `ApplicationId` | Guid | Application ID used in bearer authorization | | `Level` | Int | The logging severity of the event | | `Location` | String | The region where the event took place |
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
A verified partner is a partner organization whose identity has been validated b
Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer Azure subscription and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest to the customer an adequate window of time for configuring your system to send or receive events and for creating the channel before the authorization expires. If you attempt to create a channel without authorization or after it has expired, the channel creation will fail and no resource will be created on the customer's Azure subscription. > [!NOTE]
-> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 15th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
>[!IMPORTANT] > **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 15th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 15th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
+> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 30th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 30th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples include a sample expiration time in UTC format.
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
Last updated 03/31/2022
Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release.
+## REST API version 2021-12
+This release corresponds to REST API version 2021-12-01, which includes the following features:
+
+- [Enable managed identities for system topics](enable-identity-system-topics.md)
+- [Enable managed identities for custom topics and domains](enable-identity-custom-topics-domains.md)
+- [Use managed identities to deliver events to destinations](add-identity-roles.md)
+- [Support for delivery attributes](delivery-properties.md)
+- [Storage queue - message time-to-live (TTL)](delivery-properties.md#configure-time-to-live-on-outgoing-events-to-azure-storage-queues)-
+- [Azure Active Directory authentication for topics and domains, and partner namespaces](authenticate-with-active-directory.md)
+ ## REST API version 2021-10 This release corresponds to REST API version 2021-10-15-preview, which includes the following features:
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti
### FastPath and Private Link for 100Gbps ExpressRoute Direct
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypassess the ExpressRoute virtual network gateway in the data path. This is supported for connections associated to 100Gb ExpressRoute Direct circuits. To enable this, follow the below guidance:
+With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This is Generally Available for connections associated with 100Gb ExpressRoute Direct circuits. To enable this feature, follow the guidance below:
1. Send an email to **ERFastPathPL@microsoft.com**, providing the following information: * Azure Subscription ID * Virtual Network (Vnet) Resource ID
firewall Tutorial Hybrid Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal-policy.md
Previously updated : 08/26/2021 Last updated : 06/08/2022 #Customer intent: As an administrator, I want to control network access from an on-premises network to an Azure virtual network.
If you don't have an Azure subscription, create a [free account](https://azure.m
First, create the resource group to contain the resources for this tutorial: 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal home page, select **Resource groups** > **Add**.
+2. On the Azure portal home page, select **Resource groups** > **Create**.
3. For **Subscription**, select your subscription. 1. For **Resource group name**, type **FW-Hybrid-Test**. 2. For **Region**, select **(US) East US**. All resources that you create later must be in the same location.
Now, create the VNet:
1. From the Azure portal home page, select **Create a resource**. 2. In **Networking**, select **Virtual network**.
-7. For **Resource group**, select **FW-Hybrid-Test**.
+1. Select **Create**.
+1. For **Resource group**, select **FW-Hybrid-Test**.
1. For **Name**, type **VNet-Spoke**.
-2. For **Region**, select **(US) East US**.
-3. Select **Next: IP Addresses**.
-4. For **IPv4 address space**, delete the default address and type **10.6.0.0/16**.
-6. Under **Subnet name**, select **Add subnet**.
-7. For **Subnet name** type **SN-Workload**.
-8. For **Subnet address range**, type **10.6.0.0/24**.
-9. Select **Add**.
-10. Select **Review + create**.
-11. Select **Create**.
+1. For **Region**, select **(US) East US**.
+1. Select **Next: IP Addresses**.
+1. For **IPv4 address space**, delete the default address and type **10.6.0.0/16**.
+1. Under **Subnet name**, select **Add subnet**.
+1. For **Subnet name** type **SN-Workload**.
+1. For **Subnet address range**, type **10.6.0.0/24**.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
## Create the on-premises virtual network
Now create a second subnet for the gateway.
2. Select **+Subnet**. 3. For **Name**, type **GatewaySubnet**. 4. For **Subnet address range** type **192.168.2.0/24**.
-5. Select **OK**.
+5. Select **Save**.
## Configure and deploy the firewall Now deploy the firewall into the firewall hub virtual network. 1. From the Azure portal home page, select **Create a resource**.
-2. In the left column, select **Networking**, and search for and then select **Firewall**.
+2. In the left column, select **Networking**, and search for and then select **Firewall**, and then select **Create**.
4. On the **Create a Firewall** page, use the following table to configure the firewall: |Setting |Value |
Now deploy the firewall into the firewall hub virtual network.
|Resource group |**FW-Hybrid-Test** |
|Name |**AzFW01**|
|Region |**East US**|
+ |Firewall tier|**Standard**|
|Firewall management|**Use a Firewall Policy to manage this firewall**|
|Firewall policy|Add new:<br>**hybrid-test-pol**<br>**East US** |
|Choose a virtual network |Use existing:<br> **VNet-hub**|
- |Public IP address |Add new: <br>**fw-pip**. |
+ |Public IP address |Add new: <br>**fw-pip** |
5. Select **Review + create**.
Next, create a couple routes:
12. Select **Routes** in the left column. 13. Select **Add**. 14. For the route name, type **ToSpoke**.
-15. For the address prefix, type **10.6.0.0/16**.
-16. For next hop type, select **Virtual appliance**.
-17. For next hop address, type the firewall's private IP address that you noted earlier.
-18. Select **OK**.
+1. For the **Address prefix destination**, select **IP Addresses**.
+1. For the **Destination IP addresses/CIDR ranges**, type **10.6.0.0/16**.
+1. For next hop type, select **Virtual appliance**.
+1. For next hop address, type the firewall's private IP address that you noted earlier.
+1. Select **Add**.
Now associate the route to the subnet.
Now create the default route from the spoke subnet.
8. Select **Routes** in the left column. 9. Select **Add**. 10. For the route name, type **ToHub**.
-11. For the address prefix, type **0.0.0.0/0**.
-12. For next hop type, select **Virtual appliance**.
-13. For next hop address, type the firewall's private IP address that you noted earlier.
-14. Select **OK**.
+1. For the **Address prefix destination**, select **IP Addresses**.
+1. For the **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
+1. For next hop type, select **Virtual appliance**.
+1. For next hop address, type the firewall's private IP address that you noted earlier.
+1. Select **Add**.
Now associate the route to the subnet.
Now create the spoke workload and on-premises virtual machines, and place them i
Create a virtual machine in the spoke virtual network, running IIS, with no public IP address. 1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
3. Enter these values for the virtual machine:
- - **Resource group** - Select **FW-Hybrid-Test**.
- - **Virtual machine name**: *VM-Spoke-01*.
- - **Region** - Same region that you're used previously.
- - **User name**: \<type a user name\>.
+ - **Resource group** - Select **FW-Hybrid-Test**
+ - **Virtual machine name**: *VM-Spoke-01*
+ - **Region** - Same region that you used previously
+ - **User name**: \<type a user name\>
- **Password**: \<type a password\>
-4. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)**, and **RDP (3389)**
+4. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)**, and **RDP (3389)**.
4. Select **Next:Disks**. 5. Accept the defaults and select **Next: Networking**. 6. Select **VNet-Spoke** for the virtual network and the subnet is **SN-Workload**.
Create a virtual machine in the spoke virtual network, running IIS, with no publ
### Install IIS
+After the virtual machine is created, install IIS.
+ 1. From the Azure portal, open the Cloud Shell and make sure that it's set to **PowerShell**. 2. Run the following command to install IIS on the virtual machine and change the location if necessary:
Create a virtual machine in the spoke virtual network, running IIS, with no publ
This is a virtual machine that you use to connect using Remote Desktop to the public IP address. From there, you then connect to the on-premises server through the firewall. 1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
3. Enter these values for the virtual machine: - **Resource group** - Select existing, and then select **FW-Hybrid-Test**. - **Virtual machine name** - *VM-Onprem*.
This is a virtual machine that you use to connect using Remote Desktop to the pu
1. First, note the private IP address for **VM-spoke-01** virtual machine. 2. From the Azure portal, connect to the **VM-Onprem** virtual machine.
-<!2. Open a Windows PowerShell command prompt on **VM-Onprem**, and ping the private IP for **VM-spoke-01**.
- You should get a reply.>
3. Open a web browser on **VM-Onprem**, and browse to http://\<VM-spoke-01 private IP\>. You should see the **VM-spoke-01** web page:
This is a virtual machine that you use to connect using Remote Desktop to the pu
So now you've verified that the firewall rules are working:
-<!- You can ping the server on the spoke VNet.>
- You can browse web server on the spoke virtual network. - You can connect to the server on the spoke virtual network using RDP.
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In PowerShell, run the following command: ```azurepowershell-interactive
- New-AzADServicePrincipal -ApplicationId 'ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037' -Role Contributor
+ New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037"
``` ##### Azure CLI
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- SP_ID=$(az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 --query objectId -o tsv)
- az role assignment create --assignee $SP_ID --role Contributor
+ az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037
``` #### Grant Azure Front Door access to your key vault
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- SP_ID=$(az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --query objectId -o tsv)
- az role assignment create --assignee $SP_ID --role Contributor
+ az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8
``` #### Grant Azure Front Door access to your key vault
hdinsight Apache Hadoop Connect Hive Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md
description: Use the JDBC driver from a Java application to submit Apache Hive q
Previously updated : 04/20/2020 Last updated : 06/08/2022 # Query Apache Hive through the JDBC driver in HDInsight
hdinsight Hdinsight Hadoop Create Linux Clusters With Secure Transfer Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md
description: Learn how to create HDInsight clusters with secure transfer enabled
Previously updated : 02/18/2020 Last updated : 06/08/2022 # Apache Hadoop clusters with secure transfer storage accounts in Azure HDInsight
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
description: Add custom components to HDInsight clusters by using script actions
Previously updated : 03/09/2021 Last updated : 06/08/2022 # Customize Azure HDInsight clusters by using script actions
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-private-link.md
Title: Enable Private Link on an Azure HDInsight cluster
description: Learn how to connect to an outside HDInsight cluster by using Azure Private Link. Previously updated : 10/15/2020 Last updated : 06/08/2022 # Enable Private Link on an HDInsight cluster
hdinsight Hdinsight Use External Metadata Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-external-metadata-stores.md
description: Use external metadata stores with Azure HDInsight clusters.
Previously updated : 05/05/2022 Last updated : 06/08/2022 # Use external metadata stores in Azure HDInsight
+> [!IMPORTANT]
+> The default metastore provides a basic tier Azure SQL Database with only **5 DTU and 2 GB data max size (NOT UPGRADEABLE)**! Use this for QA and testing purposes only. **For production or large workloads, we recommend migrating to an external metastore!**
+ HDInsight allows you to take control of your data and metadata with external data stores. This feature is available for [Apache Hive metastore](#custom-metastore), [Apache Oozie metastore](#apache-oozie-metastore), and [Apache Ambari database](#custom-ambari-db). The Apache Hive metastore in HDInsight is an essential part of the Apache Hadoop architecture. A metastore is the central schema repository. The metastore is used by other big data access tools such as Apache Spark, Interactive Query (LLAP), Presto, or Apache Pig. HDInsight uses an Azure SQL Database as the Hive metastore.
There are two ways you can set up a metastore for your HDInsight clusters:
## Default metastore
-> [!IMPORTANT]
-> The default metastore provides a basic tier Azure SQL Database with only **5 DTU and 2 GB data max size (NOT UPGRADEABLE)**! Use this for QA and testing purposes only. **For production or large workloads, we recommend migrating to an external metastore!**
- By default, HDInsight creates a metastore with every cluster type. You can instead specify a custom metastore. The default metastore includes the following considerations:
+* Limited resources. See the notice at the top of the page.
+ * No additional cost. HDInsight creates a metastore with every cluster type without any additional cost to you.
-* Each default metastore is part of the cluster lifecycle. When you delete a cluster, the corresponding metastore and metadata are also deleted.
+* The default metastore is part of the cluster lifecycle. When you delete a cluster, the corresponding metastore and metadata are also deleted.
-* You can't share the default metastore with other clusters.
+* The default metastore is recommended only for simple workloads: workloads that don't require multiple clusters and don't need metadata preserved beyond the cluster's lifecycle.
-* Default metastore is recommended only for simple workloads. Workloads that don't require multiple clusters and don't need metadata preserved beyond the cluster's lifecycle.
+* The default metastore can't be shared with other clusters.
## Custom metastore
hdinsight Apache Kafka Connect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connect-vpn-gateway.md
To validate connectivity to Kafka, use the following steps to create and run a P
* If you have __enabled name resolution through a custom DNS server__, replace the `kafka_broker` entries with the FQDN of the worker nodes. > [!NOTE]
- > This code sends the string `test message` to the topic `testtopic`. The default configuration of Kafka on HDInsight is to create the topic if it does not exist.
-
+ > This code sends the string `test message` to the topic `testtopic`. The default configuration of Kafka on HDInsight is not to create the topic if it does not exist. See [How to configure Apache Kafka on HDInsight to automatically create topics](./apache-kafka-auto-create-topics.md). Alternatively, you can create topics manually before producing messages.
+
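The note above mentions that you can create topics manually before producing messages. As a supplementary sketch (not part of the original article), the following Python snippet uses the `kafka-python` admin client; the broker list is a placeholder in the same style as the `kafka_broker` entries used by the other snippets in this procedure, and the partition and replication settings are assumptions:

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Placeholder broker list - replace with the same kafka_broker host:port
# entries used in the producer and consumer snippets in this procedure.
kafka_brokers = ["kafka_broker1:9092", "kafka_broker2:9092"]

# Create the topic `testtopic`; partition count and replication factor are assumed values.
admin = KafkaAdminClient(bootstrap_servers=kafka_brokers)
admin.create_topics([NewTopic(name="testtopic", num_partitions=3, replication_factor=3)])
admin.close()
```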
4. To retrieve the messages from Kafka, use the following Python code: ```python
hdinsight Apache Spark Load Data Run Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-load-data-run-query.md
description: Tutorial - Learn how to load data and run interactive queries on Sp
Previously updated : 02/12/2020 Last updated : 06/08/2022 # Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to load data into a Spark cluster, so I can run interactive SQL queries against the data.
hdinsight Apache Spark Troubleshoot Application Stops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-application-stops.md
Title: Apache Spark Streaming application stops after 24 days in Azure HDInsight
description: An Apache Spark Streaming application stops after executing for 24 days and there are no errors in the log files. Previously updated : 07/29/2019 Last updated : 06/08/2022 # Scenario: Apache Spark Streaming application stops after executing for 24 days in Azure HDInsight
Replace `<yourclustername>` with the name of your HDInsight cluster as shown in
## Next steps
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Using the REST API:
1. Use the REST API to retrieve a list of role IDs from your application: ```http
- GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
+ GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
``` The response to this request looks like the following example:
Using the REST API:
1. Use the REST API to create an API token for a role. For example, to create an API token called `operator-token` for the operator role: ```http
- PUT https://{your app subdomain}.azureiotcentral.com/api/apiToken/operator-token?api-version=1.0
+ PUT https://{your app subdomain}.azureiotcentral.com/api/apiToken/operator-token?api-version=2022-05-31
``` Request body:
Using the REST API:
You can use the REST API to list and delete API tokens in an application.
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) includes support for the new [organizations feature](howto-create-organizations.md).
- ## Use a bearer token To use a bearer token when you make a REST API call, your authorization header looks like the following example:
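As a supplementary sketch (not the article's own snippet), here's how a bearer token might be passed in the `Authorization` header from Python using the `requests` library. The app subdomain, API version, and token value are placeholders, and the roles endpoint is reused from the earlier example:

```python
import requests

# Placeholders (assumptions, not values from the article).
APP_SUBDOMAIN = "your-app-subdomain"
BEARER_TOKEN = "<Azure AD access token for IoT Central>"

# Call the roles endpoint shown earlier, passing the token as a bearer token.
url = f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/roles?api-version=2022-05-31"
response = requests.get(url, headers={"Authorization": f"Bearer {BEARER_TOKEN}"})
response.raise_for_status()
print(response.json())
```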
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/devices) includes support for the new [organizations feature](howto-create-organizations.md).
- [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] ## Components and modules
In IoT Central, a module refers to an IoT Edge module running on a connected IoT
Use the following request to retrieve the components from a device called `temperature-controller-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=2022-05-31
``` The response to this request looks like the following example. The `value` array contains details of each device component:
The response to this request looks like the following example. The `value` array
Use the following request to retrieve a list of modules running on a connected IoT Edge device called `environmental-sensor-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-05-31
``` The response to this request looks like the following example. The array of modules only includes custom modules running on the IoT Edge device, not the built-in `$edgeAgent` and `$edgeHub` modules:
The response to this request looks like the following example. The array of modu
Use the following request to retrieve a list of the components in a module called `SimulatedTemperatureSensor`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-05-31
``` ## Read telemetry
GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-s
Use the following request to retrieve the last known telemetry value from a device that doesn't use components. In this example, the device is called `thermostat-01` and the telemetry is called `temperature`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the last known telemetry value from a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the telemetry is called `temperature`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to retrieve the last known telemetry value from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor` and telemetry called `ambient`. The `ambient` telemetry type has temperature and humidity values: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the property values from a device that doesn't use components. In this example, the device is called `thermostat-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-05-31
``` The response to this request looks like the following example. It shows the device is reporting a single property value:
The response to this request looks like the following example. It shows the devi
Use the following request to retrieve property values from all components. In this example, the device is called `temperature-controller-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a property value from an individual component. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to retrieve property values from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-05-31
``` The response to this request looks like the following example:
Some properties are writable. For example, in the thermostat model the `targetTe
Use the following request to write an individual property value to a device that doesn't use components. In this example, the device is called `thermostat-01`: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=1.0
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-05-31
``` The request body looks like the following example:
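As a supplementary sketch (not the article's own sample body), the PATCH request above could be sent from Python with the `requests` library. The subdomain, token, and `targetTemperature` value are assumptions based on the thermostat model mentioned earlier:

```python
import requests

# Placeholders (assumptions, not values from the article).
APP_SUBDOMAIN = "your-app-subdomain"
API_TOKEN = "<IoT Central API token>"  # passed directly as the Authorization header value

url = (f"https://{APP_SUBDOMAIN}.azureiotcentral.com"
       "/api/devices/thermostat-01/properties?api-version=2022-05-31")

# Write the thermostat model's writable targetTemperature property (assumed sample value).
body = {"targetTemperature": 65.5}

response = requests.patch(url, headers={"Authorization": API_TOKEN}, json=body)
response.raise_for_status()
print(response.json())
```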
The response to this request looks like the following example:
Use the following request to write an individual property value to a device that does use components. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=1.0
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to write an individual property value to a module. This example uses a device called `environmental-sensor-01`, a module called `SimulatedTemperatureSensor`, and a property called `SendInterval`: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=1.0
+PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
If you're using an IoT Edge device, use the following request to retrieve property values from a module: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/properties?api-version=2022-05-31
``` If you're using an IoT Edge device, use the following request to retrieve property values from a component in a module: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/components/{componentName}/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/components/{componentName}/properties?api-version=2022-05-31
``` ## Call commands
You can use the REST API to call device commands and retrieve the device history
Use the following request to call a command on a device that doesn't use components. In this example, the device is called `thermostat-01` and the command is called `getMaxMinReport`: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=1.0
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
To view the history for this command, use the following request: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to call a command on a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the command is called `getMaxMinReport`: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=1.0
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-05-31
``` The formats of the request payload and response are the same as for a device that doesn't use components.
The formats of the request payload and response are the same as for a device tha
To view the history for this command, use the following request: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-05-31
``` > [!TIP]
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
+
+ Title: Integrate Azure IoT Central with CI/CD | Microsoft Docs
+description: Describes how to integrate IoT Central into a pipeline created with Azure Pipelines.
++ Last updated : 05/27/2022+++
+# Integrate IoT Central with Azure Pipelines for CI/CD
+
+## Overview
+
+Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. This article shows you how to automate the build, test, and deployment of IoT Central application configuration, to enable development teams to deliver reliable releases more frequently.
+
+Continuous integration starts with a commit of your code to a branch in a source code repository. Each commit is merged with commits from other developers to ensure that no conflicts are introduced. Changes are further validated by creating a build and running automated tests against that build. This process ultimately results in an artifact, or deployment bundle, to deploy to a target environment, in this case an Azure IoT Central application.
+
+Just as IoT Central is a part of your larger IoT solution, IoT Central is a part of your CI/CD pipeline. Your CI/CD pipeline should deploy your entire IoT solution and all configurations to each environment from development through to production:
++
+IoT Central is an *application platform as a service* that has different deployment requirements from *platform as a service* components. For IoT Central, you deploy configurations and device templates. These configurations and device templates are managed and integrated into your release pipeline by using APIs.
+
+While it's possible to automate IoT Central app creation, you should create an app in each environment before you develop your CI/CD pipeline.
+
+By using the Azure IoT Central REST API, you can integrate IoT Central app configurations into your release pipeline.
+
+This guide walks you through the creation of a new pipeline that updates an IoT Central application based on configuration files managed in GitHub. This guide has specific instructions for integrating with [Azure Pipelines](/azure/devops/pipelines/?view=azure-devops&preserve-view=true), but could be adapted to include IoT Central in any release pipeline built using tools such as Tekton, Jenkins, GitLab, or GitHub Actions.
+
+In this guide, you create a pipeline that only applies an IoT Central configuration to a single instance of an IoT Central application. You should integrate the steps into a larger pipeline that deploys your entire solution and promotes it from *development* to *QA* to *pre-production* to *production*, performing all necessary testing along the way.
+
+The scripts currently don't transfer the following settings between IoT Central instances: dashboards, views, custom settings in device templates, pricing plan, UX customizations, application image, rules, scheduled jobs, saved jobs, and enrollment groups.
+
+The scripts currently don't remove settings from the target IoT Central application that aren't present in the configuration file.
+
+## Prerequisites
+
+You need the following prerequisites to complete the steps in this guide:
+
+- Two IoT Central applications - one for your development environment and one for your production environment. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
+- Two Azure Key Vaults - one for your development environment and one for your production environment. It's best practice to have a dedicated Key Vault for each environment. To learn more, see [Create an Azure Key Vault with the Azure portal](../../key-vault/general/quick-create-portal.md).
+- A [GitHub](https://github.com/) account.
+- An Azure DevOps organization. To learn more, see [Create an Azure DevOps organization](/devops/organizations/accounts/create-organization?view=azure-devops&preserve-view=true).
+- PowerShell 7 for Windows, Mac or Linux. [Get PowerShell](/powershell/scripting/install/installing-powershell).
+- Azure Az PowerShell module installed in your PowerShell 7 environment. To learn more, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+- Visual Studio Code or another tool to edit PowerShell and JSON files. [Get Visual Studio Code](https://code.visualstudio.com/Download).
+- Git client. Download the latest version from [Git - Downloads (git-scm.com)](https://git-scm.com/downloads).
++
+## Download the sample code
+
+To get started, fork the IoT Central CI/CD GitHub repository and then clone your fork to your local machine:
+
+1. To fork the GitHub repository, open the [IoT Central CI/CD GitHub repository](https://github.com/Azure/iot-central-CICD-sample) and select **Fork**.
+
+1. Clone your fork of the repository to your local machine by opening a console or bash window and running the following command.
+
+ ```cmd/bash
+ git clone https://github.com/{your GitHub username}/iot-central-CICD-sample
+ ```
+
+## Create a service principal
+
+While Azure Pipelines can integrate directly with a key vault, your pipeline needs a service principal for some of the dynamic key vault interactions such as fetching secrets for data export destinations.
+
+To create a service principal scoped to your subscription:
+
+1. Run the following command to create a new service principal:
+
+ ```azurecli
+ az ad sp create-for-rbac -n DevOpsAccess --scopes /subscriptions/{your Azure subscription Id} --role Contributor
+ ```
+
+1. Make a note of the **password**, **appId**, and **tenant** as you need these values later.
+
+1. Add the service principal password as a secret called `SP-Password` to your production key vault:
+
+ ```azurecli
+ az keyvault secret set --name SP-Password --vault-name {your production key vault name} --value {your service principal password}
+ ```
+
+1. Give the service principal permission to read secrets from the key vault:
+
+ ```azurecli
+ az keyvault set-policy --name {your production key vault name} --secret-permissions get list --spn {the appId of the service principal}
+ ```
+
+## Generate IoT Central API tokens
+
+In this guide, your pipeline uses API tokens to interact with your IoT Central applications. It's also possible to use a service principal.
+
+> [!NOTE]
+> IoT Central API tokens expire after one year.
+
+Complete the following steps for both your development and production IoT Central apps.
+
+1. In your IoT Central app, select **Permissions** and then **API tokens**.
+1. Select **New**.
+1. Give the token a name, specify the top-level organization in your app, and set the role to **App Administrator**.
+1. Make a note of the API token from your development IoT Central application. You use it later when you run the *IoTC-Config.ps1* script.
+1. Save the generated token from the production IoT Central application as a secret called `API-Token` to the production key vault:
+
+ ```azurecli
+ az keyvault secret set --name API-Token --vault-name {your production key vault name} --value '{your production app API token}'
+ ```
+
+## Generate a configuration file
+
+These steps produce a JSON configuration file for your development environment based on an existing IoT Central application. You also download all the existing device templates from the application.
+
+1. Run the following PowerShell 7 script in the local copy of the IoT Central CI/CD repository:
+
+ ```powershell
+ cd .\iot-central-CICD-sample\PowerShell\
+ .\IoTC-Config.ps1
+ ```
+
+1. Follow the instructions to sign in to your Azure account.
+1. After you sign in, the script displays the IoTC Config options menu. The script can generate a config file from an existing IoT Central application and apply a configuration to another IoT Central application.
+1. Select option **1** to generate a configuration file.
+1. Enter the necessary parameters and press **Enter**:
+ - The API token you generated for your development IoT Central application.
+ - The subdomain of your development IoT Central application.
+ - Enter *..\Config\Dev* as the folder to store the config file and device templates.
+ - The name of your development key vault.
+
+1. The script creates a folder called *IoTC Configuration* in the *Config\Dev* folder in your local copy of the repository. This folder contains a configuration file and a folder called *Device Models* for all the device templates in your application.
+
+## Modify the configuration file
+
+Now that you have a configuration file that represents the settings for your development IoT Central application instance, make any necessary changes before you apply this configuration to your production IoT Central application instance.
+
+1. Create a copy of the *Dev* folder created previously and call it *Production*.
+1. Open *IoTC-Config.json* in the *Production* folder using a text editor.
+1. The file has multiple sections. However, if your application doesn't use a particular setting, that section is omitted from the file:
+
+ ```json
+ {
+ "APITokens": {
+ "value": [
+ {
+ "id": "dev-admin",
+ "roles": [
+ {
+ "role": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4"
+ }
+ ],
+ "expiry": "2023-05-31T10:47:08.53Z"
+ }
+ ]
+ },
+ "data exports": {
+ "value": [
+ {
+ "id": "5ad278d6-e22b-4749-803d-db1a8a2b8529",
+ "displayName": "All telemetry to blob storage",
+ "enabled": false,
+ "source": "telemetry",
+ "destinations": [
+ {
+ "id": "393adfc9-0ed8-45f4-aa29-25b5c96ecf63"
+ }
+ ],
+ "status": "notStarted"
+ }
+ ]
+ },
+ "device groups": {
+ "value": [
+ {
+ "id": "66f41d29-832d-4a12-9e9d-18932bee3141",
+ "displayName": "MXCHIP Getting Started Guide - All devices"
+ },
+ {
+ "id": "494dc749-0963-4ec1-89ff-e1de2228e750",
+ "displayName": "RS40 Occupancy Sensor - All devices"
+ },
+ {
+ "id": "dd87877d-9465-410b-947e-64167a7a1c39",
+ "displayName": "Cascade 500 - All devices"
+ },
+ {
+ "id": "91ceac5b-f98d-4df0-9ed6-5465854e7d9e",
+ "displayName": "Simulated devices"
+ }
+ ]
+ },
+ "organizations": {
+ "value": []
+ },
+ "roles": {
+ "value": [
+ {
+ "id": "344138e9-8de4-4497-8c54-5237e96d6aaf",
+ "displayName": "Builder"
+ },
+ {
+ "id": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4",
+ "displayName": "Administrator"
+ },
+ {
+ "id": "ae2c9854-393b-4f97-8c42-479d70ce626e",
+ "displayName": "Operator"
+ }
+ ]
+ },
+ "destinations": {
+ "value": [
+ {
+ "id": "393adfc9-0ed8-45f4-aa29-25b5c96ecf63",
+ "displayName": "Blob destination",
+ "type": "blobstorage@v1",
+ "authorization": {
+ "type": "connectionString",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourexportaccount;AccountKey=*****;EndpointSuffix=core.windows.net",
+ "containerName": "dataexport"
+ },
+ "status": "waiting"
+ }
+ ]
+ },
+ "file uploads": {
+ "connectionString": "FileUpload",
+ "container": "fileupload",
+ "sasTtl": "PT1H"
+ },
+ "jobs": {
+ "value": []
+ }
+ }
+ ```
+
+1. If your application uses file uploads, the script creates a secret in your development key vault with the value shown in the `connectionString` property. Create a secret with the same name in your production key vault that contains the connection string for your production storage account. For example:
+
+ ```azurecli
+ az keyvault secret set --name FileUpload --vault-name {your production key vault name} --value '{your production storage account connection string}'
+ ```
+
+1. If your application uses data exports, add secrets for the destinations to the production key vault. The config file doesn't contain any actual secrets for your destination; the secrets are stored in your key vault.
+1. Update the secrets in the config file with the name of the secret in your key vault.
+
+ | Destination type | Property to change |
+ | | |
+ | Service Bus queue | connectionString |
+ | Service Bus topic | connectionString |
+ | Azure Data Explorer | clientSecret |
+ | Azure Blob Storage | connectionString |
+ | Event Hubs | connectionString |
+ | Webhook No Auth | N/A |
+
+ For example:
+
+ ```json
+ "destinations": {
+ "value": [
+ {
+ "id": "393adfc9-0ed8-45f4-aa29-25b5c96ecf63",
+ "displayName": "Blob destination",
+ "type": "blobstorage@v1",
+ "authorization": {
+ "type": "connectionString",
+ "connectionString": "Storage-CS",
+ "containerName": "dataexport"
+ },
+ "status": "waiting"
+ }
+ ]
+ }
+ ```
+
+1. To upload the *Configuration* folder to your GitHub repository, run the following commands from the *IoTC-CICD-howto* folder.
+
+ ```cmd/bash
+ git add Config
+ git commit -m "Adding config directories and files"
+ git push
+ ```
+
+## Create a pipeline
+
+1. Open your Azure DevOps organization in a web browser by going to `https://dev.azure.com/{your DevOps organization}`
+1. Select **New project** to create a new project.
+1. Give your project a name and optional description and then select **Create**.
+1. On the **Welcome to the project** page, select **Pipelines** and then **Create Pipeline**.
+1. Select **GitHub** as the location of your code.
+1. Select **Authorize AzurePipelines** to authorize Azure Pipelines to access your GitHub account.
+1. On the **Select a repository** page, select your fork of the IoT Central CI/CD GitHub repository.
+1. When you're prompted to sign in to GitHub and grant Azure Pipelines permission to access the repository, select **Approve & install**.
+1. On the **Configure your pipeline** page, select **Starter pipeline** to get started. The *azure-pipelines.yml* file is displayed for you to edit.
+
+## Create a variable group
+
+An easy way to integrate key vault secrets into a pipeline is through variable groups. Use a variable group to ensure the right secrets are available to your deployment script. To create a variable group:
+
+1. Select **Library** in the **Pipelines** section of the menu on the left.
+1. Select **+ Variable group**.
+1. Enter `keyvault` as the name for your variable group.
+1. Enable the toggle to link secrets from an Azure key vault.
+1. Select your Azure subscription and authorize it. Then select your production key vault name.
+
+1. Select **Add** to start adding variables to the group.
+
+1. Add the following secrets:
+ - The IoT Central API Key for your production app. You called this secret `API-Token` when you created it.
+ - The password for the service principal you created previously. You called this secret `SP-Password` when you created it.
+1. Select **OK**.
+1. Select **Save** to save the variable group.
+
+## Configure your pipeline
+
+Now configure the pipeline to push configuration changes to your IoT Central application:
+
+1. Select **Pipelines** in the **Pipelines** section of the menu on the left.
+1. Replace the contents of your pipeline YAML with the following YAML. The configuration assumes your production key vault contains:
+ - The API token for your production IoT Central app in a secret called `API-Token`.
+ - Your service principal password in a secret called `SP-Password`.
+
+ Replace the values for `-AppName` and `-KeyVault` with the appropriate values for your production instances.
+
+ You made a note of the `-AppId` and `-TenantId` when you created your service principal.
+
+ ```yml
+ trigger:
+ - master
+ variables:
+ - group: keyvault
+ - name: buildConfiguration
+ value: 'Release'
+ steps:
+ - task: PowerShell@2
+ displayName: 'IoT Central'
+ inputs:
+ filePath: 'PowerShell/IoTC-Task.ps1'
+ arguments: '-ApiToken "$(API-Token)" -ConfigPath "Config/Production/IoTC Configuration" -AppName "{your production IoT Central app name}" -ServicePrincipalPassword (ConvertTo-SecureString "$(SP-Password)" -AsPlainText -Force) -AppId "{your service principal app id}" -KeyVault "{your production key vault name}" -TenantId "{your tenant id}"'
+ pwsh: true
+ failOnStderr: true
+ ```
+
+1. Select **Save and run**.
+1. The YAML file is saved to your GitHub repository, so you need to provide a commit message and then select **Save and run** again.
+
+Your pipeline is queued. It may take a few minutes before it runs.
+
+The first time you run your pipeline, you're prompted to give permissions for the pipeline to access your subscription and to access your key vault. Select **Permit** and then **Permit** again for each resource.
+
+When your pipeline job completes successfully, sign in to your production IoT Central application and verify the configuration was applied as expected.
+
+## Promote changes from development to production
+
+Now that you have a working pipeline, you can manage your IoT Central instances directly through configuration changes. You can upload new device templates into the *Device Models* folder and make changes directly to the configuration file. This approach lets you treat your IoT Central application's configuration the same as any other code.
+
+## Next steps
+
+Now that you know how to integrate IoT Central configurations into your CI/CD pipelines, a suggested next step is to learn how to [Manage and monitor IoT Central from the Azure portal](howto-manage-iot-central-from-portal.md).
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create and publish a new device template. Default views are automatically generated for device templates created this way. ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+PUT https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to retrieve details of a device template from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
## Update a device template ```http
-PATCH https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+PATCH https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to delete a device template: ```http
-DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` ## List device templates
DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?a
Use the following request to retrieve a list of device templates from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-05-31
``` The response to this request looks like the following example:
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create a new device. ```http
-PUT https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+PUT https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` The following example shows a request body that adds a device for a device template. You can get the `template` details from the device templates page in the IoT Central application UI.
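A hedged sketch of such a request body, with placeholder `displayName` and `template` values (the field names are assumptions based on the IoT Central devices API):

```json
{
  "displayName": "CheckoutThermostat",
  "template": "dtmi:contoso:mythermostattemplate;1",
  "simulated": true,
  "enabled": true
}
```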
The response to this request looks like the following example:
Use the following request to retrieve details of a device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to retrieve credentials of a device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}/credentials?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}/credentials?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
} ``` - ### Update a device ```http
-PATCH https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+PATCH https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to delete a device: ```http
-DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` ### List devices
DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
Use the following request to retrieve a list of devices from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to create a new device group. ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true:
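As a hedged sketch, assuming the IoT Central device group query syntax, the filter in the request body might look like this (the `displayName` and `description` values are placeholders):

```json
{
  "displayName": "Provisioned DTDL v2 devices",
  "description": "Devices created from the dtmi:modelDefinition:dtdlv2 template that are provisioned.",
  "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true"
}
```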
The response to this request looks like the following example:
Use the following request to retrieve details of a device group from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` * deviceGroupId - Unique ID for the device group.
The response to this request looks like the following example:
### Update a device group ```http
-PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` The sample request body looks like the following example, which updates the `displayName` of the device group:
The response to this request looks like the following example:
Use the following request to delete a device group: ```http
-DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` ### List device groups
DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-ver
Use the following request to retrieve a list of device groups from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
} ``` - ## Next steps Now that you've learned how to manage devices with the REST API, a suggested next step is [How to control devices with the REST API](howto-control-devices-with-rest-api.md).
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/jobs) includes support for the new [organizations feature](howto-create-organizations.md).
- To learn how to create and manage jobs in the UI, see [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md). [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage organizations in your IoT Central application.
-> [!TIP]
-> The [organizations feature](howto-create-organizations.md) is currently available in [preview API](/rest/api/iotcentral/1.2-previewdataplane/users).
- Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md). For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
The IoT Central REST API lets you:
The REST API lets you create organizations in your IoT Central application. Use the following request to create an organization in your application: ```http
-PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
``` * organizationId - Unique ID of the organization
The response to this request looks like the following example:
Use the following request to retrieve details of an individual organization from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update details of an organization in your application: ```http
-PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
+PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
``` The following example shows a request body that updates an organization.
The response to this request looks like the following example:
Use the following request to retrieve a list of organizations from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=1.2-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=2022-05-31
``` The response to this request looks like the following example.
The response to this request looks like the following example.
Use the following request to delete an organization: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=1.2-preview
+DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
``` ## Next steps
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/users) includes support for the new [organizations feature](howto-create-organizations.md).
- [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] ## Manage roles
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of role IDs from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
``` The response to this request looks like the following example that includes the three built-in roles and a custom role:
The REST API lets you:
Use the following request to retrieve a list of users from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=2022-05-31
``` The response to this request looks like the following example. The role values identify the role ID the user is associated with:
The response to this request looks like the following example. The role values i
Use the following request to retrieve details of an individual user from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=2022-05-31
``` The response to this request looks like the following example. The role value identifies the role ID the user is associated with:
The response to this request looks like the following example. The role value id
Use the following request to create a user in your application. The ID and email must be unique in the application: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
``` In the following request body, the `role` value is for the operator role you retrieved previously:
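One possible shape for that request body, sketched with a placeholder email address and the built-in operator role ID that also appears in the role list example earlier in this document:

```json
{
  "type": "email",
  "roles": [
    {
      "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
    }
  ],
  "email": "operator-user@contoso.com"
}
```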
You can also add a service principal user which is useful if you need to use ser
Use the following request to change the role assigned to a user. This example uses the ID of the builder role you retrieved previously: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
``` Request body. The value is for the builder role you retrieved previously:
The response to this request looks like the following example:
Use the following request to delete a user: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
``` ## Next steps
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
The response to this request looks like the following example:
Use the following request to create a file upload blob storage account configuration in your IoT Central application: ```http
-PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` The request body has the following fields:
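A rough, hedged sketch of that body, with a placeholder connection string and with the container and `sasTtl` values mirroring the file upload configuration shown earlier in this document:

```json
{
  "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourfileuploadaccount;AccountKey=*****;EndpointSuffix=core.windows.net",
  "container": "fileupload",
  "sasTtl": "PT1H"
}
```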
The response to this request looks like the following example:
"etag": "\"7502ac89-0000-0300-0000-627eaf100000\"" }- ``` ## Get the file upload storage account configuration Use the following request to retrieve details of a file upload blob storage account configuration in your IoT Central application: - ```http
-GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update a file upload blob storage account configuration in your IoT Central application: ```http
-PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` ```json
The response to this request looks like the following example:
Use the following request to delete a storage account configuration: ```http
-DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` ## Test file upload
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central application administration guide
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, and security.
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, security, and automated deployments.
Last updated 01/04/2022
IoT Central application administration includes the following tasks:
- Upgrade applications. - Export and share applications. - Monitor application health.
+- DevOps integration.
## Create applications
An administrator can:
- Create a copy of an application if you just need a duplicate copy of your application. For example, you may need a duplicate copy for testing. - Create an application template from an existing application if you plan to create multiple copies.
-To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template) .
+To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template).
+
+## Integrate with DevOps pipelines
+
+Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. You can use Azure DevOps pipelines to automate the build, test, and deployment of IoT Central application configurations.
+
+Just as IoT Central is a part of your larger IoT solution, make IoT Central a part of your CI/CD pipeline.
+
+To learn more, see [Integrate IoT Central into your Azure DevOps CI/CD pipeline](howto-integrate-with-devops.md).
## Monitor application health
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
DPS uses the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagn
| ApplicationId | GUID | Application ID used in bearer authorization. | | CallerIpAddress | String | A masked source IP address for the event. | | Category | String | Type of operation, either **ServiceOperations** or **DeviceOperations**. |
-| CorrelationId | GUID | Customer provided unique identifier for the event. |
+| CorrelationId | GUID | Unique identifier for the event. |
| DurationMs | String | How long it took to perform the event in milliseconds. | | Level | Int | The logging severity of the event. For example, Information or Error. | | OperationName | String | The type of action performed during the event. For example: Query, Get, Upsert, and so on. |
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Title: Update IoT Edge version on devices - Azure IoT Edge | Microsoft Docs
-description: How to update IoT Edge devices to run the latest versions of the security daemon and the IoT Edge runtime
+description: How to update IoT Edge devices to run the latest versions of the security subsystem and the IoT Edge runtime
keywords:
As the IoT Edge service releases new versions, you'll want to update your IoT Edge devices for the latest features and security improvements. This article provides information about how to update your IoT Edge devices when a new version is available.
-Two components of an IoT Edge device need to be updated if you want to move to a newer version. The first is the security daemon, which runs on the device and starts the runtime modules when the device starts. Currently, the security daemon can only be updated from the device itself. The second component is the runtime, made up of the IoT Edge hub and IoT Edge agent modules. Depending on how you structure your deployment, the runtime can be updated from the device or remotely.
+Two logical components of an IoT Edge device need to be updated if you want to move to a newer version. The first is the security subsystem. Although the architecture of the security subsystem [changed between version 1.1 and 1.2](iot-edge-security-manager.md), its overall responsibilities remained the same. It runs on the device, handles security-based tasks, and starts the modules when the device starts. Currently, the security subsystem can only be updated from the device itself. The second component is the runtime, made up of the IoT Edge hub and IoT Edge agent modules. Depending on how you structure your deployment, the runtime can be updated from the device or remotely.
To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases).
-## Update the security daemon
+## Update the security subsystem
-The IoT Edge security daemon is a native component that needs to be updated using the package manager on the IoT Edge device. View the [Update the security daemon](how-to-update-iot-edge.md#update-the-security-daemon) tutorial for a walk-through on Linux-based devices.
+The IoT Edge security subsystem includes a set of native components that need to be updated using the package manager on the IoT Edge device.
-Check the version of the security daemon running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
+Check the version of the security subsystem running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
# [Linux](#tab/linux) >[!IMPORTANT] >If you are updating a device from version 1.0 or 1.1 to version 1.2, there are differences in the installation and configuration processes that require extra steps. For more information, refer to the steps later in this article: [Special case: Update from 1.0 or 1.1 to 1.2](#special-case-update-from-10-or-11-to-12).
-On Linux x64 devices, use apt-get or your appropriate package manager to update the security daemon to the latest version.
+On Linux x64 devices, use apt-get or your appropriate package manager to update the runtime module to the latest version.
Update apt.
Check to see which versions of IoT Edge are available.
apt list -a iotedge ```
-If you want to update to the most recent version of the security daemon, use the following command which also updates **libiothsm-std** to the latest version:
+If you want to update to the most recent version of the runtime module, use the following command which also updates **libiothsm-std** to the latest version:
```bash sudo apt-get install iotedge ```
-If you want to update to a specific version of the security daemon, specify the version from the apt list output. Whenever **iotedge** is updated, it automatically tries to update the **libiothsm-std** package to its latest version, which may cause a dependency conflict. If you aren't going to the most recent version, be sure to target both packages for the same version. For example, the following command installs a specific version of the 1.1 release:
+If you want to update to a specific version of the runtime module, specify the version from the apt list output. Whenever **iotedge** is updated, it automatically tries to update the **libiothsm-std** package to its latest version, which may cause a dependency conflict. If you aren't going to the most recent version, be sure to target both packages for the same version. For example, the following command installs a specific version of the 1.1 release:
```bash sudo apt-get install iotedge=1.1.1 libiothsm-std=1.1.1
Check to see which versions of IoT Edge are available.
apt list -a aziot-edge ```
-If you want to update to the most recent version of IoT Edge, use the following command which also updates the identity service to the latest version:
+If you want to update to the most recent version of IoT Edge, use the following command which also updates the [identity service](https://azure.github.io/iot-identity-service/) to the latest version:
```bash sudo apt-get install aziot-edge defender-iot-micro-agent-edge ```
-It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
+It's recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
<!-- end 1.2 --> :::moniker-end
For information about IoT Edge for Linux on Windows updates, see [EFLOW Updates]
:::moniker range=">=iotedge-2020-11" >[!NOTE]
->Currently, there is not support for IoT Edge version 1.2 running on Windows devices.
+>Currently, there's no support for IoT Edge version 1.2 running on Windows devices.
> >To view the steps for updating IoT Edge for Linux on Windows, see [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true&tabs=windows).
For information about IoT Edge for Linux on Windows updates, see [EFLOW Updates]
With IoT Edge for Windows, IoT Edge runs directly on the Windows device.
-Use the `Update-IoTEdge` command to update the security daemon. The script automatically pulls the latest version of the security daemon.
+Use the `Update-IoTEdge` command to update the module runtime. The script automatically pulls the latest version of the module runtime.
```powershell . {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; Update-IoTEdge ```
-Running the Update-IoTEdge command removes and updates the security daemon from your device, along with the two runtime container images. The config.yaml file is kept on the device, as well as data from the Moby container engine. Keeping the configuration information means that you don't have to provide the connection string or Device Provisioning Service information for your device again during the update process.
+Running the `Update-IoTEdge` command removes and updates the runtime module from your device, along with the two runtime container images. The config.yaml file is kept on the device, as well as data from the Moby container engine. Keeping the configuration information means that you don't have to provide the connection string or Device Provisioning Service information for your device again during the update process.
-If you want to update to a specific version of the security daemon, find the version from 1.1 release channel you want to target from [IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). In that version, download the **Microsoft-Azure-IoTEdge.cab** file. Then, use the `-OfflineInstallationPath` parameter to point to the local file location. For example:
+If you want to update to a specific version of the security subsystem, find the version from the 1.1 release channel that you want to target from [IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). In that version, download the **Microsoft-Azure-IoTEdge.cab** file. Then, use the `-OfflineInstallationPath` parameter to point to the local file location. For example:
```powershell . {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; Update-IoTEdge -OfflineInstallationPath <absolute path to directory>
The IoT Edge agent and IoT Edge hub images are tagged with the IoT Edge version
If you use rolling tags in your deployment (for example, mcr.microsoft.com/azureiotedge-hub:**1.1**), then you need to force the container runtime on your device to pull the latest version of the image.
-Delete the local version of the image from your IoT Edge device. On Windows machines, uninstalling the security daemon also removes the runtime images, so you don't need to take this step again.
+Delete the local version of the image from your IoT Edge device. On Windows machines, uninstalling the security subsystem also removes the runtime images, so you don't need to take this step again.
```bash docker rmi mcr.microsoft.com/azureiotedge-hub:1.1
Some of the key differences between 1.2 and earlier versions include:
* The package name changed from **iotedge** to **aziot-edge**. * The **libiothsm-std** package is no longer used. If you used the standard package provided as part of the IoT Edge release, then your configurations can be transferred to the new version. If you used a different implementation of libiothsm-std, then any user-provided certificates like the device identity certificate, device CA, and trust bundle will need to be reconfigured.
-* A new identity service, **aziot-identity-service** was introduced as part of the 1.2 release. This service handles the identity provisioning and management for IoT Edge and for other device components that need to communicate with IoT Hub, like [Device Update for IoT Hub](../iot-hub-device-update/understand-device-update.md).
+* A new identity service, **[aziot-identity-service](https://azure.github.io/iot-identity-service/)** was introduced as part of the 1.2 release. This service handles the identity provisioning and management for IoT Edge and for other device components that need to communicate with IoT Hub, like [Device Update for IoT Hub](../iot-hub-device-update/understand-device-update.md).
* The default config file has a new name and location. Formerly `/etc/iotedge/config.yaml`, your device configuration information is now expected to be in `/etc/aziot/config.toml` by default. The `iotedge config import` command can be used to help migrate configuration information from the old location and syntax to the new one. * The import command cannot detect or modify access rules to a device's trusted platform module (TPM). If your device uses TPM attestation, you need to manually update the /etc/udev/rules.d/tpmaccess.rules file to give access to the aziottpm service. For more information, see [Give IoT Edge access to the TPM](how-to-auto-provision-simulated-device-linux.md?view=iotedge-2020-11&preserve-view=true#give-iot-edge-access-to-the-tpm). * The workload API in version 1.2 saves encrypted secrets in a new format. If you upgrade from an older version to version 1.2, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in version 1.2 are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary.
When you're ready, follow these steps to update IoT Edge on your devices:
```bash sudo apt-get install aziot-edge defender-iot-micro-agent-edge ```
-It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
+It's recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
1. Import your old config.yaml file into its new format, and apply the configuration info.
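    A minimal sketch of this step, assuming the `iotedge config` subcommands that ship with version 1.2 and later:

    ```bash
    # Import the old /etc/iotedge/config.yaml settings into /etc/aziot/config.toml
    sudo iotedge config import

    # Apply the imported configuration and restart the IoT Edge services
    sudo iotedge config apply
    ```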
The IoT Edge agent and hub modules have RC versions that are tagged with the sam
As previews, release candidate versions aren't included as the latest version that the regular installers target. Instead, you need to manually target the assets for the RC version that you want to test. For the most part, installing or updating to an RC version is the same as targeting any other specific version of IoT Edge.
-Use the sections in this article to learn how to update an IoT Edge device to a specific version of the security daemon or runtime modules.
+Use the sections in this article to learn how to update an IoT Edge device to a specific version of the security subsystem or runtime modules.
If you're installing IoT Edge, rather than upgrading an existing installation, use the steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
iot-hub-device-update Device Update Apt Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-apt-manifest.md
If version is omitted, the latest available version of specified package will be
> The APT package manager ignores versioning requirements given by a package when the dependent packages to install are being automatically resolved. Unless explicit versions of dependent packages are given, they use the latest, even though the package itself may specify a strict requirement (=) on a given version. This automatic resolution can lead to errors regarding an unmet dependency. [Learn More](https://unix.stackexchange.com/questions/350192/apt-get-not-properly-resolving-a-dependency-on-a-fixed-version-in-a-debian-ubunt)
-If you're updating a specific version of the Azure IoT Edge security daemon, then you should include the desired version of the `iotedge` package and its dependent `libiothsm-std` package in your APT manifest.
-[Learn More](../iot-edge/how-to-update-iot-edge.md#update-the-security-daemon)
+If you're updating to a specific version of the Azure IoT Edge security daemon, then you should include the desired version of the `aziot-edge` package and its dependent `aziot-identity-service` package in your APT manifest.
+[Learn More](../iot-edge/how-to-update-iot-edge.md#update-the-security-subsystem)
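A hedged sketch of such an APT manifest follows; the update `name`, `version`, and package version strings are placeholders, and the structure follows the APT manifest format described in this article. Pin both packages to the same release:

```json
{
    "name": "contoso-iot-edge-update",
    "version": "1.0.0",
    "packages": [
        {
            "name": "aziot-identity-service",
            "version": "1.2.7-1"
        },
        {
            "name": "aziot-edge",
            "version": "1.2.7-1"
        }
    ]
}
```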
> [!NOTE] > An apt manifest can be used to update Device Update agent and its dependencies. List the device update agent name and desired version in the apt manifest, like you would for any other package. This apt manifest can then be imported and deployed through the Device Update for IoT Hub pipeline.
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
az keyvault key rotation-policy update --vault-name <vault-name> --name <key-nam
Set rotation policy using Azure Powershell [Set-AzKeyVaultKeyRotationPolicy](/powershell/module/az.keyvault/set-azkeyvaultkeyrotationpolicy) cmdlet. ```powershell
-Get-AzKeyVaultKey -VaultName <vault-name> -Name <key-name>
-$action = [Microsoft.Azure.Commands.KeyVault.Models.PSKeyRotationLifetimeAction]::new()
-$action.Action = "Rotate"
-$action.TimeAfterCreate = New-TimeSpan -Days 540
-$expiresIn = New-TimeSpan -Days 720
-Set-AzKeyVaultKeyRotationPolicy -InputObject $key -KeyRotationLifetimeAction $action -ExpiresIn $expiresIn
+Set-AzKeyVaultKeyRotationPolicy -VaultName <vault-name> -KeyName <key-name> -ExpiresIn (New-TimeSpan -Days 720) -KeyRotationLifetimeAction @{Action="Rotate";TimeAfterCreate= (New-TimeSpan -Days 540)}
```- ## Rotation on demand Key rotation can be invoked manually.
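For example, a hedged sketch of an on-demand rotation with the Azure CLI, using the same placeholders as the earlier commands:

```azurecli
az keyvault key rotate --vault-name <vault-name> --name <key-name>
```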
logic-apps Create Serverless Apps Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-serverless-apps-visual-studio.md
Title: Create an example serverless app with Visual Studio
-description: Create, deploy, and manage an example serverless app with an Azure quickstart template, Azure Logic Apps and Azure Functions in Visual Studio.
+description: Create, deploy, and manage an example serverless app with an Azure Quickstart Template, Azure Logic Apps and Azure Functions in Visual Studio.
ms.suite: integration
Last updated 07/15/2021
# Create an example serverless app with Azure Logic Apps and Azure Functions in Visual Studio + You can quickly create, build, and deploy cloud-based "serverless" apps by using the services and capabilities in Azure, such as Azure Logic Apps and Azure Functions. When you use Azure Logic Apps, you can quickly and easily build workflows using low-code or no-code approaches to simplify orchestrating combined tasks. You can integrate different services, cloud, on-premises, or hybrid, without coding those interactions, having to maintain glue code, or learn new APIs or specifications. When you use Azure Functions, you can speed up development by using an event-driven model. You can use triggers that respond to events by automatically running your own code. You can use bindings to seamlessly integrate other services.
-This article shows how to create an example serverless app that runs in multi-tenant Azure by using an Azure Quickstart template. The template creates an Azure resource group project that includes an Azure Resource Manager deployment template. This template defines a basic logic app resource where a predefined a workflow includes a call to an Azure function that you define. The workflow definition includes the following components:
+This article shows how to create an example serverless app that runs in multi-tenant Azure by using an Azure Quickstart Template. The template creates an Azure resource group project that includes an Azure Resource Manager deployment template. This template defines a basic logic app resource where a predefined workflow includes a call to an Azure function that you define. The workflow definition includes the following components:
* A Request trigger that receives HTTP requests. To start this trigger, you send a request to the trigger's URL. * An Azure Functions action that calls an Azure function that you can later define.
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
Title: Deploy single-tenant logic apps to private storage accounts
-description: How to deploy Standard logic app workflows to Azure storage accounts that use private endpoints and deny public access.
+ Title: Deploy Standard logic apps to private storage accounts
+description: Deploy Standard logic app workflows to Azure storage accounts that use private endpoints and deny public access.
ms.suite: integration Last updated 01/06/2022
-# As a developer, I want to deploy my single-tenant logic apps to Azure storage accounts using private endpoints
+# As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
# Deploy single-tenant Standard logic apps to private storage accounts using private endpoints + When you create a single-tenant Standard logic app resource, you're required to have a storage account for storing logic app artifacts. You can restrict access to this storage account so that only the resources inside a virtual network can connect to your logic app workflow. Azure Storage supports adding private endpoints to your storage account. This article describes the steps to follow for deploying such logic apps to protected private storage accounts. For more information, review [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md).
logic-apps Designer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/designer-overview.md
Title: About single-tenant workflow designer
+ Title: About Standard logic app workflow designer
description: Learn how the designer in single-tenant Azure Logic Apps helps you visually create workflows through the Azure portal. Discover the benefits and features in this latest version. ms.suite: integration
Last updated 06/30/2021
-# About the workflow designer in single-tenant Azure Logic Apps
+# About the Standard logic app workflow designer in single-tenant Azure Logic Apps
+ When you work with Azure Logic Apps in the Azure portal, you can edit your [*workflows*](logic-apps-overview.md#workflow) visually or programmatically. After you open a [*logic app* resource](logic-apps-overview.md#logic-app) in the portal, on the resource menu under **Developer**, you can select between [**Code** view](#code-view) and **Designer** view. When you want to visually develop, edit, and run your workflow, select the designer view. You can switch between the designer view and code view at any time.
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
Title: Edit runtime and environment settings in single-tenant Azure Logic Apps
-description: Change the runtime and environment settings for logic apps in single-tenant Azure Logic Apps.
+ Title: Edit runtime and environment settings for Standard logic apps
+description: Change the runtime and environment settings for Standard logic apps in single-tenant Azure Logic Apps.
ms.suite: integration
Last updated 03/22/2022
-# Edit host and app settings for logic apps in single-tenant Azure Logic Apps
+# Edit host and app settings for Standard logic apps in single-tenant Azure Logic Apps
+ In *single-tenant* Azure Logic Apps, the *app settings* for a logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. Locally running workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
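For local development, these values typically live in a *local.settings.json* file in the project. The following is a minimal, hedged sketch with placeholder values; the setting names other than the standard `IsEncrypted`, `Values`, and `AzureWebJobsStorage` entries are examples, not required names:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "ServiceBus_ConnectionString": "{your local test connection string}"
  }
}
```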
logic-apps Estimate Storage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/estimate-storage-costs.md
Title: Estimate storage costs for single-tenant Azure Logic Apps
-description: Estimate storage costs for your workflows using the Logic Apps Storage Calculator.
+description: Estimate storage costs for Standard logic app workflows using the Logic Apps Storage Calculator.
ms.suite: integration
Last updated 11/10/2021
-# Estimate storage costs for workflows in single-tenant Azure Logic Apps
+# Estimate storage costs for Standard logic app workflows in single-tenant Azure Logic Apps
+ Azure Logic Apps uses [Azure Storage](../storage/index.yml) for any storage operations. In traditional *multi-tenant* Azure Logic Apps, any storage usage and costs are attached to the logic app. Now, in *single-tenant* Azure Logic Apps, you can use your own storage account. These storage costs are listed separately in your Azure billing invoice. This capability gives you more flexibility and control over your logic app data.
logic-apps Healthy Unhealthy Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/healthy-unhealthy-resource.md
Title: Set up logging to monitor logic apps in Azure Security Center
-description: Monitor the health of your Logic Apps resources in Azure Security Center by setting up diagnostic logging.
+description: Monitor health for Azure Logic Apps resources in Azure Security Center by setting up diagnostic logging.
ms.suite: integration
Last updated 12/07/2020
# Set up logging to monitor logic apps in Microsoft Defender for Cloud
-When you monitor your Logic Apps resources in [Microsoft Azure Security Center](../security-center/security-center-introduction.md), you can [review whether your logic apps are following the default policies](#view-logic-apps-health-status). Azure shows the health status for a Logic Apps resource after you enable logging and correctly set up the logs' destination. This article explains how to configure diagnostic logging and make sure that all your logic apps are healthy resources.
+When you monitor your Azure Logic Apps resources in [Microsoft Azure Security Center](../security-center/security-center-introduction.md), you can [review whether your logic apps are following the default policies](#view-logic-apps-health-status). Azure shows the health status for an Azure Logic Apps resource after you enable logging and correctly set up the logs' destination. This article explains how to configure diagnostic logging and make sure that all your logic apps are healthy resources.
> [!TIP]
-> To find the current status for the Logic Apps service, review the [Azure status page](https://status.azure.com/), which lists the status for different products and services in each available region.
+> To find the current status for the Azure Logic Apps service, review the [Azure status page](https://status.azure.com/), which lists the status for different products and services in each available region.
## Prerequisites
When you monitor your Logic Apps resources in [Microsoft Azure Security Center](
Before you can view the resource health status for your logic apps, you must first [set up diagnostic logging](monitor-logic-apps-log-analytics.md). If you already have a Log Analytics workspace, you can enable logging either when you create your logic app or on existing logic apps. > [!TIP]
-> The default recommendation is to enable diagnostic logs for Logic Apps. However, you control this setting for your logic apps. When you enable diagnostic logs for your logic apps, you can use the information to help analyze security incidents.
+> The default recommendation is to enable diagnostic logs for Azure Logic Apps. However, you control this setting for your logic apps. When you enable diagnostic logs for your logic apps, you can use the information to help analyze security incidents.
### Check diagnostic logging setting
If you're not sure whether your logic apps have diagnostic logging enabled, you
1. In the search bar, enter and select **Defender for Cloud**. 1. On the workload protection dashboard menu, under **General**, select **Recommendations**. 1. In the table of security suggestions, find and select **Enable auditing and logging** &gt; **Diagnostic logs in Logic Apps should be enabled** in the table of security controls.
-1. On the recommendation page, expand the **Remediation steps** section and review the options. You can enable Logic Apps diagnostics by selecting the **Quick Fix!** button, or by following the manual remediation instructions.
+1. On the recommendation page, expand the **Remediation steps** section and review the options. You can enable Azure Logic Apps diagnostics by selecting the **Quick Fix!** button, or by following the manual remediation instructions.
## View logic apps' health status
After you've [enabled diagnostic logging](#enable-diagnostic-logging), you can s
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search bar, enter and select **Defender for Cloud**. 1. On the workload protection dashboard menu, under **General**, select **Inventory**.
-1. On the inventory page, filter your assets list to show only Logic Apps resources. In the page menu, select **Resource types** &gt; **logic apps**.
+1. On the inventory page, filter your assets list to show only Azure Logic Apps resources. In the page menu, select **Resource types** &gt; **logic apps**.
The **Unhealthy Resources** counter shows the number of logic apps that Defender for Cloud considers unhealthy. 1. In the list of logic apps resources, review the **Recommendations** column. To review the health details for a specific logic app, select a resource name, or select the ellipses button (**...**) &gt; **View resource**.
If your [logic apps are listed as unhealthy in Defender for Cloud](#view-logic-a
### Log Analytics and Event Hubs destinations
-If you use Log Analytics or Event Hubs as the destination for your Logic Apps diagnostic logs, check the following settings.
+If you use Log Analytics or Event Hubs as the destination for your Azure Logic Apps diagnostic logs, check the following settings.
1. To confirm that you enabled diagnostic logs, check that the diagnostic settings `logs.enabled` field is set to `true`. 1. To confirm that you haven't set a storage account as the destination instead, check that the `storageAccountId` field is set to `false`.
For example:
### Storage account destination
-If you use a storage account as the destination for your Logic Apps diagnostic logs, check the following settings.
+If you use a storage account as the destination for your Azure Logic Apps diagnostic logs, check the following settings.
1. To confirm that you enabled diagnostic logs, check that the diagnostics settings `logs.enabled` field is set to `true`. 1. To confirm that you enabled a retention policy for your diagnostic logs, check that the `retentionPolicy.enabled` field is set to `true`.
logic-apps Logic Apps Batch Process Send Receive Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-batch-process-send-receive-messages.md
Last updated 07/31/2020
# Send, receive, and batch process messages in Azure Logic Apps + To send and process messages together in a specific way as groups, you can create a batching solution. This solution collects messages into a *batch* and waits until your specified criteria are met before releasing and processing the batched messages. Batching can reduce how often your logic app processes messages. This article shows how to build a batching solution by creating two logic apps within the same Azure subscription, Azure region, and in this order:
This article shows how to build a batching solution by creating two logic apps w
1. One or more ["batch sender"](#batch-sender) logic apps, which send the messages to the previously created batch receiver.
- You can also specify a unique key, such as a customer number, that *partitions* or divides the target batch into logical subsets based on that key. That way, the receiver app can collect all items with the same key and process them together.
+ The batch sender can specify a unique key that *partitions* or divides the target batch into logical subsets, based on that key. For example, a customer number is a unique key. That way, the receiver app can collect all items with the same key and process them together.
-Your batch receiver and batch sender needs to share the same Azure subscription *and* Azure region. If they don't, you can't select the batch receiver when you create the batch sender because they're not visible to each other.
+Your batch receiver and batch sender need to share the same Azure subscription *and* Azure region. If they don't, you can't select the batch receiver when you create the batch sender because they're not visible to each other.
## Prerequisites
Now create one or more batch sender logic apps that send messages to the batch r
![Set up a partition for your target batch](./media/logic-apps-batch-process-send-receive-messages/batch-sender-partition-advanced-options.png)
- This **rand** function generates a number between one and five. So you are dividing this batch into five numbered partitions, which this expression dynamically sets.
+ This **rand** function generates a number between one and five. So, you're dividing this batch into five numbered partitions, which this expression dynamically sets.
1. Save your logic app. Your sender logic app now looks similar to this example:
Now create one or more batch sender logic apps that send messages to the batch r
To test your batching solution, leave your logic apps running for a few minutes. Soon, you start getting emails in groups of five, all with the same partition key.
-Your batch sender logic app runs every minute, generates a random number between one and five, and uses this generated number as the partition key for the target batch where messages are sent. Each time the batch has five items with the same partition key, your batch receiver logic app fires and sends mail for each message.
+Your batch sender logic app runs every minute and generates a random number between one and five. The batch sender uses this random number as the partition key for the target batch where you send the messages. Each time the batch has five items with the same partition key, your batch receiver logic app fires and sends mail for each message.
> [!IMPORTANT] > When you're done testing, make sure that you disable the `BatchSender` logic app to stop sending messages and avoid overloading your inbox. ## Next steps
-* [Batch and send EDI messages](../logic-apps/logic-apps-scenario-edi-send-batch-messages.md)
-* [Build on logic app definitions by using JSON](../logic-apps/logic-apps-author-definitions.md)
-* [Exception handling and error logging for logic apps](../logic-apps/logic-apps-scenario-error-and-exception-handling.md)
+* [Batch and send EDI messages](../logic-apps/logic-apps-scenario-edi-send-batch-messages.md)
logic-apps Logic Apps Create Logic Apps From Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-logic-apps-from-templates.md
Last updated 10/15/2017
# Create logic app workflows from prebuilt templates + To get you started creating workflows more quickly, Logic Apps provides templates, which are prebuilt logic apps that follow commonly used patterns.
logic-apps Logic Apps Examples And Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-examples-and-scenarios.md
and [switch statements](../logic-apps/logic-apps-control-flow-switch-statement.m
* [Repeat steps or process items in arrays and collections with loops](../logic-apps/logic-apps-control-flow-loops.md) * [Group actions together with scopes](../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md) * [Add error and exception handling to a workflow](../logic-apps/logic-apps-exception-handling.md)
-* [Use case: How a healthcare company uses logic app exception handling for HL7 FHIR workflows](../logic-apps/logic-apps-scenario-error-and-exception-handling.md)
## Create custom APIs and connectors
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
To perform different exception handling patterns, you can use the expressions pr
## Set up Azure Monitor logs
-The previous patterns are useful ways to handle errors and exceptions that happen within a run. However, you can also identify and respond to errors that happen independently from the run. [Azure Monitor](../azure-monitor/overview.md) provides a streamlined way to send all workflow events, including all run and action statuses, to a destination. For example, you can send events to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md), [Azure storage account](../storage/blobs/storage-blobs-overview.md), or [Azure Event Hubs](../event-hubs/event-hubs-about.md).
+The previous patterns are useful ways to handle errors and exceptions that happen within a run. However, you can also identify and respond to errors that happen independently from the run. To evaluate run statuses, you can monitor the logs and metrics for your runs, or publish them into any monitoring tool that you prefer.
-To evaluate run statuses, you can monitor the logs and metrics, or publish them into any monitoring tool that you prefer. One potential option is to stream all the events through Event Hubs into [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/). In Stream Analytics, you can write live queries based on any anomalies, averages, or failures from the diagnostic logs. You can use Stream Analytics to send information to other data sources, such as queues, topics, SQL, Azure Cosmos DB, or Power BI.
+For example, [Azure Monitor](../azure-monitor/overview.md) provides a streamlined way to send all workflow events, including all run and action statuses, to a destination. You can [set up alerts for specific metrics and thresholds in Azure Monitor](monitor-logic-apps.md#set-up-monitoring-alerts). You can also send workflow events to a [Log Analytics workspace](../azure-monitor/logs/data-platform-logs.md) or [Azure storage account](../storage/blobs/storage-blobs-overview.md). Or, you can stream all events through [Azure Event Hubs](../event-hubs/event-hubs-about.md) into [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/). In Stream Analytics, you can write live queries based on any anomalies, averages, or failures from the diagnostic logs. You can use Stream Analytics to send information to other data sources, such as queues, topics, SQL, Azure Cosmos DB, or Power BI.
+
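For example, here's a hedged Azure CLI sketch, with all resource IDs and names as placeholders, that enables the `WorkflowRuntime` log category for a Consumption logic app and routes those logs to a Log Analytics workspace:

```azurecli
az monitor diagnostic-settings create \
    --name "send-logic-app-logs" \
    --resource "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>" \
    --workspace "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
    --logs '[{"category": "WorkflowRuntime", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```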
+For more information, review [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](monitor-logic-apps-log-analytics.md).
## Next steps
-* [See how a customer builds error handling with Azure Logic Apps](logic-apps-scenario-error-and-exception-handling.md)
-* [Find more Azure Logic Apps examples and scenarios](logic-apps-examples-and-scenarios.md)
+* [Learn more about Azure Logic Apps examples and scenarios](logic-apps-examples-and-scenarios.md)
logic-apps Logic Apps Scenario Edi Send Batch Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-edi-send-batch-messages.md
Last updated 08/19/2018
# Exchange EDI messages as batches or groups between trading partners in Azure Logic Apps + In business to business (B2B) scenarios, partners often exchange messages in groups or *batches*. When you build a batching solution with Logic Apps,
logic-apps Logic Apps Scenario Error And Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-error-and-exception-handling.md
- Title: Exception handling & error logging scenario
-description: Advanced exception handling and error logging in Azure Logic Apps.
---- Previously updated : 07/29/2016--
-# Scenario: Exception handling and error logging for logic apps
-
-This scenario describes how you can extend a logic app to better support exception handling.
-We've used a real-life use case to answer the question: "Does Azure Logic Apps support exception and error handling?"
-
-> [!NOTE]
-> The current Azure Logic Apps schema provides a standard template for action responses.
-> This template includes both internal validation and error responses returned from an API app.
-
-## Scenario and use case overview
-
-Here's the story as the use case for this scenario:
-
-A well-known healthcare organization engaged us to develop an Azure solution
-that would create a patient portal by using Microsoft Dynamics CRM Online.
-They needed to send appointment records between the Dynamics CRM Online patient portal and Salesforce.
-We were asked to use the [HL7 FHIR](https://www.hl7.org/implement/standards/fhir/) standard for all patient records.
-
-The project had two major requirements:
-
-* A method to log records sent from the Dynamics CRM Online portal
-* A way to view any errors that occurred within the workflow
-
-> [!TIP]
-> For a high-level video about this project, see
-> [Integration User Group](http://www.integrationusergroup.com/logic-apps-support-error-handling/ "Integration User Group").
-
-## How we solved the problem
-
-We chose [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/ "Azure Cosmos DB")
-as a repository for the log and error records (Cosmos DB refers to records as documents).
-Because Azure Logic Apps has a standard template for all responses,
-we would not have to create a custom schema. We could create an API app to **Insert** and **Query** for both error and log records.
-We could also define a schema for each within the API app.
-
-Another requirement was to purge records after a certain date.
-Cosmos DB has a property called [Time to Live](https://azure.microsoft.com/blog/documentdb-now-supports-time-to-live-ttl/ "Time to Live") (TTL),
-which allowed us to set a **Time to Live** value for each record or collection.
-This capability eliminated the need to manually delete records in Cosmos DB.
-
-> [!IMPORTANT]
-> To complete this tutorial, you need to create a Cosmos DB database and two collections (Logging and Errors).
-
-## Create the logic app
-
-The first step is to create the logic app and open the app in Logic App Designer.
-In this example, we are using parent-child logic apps.
-Let's assume that we have already created the parent and are going to create one child logic app.
-
-Because we are going to log the record coming out of Dynamics CRM Online,
-let's start at the top. We must use a **Request** trigger because the parent logic app triggers this child.
-
-### Logic app trigger
-
-We are using a **Request** trigger as shown in the following example:
-
-``` json
-"triggers": {
- "request": {
- "type": "request",
- "kind": "http",
- "inputs": {
- "schema": {
- "properties": {
- "CRMid": {
- "type": "string"
- },
- "recordType": {
- "type": "string"
- },
- "salesforceID": {
- "type": "string"
- },
- "update": {
- "type": "boolean"
- }
- },
- "required": [
- "CRMid",
- "recordType",
- "salesforceID",
- "update"
- ],
- "type": "object"
- }
- }
- }
- },
-
-```
--
-## Steps
-
-We must log the source (request) of the patient record from the Dynamics CRM Online portal.
-
-1. We must get a new appointment record from Dynamics CRM Online.
-
-    The trigger coming from CRM provides us with the **CRM PatientId**,
- **record type**, **New or Updated Record** (new or update Boolean value),
- and **SalesforceId**. The **SalesforceId** can be null because it's only used for an update.
- We get the CRM record by using the CRM **PatientID** and the **Record Type**.
-
-2. Next, we need to add our Azure Cosmos DB SQL API app **InsertLogEntry** operation as shown here in
-Logic App Designer.
-
- **Insert log entry**
-
- ![Screenshot from Logic App Designer showing the configuration settings for InsertLogEntry.](media/logic-apps-scenario-error-and-exception-handling/lognewpatient.png)
-
- **Insert error entry**
-
- ![Screenshot from Logic App Designer showing the configuration settings for CreateErrorRecord.](media/logic-apps-scenario-error-and-exception-handling/insertlogentry.png)
-
- **Check for create record failure**
-
- ![Screenshot of the CreateErrorRecord screen in the Logic App Designer showing the fields for creating an error entry.](media/logic-apps-scenario-error-and-exception-handling/condition.png)
-
-## Logic app source code
-
-> [!NOTE]
-> The following examples are samples only.
-> Because this tutorial is based on an implementation now in production,
-> the value of a **Source Node** might not display properties
-> that are related to scheduling an appointment.
-
-### Logging
-
-The following logic app code sample shows how to handle logging.
-
-#### Log entry
-
-Here is the logic app source code for inserting a log entry.
-
-``` json
-"InsertLogEntry": {
- "metadata": {
- "apiDefinitionUrl": "https://.../swagger/docs/v1",
- "swaggerSource": "website"
- },
- "type": "Http",
- "inputs": {
- "body": {
- "date": "@{outputs('Gets_NewPatientRecord')['headers']['Date']}",
- "operation": "New Patient",
- "patientId": "@{triggerBody()['CRMid']}",
- "providerId": "@{triggerBody()['providerID']}",
- "source": "@{outputs('Gets_NewPatientRecord')['headers']}"
- },
- "method": "post",
- "uri": "https://.../api/Log"
- },
- "runAfter": {
-      "Gets_NewPatientRecord": ["Succeeded"]
- }
-}
-```
-
-#### Log request
-
-Here is the log request message posted to the API app.
-
-``` json
- {
- "uri": "https://.../api/Log",
- "method": "post",
- "body": {
- "date": "Fri, 10 Jun 2016 22:31:56 GMT",
- "operation": "New Patient",
- "patientId": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0",
- "providerId": "",
- "source": "{/"Pragma/":/"no-cache/",/"x-ms-request-id/":/"e750c9a9-bd48-44c4-bbba-1688b6f8a132/",/"OData-Version/":/"4.0/",/"Cache-Control/":/"no-cache/",/"Date/":/"Fri, 10 Jun 2016 22:31:56 GMT/",/"Set-Cookie/":/"ARRAffinity=785f4334b5e64d2db0b84edcc1b84f1bf37319679aefce206b51510e56fd9770;Path=/;Domain=127.0.0.1/",/"Server/":/"Microsoft-IIS/8.0,Microsoft-HTTPAPI/2.0/",/"X-AspNet-Version/":/"4.0.30319/",/"X-Powered-By/":/"ASP.NET/",/"Content-Length/":/"1935/",/"Content-Type/":/"application/json; odata.metadata=minimal; odata.streaming=true/",/"Expires/":/"-1/"}"
- }
- }
-
-```
--
-#### Log response
-
-Here is the log response message from the API app.
-
-``` json
-{
- "statusCode": 200,
- "headers": {
- "Pragma": "no-cache",
- "Cache-Control": "no-cache",
- "Date": "Fri, 10 Jun 2016 22:32:17 GMT",
- "Server": "Microsoft-IIS/8.0",
- "X-AspNet-Version": "4.0.30319",
- "X-Powered-By": "ASP.NET",
- "Content-Length": "964",
- "Content-Type": "application/json; charset=utf-8",
- "Expires": "-1"
- },
- "body": {
- "ttl": 2592000,
- "id": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0_1465597937",
- "_rid": "XngRAOT6IQEHAAAAAAAAAA==",
- "_self": "dbs/XngRAA==/colls/XngRAOT6IQE=/docs/XngRAOT6IQEHAAAAAAAAAA==/",
- "_ts": 1465597936,
- "_etag": "/"0400fc2f-0000-0000-0000-575b3ff00000/"",
- "patientID": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0",
- "timestamp": "2016-06-10T22:31:56Z",
- "source": "{/"Pragma/":/"no-cache/",/"x-ms-request-id/":/"e750c9a9-bd48-44c4-bbba-1688b6f8a132/",/"OData-Version/":/"4.0/",/"Cache-Control/":/"no-cache/",/"Date/":/"Fri, 10 Jun 2016 22:31:56 GMT/",/"Set-Cookie/":/"ARRAffinity=785f4334b5e64d2db0b84edcc1b84f1bf37319679aefce206b51510e56fd9770;Path=/;Domain=127.0.0.1/",/"Server/":/"Microsoft-IIS/8.0,Microsoft-HTTPAPI/2.0/",/"X-AspNet-Version/":/"4.0.30319/",/"X-Powered-By/":/"ASP.NET/",/"Content-Length/":/"1935/",/"Content-Type/":/"application/json; odata.metadata=minimal; odata.streaming=true/",/"Expires/":/"-1/"}",
- "operation": "New Patient",
- "salesforceId": "",
- "expired": false
- }
-}
-
-```
-
-Now let's look at the error handling steps.
-
-### Error handling
-
-The following logic app code sample shows how you can implement error handling.
-
-#### Create error record
-
-Here is the logic app source code for creating an error record.
-
-``` json
-"actions": {
- "CreateErrorRecord": {
- "metadata": {
- "apiDefinitionUrl": "https://.../swagger/docs/v1",
- "swaggerSource": "website"
- },
- "type": "Http",
- "inputs": {
- "body": {
- "action": "New_Patient",
- "isError": true,
- "crmId": "@{triggerBody()['CRMid']}",
- "patientID": "@{triggerBody()['CRMid']}",
- "message": "@{body('Create_NewPatientRecord')['message']}",
- "providerId": "@{triggerBody()['providerId']}",
- "severity": 4,
- "source": "@{actions('Create_NewPatientRecord')['inputs']['body']}",
- "statusCode": "@{int(outputs('Create_NewPatientRecord')['statusCode'])}",
- "salesforceId": "",
- "update": false
- },
- "method": "post",
- "uri": "https://.../api/CrMtoSfError"
- },
- "runAfter":
- {
- "Create_NewPatientRecord": ["Failed" ]
- }
- }
-}
-```
-
-#### Insert error into Cosmos DB--request
-
-``` json
-
-{
- "uri": "https://.../api/CrMtoSfError",
- "method": "post",
- "body": {
- "action": "New_Patient",
- "isError": true,
- "crmId": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0",
- "patientId": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0",
- "message": "Salesforce failed to complete task: Message: duplicate value found: Account_ID_MED__c duplicates value on record with id: 001U000001c83gK",
- "providerId": "",
- "severity": 4,
- "salesforceId": "",
- "update": false,
- "source": "{/"Account_Class_vod__c/":/"PRAC/",/"Account_Status_MED__c/":/"I/",/"CRM_HUB_ID__c/":/"6b115f6d-a7ee-e511-80f5-3863bb2eb2d0/",/"Credentials_vod__c/",/"DTC_ID_MED__c/":/"/",/"Fax/":/"/",/"FirstName/":/"A/",/"Gender_vod__c/":/"/",/"IMS_ID__c/":/"/",/"LastName/":/"BAILEY/",/"MasterID_mp__c/":/"/",/"C_ID_MED__c/":/"851588/",/"Middle_vod__c/":/"/",/"NPI_vod__c/":/"/",/"PDRP_MED__c/":false,/"PersonDoNotCall/":false,/"PersonEmail/":/"/",/"PersonHasOptedOutOfEmail/":false,/"PersonHasOptedOutOfFax/":false,/"PersonMobilePhone/":/"/",/"Phone/":/"/",/"Practicing_Specialty__c/":/"FM - FAMILY MEDICINE/",/"Primary_City__c/":/"/",/"Primary_State__c/":/"/",/"Primary_Street_Line2__c/":/"/",/"Primary_Street__c/":/"/",/"Primary_Zip__c/":/"/",/"RecordTypeId/":/"012U0000000JaPWIA0/",/"Request_Date__c/":/"2016-06-10T22:31:55.9647467Z/",/"ONY_ID__c/":/"/",/"Specialty_1_vod__c/":/"/",/"Suffix_vod__c/":/"/",/"Website/":/"/"}",
- "statusCode": "400"
- }
-}
-```
-
-#### Insert error into Cosmos DB--response
-
-``` json
-{
- "statusCode": 200,
- "headers": {
- "Pragma": "no-cache",
- "Cache-Control": "no-cache",
- "Date": "Fri, 10 Jun 2016 22:31:57 GMT",
- "Server": "Microsoft-IIS/8.0",
- "X-AspNet-Version": "4.0.30319",
- "X-Powered-By": "ASP.NET",
- "Content-Length": "1561",
- "Content-Type": "application/json; charset=utf-8",
- "Expires": "-1"
- },
- "body": {
- "id": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0-1465597917",
- "_rid": "sQx2APhVzAA8AAAAAAAAAA==",
- "_self": "dbs/sQx2AA==/colls/sQx2APhVzAA=/docs/sQx2APhVzAA8AAAAAAAAAA==/",
- "_ts": 1465597912,
- "_etag": "/"0c00eaac-0000-0000-0000-575b3fdc0000/"",
- "prescriberId": "6b115f6d-a7ee-e511-80f5-3863bb2eb2d0",
- "timestamp": "2016-06-10T22:31:57.3651027Z",
- "action": "New_Patient",
- "salesforceId": "",
- "update": false,
- "body": "CRM failed to complete task: Message: duplicate value found: CRM_HUB_ID__c duplicates value on record with id: 001U000001c83gK",
- "source": "{/"Account_Class_vod__c/":/"PRAC/",/"Account_Status_MED__c/":/"I/",/"CRM_HUB_ID__c/":/"6b115f6d-a7ee-e511-80f5-3863bb2eb2d0/",/"Credentials_vod__c/":/"DO - Degree level is DO/",/"DTC_ID_MED__c/":/"/",/"Fax/":/"/",/"FirstName/":/"A/",/"Gender_vod__c/":/"/",/"IMS_ID__c/":/"/",/"LastName/":/"BAILEY/",/"MterID_mp__c/":/"/",/"Medicis_ID_MED__c/":/"851588/",/"Middle_vod__c/":/"/",/"NPI_vod__c/":/"/",/"PDRP_MED__c/":false,/"PersonDoNotCall/":false,/"PersonEmail/":/"/",/"PersonHasOptedOutOfEmail/":false,/"PersonHasOptedOutOfFax/":false,/"PersonMobilePhone/":/"/",/"Phone/":/"/",/"Practicing_Specialty__c/":/"FM - FAMILY MEDICINE/",/"Primary_City__c/":/"/",/"Primary_State__c/":/"/",/"Primary_Street_Line2__c/":/"/",/"Primary_Street__c/":/"/",/"Primary_Zip__c/":/"/",/"RecordTypeId/":/"012U0000000JaPWIA0/",/"Request_Date__c/":/"2016-06-10T22:31:55.9647467Z/",/"XXXXXXX/":/"/",/"Specialty_1_vod__c/":/"/",/"Suffix_vod__c/":/"/",/"Website/":/"/"}",
- "code": 400,
- "errors": null,
- "isError": true,
- "severity": 4,
- "notes": null,
- "resolved": 0
- }
-}
-```
-
-#### Salesforce error response
-
-``` json
-{
- "statusCode": 400,
- "headers": {
- "Pragma": "no-cache",
- "x-ms-request-id": "3e8e4884-288e-4633-972c-8271b2cc912c",
- "X-Content-Type-Options": "nosniff",
- "Cache-Control": "no-cache",
- "Date": "Fri, 10 Jun 2016 22:31:56 GMT",
- "Set-Cookie": "ARRAffinity=785f4334b5e64d2db0b84edcc1b84f1bf37319679aefce206b51510e56fd9770;Path=/;Domain=127.0.0.1",
- "Server": "Microsoft-IIS/8.0,Microsoft-HTTPAPI/2.0",
- "X-AspNet-Version": "4.0.30319",
- "X-Powered-By": "ASP.NET",
- "Content-Length": "205",
- "Content-Type": "application/json; charset=utf-8",
- "Expires": "-1"
- },
- "body": {
- "status": 400,
- "message": "Salesforce failed to complete task: Message: duplicate value found: Account_ID_MED__c duplicates value on record with id: 001U000001c83gK",
- "source": "Salesforce.Common",
- "errors": []
- }
-}
-
-```
-
-### Return the response back to parent logic app
-
-After you get the response, you can pass the response back to the parent logic app.
-
-#### Return success response to parent logic app
-
-``` json
-"SuccessResponse": {
- "runAfter":
- {
- "UpdateNew_CRMPatientResponse": ["Succeeded"]
- },
- "inputs": {
- "body": {
- "status": "Success"
- },
- "headers": {
-      "Content-type": "application/json",
- "x-ms-date": "@utcnow()"
- },
- "statusCode": 200
- },
- "type": "Response"
-}
-```
-
-#### Return error response to parent logic app
-
-``` json
-"ErrorResponse": {
- "runAfter":
- {
- "Create_NewPatientRecord": ["Failed"]
- },
- "inputs": {
- "body": {
- "status": "BadRequest"
- },
- "headers": {
- "Content-type": "application/json",
- "x-ms-date": "@utcnow()"
- },
- "statusCode": 400
- },
- "type": "Response"
-}
-
-```
--
-## Cosmos DB repository and portal
-
-Our solution added capabilities with [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db).
-
-### Error management portal
-
-To view the errors, you can create an MVC web app to display the error records from Cosmos DB.
-The **List**, **Details**, **Edit**, and **Delete** operations are included in the current version.
-
-> [!NOTE]
-> Edit operation: Cosmos DB replaces the entire document.
-> The records shown in the **List** and **Detail** views are samples only.
-> They are not actual patient appointment records.
-
-Here are examples of our MVC app details created with the previously described approach.
-
-#### Error management list
-![Error List](media/logic-apps-scenario-error-and-exception-handling/errorlist.png)
-
-#### Error management detail view
-![Error Details](media/logic-apps-scenario-error-and-exception-handling/errordetails.png)
-
-### Log management portal
-
-To view the logs, we also created an MVC web app.
-Here are examples of our MVC app details created with the previously described approach.
-
-#### Sample log detail view
-![Log Detail View](media/logic-apps-scenario-error-and-exception-handling/samplelogdetail.png)
-
-### API app details
-
-#### Logic Apps exception management API
-
-Our open-source Azure Logic Apps exception management API app
-provides the functionality described here through two controllers:
-
-* **ErrorController** inserts an error record (document) in an Azure Cosmos DB collection.
-* **LogController** inserts a log record (document) in an Azure Cosmos DB collection.
-
-> [!TIP]
-> Both controllers use `async Task<dynamic>` operations,
-> allowing operations to resolve at runtime,
-> so we can create the Azure Cosmos DB schema in the body of the operation.
->
-
-Every document in Azure Cosmos DB must have a unique ID.
-We are using `PatientId` and adding a timestamp that is converted to a Unix timestamp value (double).
-We truncate the value to remove the fractional value.
-
-You can view the source code of our error controller API from
-[GitHub](https://github.com/HEDIDIN/LogicAppsExceptionManagementApi/blob/master/LogicAppsExceptionManagementApi/Controllers/LogController.cs).
-
-We call the API from a logic app by using the following syntax:
-
-``` json
- "actions": {
- "CreateErrorRecord": {
- "metadata": {
- "apiDefinitionUrl": "https://.../swagger/docs/v1",
- "swaggerSource": "website"
- },
- "type": "Http",
- "inputs": {
- "body": {
- "action": "New_Patient",
- "isError": true,
- "crmId": "@{triggerBody()['CRMid']}",
- "prescriberId": "@{triggerBody()['CRMid']}",
- "message": "@{body('Create_NewPatientRecord')['message']}",
- "salesforceId": "@{triggerBody()['salesforceID']}",
- "severity": 4,
- "source": "@{actions('Create_NewPatientRecord')['inputs']['body']}",
- "statusCode": "@{int(outputs('Create_NewPatientRecord')['statusCode'])}",
- "update": false
- },
- "method": "post",
- "uri": "https://.../api/CrMtoSfError"
- },
- "runAfter": {
- "Create_NewPatientRecord": ["Failed"]
- }
- }
- }
-```
-
-The expression in the preceding code sample checks for the *Create_NewPatientRecord* status of **Failed**.
-
-## Summary
-
-* You can easily implement logging and error handling in a logic app.
-* You can use Azure Cosmos DB as the repository for log and error records (documents).
-* You can use MVC to create a portal to display log and error records.
-
-### Source code
-
-The source code for the Logic Apps exception management API application is available in this
-[GitHub repository](https://github.com/HEDIDIN/LogicAppsExceptionManagementApi "Logic App Exception Management API").
-
-## Next steps
-
-* [View more logic app examples and scenarios](../logic-apps/logic-apps-examples-and-scenarios.md)
-* [Monitor logic apps](../logic-apps/monitor-logic-apps.md)
-* [Automate logic app deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md)
logic-apps Logic Apps Scenario Social Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-social-serverless.md
Last updated 03/15/2018
# Create a streaming customer insights dashboard with Azure Logic Apps and Azure Functions + Azure offers [serverless](https://azure.microsoft.com/solutions/serverless/) tools that help you quickly build and host apps in the cloud, without having to think about infrastructure. In this tutorial, you can create a dashboard that triggers on customer feedback,
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-visual-studio.md
Last updated 01/28/2022
# Manage logic apps with Visual Studio + Although you can create, edit, manage, and deploy logic apps in the [Azure portal](https://portal.azure.com), you can also use Visual Studio when you want to add your logic apps to source control, publish different versions, and create [Azure Resource Manager](../azure-resource-manager/management/overview.md) templates for various deployment environments. With Visual Studio Cloud Explorer, you can find and manage your logic apps along with other Azure resources. For example, you can open, download, edit, run, view run history, disable, and enable logic apps that are already deployed in the Azure portal. If you're new to working with Azure Logic Apps in Visual Studio, learn [how to create logic apps with Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md). You can also [manage your logic apps in the Azure portal](manage-logic-apps-with-azure-portal.md).
logic-apps Quickstart Create Deploy Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-azure-resource-manager-template.md
Last updated 04/27/2022
# Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with an ARM template + [Azure Logic Apps](logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by choosing from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying an Azure Resource Manager template (ARM template) to create a basic [Consumption logic app workflow](logic-apps-overview.md#resource-environment-differences) that checks the status for Azure on an hourly schedule and runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences). [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
logic-apps Quickstart Create Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-bicep.md
Last updated 04/07/2022
# Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with Bicep + [Azure Logic Apps](logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by choosing from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying a Bicep file to create a basic [Consumption logic app workflow](logic-apps-overview.md#resource-environment-differences) that checks the status for Azure on an hourly schedule and runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences). [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
Last updated 05/02/2022
# Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal + This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account. More specifically, you create a [Consumption plan-based](logic-apps-pricing.md#consumption-pricing) logic app resource and workflow that uses the RSS connector and the Office 365 Outlook connector. This resource runs in [*multi-tenant* Azure Logic Apps](logic-apps-overview.md). > [!NOTE]
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
Last updated 02/02/2022
# Quickstart: Create and manage logic app workflow definitions with multi-tenant Azure Logic Apps and Visual Studio Code + This quickstart shows how to create and manage logic app workflows that help you automate tasks and processes that integrate apps, data, systems, and services across organizations and enterprises by using multi-tenant [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and Visual Studio Code. You can create and edit the underlying workflow definitions, which use JavaScript Object Notation (JSON), for logic apps through a code-based experience. You can also work on existing logic apps that are already deployed to Azure. For more information about multi-tenant versus single-tenant model, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md). Although you can perform these same tasks in the [Azure portal](https://portal.azure.com) and in Visual Studio, you can get started faster in Visual Studio Code when you're already familiar with logic app definitions and want to work directly in code. For example, you can disable, enable, delete, and refresh already created logic apps. Also, you can work on logic apps and integration accounts from any development platform where Visual Studio Code runs, such as Linux, Windows, and Mac.
logic-apps Quickstart Create Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-with-visual-studio.md
Last updated 05/25/2021
# Quickstart: Create automated integration workflows with multi-tenant Azure Logic Apps and Visual Studio + This quickstart shows how to design, develop, and deploy automated workflows that integrate apps, data, systems, and services across enterprises and organizations by using multi-tenant [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and Visual Studio. Although you can perform these tasks in the Azure portal, Visual Studio lets you add your logic apps to source control, publish different versions, and create Azure Resource Manager templates for different deployment environments. For more information about multi-tenant versus single-tenant model, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md). If you're new to Azure Logic Apps and just want the basic concepts, try the [quickstart for creating a logic app in the Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md). The Logic App Designer works similarly in both the Azure portal and Visual Studio.
logic-apps Quickstart Logic Apps Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-cli.md
Last updated 05/03/2022
# Quickstart: Create and manage workflows with Azure CLI in Azure Logic Apps + This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using the [Azure CLI Logic Apps extension](/cli/azure/logic) (`az logic`). From the command line, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running operations such as `list`, `show` (`get`), `update`, and `delete` from the command line. > [!WARNING]
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
Last updated 05/03/2022
# Quickstart: Create and manage workflows with Azure PowerShell in Azure Logic Apps + This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using [Azure PowerShell](/powershell/azure/install-az-ps). From PowerShell, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running the cmdlets in the [Az.LogicApp](/powershell/module/az.logicapp/) PowerShell module. > [!NOTE]
logic-apps Sample Logic Apps Cli Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sample-logic-apps-cli-script.md
Last updated 07/30/2020
# Azure CLI script sample - create a logic app + This script creates a sample logic app through the [Azure CLI Logic Apps extension](/cli/azure/logic), (`az logic`). For a detailed guide to creating and managing logic apps through the Azure CLI, see the [Logic Apps quickstart for the Azure CLI](quickstart-logic-apps-azure-cli.md). > [!WARNING]
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
Last updated 03/11/2022
# Secure traffic between single-tenant Standard logic apps and Azure virtual networks using private endpoints and VNet integration + To securely and privately communicate between your workflow in a Standard logic app and an Azure virtual network, you can set up *private endpoints* for inbound traffic and use VNet integration for outbound traffic. A private endpoint is a network interface that privately and securely connects to a service powered by Azure Private Link. This service can be an Azure service such as Azure Logic Apps, Azure Storage, Azure Cosmos DB, SQL, or your own Private Link Service. The private endpoint uses a private IP address from your virtual network, which effectively brings the service into your virtual network.
logic-apps Send Related Messages Sequential Convoy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/send-related-messages-sequential-convoy.md
Last updated 05/29/2020
# Send related messages in order by using a sequential convoy in Azure Logic Apps with Azure Service Bus + When you need to send correlated messages in a specific order, you can follow the [*sequential convoy* pattern](/azure/architecture/patterns/sequential-convoy) when using [Azure Logic Apps](../logic-apps/logic-apps-overview.md) by using the [Azure Service Bus connector](../connectors/connectors-create-api-servicebus.md). Correlated messages have a property that defines the relationship between those messages, such as the ID for the [session](../service-bus-messaging/message-sessions.md) in Service Bus. For example, suppose that you have 10 messages for a session named "Session 1", and you have 5 messages for a session named "Session 2" that are all sent to the same [Service Bus queue](../service-bus-messaging/service-bus-queues-topics-subscriptions.md). You can create a logic app that processes messages from the queue so that all messages from "Session 1" are handled by a single trigger run and all messages from "Session 2" are handled by the next trigger run.
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
Last updated 02/14/2022
# Set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps + This article shows how to deploy a Standard logic app project to single-tenant Azure Logic Apps from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md). ## Prerequisites
logic-apps Set Up Sql Db Storage Single Tenant Standard Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-sql-db-storage-single-tenant-standard-workflows.md
# Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps (preview) + > [!IMPORTANT] > This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
logic-apps Tutorial Build Schedule Recurring Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md
Last updated 03/24/2021
# Tutorial: Create schedule-based and recurring automation workflows with Azure Logic Apps + This tutorial shows how to build an example [logic app](../logic-apps/logic-apps-overview.md) that automates a workflow that runs on a recurring schedule. Specifically, this example logic app checks the travel time, including the traffic, between two places and runs every weekday morning. If the time exceeds a specific limit, the logic app sends you an email that includes the travel time and the extra time necessary to arrive at your destination. The workflow includes various steps, which start with a schedule-based trigger followed by a Bing Maps action, a data operations action, a control flow action, and an email notification action. In this tutorial, you learn how to:
logic-apps Tutorial Process Email Attachments Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-email-attachments-workflow.md
Last updated 03/24/2021
# Tutorial: Automate tasks to process emails by using Azure Logic Apps, Azure Functions, and Azure Storage + Azure Logic Apps helps you automate workflows and integrate data across Azure services, Microsoft services, other software-as-a-service (SaaS) apps, and on-premises systems. This tutorial shows how you can build a [logic app](../logic-apps/logic-apps-overview.md) that handles incoming emails and any attachments. This logic app analyzes the email content, saves the content to Azure storage, and sends notifications for reviewing that content. In this tutorial, you learn how to:
logic-apps Tutorial Process Mailing List Subscriptions Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md
Last updated 03/24/2021
# Tutorial: Create automated approval-based workflows by using Azure Logic Apps + This tutorial shows how to build an example [logic app](../logic-apps/logic-apps-overview.md) that automates an approval-based workflow. Specifically, this example logic app processes subscription requests for a mailing list that's managed by the [MailChimp](https://mailchimp.com/) service. This logic app includes various steps, which start by monitoring an email account for requests, sends these requests for approval, checks whether or not the request gets approval, adds approved members to the mailing list, and confirms whether or not new members get added to the list. In this tutorial, you learn how to:
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Azure Machine Learning supports the following unmanaged compute types:
* Azure Databricks * Azure Data Lake Analytics * Azure Container Instance
-* Azure Kubernetes Service & Azure Arc-enabled Kubernetes (preview)
+* Kubernetes
For more information, see [set up compute targets for model training and deployment](how-to-attach-compute-targets.md)
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Title: Data access
-description: Learn how to connect to your data storage on Azure with Azure Machine Learning.
+description: Learn how to access and process data in Azure Machine Learning
> * [v1](./v1/concept-data.md) > * [v2 (current version)](concept-data.md)
-Azure Machine Learning makes it easy to connect to your data in the cloud. It provides an abstraction layer over the underlying storage service, so you can securely access and work with your data without having to write code specific to your storage type. Azure Machine Learning also provides the following data capabilities:
+Azure Machine Learning lets you bring data from your local machine or from existing cloud-based storage. In this article, you learn about the main data concepts in Azure Machine Learning, including:
-* Interoperability with Pandas and Spark DataFrames
-* Versioning and tracking of data lineage
-* Data labeling (V1 only for now)
+> [!div class="checklist"]
+> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that references a storage location on your local computer or in the cloud, so your jobs can easily access that data.
+> - [**Data asset**](#data-asset) - Create data assets in your workspace to share with team members, version, and track data lineage.
+> - [**Datastore**](#datastore) - Azure Machine Learning Datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
+> - [**MLTable**](#mltable) - a method to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
-You can bring data to Azure Machine Learning
+## URIs
+A URI (uniform resource identifier) represents a storage location on your local computer, an attached Datastore, blob/ADLS storage, or a publicly available http(s) location. In addition to local paths (for example: `./path_to_my_data/`), several different protocols are supported for cloud storage locations:
-* Directly from your local machine and URLs
+- `http(s)` - Private/Public Azure Blob Storage Locations, or publicly available http(s) location
+- `abfs(s)` - Azure Data Lake Storage Gen2 storage location
+- `azureml` - An Azure Machine Learning [Datastore](#datastore) location
-* That's already in a cloud-based storage service in Azure and access it using your [Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal) related credentials and an Azure Machine Learning datastore.
+Azure Machine Learning distinguishes two types of URIs:
-<a name="datastores"></a>
-## Connect to storage with datastores
+Data type | Description | Examples
+||
+`uri_file` | Refers to a specific **file** location | `https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>`<br> `azureml://datastores/<datastore_name>/paths/<folder>/<file>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>`
+`uri_folder`| Refers to a specific **folder** location | `https://<account_name>.blob.core.windows.net/<container_name>/<folder>`<br> `azureml://datastores/<datastore_name>/paths/<folder>` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/`
+
+URIs are mapped to the filesystem on the compute target, so using a URI in the command that consumes or produces it works just like using a local file or folder. URIs use **identity-based authentication** to connect to storage services, with either your Azure Active Directory ID (default) or a managed identity.
+
+> [!TIP]
+> For data located in an Azure storage account we recommend using the [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/#overview). You can browse data and obtain the URI for any file/folder by right-selecting **Copy URL**:
+> :::image type="content" source="media/concept-data/use-storage-explorer.png" alt-text="Screenshot of the Storage Explorer with Copy URL highlighted.":::
+
+### Examples
+
+# [`uri_file`](#tab/uri-file-example)
+
+Below is an example of a job specification that shows how to access a file from a public blob store. In this example, the job executes the Linux `ls` command.
+
+```yml
+# hello-data-uri-file.yml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: |
+ ls ${{inputs.my_csv_file}}
+
+inputs:
+ my_csv_file:
+ type: uri_file
+ path: https://azuremlexamples.blob.core.windows.net/datasets/titanic.csv
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+```
+
+Create the job using the CLI:
+
+```azurecli
+az ml job create --file hello-data-uri-file.yml
+```
+
+When the job completes, the user logs show the standard output of the Linux command `ls ${{inputs.my_csv_file}}`:
++
+Notice that the file has been mapped to the filesystem on the compute target and `${{inputs.my_csv_file}}` resolves to that location.
+
+# [`uri_folder`](#tab/uri-folder-example)
+
+In the case where you want to map a **folder** to the filesystem of the compute target, you define the `uri_folder` type in your job specification file:
+
+```yml
+# hello-data-uri-folder.yml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: |
+ ls ${{inputs.sampledata}}
+inputs:
+ sampledata:
+ type: uri_folder
+ path: https://<account_name>.blob.core.windows.net/<container_name>/<folder>
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+```
+
+Create the job using the CLI:
+
+```azurecli
+az ml job create --file hello-data-uri-folder.yml
+```
+
+When the job completes, the user logs show the standard output of the Linux command `ls ${{inputs.sampledata}}`:
++
+Notice that the folder has been mapped to the filesystem on the compute target (you can see all the files in the folder), and `${{inputs.sampledata}}` resolves to the folder location.
++
-Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
+## Data asset
-You can access your data and create datastores with,
-* [Credential-based data authentication](how-to-access-data.md), like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have *Reader* access to the workspace.
-* Identity-based data authentication to connect to storage services with your Azure Active Directory ID.
+Azure Machine Learning allows you to create and version data assets in a workspace so that other members of your team can easily consume the data asset by using a name/version.
+
+### Example usage
++
+# [Create data asset](#tab/cli-data-create-example)
+To create a data asset, first define a data specification in a YAML file that provides a name, type, and path for the data:
+
+```yml
+# data-example.yml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: <name>
+description: <description>
+type: <type> # uri_file, uri_folder, mltable
+path: https://<storage_name>.blob.core.windows.net/<container_name>/path
+```
-The following table summarizes which cloud-based storage services in Azure can be registered as datastores and what authentication type can be used to access them.
+Then in the CLI, create the data asset:
+
+```azurecli
+az ml data create --file data-example.yml --version 1
+```
+
+# [Consume data asset](#tab/cli-data-consume-example)
+
+To consume a data asset in a job, set the path in your job specification YAML file to `azureml:<NAME_OF_DATA_ASSET>:<VERSION>` (or `azureml:<NAME_OF_DATA_ASSET>@latest` for the latest version), for example:
+
+```yml
+# hello-data-uri-file.yml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+command: |
+ ls ${{inputs.sampledata}}
+code: src
+inputs:
+ sampledata:
+ type: <type> # uri_file, uri_folder, mltable
+ path: azureml:<data_name>@latest
+environment: azureml:<environment_name>@latest
+compute: azureml:<compute_name>
+```
+
+Next, use the CLI to create your job:
+
+```azurecli
+az ml job create --file hello-data-uri-file.yml
+```
+++
+## Datastore
+
+An Azure Machine Learning datastore is a *reference* to an *existing* storage account on Azure. The benefits of creating and using a datastore are:
+
+1. A common and easy-to-use API to interact with different storage types (Blob/Files/ADLS).
+1. Easier to discover useful datastores when working as a team.
+1. When using credential-based access (service principal/SAS/key), the connection information is secured so you don't have to code it in your scripts.
+
+When you create a datastore with an existing storage account on Azure, you have the choice between two different authentication methods:
+
+- **Credential-based** - authenticate access to the data using a service principal, shared access signature (SAS) token or account key. These credentials can be accessed by users who have *Reader* access to the workspace.
+- **Identity-based** - authenticate access to the data using your Azure Active Directory identity or managed identity.
+
+The table below summarizes which cloud-based storage services in Azure can be created as an Azure Machine Learning datastore and what authentication type can be used to access them.
Supported storage service | Credential-based authentication | Identity-based authentication
---|:-:|:-:|
Azure File Share| ✓ | |
Azure Data Lake Gen1 | ✓ | ✓|
Azure Data Lake Gen2| ✓ | ✓|
+> [!NOTE]
+> The URI format to refer to a file/folder/mltable on a datastore is:
+> `azureml://datastores/<name>/paths/<path>`
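For example, here's a hedged sketch of a credential-based blob datastore definition and the CLI command that creates it. The names, storage account, container, and key are placeholders, not values from this article:

```yml
# blob-datastore-example.yml
$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
name: my_blob_datastore
description: Datastore that points to an existing blob container
type: azure_blob
account_name: <storage_account_name>
container_name: <container_name>
credentials:
  account_key: <account_key>
```

```azurecli
az ml datastore create --file blob-datastore-example.yml
```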
-## Work with data
-You can read in data from a datastore or directly from storage uri's.
+## MLTable
+`mltable` is a way to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
-In Azure Machine Learning there are three types for data
+> [!TIP]
+> The ideal scenarios to use `mltable` are:
+> - The schema of your data is complex and/or changes frequently.
+> - You only need a subset of data (for example: a sample of rows or files, specific columns, etc).
+> - AutoML jobs requiring tabular data.
+>
+> If your scenario doesn't fit the above, URIs are likely a more suitable type.
-Data type | Description | Example
-||
-`uri_file` | Refers to a specific file | `https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv`.
-`uri_folder`| Refers to a specific folder |`https://<account_name>.blob.core.windows.net/<container_name>/path`
-`mltable` |Defines tabular data for use in automated ML and parallel jobs| Schema and subsetting transforms
+### A motivating example
+
+Imagine a scenario where you have many text files in a folder:
+
+```text
+├── my_data
+│ ├── file1.txt
+│ ├── file1_use_this.txt
+│ ├── file2.txt
+│ ├── file2_use_this.txt
+.
+.
+.
+│ ├── file1000.txt
+│ ├── file1000_use_this.txt
+```
-In the following example, the expectation is to provide a `uri_folder` because to read the file in, the training script creates a path that joins the folder with the file name. If you want to pass in just an individual file rather than the entire folder you can use the `uri_file` type.
+Each text file has the following structure:
+
+```text
+store_location date zip_code amount x y z noise_col1 noise_col2
+Seattle 20/04/2022 12324 123.4 true false true blah blah
+.
+.
+.
+London 20/04/2022 XX358YY 156 true true true blah blah
+```
+
+Some important features of this data are:
+
+- The data of interest is only in files that have the `_use_this.txt` suffix; file names that don't match should be ignored.
+- The date should be represented as a date and not a string.
+- The x, y, z columns are booleans, not strings.
+- The store location is an index that is useful for generating subsets of data.
+- The file is encoded in `ascii` format.
+- Every file in the folder contains the same header.
+- The first million records for zip_code are numeric but later on you can see they're alphanumeric.
+- There are some dummy (noisy) columns in the data that aren't useful for machine learning.
+
+You could materialize the above text files into a dataframe using Pandas and a URI:
```python
- file_name = os.path.join(args.input_folder, "MY_CSV_FILE.csv")
-df = pd.read_csv(file_name)
+import glob
+import os
+import argparse
+import pandas as pd
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--input_folder", type=str)
+args = parser.parse_args()
+
+# match only the files of interest
+path = os.path.join(args.input_folder, "*_use_this.txt")
+files = glob.glob(path)
+
+# create empty list that collects one dataframe per file
+dfl = []
+
+# dict of column types (the date column is parsed separately via parse_dates)
+col_types = {
+    "zip_code": str,
+    "x": bool,
+    "y": bool,
+    "z": bool
+}
+
+# enumerate files into a list of dfs
+for f in files:
+    csv = pd.read_table(
+        f,
+        delimiter=" ",
+        header=0,
+        usecols=["store_location", "zip_code", "date", "amount", "x", "y", "z"],
+        dtype=col_types,
+        parse_dates=["date"],
+        encoding="ascii"
+    )
+    dfl.append(csv)
+
+# concatenate the list of dataframes
+df = pd.concat(dfl)
+# set the index column
+df = df.set_index("store_location")
+```
+
+However, it's the responsibility of the data asset's *consumer* to parse the schema and materialize the data into a dataframe. In the scenario defined above, that means each consumer has to independently work out the Python code that materializes the data into a dataframe.
+
+Passing responsibility to the consumer of the data asset will cause problems when:
+
+- **The schema changes (for example, a column name changes):** All consumers of the data must update their Python code independently. Other examples can be type changes, columns being added/removed, encoding change, etc.
+- **The data size increases** - If the data gets too large for Pandas to process, then all the consumers of the data need to switch to a more scalable library (PySpark/Dask).
+
+Under the above two conditions, `mltable` can help because it enables the creator of the data asset to define the schema in a single file and the consumers can materialize the data into a dataframe easily without needing to write Python code to parse the schema. For the above example, the creator of the data asset defines an MLTable file **in the same directory** as the data:
+
+```text
+├── my_data
+│ ├── MLTable
+│ ├── file1.txt
+│ ├── file1_use_this.txt
+.
+.
+.
+```
+
+The MLTable file has the following definition that specifies how the data should be processed into a dataframe:
+
+```yaml
+type: mltable
+
+paths:
+ - pattern: ./*_use_this.txt
+
+traits:
+ - index_columns: store_location
+
+transformations:
+ - read_delimited:
+ encoding: ascii
+ header: all_files_have_same_headers
+ delimiter: " "
+ - keep_columns: ["store_location", "zip_code", "date", "amount", "x", "y", "z"]
+ - convert_column_types:
+ - columns: ["x", "y", "z"]
+ to_type: boolean
+ - columns: "date"
+ to_type: datetime
+```
+
+Consumers can read the data into a dataframe with three lines of Python code:
+
+```python
+import mltable
+
+tbl = mltable.load("./my_data")
+df = tbl.to_pandas_dataframe()
```
+If the schema of the data changes, then it can be updated in a single place (the MLTable file) rather than having to make code changes in multiple places.
+
+Just like `uri_file` and `uri_folder`, you can create a data asset with `mltable` types.
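For example, reusing the data asset YAML pattern shown earlier (the asset name and description here are placeholders), an `mltable` data asset that points to the `my_data` folder containing the MLTable file can be defined and created like this:

```yml
# mltable-data-example.yml
$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
name: my_mltable_asset
description: Text files plus an MLTable file that defines how to read them
type: mltable
path: ./my_data
```

```azurecli
az ml data create --file mltable-data-example.yml --version 1
```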
+ ## Next steps
-* [Work with data using SDK v2](how-to-use-data.md)
+- [Install and set up the CLI (v2)](how-to-configure-cli.md#install-and-set-up-the-cli-v2)
+- [Create datastores](how-to-datastore.md#create-datastores)
+- [Create data assets](how-to-create-register-data-assets.md#create-data-assets)
+- [Read and write data in a job](how-to-read-write-data-v2.md#read-and-write-data-in-a-job)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning Concept Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-datastore.md
- Title: Azure Machine Learning datastores-
-description: Learn how to securely connect to your data storage on Azure with Azure Machine Learning datastores.
------- Previously updated : 10/21/2021--
-# Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
--
-# Azure Machine Learning datastores
-
-Supported cloud-based storage services in Azure Machine Learning include:
-
-+ Azure Blob Container
-+ Azure File Share
-+ Azure Data Lake
-+ Azure Data Lake Gen2
-
-Azure Machine Learning allows you to connect to data directly by using a storage URI, for example:
-
-- ```https://storageAccount.blob.core.windows.net/container/path/file.csv``` (Azure Blob Container)
-- ```abfss://container@storageAccount.dfs.core.windows.net/base/path/folder1``` (Azure Data Lake Gen2).
-
-Storage URIs use *identity-based* access that will prompt you for your Azure Active Directory token for data access authentication. This approach allows for data access management at the storage level and keeps credentials confidential.
-
-> [!NOTE]
-> When using Notebooks in Azure Machine Learning Studio, your Azure Active Directory token is automatically passed through to storage for data access authentication.
-
-Although storage URIs provide a convenient mechanism to access data, there may be cases where using an Azure Machine Learning *Datastore* is a better option:
-
-* **You need *credential-based* data access (for example: Service Principals, SAS Tokens, Account Name/Key).** Datastores are helpful because they keep the connection information to your data storage securely in an Azure Keyvault, so you don't have to code it in your scripts.
-* **You want team members to easily discover relevant datastores.** Datastores are registered to an Azure Machine Learning workspace making them easier for your team members to find/discover them.
-
- [Register and create a datastore](how-to-datastore.md) to easily connect to your storage account, and access the data in your underlying storage service.
-
-## Credential-based vs identity-based access
-
-Azure Machine Learning Datastores support both credential-based and identity-based access. In *credential-based* access, your authentication credentials are usually kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. When you use *identity-based* data access, Azure Machine Learning prompts you for your Azure Active Directory token for data access authentication instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
--
-## Next steps
-
-+ [How to create a datastore](how-to-datastore.md)
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
You can use the following options for input data when invoking a batch endpoint:
For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options).
-For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options).
- Specify the storage output location to any datastore and path. By default, batch endpoints store their output to the workspace's default blob store, organized by the Job Name (a system-generated GUID). ### Security
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 03/04/2022 Last updated : 06/08/2022 ms.devlang: azurecli
In this article, learn about the network communication requirements when securin
## Well-known ports
-The following are well-known ports used by services listed in this article. If a port range is used in this article and is not listed in this section, it is specific to the service and may not have published information on what it is used for:
+The following are well-known ports used by services listed in this article. If a port range is used in this article and isn't listed in this section, it's specific to the service and may not have published information on what it's used for:
| Port | Description |
These rule collections are described in more detail in [What are some Azure Fire
| **graph.windows.net** | Used by Azure Machine Learning compute instance/cluster. | | **anaconda.com**</br>**\*.anaconda.com** | Used to install default packages. | | **\*.anaconda.org** | Used to get repo data. |
- | **pypi.org** | Used to list dependencies from the default index, if any, and the index is not overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
+ | **pypi.org** | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
| **cloud.r-project.org** | Used when installing CRAN packages for R development. | | **\*pytorch.org** | Used by some examples based on PyTorch. | | **\*.tensorflow.org** | Used by some examples based on Tensorflow. |
These rule collections are described in more detail in [What are some Azure Fire
1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Deploy ML models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#connectivity) articles.
-### Azure Kubernetes Services
+### Kubernetes Compute
+
+A [Kubernetes cluster](./how-to-attach-kubernetes-anywhere.md) running behind an outbound proxy server or firewall needs extra network configuration. Configure the [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements) needed by Azure Arc agents. The following outbound URLs are also required for Azure Machine Learning:
+
+| Outbound Endpoint| Port | Description|Training |Inference |
+|--|--|--|--|--|
+| __\*.kusto.windows.net__<br>__\*.table.core.windows.net__<br>__\*.queue.core.windows.net__ | https:443 | Required to upload system logs to Kusto. |**&check;**|**&check;**|
+| __\*.azurecr.io__ | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**|
+| __\*.blob.core.windows.net__ | https:443 | Azure blob storage, required to fetch machine learning project scripts, data, or models, and upload job logs/outputs.|**&check;**|**&check;**|
+| __\*.workspace.\<region\>.api.azureml.ms__<br>__\<region\>.experiments.azureml.net__<br>__\<region\>.api.azureml.ms__ | https:443 | Azure Machine Learning service API.|**&check;**|**&check;**|
+| __pypi.org__ | https:443 | Python package index, to install pip packages used for training job environment initialization.|**&check;**|N/A|
+| __archive.ubuntu.com__<br>__security.ubuntu.com__<br>__ppa.launchpad.net__ | http:80 | Required to download the necessary security patches. |**&check;**|N/A|
+
+> [!NOTE]
+> `<region>` is the lowercase full spelling of the Azure region, for example, eastus, southeastasia.
+
-When using Azure Kubernetes Service with Azure Machine Learning, the following traffic must be allowed:
-* General inbound/outbound requirements for AKS as described in the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) article.
-* __Outbound__ to mcr.microsoft.com.
-* When deploying a model to an AKS cluster, use the guidance in the [Deploy ML models to Azure Kubernetes Service](v1/how-to-deploy-azure-kubernetes-service.md#connectivity) article.
## Other firewalls
-The guidance in this section is generic, as each firewall has its own terminology and specific configurations. If you have questions, check the documentation for the firewall you are using.
+The guidance in this section is generic, as each firewall has its own terminology and specific configurations. If you have questions, check the documentation for the firewall you're using.
If not configured correctly, the firewall can cause problems using your workspace. There are various host names that are used by the Azure Machine Learning workspace. The following sections list hosts that are required for Azure Machine Learning.
The hosts in this section are used to install Python packages, and are required
| - | - | | **anaconda.com**</br>**\*.anaconda.com** | Used to install default packages. | | **\*.anaconda.org** | Used to get repo data. |
-| **pypi.org** | Used to list dependencies from the default index, if any, and the index is not overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
+| **pypi.org** | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
| **\*pytorch.org** | Used by some examples based on PyTorch. | | **\*.tensorflow.org** | Used by some examples based on Tensorflow. |
The hosts in this section are used to install R packages, and are required durin
| - | - | | **cloud.r-project.org** | Used when installing CRAN packages. |
-### Azure Arc enabled Kubernetes <a id="arc-kubernetes"></a>
-
-Clusters running behind an outbound proxy server or firewall need additional network configurations. Fulfill [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements) needed by Azure Arc agents. Besides that, the following outbound URLs are required for Azure Machine Learning,
-
-| Outbound Endpoint| Port | Description|Training |Inference |
-|--|--|--|--|--|
-| *.kusto.windows.net,<br> *.table.core.windows.net, <br>*.queue.core.windows.net | https:443 | Required to upload system logs to Kusto. |**&check;**|**&check;**|
-| *.azurecr.io | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**|
-| *.blob.core.windows.net | https:443 | Azure blob storage, required to fetch machine learning project scripts,data or models, and upload job logs/outputs.|**&check;**|**&check;**|
-| *.workspace.\<region\>.api.azureml.ms ,<br> \<region\>.experiments.azureml.net, <br> \<region\>.api.azureml.ms | https:443 | Azure mahince learning service API.|**&check;**|**&check;**|
-| pypi.org | https:443 | Python package index, to install pip packages used for training job environment initialization.|**&check;**|N/A|
-| archive.ubuntu.com, <br> security.ubuntu.com,<br> ppa.launchpad.net | http:80 | Required to download the necessary security patches. |**&check;**|N/A|
-
-> [!NOTE]
-> `<region>` is the lowcase full spelling of Azure Region, for example, eastus, southeastasia.
- ### Visual Studio Code hosts The hosts in this section are used to install Visual Studio Code packages to establish a remote connection between Visual Studio Code and compute instances in your Azure Machine Learning workspace.
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
Last updated 05/24/2022
# Customer intent: As an administrator, I need to administrate data access and set up authentication method for data scientists.
-# How to authenticate data access
+# Data administration
Learn how to manage data access and how to authenticate in Azure Machine Learning [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!IMPORTANT] > The information in this article is intended for Azure administrators who are creating the infrastructure required for an Azure Machine Learning solution.
The following table lists what identities should be used for specific scenarios:
Data access is complex and it's important to recognize that there are many pieces to it. For example, accessing data from Azure Machine Learning studio is different than using the SDK. When using the SDK on your local development environment, you're directly accessing data in the cloud. When using studio, you aren't always directly accessing the data store from your client. Studio relies on the workspace to access data on your behalf. > [!TIP]
-> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, _user_ identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
+> If you need to access data from outside Azure Machine Learning, such as using Azure Storage Explorer, *user* identity is probably what is used. Consult the documentation for the tool or service you are using for specific information. For more information on how Azure Machine Learning works with data, see [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
## Azure Storage Account
When using Azure Data Lake Storage Gen1 as a datastore, you can only use POSIX-s
When using Azure Data Lake Storage Gen2 as a datastore, you can use both Azure RBAC and POSIX-style access control lists (ACLs) to control data access inside of a virtual network.
-**To use Azure RBAC**, follow the steps in the [Datastore: Azure Storage Account](how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) section of the 'Use Azure Machine Learning studio in an Azure Virtual Network' article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
+__To use Azure RBAC__, follow the steps in the [Datastore: Azure Storage Account](how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) section of the 'Use Azure Machine Learning studio in an Azure Virtual Network' article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
-**To use ACLs**, the managed identity of the workspace can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
+__To use ACLs__, the managed identity of the workspace can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
## Next steps
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
Title: Azure Machine Learning anywhere with Kubernetes (preview)
+ Title: Configure Kubernetes cluster (Preview)
description: Configure and attach an existing Kubernetes in any infrastructure across on-premises and multi-cloud to build, train, and deploy models with seamless Azure ML experience.
-# Azure Machine Learning anywhere with Kubernetes (preview)
+# Configure Kubernetes cluster for Azure Machine Learning (Preview)
-Azure Machine Learning anywhere with Kubernetes (AzureML anywhere) enables customers to build, train, and deploy models in any infrastructure on-premises and across multi-cloud using Kubernetes. With an AzureML extension deployment on a Kubernetes cluster, you can instantly onboard teams of ML professionals with AzureML service capabilities. These services include full machine learning lifecycle and automation with MLOps in hybrid cloud and multi-cloud.
+Using Kubernetes with Azure Machine Learning enables you to build, train, and deploy models in any infrastructure on-premises and across multi-cloud. With an AzureML extension deployment on Kubernetes, you can instantly onboard teams of ML professionals with AzureML service capabilities. These services include full machine learning lifecycle and automation with MLOps in hybrid cloud and multi-cloud.
+
+You can easily bring AzureML capabilities to your Kubernetes cluster from the cloud or on-premises by deploying the AzureML extension.
+
+- For Azure Kubernetes Service (AKS) in Azure, deploy AzureML extension to the AKS directly. For more information, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)](../aks/cluster-extensions.md).
+- For Kubernetes clusters on-premises or from other cloud providers, connect the cluster with Azure Arc first, then deploy AzureML extension to Azure Arc-enabled Kubernetes. For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md).
In this article, you can learn about steps to configure and attach an existing Kubernetes cluster anywhere for Azure Machine Learning:
-* [Deploy AzureML extension to Kubernetes cluster](#deploy-azureml-extensionexample-scenarios)
-* [Create and use instance types to manage compute resources efficiently](#create-custom-instance-types)
+* [Deploy AzureML extension to Kubernetes cluster](#deploy-azureml-extension)
+* [Attach a Kubernetes cluster to AzureML workspace](#attach-a-kubernetes-cluster-to-an-azureml-workspace)
+
+## Why use Azure Machine Learning Kubernetes?
+
+AzureML Kubernetes is compute for machine learning that you fully configure and manage. It can be used as both a [training compute target](./concept-compute-target.md#train) and an [inference compute target](./concept-compute-target.md#deploy). It provides the following benefits:
+
+- Harness existing heterogeneous or homogeneous Kubernetes cluster, with CPUs or GPUs.
+- Share the same Kubernetes cluster in multiple AzureML workspaces across regions.
+- Use the same Kubernetes cluster for different machine learning purposes, including model training, batch scoring, and real-time inference.
+- Secure network communication between the cluster and cloud via Azure Private Link and Private Endpoint.
+- Isolate team projects and machine learning workloads with Kubernetes node selector and namespace.
+- [Target certain types of compute nodes and CPU/Memory/GPU resource allocation for training and inference workloads](./reference-kubernetes.md#create-and-use-instance-types-for-efficient-compute-resource-usage).
+- [Connect with custom data sources for machine learning workloads using Kubernetes PV and PVC](./reference-kubernetes.md#azureml-jobs-connect-with-on-premises-data-storage).
## Prerequisites
-1. A running Kubernetes cluster - **We recommend minimum of 4 vCPU cores and 8GB memory, around 2 vCPU cores and 3GB memory will be used by Azure Arc agent and AzureML extension components**.
-1. Connect your Kubernetes cluster to Azure Arc. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
+* A running Kubernetes cluster in a [supported version and region](./reference-kubernetes.md#supported-kubernetes-version-and-region). **We recommend that your cluster has a minimum of 4 vCPU cores and 8 GB of memory; around 2 vCPU cores and 3 GB of memory will be used by Azure Arc and AzureML extension components**.
+* For clusters other than an Azure Kubernetes Service (AKS) cluster in Azure, connect your Kubernetes cluster to Azure Arc. Follow the instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
- a. if you have Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, follow another prerequisite step [here](#prerequisite-for-azure-arc-enabled-kubernetes) before AzureML extension deployment.
-1. If you have an AKS cluster in Azure, register the AKS-ExtensionManager feature flag by using the ```az feature register --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager``` command. **Azure Arc connection is not required and not recommended**.
-1. Install or upgrade Azure CLI to version >=2.16.0
-1. Install the Azure CLI extension ```k8s-extension``` (version>=1.0.0) by running ```az extension add --name k8s-extension```
+ * If you have an AKS cluster in Azure, **Azure Arc connection is not required and not recommended**.
+
+ * If you have Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, follow another prerequisite step [here](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) before AzureML extension deployment.
+* A cluster running behind an outbound proxy server or firewall needs additional network configuration. Fulfill the [network requirements](./how-to-access-azureml-behind-firewall.md#kubernetes-compute).
+* Install or upgrade Azure CLI to version >=2.16.0
+* Install the Azure CLI extension ```k8s-extension``` (version>=1.2.3) by running ```az extension add --name k8s-extension```
+ ## What is AzureML extension
-AzureML extension consists of a set of system components deployed to your Kubernetes cluster so you can enable your cluster to run an AzureML workload - model training jobs or model endpoints. You can use an Azure CLI command ```k8s-extension create``` to deploy AzureML extension.
+AzureML extension consists of a set of system components deployed to your Kubernetes cluster in the `azureml` namespace, so you can enable your cluster to run an AzureML workload - model training jobs or model endpoints. You can use the Azure CLI command ```k8s-extension create``` to deploy the AzureML extension. The generally available (GA) version of the AzureML extension is >= 1.1.1.
-For a detailed list of AzureML extension system components, see appendix [AzureML extension components](#appendix-i-azureml-extension-components).
+For a detailed list of AzureML extension system components, see [AzureML extension components](./reference-kubernetes.md#azureml-extension-components).
## Key considerations for AzureML extension deployment AzureML extension allows you to specify configuration settings needed for different workload support at deployment time. Before AzureML extension deployment, **read the following carefully to avoid unnecessary extension deployment errors**:
- * Type of workload to enable for your cluster. ```enableTraining``` and ```enableInference``` config settings are your convenient choices here; they will enable training and inference workload respectively.
+ * Type of workload to enable for your cluster. ```enableTraining``` and ```enableInference``` config settings are your convenient choices here; `enableTraining` will enable **training** and **batch scoring** workload, `enableInference` will enable **real-time inference** workload.
* For inference workload support, it requires ```azureml-fe``` router service to be deployed for routing incoming inference requests to model pod, and you would need to specify ```inferenceRouterServiceType``` config setting for ```azureml-fe```. ```azureml-fe``` can be deployed with one of following ```inferenceRouterServiceType```: * Type ```LoadBalancer```. Exposes ```azureml-fe``` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note most on-premises Kubernetes clusters might not support external load balancer. * Type ```NodePort```. Exposes ```azureml-fe``` on each Node's IP at a static port. You'll be able to contact ```azureml-fe```, from outside of cluster, by requesting ```<NodeIP>:<NodePort>```. Using ```NodePort``` also allows you to set up your own load balancing solution and SSL termination for ```azureml-fe```. * Type ```ClusterIP```. Exposes ```azureml-fe``` on a cluster-internal IP, and it makes ```azureml-fe``` only reachable from within the cluster. For ```azureml-fe``` to serve inference requests coming outside of cluster, it requires you to set up your own load balancing solution and SSL termination for ```azureml-fe```. * For inference workload support, to ensure high availability of ```azureml-fe``` routing service, AzureML extension deployment by default creates 3 replicas of ```azureml-fe``` for clusters having 3 nodes or more. If your cluster has **less than 3 nodes**, set ```inferenceLoadbalancerHA=False```.
- * For inference workload support, you would also want to consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you would need to specify either ```sslSecret``` config setting or combination of ```sslCertPemFile``` and ```sslCertKeyFile``` config settings. By default, AzureML extension deployment expects **HTTPS** support required, and you would need to provide above config setting. For development or test purposes, **HTTP** support is conveniently supported through config setting ```allowInsecureConnections=True```.
+ * For inference workload support, you would also want to consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you would need to specify either ```sslSecret``` config setting or combination of ```sslKeyPemFile``` and ```sslCertPemFile``` config settings. By default, AzureML extension deployment expects **HTTPS** support required, and you would need to provide above config setting. For development or test purposes, **HTTP** support is conveniently supported through config setting ```allowInsecureConnections=True```.
-For a complete list of configuration settings available to choose at AzureML deployment time, see appendix [Review AzureML extension config settings](#appendix-ii-review-azureml-deployment-configuration-settings)
+For a complete list of configuration settings available to choose at AzureML deployment time, see [Review AzureML extension config settings](#review-azureml-extension-configuration-settings)
-## Deploy AzureML extension - example scenarios
+## Deploy AzureML extension
+### [CLI](#tab/deploy-extension-with-cli)
+To deploy AzureML extension with CLI, use `az k8s-extension create` command passing in values for the mandatory parameters.
-### Use AKS in Azure for a quick Proof of Concept, both training and inference workloads support
+We list four typical extension deployment scenarios for reference. To deploy the extension for production use, carefully read the complete list of [configuration settings](#review-azureml-extension-configuration-settings).
-Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Run the following Azure CLI command to deploy AzureML extension:
-```azurecli
- az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
-```
+- **Use AKS in Azure for a quick Proof of Concept, both training and inference workloads support**
-### Use Minikube on your desktop for a quick POC, training workload support only
+ Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Run the following Azure CLI command to deploy AzureML extension:
+ ```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
-Ensure you have fulfilled [prerequisites](#prerequisites). Since the follow steps would create an Azure Arc connected cluster, you would need to specify ```connectedClusters``` value for ```--cluster-type``` parameter. Run following simple Azure CLI command to deploy AzureML extension:
-```azurecli
- az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
-```
+- **Use Kubernetes at your lab for a quick Proof of Concept, training workload support only**
-### Enable an AKS cluster in Azure for production training and inference workload
+ Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on Azure Arc connected cluster, you would need to specify ```connectedClusters``` value for ```--cluster-type``` parameter. Run following simple Azure CLI command to deploy AzureML extension:
+ ```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
-Ensure you have fulfilled [prerequisites](#prerequisites). Assuming your cluster has more than 3 nodes, and you will use an Azure public load balancer and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
-```azurecli
- az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslCertKeyFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
-```
-### Enable an Azure Arc connected cluster anywhere for production training and inference workload
+- **Enable an AKS cluster in Azure for production training and inference workload**
+ Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on AKS, make sure to specify ```managedClusters``` value for ```--cluster-type``` parameter. Assuming your cluster has more than 3 nodes, and you will use an Azure public load balancer and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
+ ```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
+- **Enable an Azure Arc connected cluster anywhere for production training and inference workload using NVIDIA GPUs**
-Ensure you have fulfilled [prerequisites](#prerequisites). Assuming your cluster has more than 3 nodes, you will use a NodePort service type and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
-```azurecli
- az k8s-extension create --name azureml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=NodePort --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslCertKeyFile=<file-path-to-cert-KEY> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
-```
+ Ensure you have fulfilled [prerequisites](#prerequisites). For AzureML extension deployment on Azure Arc connected cluster, make sure to specify ```connectedClusters``` value for ```--cluster-type``` parameter. Assuming your cluster has more than 3 nodes, you will use a NodePort service type and HTTPS for inference workload support, run following Azure CLI command to deploy AzureML extension:
+ ```azurecli
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=NodePort sslCname=<ssl cname> installNvidiaDevicePlugin=True installDcgmExporter=True --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
+
+### [Azure portal](#tab/portal)
+
+The UI experience to deploy extension is only available for **Azure Arc-enabled Kubernetes**. If you have an AKS cluster without Azure Arc connected, you need to use CLI to deploy AzureML extension.
+
+1. In the [Azure portal](https://ms.portal.azure.com/#home), navigate to **Kubernetes - Azure Arc** and select your cluster.
+1. Select **Extensions** (under **Settings**), and then select **+ Add**.
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui.png" alt-text="Screenshot of adding new extension to the Arc-enabled Kubernetes cluster from Azure portal.":::
+
+1. From the list of available extensions, select **Azure Machine Learning extension** to deploy the latest version of the extension.
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-extension-list.png" alt-text="Screenshot of selecting AzureML extension from Azure portal.":::
+
+1. Follow the prompts to deploy the extension. You can customize the installation by configuring settings on the **Basics**, **Configurations**, and **Advanced** tabs. For a detailed list of AzureML extension configuration settings, see [AzureML extension configuration settings](#review-azureml-extension-configuration-settings).
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-settings.png" alt-text="Screenshot of configuring AzureML extension settings from Azure portal.":::
+1. On the **Review + create** tab, select **Create**.
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-create.png" alt-text="Screenshot of deploying new extension to the Arc-enabled Kubernetes cluster from Azure portal.":::
+
+1. After the deployment completes, you can see the AzureML extension on the **Extensions** page. If the extension installation succeeds, the **Install status** shows **Installed**.
+
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/deploy-extension-from-ui-extension-detail.png" alt-text="Screenshot of installed AzureML extensions listing in Azure portal.":::
### Verify AzureML extension deployment 1. Run the following CLI command to check AzureML extension details: ```azurecli
- az k8s-extension show --name arcml-extension --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
+ az k8s-extension show --name <extension-name> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
```
-1. In the response, look for "name": "azureml-extension" and "provisioningState": "Succeeded". Note it might show "provisioningState": "Pending" for the first few minutes.
+1. In the response, look for "name" and "provisioningState": "Succeeded". Note it might show "provisioningState": "Pending" for the first few minutes.
1. If the provisioningState shows Succeeded, run the following command on your machine with the kubeconfig file pointed to your cluster to check that all pods under "azureml" namespace are in 'Running' state:
Ensure you have fulfilled [prerequisites](#prerequisites). Assuming your cluster
kubectl get pods -n azureml ```
-## Attach a Kubernetes cluster to an AzureML workspace
-
-### Prerequisite for Azure Arc enabled Kubernetes
+### Manage AzureML extension
-Azure Machine Learning workspace defaults to having a system-assigned managed identity to access Azure ML resources. The steps are completed if the system assigned default setting is on.
+You can update, list, show, and delete an AzureML extension; a minimal CLI sketch follows the list below.
+- For AKS cluster without Azure Arc connected, refer to [Usage of AKS extensions](../aks/cluster-extensions.md#usage-of-cluster-extensions).
+- For Azure Arc-enabled Kubernetes, refer to [Usage of cluster extensions](../azure-arc/kubernetes/extensions.md#usage-of-cluster-extensions).
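+
+For example, on an Arc-enabled cluster, a minimal sketch of these management commands looks like the following (the extension, cluster, and resource group names are placeholders, and exact flags can vary by `k8s-extension` CLI version):
+
+```azurecli
+# List all cluster extensions deployed to the connected cluster
+az k8s-extension list --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
+
+# Update a configuration setting on an existing AzureML extension
+az k8s-extension update --name <extension-name> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --config enableTraining=True
+
+# Delete the AzureML extension from the cluster
+az k8s-extension delete --name <extension-name> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group>
+```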
-Otherwise, if a user-assigned managed identity is specified in Azure Machine Learning workspace creation, the following role assignments need to be granted to the identity manually before attaching the compute.
+
-|Azure resource name |Role to be assigned|
-|--|--|
-|Azure Relay|Azure Relay Owner|
-|Azure Arc-enabled Kubernetes|Reader|
+## Review AzureML extension configuration settings
-Azure Relay resources are created under the same Resource Group as the Arc cluster.
+For AzureML extension deployment configurations, use ```--config``` or ```--config-protected``` to specify a list of ```key=value``` pairs. The following is the list of configuration settings available for the different AzureML extension deployment scenarios.
-### [Studio](#tab/studio)
+|Configuration Setting Key Name |Description |Training |Inference |Training and Inference
+ |--|--|--|--|--|
+ |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training support. | **&check;**| N/A | **&check;** |
+ | ```enableInference``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
+ | ```allowInsecureConnections``` |```True``` or ```False```, default `False`. **Must** be set to ```True``` to use inference HTTP endpoints for development or test purposes. |N/A| Optional | Optional |
+ | ```inferenceRouterServiceType``` |```loadBalancer```, ```nodePort``` or ```clusterIP```. **Required** if ```enableInference=True```. | N/A| **&check;** | **&check;** |
+ | ```internalLoadBalancerProvider``` | This config is only applicable for Azure Kubernetes Service(AKS) cluster now. Set to ```azure``` to allow the inference router using internal load balancer. | N/A| Optional | Optional |
+ |```sslSecret```| The name of Kubernetes secret in `azureml` namespace to store `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key), required for inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. You can find a sample YAML definition of sslSecret [here](./reference-kubernetes.md#sample-yaml-definition-of-kubernetes-secret-for-tlsssl). Use this config or combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional |
+ |```sslCname``` |An SSL CNAME used by the inference HTTPS endpoint. **Required** if ```allowInsecureConnections=False``` | N/A | Optional | Optional|
+ | ```inferenceRouterHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy 3 ingress controller replicas for high availability, which requires at least 3 workers in a cluster. Set to ```False``` if your cluster has fewer than 3 workers, in this case only one ingress controller is deployed. | N/A| Optional | Optional |
+ |```nodeSelector``` | By default, the deployed kubernetes resources are randomly deployed to 1 or more nodes of the cluster, and daemonset resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional |
+ |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```False```. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment will not install the NVIDIA Device Plugin, regardless of whether the Kubernetes cluster has GPU hardware. You can set this to ```True``` to install it, but make sure to fulfill the [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
+ |```installPromOp```|```True``` or ```False```, default ```True```. AzureML extension needs prometheus operator to manage prometheus. Set to ```False``` to reuse existing prometheus operator. Compatible [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md) helm chart versions are from 9.3.4 to 30.0.1.| Optional| Optional | Optional |
+ |```installVolcano```| ```True``` or ```False```, default ```True```. AzureML extension needs volcano scheduler to schedule the job. Set to ```False``` to reuse existing volcano scheduler. Supported volcano scheduler versions are 1.4, 1.5. | Optional| N/A | Optional |
+ |```installDcgmExporter``` |```True``` or ```False```, default ```False```. Dcgm-exporter can expose GPU metrics for AzureML workloads, which can be monitored in Azure portal. Set ```installDcgmExporter``` to ```True``` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](https://github.com/Azure/AML-Kubernetes/blob/master/docs/troubleshooting.md#dcgm) |Optional |Optional |Optional |
-Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your workspace for training.
-1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
-1. Under **Manage**, select **Compute**.
-1. Select the **Attached computes** tab.
-1. Select **+New > Kubernetes (preview)**
+ |Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference
+ |--|--|--|--|--|
+ | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. | N/A| Optional | Optional |
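+
+For example, if a Kubernetes secret that holds `cert.pem` and `key.pem` has already been created in the `azureml` namespace, a sketch of an inference-enabled deployment that uses `sslSecret` instead of the PEM file paths could look like the following (the secret name, CNAME, cluster, and resource group values are placeholders):
+
+```azurecli
+# Deploy the extension using an existing Kubernetes secret for TLS/SSL instead of local PEM files
+az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl-cname> sslSecret=<ssl-secret-name> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <resource-group> --scope cluster
+```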
+
+## Attach a Kubernetes cluster to an AzureML workspace
- :::image type="content" source="media/how-to-attach-arc-kubernetes/attach-kubernetes-cluster.png" alt-text="Screenshot of settings for Kubernetes cluster to make available in your workspace.":::
+Attach an AKS or Arc-enabled Kubernetes cluster that has the AzureML extension installed to an AzureML workspace. The same cluster can be attached to and shared by multiple AzureML workspaces across regions.
-1. Enter a compute name and select your Azure Arc-enabled Kubernetes cluster from the dropdown.
+### Prerequisite
- * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster.
+Azure Machine Learning workspace defaults to having a system-assigned managed identity to access Azure ML resources. No further steps are needed if the default system-assigned managed identity setting is used.
- * **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see [managed identities overview](../active-directory/managed-identities-azure-resources/overview.md) .
- :::image type="content" source="media/how-to-attach-arc-kubernetes/configure-kubernetes-cluster-2.png" alt-text="Screenshot of settings for developer configuration of Kubernetes cluster.":::
+Otherwise, if a user-assigned managed identity is specified in Azure Machine Learning workspace creation, the following role assignments need to be granted to the managed identity manually before attaching the compute.
-1. Select **Attach**
+|Azure resource name |Role to be assigned|Description|
+|--|--|--|
+|Azure Relay|Azure Relay Owner|Only applicable for Arc-enabled Kubernetes cluster. Azure Relay isn't created for AKS cluster without Arc connected.|
+|Azure Arc-enabled Kubernetes|Reader|Applicable for both Arc-enabled Kubernetes cluster and AKS cluster.|
- In the Attached compute tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
+The Azure Relay resource is created during the extension deployment under the same resource group as the Arc-enabled Kubernetes cluster.
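+
+As a sketch, these role assignments can be granted with the Azure CLI; the principal ID and resource IDs are placeholders:
+
+```azurecli
+# Grant the user-assigned managed identity Azure Relay Owner on the Azure Relay resource (Arc-enabled clusters only)
+az role assignment create --assignee "<identity-principal-id>" --role "Azure Relay Owner" --scope "<azure-relay-resource-id>"
+
+# Grant the user-assigned managed identity Reader on the Kubernetes cluster resource
+az role assignment create --assignee "<identity-principal-id>" --role "Reader" --scope "<kubernetes-cluster-resource-id>"
+```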
- :::image type="content" source="media/how-to-attach-arc-kubernetes/provision-resources.png" alt-text="Screenshot of attached settings for configuration of Kubernetes cluster.":::
### [CLI](#tab/cli)
-You can attach an AKS or Azure Arc enabled Kubernetes cluster using the Azure Machine Learning 2.0 CLI (preview).
-
-Use the Azure Machine Learning CLI [`attach`](/cli/azure/ml/compute) command and set the `--type` argument to `Kubernetes` to attach your Kubernetes cluster using the Azure Machine Learning 2.0 CLI.
-> [!NOTE]
-> Compute attach support for AKS or Azure Arc enabled Kubernetes clusters requires a version of the Azure CLI `ml` extension >= 2.0.1a4. For more information, see [Install and set up the CLI (v2)](how-to-configure-cli.md).
-
-The following commands show how to attach an Azure Arc-enabled Kubernetes cluster and use it as a compute target with managed identity enabled.
+The following commands show how to attach an AKS or Azure Arc-enabled Kubernetes cluster and use it as a compute target with managed identity enabled.
**AKS** ```azurecli
-az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/managedclusters/<cluster-name>" --type Kubernetes --identity-type UserAssigned --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
+az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name k8s-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>" --identity-type SystemAssigned --namespace <Kubernetes namespace to run AzureML workloads> --no-wait
``` **Azure Arc enabled Kubernetes** ```azurecli
-az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>" --type kubernetes --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
+az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name amlarc-compute --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>" --identity-type UserAssigned --user-assigned-identities "subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>" --no-wait
```
-Use the `identity_type` argument to enable `SystemAssigned` or `UserAssigned` managed identities.
+Set the `--type` argument to `Kubernetes`. Use the `--identity-type` argument to enable `SystemAssigned` or `UserAssigned` managed identities.
> [!IMPORTANT] > `--user-assigned-identities` is only required for `UserAssigned` managed identities. Although you can provide a list of comma-separated user managed identities, only the first one is used when you attach your cluster. --
-## Create instance types for efficient compute resource usage
-
-### What are instance types?
-
-Instance types are an Azure Machine Learning concept that allows targeting certain types of
-compute nodes for training and inference workloads. For an Azure VM, an example for an
-instance type is `STANDARD_D2_V3`.
-
-In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the AzureML extension. Instance types are represented by two elements in AzureML extension:
-[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
-and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
-In short, a `nodeSelector` lets us specify which node a pod should run on. The node must have a
-corresponding label. In the `resources` section, we can set the compute resources (CPU, memory and
-Nvidia GPU) for the pod.
-
-### Default instance type
-
-By default, a `defaultinstancetype` with following definition is created when you attach Kuberenetes cluster to AzureML workspace:
-- No `nodeSelector` is applied, meaning the pod can get scheduled on any node.-- The workload's pods are assigned default resources with 0.6 cpu cores, 1536Mi memory and 0 GPU:
-```yaml
-resources:
- requests:
- cpu: "0.6"
- memory: "1536Mi"
- limits:
- cpu: "0.6"
- memory: "1536Mi"
- nvidia.com/gpu: null
-```
-
-> [!NOTE]
-> - The default instance type purposefully uses little resources. To ensure all ML workloads
-run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
-> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
-> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
-
-## Create custom instance types
+### [Python](#tab/python)
-To create a new instance type, create a new custom resource for the instance type CRD. For example:
-```bash
-kubectl apply -f my_instance_type.yaml
-```
+```python
+from azureml.core import Workspace
+from azureml.core.compute import KubernetesCompute, ComputeTarget
+from azureml.core.compute_target import ComputeTargetException
+
+# Load the AzureML workspace to attach the cluster to (assumes a local config.json)
+ws = Workspace.from_config()
-With `my_instance_type.yaml`:
-```yaml
-apiVersion: amlarc.azureml.com/v1alpha1
-kind: InstanceType
-metadata:
- name: myinstancetypename
-spec:
- nodeSelector:
- mylabel: mylabelvalue
- resources:
- limits:
- cpu: "1"
- nvidia.com/gpu: 1
- memory: "2Gi"
- requests:
- cpu: "700m"
- memory: "1500Mi"
-```
+# Specify a name for your Kubernetes compute
+compute_target_name = "<kubernetes compute target name>"
-The following steps will create an instance type with the labeled behavior:
-- Pods will be scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods will be assigned resource requests of `700m` CPU and `1500Mi` memory.-- Pods will be assigned resource limits of `1` CPU, `2Gi` memory and `1` Nvidia GPU.
+# resource ID of the Arc-enabled Kubernetes cluster
+cluster_resource_id = "/subscriptions/<sub ID>/resourceGroups/<RG>/providers/Microsoft.Kubernetes/connectedClusters/<cluster name>"
-> [!NOTE]
-> - Nvidia GPU resources are only specified in the `limits` section as integer values. For more information,
- see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins).
-> - CPU and memory resources are string values.
-> - CPU can be specified in millicores, for example `100m`, or in full numbers, for example `"1"`
- is equivalent to `1000m`.
-> - Memory can be specified as a full number + suffix, for example `1024Mi` for 1024 MiB.
+user_assigned_identity_resource_id = ['subscriptions/<sub ID>/resourceGroups/<RG>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity name>']
-It is also possible to create multiple instance types at once:
+# Specify Kubernetes namespace to run AzureML workloads
+ns = "default"
-```bash
-kubectl apply -f my_instance_type_list.yaml
+try:
+ compute_target = ComputeTarget(workspace=ws, name=compute_target_name)
+ print('Found existing cluster, use it.')
+except ComputeTargetException:
+    attach_configuration = KubernetesCompute.attach_configuration(resource_id=cluster_resource_id, namespace=ns, identity_type='UserAssigned', identity_ids=user_assigned_identity_resource_id)
+ compute_target = ComputeTarget.attach(ws, compute_target_name, attach_configuration)
+ compute_target.wait_for_completion(show_output=True)
```
+### [Studio](#tab/studio)
-With `my_instance_type_list.yaml`:
-```yaml
-apiVersion: amlarc.azureml.com/v1alpha1
-kind: InstanceTypeList
-items:
- - metadata:
- name: cpusmall
- spec:
- resources:
- requests:
- cpu: "100m"
- memory: "100Mi"
- limits:
- cpu: "1"
- nvidia.com/gpu: 0
- memory: "1Gi"
-
- - metadata:
- name: defaultinstancetype
- spec:
- resources:
- requests:
- cpu: "1"
- memory: "1Gi"
- limits:
- cpu: "1"
- nvidia.com/gpu: 0
- memory: "1Gi"
-```
+Attaching an Azure Arc-enabled Kubernetes cluster makes it available to your workspace for training.
-The above example creates two instance types: `cpusmall` and `defaultinstancetype`. Above `defaultinstancetype` definition will override the `defaultinstancetype` definition created when Kubernetes cluster was attached to AzureML workspace.
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
+1. Under **Manage**, select **Compute**.
+1. Select the **Attached computes** tab.
+1. Select **+New > Kubernetes**
-If a training or inference workload is submitted without an instance type, it uses the default
-instance type. To specify a default instance type for a Kubernetes cluster, create an instance
-type with name `defaultinstancetype`. It will automatically be recognized as the default.
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/attach-kubernetes-cluster.png" alt-text="Screenshot of settings for Kubernetes cluster to make available in your workspace.":::
-## Select instance type to submit training job
+1. Enter a compute name and select your Azure Arc-enabled Kubernetes cluster from the dropdown.
-To select an instance type for a training job using CLI (V2), specify its name as part of the
-`resources` properties section in job YAML. For example:
-```yaml
-command: python -c "print('Hello world!')"
-environment:
- docker:
- image: python
-compute: azureml:<compute_target_name>
-resources:
- instance_type: <instance_type_name>
-```
+ * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster.
-In the above example, replace `<compute_target_name>` with the name of your Kubernetes compute
-target and `<instance_type_name>` with the name of the instance type you wish to select. If there is no `instance_type` property specified, the system will use `defaultinstancetype` to submit job.
-
-## Select instance type to deploy model
-
-To select an instance type for a model deployment using CLI (V2), specify its name for `instance_type` property in deployment YAML. For example:
-
-```yaml
-deployments:
- - name: blue
- app_insights_enabled: true
- model:
- name: sklearn_mnist_model
- version: 1
- local_path: ./model/sklearn_mnist_model.pkl
- code_configuration:
- code:
- local_path: ./script/
- scoring_script: score.py
- instance_type: <instance_type_name>
- environment:
- name: sklearn-mnist-env
- version: 1
- path: .
- conda_file: file:./model/conda.yml
- docker:
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
-```
+ * **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see [managed identities overview](../active-directory/managed-identities-azure-resources/overview.md) .
-In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there is no `instance_type` property specified, the system will use `defaultinstancetype` to deploy model.
-
-### Appendix I: AzureML extension components
-
-Upon AzureML extension deployment completes, it will create following resources in Azure cloud:
-
- |Resource name |Resource type | Description |
- |--|--|--|
- |Azure Service Bus|Azure resource|Used to sync nodes and cluster resource information to Azure Machine Learning services regularly.|
- |Azure Relay|Azure resource|Route traffic between Azure Machine Learning services and the Kubernetes cluster.|
-
-Upon AzureML extension deployment completes, it will create following resources in Kubernetes cluster, depending on each AzureML extension deployment scenario:
-
- |Resource name |Resource type |Training |Inference |Training and Inference| Description | Communication with cloud service|
- |--|--|--|--|--|--|--|
- |relayserver|Kubernetes deployment|**&check;**|**&check;**|**&check;**|The entry component to receive and sync the message with cloud.|Receive the request of job creation, model deployment from cloud service; sync the job status with cloud service.|
- |gateway|Kubernetes deployment|**&check;**|**&check;**|**&check;**|The gateway to communicate and send data back and forth.|Send nodes and cluster resource information to cloud services.|
- |aml-operator|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of training jobs.| Token exchange with cloud token service for authentication and authorization of Azure Container Registry used by training job.|
- |metrics-controller-manager|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Manage the configuration for Prometheus|N/A|
- |{EXTENSION-NAME}-kube-state-metrics|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Export the cluster-related metrics to Prometheus.|N/A|
- |{EXTENSION-NAME}-prometheus-operator|Kubernetes deployment|**&check;**|**&check;**|**&check;**| Provide Kubernetes native deployment and management of Prometheus and related monitoring components.|N/A|
- |amlarc-identity-controller|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
- |amlarc-identity-proxy|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
- |azureml-fe|Kubernetes deployment|N/A|**&check;**|**&check;**|The front-end component that routes incoming inference requests to deployed services.|azureml-fe service logs are sent to Azure Blob.|
- |inference-operator-controller-manager|Kubernetes deployment|N/A|**&check;**|**&check;**|Manage the lifecycle of inference endpoints. |N/A|
- |cluster-status-reporter|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Gather the cluster information, like cpu/gpu/memory usage, cluster healthiness.|N/A|
- |csi-blob-controller|Kubernetes deployment|**&check;**|N/A|**&check;**|Azure Blob Storage Container Storage Interface(CSI) driver.|N/A|
- |csi-blob-node|Kubernetes daemonset|**&check;**|N/A|**&check;**|Azure Blob Storage Container Storage Interface(CSI) driver.|N/A|
- |fluent-bit|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Gather the components' system log.| Upload the components' system log to cloud.|
- |k8s-host-device-plugin-daemonset|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Expose fuse to pods on each node.|N/A|
- |prometheus-prom-prometheus|Kubernetes statefulset|**&check;**|**&check;**|**&check;**|Gather and send job metrics to cloud.|Send job metrics like cpu/gpu/memory utilization to cloud.|
- |volcano-admission|Kubernetes deployment|**&check;**|N/A|**&check;**|Volcano admission webhook.|N/A|
- |volcano-controllers|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of Azure Machine Learning training job pods.|N/A|
- |volcano-scheduler |Kubernetes deployment|**&check;**|N/A|**&check;**|Used to do in cluster job scheduling.|N/A|
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/configure-kubernetes-cluster-2.png" alt-text="Screenshot of settings for developer configuration of Kubernetes cluster.":::
-> [!IMPORTANT]
- > * Azure ServiceBus and Azure Relay resources are under the same resource group as the Arc cluster resource. These resources are used to communicate with the Kubernetes cluster and modifying them will break attached compute targets.
- > * By default, the deployed kubernetes deployment resourses are randomly deployed to 1 or more nodes of the cluster, and daemonset resource are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes, use `nodeSelector` configuration setting described as below.
+1. Select **Attach**.
-> [!NOTE]
- > * **{EXTENSION-NAME}:** is the extension name specified with ```az k8s-extension create --name``` CLI command.
+ In the Attached compute tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*.
-### Appendix II: Review AzureML deployment configuration settings
+ :::image type="content" source="media/how-to-attach-arc-kubernetes/provision-resources.png" alt-text="Screenshot of attached settings for configuration of Kubernetes cluster.":::
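
   If you prefer to verify the result outside the studio, a rough sketch with the Python SDK v2 (assuming an authenticated `MLClient` named `ml_client` and the compute name you entered above) reads the compute back and prints its provisioning state:

   ```python
   # Sketch only: `ml_client` is assumed to be an authenticated MLClient for the workspace,
   # and "<compute_name>" is the name you entered during attach.
   attached = ml_client.compute.get("<compute_name>")
   print(attached.provisioning_state)  # "Succeeded" once the attach completes
   ```
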
+
+
-For AzureML extension deployment configurations, use ```--config``` or ```--config-protected``` to specify list of ```key=value``` pairs. Following is the list of configuration settings available to be used for different AzureML extension deployment scenario ns.
+## Next steps
- |Configuration Setting Key Name |Description |Training |Inference |Training and Inference
- |--|--|--|--|--|
- |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training support. | **&check;**| N/A | **&check;** |
- | ```enableInference``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** |
- | ```allowInsecureConnections``` |```True``` or ```False```, default False. This **must** be set to ```True``` for AzureML extension deployment with HTTP endpoints support for inference, when ```sslCertPemFile``` and ```sslKeyPemFile``` are not provided. |N/A| Optional | Optional |
- | ```inferenceRouterServiceType``` |```loadBalancer``` or ```nodePort```. **Must** be set for ```enableInference=true```. | N/A| **&check;** | **&check;** |
- | ```internalLoadBalancerProvider``` | This config is only applicable for Azure Kubernetes Service(AKS) cluster now. **Must** be set to ```azure``` to allow the inference router use internal load balancer. | N/A| Optional | Optional |
- |```sslSecret```| The Kubernetes secret name under azureml namespace to store `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to False. Use this config or give static cert and key file path in configuration protected settings. |N/A| Optional | Optional |
- |```sslCname``` |A SSL CName to use if enabling SSL validation on the cluster. | N/A | N/A | required when using HTTPS endpoint |
- | ```inferenceLoadBalancerHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy three ingress controller replicas for high availability, which requires at least three workers in a cluster. Set this value to ```False``` if you have fewer than three workers and want to deploy AzureML extension for development and testing only, in this case it will deploy one ingress controller replica only. | N/A| Optional | Optional |
- |```openshift``` | ```True``` or ```False```, default ```False```. Set to ```True``` if you deploy AzureML extension on ARO or OCP cluster. The deployment process will automatically compile a policy package and load policy package on each node so AzureML services operation can function properly. | Optional| Optional | Optional |
- |```nodeSelector``` | Set the node selector so the extension components and the training/inference workloads will only be deployed to the nodes with all specified selectors. Usage: `nodeSelector.key=value`, support multiple selectors. Example: `nodeSelector.node-purpose=worker nodeSelector.node-region=eastus`| Optional| Optional | Optional |
- |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```False```. [Nvidia Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on Nvidia GPU hardware. By default, AzureML extension deployment will not install Nvidia Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this configuration setting to ```True```, so the extension will install Nvidia Device Plugin, but make sure to have [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites) ready beforehand. | Optional |Optional |Optional |
- |```blobCsiDriverEnabled```| ```True``` or ```False```, default ```True```. Blob CSI driver is required for ML workloads. User can specify this configuration setting to ```False``` if it was installed already. | Optional |Optional |Optional |
- |```reuseExistingPromOp```|```True``` or ```False```, default ```False```. AzureML extension needs prometheus operator to manage prometheus. Set to ```True``` to reuse existing prometheus operator. Compatible [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md) helm chart versions are from 9.3.4 to 30.0.1.| Optional| Optional | Optional |
- |```volcanoScheduler.enable```| ```True``` or ```False```, default ```True```. AzureML extension needs volcano scheduler to schedule the job. Set to ```False``` to reuse existing volcano scheduler. Supported volcano scheduler versions are 1.4, 1.5. | Optional| N/A | Optional |
- |```logAnalyticsWS``` |```True``` or ```False```, default ```False```. AzureML extension integrates with Azure LogAnalytics Workspace to provide log viewing and analysis capability through LogAalytics Workspace. This setting must be explicitly set to ```True``` if customer wants to use this capability. LogAnalytics Workspace cost may apply. |N/A |Optional |Optional |
- |```installDcgmExporter``` |```True``` or ```False```, default ```False```. Dcgm-exporter is used to collect GPU metrics for GPU jobs. Specify ```installDcgmExporter``` flag to ```true``` to enable the build-in dcgm-exporter. |N/A |Optional |Optional |
+- [Create and use instance types for efficient compute resource usage](./reference-kubernetes.md#create-and-use-instance-types-for-efficient-compute-resource-usage)
+- [Train models with CLI v2](how-to-train-cli.md)
+- [Train models with Python SDK](how-to-set-up-training-targets.md)
+- [Deploy model with an online endpoint (CLI v2)](./how-to-deploy-managed-online-endpoints.md)
+- [Use batch endpoint for batch scoring (CLI v2)](./how-to-use-batch-endpoint.md)
- |Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference
- |--|--|--|--|--|
- | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to False. | N/A| Optional | Optional |
-
+### Examples
-## Next steps
+All AzureML examples can be found in [https://github.com/Azure/azureml-examples](https://github.com/Azure/azureml-examples).
-- [Train models with CLI (v2)](how-to-train-cli.md)-- [Configure and submit training runs](how-to-set-up-training-targets.md)-- [Tune hyperparameters](how-to-tune-hyperparameters.md)-- [Train a model using Scikit-learn](how-to-train-scikit-learn.md)-- [Train a TensorFlow model](how-to-train-tensorflow.md)-- [Train a PyTorch model](how-to-train-pytorch.md)-- [Train using Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)-- [Train model on-premise with outbound proxy server](../azure-arc/kubernetes/quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server)
+For any AzureML example, you only need to update the compute target name to your Kubernetes compute target; no other changes are required.
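
For instance, here's a minimal sketch using the Python SDK v2 (all names below are placeholders rather than values taken from the samples) that points a command job at an attached Kubernetes compute target:

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Placeholder workspace details; substitute your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    "<subscription_id>",
    "<resource_group>",
    "<workspace_name>",
)

# The only Kubernetes-specific change is the compute target name.
job = command(
    code="./src",                                # hypothetical training script folder
    command="python train.py",
    environment="<environment_name>:<version>",  # any environment registered in the workspace
    compute="<kubernetes_compute_target_name>",  # name of the attached Kubernetes compute
)
ml_client.create_or_update(job)
```
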
+* Explore training job samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/jobs](https://github.com/Azure/azureml-examples/tree/main/cli/jobs)
+* Explore model deployment with online endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes)
+* Explore batch endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch)
+* Explore training job samples with SDK v2 - [https://github.com/Azure/azureml-examples/tree/main/sdk/jobs](https://github.com/Azure/azureml-examples/tree/main/sdk/jobs)
+* Explore model deployment with online endpoint samples with SDK v2 - [https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints/online/kubernetes)
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
In this article, learn how to create and manage compute targets in Azure Machine Learning studio. You can also create and manage compute targets with:
-* Azure Machine Learning Learning SDK or CLI extension for Azure Machine Learning
+* Azure Machine Learning SDK or CLI extension for Azure Machine Learning
* [Compute instance](how-to-create-manage-compute-instance.md) * [Compute cluster](how-to-create-attach-compute-cluster.md)
- * [Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md)
* [Other compute resources](how-to-attach-compute-targets.md) * The [VS Code extension](how-to-manage-resources-vscode.md#compute-clusters) for Azure Machine Learning.
Follow the previous steps to view the list of compute targets. Then use these st
* [Compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create) * [Compute clusters](#amlcompute)
- * [Inference clusters](#inference-clusters)
* [Attached compute](#attached-compute) 1. Select __Create__.
During cluster creation or when editing compute cluster details, in the **Advanc
> [!IMPORTANT] > Using Azure Kubernetes Service with Azure Machine Learning has multiple configuration options. Some scenarios, such as networking, require additional setup and configuration. For more information on using AKS with Azure ML, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md).- Create or attach an Azure Kubernetes Service (AKS) cluster for large scale inferencing. Use the [steps above](#portal-create) to create the AKS cluster. Then fill out the form as follows:
Use the [steps above](#portal-create) to attach a compute. Then fill out the fo
* Azure Databricks (for use in machine learning pipelines) * Azure Data Lake Analytics (for use in machine learning pipelines) * Azure HDInsight
- * Kubernetes (preview)
+ * [Kubernetes](./how-to-attach-kubernetes-anywhere.md#attach-a-kubernetes-cluster-to-an-azureml-workspace)
1. Fill out the form and provide values for the required properties.
machine-learning How To Create Register Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-data-assets.md
Title: Create Azure Machine Learning data assets
+ Title: Create Data Assets
-description: Learn how to create Azure Machine Learning data assets to access your data for machine learning experiment runs.
+description: Learn how to create Azure Machine Learning data assets.
Last updated 05/24/2022
-# Create Azure Machine Learning data assets
+# Create data assets
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"] > * [v1](./v1/how-to-create-register-datasets.md) > * [v2 (current version)](how-to-create-register-datasets.md) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-In this article, you learn how to create Azure Machine Learning Data to access data for your local or remote experiments with the Azure Machine Learning SDK V2 and CLI V2. To understand where Data fits in Azure Machine Learning's overall data access workflow, see the [Work with Data](concept-data.md) article.
+In this article, you learn how to create a Data asset in Azure Machine Learning. By creating a Data asset, you create a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. You can create a Data asset from Datastores, Azure Storage, public URLs, and local files.
-By creating a Data asset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also Data assets are lazily evaluated, which aids in workflow performance speeds. You can create Data from Datastores, Azure Storage, public URLs, and local files.
+The benefits of creating Data assets are:
-With Azure Machine Learning Data assets, you can:
+* You can **share and reuse data** with other members of the team so that they don't need to remember file locations.
-* Easy to share with other members of the team (no need to remember file locations)
-
-* Seamlessly access data during model training without worrying about connection strings or data paths.
-
-* Can refer to the Data by short Entity name in Azure ML
+* You can **seamlessly access data** during model training (on any supported compute type) without worrying about connection strings or data paths.
+* You can **version** the data.
## Prerequisites
To create and work with Data assets, you need:
* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The [Azure Machine Learning CLI/SDK installed](how-to-configure-cli.md) and MLTable package installed.
-
- * Create an [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md), which is a fully configured and managed development environment that includes integrated notebooks and the SDK already installed.
-
- **OR**
-
- * Work on your own Jupyter notebook and install the CLI/SDK and required packages.
-
-> [!IMPORTANT]
-> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .
-
-## Compute size guidance
-
-When creating a Data asset, review your compute processing power and the size of your data in memory. The size of your data in storage isn't the same as the size of data in a dataframe. For example, data in CSV files can expand up to 10x in a dataframe, so a 1-GB CSV file can become 10 GB in a dataframe.
-
-If your data is compressed, it can expand further; 20 GB of relatively sparse data stored in compressed parquet format can expand to ~400 GB in memory.
-
-[Learn more about optimizing data processing in Azure Machine Learning](concept-optimize-data-processing.md).
+* The [Azure Machine Learning CLI/SDK](how-to-configure-cli.md) and the MLTable package (`pip install mltable`) installed.
-## Data types
+## Supported paths
-Azure Machine Learning allows you to work with different types of data. Your data can be local or in the cloud (from a registered Azure ML Datastore, a common Azure Storage URL or a public data url). In this article, you'll learn about using the Python SDK V2 and CLI V2 to work with _URIs_ and _Tables_. URIs reference a location either local to your development environment or in the cloud. Tables are a tabular data abstraction.
+When you create a data asset in Azure Machine Learning, you'll need to specify a `path` parameter that points to the location of your data. The table below shows the data locations supported in Azure Machine Learning, along with examples for the `path` parameter:
-For most scenarios, you could use URIs (`uri_folder` and `uri_file`). A URI references a location in storage that can be easily mapped to the filesystem of a compute node when you run a job. The data is accessed by either mounting or downloading the storage to the node.
-When using tables, you could use `mltable`. It's an abstraction for tabular data that is used for AutoML jobs, parallel jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning, and aren't using AutoML, we strongly encourage you to begin with URIs.
+|Location | Examples |
+|||
+|A path on your local computer | `./home/username/data/my_data` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A path on a datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
-If you're creating Azure ML Data asset from an existing Datastore:
-
-1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](/azure/role-based-access-control/check-access).
-
-1. Create the data asset by referencing paths in the datastore. You can create a Data asset from multiple paths in multiple datastores. There's no hard limit on the number of files or data size that you can create a data asset from.
> [!NOTE]
-> For each data path, a few requests will be sent to the storage service to check whether it points to a file or a folder. This overhead may lead to degraded performance or failure. A Data asset referencing one folder with 1000 files inside is considered referencing one data path. We recommend creating Data asset referencing less than 100 paths in datastores for optimal performance.
-
-> [!TIP]
-> You can create Data asset with identity-based data access. If you don't provide any credentials, we will use your identity by default.
+> When you create a data asset from a local path, it will be automatically uploaded to the default Azure Machine Learning datastore in the cloud.
+## Create a `uri_folder` data asset
-> [!TIP]
-> If you have dataset assets created using the SDK v1, you can still use those with SDK v2. For more information, see the [Consuming V1 Dataset Assets in V2](how-to-read-write-data-v2.md) section.
+The following shows how to create a data asset from a *folder*:
+# [CLI](#tab/CLI)
+Create a `YAML` file (`<file-name>.yml`):
-## URIs
-
-The code snippets in this section cover the following scenarios:
-* Registering data as an asset in Azure Machine Learning
-* Reading registered data assets from Azure Machine Learning in a job
-
-These snippets use `uri_file` and `uri_folder`.
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-- `uri_file` is a type that refers to a specific file. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv'`.-- `uri_folder` is a type that refers to a specific folder. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path'`.
+# Supported paths include:
+# local: ./<path>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
+type: uri_folder
+name: <name_of_data>
+description: <description goes here>
+path: <path>
+```
-> [!TIP]
-> We recommend using an argument parser to pass folder information into _data-plane_ code. By data-plane code, we mean your data processing and/or training code that you run in the cloud. The code that runs in your development environment and submits code to the data-plane is _control-plane_ code.
->
-> Data-plane code is typically a Python script, but can be any programming language. Passing the folder as part of job submission allows you to easily adjust the path from training locally using local data, to training in the cloud.
-> If you wanted to pass in just an individual file rather than the entire folder you can use the `uri_file` type.
+Next, create the data asset using the CLI:
-For a complete example, see the [working_with_uris.ipynb notebook](https://github.com/Azure/azureml-examples/blob/samuel100/mltable/sdk/assets/data/working_with_uris.ipynb).
+```azurecli
+az ml data create -f <file-name>.yml
+```
+# [Python-SDK](#tab/Python-SDK)
-### Register data as URI Folder type Data
+You can create a data asset in Azure Machine Learning using the following Python code:
-# [Python-SDK](#tab/Python-SDK)
```python from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes
-# select one from:
-my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
-my_path = 'https://<account_name>.blob.core.windows.net/<container_name>/path' # blob
+# Supported paths include:
+# local: './<path>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
+
+my_path = '<path>'
my_data = Data( path=my_path, type=AssetTypes.URI_FOLDER,
- description="description here",
- name="a_name",
- version='1'
+ description="<description>",
+ name="<name>",
+ version='<version>'
) ml_client.data.create_or_update(my_data) ```
-# [CLI](#tab/CLI)
-You can also use CLI to register a URI Folder type Data as below example.
-
-```azurecli
-az ml data create -f <file-name>.yml
-```
-
-Sample `YAML` file `<file-name>.yml` for local path is as below:
-
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-name: uri_folder_my_data
-description: Local data asset will be created as URI folder type Data in Azure ML.
-path: path
-```
-Sample `YAML` file `<file-name>.yml` for data folder in an existing Azure ML Datastore is as below:
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-name: uri_folder_my_data
-description: Datastore data asset will be created as URI folder type Data in Azure ML.
-type: uri_folder
-path: azureml://datastores/workspaceblobstore/paths/example-data/
-```
-
-Sample `YAML` file `<file-name>.yml` for data folder in storage url is as below:
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-name: cloud_file_wasbs_example
-description: Data asset created from folder in cloud using wasbs URL.
-type: uri_folder
-path: wasbs://mainstorage9c05dabf5c924.blob.core.windows.net/azureml-blobstore-54887b46-3cb0-485b-bb15-62e7b5578ee6/example-data/
-```
-### Consume registered URI Folder data assets in job
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-registered_data_asset = ml_client.data.get(name='titanic', version='1')
-
-my_job_inputs = {
- "input_data": Input(
- type=AssetTypes.URI_FOLDER,
- path=registered_data_asset.id
- )
-}
-
-job = command(
- code="./src",
- command='python read_data_asset.py --input_folder ${{inputs.input_data}}',
- inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
-
-### Register data as URI File type Data
-# [Python-SDK](#tab/Python-SDK)
-```python
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-# select one from:
-my_file_path = '<path>/<file>' # local
-my_file_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>' # adls gen2
-my_file_path = 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>' # blob
+## Create a `uri_file` data asset
-my_data = Data(
- path=my_file_path,
- type=AssetTypes.URI_FILE,
- description="description here",
- name="a_name",
- version='1'
-)
+The following shows how to create a data asset from a *specific file*:
-ml_client.data.create_or_update(my_data)
-```
# [CLI](#tab/CLI)
-You can also use CLI to register a URI File type Data as below example.
-```cli
-> az ml data create -f <file-name>.yml
-```
Sample `YAML` file `<file-name>.yml` for data in local path is as below:
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-name: uri_file_my_data
-description: Local data asset will be created as URI folder type Data in Azure ML.
-path: ./paths/example-data.csv
-```
-Sample `YAML` file `<file-name>.yml` for data in an existing Azure ML Datastore is as below:
```yaml $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-name: uri_file_my_data
-description: Datastore data asset will be created as URI folder type Data in Azure ML.
-type: uri_file
-path: azureml://datastores/workspaceblobstore/paths/example-data.csv
-```
-
-Sample `YAML` file `<file-name>.yml` for data in storage url is as below:
-```yaml
-$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-name: cloud_file_wasbs_example
-description: Data asset created from folder in cloud using wasbs URL.
-type: uri_file
-path: wasbs://mainstorage9c05dabf5c924.blob.core.windows.net/azureml-blobstore-54887b46-3cb0-485b-bb15-62e7b5578ee6/paths/example-data.csv
-```
-
-
-## MLTable
-### Register data as MLTable type Data assets
-Registering a `mltable` as an asset in Azure Machine Learning
-You can register a `mltable` as a data asset in Azure Machine Learning.
+# Supported paths include:
+# local: ./<path>/<file>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>/<file>
-In the MLTable file, the path attribute supports any Azure ML supported URI format:
--- a relative file: "file://foo/bar.csv"-- a short form entity URI: "azureml://datastores/foo/paths/bar/baz"-- a long form entity URI: "azureml://subscriptions/my-sub-id/resourcegroups/my-rg/workspaces/myworkspace/datastores/mydatastore/paths/path_to_data/"-- a storage URI: "https://", "wasbs://", "abfss://", "adl://"-- a public URI: "http://mypublicdata.com/foo.csv"
+type: uri_file
+name: <name>
+description: <description>
+path: <uri>
+```
+```azurecli
+az ml data create -f <file-name>.yml
+```
-Below we show an example of versioning the sample data in this repo. The data is uploaded to cloud storage and registered as an asset.
# [Python-SDK](#tab/Python-SDK) ```python from azure.ai.ml.entities import Data from azure.ai.ml.constants import AssetTypes
-import mltable
+
+# Supported paths include:
+# local: './<path>/<file>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>/<file>'
+my_path = '<path>'
my_data = Data(
- path="./sample_data",
- type=AssetTypes.MLTABLE,
- description="Titanic Data",
- name="titanic-mltable",
- version='1'
+ path=my_path,
+ type=AssetTypes.URI_FILE,
+ description="<description>",
+ name="<name>",
+ version="<version>"
)
-
+ ml_client.data.create_or_update(my_data) ```
-> [!TIP]
-> Although the above example shows a local file. Remember that path supports cloud storage (https, abfss, wasbs protocols). Therefore, if you want to register data in a > cloud location just specify the path with any of the supported protocols.
-
-# [CLI](#tab/CLI)
-You can also use CLI and following YAML that describes an MLTable to register MLTable Data.
-```cli
-> az ml data create -f <file-name>.yml
-```
-```yaml
-paths:
- - file: ./titanic.csv
-transformations:
- - read_delimited:
- delimiter: ','
- encoding: 'ascii'
- empty_as_string: false
- header: from_first_file
-```
-The contents of the MLTable file specify the underlying data location (here a local path) and also the transforms to perform on the underlying data before materializing into a pandas/spark/dask data frame. The important part here's that the MLTable-artifact doesn't have any absolute paths, making it *self-contained*. All the information stored in one folder; regardless of whether that folder is stored on your local drive or in your cloud drive or on a public http server.
-
-To consume the data in a job or interactive session, use `mltable`:
-
-```python
-import mltable
-
-tbl = mltable.load("./sample_data")
-df = tbl.to_pandas_dataframe()
-```
-
-For a full example of using an MLTable, see the [Working with MLTable notebook](https://github.com/Azure/azureml-examples/blob/samuel100/mltable/sdk/assets/data/working_with_mltable.ipynb).
+## Create a `mltable` data asset
-## mltable-artifact
+`mltable` is a way to abstract the schema definition for tabular data to make it easier to share data assets (an overview can be found in [MLTable](concept-data.md#mltable)).
-Here the files that make up the mltable-artifact are stored on the user's local machine:
+In this section, we show you how to create a data asset when the type is an `mltable`.
-```
-.
-├── MLTable
-└── iris.csv
-```
+### The MLTable file
-The contents of the MLTable file specify the underlying data location (here a local path) and also the transforms to perform on the underlying data before materializing into a pandas/spark/dask data frame:
+The MLTable file provides the specification of the data's schema so that the `mltable` *engine* can materialize the data into an in-memory object (Pandas/Dask/Spark). An *example* MLTable file is provided below:
-```yaml
-#source ../configs/dataset/iris/MLTable
-$schema: http://azureml/sdk-2-0/MLTable.json
+```yml
type: mltable paths:
- - file: ./iris.csv
+ - pattern: ./*.txt
transformations: - read_delimited:
- delimiter: ","
+      delimiter: ','
encoding: ascii header: all_files_same_headers ```
+> [!IMPORTANT]
+> We recommend co-locating the MLTable file with the underlying data in storage. For example:
+>
+> ```Text
+> ├── my_data
+> │   ├── MLTable
+> │   ├── file_1.txt
+> .
+> .
+> .
+> │   ├── file_n.txt
+> ```
+> Co-locating the MLTable file with the data ensures a **self-contained *artifact***: everything that is needed is stored in that one folder (`my_data`), regardless of whether that folder is on your local drive, in your cloud store, or on a public http server. You should **not** specify *absolute paths* in the MLTable file.
+
+In your Python code, you materialize the MLTable artifact into a Pandas dataframe using:
-The important part here's that the MLTable-artifact doesn't have any absolute paths, hence it's self-contained and all that is needed is stored in that one folder; regardless of whether that folder is stored on your local drive or in your cloud drive or on a public http server.
-
-This artifact file can be consumed in a command job as follows:
+```python
+import mltable
-```yaml
-#source ../configs/dataset/01-mltable-CommandJob.yaml
-$schema: http://azureml/sdk-2-0/CommandJob.json
-
-inputs:
- my_mltable_artifact:
- type: mltable
- # folder needs to contain an MLTable file
- mltable: file://iris
-
-command: |
- python -c "
- from mltable import load
- # load a table from a folder containing an MLTable file
- tbl = load(${{my_mltable_artifact}})
- tbl.to_pandas_dataframe()
- ...
- "
+tbl = mltable.load(uri="./my_data")
+df = tbl.to_pandas_dataframe()
```
-> [!NOTE]
-> **For local files and folders**, only relative paths are supported. To be explicit, we will **not** support absolute paths as that would require us to change the MLTable file that is residing on disk before we move it to cloud storage.
+The `uri` parameter in `mltable.load()` should be a valid path to a local or cloud **folder** which contains a valid MLTable file.
-You can put MLTable file and underlying data in the *same folder* but in a cloud object store. You can specify `mltable:` in their job that points to a location on a datastore that contains the MLTable file:
+> [!NOTE]
+> You will need the `mltable` library installed in your Environment (`pip install mltable`).
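
For example, here's a minimal sketch (with a placeholder datastore path) that loads an MLTable folder stored in the cloud rather than on your local disk:

```python
import mltable

# Placeholder URI: any supported path format works, as long as the folder it
# points to contains a valid MLTable file.
tbl = mltable.load(uri="azureml://datastores/<data_store_name>/paths/<path>")
df = tbl.to_pandas_dataframe()
```
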
-```yaml
-#source ../configs/dataset/04-mltable-CommandJob.yaml
-$schema: http://azureml/sdk-2-0/CommandJob.json
-
-inputs:
- my_mltable_artifact:
- type: mltable
- mltable: azureml://datastores/some_datastore/paths/data/iris
-
-command: |
- python -c "
- from mltable import load
- # load a table from a folder containing an MLTable file
- tbl = load(${{my_mltable_artifact}})
- tbl.to_pandas_dataframe()
- ...
- "
-```
+The following shows how to create an `mltable` data asset. The `path` can be any of the supported path formats outlined above.
-You can also have an MLTable file stored on the *local machine*, but no data files. The underlying data is stored on the cloud. In this case, the MLTable should reference the underlying data with an **absolute expression (i.e. a URI)**:
-```
-.
-├── MLTable
-```
+# [CLI](#tab/CLI)
+Create a `YAML` file (`<file-name>.yml`):
```yaml
-#source ../configs/dataset/iris-cloud/MLTable
-$schema: http://azureml/sdk-2-0/MLTable.json
-type: mltable
-
-paths:
- - file: azureml://datastores/mydatastore/paths/data/iris.csv
-transformations:
- - read_delimited:
- delimiter: ","
- encoding: ascii
- header: all_files_same_headers
-```
--
-### Supporting multiple files in a table
-While above scenarios are creating rectangular data, it's also possible to create an mltable-artifact that just contains files:
-
-```
-.
-└── MLTable
-```
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
-Where the contents of the MLTable file is:
+# path must point to the **folder** containing the MLTable artifact (MLTable file + data)
+# Supported paths include:
+# local: ./<path>
+# blob: https://<account_name>.blob.core.windows.net/<container_name>/<path>
+# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/
+# Datastore: azureml://datastores/<data_store_name>/paths/<path>
-```yaml
-#source ../configs/dataset/multiple-files/MLTable
-$schema: http://azureml/sdk-2-0/MLTable.json
type: mltable-
-# creating dataset from folder on cloud path.
-paths:
- - file: http://foo.com/1.csv
- - file: http://foo.com/2.csv
- - file: http://foo.com/3.csv
- - file: http://foo.com/4.csv
- - file: http://foo.com/5.csv
-```
-
-As outlined above, MLTable can be created from a URI or a local folder path:
-
-```yaml
-#source ../configs/types/22_input_mldataset_artifacts-PipelineJob.yaml
-
-$schema: http://azureml/sdk-2-0/PipelineJob.json
-
-jobs:
- first:
- description: this job takes a mltable-artifact as input and mounts it.
- Note that the actual data could be in a different location
-
- inputs:
- mnist:
- type: mltable # redundant but there for clarity
- # needs to point to a folder that contains an MLTable file
- mltable: azureml://datastores/some_datastore/paths/data/public/mnist
- mode: ro_mount # or download
-
- command: |
- python -c "
- import mltable as mlt
- # load a table from a folder containing an MLTable file
- tbl = mlt.load('${{inputs.mnist}}')
- tbl.list_files()
- ...
- "
-
- second:
- description: this job loads a table artifact from a local_path.
- Note that the folder needs to contain a well-formed MLTable file
-
- inputs:
- tbl_access_artifact:
- type: mltable
- mltable: file:./iris
- mode: download
-
- command: |
- python -c "
- import mltable as mlt
- # load a table from a folder containing an MLTable file
- tbl = MLTable.load('${{inputs.tbl_access_artifact}}')
- tbl.list_files()
- ...
- "
+name: <name_of_data>
+description: <description goes here>
+path: <path>
```
-MLTable-artifacts can yield files that aren't necessarily located in the `mltable`'s storage. Or it can **subset or shuffle** the data that resides in the storage using the `take_random_sample` transform for example. That view is only visible if the MLTable file is evaluated by the engine. The user can do that as described above by using the MLTable SDK by running `mltable.load`, but that requires python and the installation of the SDK.
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
-### Support globbing of files
-Along with users being able to provide a `file` or `folder`, the MLTable artifact file will also allow customers to specify a *pattern* to do globbing of files:
+Next, create the data asset using the CLI:
-```yaml
-#source ../configs/dataset/parquet-artifact-search/MLTable
-$schema: http://azureml/sdk-2-0/MLTable.json
-type: mltable
-paths:
- - pattern: parquet_files/*1.parquet # only get files with this specific pattern
-transformations:
- - read_parquet:
- include_path_column: false
+```azurecli
+az ml data create -f <file-name>.yml
```
+# [Python-SDK](#tab/Python-SDK)
+You can create a data asset in Azure Machine Learning using the following Python code:
-### Delimited text: Transformations
-There are the following transformations that are *specific to delimited text*.
--- `infer_column_types`: Boolean to infer column data types. Defaults to True. Type inference requires that the data source is accessible from current compute. Currently type inference will only pull first 200 rows. If the data contains multiple types of value, it's better to provide desired type as an override via `set_column_types` argument-- `encoding`: Specify the file encoding. Supported encodings are 'utf8', 'iso88591', 'latin1', 'ascii', 'utf16', 'utf32', 'utf8bom' and 'windows1252'. Defaults to utf8.-- header: user can choose one of the following options:
- - `no_header`
- - `from_first_file`
- - `all_files_different_headers`
- - `all_files_same_headers` (default)
-- `delimiter`: The separator used to split columns.-- `empty_as_string`: Specify if empty field values should be loaded as empty strings. The default (False) will read empty field values as nulls. Passing this as True will read empty field values as empty strings. If the values are converted to numeric or datetime, then this has no effect as empty values will be converted to nulls.-- `include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This is useful when reading multiple files, and want to know which file a particular record originated from, or to keep useful information in file path.-- `support_multi_line`: By default (support_multi_line=False), all line breaks, including those in quoted field values, will be interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may result in silently producing more records with misaligned field values. This should be set to True when the delimited files are known to contain quoted line breaks.-
-### Parquet files: Transforms
-If user doesn't define options for `read_parquet` transformation, default options will be selected (see below).
--- `include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This is useful when reading multiple files, and want to know which file a particular record originated from, or to keep useful information in file path.-
-### Json lines: Transformations
-Below are the supported transformations that are specific for json lines:
--- `include_path` Boolean to keep path information as column in the MLTable. Defaults to False. This is useful when reading multiple files, and want to know which file a particular record originated from, or to keep useful information in file path.-- `invalid_lines` How to handle lines that are invalid JSON. Supported values are `error` and `drop`. Defaults to `error`.-- `encoding` Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default is `utf8`.-
-## Global transforms
-
-MLTable-artifacts provide transformations specific to the delimited text, parquet, Delta. There are other transforms that mltable-artifact files support:
--- `take`: Takes the first *n* records of the table-- `take_random_sample`: Takes a random sample of the table where each record has a *probability* of being selected. The user can also include a *seed*.-- `skip`: This skips the first *n* records of the table-- `drop_columns`: Drops the specified columns from the table. This transform supports regex so that users can drop columns matching a particular pattern.-- `keep_columns`: Keeps only the specified columns in the table. This transform supports regex so that users can keep columns matching a particular pattern.-- `filter`: Filter the data, leaving only the records that match the specified expression.-- `extract_partition_format_into_columns`: Specify the partition format of path. Defaults to None. The partition information of each path will be extracted into columns based on the specified format. Format part '{column_name}' creates string column, and '{column_name:yyyy/MM/dd/HH/mm/ss}' creates datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm' and 'ss' are used to extract year, month, day, hour, minute and second for the datetime type. The format should start from the position of first partition key until the end of file path. For example, given the path '../Accounts/2019/01/01/data.csv' where the partition is by department name and time, partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.csv' creates a string column 'Department' with the value 'Accounts' and a datetime column 'PartitionDate' with the value '2019-01-01'.
-Our principle here's to support transforms *specific to data delivery* and not to get into wider feature engineering transforms.
--
-## Traits
-The keen eyed among you may have spotted that `mltable` type supports a `traits` section. Traits define fixed characteristics of the table (that is, they are **not** freeform metadata that users can add) and they don't perform any transformations but can be used by the engine.
--- `index_columns`: Set the table index using existing columns. This trait can be used by partition_by in the data plane to split data by the index.-- `timestamp_column`: Defines the timestamp column of the table. This trait can be used in filter transforms, or in other data plane operations (SDK) such as drift detection.
+```python
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
-Moreover, *in the future* we can use traits to define RAI aspects of the data, for example:
+# my_path must point to the folder containing the MLTable artifact (MLTable file + data)
+# Supported paths include:
+# local: './<path>'
+# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
+# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
+# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
-- `sensitive_columns`: Here the user can define certain columns that contain sensitive information.
+my_path = '<path>'
-Again, this isn't a transform but is informing the system of some extra properties in the data.
+my_data = Data(
+ path=my_path,
+ type=AssetTypes.MLTABLE,
+ description="<description>",
+ name="<name>",
+ version='<version>'
+)
+ml_client.data.create_or_update(my_data)
+```
+> [!NOTE]
+> The path points to the **folder** containing the MLTable artifact.
+ ## Next steps
-* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
-* [Install and use the CLI (v2)](how-to-configure-cli.md)
-* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md)
-* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md)
-* Learn more about [Data in Azure Machine Learning](concept-data.md)
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
-# Connect to storage with Azure Machine Learning datastores
+# Create datastores
In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores.
In this article, learn how to connect to data storage services on Azure with Azu
- An Azure Machine Learning workspace. > [!NOTE]
-> Azure Machine Learning datastores do **not** create the underlying storage accounts, rather they register an **existing** storage account for use in Azure Machine Learning. It is not a requirement to use Azure Machine Learning datastores - you can use storage URIs directly assuming you have access to the underlying data.
+> Azure Machine Learning datastores do **not** create the underlying storage accounts; rather, they link an **existing** storage account for use in Azure Machine Learning. It is not a requirement to use Azure Machine Learning datastores - you can use storage URIs directly, assuming you have access to the underlying data.
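
For example, here's a rough sketch (placeholder names and URI throughout, assuming an authenticated `MLClient` named `ml_client`) of passing a storage URI straight to a job input without registering a datastore first:

```python
from azure.ai.ml import Input, command
from azure.ai.ml.constants import AssetTypes

# Placeholder values; the blob URI is consumed directly, no datastore involved.
job = command(
    code="./src",
    command="python train.py --input ${{inputs.training_data}}",
    inputs={
        "training_data": Input(
            type=AssetTypes.URI_FILE,
            path="https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>",
        )
    },
    environment="<environment_name>:<version>",
    compute="<compute_name>",
)
ml_client.create_or_update(job)
```
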
## Create an Azure Blob datastore
ml_client.create_or_update(store)
```python from azure.ai.ml.entities import AzureBlobDatastore
-from azure.ai.ml.entities._datastore.credentials import AccountKeyCredentials
from azure.ai.ml import MLClient ml_client = MLClient.from_config()
-creds = AccountKeyCredentials(account_key="")
- store = AzureBlobDatastore(
- name="",
- description="",
- account_name="",
- container_name="",
- credentials=creds
+ name="blob-protocol-example",
+ description="Datastore pointing to a blob container using wasbs protocol.",
+ account_name="mytestblobstore",
+ container_name="data-container",
+ protocol="wasbs",
+ credentials={
+ "account_key": "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX"
+ },
) ml_client.create_or_update(store)
ml_client.create_or_update(store)
```python from azure.ai.ml.entities import AzureBlobDatastore
-from azure.ai.ml.entities._datastore.credentials import SasTokenCredentials
from azure.ai.ml import MLClient ml_client = MLClient.from_config()
-creds = SasTokenCredentials(sas_token="")
- store = AzureBlobDatastore(
- name="",
- description="",
- account_name="",
- container_name="",
- credentials=creds
+ name="blob-sas-example",
+ description="Datastore pointing to a blob container using SAS token.",
+ account_name="mytestblobstore",
+ container_name="data-container",
+ credentials={
+ "sas_token": "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX"
+ },
) ml_client.create_or_update(store)
ml_client.create_or_update(store)
```python from azure.ai.ml.entities import AzureDataLakeGen2Datastore
-from azure.ai.ml.entities._datastore.credentials import ServicePrincipalCredentials
from azure.ai.ml import MLClient ml_client = MLClient.from_config()
-creds = ServicePrincipalCredentials(
- authority_url="",
- resource_url=""
- tenant_id="",
- secrets=""
-)
- store = AzureDataLakeGen2Datastore(
- name="",
- description="",
- account_name="",
- file_system="",
- credentials=creds
+ name="adls-gen2-example",
+ description="Datastore pointing to an Azure Data Lake Storage Gen2.",
+ account_name="mytestdatalakegen2",
+ filesystem="my-gen2-container",
+ credentials={
+ "tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
+ "client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
+ "client_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
+ },
) ml_client.create_or_update(store)
az ml datastore create --file my_files_datastore.yml
```python from azure.ai.ml.entities import AzureFileDatastore
-from azure.ai.ml.entities._datastore.credentials import AccountKeyCredentials
from azure.ai.ml import MLClient ml_client = MLClient.from_config()
-creds = AccountKeyCredentials(account_key="")
- store = AzureFileDatastore(
- name="",
- description="",
- account_name="",
- file_share_name="",
- credentials=creds
+ name="file-example",
+ description="Datastore pointing to an Azure File Share.",
+ account_name="mytestfilestore",
+ file_share_name="my-share",
+ credentials={
+ "account_key": "XXXxxxXXXxXXXXxxXXXXXxXXXXXxXxxXxXXXxXXXxXXxxxXXxxXXXxXxXXXxxXxxXXXXxxxxxXXxxxxxxXXXxXXX"
+ },
) ml_client.create_or_update(store)
ml_client.create_or_update(store)
```python from azure.ai.ml.entities import AzureFileDatastore
-from azure.ai.ml.entities._datastore.credentials import SasTokenCredentials
from azure.ai.ml import MLClient ml_client = MLClient.from_config()
-creds = SasTokenCredentials(sas_token="")
- store = AzureFileDatastore(
- name="",
- description="",
- account_name="",
- file_share_name="",
- credentials=creds
+ name="file-sas-example",
+ description="Datastore pointing to an Azure File Share using SAS token.",
+ account_name="mytestfilestore",
+ file_share_name="my-share",
+ credentials={
+ "sas_token": "?xx=XXXX-XX-XX&xx=xxxx&xxx=xxx&xx=xxxxxxxxxxx&xx=XXXX-XX-XXXXX:XX:XXX&xx=XXXX-XX-XXXXX:XX:XXX&xxx=xxxxx&xxx=XXxXXXxxxxxXXXXXXXxXxxxXXXXXxxXXXXXxXXXXxXXXxXXxXX"
+ },
) ml_client.create_or_update(store)
ml_client.create_or_update(store)
```python from azure.ai.ml.entities import AzureDataLakeGen1Datastore
-from azure.ai.ml.entities._datastore.credentials import ServicePrincipalCredentials
from azure.ai.ml import MLClient ml_client = MLClient.from_config()
-creds = ServicePrincipalCredentials(
- authority_url="",
- resource_url=""
- tenant_id="",
- secrets=""
-)
- store = AzureDataLakeGen1Datastore(
- name="",
- store_name="",
- description="",
- credentials=creds
+ name="adls-gen1-example",
+ description="Datastore pointing to an Azure Data Lake Storage Gen1.",
+ store_name="mytestdatalakegen1",
+ credentials={
+ "tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
+ "client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
+ "client_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
+ },
) - ml_client.create_or_update(store) ```
ml_client.create_or_update(store)
## Next steps
-* [Register and Consume your data](how-to-create-register-data-assets.md)
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
+- [Create data assets](how-to-create-register-data-assets.md#create-data-assets)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Use the following steps to enable access to data stored in Azure Blob and File s
> [!TIP] > The first step is not required for the default storage account for the workspace. All other steps are required for *any* storage account behind the VNet and used by the workspace, including the default storage account.
-1. **If the storage account is the *default* storage for your workspace, skip this step**. If it is not the default, **Grant the workspace managed identity the 'Storage Blob Data Reader' role** for the Azure storage account so that it can read data from blob storage.
+1. **If the storage account is the *default* storage for your workspace, skip this step**. If it is not the default, __Grant the workspace managed identity the 'Storage Blob Data Reader' role__ for the Azure storage account so that it can read data from blob storage.
For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role.
-1. **Grant the workspace managed identity the 'Reader' role for storage private endpoints**. If your storage service uses a __private endpoint__, grant the workspace's managed identity **Reader** access to the private endpoint. The workspace's managed identity in Azure AD has the same name as your Azure Machine Learning workspace.
+1. __Grant the workspace managed identity the 'Reader' role for storage private endpoints__. If your storage service uses a __private endpoint__, grant the workspace's managed identity __Reader__ access to the private endpoint. The workspace's managed identity in Azure AD has the same name as your Azure Machine Learning workspace.
> [!TIP] > Your storage account may have multiple private endpoints. For example, one storage account may have separate private endpoint for blob, file, and dfs (Azure Data Lake Storage Gen2). Add the managed identity to all these endpoints.
Use the following steps to enable access to data stored in Azure Blob and File s
For more information, see the [Reader](../role-based-access-control/built-in-roles.md#reader) built-in role. <a id='enable-managed-identity'></a>
-1. **Enable managed identity authentication for default storage accounts**. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account, which are defined when you create your workspace. You can also set new defaults in the **Datastore** management page.
+1. __Enable managed identity authentication for default storage accounts__. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account, which are defined when you create your workspace. You can also set new defaults in the __Datastore__ management page.
![Screenshot showing where default datastores can be found](./media/how-to-enable-studio-virtual-network/default-datastores.png)
Use the following steps to enable access to data stored in Azure Blob and File s
|Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment will fail regardless of any other datastores in use.| |Workspace default file store| Stores AutoML experiment assets. Enable managed identity authentication on this storage account to submit AutoML experiments. |
-1. **Configure datastores to use managed identity authentication**. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
+1. __Configure datastores to use managed identity authentication__. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
- Azure Machine Learning uses [datastores](concept-data.md#datastores) to connect to storage accounts. When creating a new datastore, use the following steps to configure a datastore to use managed identity authentication:
+ Azure Machine Learning uses [datastores](concept-data.md#datastore) to connect to storage accounts. When creating a new datastore, use the following steps to configure a datastore to use managed identity authentication:
1. In the studio, select __Datastores__.
When using Azure Data Lake Storage Gen1 as a datastore, you can only use POSIX-s
When using Azure Data Lake Storage Gen2 as a datastore, you can use both Azure RBAC and POSIX-style access control lists (ACLs) to control data access inside of a virtual network.
-**To use Azure RBAC**, follow the steps in the [Datastore: Azure Storage Account](#datastore-azure-storage-account) section of this article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
+__To use Azure RBAC__, follow the steps in the [Datastore: Azure Storage Account](#datastore-azure-storage-account) section of this article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
-**To use ACLs**, the workspace's managed identity can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
+__To use ACLs__, the workspace's managed identity can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
## Datastore: Azure SQL Database
After you create a SQL contained user, grant permissions to it by using the [GRA
When using the Azure Machine Learning designer intermediate component output, you can specify the output location for any component in the designer. Use this to store intermediate datasets in separate location for security, logging, or auditing purposes. To specify output, use the following steps: 1. Select the component whose output you'd like to specify.
-1. In the component settings pane that appears to the right, select **Output settings**.
+1. In the component settings pane that appears to the right, select __Output settings__.
1. Specify the datastore you want to use for each component output. Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline will fail.
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
To determine the current usage for an endpoint, [view the metrics](how-to-monito
To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases) section and provide the following information:
+1. When opening the support request, __do not select Service and subscription limits (quotas)__. Instead, select __Technical__ as the issue type.
1. Provide the Azure __subscriptions__ and __regions__ where you want to increase the quota. 1. Provide the __tenant ID__ and __customer name__. 1. Provide the __quota type__ and __new limit__. Use the following table as a guide:
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
The extension currently supports datastores of the following types:
- Azure Data Lake Gen 2 - Azure File
-For more information, see [datastores](concept-data.md#datastores).
+For more information, see [datastores](concept-data.md#datastore).
### Create a datastore
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
+
+ Title: 'Migrate from v1 to v2'
+
+description: Migrate from v1 to v2 of Azure Machine Learning REST APIs, CLI extension, and Python SDK (preview).
++++++ Last updated : 06/01/2022++++
+# How to migrate from v1 to v2
+
+Azure Machine Learning's v2 REST APIs, Azure CLI extension, and Python SDK (preview) introduce consistency and a set of new features to accelerate the production machine learning lifecycle. In this article, we'll give an overview of migrating from v1 to v2, with recommendations to help you decide on v1, v2, or both.
+
+## Prerequisites
+
+- General familiarity with Azure ML and the v1 Python SDK.
+- Understand [what is v2?](concept-v2.md)
+
+## Should I use v2?
+
+You should use v2 if you're starting a new machine learning project. A new v2 project can reuse resources like workspaces and compute and assets like models and environments created using v1. You can also use v1 and v2 in tandem, for example using the v1 Python SDK within jobs that are submitted from the v2 CLI extension. However, see the [section below](#can-i-use-v1-and-v2-together) for details on why separating v1 and v2 use is recommended.
+
+We recommend assessing the effort needed to migrate a project from v1 to v2. First, you should ensure all the features needed from v1 are available in v2. Some notable feature gaps include:
+
+- Spark support in jobs.
+- Publishing jobs (pipelines in v1) as endpoints.
+- AutoML jobs within pipeline jobs (AutoML step in a pipeline in v1).
+- Model deployment to Azure Container Instance (ACI), replaced with managed online endpoints.
+- An equivalent for ParallelRunStep in jobs.
+- Support for SQL/database datastores.
+- Built-in components in the designer.
+
+You should then ensure the features you need in v2 meet your organization's requirements, such as being generally available. You and your team will need to assess on a case-by-case basis whether migrating to v2 is right for you.
+
+> [!IMPORTANT]
+> New features in Azure ML will only be launched in v2.
+
+## How do I migrate to v2?
+
+To migrate to v2, start by prototyping an existing v1 workflow into v2. Migrating will typically include:
+
+- Optionally (and recommended in most cases), re-create resources and assets with v2.
+- Refactor model training code to de-couple Azure ML code from ML model code (model training, model logging, and other model tracking code).
+- Refactor Azure ML model deployment code and test with v2 endpoints.
+- Refactor CI/CD code to use the v2 CLI (recommended), v2 Python SDK, or directly use REST.
+
+Based on this prototype, you can estimate the effort involved for a full migration to v2. Consider the workflow patterns (like [GitOps](#a-note-on-gitops-with-v2)) your organization wants to establish for use with v2 and factor this effort in.
+
+## Which v2 API should I use?
+
+In v2, interfaces are available via the REST API, CLI, and Python SDK (preview). The interface you should use depends on your scenario and preferences.
+
+|API|Notes|
+|-|-|
+|REST|Fewest dependencies and overhead. Use for building applications on Azure ML as a platform, directly in programming languages without an SDK provided, or per personal preference.|
+|CLI|Recommended for automation with CI/CD or per personal preference. Allows quick iteration with YAML files and straightforward separation between Azure ML and ML model code.|
+|Python SDK|Recommended for complicated scripting (for example, programmatically generating large pipeline jobs) or per personal preference. Allows quick iteration with YAML files or development solely in Python.|
+
+## Can I use v1 and v2 together?
+
+Generally, yes. Resources like workspace, compute, and datastore work across v1 and v2, with exceptions. A user can call the v1 Python SDK to change a workspace's description, then use the v2 CLI extension to change it again. Jobs (experiments/runs/pipelines in v1) can be submitted to the same workspace from the v1 or v2 Python SDK. A workspace can have both v1 and v2 model deployment endpoints. You can also call v1 Python SDK code within jobs created via v2, though [this pattern isn't recommended](#production-model-training).
+
+We recommend creating a new workspace for using v2 to keep v1/v2 entities separate and avoid backward/forward compatibility considerations.
+
+> [!IMPORTANT]
+> If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md) for details.
+
+## Migrating resources and assets
+
+This section gives an overview of migration recommendations for specific resources and assets in Azure ML. See the concept article for each entity for details on their usage in v2.
+
+### Workspace
+
+Workspaces don't need to be migrated with v2. You can use the same workspace, regardless of whether you're using v1 or v2. We recommend creating a new workspace for using v2 to keep v1/v2 entities separate and avoid backward/forward compatibility considerations.
+
+Do consider migrating the code for deploying a workspace to v2. Typically Azure resources are managed via Azure Resource Manager (and Bicep) or similar resource provisioning tools. Alternatively, you can use the CLI (v2) and YAML files.
+
+> [!IMPORTANT]
+> If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md) for details.
+
+### Connection (workspace connection in v1)
+
+Workspace connections from v1 are persisted on the workspace, and fully available with v2.
+
+We recommend migrating the code for creating connections to v2.
+
+### Datastore
+
+Object storage datastore types created with v1 are fully available for use in v2. Database datastores are not supported; export to object storage (usually Azure Blob) is the recommended migration path.
+
+We recommend migrating the code for creating datastores to v2.
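+
+As a minimal sketch with the v2 Python SDK (the datastore, account, and container names are placeholders, and credentials are omitted, which assumes identity-based access), creating a blob datastore might look like this:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import AzureBlobDatastore
+
+ml_client = MLClient.from_config()
+
+# A credential-less (identity-based) blob datastore; names are placeholders.
+store = AzureBlobDatastore(
+    name="my_blob_datastore",
+    account_name="<storage_account_name>",
+    container_name="<container_name>",
+)
+ml_client.create_or_update(store)
+```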
+
+### Compute
+
+Compute of type `AmlCompute` and `ComputeInstance` is fully available for use in v2.
+
+We recommend migrating the code for creating compute to v2.
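+
+For example, a minimal sketch of creating an `AmlCompute` cluster with the v2 Python SDK (the cluster name and VM size are assumptions):
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import AmlCompute
+
+ml_client = MLClient.from_config()
+
+# A small autoscaling CPU cluster; name and size are placeholders.
+cpu_cluster = AmlCompute(
+    name="cpu-cluster",
+    size="STANDARD_DS3_v2",
+    min_instances=0,
+    max_instances=4,
+)
+ml_client.begin_create_or_update(cpu_cluster)
+```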
+
+### Endpoint and deployment (endpoint or web service in v1)
+
+You can continue using your existing v1 model deployments. For new model deployments, we recommend migrating to v2. In v2, we offer managed endpoints or Kubernetes endpoints. The following table guides our recommendation:
+
+|Endpoint type in v2|Migrate from|Notes|
+|-|-|-|
+|Local|ACI|Quick test of model deployment locally; not for production.|
+|Managed online endpoint|ACI, AKS|Enterprise-grade managed model deployment infrastructure with near real-time responses and massive scaling for production.|
+|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively-parallel batch processing for production.|
+|Azure Kubernetes Service (AKS)|ACI, AKS|Manage your own AKS cluster(s) for model deployment, giving flexibility and granular control at the cost of IT overhead.|
+|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-prem, giving flexibility and granular control at the cost of IT overhead.|
+
+### Jobs (experiments, runs, pipelines in v1)
+
+In v2, "experiments", "runs", and "pipelines" are consolidated into jobs. A job has a type. Most jobs are `command` jobs that run a command, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else. Another common type of job is `pipeline`, which defines child jobs that may have input/output relationships, forming a directed acyclic graph (DAG).
+
+To migrate, you'll need to change your code for submitting jobs to v2. We recommend refactoring the control-plane code authoring a job into YAML file specification, which can then be submitted through the v2 CLI or Python SDK (preview). A simple `command` job looks like this:
++
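+As a rough equivalent with the v2 Python SDK (the `./src` folder, script name, curated environment, and compute target are assumptions), the same kind of command job can be authored and submitted like this:
+
+```python
+from azure.ai.ml import MLClient, command
+
+ml_client = MLClient.from_config()
+
+# A minimal command job: run a script from a local folder on an existing compute cluster.
+job = command(
+    code="./src",                 # local folder containing main.py (placeholder)
+    command="python main.py",
+    environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
+    compute="cpu-cluster",        # assumed existing compute target
+)
+returned_job = ml_client.jobs.create_or_update(job)
+```
+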
+What you run *within* the job does not need to be migrated to v2. However, it is recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. See [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md) for details.
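+
+For example, a hedged sketch of what that replacement can look like inside a training script (the parameter and metric names are illustrative):
+
+```python
+# train.py - illustrative only; no azureml.* imports, MLflow handles tracking
+import mlflow
+
+with mlflow.start_run():
+    mlflow.log_param("learning_rate", 0.01)
+    # ... model training code ...
+    mlflow.log_metric("accuracy", 0.92)
+```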
+
+We recommend migrating the code for creating jobs to v2. You can see [how to train models with the CLI (v2)](how-to-train-cli.md) and the [job YAML references](reference-yaml-job-command.md) for authoring jobs in v2 YAMLs.
+
+### Data (datasets in v1)
+
+Datasets are renamed to data assets. Interoperability between v1 datasets and v2 data assets is the most complex of any entity in Azure ML.
+
+Data assets in v2 (or File Datasets in v1) are *references* to files in object storage. Thus, deleting a data asset (or v1 dataset) doesn't actually delete anything in the underlying storage; it only removes the reference. Therefore, it may be easier to avoid backward and forward compatibility considerations for data by re-creating v1 datasets as v2 data assets.
+
+For details on data in v2, see the [data concept article](concept-data.md).
+
+We recommend migrating the code for creating data assets to v2.
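+
+As a minimal sketch of re-creating a v1 dataset as a v2 data asset with the Python SDK (the path, name, and version are placeholders):
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+ml_client = MLClient.from_config()
+
+# A uri_folder data asset that points at existing files in storage; nothing is copied.
+my_data = Data(
+    path="abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>",
+    type=AssetTypes.URI_FOLDER,
+    name="<data_asset_name>",
+    version="1",
+    description="Re-created from a v1 dataset",
+)
+ml_client.data.create_or_update(my_data)
+```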
+
+### Model
+
+Models created from v1 can be used in v2. In v2, explicit model types are introduced. Similar to data assets, it may be easier to re-create a v1 model as a v2 model, setting the type appropriately.
+
+We recommend migrating the code for creating models to v2.
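+
+For instance, a minimal sketch of re-registering a model with an explicit type using the Python SDK (the path, name, and type choice are assumptions):
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import Model
+from azure.ai.ml.constants import AssetTypes
+
+ml_client = MLClient.from_config()
+
+# Register a model file with an explicit type; use MLFLOW_MODEL for MLflow-format models.
+model = Model(
+    path="./model/sklearn_mnist_model.pkl",
+    type=AssetTypes.CUSTOM_MODEL,
+    name="<model_name>",
+    version="1",
+)
+ml_client.models.create_or_update(model)
+```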
+
+### Environment
+
+Environments created from v1 can be used in v2. In v2, environments have new features like creation from a local Docker context.
+
+We recommend migrating the code for creating environments to v2.
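+
+As a minimal sketch with the Python SDK (the environment name, image, conda file, and Docker context folder are placeholders), an environment can be created from a base image plus a conda specification, or from a local Docker build context:
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import Environment, BuildContext
+
+ml_client = MLClient.from_config()
+
+# Option 1: base image plus a conda specification file.
+env = Environment(
+    name="<environment_name>",
+    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+    conda_file="./conda.yml",
+)
+ml_client.environments.create_or_update(env)
+
+# Option 2: build from a local Docker context folder (a new v2 capability).
+docker_env = Environment(
+    name="<environment_name>-docker",
+    build=BuildContext(path="./docker-context"),
+)
+ml_client.environments.create_or_update(docker_env)
+```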
+
+## Scenarios across the machine learning lifecycle
+
+There are a few scenarios that are common across the machine learning lifecycle using Azure ML. We'll look at a few and give general recommendations for migrating to v2.
+
+### Azure setup
+
+Azure recommends Azure Resource Manager templates (often via Bicep for ease of use) to create resources. The same approach works well for creating Azure ML resources.
+
+If your team is only using Azure ML, you may consider provisioning the workspace and any other resources via YAML files and CLI instead.
+
+### Prototyping models
+
+We recommend v2 for prototyping models. You may consider using the CLI for an interactive use of Azure ML, while your model training code is Python or any other programming language. Alternatively, you may adopt a full-stack approach with Python solely using the Azure ML SDK or a mixed approach with the Azure ML Python SDK and YAML files.
+
+### Production model training
+
+We recommend v2 for production model training. Jobs consolidate the terminology and provide consistency that allows for an easier transition between types (for example, `command` to `sweep`) and a GitOps-friendly process for serializing jobs into YAML files.
+
+With v2, you should separate your machine learning code from the control plane code. This separation allows for easier iteration and an easier transition between local and cloud.
+
+Typically, converting to v2 will involve refactoring your code to use MLflow for tracking and model logging. See the [MLflow concept article](concept-mlflow.md) for details.
+
+### Production model deployment
+
+We recommend v2 for production model deployment. Managed endpoints abstract the IT overhead and provide a performant solution for deploying and scoring models, both for online (near real-time) and batch (massively parallel) scenarios.
+
+Kubernetes deployments are supported in v2 through AKS or Azure Arc, enabling Azure cloud and on-premises deployments managed by your organization.
+
+### Machine learning operations (MLOps)
+
+An MLOps workflow typically involves CI/CD through an external tool. We recommend refactoring existing CI/CD workflows to use the v2 APIs. Typically, the CLI is used in CI/CD, though you can alternatively invoke Python or use REST directly.
+
+The solution accelerator for MLOps with v2 is being developed at https://github.com/Azure/mlops-v2 and can be used as a reference or adopted for setup and automation of the machine learning lifecycle.
+
+#### A note on GitOps with v2
+
+A key paradigm with v2 is serializing machine learning entities as YAML files for source control with `git`, enabling better GitOps approaches than were possible with v1. For instance, you could enforce policy by which only a service principal used in CI/CD pipelines can create/update/delete some or all entities, ensuring changes go through a governed process like pull requests with required reviewers. Since the files in source control are YAML, they're easy to diff and track changes over time. You and your team may consider shifting to this paradigm as you migrate to v2.
+
+You can obtain a YAML representation of any entity with the CLI via `az ml <entity> show --output yaml`. Note that this output will have system-generated properties, which can be ignored or deleted.
+
+## Next steps
+
+- [Get started with the CLI (v2)](how-to-configure-cli.md)
+- [Get started with the Python SDK (v2)](https://aka.ms/sdk-v2-install)
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Title: Read and write data
+ Title: Read and write data in jobs
-description: Learn how to read and write data for consumption in Azure Machine Learning training jobs.
+description: Learn how to read and write data in Azure Machine Learning training jobs.
#Customer intent: As an experienced Python developer, I need to read in my data to make it available to a remote compute to train my machine learning models.
-# Read and write data for ML experiments
+# Read and write data in a job
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-Learn how to read and write data for your training jobs with the Azure Machine Learning Python SDK v2(preview) and the Azure Machine Learning CLI extension v2.
+Learn how to read and write data for your jobs with the Azure Machine Learning Python SDK v2 (preview) and the Azure Machine Learning CLI extension v2.
## Prerequisites
Learn how to read and write data for your training jobs with the Azure Machine L
- An Azure Machine Learning workspace
-```python
+## Supported paths
-from azure.ai.ml import MLClient
-from azure.identity import InteractiveBrowserCredential
+When you provide a data input/output to a Job, you'll need to specify a `path` parameter that points to the data location. Below is a table that shows the different data locations supported in Azure Machine Learning and examples for the `path` parameter:
-#enter details of your AML workspace
-subscription_id = '<SUBSCRIPTION_ID>'
-resource_group = '<RESOURCE_GROUP>'
-workspace = '<AML_WORKSPACE_NAME>'
-#get a handle to the workspace
-ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace)
-```
+|Location | Examples |
+|||
+|A path on your local computer | `./home/username/data/my_data` |
+|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
+|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
+|A path on a Datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
+
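+For instance, a minimal Python SDK sketch of passing two of these path styles to job inputs (the datastore name and folder are placeholders):
+
+```python
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes
+
+# A file on a public http(s) server.
+web_file = Input(
+    type=AssetTypes.URI_FILE,
+    path="https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv",
+)
+
+# A folder on an Azure Machine Learning datastore.
+datastore_folder = Input(
+    type=AssetTypes.URI_FOLDER,
+    path="azureml://datastores/<data_store_name>/paths/<path>",
+)
+```
+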
+## Supported modes
-## Read local data in a job
+When you run a job with data inputs/outputs, you can specify the *mode* - for example, whether you would like the data to be read-only mounted or downloaded to the compute target. The table below shows the possible modes for different type/mode/input/output combinations:
+
+Type | Input/Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct` | `eval_download` | `eval_mount`
+---- | ------------ | :------: | :--------: | :--------: | :--------: | :------: | :-------------: | :---------:
+`uri_folder` | Input | | ✓ | ✓ | | ✓ | |
+`uri_file` | Input | | ✓ | ✓ | | ✓ | |
+`mltable` | Input | | ✓ | ✓ | | ✓ | ✓ | ✓
+`uri_folder` | Output | ✓ | | | ✓ | ✓ | |
+`uri_file` | Output | ✓ | | | ✓ | ✓ | |
+`mltable` | Output | ✓ | | | ✓ | ✓ | |
+
+> [!NOTE]
+> `eval_download` and `eval_mount` are unique to `mltable`. Whilst `ro_mount` is the default mode for MLTable, there are scenarios where an MLTable can yield files that are not necessarily co-located with the MLTable file in storage. Alternatively, an `mltable` can subset or shuffle the data that resides in the storage. That view is only visible if the MLTable file is actually evaluated by the engine. These modes will provide that view of the files.
-You can use data from your current working directory in a training job with the Input class.
-The Input class allows you to define data inputs from a specific file, `uri_file` or a folder location, `uri_folder`. In the Input object, you specify the `path` of where your data is located; the path can be a local path or a cloud path. Azure Machine Learning supports `https://`, `abfss://`, `wasbs://` and `azureml://` URIs.
-> [!IMPORTANT]
-> If the path is local, but your compute is defined to be in the cloud, Azure Machine Learning will automatically upload the data to cloud storage for you.
+## Read data in a job
+
+# [CLI](#tab/CLI)
+Create a job specification YAML file (`<file-name>.yml`). Specify in the `inputs` section of the job:
+
+1. The `type`; whether the data you are pointing to is a specific file (`uri_file`) or a folder location (`uri_folder`) or an `mltable`.
+1. The `path` of where your data is located; the path can be any of those outlined in the [Supported Paths](#supported-paths) section.
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
+
+# Possible Paths for Data:
+# Blob: https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<folder>/<file>
+# Data Asset: azureml:<my_data>:<version>
+
+command: |
+ ls ${{inputs.my_data}}
+code: <folder where code is located>
+inputs:
+ my_data:
+ type: <type> # uri_file, uri_folder, mltable
+ path: <path>
+environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+compute: azureml:cpu-cluster
+```
+
+Next, run in the CLI
+
+```azurecli
+az ml job create -f <file-name>.yml
+```
# [Python-SDK](#tab/Python-SDK)
-```python
-from azure.ai.ml import Input, command
+The `Input` class allows you to define:
+
+1. The `type`; whether the data you are pointing to is a specific file (`uri_file`) or a folder location (`uri_folder`) or an `mltable`.
+1. The `path` of where your data is located; the path can be any of those outlined in the [Supported Paths](#supported-paths) section.
+
+```python
+from azure.ai.ml import command
from azure.ai.ml.entities import Data
+from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+# Possible Asset Types for Data:
+# AssetTypes.URI_FILE
+# AssetTypes.URI_FOLDER
+# AssetTypes.MLTABLE
+
+# Possible Paths for Data:
+# Blob: https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<folder>/<file>
+# Data Asset: azureml:<my_data>:<version>
my_job_inputs = {
- "input_data": Input(
- path='./sample_data', # change to be your local directory
- type=AssetTypes.URI_FOLDER
- )
+ "input_data": Input(type=AssetTypes.URI_FOLDER, path="<path>")
} job = command(
- code="./src", # local path where the code is stored
- command='python train.py --input_folder ${{inputs.input_data}}',
+ code="./src", # local path where the code is stored
+ command="ls ${{inputs.input_data}}",
inputs=my_job_inputs, environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
+ compute="cpu-cluster",
)
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
+# submit the command
+returned_job = ml_client.jobs.create_or_update(job)
+# get a URL for the status of the job
returned_job.services["Studio"].endpoint ``` ++
+### Read V1 data assets
+This section outlines how you can read V1 `FileDataset` and `TabularDataset` data entities in a V2 job.
+
+#### Read a `FileDataset`
+ # [CLI](#tab/CLI)
-The following code shows how to read in uri_file type data from local.
-```azurecli
-az ml job create -f <file-name>.yml
-```
+Create a job specification YAML file (`<file-name>.yml`), with the type set to `mltable` and the mode set to `eval_mount`:
+ ```yaml $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json+ command: |
- python hello-iris.py --iris-csv ${{inputs.iris_csv}}
-code: src
+ ls ${{inputs.my_data}}
+code: <folder where code is located>
inputs:
- iris_csv:
- type: uri_file
- path: ./example-data/iris.csv
-environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+ my_data:
+ type: mltable
+ mode: eval_mount
+ path: azureml:<filedataset_name>@latest
+environment: azureml:<environment_name>@latest
compute: azureml:cpu-cluster ``` -
+Next, run in the CLI
-## Read data stored in storage service on Azure in a job
-
-You can read your data in from existing storage on Azure.
-You can leverage Azure Machine Learning datastore to register these exiting Azure storage.
-Azure Machine Learning datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
-You can access your data and create datastores with,
-- Credential-based data authentication, like a service principal or shared access signature (SAS) token. These credentials can be accessed by users who have Reader access to the workspace.-- Identity-based data authentication to connect to storage services with your Azure Active Directory ID or other managed identity.
+```azurecli
+az ml job create -f <file-name>.yml
+```
# [Python-SDK](#tab/Python-SDK)
-The following code shows how to read in uri_folder type data from Azure Data Lake Storage Gen 2 or Blob via SDK V2.
+In the `Input` object specify the `type` as `AssetTypes.MLTABLE` and `mode` as `InputOutputModes.EVAL_MOUNT`:
```python-
-from azure.ai.ml import Input, command
+from azure.ai.ml import command
from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes, InputOutputModes
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+filedataset_asset = ml_client.data.get(name="<filedataset_name>", version="<version>")
my_job_inputs = { "input_data": Input(
- path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', # Blob: 'https://<account_name>.blob.core.windows.net/<container_name>/path'
- type=AssetTypes.URI_FOLDER
+ type=AssetTypes.MLTABLE,
+ path=filedataset_asset,
+ mode=InputOutputModes.EVAL_MOUNT
) } job = command(
- code="./src", # local path where the code is stored
- command='python train.py --input_folder ${{inputs.input_data}}',
+ code="./src", # local path where the code is stored
+ command="ls ${{inputs.input_data}}",
inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
+ environment="<environment_name>:<version>",
+ compute="cpu-cluster",
)
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
+# submit the command
+returned_job = ml_client.jobs.create_or_update(job)
+# get a URL for the status of the job
returned_job.services["Studio"].endpoint ``` +++
+#### Read a `TabularDataset`
+ # [CLI](#tab/CLI)
-The following code shows how to read in uri_file type data from Azure ML datastore via CLI V2.
-```azurecli
-az ml job create -f <file-name>.yml
-```
+Create a job specification YAML file (`<file-name>.yml`), with the type set to `mltable` and the mode set to `direct`:
+ ```yaml $schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json+ command: |
- echo "--iris-csv: ${{inputs.iris_csv}}"
- python hello-iris.py --iris-csv ${{inputs.iris_csv}}
-code: src
+ ls ${{inputs.my_data}}
+code: <folder where code is located>
inputs:
- iris_csv:
- type: uri_file
- path: azureml://datastores/workspaceblobstore/paths/example-data/iris.csv
-environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
+ my_data:
+ type: mltable
+ mode: direct
+ path: azureml:<tabulardataset_name>@latest
+environment: azureml:<environment_name>@latest
compute: azureml:cpu-cluster ``` --
-## Read and write data to cloud-based storage
+Next, run in the CLI
-You can read and write data from your job into your cloud-based storage.
-
-The Input defaults the mode - how the input will be exposed during job runtime - to InputOutputModes.RO_MOUNT (read-only mount). Put another way, Azure Machine Learning will mount the file or folder to the compute and set the file/folder to read-only. By design, you can't write to JobInputs only JobOutputs. The data is automatically uploaded to cloud storage.
-
-Matrix of possible types and modes for job inputs and outputs:
-
-Type | Input/Output | `upload` | `download` | `ro_mount` | `rw_mount` | `direct` | `eval_download` | `eval_mount`
- | | | | | | | |
-`uri_folder` | Input | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌
-`uri_file` | Input | ❌ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌
-`mltable` | Input | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅
-`uri_folder` | Output | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌
-`uri_file` | Output | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌
-`mltable` | Output | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌
-
-As you can see from the table, `eval_download` and `eval_mount` are unique to `mltable`. A MLTable-artifact can yield files that are not necessarily located in the `mltable`'s storage. Or it can subset or shuffle the data that resides in the storage. That view is only visible if the MLTable file is actually evaluated by the engine. These modes will provide that view of the files.
+```azurecli
+az ml job create -f <file-name>.yml
+```
+# [Python-SDK](#tab/Python-SDK)
+In the `Input` object specify the `type` as `AssetTypes.MLTABLE` and `mode` as `InputOutputModes.DIRECT`:
+```python
+from azure.ai.ml import command
+from azure.ai.ml.entities import Data
+from azure.ai.ml import Input
+from azure.ai.ml.constants import AssetTypes, InputOutputModes
+from azure.ai.ml import MLClient
-# [Python-SDK](#tab/Python-SDK)
+ml_client = MLClient.from_config()
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, JobOutput
-from azure.ai.ml.constants import AssetTypes
+tabulardataset_asset = ml_client.data.get(name="<tabulardataset_name>", version="<version>")
my_job_inputs = { "input_data": Input(
- path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-my_job_outputs = {
- "output_folder": JobOutput(
- path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
- type=AssetTypes.URI_FOLDER
+ type=AssetTypes.MLTABLE,
+ path=tabulardataset_asset,
+ mode=InputOutputModes.DIRECT
) } job = command(
- code="./src", #local path where the code is stored
- command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}',
+ code="./src", # local path where the code is stored
+ command="python train.py --inputs ${{inputs.input_data}}",
inputs=my_job_inputs,
- outputs=my_job_outputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
+ environment="<environment_name>:<version>",
+ compute="cpu-cluster",
)
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-
-#get a URL for the status of the job
+# submit the command
+returned_job = ml_client.jobs.create_or_update(job)
+# get a URL for the status of the job
returned_job.services["Studio"].endpoint- ``` ++
+## Write data in a job
+
+In your job you can write data to your cloud-based storage using *outputs*. The [Supported modes](#supported-modes) section showed that only job *outputs* can write data because the mode can be either `rw_mount` or `upload`.
+ # [CLI](#tab/CLI)
+Create a job specification YAML file (`<file-name>.yml`), with the `outputs` section populated with the type and path of where you would like to write your data to:
+ ```yaml $schema: https://azuremlschemas.azureedge.net/latest/CommandJob.schema.json
-code: src/prep
+
+# Possible Paths for Data:
+# Blob: https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<folder>/<file>
+# Data Asset: azureml:<my_data>:<version>
+
+code: src
command: >- python prep.py --raw_data ${{inputs.raw_data}} --prep_data ${{outputs.prep_data}} inputs: raw_data:
- type: uri_folder
- path: ./data
+ type: <type> # uri_file, uri_folder, mltable
+ path: <path>
outputs: prep_data:
- mode: upload
-environment: azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest
+ type: <type> # uri_file, uri_folder, mltable
+ path: <path>
+environment: azureml:<environment_name>@latest
compute: azureml:cpu-cluster- ``` --
-## Register data
-
-You can register data as an asset to your workspace. The benefits of registering data are:
-
-* Easy to share with other members of the team (no need to remember file locations)
-* Versioning of the metadata (location, description, etc.)
-* Lineage tracking
-
-The following example demonstrates versioning of sample data, and shows how to register a local file as a data asset. The data is uploaded to cloud storage and registered as an asset.
-
-```python
-
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-my_data = Data(
- path="./sample_data/titanic.csv",
- type=AssetTypes.URI_FILE,
- description="Titanic Data",
- name="titanic",
- version='1'
-)
-
-ml_client.data.create_or_update(my_data)
-```
-
-To register data that is in a cloud location, you can specify the path with any of the supported protocols for the storage type. The following example shows what the path looks like for data from Azure Data Lake Storage Gen 2.
-
-```python
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
-
-my_data = Data(
- path=my_path,
- type=AssetTypes.URI_FOLDER,
- description="description here",
- name="a_name",
- version='1'
-)
-
-ml_client.data.create_or_update(my_data)
+Next create a job using the CLI:
+```azurecli
+az ml job create --file <file-name>.yml
```
-## Consume registered data assets in jobs
-
-Once your data is registered as an asset to the workspace, you can consume that data asset in jobs.
-The following example demonstrates how to consume `version` 1 of the registered data asset `titanic`.
+# [Python-SDK](#tab/Python-SDK)
```python-
-from azure.ai.ml import Input, command
+from azure.ai.ml import command
from azure.ai.ml.entities import Data
+from azure.ai.ml import Input, Output
from azure.ai.ml.constants import AssetTypes
-registered_data_asset = ml_client.data.get(name='titanic', version='1')
+# Possible Asset Types for Data:
+# AssetTypes.URI_FILE
+# AssetTypes.URI_FOLDER
+# AssetTypes.MLTABLE
+
+# Possible Paths for Data:
+# Blob: https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>
+# Datastore: azureml://datastores/<data_store_name>/paths/<folder>/<file>
+# Data Asset: azureml:<my_data>:<version>
my_job_inputs = {
- "input_data": Input(
- type=AssetTypes.URI_FOLDER,
- path=registered_data_asset.id
- )
+ "raw_data": Input(type=AssetTypes.URI_FOLDER, path="<path>")
+}
+
+my_job_outputs = {
+ "prep_data": Output(type=AssetTypes.URI_FOLDER, path="<path>")
} job = command(
- code="./src",
- command='python read_data_asset.py --input_folder ${{inputs.input_data}}',
+ code="./src", # local path where the code is stored
+ command="python process_data.py --raw_data ${{inputs.raw_data}} --prep_data ${{outputs.prep_data}}",
inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
+ outputs=my_job_outputs,
+ environment="<environment_name>:<version>",
+ compute="cpu-cluster",
)
-#submit the command job
+# submit the command
returned_job = ml_client.create_or_update(job)-
-#get a URL for the status of the job
+# get a URL for the status of the job
returned_job.services["Studio"].endpoint+ ```
-## Use data in pipelines
++
+## Data in pipelines
If you're working with Azure Machine Learning pipelines, you can read data into and move data between pipeline components with the Azure Machine Learning CLI v2 extension or the Python SDK v2 (preview). ### Azure Machine Learning CLI v2 The following YAML file demonstrates how to use the output data from one component as the input for another component of the pipeline using the Azure Machine Learning CLI v2 extension:
-## Python SDK v2 (preview)
+### Python SDK v2 (preview)
The following example defines a pipeline containing three nodes and moves data between each node.
The following example defines a pipeline containing three nodes and moves data b
[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)] ## Next steps
-* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
-* [Install and use the CLI (v2)](how-to-configure-cli.md)
+ * [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md) * [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md) * Learn more about [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
Title: Hyperparameter tuning a model (v2)
description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning. -+
Azure Machine Learning supports the following early termination policies:
[Bandit policy](/python/api/azure-ai-ml/azure.ai.ml.sweep.banditpolicy) is based on slack factor/slack amount and evaluation interval. Bandit policy ends a job when the primary metric isn't within the specified slack factor/slack amount of the most successful job.
-> [!NOTE]
-> Bayesian sampling does not support early termination. When using Bayesian sampling, set `early_termination_policy = None`.
- Specify the following configuration parameters: * `slack_factor` or `slack_amount`: the slack allowed with respect to the best performing training job. `slack_factor` specifies the allowable slack as a ratio. `slack_amount` specifies the allowable slack as an absolute amount, instead of a ratio.
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
The [workspace](concept-workspace.md) is the top-level resource for Azure Machin
# import required libraries from azure.ai.ml import MLClient, Input from azure.ai.ml.entities import (
+ AmlCompute,
BatchEndpoint, BatchDeployment, Model,
To create an online endpoint, we'll use `BatchEndpoint`. This class allows user
ml_client.begin_create_or_update(endpoint) ```
+## Create batch compute
+
+A batch endpoint runs only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual compute cluster. Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here, named `cpu-cluster`.
+
+```python
+compute_name = "cpu-cluster"
+compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_instances=0, max_instances=5)
+ml_client.begin_create_or_update(compute_cluster)
+```
+ ## Create a deployment A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `BatchDeployment` class. This class allows user to configure the following key aspects.
A deployment is a set of resources required for hosting the model that does the
code_path="./mnist/code/", scoring_script="digit_identification.py", environment=env,
- compute="cpu-cluster",
+ compute=compute_name,
instance_count=2, max_concurrency_per_instance=2, mini_batch_size=10,
Using the `MLClient` created earlier, we'll get a handle to the endpoint. The en
# invoke the endpoint for batch scoring job job = ml_client.batch_endpoints.invoke( endpoint_name=batch_endpoint_name,
- input_data=input,
+ input=input,
deployment_name="non-mlflow-deployment", # name is required as default deployment is not set params_override=[{"mini_batch_size": "20"}, {"compute.instance_count": "4"}], )
ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name)
## Next steps
-If you encounter problems using batch endpoints, see [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
+If you encounter problems using batch endpoints, see [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
See [How to tune hyperparameters](how-to-tune-hyperparameters.md).
Efficiency of training for deep learning and sometimes classical machine learning training jobs can be drastically improved via multinode distributed training. Azure Machine Learning compute clusters offer the latest GPU options.
-Supported via Azure Arc-attached Kubernetes (preview) and Azure ML compute clusters:
+Supported via Azure ML Kubernetes and Azure ML compute clusters:
- PyTorch - TensorFlow
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
+
+ Title: Reference for configuring Kubernetes cluster for Azure Machine Learning (Preview)
+
+description: Reference for configuring Kubernetes cluster for Azure Machine Learning.
+++++++ Last updated : 06/06/2022++
+# Reference for configuring Kubernetes cluster for Azure Machine Learning (Preview)
+
+This article contains reference information that may be useful when [configuring Kubernetes with Azure Machine Learning](./how-to-attach-kubernetes-anywhere.md).
+
+## Supported Kubernetes version and region
++
+- Kubernetes clusters installing the AzureML extension have a version support window of "N-2", aligned with the [Azure Kubernetes Service (AKS) version support policy](../aks/supported-kubernetes-versions.md#kubernetes-version-support-policy), where 'N' is the latest GA minor version of Azure Kubernetes Service.
+
+ - For example, if AKS introduces 1.20.a today, versions 1.20.a, 1.20.b, 1.19.c, 1.19.d, 1.18.e, and 1.18.f are supported.
+
+ - If customers are running an unsupported Kubernetes version, they'll be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases aren't covered by the AzureML extension support policies.
+- AzureML extension region availability:
+ - AzureML extension can be deployed to AKS or Azure Arc-enabled Kubernetes in supported regions listed in [Azure Arc enabled Kubernetes region support](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc&regions=all).
+
+## Prerequisites for ARO or OCP clusters
+### Disable Security Enhanced Linux (SELinux)
+
+[AzureML datasets](./how-to-train-with-datasets.md) (used in AzureML training jobs) aren't supported on machines with SELinux enabled. Therefore, you need to disable `selinux` on all workers in order to use AzureML datasets.
+
+### Privileged setup for ARO and OCP
+
+For AzureML extension deployment on an ARO or OCP cluster, grant privileged access to the AzureML service accounts: run the ```oc edit scc privileged``` command and add the following service accounts under "users:":
+
+* ```system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa```
+* ```system:serviceaccount:azureml:{EXTENSION-NAME}-kube-state-metrics```
+* ```system:serviceaccount:azureml:prom-admission```
+* ```system:serviceaccount:azureml:default```
+* ```system:serviceaccount:azureml:prom-operator```
+* ```system:serviceaccount:azureml:load-amlarc-selinux-policy-sa```
+* ```system:serviceaccount:azureml:azureml-fe-v2```
+* ```system:serviceaccount:azureml:prom-prometheus```
+* ```system:serviceaccount:{KUBERNETES-COMPUTE-NAMESPACE}:default```
+* ```system:serviceaccount:azureml:azureml-ingress-nginx```
+* ```system:serviceaccount:azureml:azureml-ingress-nginx-admission```
+
+> [!NOTE]
+> * `{EXTENSION-NAME}`: is the extension name specified with the `az k8s-extension create --name` CLI command.
+>* `{KUBERNETES-COMPUTE-NAMESPACE}`: is the namespace of the Kubernetes compute specified when attaching the compute to the Azure Machine Learning workspace. Skip configuring `system:serviceaccount:{KUBERNETES-COMPUTE-NAMESPACE}:default` if `KUBERNETES-COMPUTE-NAMESPACE` is `default`.
+
+## AzureML extension components
+
+For an Arc-connected cluster, AzureML extension deployment creates an [Azure Relay](../azure-relay/relay-what-is-it.md) resource in the Azure cloud, which is used to route traffic between Azure services and the Kubernetes cluster. For an AKS cluster that isn't Arc-connected, the Azure Relay resource won't be created.
+
+When the AzureML extension deployment completes, it creates the following resources in the Kubernetes cluster, depending on the AzureML extension deployment scenario:
+
+ |Resource name |Resource type |Training |Inference |Training and Inference| Description | Communication with cloud|
+ |--|--|--|--|--|--|--|
+ |relayserver|Kubernetes deployment|**&check;**|**&check;**|**&check;**|relayserver is only needed in arc-connected cluster, and won't be installed in AKS cluster. Relayserver works with Azure Relay to communicate with the cloud services.|Receive the request of job creation, model deployment from cloud service; sync the job status with cloud service.|
+ |gateway|Kubernetes deployment|**&check;**|**&check;**|**&check;**|The gateway is used to communicate and send data back and forth.|Send nodes and cluster resource information to cloud services.|
+ |aml-operator|Kubernetes deployment|**&check;**|N/A|**&check;**|Manage the lifecycle of training jobs.| Token exchange with the cloud token service for authentication and authorization of Azure Container Registry.|
+ |metrics-controller-manager|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Manage the configuration for Prometheus|N/A|
+ |{EXTENSION-NAME}-kube-state-metrics|Kubernetes deployment|**&check;**|**&check;**|**&check;**|Export the cluster-related metrics to Prometheus.|N/A|
+ |{EXTENSION-NAME}-prometheus-operator|Kubernetes deployment|Optional|Optional|Optional| Provide Kubernetes native deployment and management of Prometheus and related monitoring components.|N/A|
+ |amlarc-identity-controller|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with the cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
+ |amlarc-identity-proxy|Kubernetes deployment|N/A|**&check;**|**&check;**|Request and renew Azure Blob/Azure Container Registry token through managed identity.|Token exchange with the cloud token service for authentication and authorization of Azure Container Registry and Azure Blob used by inference/model deployment.|
+ |azureml-fe-v2|Kubernetes deployment|N/A|**&check;**|**&check;**|The front-end component that routes incoming inference requests to deployed services.|Send service logs to Azure Blob.|
+ |inference-operator-controller-manager|Kubernetes deployment|N/A|**&check;**|**&check;**|Manage the lifecycle of inference endpoints. |N/A|
+ |volcano-admission|Kubernetes deployment|Optional|N/A|Optional|Volcano admission webhook.|N/A|
+ |volcano-controllers|Kubernetes deployment|Optional|N/A|Optional|Manage the lifecycle of Azure Machine Learning training job pods.|N/A|
+ |volcano-scheduler |Kubernetes deployment|Optional|N/A|Optional|Used to perform in-cluster job scheduling.|N/A|
+ |fluent-bit|Kubernetes daemonset|**&check;**|**&check;**|**&check;**|Gather the components' system log.| Upload the components' system log to cloud.|
+ |{EXTENSION-NAME}-dcgm-exporter|Kubernetes daemonset|Optional|Optional|Optional|dcgm-exporter exposes GPU metrics for Prometheus.|N/A|
+ |nvidia-device-plugin-daemonset|Kubernetes daemonset|Optional|Optional|Optional|nvidia-device-plugin-daemonset exposes GPUs on each node of your cluster| N/A|
+ |prometheus-prom-prometheus|Kubernetes statefulset|**&check;**|**&check;**|**&check;**|Gather and send job metrics to cloud.|Send job metrics like cpu/gpu/memory utilization to cloud.|
+
+> [!IMPORTANT]
 > * The Azure Relay resource is under the same resource group as the Arc cluster resource. It is used to communicate with the Kubernetes cluster, and modifying it will break attached compute targets.
 > * By default, the Kubernetes deployment resources are randomly deployed to one or more nodes of the cluster, and daemonset resources are deployed to all nodes. If you want to restrict the extension deployment to specific nodes, use the `nodeSelector` configuration setting described below.
+
+> [!NOTE]
+ > * **{EXTENSION-NAME}:** is the extension name specified with ```az k8s-extension create --name``` CLI command.
++
+## Create and use instance types for efficient compute resource usage
+
+### What are instance types?
+
+Instance types are an Azure Machine Learning concept that allows targeting certain types of
+compute nodes for training and inference workloads. For an Azure VM, an example of an
+instance type is `STANDARD_D2_V3`.
+
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the AzureML extension. Instance types are represented by two elements in AzureML extension:
+[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
+and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+In short, a `nodeSelector` lets us specify which node a pod should run on. The node must have a
+corresponding label. In the `resources` section, we can set the compute resources (CPU, memory and
+NVIDIA GPU) for the pod.
+
+### Default instance type
+
+By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an AzureML workspace:
+- No `nodeSelector` is applied, meaning the pod can get scheduled on any node.
+- The workload's pods are assigned default resources with 0.6 cpu cores, 1536Mi memory and 0 GPU:
+```yaml
+resources:
+ requests:
+ cpu: "0.6"
+ memory: "1536Mi"
+ limits:
+ cpu: "0.6"
+ memory: "1536Mi"
+ nvidia.com/gpu: null
+```
+
+> [!NOTE]
+> - The default instance type purposefully uses little resources. To ensure all ML workloads
+run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
+> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
+> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
+
+### Create custom instance types
+
+To create a new instance type, create a new custom resource for the instance type CRD. For example:
+
+```bash
+kubectl apply -f my_instance_type.yaml
+```
+
+With `my_instance_type.yaml`:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceType
+metadata:
+ name: myinstancetypename
+spec:
+ nodeSelector:
+ mylabel: mylabelvalue
+ resources:
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 1
+ memory: "2Gi"
+ requests:
+ cpu: "700m"
+ memory: "1500Mi"
+```
+
+Applying this definition creates an instance type with the following behavior:
+- Pods will be scheduled only on nodes with label `mylabel: mylabelvalue`.
+- Pods will be assigned resource requests of `700m` CPU and `1500Mi` memory.
+- Pods will be assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.
+
+> [!NOTE]
+> - NVIDIA GPU resources are only specified in the `limits` section as integer values. For more information,
+ see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins).
+> - CPU and memory resources are string values.
+> - CPU can be specified in millicores, for example `100m`, or in full numbers, for example `"1"`
+ is equivalent to `1000m`.
+> - Memory can be specified as a full number + suffix, for example `1024Mi` for 1024 MiB.
+
+It's also possible to create multiple instance types at once:
+
+```bash
+kubectl apply -f my_instance_type_list.yaml
+```
+
+With `my_instance_type_list.yaml`:
+```yaml
+apiVersion: amlarc.azureml.com/v1alpha1
+kind: InstanceTypeList
+items:
+ - metadata:
+ name: cpusmall
+ spec:
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "100Mi"
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 0
+ memory: "1Gi"
+
+ - metadata:
+ name: defaultinstancetype
+ spec:
+ resources:
+ requests:
+ cpu: "1"
+ memory: "1Gi"
+ limits:
+ cpu: "1"
+ nvidia.com/gpu: 0
+ memory: "1Gi"
+```
+
+The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition will override the `defaultinstancetype` definition created when the Kubernetes cluster was attached to the AzureML workspace.
+
+If a training or inference workload is submitted without an instance type, it uses the default
+instance type. To specify a default instance type for a Kubernetes cluster, create an instance
+type with name `defaultinstancetype`. It will automatically be recognized as the default.
+
+### Select instance type to submit training job
+
+To select an instance type for a training job using CLI (V2), specify its name as part of the
+`resources` properties section in job YAML. For example:
+```yaml
+command: python -c "print('Hello world!')"
+environment:
+ image: library/python:latest
+compute: azureml:<compute_target_name>
+resources:
+ instance_type: <instance_type_name>
+```
+
+In the above example, replace `<compute_target_name>` with the name of your Kubernetes compute
+target and `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system will use `defaultinstancetype` to submit the job.
+
+### Select instance type to deploy model
+
+To select an instance type for a model deployment using the CLI (v2), specify its name for the `instance_type` property in the deployment YAML. For example:
+
+```yaml
+name: blue
+app_insights_enabled: true
+endpoint_name: <endpoint name>
+model:
+ path: ./model/sklearn_mnist_model.pkl
+code_configuration:
+ code: ./script/
+ scoring_script: score.py
+instance_type: <instance_type_name>
+environment:
+ conda_file: file:./model/conda.yml
+ image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1
+```
+
+In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If no `instance_type` property is specified, the system uses `defaultinstancetype` to deploy the model.
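+
+As a hedged sketch, the deployment YAML above could then be created with the CLI (v2), assuming it's saved as `deployment.yaml` (the endpoint name is taken from the YAML itself):
+
+```bash
+# Create the online deployment defined above; the resource group and workspace are placeholders.
+az ml online-deployment create --file deployment.yaml \
+  --resource-group <resource_group_name> \
+  --workspace-name <workspace_name>
+```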
+
+## AzureML jobs connect with on-premises data storage
+
+[Persistent Volume (PV) and Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) are Kubernetes concepts that allow users to provision and consume various storage resources.
+
+1. Create a PV. The following example uses NFS:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: nfs-pv
+spec:
+ capacity:
+ storage: 1Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: ""
+ nfs:
+ path: /share/nfs
+ server: 20.98.110.84
+ readOnly: false
+```
+2. Create a PVC in the same Kubernetes namespace as your ML workloads. In `metadata`, you **must** add the label `ml.azure.com/pvc: "true"` so the PVC is recognized by AzureML, and add the annotation `ml.azure.com/mountpath: <mount path>` to set the mount path.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: nfs-pvc
+ namespace: default
+ labels:
+ ml.azure.com/pvc: "true"
+ annotations:
+ ml.azure.com/mountpath: "/mnt/nfs"
+spec:
+ storageClassName: ""
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 1Gi
+```
+> [!IMPORTANT]
+> Only job pods in the same Kubernetes namespace as the PVC(s) will have the volume mounted. Data scientists can access the mount path specified in the PVC annotation from within the job.
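+
+As a minimal sketch, assuming the two manifests above are saved locally as `pv.yaml` and `pvc.yaml` (hypothetical file names):
+
+```bash
+# Create the NFS-backed PV and the PVC that AzureML recognizes by its label.
+kubectl apply -f pv.yaml
+kubectl apply -f pvc.yaml
+
+# Verify the claim is bound and carries the AzureML label.
+kubectl get pvc nfs-pvc -n default --show-labels
+```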
++
+## Sample YAML definition of Kubernetes secret for TLS/SSL
+
+To enable an HTTPS endpoint for real-time inference, you need to provide both a PEM-encoded TLS/SSL certificate and a key. The best practice is to save the certificate and key in a Kubernetes secret in the `azureml` namespace.
+
+A sample YAML definition of the TLS/SSL secret follows:
+
+```yaml
+apiVersion: v1
+data:
+ cert.pem: <PEM-encoded SSL certificate>
+ key.pem: <PEM-encoded SSL key>
+kind: Secret
+metadata:
+ name: <secret name>
+ namespace: azureml
+type: Opaque
+```
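+
+Because the values under `data` in a Kubernetes secret must be base64-encoded, you could alternatively let `kubectl` build an equivalent secret directly from the PEM files. This is only a sketch with hypothetical file names; replace `<secret-name>` with the secret name used above:
+
+```bash
+# Create an equivalent Opaque secret in the azureml namespace from local PEM files;
+# kubectl handles the base64 encoding of the cert.pem and key.pem entries.
+kubectl create secret generic <secret-name> \
+  --namespace azureml \
+  --from-file=cert.pem=./cert.pem \
+  --from-file=key.pem=./key.pem
+```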
+
machine-learning Reference Yaml Compute Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-kubernetes.md
Title: 'CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema'
+ Title: 'CLI (v2) Attached Kubernetes cluster (KubernetesCompute) YAML schema'
description: Reference documentation for the CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema.
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
Learn about the architecture and concepts for [Azure Machine Learning](../overvi
A [machine learning workspace](../concept-workspace.md) is the top-level resource for Azure Machine Learning. The workspace is the centralized place to:
For more information about training compute targets, see [Training compute targe
For more information, see [Create and register Azure Machine Learning Datasets](how-to-create-register-datasets.md). For more examples using Datasets, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/datasets-tutorial).
-Datasets use [datastores](../concept-data.md#datastores) to securely connect to your Azure storage services. Datastores store connection information without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, like your subscription ID and token authorization in your Key Vault associated with the workspace, so you can securely access your storage without having to hard code them in your script.
+Datasets use [datastores](../concept-data.md#datastore) to securely connect to your Azure storage services. Datastores store connection information without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, like your subscription ID and token authorization, in the Key Vault associated with the workspace, so you can securely access your storage without having to hard-code them in your script.
## Environments
Because Machine Learning Compute is a managed compute target (that is, it's mana
* After the run completes, you can query runs and metrics. In the flow diagram below, this step occurs when the training compute target writes the run metrics back to Azure Machine Learning from storage in the Cosmos DB database. Clients can call Azure Machine Learning. Machine Learning will in turn pull metrics from the Cosmos DB database and return them back to the client.
-[![Training workflow](media/concept-azure-machine-learning-architecture/training-and-metrics.png)](media/concept-azure-machine-learning-architecture/training-and-metrics.png#lightbox)
+[![Training workflow](media/concept-azure-machine-learning-architecture/training-and-metrics.png)](media/concept-azure-machine-learning-architecture/training-and-metrics.png#lightbox)
## Models
Here are the details:
* Scoring request details are stored in Application Insights, which is in the user's subscription. * Telemetry is also pushed to the Microsoft Azure subscription.
-[![Inference workflow](media/concept-azure-machine-learning-architecture/inferencing.png)](media/concept-azure-machine-learning-architecture/inferencing.png#lightbox)
+[![Inference workflow](media/concept-azure-machine-learning-architecture/inferencing.png)](media/concept-azure-machine-learning-architecture/inferencing.png#lightbox)
For an example of deploying a model as a web service, see [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md).
Pipeline steps are reusable, and can be run without rerunning the previous steps
Azure Machine Learning provides the following monitoring and logging capabilities:
-* For __Data Scientists__, you can monitor your experiments and log information from your training runs. For more information, see the following articles:
+* For **Data Scientists**, you can monitor your experiments and log information from your training runs. For more information, see the following articles:
* [Start, monitor, and cancel training runs](../how-to-track-monitor-analyze-runs.md) * [Log metrics for training runs](../how-to-log-view-metrics.md) * [Track experiments with MLflow](../how-to-use-mlflow.md) * [Visualize runs with TensorBoard](../how-to-monitor-tensorboard.md)
-* For __Administrators__, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).
-* For __DevOps__ or __MLOps__, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](../how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
+* For **Administrators**, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).
+* For **DevOps** or **MLOps**, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](../how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
## Interacting with your workspace
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
In this article, learn how to set up your workspace to use these compute resourc
* Azure Databricks - used as a training compute target only in [machine learning pipelines](../how-to-create-machine-learning-pipelines.md) * Azure Data Lake Analytics * Azure Container Instance
-* Azure Kubernetes Service & Azure Arc-enabled Kubernetes (preview)
+* Azure Machine Learning Kubernetes
To use compute targets managed by Azure Machine Learning, see:
For a more detailed example, see an [example notebook](https://aka.ms/pl-adla) o
Azure Container Instances (ACI) are created dynamically when you deploy a model. You cannot create or attach ACI to your workspace in any other way. For more information, see [Deploy a model to Azure Container Instances](how-to-deploy-azure-container-instance.md).
-## <a id="kubernetes"></a>Kubernetes (preview)
+## <a id="kubernetes"></a>Kubernetes
-Azure Machine Learning provides you with the following options to attach your own Kubernetes clusters for training and inferencing:
-
-* [Azure Kubernetes Service](../../aks/intro-kubernetes.md). Azure Kubernetes Service provides a managed cluster in Azure.
-* [Azure Arc Kubernetes](../../azure-arc/kubernetes/overview.md). Use Azure Arc-enabled Kubernetes clusters if your cluster is hosted outside of Azure.
-
+Azure Machine Learning provides you with the option to attach your own Kubernetes clusters for training and inferencing. See [Configure Kubernetes cluster for Azure Machine Learning](../how-to-attach-kubernetes-anywhere.md).
To detach a Kubernetes cluster from your workspace, use the following method:
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-scans-and-ingestion.md
Title: Scans and ingestion description: This article explains scans and ingestion in Microsoft Purview.--++
remote-rendering Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/materials.md
# Materials
-Materials are [shared resources](../concepts/lifetime.md) that define how [meshes](meshes.md) are rendered. Materials are used to specify which [textures](textures.md) to apply, whether to make objects transparent and how lighting will be calculated.
+Materials are [shared resources](../concepts/lifetime.md) that define how **triangular [meshes](meshes.md)** are rendered. **Point clouds**, on the other hand, don't expose materials at all.
+
+Materials are used to specify
+* which [textures](textures.md) to apply,
+* whether to make objects transparent,
+* how lighting interacts with the surface.
Materials are automatically created during [model conversion](../how-tos/conversion/model-conversion.md) and are accessible at runtime. You can also create custom materials from code and replace existing ones. This scenario makes especially sense if you want to share the same material across many meshes. Since modifications of a material are visible on every mesh that references it, this method can be used to easily apply changes.
Materials are automatically created during [model conversion](../how-tos/convers
Azure Remote Rendering has two distinct material types:
-* [PBR materials](../overview/features/pbr-materials.md) are used for surfaces that should be rendered as physically correct, as possible. Realistic lighting is computed for these materials using *physically based rendering* (PBR). To get the most out of this material type, it is important to provide high-quality input data, such as roughness and normal maps.
+* [PBR materials](../overview/features/pbr-materials.md) are used for surfaces that should be rendered as physically correctly as possible. Realistic lighting is computed for these materials using *physically based rendering* (PBR). To get the most out of this material type, it's important to provide high-quality input data, such as roughness and normal maps.
-* [Color materials](../overview/features/color-materials.md) are used for cases where no additional lighting is desired. These materials are always full bright and are easier to set up. Color materials are used for data that should either have no lighting at all, or already incorporates static lighting, such as models obtained through [photogrammetry](https://en.wikipedia.org/wiki/Photogrammetry).
+* [Color materials](../overview/features/color-materials.md) are used for cases where no extra lighting is desired. These materials are always full bright and are easier to set up. Color materials are used for data that should either have no lighting at all, or already incorporates static lighting, such as models obtained through [photogrammetry](https://en.wikipedia.org/wiki/Photogrammetry).
## Mesh vs. MeshComponent material assignment
-[Meshes](meshes.md) have one or more submeshes. Each submesh references one material. You can change the material to use either directly on the mesh, or you can override which material to use for a submesh on a [MeshComponent](meshes.md#meshcomponent).
+Triangular [Meshes](meshes.md) have one or more submeshes. Each submesh references one material. You can change the material to use either directly on the mesh, or you can override which material to use for a submesh on a [MeshComponent](meshes.md#meshcomponent).
When you modify a material directly on the mesh resource, this change affects all instances of that mesh. Changing it on the MeshComponent, however, only affects that one mesh instance. Which method is more appropriate depends on the desired behavior, but modifying a MeshComponent is the more common approach.
remote-rendering Meshes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/meshes.md
# Meshes
-## Mesh resource
+Meshes are immutable [shared resources](../concepts/lifetime.md) that can only be created through [model conversion](../how-tos/conversion/model-conversion.md). Meshes are used for rendering but also to provide a physics representation for [ray cast queries](../overview/features/spatial-queries.md). To place a mesh in 3D space, add a [MeshComponent](#meshcomponent) to an [Entity](entities.md).
-Meshes are an immutable [shared resource](../concepts/lifetime.md), that can only be created through [model conversion](../how-tos/conversion/model-conversion.md). Meshes contain one or multiple *submeshes* along with a physics representation for [raycast queries](../overview/features/spatial-queries.md). Each submesh references a [material](materials.md) with which it should be rendered by default. To place a mesh in 3D space, add a [MeshComponent](#meshcomponent) to an [Entity](entities.md).
+## Mesh types
+
+There are two distinct types of mesh resources in ARR: **Triangular meshes** and **point clouds**. Both types are represented by the same API class `Mesh`. Except for minor differences in behavior for the distinct mesh types, the exposed API functionality is identical.
+
+The conversion service automatically determines the appropriate mesh type by source file extension. For example, an FBX file is always converted as a triangular mesh, whereas PLY is treated as a point cloud. For the complete list of supported file formats, refer to the list of [source file formats](../how-tos/conversion/model-conversion.md#supported-source-formats).
+
+There are two significant user-facing differences between point cloud and triangular mesh conversions:
+* Point cloud meshes don't expose any materials. The visual appearance of points is solely defined by their per-point color.
+* Point clouds don't expose a scene graph. Instead, all points are attached to the root node entity.
### Mesh resource properties The `Mesh` class properties are:
-* **Materials:** An array of materials. Each material is used by a different submesh. Multiple entries in the array may reference the same [material](materials.md). This data cannot be modified at runtime.
+* **Materials:** An array of materials. Each material is used by a different submesh. Multiple entries in the array may reference the same [material](materials.md). Entries in this array can't be changed at runtime; however, the material properties can.
+For point clouds, this array is empty.
* **Bounds:** A local-space axis-aligned bounding box (AABB) of the mesh vertices.
The `MeshComponent` class is used to place an instance of a mesh resource. Each
* **Materials:** The array of materials specified on the mesh component itself. The array will always have the same length as the *Materials* array on the mesh resource. Materials that shall not be overridden from the mesh default, are set to *null* in this array.
-* **UsedMaterials:** The array of actually used materials for each submesh. Will be identical to the data in the *Materials* array, for non-null values. Otherwise it contains the value from the *Materials* array in the mesh instance.
+* **UsedMaterials:** The array of actually used materials for each submesh. Will be identical to the data in the *Materials* array, for non-null values. Otherwise it contains the value from the *Materials* array in the mesh instance. This array is read-only.
### Sharing of meshes
remote-rendering Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/models.md
Each entity may have [components](components.md) attached. In the most common ca
## Creating models
-Creating models for runtime is achieved by [converting input models](../how-tos/conversion/model-conversion.md) from file formats such as FBX and GLTF. The conversion process extracts all the resources, such as textures, materials and meshes, and converts them to optimized runtime formats. It will also extract the structural information and convert that into ARR's entity/component graph structure.
+Creating models for runtime is achieved by [converting input models](../how-tos/conversion/model-conversion.md) from file formats such as FBX, GLTF or E57. The conversion process extracts all the resources, such as textures, materials and meshes, and converts them to optimized runtime formats. It will also extract the structural information and convert that into ARR's entity/component graph structure.
> [!IMPORTANT] > [Model conversion](../how-tos/conversion/model-conversion.md) is the only way to create [meshes](meshes.md). Although meshes can be shared between entities at runtime, there is no other way to get a mesh into the runtime, other than loading a model.
void LoadModel(ApiHandle<RenderingSession> session, ApiHandle<Entity> modelParen
} ```
-Afterwards you can traverse the entity hierarchy and modify the entities and components. Loading the same model multiple times creates multiple instances, each with their own copy of the entity/component structure. Since meshes, materials, and textures are [shared resources](../concepts/lifetime.md), their data will not be loaded again, though. Therefore instantiating a model more than once incurs relatively little memory overhead.
+Afterwards you can traverse the entity hierarchy and modify the entities and components. Loading the same model multiple times creates multiple instances, each with their own copy of the entity/component structure. Since meshes, materials, and textures are [shared resources](../concepts/lifetime.md), their data won't be loaded again, though. Therefore instantiating a model more than once incurs relatively little memory overhead.
## API documentation
remote-rendering Configure Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/configure-model-conversion.md
An example file `box.ConversionSettings.json` might be:
} ```
+The schema is identical for converting triangular meshes and point clouds. However, a point cloud conversion uses only a strict subset of the features, as discussed below.
+
+## Settings for triangular meshes
+
+When converting a triangular mesh, for instance from an .fbx file, all parameters from the schema above affect the outcome. The following sections explain the parameters in detail:
+ ### Geometry parameters * `scaling` - This parameter scales a model uniformly. Scaling can be used to grow or shrink a model, for example to display a building model on a table top.
If a model is defined using gamma space, then these options should be set to tru
### Scene parameters * `sceneGraphMode` - Defines how the scene graph in the source file is converted:
- * `dynamic` (default): All objects in the file are exposed as [entities](../../concepts/entities.md) in the API and can be transformed and re-parented arbitrarily. The node hierarchy at runtime is identical to the structure in the source file.
- * `static`: Similar to `dynamic`, but objects in the scene graph cannot be re-parented to other objects dynamically at runtime. For dynamic models with many moving parts (e.g. 'explosion view'), the `dynamic` option generates a model that is more efficient to render, but `static` mode still allows for individual part transforms. In case dynamic re-parenting is not required, the `static` option is the most suitable for models with many individual parts.
+ * `dynamic` (default): All objects in the file are exposed as [entities](../../concepts/entities.md) in the API and can be transformed and reparented arbitrarily. The node hierarchy at runtime is identical to the structure in the source file.
+ * `static`: Similar to `dynamic`, but objects in the scene graph can't be reparented to other objects dynamically at runtime. For dynamic models with many moving parts (for example, 'explosion view'), the `dynamic` option generates a model that is more efficient to render, but `static` mode still allows for individual part transforms. In case dynamic reparenting isn't required, the `static` option is the most suitable for models with many individual parts.
* `none`: The scene graph is collapsed into one object.
-Each mode has different runtime performance. In `dynamic` mode, the performance cost scales linearly with the number of [entities](../../concepts/entities.md) in the graph, even when no part is moved. Use `dynamic` mode only when it is necessary to move many parts or large sub-graphs simultaneously, for example for an 'explosion view' animation.
+Each mode has different runtime performance. In `dynamic` mode, the performance cost scales linearly with the number of [entities](../../concepts/entities.md) in the graph, even when no part is moved. Use `dynamic` mode only when it's necessary to move many parts or large subgraphs simultaneously, for example for an 'explosion view' animation.
-The `static` mode also exports the full scene graph. [Spatial queries](../../overview/features/spatial-queries.md) will return individual parts and each part can be modified through [state overrides](../../overview/features/override-hierarchical-state.md). With this mode, the runtime overhead per object is negligible. It is ideal for large scenes where you need per-object inspection, occasional transform changes on individual parts, but no object re-parenting.
+The `static` mode also exports the full scene graph. [Spatial queries](../../overview/features/spatial-queries.md) will return individual parts and each part can be modified through [state overrides](../../overview/features/override-hierarchical-state.md). With this mode, the runtime overhead per object is negligible. It's ideal for large scenes where you need per-object inspection, occasional transform changes on individual parts, but no object reparenting.
-The `none` mode has the least runtime overhead and also slightly better loading times. Inspection or transform of single objects is not possible in this mode. Use cases are, for example, photogrammetry models that do not have a meaningful scene graph in the first place.
+The `none` mode has the least runtime overhead and also slightly better loading times. Inspection or transform of single objects isn't possible in this mode. Use cases are, for example, photogrammetry models that don't have a meaningful scene graph in the first place.
> [!TIP] > Many applications will load multiple models. You should optimize the conversion parameters for each model depending on how it will be used. For example, if you want to display the model of a car for the user to take apart and inspect in detail, you need to convert it with `dynamic` mode. However, if you additionally want to place the car in a show room environment, that model can be converted with `sceneGraphMode` set to `static` or even `none`.
The `none` mode has the least runtime overhead and also slightly better loading
### Converting from older FBX formats, with a Phong material model
-* `fbxAssumeMetallic` - Older versions of the FBX format define their materials using a Phong material model. The conversion process has to infer how these materials map to the renderer's [PBR model](../../overview/features/pbr-materials.md). Usually this works well, but an ambiguity can arise when a material has no textures, high specular values, and a non-grey albedo color. In this circumstance, the conversion has to choose between prioritizing the high specular values, defining a highly reflective, metallic material where the albedo color dissolves away, or prioritizing the albedo color, defining something like a shiny colorful plastic. By default, the conversion process assumes that highly specular values imply a metallic material in cases where ambiguity applies. This parameter can be set to `false` to switch to the opposite.
+* `fbxAssumeMetallic` - Older versions of the FBX format define their materials using a Phong material model. The conversion process has to infer how these materials map to the renderer's [PBR model](../../overview/features/pbr-materials.md). Usually this mapping works well, but an ambiguity can arise when a material has no textures, high specular values, and a non-grey albedo color. In this circumstance, the conversion has to choose between prioritizing the high specular values, defining a highly reflective, metallic material where the albedo color dissolves away, or prioritizing the albedo color, defining something like a shiny colorful plastic. By default, the conversion process assumes that highly specular values imply a metallic material in cases where ambiguity applies. This parameter can be set to `false` to switch to the opposite.
### Coordinate system overriding
The `none` mode has the least runtime overhead and also slightly better loading
* `metadataKeys` - Allows you to specify keys of node metadata properties that you want to keep in the conversion result. You can specify exact keys or wildcard keys. Wildcard keys are of the format "ABC*" and match any key that starts with "ABC". Supported metadata value types are `bool`, `int`, `float`, and `string`.
- For GLTF files this data comes from the [extras object on nodes](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#nodeextras). For FBX files this data comes from the `Properties70` data on `Model nodes`. Please consult the documentation of your 3D Asset Tool for further details.
+ For GLTF files this data comes from the [extras object on nodes](https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#nodeextras). For FBX files this data comes from the `Properties70` data on `Model nodes`. Consult the documentation of your 3D Asset Tool for further details.
### :::no-loc text="Vertex"::: format
-It is possible to adjust the :::no-loc text="vertex"::: format for a mesh, to trade precision for memory savings. A lower memory footprint allows you to load larger models or achieve better performance. However, depending on your data, the wrong format can significantly impact rendering quality.
+It's possible to adjust the :::no-loc text="vertex"::: format for a mesh, to trade precision for memory savings. A lower memory footprint allows you to load larger models or achieve better performance. However, depending on your data, the wrong format can significantly impact rendering quality.
> [!CAUTION] > Changing the :::no-loc text="vertex"::: format should be a last resort when models don't fit into memory anymore, or when optimizing for the best possible performance. Changes can easily introduce rendering artifacts, both obvious ones and subtle ones. Unless you know what to look out for, you should not change the default.
These adjustments are possible:
* Specific data streams can be explicitly included or excluded. * The accuracy of data streams can be decreased to reduce the memory footprint.
-The following `vertex` section in the `.json` file is optional. For each portion that is not explicitly specified, the conversion service falls back to its default setting.
+The following `vertex` section in the `.json` file is optional. For each portion that isn't explicitly specified, the conversion service falls back to its default setting.
```json {
The following `vertex` section in the `.json` file is optional. For each portion
... ```
-By forcing a component to `NONE`, it is guaranteed that the output mesh does not have the respective stream.
+By forcing a component to `NONE`, it's guaranteed that the output mesh doesn't have the respective stream.
#### Component formats per :::no-loc text="vertex"::: stream
The memory footprints of the formats are as follows:
#### Best practices for component format changes
-* `position`: It is rare that reduced accuracy is sufficient. **16_16_16_16_FLOAT** introduces noticeable quantization artifacts, even for small models.
-* `normal`, `tangent`, `binormal`: Typically these values are changed together. Unless there are noticeable lighting artifacts that result from normal quantization, there is no reason to increase their accuracy. In some cases, though, these components can be set to **NONE**:
+* `position`: It's rare that reduced accuracy is sufficient. **16_16_16_16_FLOAT** introduces noticeable quantization artifacts, even for small models.
+* `normal`, `tangent`, `binormal`: Typically these values are changed together. Unless there are noticeable lighting artifacts that result from normal quantization, there's no reason to increase their accuracy. In some cases, though, these components can be set to **NONE**:
* `normal`, `tangent`, and `binormal` are only needed when at least one material in the model should be lit. In ARR this is the case when a [PBR material](../../overview/features/pbr-materials.md) is used on the model at any time. * `tangent` and `binormal` are only needed when any of the lit materials uses a normal map texture. * `texcoord0`, `texcoord1` : Texture coordinates can use reduced accuracy (**16_16_FLOAT**) when their values stay in the `[0; 1]` range and when the addressed textures have a maximum size of 2048 x 2048 pixels. If those limits are exceeded, the quality of texture mapping will suffer.
By default the converter has to assume that you may want to use PBR materials on
Knowing that you never need dynamic lighting on the model, and knowing that all texture coordinates are in `[0; 1]` range, you can set `normal`, `tangent`, and `binormal` to `NONE` and `texcoord0` to half precision (`16_16_FLOAT`), resulting in only 16 bytes per :::no-loc text="vertex":::. Cutting the mesh data in half enables you to load larger models and potentially improves performance.
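+
+A hedged sketch of what that could look like in the model-specific settings file (`mymodel` is a hypothetical model name; streams that aren't listed keep their defaults):
+
+```bash
+# Write a hypothetical mymodel.ConversionSettings.json that drops the lighting-related
+# streams and halves texture coordinate precision, as described above. The layout follows
+# the optional "vertex" section mentioned earlier.
+cat > mymodel.ConversionSettings.json <<'EOF'
+{
+    "vertex": {
+        "normal": "NONE",
+        "tangent": "NONE",
+        "binormal": "NONE",
+        "texcoord0": "16_16_FLOAT"
+    }
+}
+EOF
+```
+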
+## Settings for point clouds
+
+When converting a point cloud, only a small subset of the properties from the schema is used. Other properties are ignored if specified.
+
+The properties that have an effect on point cloud conversion are listed below; a sample settings file is sketched after the list:
+
+* `scaling` - same meaning as for triangular meshes.
+* `recenterToOrigin` - same meaning as for triangular meshes.
+* `axis` - same meaning as for triangular meshes. The default values are `["+x", "+y", "+z"]`; however, most point cloud data is rotated compared to the renderer's own coordinate system. In most cases, `["+x", "+z", "-y"]` compensates for the rotation.
+* `gammaToLinearVertex` - similar to triangular meshes, this flag can be used when point colors are expressed in gamma space. In practice, enabling it makes the point cloud appear darker.
+* `generateCollisionMesh` - similar to triangular meshes, this flag needs to be enabled to support [spatial queries](../../overview/features/spatial-queries.md). Unlike for triangular meshes, this flag doesn't incur longer conversion times, larger output file sizes, or longer runtime loading times, so disabling it can't be considered an optimization.
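+
+A hedged sketch of such a settings file, assuming a hypothetical source file `mycloud.ply` (so the settings file is named `mycloud.ConversionSettings.json` per the model-specific naming convention):
+
+```bash
+# Write a hypothetical mycloud.ConversionSettings.json; only the properties listed
+# above have any effect on point cloud conversion. The values shown are examples.
+cat > mycloud.ConversionSettings.json <<'EOF'
+{
+    "scaling": 1.0,
+    "recenterToOrigin": true,
+    "axis": ["+x", "+z", "-y"],
+    "gammaToLinearVertex": false,
+    "generateCollisionMesh": true
+}
+EOF
+```
+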
+ ## Memory optimizations Memory consumption of loaded content may become a bottleneck on the rendering system. If the memory payload becomes too large, it may compromise rendering performance or cause the model to not load altogether. This paragraph discusses some important strategies to reduce the memory footprint.
+> [!NOTE]
+> The following optimizations apply to triangular meshes. There is no way to optimize the output of point clouds through conversion settings.
+ ### Instancing Instancing is a concept where meshes are reused for parts with distinct spatial transformations, as opposed to every part referencing its own unique geometry. Instancing has significant impact on the memory footprint.
Example use cases for instancing are the screws in an engine model or chairs in
> [!NOTE] > Instancing can improve the memory consumption (and thus loading times) significantly, however the improvements on the rendering performance side are insignificant.
-The conversion service respects instancing if parts are marked up accordingly in the source file. However, conversion does not perform additional deep analysis of mesh data to identify reusable parts. Thus the content creation tool and its export pipeline are the decisive criteria for proper instancing setup.
+The conversion service respects instancing if parts are marked up accordingly in the source file. However, conversion doesn't perform extra deep analysis of mesh data to identify reusable parts. Thus the content creation tool and its export pipeline are the decisive criteria for proper instancing setup.
A simple way to test whether instancing information gets preserved during conversion is to have a look at the [output statistics](get-information.md#example-info-file), specifically the `numMeshPartsInstanced` member. If the value for `numMeshPartsInstanced` is larger than zero, it indicates that meshes are shared across instances. #### Example: Instancing setup in 3ds Max
-[Autodesk 3ds Max](https://www.autodesk.de/products/3ds-max) has distinct object cloning modes called **`Copy`**, **`Instance`**, and **`Reference`** that behave differently with regards to instancing in the exported `.fbx` file.
+[Autodesk 3ds Max](https://www.autodesk.de/products/3ds-max) has distinct object cloning modes called **`Copy`**, **`Instance`**, and **`Reference`** that behave differently with regard to instancing in the exported `.fbx` file.
![Cloning in 3ds Max](./media/3dsmax-clone-object.png) * **`Copy`** : In this mode the mesh is cloned, so no instancing is used (`numMeshPartsInstanced` = 0). * **`Instance`** : The two objects share the same mesh, so instancing is used (`numMeshPartsInstanced` = 1).
-* **`Reference`** : Distinct modifiers can be applied to the geometries, so the exporter chooses a conservative approach and does not use instancing (`numMeshPartsInstanced` = 0).
+* **`Reference`** : Distinct modifiers can be applied to the geometries, so the exporter chooses a conservative approach and doesn't use instancing (`numMeshPartsInstanced` = 0).
### Depth-based composition mode
As discussed in the [best practices for component format changes](configure-mode
### Texture sizes Depending on the type of scenario, the amount of texture data may outweigh the memory used for mesh data. Photogrammetry models are candidates.
-The conversion configuration does not provide a way to automatically scale down textures. If necessary, texture scaling has to be done as a client-side pre-processing step. The conversion step however does pick a suitable [texture compression format](/windows/win32/direct3d11/texture-block-compression-in-direct3d-11):
+The conversion configuration doesn't provide a way to automatically scale down textures. If necessary, texture scaling has to be done as a client-side pre-processing step. The conversion step however does pick a suitable [texture compression format](/windows/win32/direct3d11/texture-block-compression-in-direct3d-11):
* `BC1` for opaque color textures * `BC7` for source color textures with alpha channel
-Since format `BC7` has twice the memory footprint compared to `BC1`, it is important to make sure that the input textures do not provide an alpha channel unnecessarily.
+Since format `BC7` has twice the memory footprint compared to `BC1`, it's important to make sure that the input textures don't provide an alpha channel unnecessarily.
## Typical use cases
There are certain classes of use cases that qualify for specific optimizations.
* When you need to move parts around, that typically also means that you need support for raycasts or other [spatial queries](../../overview/features/spatial-queries.md), so that you can pick those parts in the first place. On the other hand, if you don't intend to move something around, chances are high that you also don't need it to participate in spatial queries and therefore can turn off the `generateCollisionMesh` flag. This switch has significant impact on conversion times, loading times, and also runtime per-frame update costs.
-* If the application does not use [cut planes](../../overview/features/cut-planes.md), the `opaqueMaterialDefaultSidedness` flag should be turned off. The performance gain is typically 20%-30%. Cut planes can still be used, but there won't be back-faces when looking into the inner parts of objects, which looks counter-intuitive. For more information, see [:::no-loc text="single sided"::: rendering](../../overview/features/single-sided-rendering.md).
+* If the application doesn't use [cut planes](../../overview/features/cut-planes.md), the `opaqueMaterialDefaultSidedness` flag should be turned off. The performance gain is typically 20%-30%. Cut planes can still be used, but there won't be back-faces when looking into the inner parts of objects, which looks counter-intuitive. For more information, see [:::no-loc text="single sided"::: rendering](../../overview/features/single-sided-rendering.md).
### Use case: Photogrammetry models
-When rendering photogrammetry models there is typically no need for a scene graph, so you could set the `sceneGraphMode` to `none`. Since those models rarely contain a complex scene graph to begin with, the impact of this option should be insignificant, though.
+When rendering photogrammetry models there's typically no need for a scene graph, so you could set the `sceneGraphMode` to `none`. Since those models rarely contain a complex scene graph to begin with, the impact of this option should be insignificant, though.
Because lighting is already baked into the textures, no dynamic lighting is needed. Therefore:
Because lighting is already baked into the textures, no dynamic lighting is need
### Use case: Visualization of compact machines, etc.
-In these use cases, the models often have very high detail within a small volume. The renderer is heavily optimized to handle such cases well. However, most of the optimizations mentioned in the previous use case do not apply here:
+In these use cases, the models often have very high detail within a small volume. The renderer is heavily optimized to handle such cases well. However, most of the optimizations mentioned in the previous use case don't apply here:
* Individual parts should be selectable and movable, so the `sceneGraphMode` must be left to `dynamic`. * Ray casts are typically an integral part of the application, so collision meshes must be generated.
In these use cases, the models often have very high detail within a small volume
## Deprecated features Providing settings using the non-model-specific filename `conversionSettings.json` is still supported but deprecated.
-Please use the model-specific filename `<modelName>.ConversionSettings.json` instead.
+Use the model-specific filename `<modelName>.ConversionSettings.json` instead.
The use of a `material-override` setting to identify a [Material Override file](override-materials.md) in the conversion settings file is still supported but deprecated.
-Please use the model-specific filename `<modelName>.MaterialOverrides.json` instead.
+Use the model-specific filename `<modelName>.MaterialOverrides.json` instead.
## Next steps
remote-rendering Get Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/get-information.md
It contains the following information:
This section provides information about the source scene. There will often be discrepancies between the values in this section and the equivalent values in the tool that created the source model. Such differences are expected, because the model gets modified during the export and conversion steps. * `numMeshes`: The number of mesh parts, where each part can reference a single material.
-* `numFaces`: The total number of _triangles_ in the whole model. Note that the mesh is triangulated during conversion. This number contributes to the polygon limit in the [standard rendering server size](../../reference/vm-sizes.md#how-the-renderer-evaluates-the-number-of-polygons).
+* `numFaces`: The total number of triangles/points in the whole model. This number contributes to the primitive limit in the [standard rendering server size](../../reference/vm-sizes.md#how-the-renderer-evaluates-the-number-of-primitives).
* `numVertices`: The total number of vertices in the whole model. * `numMaterial`: The total number of materials in the whole model.
-* `numFacesSmallestMesh`: The number of triangles in the smallest mesh of the model.
-* `numFacesBiggestMesh`: The number of triangles in the biggest mesh of the model.
+* `numFacesSmallestMesh`: The number of triangles/points in the smallest mesh of the model.
+* `numFacesBiggestMesh`: The number of triangles/points in the biggest mesh of the model.
* `numNodes`: The number of nodes in the model's scene graph. * `numMeshUsagesInScene`: The number of times nodes reference meshes. More than one node may reference the same mesh. * `maxNodeDepth`: The maximum depth of the nodes within the scene graph.
This section records information calculated from the converted asset.
## Deprecated features
-The conversion service writes the files `stdout.txt` and `stderr.txt` to the output container, and these had been the only source of warnings and errors.
-These files are now deprecated. Instead, please use
+The conversion service writes the files `stdout.txt` and `stderr.txt` to the output container, and these files had been the only source of warnings and errors.
+These files are now deprecated. Instead, use
[result files](#information-about-a-conversion-the-result-file) for this purpose. ## Next steps
remote-rendering Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/model-conversion.md
# Convert models
-Azure Remote Rendering allows you to render very complex models. To achieve maximum performance, the data must be preprocessed to be in an optimal format. Depending on the amount of data, this step might take a while. It would be impractical, if this time was spent during model loading. Also, it would be wasteful to repeat this process for multiple sessions.
+Azure Remote Rendering allows you to render very complex models. To achieve maximum performance, the data must be preprocessed to be in an optimal format. Depending on the amount of data, this step might take a while. It would be impractical if this time were spent during model loading. Also, it would be wasteful to repeat this process for multiple sessions.
For these reasons, the ARR service provides a dedicated *conversion service*, which you can run ahead of time. Once converted, a model can be loaded from an Azure Storage Account.
Once converted, a model can be loaded from an Azure Storage Account.
The conversion service supports these formats: -- **FBX** (version 2011 to version 2020)-- **GLTF**/**GLB** (version 2.x)
+### Triangular meshes
+
+* **FBX** (version 2011 to version 2020)
+* **GLTF**/**GLB** (version 2.x)
There are minor differences between the formats with regard to material property conversion, as listed in chapter [material mapping for model formats](../../reference/material-mapping.md).
+### Point clouds
+
+* **XYZ** : Text file format where every line contains a single point, formatted as `position_x position_y position_z red green blue` (a tiny sample file is sketched after this list)
+* **PLY** : Only binary PLY files are supported. Properties other than position and color are ignored. Every PLY file has a human-readable header, which can be used to verify whether the following requirements are met:
+ * file must be encoded using the `binary_little_endian 1.0` format,
+ * file contains a point cloud (that is, no triangles),
+ * positions contain all three components (x, y, z),
+ * colors contain all three components (red, green, blue).
+
+ In case any other properties exist, they're ignored during ingestion.
+* **E57** : E57 contains two types of data: `data3d` and `image2d`. The conversion service only loads the `data3d` part of the file, while the `image2d` part is ignored.
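+
+As an illustration of the XYZ layout described above, a tiny hand-made point cloud file might look like the following sketch (hypothetical values; 8-bit 0-255 color values are assumed here, so check your authoring tool's export conventions):
+
+```bash
+# Write a tiny, hypothetical .xyz point cloud: one point per line, formatted as
+# "position_x position_y position_z red green blue".
+cat > sample.xyz <<'EOF'
+0.0 0.0 0.0 255 0 0
+1.0 0.0 0.0 0 255 0
+0.0 1.0 0.0 0 0 255
+0.0 0.0 1.0 255 255 255
+EOF
+```
+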
+ ## The conversion process 1. [Prepare two Azure Blob Storage containers](blob-storage.md): one for input, one for output
remote-rendering Override Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/override-materials.md
The material settings in the source model define the [PBR materials](../../overview/features/pbr-materials.md) used by the renderer. Sometimes the default conversion doesn't give the desired results and you need to make changes. For more information, see [Material mapping for model formats](../../reference/material-mapping.md).
-When a model is converted for use in Azure Remote Rendering, you can provide a material override file to customize how material conversion is done on a per-material basis.
+When a triangular mesh is converted for use in Azure Remote Rendering, you can provide a material override file to customize how material conversion is done on a per-material basis.
If a file called *\<modelName>.MaterialOverrides.json* is found in the input container with the input model *\<modelName>.\<ext>*, it's used as the material override file. ## The override file used during conversion
remote-rendering Fresnel Effect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/fresnel-effect.md
The fresnel effect material feature is a non-physically correct, ad-hoc effect.
The fresnel effect gives affected objects a colored shine around their edges. Information about effect customization and examples of the rendering results can be found in the following sections.
+> [!NOTE]
+> The fresnel effect can't be applied to point clouds.
+ ## Enabling the fresnel effect To use the fresnel effect feature, it needs to be enabled on the materials in question. You can enable it by setting the FresnelEffect bit of the [PbrMaterialFeatures](/dotnet/api/microsoft.azure.remoterendering.pbrmaterialfeatures) on the [PBR material](../../overview/features/pbr-materials.md). The same pattern applies to the [ColorMaterialFeatures](/dotnet/api/microsoft.azure.remoterendering) and the [Color material](../../overview/features/color-materials.md). See the code samples section for a usage demonstration.
-After enabling, the fresnel effect will immediately be visible. By default the shine will be white (1, 1, 1, 1) and have an exponent of 1. You can customize these settings using the parameter setters below.
+Once enabled through the API, the fresnel effect is immediately visible. By default, the shine is white (1, 1, 1, 1) and has an exponent of 1. You can customize these settings using the parameter setters below.
## Customizing the effect appearance
remote-rendering Lights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/lights.md
# Scene lighting
-By default the remotely rendered objects are lit using a [sky light](sky.md). For most applications this is already sufficient, but you can add further light sources to the scene.
+By default the remotely rendered objects are lit using a [sky light](sky.md). For most applications, a static sky light is already sufficient, but you can add further dynamic light sources to the scene.
> [!IMPORTANT]
-> Only [PBR materials](pbr-materials.md) are affected by light sources. [Color materials](color-materials.md) always appear fully bright.
+> Only [PBR materials](pbr-materials.md) are affected by light sources. [Color materials](color-materials.md) and point clouds always appear fully bright.
> [!NOTE] > Casting shadows is currently not supported. Azure Remote Rendering is optimized to render huge amounts of geometry, utilizing multiple GPUs if necessary. Traditional approaches for shadow casting do not work well in such scenarios.
In Azure Remote Rendering the `PointLightComponent` can not only emit light from
* **Radius:** The default radius is zero, in which case the light acts as a point light. If the radius is larger than zero, it acts as spherical light source, which changes the appearance of specular highlights.
-* **Length:** If both `Length` and `Radius` are non-zero, the light acts as a tube light. This can be used to simulate neon tubes.
+* **Length:** If both `Length` and `Radius` are non-zero, the light acts as a tube light. This combination can be used to simulate neon tubes.
* **AttenuationCutoff:** If left to (0,0) the attenuation of the light only depends on its `Intensity`. However, you can provide custom min/max distances over which the light's intensity is scaled linearly down to 0. This feature can be used to enforce a smaller range of influence of a specific light.
remote-rendering Outlines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/outlines.md
Selected objects can be highlighted visually by adding outline rendering via the [Hierarchical state override component](../../overview/features/override-hierarchical-state.md). This chapter explains how global parameters for outline rendering are changed through the client API.
-Outline properties are a global setting. All objects that use outline rendering will use the same setting - it is not possible to use a per-object outline color.
+Outline properties are a global setting. All objects that use outline rendering will use the same setting - it isn't possible to use a per-object outline color.
+
+> [!NOTE]
+> The outline rendering effect can't be applied to point clouds.
## Parameters for `OutlineSettings`
remote-rendering Override Hierarchical State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/override-hierarchical-state.md
# Hierarchical state override
-In many cases, it is necessary to dynamically change the appearance of parts of a [model](../../concepts/models.md), for example hiding sub graphs or switching parts to transparent rendering. Changing the materials of each part involved is not practical since it requires to iterate over the whole scene graph, and manage material cloning and assignment on each node.
+In many cases, it's necessary to dynamically change the appearance of parts of a [model](../../concepts/models.md), for example hiding subgraphs or switching parts to transparent rendering. Changing the materials of each part involved isn't practical since it requires iterating over the whole scene graph and managing material cloning and assignment on each node.
-To accomplish this use case with the least possible overhead, use the `HierarchicalStateOverrideComponent`. This component implements hierarchical state updates on arbitrary branches of the scene graph. That means, a state can be defined on any level in the scene graph and it trickles down the hierarchy until it is either overridden by a new state, or applied to a leaf object.
+To accomplish this use case with the least possible overhead, use the `HierarchicalStateOverrideComponent`. This component implements hierarchical state updates on arbitrary branches of the scene graph. That means a state can be defined on any level in the scene graph, and it trickles down the hierarchy until it's either overridden by a new state or applied to a leaf object.
As an example, consider the model of a car and you want to switch the whole car to be transparent, except for the inner engine part. This use case involves only two instances of the component: * The first component is assigned to the model's root node and turns on transparent rendering for the whole car. * The second component is assigned to the root node of the engine and overrides the state again by explicitly turning off see-through mode.
+> [!NOTE]
+> Point clouds do not expose a full scene graph (see [mesh type differences](../../concepts/meshes.md#mesh-types)), so assigning a hierarchical override to the root entity of a point cloud model will apply the state to the full point cloud. Furthermore, some state override features are not supported for point clouds, as mentioned in the respective section.
+ ## Features The fixed set of states that can be overridden are:
The fixed set of states that can be overridden are:
> [!IMPORTANT] > The see-through effect only works when the *TileBasedComposition* [rendering mode](../../concepts/rendering-modes.md) is used.
+ > [!NOTE]
+ > The see-through effect is ignored for point clouds.
+ * **`Shell`**: The geometry is rendered as a transparent, de-saturated shell. This mode allows fading out non-important parts of a scene while still retaining a sense of shape and relative positioning. To change the shell rendering's appearance, use the [ShellRenderingSettings](shell-effect.md) state. See the following image for the car model being entirely shell-rendered, except for the blue springs: ![Shell mode used to fade out specific objects](./media/shell.png)
The fixed set of states that can be overridden are:
> [!IMPORTANT] > The shell effect only works when the *TileBasedComposition* [rendering mode](../../concepts/rendering-modes.md) is used.
+ > [!NOTE]
+ > The shell effect is ignored for point clouds.
+ * **`Selected`**: The geometry is rendered with a [selection outline](outlines.md). ![Outline option used to highlight a selected part](./media/selection-outline.png)
+ > [!NOTE]
+ > Selection outline rendering is ignored for point clouds.
+ * **`DisableCollision`**: The geometry is exempt from [spatial queries](spatial-queries.md). The **`Hidden`** flag doesn't affect the collision state flag, so these two flags are often set together. * **`UseCutPlaneFilterMask`**: Use an individual filter bit mask to control the cut plane selection. This flag determines whether the individual filter mask should be used or inherited from its parent. The filter bit mask itself is set via the `CutPlaneFilterMask` property. For detailed information about how the filtering works, refer to the [Selective cut planes paragraph](cut-planes.md#selective-cut-planes). See the following example where only the tire and rim is cut while the rest of the scene remains unaffected.
The `tint color` override is slightly special in that there's both an on/off/inh
## Performance considerations
-An instance of `HierarchicalStateOverrideComponent` itself doesn't add much runtime overhead. However, it's always good practice to keep the number of active components low. For instance, when implementing a selection system that highlights the picked object, it is recommended to delete the component when the highlight is removed. Keeping the components around with neutral features can quickly add up.
+An instance of `HierarchicalStateOverrideComponent` itself doesn't add much runtime overhead. However, it's always good practice to keep the number of active components low. For instance, when implementing a selection system that highlights the picked object, it's recommended to delete the component when the highlight is removed. Keeping the components around with neutral features can quickly add up.
Transparent rendering puts more workload on the server's GPUs than standard rendering. If large parts of the scene graph are switched to *see-through*, with many layers of geometry being visible, it may become a performance bottleneck. The same is valid for objects with [selection outlines](../../overview/features/outlines.md#performance) and for [shell rendering](../../overview/features/shell-effect.md#performance).
remote-rendering Pbr Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/pbr-materials.md
# PBR materials
-*PBR materials* are one of the supported [material types](../../concepts/materials.md) in Azure Remote Rendering. They are used for [meshes](../../concepts/meshes.md) that should receive realistic lighting.
+*PBR materials* are one of the supported [material types](../../concepts/materials.md) in Azure Remote Rendering. They're used for triangular [meshes](../../concepts/meshes.md) that should receive realistic lighting. Point clouds, on the other hand, aren't affected by dynamic lighting.
-PBR stands for **P**hysically **B**ased **R**endering and means that the material describes the visual properties of a surface in a physically plausible way, such that realistic results are possible under all lighting conditions. Most modern game engines and content creation tools support PBR materials because they are considered the best approximation of real world scenarios for real-time rendering.
+PBR stands for **P**hysically **B**ased **R**endering and means that the material describes the visual properties of a surface in a physically plausible way, such that realistic results are possible under all lighting conditions. Most modern game engines and content creation tools support PBR materials because they're considered the best approximation of real world scenarios for real-time rendering.
![Helmet glTF sample model rendered by ARR](media/helmet.png)
-PBR materials are not a universal solution, though. There are materials that reflect color differently depending on the viewing angle. For example, some fabrics or car paints. These kinds of materials are not handled by the standard PBR model, and are currently not supported by Azure Remote Rendering. This includes PBR extensions, such as *Thin-Film* (multi-layered surfaces) and *Clear-Coat* (for car paints).
+PBR materials aren't a universal solution, though. There are materials that reflect color differently depending on the viewing angle. For example, some fabrics or car paints. These kinds of materials aren't handled by the standard PBR model, and are currently not supported by Azure Remote Rendering. This limitation includes PBR extensions, such as *Thin-Film* (multi-layered surfaces) and *Clear-Coat* (for car paints).
## Common material properties
These properties are common to all materials:
The core idea of physically based rendering is to use *BaseColor*, *Metalness*, and *Roughness* properties to emulate a wide range of real-world materials. A detailed description of PBR is beyond the scope of this article. For more information about PBR, see [other sources](http://www.pbr-book.org). The following properties are specific to PBR materials:
-* **baseColor:** In PBR materials, the *albedo color* is referred to as the *base color*. In Azure Remote Rendering the *albedo color* property is already present through the common material properties, so there is no additional base color property.
+* **baseColor:** In PBR materials, the *albedo color* is referred to as the *base color*. In Azure Remote Rendering the *albedo color* property is already present through the common material properties, so there's no additional base color property.
* **roughness** and **roughnessMap:** Roughness defines how rough or smooth the surface is. Rough surfaces scatter the light in more directions than smooth surfaces, which make reflections blurry rather than sharp. The value range is from `0.0` to `1.0`. When `roughness` equals `0.0`, reflections will be sharp. When `roughness` equals `0.5`, reflections will become blurry.
The core idea of physically based rendering is to use *BaseColor*, *Metalness*,
![An object rendered with and without ambient occlusion](./media/boom-box-ao2.gif)
-* **transparent:** For PBR materials, there is only one transparency setting: it is enabled or not. The opacity is defined by the albedo color's alpha channel. When enabled, a more complex rendering pipeline is invoked to draw semi-transparent surfaces. Azure Remote Rendering implements true [order independent transparency](https://en.wikipedia.org/wiki/Order-independent_transparency) (OIT).
+* **transparent:** For PBR materials, there's only one transparency setting: it's enabled or not. The opacity is defined by the albedo color's alpha channel. When enabled, a more complex rendering pipeline is invoked to draw semi-transparent surfaces. Azure Remote Rendering implements true [order independent transparency](https://en.wikipedia.org/wiki/Order-independent_transparency) (OIT).
- Transparent geometry is expensive to render. If you only need holes in a surface, for example for the leaves of a tree, it is better to use alpha clipping instead.
+ Transparent geometry is expensive to render. If you only need holes in a surface, for example for the leaves of a tree, it's better to use alpha clipping instead.
![Spheres rendered with zero to full transparency](./media/transparency.png) Notice in the image above how the right-most sphere is fully transparent, but the reflection is still visible.
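A hedged C# sketch of toggling these two modes follows. It assumes the PBR material object exposes a `PbrFlags` property with `TransparentMaterial` and `AlphaClipped` flags and a float-based `AlbedoColor`; treat these names as assumptions and confirm them in the material API reference.

```cs
// Sketch: true transparency on a glass-like material, cheaper alpha clipping for foliage.
// 'PbrFlags', 'TransparentMaterial', 'AlphaClipped', and 'AlbedoColor' are assumed member names.
void ConfigureTransparency(PbrMaterial glassMaterial, PbrMaterial leafMaterial)
{
    // Semi-transparent surface: the opacity comes from the albedo color's alpha channel.
    glassMaterial.PbrFlags |= PbrMaterialFeatures.TransparentMaterial;
    glassMaterial.AlbedoColor = new Color4(1.0f, 1.0f, 1.0f, 0.3f);

    // Holes in a surface (for example, tree leaves): alpha clipping instead of full transparency.
    leafMaterial.PbrFlags |= PbrMaterialFeatures.AlphaClipped;
}
```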
The core idea of physically based rendering is to use *BaseColor*, *Metalness*,
Azure Remote Rendering uses the Cook-Torrance micro-facet BRDF with GGX NDF, Schlick Fresnel, and a GGX Smith correlated visibility term with a Lambert diffuse term. This model is the de facto industry standard at the moment. For more in-depth details, refer to this article: [Physically based Rendering - Cook Torrance](http://www.codinglabs.net/article_physically_based_rendering_cook_torrance.aspx)
- An alternative to the *Metalness-Roughness* PBR model used in Azure Remote Rendering is the *Specular-Glossiness* PBR model. This model can represent a broader range of materials. However, it is more expensive, and usually does not work well for real-time cases.
- It is not always possible to convert from *Specular-Glossiness* to *Metalness-Roughness* as there are *(Diffuse, Specular)* value pairs that cannot be converted to *(BaseColor, Metalness)*. The conversion in the other direction is simpler and more precise, since all *(BaseColor, Metalness)* pairs correspond to well-defined *(Diffuse, Specular)* pairs.
+ An alternative to the *Metalness-Roughness* PBR model used in Azure Remote Rendering is the *Specular-Glossiness* PBR model. This model can represent a broader range of materials. However, it's more expensive, and usually doesn't work well for real-time cases.
+ It isn't always possible to convert from *Specular-Glossiness* to *Metalness-Roughness* as there are *(Diffuse, Specular)* value pairs that can't be converted to *(BaseColor, Metalness)*. The conversion in the other direction is simpler and more precise, since all *(BaseColor, Metalness)* pairs correspond to well-defined *(Diffuse, Specular)* pairs.
## API documentation
remote-rendering Point Cloud Rendering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/point-cloud-rendering.md
+
+ Title: Point cloud rendering
+description: High-level overview of point cloud rendering and the API to change global point cloud settings
++ Last updated : 06/02/2022++++
+# Point cloud rendering
+
+> [!NOTE]
+> **The ARR point cloud rendering feature is currently in public preview.**
+>
+> This feature is being actively developed, and may not be complete. It's made available on a “Preview” basis. You can test and use this feature in your scenarios, and [provide feedback](../../resources/support.md).
+>
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+ARR supports rendering of point clouds as an alternative to triangular meshes. Point cloud rendering enables additional use cases where converting point clouds to triangular meshes as a preprocessing step is impractical (turnaround times, complexity), or where the conversion would drop important detail.
+
+Similar to triangular mesh conversion, point cloud conversion doesn't decimate the input data.
+
+## Point cloud conversion
+
+Conversion of point cloud assets works the same way as conversion of triangular meshes: a single point cloud input file is converted to an `.arrAsset` file, which in turn can be consumed by the runtime API for loading.
+
+The list of supported point cloud file formats can be found in the [model conversion](../../how-tos/conversion/model-conversion.md#point-clouds) section.
+
+Conversion settings specifically for point cloud files are explained in the [conversion settings](../../how-tos/conversion/configure-model-conversion.md#settings-for-point-clouds) paragraph.
+
+## Size limitations
+
+For the maximum number of allowed points, the same distinction between a `standard` and a `premium` rendering session applies, as described in the paragraph about [server size limits](../../reference/limits.md#overall-number-of-primitives).
+
+## Global rendering properties
+
+There's a single API to access global rendering settings for point clouds. The `_Experimental` suffix has been added to indicate that the API is currently in public preview and might be subject to change.
+
+```cs
+void ChangeGlobalPointCloudSettings(RenderingSession session)
+{
+ PointCloudSettings settings = session.Connection.PointCloudSettings_Experimental;
+
+ // Make all points bigger (default = 1.0)
+ settings.PointSizeScale = 1.25f;
+}
+```
+
+```cpp
+void ChangeGlobalPointCloudSettings(ApiHandle<RenderingSession> session)
+{
+ ApiHandle<PointCloudSettings> settings = session->Connection()->PointCloudSettings_Experimental();
+
+ // Make all points bigger (default = 1.0)
+ settings->SetPointSizeScale(1.25f);
+}
+```
+
+## API documentation
+
+* [C# RenderingConnection.PointCloudSettings_Experimental property](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.pointcloudsettings_experimental)
+* [C++ RenderingConnection::PointCloudSettings_Experimental()](/cpp/api/remote-rendering/renderingconnection#pointcloudsettings_experimental)
+
+## Next steps
+
+* [Configuring the model conversion](../../how-tos/conversion/configure-model-conversion.md)
+* [Using the Azure Remote Rendering Toolkit (ARRT)](../../samples/azure-remote-rendering-asset-tool.md)
remote-rendering Shell Effect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/shell-effect.md
The shell state of the [Hierarchical state override component](../../overview/fe
You can configure the appearance of shell-rendered objects via the `ShellRenderingSettings` global state. All objects that use shell rendering will use the same setting. There are no per-object parameters.
+> [!NOTE]
+> The shell rendering effect can't be applied to point clouds.
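A minimal C# sketch of adjusting the global shell appearance is shown below. It assumes `ShellRenderingSettings` is reachable through the rendering connection and exposes `Desaturation` and `Opacity` values; the parameter list in the next section is the authoritative reference.

```cs
// Sketch: tweak the global shell appearance; all shell-rendered objects share these values.
// 'Desaturation' and 'Opacity' are assumed property names, see the parameter list below.
void ConfigureShellRendering(RenderingSession session)
{
    ShellRenderingSettings shellSettings = session.Connection.ShellRenderingSettings;
    shellSettings.Desaturation = 0.8f; // mostly grayscale
    shellSettings.Opacity = 0.4f;      // mostly transparent
}
```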
+ ## ShellRenderingSettings parameters Class `ShellRenderingSettings` holds the settings related to global shell rendering properties:
remote-rendering Sky https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/sky.md
# Sky reflections In Azure Remote Rendering, a sky texture is used to light objects realistically. For augmented reality applications, this texture should resemble your real-world surroundings, to make objects appear convincing. This article describes how to change the sky texture.
+The sky only affects the rendering of [PBR materials](../../overview/features/pbr-materials.md). [Color materials](../../overview/features/color-materials.md) and [point clouds](../../overview/features/point-cloud-rendering.md) aren't affected.
> [!NOTE] > The sky texture is also referred to as an *environment map*. These terms are used interchangeably.
void ChangeEnvironmentMap(ApiHandle<RenderingSession> session)
} ```
-Note that the `LoadTextureFromSasAsync` variant is used above because a built-in texture is loaded. In case of loading from [linked blob storages](../../how-tos/create-an-account.md#link-storage-accounts), use the `LoadTextureAsync` variant.
+The `LoadTextureFromSasAsync` variant is used above because a built-in texture is loaded. When loading from [linked blob storages](../../how-tos/create-an-account.md#link-storage-accounts) instead, use the `LoadTextureAsync` variant.
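For comparison, a hedged C# sketch of the linked-storage path might look like the following. The `LoadTextureOptions` parameter order, the storage account, container, and blob names are all assumptions here; use the texture loading API reference for the exact signature.

```cs
// Sketch: load a cube map from a linked blob storage account and use it as the sky texture.
// Storage account, container, and blob names are placeholders; the options signature is assumed.
async void ChangeEnvironmentMapFromLinkedStorage(RenderingSession session)
{
    Texture skyTexture = await session.Connection.LoadTextureAsync(
        new LoadTextureOptions("mystorageaccount", "mycontainer", "skymaps/sunset.dds", TextureType.CubeMap));

    session.Connection.SkyReflectionSettings.SkyReflectionTexture = skyTexture;
}
```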
## Sky texture types
All textures have to be in a [supported texture format](../../concepts/textures.
### Cube environment maps
-For reference, here is an unwrapped cubemap:
+For reference, here's an unwrapped cubemap:
![An unwrapped cubemap](media/Cubemap-example.png)
remote-rendering Spatial Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/spatial-queries.md
Spatial queries are operations with which you can ask the remote rendering service which objects are located in an area. Spatial queries are frequently used to implement interactions, such as figuring out which object a user is pointing at.
-All spatial queries are evaluated on the server. Consequently they are asynchronous operations and results will arrive with a delay that depends on your network latency. Since every spatial query generates network traffic, be careful not to do too many at once.
+All spatial queries are evaluated on the server. Accordingly, they're asynchronous operations and results will arrive with a delay that depends on your network latency. Since every spatial query generates network traffic, be careful not to do too many at once.
## Collision meshes
-Spatial queries are powered by the [Havok Physics](https://www.havok.com/products/havok-physics) engine and require a dedicated collision mesh to be present. By default, [model conversion](../../how-tos/conversion/model-conversion.md) generates collision meshes. If you don't require spatial queries on a complex model, consider disabling collision mesh generation in the [conversion options](../../how-tos/conversion/configure-model-conversion.md), as it has an impact in multiple ways:
+For triangular meshes, spatial queries are powered by the [Havok Physics](https://www.havok.com/products/havok-physics) engine and require a dedicated collision mesh to be present. By default, [model conversion](../../how-tos/conversion/model-conversion.md) generates collision meshes. If you don't require spatial queries on a complex model, consider disabling collision mesh generation in the [conversion options](../../how-tos/conversion/configure-model-conversion.md), as it has an impact in multiple ways:
* [Model conversion](../../how-tos/conversion/model-conversion.md) will take considerably longer. * Converted model file sizes are noticeably larger, impacting download speed.
Spatial queries are powered by the [Havok Physics](https://www.havok.com/product
* Runtime CPU memory consumption is higher. * There's a slight runtime performance overhead for every model instance.
+For point clouds, none of these drawbacks apply.
+ ## Ray casts A *ray cast* is a spatial query where the runtime checks which objects are intersected by a ray, starting at a given position and pointing in a certain direction. As an optimization, a maximum ray distance is also given, to not search for objects that are too far away.
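A hedged C# sketch of such a query follows; the `RayCast` constructor arguments, the hit policy enum, and the hit member names are assumptions and should be checked against the spatial query API reference.

```cs
// Sketch: cast a ray from a start position along a direction, over at most 10 meters.
// Constructor arguments and member names ('Hits', 'HitEntity') are assumptions.
async void CastRay(RenderingSession session, Double3 startPosition, Double3 direction)
{
    var rayCast = new RayCast(startPosition, direction, 10.0, HitCollectionPolicy.ClosestHit);

    RayCastQueryResult result = await session.Connection.RayCastQueryAsync(rayCast);
    foreach (RayCastHit hit in result.Hits)
    {
        Entity hitEntity = hit.HitEntity; // the remote entity intersected by the ray
        // ... for example, attach a selection outline to 'hitEntity' here
    }
}
```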
remote-rendering Z Fighting Mitigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/z-fighting-mitigation.md
# Z-fighting mitigation
-When two surfaces overlap, it isn't clear which one should be rendered on top of the other. The result even varies per pixel, resulting in camera view-dependent artifacts. When the camera or the mesh moves, these patterns flicker noticeably. This artifact is called *z-fighting*. For augmented reality and virtual reality applications, the problem is intensified because head-mounted devices naturally always move. To prevent viewer discomfort, Azure Remote Rendering offers z-fighting mitigation functionality.
+When two triangular surfaces overlap, it isn't clear which one should be rendered on top of the other. The result even varies per pixel, resulting in camera view-dependent artifacts. When the camera or the mesh moves, these patterns flicker noticeably. This artifact is called *z-fighting*. For augmented reality and virtual reality applications, the problem is intensified because head-mounted devices naturally always move. To prevent viewer discomfort, Azure Remote Rendering offers z-fighting mitigation functionality.
+
+> [!NOTE]
+> The z-fighting mitigation settings have no effect on point cloud rendering.
## Z-fighting mitigation modes
void EnableZFightingMitigation(ApiHandle<RenderingSession> session, bool highlig
Z-fighting happens mainly for two reasons:
-* When surfaces are very far away from the camera, the precision of their depth values degrades and the values become indistinguishable
+* When surfaces are far away from the camera, the precision of their depth values degrades and the values become indistinguishable
* When surfaces in a mesh physically overlap The first problem can always happen and is difficult to eliminate. If this situation happens in your application, make sure that the ratio of the *far plane* distance to the *near plane* distance is as low as practical. For example, a near plane at distance 0.01 and far plane at distance 1000 creates this problem much earlier than having the near plane at 0.1 and the far plane at distance 20.
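A short C# sketch of tightening the depth range accordingly, assuming the connection exposes a `CameraSettings` object with a `SetNearAndFarPlane` call (see the camera documentation for the exact API):

```cs
// Sketch: a tight near/far range (0.1 m to 20 m) preserves far more depth precision
// than an excessive range such as 0.01 m to 1000 m, which triggers z-fighting much earlier.
void TightenDepthRange(RenderingSession session)
{
    CameraSettings cameraSettings = session.Connection.CameraSettings;
    cameraSettings.SetNearAndFarPlane(0.1f, 20.0f);
}
```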
remote-rendering Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/reference/limits.md
# Limitations
-A number of features have size, count, or other limitations.
+Many features have size, count, or other limitations.
## Azure Frontend
The following limitations apply to the frontend API (C++ and C#):
## Geometry
-* **Animation:** Animations are limited to animating individual transforms of [game objects](../concepts/entities.md). Skeletal animations with skinning or vertex animations are not supported. Animation tracks from the source asset file are not preserved. Instead, object transform animations have to be driven by client code.
-* **Custom shaders:** Authoring of custom shaders is not supported. Only built-in [Color materials](../overview/features/color-materials.md) or [PBR materials](../overview/features/pbr-materials.md) can be used.
+* **Animation:** Animations are limited to animating individual transforms of [game objects](../concepts/entities.md). Skeletal animations with skinning or vertex animations aren't supported. Animation tracks from the source asset file aren't preserved. Instead, object transform animations have to be driven by client code.
+* **Custom shaders:** Authoring of custom shaders isn't supported. Only built-in [Color materials](../overview/features/color-materials.md) or [PBR materials](../overview/features/pbr-materials.md) can be used.
* **Maximum number of distinct materials** in an asset: 65,535. For more information about automatic material count reduction, see the [material de-duplication](../how-tos/conversion/configure-model-conversion.md#material-de-duplication) chapter.
-* **Maximum number of distinct textures**: There is no hard limit on the number of distinct textures. The only constraint is overall GPU memory and the number of distinct materials.
-* **Maximum dimension of a single texture**: 16,384 x 16,384. Larger textures cannot be used by the renderer. The conversion process can sometimes reduce larger textures in size, but in general it will fail to process textures larger than this limit.
+* **Maximum number of distinct textures**: There's no hard limit on the number of distinct textures. The only constraint is overall GPU memory and the number of distinct materials.
+* **Maximum dimension of a single texture**: 16,384 x 16,384. Larger textures can't be used by the renderer. The conversion process can sometimes reduce larger textures in size, but in general it will fail to process textures larger than this limit.
-### Overall number of polygons
+### Overall number of primitives
-The allowable number of polygons for all loaded models depends on the size of the VM as passed to [the session management REST API](../how-tos/session-rest-api.md):
+A primitive is either a single triangle (in triangular meshes) or a single point (in point cloud meshes).
+The allowable number of primitives for all loaded models depends on the size of the VM as passed to [the session management REST API](../how-tos/session-rest-api.md):
-| Server size | Maximum number of polygons |
+| Server size | Maximum number of primitives |
|:--|:--|
| standard | 20 million |
| premium | no limit |
For detailed information on this limitation, see the [server size](../reference/
**Windows 10/11 desktop**
-* Win32/x64 is the only supported Win32 platform. Win32/x86 is not supported.
+* Win32/x64 is the only supported Win32 platform. Win32/x86 isn't supported.
**HoloLens 2**
-* The [render from PV camera](/windows/mixed-reality/mixed-reality-capture-for-developers#render-from-the-pv-camera-opt-in) feature is not supported.
+* The [render from PV camera](/windows/mixed-reality/mixed-reality-capture-for-developers#render-from-the-pv-camera-opt-in) feature isn't supported.
remote-rendering Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/reference/vm-sizes.md
Azure Remote Rendering is available in two server configurations: `Standard` and `Premium`.
-## Polygon limits
+## Primitive limits
-Remote Rendering with `Standard` size server has a maximum scene size of 20 million polygons. Remote Rendering with `Premium` size does not enforce a hard maximum, but performance may be degraded if your content exceeds the rendering capabilities of the service.
+A primitive is either a single triangle (in triangular meshes) or a single point (in point cloud meshes). Triangular meshes can be instantiated together with point clouds, in which case the sum of all points and triangles in the session is counted against the limit.
-When the renderer on on a 'Standard' server size hits this limitation, it switches rendering to a checkerboard background:
+Remote Rendering with `Standard` size server has a maximum scene size of 20 million primitives. Remote Rendering with `Premium` size doesn't enforce a hard maximum, but performance may be degraded if your content exceeds the rendering capabilities of the service.
+
+When the renderer on a 'Standard' server size hits this limitation, it switches rendering to a checkerboard background:
![Screenshot shows a grid of black and white squares with a Tools menu.](media/checkerboard.png) ## Specify the server size
-The desired type of server configuration has to be specified at rendering session initialization time. It cannot be changed within a running session. The following code examples show the place where the server size must be specified:
+The desired type of server configuration has to be specified at rendering session initialization time. It can't be changed within a running session. The following code examples show the place where the server size must be specified:
```cs async void CreateRenderingSession(RemoteRenderingClient client)
For the [example PowerShell scripts](../samples/powershell-example-scripts.md),
}, ```
-### How the renderer evaluates the number of polygons
+### How the renderer evaluates the number of primitives
-The number of polygons that are considered for the limitation test are the number of polygons that are actually passed to the renderer. This geometry is typically the sum of all instantiated models, but there are also exceptions. The following geometry is **not included**:
+The number of primitives considered for the limitation test is the number of primitives that are actually passed to the renderer. This geometry is typically the sum of all instantiated meshes, but there are also exceptions. The following geometry is **not included**:
* Loaded model instances that are fully outside the view frustum. * Models or model parts that are switched to invisible, using the [hierarchical state override component](../overview/features/override-hierarchical-state.md).
-Accordingly, it is possible to write an application that targets the `standard` size that loads multiple models with a polygon count close to the limit for every single model. When the application only shows a single model at a time, the checkerboard is not triggered.
+Accordingly, it's possible to write an application that targets the `standard` size that loads multiple models with a primitive count close to the limit for every single model. When the application only shows a single model at a time, the checkerboard isn't triggered.
-### How to determine the number of polygons
+### How to determine the number of primitives
-There are two ways to determine the number of polygons of a model or scene that contribute to the budget limit of the `standard` configuration size:
-* On the model conversion side, retrieve the [conversion output json file](../how-tos/conversion/get-information.md), and check the `numFaces` entry in the [*inputStatistics* section](../how-tos/conversion/get-information.md#the-inputstatistics-section)
-* If your application is dealing with dynamic content, the number of rendered polygons can be queried dynamically during runtime. Use a [performance assessment query](../overview/features/performance-queries.md#performance-assessment-queries) and check for the `polygonsRendered` member in the `FrameStatistics` struct. The `PolygonsRendered` field will be set to `bad` when the renderer hits the polygon limitation. The checkerboard background is always faded in with some delay to ensure user action can be taken after this asynchronous query. User action can for instance be hiding or deleting model instances.
+There are two ways to determine the number of primitives of a model or scene that contribute to the budget limit of the `standard` configuration size:
+* On the model conversion side, retrieve the [conversion output json file](../how-tos/conversion/get-information.md), and check the `numFaces` entry in the [*inputStatistics* section](../how-tos/conversion/get-information.md#the-inputstatistics-section). This number denotes the triangle count for triangular meshes and the number of points for point clouds, respectively.
+* If your application is dealing with dynamic content, the number of rendered primitives can be queried dynamically during runtime. Use a [performance assessment query](../overview/features/performance-queries.md#performance-assessment-queries) and check for the `polygonsRendered` member in the `FrameStatistics` struct. The `PolygonsRendered` field will be set to `bad` when the renderer hits the primitive limitation. The checkerboard background is always faded in with some delay to ensure user action can be taken after this asynchronous query. User action can, for instance, be hiding or deleting model instances.
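A hedged C# sketch of that runtime check might look like the following; the `QueryServerPerformanceAssessmentAsync` method, the `PerformanceAssessment` type, and the `PerformanceRating.Bad` value are assumptions taken from the linked performance queries article, so verify them against that page.

```cs
// Sketch: poll the server-side performance assessment and react when the primitive budget is hit.
// Method, type, and enum names are assumptions; see the performance queries article.
async void CheckPrimitiveBudget(RenderingSession session)
{
    PerformanceAssessment assessment = await session.Connection.QueryServerPerformanceAssessmentAsync();

    if (assessment.PolygonsRendered == PerformanceRating.Bad)
    {
        // The renderer hit the primitive limit: hide or unload model instances
        // before the checkerboard background fades in.
    }
}
```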
## Pricing
remote-rendering Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/resources/troubleshoot.md
Make sure that your firewalls (on device, inside routers, etc.) don't block the
## Failed to load model
-When loading a model (for example, via a Unity sample) fails although the blob configuration is correct, it's likely that the blob storage isn't properly linked. This is explained in the [linking of a storage account](../how-tos/create-an-account.md#link-storage-accounts) chapter. Note that after correct linking it can take up to 30 minutes until the changes take effect.
+When loading a model (for example, via a Unity sample) fails although the blob configuration is correct, it's likely that the blob storage isn't properly linked. Proper linking is explained in the [linking of a storage account](../how-tos/create-an-account.md#link-storage-accounts) chapter. After correct linking it can take up to 30 minutes until the changes take effect.
## Can't link storage account to ARR account
Sometimes during [linking of a storage account](../how-tos/create-an-account.md#
Check that your GPU supports hardware video decoding. See [Development PC](../overview/system-requirements.md#development-pc).
-If you're working on a laptop with two GPUs, it's possible that the GPU you're running on by default, doesn't provide hardware video decoding functionality. If so, try to force your app to use the other GPU. This is often possible in the GPU driver settings.
+If you're working on a laptop with two GPUs, it's possible that the GPU you're running on by default doesn't provide hardware video decoding functionality. If so, try to force your app to use the other GPU. Changing the used GPU is often possible in the GPU driver settings.
## Retrieve session/conversion status fails
The reason for this issue is an incorrect security setting on the DLLs. This pro
1. Repeat the steps above for the other folder 1. Also repeat the steps above on each DLL file inside both folders. There should be four DLLs altogether.
-To verify that the settings are now correct, do this for each of the four DLLs:
+To verify that the settings are now correct, do the following steps for each of the four DLLs:
1. Select **Properties > Security > Edit** 1. Go through the list of all **Groups / Users** and make sure each one has the **Read & Execute** right set (the checkmark in the **allow** column must be ticked)
If these two steps didn't help, it's required to find out whether video frames a
### Common client-side issues
-**The model exceeds the limits of the selected VM, specifically the maximum number of polygons:**
+**The model exceeds the limits of the selected VM, specifically the maximum number of primitives:**
-See specific [server size limits](../reference/limits.md#overall-number-of-polygons).
+See specific [server size limits](../reference/limits.md#overall-number-of-primitives).
**The model is not inside the camera frustum:**
Make sure to follow the [Unity Tutorial: View remote models](../tutorials/unity/
Reasons for this issue could be MSAA, HDR, or enabling post processing. Make sure that the low-quality profile is selected and set as default in Unity. To do so, go to *Edit > Project Settings... > Quality*.
-When using the OpenXR plugin in Unity 2020, there are versions of the URP (Universal Render Pipeline) that create this extra off-screen render target regardless of post processing being enabled. It's thus important to upgrade the URP version manually to at least 10.5.1 (or higher). This is described in the [system requirements](../overview/system-requirements.md#unity-2020).
+When using the OpenXR plugin in Unity 2020, there are versions of the URP (Universal Render Pipeline) that create this extra off-screen render target regardless of post processing being enabled. It's thus important to upgrade the URP version manually to at least 10.5.1 (or higher). This upgrade process is described in the [system requirements](../overview/system-requirements.md#unity-2020).
## Unity code using the Remote Rendering API doesn't compile ### Use Debug when compiling for Unity Editor
-Switch the *build type* of the Unity solution to **Debug**. When testing ARR in the Unity editor the define `UNITY_EDITOR` is only available in 'Debug' builds. Note that this is unrelated to the build type used for [deployed applications](../quickstarts/deploy-to-hololens.md), where you should prefer 'Release' builds.
+Switch the *build type* of the Unity solution to **Debug**. When testing ARR in the Unity editor the define `UNITY_EDITOR` is only available in 'Debug' builds. Note that this setting is unrelated to the build type used for [deployed applications](../quickstarts/deploy-to-hololens.md), where you should prefer 'Release' builds.
### Compile failures when compiling Unity samples for HoloLens 2
-We have seen spurious failures when trying to compile Unity samples (quickstart, ShowCaseApp,.. ) for HoloLens 2. Visual Studio complains about not being able to copy some files albeit they're there. If you hit this problem:
+We have seen spurious failures when trying to compile Unity samples (quickstart, ShowCaseApp, ... ) for HoloLens 2. Visual Studio complains about not being able to copy some files even though they're there. If you hit this problem:
* Remove all temporary Unity files from the project and try again. That is, close Unity, delete the temporary *library* and *obj* folders in the project directory and load/build the project again. * Make sure the projects are located in a directory on disk with reasonably short path, since the copy step sometimes seems to run into problems with long filenames. * If that doesn't help, it could be that MS Sense interferes with the copy step. To set up an exception, run this registry command from command line (requires admin rights):
We have seen spurious failures when trying to compile Unity samples (quickstart,
The `AudioPluginMsHRTF.dll` for Arm64 was added to the *Windows Mixed Reality* package *(com.unity.xr.windowsmr.metro)* in version 3.0.1. Ensure that you have version 3.0.1 or later installed via the Unity Package Manager. From the Unity menu bar, navigate to *Window > Package Manager* and look for the *Windows Mixed Reality* package.
-## The Unity `Cinemachine` plugin does not work in Remote pose mode
+## The Unity `Cinemachine` plugin doesn't work in Remote pose mode
In [Remote pose mode](../overview/features/late-stage-reprojection.md#reprojection-pose-modes), the ARR Unity binding code implicitly creates a proxy camera that performs the actual rendering. In this case, the main camera's culling mask is set to 0 ("nothing") to effectively turn off the rendering for it. However, some third party plugins (like `Cinemachine`) that drive the camera, may rely on at least some layer bits being set.
For this purpose, the binding code allows you to programmatically change the lay
![Screenshot that shows Unity's inspector panel for camera settings in `Cinemachine`.](./media/cinemachine-camera-config.png)
-The local pose mode isn't affected by this, since in this case the ARR binding doesn't redirect rendering to an internal proxy camera.
+The local pose mode isn't affected by this problem, since in this case the ARR binding doesn't redirect rendering to an internal proxy camera.
## Native C++ based application doesn't compile
Inside the C++ NuGet package, there's file `microsoft.azure.remoterendering.Cpp.
In case rendered objects seem to be moving along with head movements, you might be encountering issues with *Late Stage Reprojection* (LSR). Refer to the section on [Late Stage Reprojection](../overview/features/late-stage-reprojection.md) for guidance on how to approach such a situation.
-Another reason for unstable holograms (wobbling, warping, jittering, or jumping holograms) can be poor network connectivity, in particular insufficient network bandwidth, or too high latency. A good indicator for the quality of your network connection is the [performance statistics](../overview/features/performance-queries.md) value `ServiceStatistics.VideoFramesReused`. Reused frames indicate situations where an old video frame needed to be reused on the client side because no new video frame was available – for example because of packet loss or because of variations in network latency. If `ServiceStatistics.VideoFramesReused` is frequently larger than zero, this indicates a network problem.
+Another reason for unstable holograms (wobbling, warping, jittering, or jumping holograms) can be poor network connectivity, in particular insufficient network bandwidth, or too high latency. A good indicator for the quality of your network connection is the [performance statistics](../overview/features/performance-queries.md) value `ServiceStatistics.VideoFramesReused`. Reused frames indicate situations where an old video frame needed to be reused on the client side because no new video frame was available – for example because of packet loss or because of variations in network latency. If `ServiceStatistics.VideoFramesReused` is frequently larger than zero, it indicates a network problem.
Another value to look at is `ServiceStatistics.LatencyPoseToReceiveAvg`. It should consistently be below 100 ms. Seeing higher values could indicate that you're connected to a data center that is too far away.
For a list of potential mitigations, see the [guidelines for network connectivit
## Local content (UIs, ...) on HoloLens 2 renders with significantly more distortion artifacts than without ARR
-This is a default setting that trades local content projection quality for runtime performance. Refer to the chapter about the [reprojection pose modes](../overview/features/late-stage-reprojection.md#reprojection-pose-modes) to see how the projection mode can be changed so that local content is rendered at the same reprojection quality level as without ARR.
+This artifact is due to a default setting that trades local content projection quality for runtime performance. Refer to the chapter about the [reprojection pose modes](../overview/features/late-stage-reprojection.md#reprojection-pose-modes) to see how the projection mode can be changed so that local content is rendered at the same reprojection quality level as without ARR.
## Z-fighting
In some cases, custom native C++ apps that use a multi-pass stereo rendering mod
The Conversion service may encounter errors downloading files from blob storage because of file system limitations. Specific failure cases are listed below. Comprehensive information on Windows file system limitations can be found in the [Naming Files, Paths, and Namespaces](/windows/win32/fileio/naming-a-file) documentation. ### Colliding path and file name
-In blob storage, it's possible to create a file and a folder of the exact same name as sibling entries. In Windows file system this isn't possible. Accordingly, the service will emit a download error in that case.
+In blob storage, it's possible to create a file and a folder of the exact same name as sibling entries. The Windows file system doesn't allow this. Accordingly, the service will emit a download error in that case.
### Path length There are path length limits imposed by Windows and the service. File paths and file names in your blob storage must not exceed 178 characters. For example, given a `blobPrefix` of `models/Assets`, which is 13 characters:
remote-rendering Sample Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/samples/sample-model.md
Model statistics:
## Third-party data
+### Triangular meshes
+ The Khronos Group maintains a set of glTF sample models for testing. ARR supports the glTF format both in text (*.gltf*) and in binary (*.glb*) form. We suggest using the PBR models for best visual results: * [glTF Sample Models](https://github.com/KhronosGroup/glTF-Sample-Models)
+### Point clouds
+
+The libE57 website provides many sample point clouds for testing in the supported E57 file format:
+
+* [libE57 sample models](http://www.libe57.org/data.html)
+ ## Next steps * [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN
![Diagram showing ExpressRoute and VPN gateway configured with Route Server.](./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png)
+> [!IMPORTANT]
+> When the same route is learned over ExpressRoute, Azure VPN, or an SDWAN appliance, the ExpressRoute network will be preferred.
+>
++ ## Next steps - Learn more about [Azure Route Server](route-server-faq.md).
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
No, Azure Route Server doesn't support configuring a UDR on the RouteServerSubne
No, Azure Route Server doesn't support NSG association to the RouteServerSubnet.
+### When the same route is learned over ExpressRoute, VPN, or SDWAN, which network is preferred?
+
+ExpressRoute is preferred over VPN or SDWAN.
+ ### Can I peer two route servers in two peered virtual networks and enable the NVAs connected to the route servers to talk to each other? ***Topology: NVA1 -> RouteServer1 -> (via VNet Peering) -> RouteServer2 -> NVA2***
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
na Previously updated : 03/21/2022 Last updated : 06/08/2022 # Data encryption models
The Azure services that support each encryption model:
| **Databases** | | | | | SQL Server on Virtual Machines | Yes | Yes | Yes | | Azure SQL Database | Yes | Yes, RSA 3072-bit, including Managed HSM | Yes |
-| Azure SQL Database Managed Instance | Yes | Yes, RSA 3072-bit, including Managed HSM | Yes |
+| Azure SQL Managed Instance | Yes | Yes, RSA 3072-bit, including Managed HSM | Yes |
| Azure SQL Database for MariaDB | Yes | - | - | | Azure SQL Database for MySQL | Yes | Yes | - | | Azure SQL Database for PostgreSQL | Yes | Yes | - |
sentinel Migration Arcsight Historical Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-historical-data.md
Use the lacat utility to export data from ArcSight Logger. lacat exports CEF rec
To export data with the lacat utility:
-1. [Download the lacat utility](https://github.com/hpsec/lacat). For large volumes of data, we suggest that you modify the script for better performance. [Use the modified version](https://aka.ms/lacatmicrosoft).
+1. [Download the lacat utility](https://github.com/hpsec/lacat). For large volumes of data, we suggest that you modify the script for better performance. [Use the modified version](https://aka.ms/lacatmicrosoft).
1. Follow the examples in the lacat repository on how to run the script. ## Next steps
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Site Recovery doesn't support "hot remove" of disks from a replicated VM. If you
Replication is continuous when replicating Azure VMs to another Azure region. [Learn more](./azure-to-azure-architecture.md#replication-process) about the replication process.
-### Can I replicate virtual machines within a region?
+### Can I replicate non-zoned virtual machines within a region?
-You can't use Site Recovery to replicate disks within a region.
+You can't use Site Recovery to replicate non-zoned virtual machines within a region. But you can replicate zoned machines to a different zone in the same region.
### Can I replicate VM instances to any Azure region?
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Guest/server disk > 1 GB | Yes, disk must be larger than 1024 MB<br/><br/>Up to
Guest/server disk with 4K logical and 4k physical sector size | No Guest/server disk with 4K logical and 512-bytes physical sector size | No Guest/server volume with striped disk >4 TB | Yes
-Logical volume management (LVM)| Thick provisioning - Yes <br></br> Thin provisioning - No
+Logical volume management (LVM)| Thick provisioning - Yes <br></br> Thin provisioning - Yes, it's supported from [Update Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) onwards. It wasn't supported in earlier Mobility service versions.
Guest/server - Storage Spaces | No Guest/server - NVMe interface | No Guest/server hot add/remove disk | No
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
description: You can simplify the task of migrating from Azure Data Lake Storage
Previously updated : 05/11/2022 Last updated : 06/07/2022
Whichever option you choose, after you've migrated and verified that all your wo
> [!div class="mx-imgBorder"] > ![Consent checkbox](./media/data-lake-storage-migrate-gen1-to-gen2-azure-portal/migration-consent.png)
- > [!IMPORTANT]
+ > [!IMPORTANT]
> While your data is being migrated, your Gen1 account becomes read-only and the Gen2-enabled account is disabled. > > Also, while the Gen1 URI is being redirected, both accounts are disabled. >
- > When the migration is finished, your Gen1 account is disabled and you can read and write to your Gen2-enabled account.
+ > When the migration is finished, your Gen1 account will be disabled. The data in your Gen1 account won't be accessible and will be deleted after 30 days. Your Gen2 account will be available for reads and writes.
You can stop the migration at any time before the URI is redirected by selecting the **Stop migration** button.
Make sure all your Azure Data lake Analytics accounts are [migrated to Azure Syn
#### After the migration completes, can I go back to using the Gen1 account?
-This is not supported. After the migration completes, the data in your Gen1 account will not be accessible. You can continue to view the Gen1 account in the Azure portal, and when you are ready, you can delete the account.
+If you used [Option 1: Copy data from Gen1 to Gen2](#option-1-copy-data-from-gen1-to-gen2) mentioned above, then both the Gen1 and Gen2 accounts are available for reads and writes post migration. However, if you used [Option 2: Perform a complete migration](#option-2-perform-a-complete-migration), then going back to the Gen1 account isn't supported. In Option 2, after the migration completes, the data in your Gen1 account won't be accessible and will be deleted after 30 days. You can continue to view the Gen1 account in the Azure portal, and when you're ready, you can delete the Gen1 account.
#### I would like to enable Geo-redundant storage (GRS) on the Gen2 account, how do I do that?
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
Execute the following PowerShell code to download the appropriate version of the
$osver = [System.Environment]::OSVersion.Version # Download the appropriate version of the Azure File Sync agent for your OS.
-if ($osver.Equals([System.Version]::new(10, 0, 17763, 0))) {
+if ($osver.Equals([System.Version]::new(10, 0, 20348, 0))) {
+ Invoke-WebRequest `
+ -Uri https://aka.ms/afs/agent/Server2022 `
+ -OutFile "StorageSyncAgent.msi"
+} elseif ($osver.Equals([System.Version]::new(10, 0, 17763, 0))) {
Invoke-WebRequest ` -Uri https://aka.ms/afs/agent/Server2019 ` -OutFile "StorageSyncAgent.msi"
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
ORDER BY total_elapsed_time DESC;
From the preceding query results, **note the Request ID** of the query that you would like to investigate.
-Queries in the **Suspended** state can be queued due to a large number of active running queries. These queries also appear in the [sys.dm_pdw_waits](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) waits query with a type of UserConcurrencyResourceType. For information on concurrency limits, see [Memory and concurrency limits](memory-concurrency-limits.md) or [Resource classes for workload management](resource-classes-for-workload-management.md). Queries can also wait for other reasons such as for object locks. If your query is waiting for a resource, see [Investigating queries waiting for resources](#monitor-waiting-queries) further down in this article.
+Queries in the **Suspended** state can be queued due to a large number of active running queries. These queries also appear in the [sys.dm_pdw_waits](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-waits-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). In that case, look for waits such as UserConcurrencyResourceType. For information on concurrency limits, see [Memory and concurrency limits](memory-concurrency-limits.md) or [Resource classes for workload management](resource-classes-for-workload-management.md). Queries can also wait for other reasons such as for object locks. If your query is waiting for a resource, see [Investigating queries waiting for resources](#monitor-waiting-queries) further down in this article.
To simplify the lookup of a query in the [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) table, use [LABEL](/sql/t-sql/queries/option-clause-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to assign a comment to your query, which can be looked up in the sys.dm_pdw_exec_requests view.
WHERE DB_NAME(ssu.database_id) = 'tempdb'
ORDER BY sr.request_id; ```
-If you have a query that is consuming a large amount of memory or have received an error message related to allocation of tempdb, it could be due to a very large [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) or [INSERT SELECT](/sql/t-sql/statements/insert-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement running that is failing in the final data movement operation. This can usually be identified as a ShuffleMove operation in the distributed query plan right before the final INSERT SELECT. Use [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to monitor ShuffleMove operations.
+If you have a query that is consuming a large amount of memory or have received an error message related to the allocation of tempdb, it could be due to a very large [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) or [INSERT SELECT](/sql/t-sql/statements/insert-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement running that is failing in the final data movement operation. This can usually be identified as a ShuffleMove operation in the distributed query plan right before the final INSERT SELECT. Use [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to monitor ShuffleMove operations.
The most common mitigation is to break your CTAS or INSERT SELECT statement into multiple load statements so the data volume won't exceed the 2 TB per-node tempdb limit (when at or above DW500c). You can also scale your cluster to a larger size, which will spread the tempdb size across more nodes, reducing the tempdb usage on each individual node.
ORDER BY
nbr_files desc, gb_processed desc; ```+
+## Monitor query blocking
+
+The following query provides the top 500 blocked queries in the environment.
+
+```sql
+
+--Collect the top blocking
+SELECT
+ TOP 500 waiting.request_id AS WaitingRequestId,
+ waiting.object_type AS LockRequestType,
+ waiting.object_name AS ObjectLockRequestName,
+ waiting.request_time AS ObjectLockRequestTime,
+ blocking.session_id AS BlockingSessionId,
+ blocking.request_id AS BlockingRequestId
+FROM
+ sys.dm_pdw_waits waiting
+ INNER JOIN sys.dm_pdw_waits blocking
+ ON waiting.object_type = blocking.object_type
+ AND waiting.object_name = blocking.object_name
+WHERE
+ waiting.state = 'Queued'
+ AND blocking.state = 'Granted'
+ORDER BY
+ ObjectLockRequestTime ASC;
+```
+
## Retrieve query text from waiting and blocking queries The following query provides the query text and identifier for the waiting and blocking queries to easily troubleshoot.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Some general system constraints might affect your workload:
| Maximum number of database objects per database | The sum of the number of all objects in a database can't exceed 2,147,483,647. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects). | | Maximum identifier length in characters | 128. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects).| | Maximum query duration | 30 minutes. |
-| Maximum size of the result set | Up to 200 GB shared between concurrent queries. |
+| Maximum size of the result set | Up to 400 GB shared between concurrent queries. |
| Maximum concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1,000 active sessions that are executing lightweight queries. The numbers will drop if the queries are more complex or scan a larger amount of data. | ### Can't create a database in serverless SQL pool
If you have [partitioned files](query-specific-files.md), make sure you use [par
### Copy and transform data (CETAS)
-Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.
+Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Last updated 04/15/2022
# What's new in Azure Synapse Analytics?
-This article lists updates to Azure Synapse Analytics that are published in Mar 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
+This article lists updates to Azure Synapse Analytics that are published in April 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
The following updates are new to Azure Synapse Analytics this month.
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
Azure Virtual Desktop stores various information for service objects, such as ho
## Customer input
-To set up Azure Virtual Desktop, you must create host pools and other service objects. During configuration, you must enter information such as the host pool name, application group name, and so on. This information is considered *customer input*. Customer input is stored in the geography associated with the Azure region the resource is created in. Azure Resource Manager paths to the objects are considered organizational information, so data residency doesn't apply to them. Data about Azure Resource Manager paths will be stored outside of the chosen geography.
+To set up Azure Virtual Desktop, you must create host pools and other service objects. During configuration, you must enter information such as the host pool name, application group name, and so on. This information is considered "customer input." Customer input is stored in the geography associated with the Azure region the resource is created in. The stored data includes all data that you input into the host pool deployment process and any data you add after deployment while making configuration changes to Azure Virtual Desktop objects. Basically, stored data is the same data you can access using the Azure Virtual Desktop portal, PowerShell, or Azure command-line interface (CLI).
+
+Azure Resource Manager paths to service objects are considered organizational information, so data residency doesn't apply to them. Data about Azure Resource Manager paths is stored outside of the chosen geography.
## Customer data
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 01/13/2022 Last updated : 06/07/2022
Some popular applications running on WSFC include:
Azure shared disks are supported on: - [SUSE SLE HA 15 SP1 and above](https://www.suse.com/c/azure-shared-disks-excercise-w-sles-for-sap-or-sle-ha/) - [Ubuntu 18.04 and above](https://discourse.ubuntu.com/t/ubuntu-high-availability-corosync-pacemaker-shared-disk-environments/14874)-- [RHEL 8.3 and above](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/index?lb_target=production#azure-configuring-shared-block-storage-configuring-rhel-high-availability-on-azure)
- - It may be possible to use RHEL 7 or an older version of RHEL 8 with shared disks, contact SharedDiskFeedback @microsoft.com
+- Red Hat Enterprise Linux (RHEL) ([support policy](https://access.redhat.com/articles/3444601))
+ - [RHEL 7.9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content)
+ - [RHEL 8.3 and above](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content)
- [Oracle Enterprise Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/availability/)

Linux clusters can use cluster managers such as [Pacemaker](https://wiki.clusterlabs.org/wiki/Pacemaker). Pacemaker builds on [Corosync](http://corosync.github.io/corosync/), enabling cluster communications for applications deployed in highly available environments. Some common clustered filesystems include [ocfs2](https://oss.oracle.com/projects/ocfs2/) and [gfs2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-gfs2). You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD) based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations and registrations using utilities such as [fence_scsi](http://manpages.ubuntu.com/manpages/eoan/man8/fence_scsi.8.html) and [sg_persist](https://linux.die.net/man/8/sg_persist).
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
>This article is scoped to Azure Hybrid Benefit for BYOS VMs (AHB BYOS), which covers conversion of custom image VMs and RHEL or SLES BYOS VMs. For conversion of RHEL PAYG or SLES PAYG VMs, see [Azure Hybrid Benefit for PAYG VMs](./azure-hybrid-benefit-linux.md). >[!NOTE]
->Azure Hybrid Benefit for BYOS VMs is in Preview now. [Please fill the form here and wait for email from the AHB team to get started.](https://aka.ms/ahb-linux-form) You can start using the capability on Azure by following steps provided in the [section below](#get-started).
+>Azure Hybrid Benefit for BYOS VMs is in Public Preview now. You can start using the capability on Azure by following steps provided in the [section below](#get-started).
Azure Hybrid Benefit for BYOS VMs is a licensing benefit that helps you to get software updates and integrated support for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) directly from Azure infrastructure. This benefit is available to RHEL and SLES custom image VMs (VMs generated from on-premises images), and to RHEL and SLES Marketplace bring-your-own-subscription (BYOS) VMs.
virtual-machines States Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/states-billing.md
Title: States and billing status of Azure Virtual Machines
-description: Overview of various states a VM can enter and when a user is billed.
+ Title: States and billing status
+description: Learn about the provisioning and power states that a virtual machine can enter. Provisioning and power states affect billing.
Previously updated : 03/8/2021 Last updated : 06/08/2022 + # States and billing status of Azure Virtual Machines **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Azure Virtual Machines (VMs) go through different states that can be categorized into *provisioning* and *power* states. The purpose of this article is to describe these states and specifically highlight when customers are billed for instance usage.
+Azure Virtual Machines (VM) instances go through different states. There are *provisioning* and *power* states. This article describes these states and highlights when customers are billed for instance usage.
## Get states using Instance View
-The instance view API provides VM running-state information. For more information, see the [Virtual Machines - Instance View](/rest/api/compute/virtualmachines/instanceview) API documentation.
+The instance view API provides VM running-state information. For more information, see [Virtual Machines - Instance View](/rest/api/compute/virtualmachines/instanceview).
Azure Resources Explorer provides a simple UI for viewing the VM running state: [Resource Explorer](https://resources.azure.com/).
-The VM provisioning state is available (in slightly different forms) from within the VM properties `provisioningState` and the InstanceView. In the VM InstanceView there will be an element within the `status` array in the form of `ProvisioningState/<state>[/<errorCode>]`.
+The VM provisioning state is available, in slightly different forms, from within the VM properties `provisioningState` and the InstanceView. In the VM InstanceView, there's an element within the `status` array in the form of `ProvisioningState/<state>[/<errorCode>]`.
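
As a hedged illustration of where those status codes surface, the following Python sketch calls the Instance View REST endpoint with the `requests` library and splits each status code on `/`. The bearer token, subscription, resource group, VM name, and the `api-version` value are placeholders or example values, not prescriptions.

```python
# Sketch: read a VM's instance view and print its status codes, such as
# "ProvisioningState/succeeded" or "PowerState/running". All identifiers and
# the api-version below are example placeholders.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
VM_NAME = "<vm-name>"
TOKEN = "<bearer-token-for-management.azure.com>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Compute/virtualMachines/{VM_NAME}/instanceView"
)
resp = requests.get(
    url,
    params={"api-version": "2022-03-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

for status in resp.json().get("statuses", []):
    # A code such as "ProvisioningState/creating/osProvisioningComplete"
    # splits into the prefix and the (possibly compound) state.
    prefix, _, state = status["code"].partition("/")
    print(f"{prefix}: {state}")
```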
-To retrieve the power state of all the VMs in your subscription, use the [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter **statusOnly** set to *true*.
+To retrieve the power state of all the VMs in your subscription, use the [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter `statusOnly` set to `true`.
> [!NOTE]
-> [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter **statusOnly** set to true will retrieve the power states of all VMs in a subscription. However, in some rare situations, the power state may not available due to intermittent issues in the retrieval process. In such situations, we recommend retrying using the same API or using [Azure Resource Health](../service-health/resource-health-overview.md) to check the power state of your VMs.
-
+> [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter `statusOnly` set to `true` retrieves the power states of all VMs in a subscription. However, in some rare situations, the power state may not be available due to intermittent issues in the retrieval process. In such situations, we recommend retrying using the same API or using [Azure Resource Health](../service-health/resource-health-overview.md) to check the power state of your VMs.
+ ## Power states and billing The power state represents the last known state of the VM.
+ ## Power states and billing The power state represents the last known state of the VM.
-The following table provides a description of each instance state and indicates whether it is billed for instance usage or not.
+The following table provides a description of each instance state and indicates whether that state is billed for instance usage.
| Power state | Description | Billing | ||||
-| Starting| Virtual Machine is powering up. | Billed |
-| Running | Virtual Machine is fully up. This is the standard working state. | Billed |
-| Stopping | This is a transitional state between running and stopped. | Billed|
-|Stopped | The Virtual Machine is allocated on a host but not running. Also called PoweredOff state or *Stopped (Allocated)*. This can be result of invoking the PowerOff API operation or invoking shutdown from within the guest OS. The Stopped state may also be observed briefly during VM creation or while starting a VM from Deallocated state. | Billed |
-| Deallocating | This is the transitional state between running and deallocated. | Not billed* |
-| Deallocated | The Virtual Machine has released the lease on the underlying hardware and is completely powered off. This state is also referred to as *Stopped (Deallocated)*. | Not billed* |
+| Starting| Virtual machine is powering up. | Billed |
+| Running | Virtual machine is fully up. This state is the standard working state. | Billed |
+| Stopping | This state is transitional between running and stopped. | Billed |
+| Stopped | The virtual machine is allocated on a host but not running. Also called the *PoweredOff* state or *Stopped (Allocated)*. This state can be a result of invoking the `PowerOff` API operation or invoking shutdown from within the guest OS. The *Stopped* state may also be observed briefly during VM creation or while starting a VM from the *Deallocated* state. | Billed |
+| Deallocating | This state is transitional between *Running* and *Deallocated*. | Not billed* |
+| Deallocated | The virtual machine has released the lease on the underlying hardware and is powered off. This state is also referred to as *Stopped (Deallocated)*. | Not billed* |
+\* Some Azure resources, such as [Disks](https://azure.microsoft.com/pricing/details/managed-disks) and [Networking](https://azure.microsoft.com/pricing/details/bandwidth/), continue to incur charges.
-**Example of PowerState in JSON**
+Example of PowerState in JSON:
```json {
The following table provides a description of each instance state and indicates
} ```
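
To make the `statusOnly` call described above concrete, here's a minimal sketch using the `requests` library. The response shape assumed here (an `instanceView` with a `statuses` array under each VM's `properties`) follows the List All documentation, but the token, subscription ID, and `api-version` are placeholders, and paging via `nextLink` is omitted.

```python
# Sketch: list every VM in a subscription with statusOnly=true and print the
# PowerState code for each. Token, subscription ID, and api-version are
# placeholder/example values; nextLink paging is omitted for brevity.
import requests

SUBSCRIPTION = "<subscription-id>"
TOKEN = "<bearer-token-for-management.azure.com>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    "/providers/Microsoft.Compute/virtualMachines"
)
resp = requests.get(
    url,
    params={"api-version": "2022-03-01", "statusOnly": "true"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

for vm in resp.json().get("value", []):
    statuses = vm.get("properties", {}).get("instanceView", {}).get("statuses", [])
    power = next(
        (s["code"] for s in statuses if s["code"].startswith("PowerState/")),
        "PowerState/unknown",
    )
    print(vm["name"], power)
```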
-&#42; Some Azure resources, such as [Disks](https://azure.microsoft.com/pricing/details/managed-disks) and [Networking](https://azure.microsoft.com/pricing/details/bandwidth/) will continue to incur charges.
-- ## Provisioning states The provisioning state is the status of a user-initiated, control-plane operation on the VM. These states are separate from the power state of a VM.
The provisioning state is the status of a user-initiated, control-plane operatio
||| | Creating | Virtual machine is being created. | | Updating | Virtual machine is updating to the latest model. Some non-model changes to a virtual machine such as start and restart fall under the updating state. |
-| Failed | Last operation on the virtual machine resource was not successful. |
-| Succeeded | Last operation on the virtual machine resource was successful. |
-| Deleting | Virtual machine is being deleted. |
-| Migrating | Seen when migrating from Azure Service Manager to Azure Resource Manager. |
+| Failed | Last operation on the virtual machine resource was unsuccessful. |
+| Succeeded | Last operation on the virtual machine resource was successful. |
+| Deleting | Virtual machine is being deleted. |
+| Migrating | Seen when migrating from Azure Service Manager to Azure Resource Manager. |
## OS Provisioning states
-OS Provisioning states only apply to virtual machines created with a [generalized](./linux/imaging.md#generalized-images) OS image. [Specialized](./linux/imaging.md#specialized-images) images and disks attached as OS disk will not display these states. The OS provisioning state is not shown separately. It is a sub-state of the Provisioning State in the VM instanceView. For example, `ProvisioningState/creating/osProvisioningComplete`.
+OS Provisioning states only apply to virtual machines created with a [generalized](./linux/imaging.md#generalized-images) OS image. [Specialized](./linux/imaging.md#specialized-images) images and disks attached as OS disk don't display these states. The OS provisioning state isn't shown separately. It's a substate of the Provisioning State in the VM InstanceView. For example, `ProvisioningState/creating/osProvisioningComplete`.
-| OS Provisioning state | Description |
+
+| OS Provisioning state | Description |
||| | OSProvisioningInProgress | The VM is running and the initialization (setup) of the Guest OS is in progress. |
-| OSProvisioningComplete | This is a short-lived state. The virtual machine quickly transitions from this state to **Success**. If extensions are still being installed you will continue to see this state until they are complete. |
-| Succeeded | The user-initiated actions have completed. |
-| Failed | Represents a failed operation. Refer to the error code for more information and possible solutions. |
+| OSProvisioningComplete | A short-lived state. The virtual machine quickly transitions from this state to *Success*. If extensions are still being installed, you continue to see this state until installation is complete. |
+| Succeeded | The user-initiated actions have completed. |
+| Failed | Represents a failed operation. For more information and possible solutions, see the error code. |
## Troubleshooting VM states
To troubleshoot specific VM state issues, see [Troubleshoot Windows VM deploymen
For other troubleshooting help visit [Azure Virtual Machines troubleshooting documentation](/troubleshoot/azure/virtual-machines/welcome-virtual-machines). - ## Next steps+ - Review the [Azure Cost Management and Billing documentation](../cost-management-billing/index.yml) - Use the [Azure Pricing calculator](https://azure.microsoft.com/pricing/calculator/) to plan your deployments.-- Learn more about monitoring your VM, see [Monitor virtual machines in Azure](../azure-monitor/vm/monitor-vm-azure.md).
+- To learn more about monitoring your VM, see [Monitor virtual machines in Azure](../azure-monitor/vm/monitor-vm-azure.md).
virtual-network Monitor Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/monitor-virtual-network-reference.md
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
**Virtual network**
-Azure virtual network does not have diagnostic logs.
+Azure virtual network doesn't have diagnostic logs.
## Activity log
For more information on the schema of Activity Log entries, see [Activity Log sc
## See also -- See [Monitoring Azure Azure virtual network](monitor-virtual-network.md) for a description of monitoring Azure Azure virtual network.
+- See [Monitoring Azure virtual network](monitor-virtual-network.md) for a description of monitoring Azure virtual network.
- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
virtual-network Monitor Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/monitor-virtual-network.md
Title: Monitoring Azure virtual networks description: Start here to learn how to monitor Azure virtual networks --++
Last updated 06/29/2021
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure virtual network. Azure virtual network uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+This article describes the monitoring data generated by Azure virtual network. Azure virtual network uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
## Monitoring data
See [Monitoring Azure virtual network data reference](monitor-virtual-network-re
Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure virtual network* are listed in [Azure virtual network monitoring data reference](monitor-virtual-network-reference.md#resource-logs).
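
As an illustration only, the sketch below creates such a diagnostic setting through the `Microsoft.Insights/diagnosticSettings` REST API, routing a virtual network's platform metrics to a Log Analytics workspace. The resource IDs, setting name, bearer token, and `api-version` are assumptions/placeholders, and only metrics are routed because the virtual network resource doesn't emit resource logs.

```python
# Sketch: create a diagnostic setting that routes a virtual network's platform
# metrics to a Log Analytics workspace. Resource IDs, names, token, and
# api-version are placeholders.
import requests

VNET_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
)
WORKSPACE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)
TOKEN = "<bearer-token-for-management.azure.com>"

url = (
    f"https://management.azure.com{VNET_ID}"
    "/providers/Microsoft.Insights/diagnosticSettings/route-vnet-metrics"
)
body = {
    "properties": {
        "workspaceId": WORKSPACE_ID,
        # The virtual network resource has no resource logs, so only metrics
        # are enabled in this setting.
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}
resp = requests.put(
    url,
    params={"api-version": "2021-05-01-preview"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()
print("Created diagnostic setting:", resp.json()["name"])
```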
For reference, you can see a list of [all resource metrics supported in Azure Mo
## Analyzing logs
-Azure virtual network does not support resource logs.
+Azure virtual network doesn't support resource logs.
For a list of the types of resource logs collected for resources in a virtual network, see [Monitoring virtual network data reference](monitor-virtual-network-reference.md#resource-logs)
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
## Alerts
The following table lists common and recommended activity alert rules for Azure
| Alert type | Condition | Description | |:|:|:|
-| Create or Update Virtual Network | Event Level: All selected, Status: All selected, Event initiated by: All services and users | When a user creates or make configuration changes to the virtual network. |
-| Delete Virtual Network | Event Level: All selected, Status: Started | When a user delete a virtual network. |
+| Create or Update Virtual Network | Event Level: All selected, Status: All selected, Event initiated by: All services and users | When a user creates a virtual network or makes configuration changes to it. |
+| Delete Virtual Network | Event Level: All selected, Status: Started | When a user deletes a virtual network. |
## Next steps
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
Adjust the virtual hub capacity when you need to support additional virtual mach
To add additional virtual hub capacity, go to the virtual hub in the Azure portal. On the **Overview** page, click **Edit virtual hub**. Adjust the **Virtual hub capacity** using the dropdown, then **Confirm**.
+> [!NOTE]
+> When you edit the virtual hub capacity, there will be a data path disruption if the change in scale units results in an underlying VPN gateway SKU change.
+>
+ ### Routing infrastructure unit table For pricing information, see [Azure Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).
virtual-wan Virtual Wan Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md
Use the VPN device configuration file to configure your on-premises VPN device.
The device configuration file contains the settings to use when configuring your on-premises VPN device. When you view this file, notice the following information:
-* **vpnSiteConfiguration -** This section denotes the device details set up as a site connecting to the virtual WAN. It includes the name and public ip address of the branch device.
+* **vpnSiteConfiguration -** This section denotes the device details set up as a site connecting to the virtual WAN. It includes the name and public IP address of the branch device.
* **vpnSiteConnections -** This section provides information about the following settings: * **Address space** of the virtual hub(s) VNet.<br>Example:
On the **Edit VPN Gateway** page, you can see the following settings:
* **Public IP Address**: Assigned by Azure.
* **Private IP Address**: Assigned by Azure.
* **Default BGP IP Address**: Assigned by Azure.
-* **Custom BGP IP Address**: This field is reserved for APIPA (Automatic Private IP Addressing). Azure supports BGP IP in the ranges 169.254.21.* and 169.254.22.*. Azure accepts BGP connections in these ranges but will dial connection with the default BGP IP.
+* **Custom BGP IP Address**: This field is reserved for APIPA (Automatic Private IP Addressing). Azure supports BGP IPs in the ranges 169.254.21.* and 169.254.22.*. Azure accepts BGP connections in these ranges but dials connections with the default BGP IP. You can specify multiple custom BGP IP addresses for each instance, but don't use the same custom BGP IP address for both instances.
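
As a small illustrative check (not part of the original article), the following Python snippet validates that a proposed custom BGP IP address falls inside those APIPA ranges, interpreting `169.254.21.*` and `169.254.22.*` as the corresponding /24 networks.

```python
# Illustrative only: confirm a candidate custom BGP (APIPA) address is inside
# the ranges noted above, treated here as 169.254.21.0/24 and 169.254.22.0/24.
import ipaddress

SUPPORTED_RANGES = (
    ipaddress.ip_network("169.254.21.0/24"),
    ipaddress.ip_network("169.254.22.0/24"),
)

def is_supported_custom_bgp_ip(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in SUPPORTED_RANGES)

print(is_supported_custom_bgp_ip("169.254.21.5"))    # True
print(is_supported_custom_bgp_ip("169.254.100.5"))   # False
```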
:::image type="content" source="media/virtual-wan-site-to-site-portal/edit-gateway.png" alt-text="Screenshot shows the Edit VPN Gateway page with the Edit button highlighted." lightbox="media/virtual-wan-site-to-site-portal/edit-gateway.png":::