Updates from: 06/09/2022 01:14:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Previously updated : 03/31/2022 Last updated : 06/08/2022
Open your web app in a code editor such as Visual Studio Code. Under the project
|Key |Value |
|||
|`APP_CLIENT_ID`|The **Application (client) ID** for the web app you registered in [step 2.1](#step-2-register-a-web-application). |
-|`APP_CLIENT_SECRET`|The client secret for the web app you created in [step 2.2](#step-22-create-a-web-app-client-secret) |
+|`APP_CLIENT_SECRET`|The client secret value for the web app you created in [step 2.2](#step-22-create-a-web-app-client-secret) |
|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
|`RESET_PASSWORD_POLICY_AUTHORITY`| The **Reset password** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<reset-password-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<reset-password-user-flow-name>` with the name of your Reset password user flow such as `B2C_1_reset_password_node_app`.|
|`EDIT_PROFILE_POLICY_AUTHORITY`|The **Profile editing** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<profile-edit-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<profile-edit-user-flow-name>` with the name of your Profile editing user flow such as `B2C_1_edit_profile_node_app`. |
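For reference, after you fill in the table values, the configuration might look like the following sketch. This assumes the sample keeps these settings in a `.env` file, a tenant named `contoso`, and a placeholder GUID; substitute your own values:

```
APP_CLIENT_ID=11111111-1111-1111-1111-111111111111
APP_CLIENT_SECRET=<your-client-secret-value>
SIGN_UP_SIGN_IN_POLICY_AUTHORITY=https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_susi
RESET_PASSWORD_POLICY_AUTHORITY=https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_reset_password_node_app
EDIT_PROFILE_POLICY_AUTHORITY=https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_edit_profile_node_app
```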
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
Previously updated : 03/30/2022 Last updated : 06/08/2022
Open your web app in a code editor such as Visual Studio Code. Under the `call-p
|Key |Value |
|||
|`APP_CLIENT_ID`|The **Application (client) ID** for the web app you registered in [step 2.3](#step-23-register-the-web-app). |
-|`APP_CLIENT_SECRET`|The client secret for the web app you created in [step 2.4](#step-24-create-a-client-secret) |
+|`APP_CLIENT_SECRET`|The client secret value for the web app you created in [step 2.4](#step-24-create-a-client-secret) |
|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority for the user flow you created in [step 1](#step-1-configure-your-user-flow) such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
|`AUTHORITY_DOMAIN`| The Azure AD B2C authority domain such as `https://<your-tenant-name>.b2clogin.com`. Replace `<your-tenant-name>` with the name of your tenant.|
|`APP_REDIRECT_URI`| The application redirect URI where Azure AD B2C will return authentication responses (tokens). It matches the **Redirect URI** you set while registering your app in the Azure portal. This URL needs to be publicly accessible. Leave the value as is.|
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
Previously updated : 09/15/2021 Last updated : 06/08/2022
Open the *app_config.py* file. This file contains information about your Azure A
|||
|`b2c_tenant`| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name) (for example, `contoso`).|
|`CLIENT_ID`| The web API application ID from [step 2.1](#step-21-register-the-app).|
-|`CLIENT_SECRET`| The client secret you created in [step 2.2](#step-22-create-a-web-app-client-secret). To help increase security, consider storing it instead in an environment variable, as recommended in the comments. |
+|`CLIENT_SECRET`| The client secret value you created in [step 2.2](#step-22-create-a-web-app-client-secret). To help increase security, consider storing it instead in an environment variable, as recommended in the comments. |
|`*_user_flow`|The user flows or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
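As an illustration of the environment-variable recommendation above, the relevant part of *app_config.py* might look like this sketch. The tenant name, app ID, and user flow name are placeholders, and variable names in the actual sample may differ:

```python
import os

b2c_tenant = "contoso"  # placeholder: first part of your tenant name
CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder app ID

# Read the secret from an environment variable instead of hard-coding it.
CLIENT_SECRET = os.environ.get("CLIENT_SECRET")
if not CLIENT_SECRET:
    raise ValueError("Set the CLIENT_SECRET environment variable before running the app.")

# Example *_user_flow value from step 1; the authority URL embeds it.
signupsignin_user_flow = "B2C_1_signupsignin"
authority = f"https://{b2c_tenant}.b2clogin.com/{b2c_tenant}.onmicrosoft.com/{signupsignin_user_flow}"
```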
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
Previously updated : 01/18/2022 Last updated : 06/08/2022
In this step, configure the claims that the AD FS application returns to Azure AD B2C.
1. For **Client ID**, enter the application ID that you previously recorded.
1. For the **Scope**, enter `openid`.
-1. For **Response type**, select **id_token**, which makes the **Client secret** optional. Learn more about use of [Client ID and secret](identity-provider-generic-openid-connect.md#client-id-and-secret) when adding a generic OpenID Connect identity provider.
+1. For **Response type**, select **id_token**. The **Client secret** value isn't needed in this case. Learn more about use of [Client ID and secret](identity-provider-generic-openid-connect.md#client-id-and-secret) when adding a generic OpenID Connect identity provider.
1. (Optional) For the **Domain hint**, enter `contoso.com`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider).
1. Under **Identity provider claims mapping**, select the following claims:
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 09/16/2021 Last updated : 06/08/2022
If you want to get the `family_name` and `given_name` claims from Azure AD, you
For example, `https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration`. If you use a custom domain, replace `contoso.com` with your custom domain in `https://login.microsoftonline.com/contoso.com/v2.0/.well-known/openid-configuration`. 1. For **Client ID**, enter the application ID that you previously recorded.
-1. For **Client secret**, enter the client secret that you previously recorded.
+1. For **Client secret**, enter the client secret value that you previously recorded.
1. For **Scope**, enter `openid profile`.
1. Leave the default values for **Response type** and **Response mode**.
1. (Optional) For the **Domain hint**, enter `contoso.com`. For more information, see [Set up direct sign-in using Azure Active Directory B2C](direct-signin.md#redirect-sign-in-to-a-social-provider).
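Before saving the identity provider, you can verify that the metadata URL resolves and returns the expected endpoints. A minimal Python sketch, assuming the `contoso.onmicrosoft.com` tenant from the example above:

```python
import json
import urllib.request

# Replace with your tenant (or custom domain) metadata URL from the step above.
url = "https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration"

with urllib.request.urlopen(url) as response:
    metadata = json.load(response)

# A healthy document lists the issuer and the endpoints Azure AD B2C will call.
print(metadata["issuer"])
print(metadata["authorization_endpoint"])
print(metadata["token_endpoint"])
```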
You need to store the application key that you created in your Azure AD B2C tena
1. Select **Policy keys** and then select **Add**.
1. For **Options**, choose `Manual`.
1. Enter a **Name** for the policy key. For example, `ContosoAppSecret`. The prefix `B2C_1A_` is added automatically to the name of your key when it's created, so its reference in the XML in the following section is to *B2C_1A_ContosoAppSecret*.
-1. In **Secret**, enter your client secret that you recorded earlier.
+1. In **Secret**, enter your client secret value that you recorded earlier.
1. For **Key usage**, select `Signature`.
1. Select **Create**.
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Title: Tutorial for configuring N8 Identity with Azure Active Directory B2C
+ Title: Configure TheAccessHub Admin Tool by using Azure Active Directory B2C
-description: Tutorial for configuring TheAccessHub Admin Tool with Azure Active Directory B2C to address customer accounts migration and Customer Service Requests (CSR) administration.
+description: In this tutorial, configure TheAccessHub Admin Tool by using Azure Active Directory B2C to address customer account migration and customer service request (CSR) administration.
-# Tutorial for configuring TheAccessHub Admin Tool with Azure Active Directory B2C
+# Configure TheAccessHub Admin Tool by using Azure Active Directory B2C
-In this sample tutorial, we provide guidance on how to integrate Azure Active Directory (AD) B2C with [TheAccessHub Admin Tool](https://n8id.com/products/theaccesshub-admintool/) from N8 Identity. This solution addresses customer accounts migration and Customer Service Requests (CSR) administration.
+In this tutorial, we provide guidance on how to integrate Azure Active Directory B2C (Azure AD B2C) with [TheAccessHub Admin Tool](https://n8id.com/products/theaccesshub-admintool/) from N8 Identity. This solution addresses customer account migration and customer service request (CSR) administration.
-This solution is suited for you, if you have one or more of the following needs:
+This solution is suited for you if you have one or more of the following needs:
-- You have an existing site and you want to migrate to Azure AD B2C. However, you're struggling with migration of your customer accounts including passwords
+- You have an existing site and you want to migrate to Azure AD B2C. However, you're struggling with migration of your customer accounts, including passwords.
-- You require a tool for your CSR to administer Azure AD B2C accounts.
+- You need a tool for your CSR to administer Azure AD B2C accounts.
- You have a requirement to use delegated administration for your CSRs.
- You want to synchronize and merge your data from many repositories into Azure AD B2C.
-## Pre-requisites
+## Prerequisites
To get started, you'll need:

- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- An [Azure AD B2C tenant](./tutorial-create-tenant.md). Tenant must be linked to your Azure subscription.
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md). The tenant must be linked to your Azure subscription.
-- A TheAccessHub Admin Tool environment: Contact [N8 Identity](https://n8id.com/contact/) to provision a new environment.
+- A TheAccessHub Admin Tool environment. Contact [N8 Identity](https://n8id.com/contact/) to provision a new environment.
-- [Optional] Connection and credential information for any databases or Lightweight Directory Access Protocols (LDAPs) you want to migrate customer data from.
+- (Optional:) Connection and credential information for any databases or Lightweight Directory Access Protocols (LDAPs) that you want to migrate customer data from.
-- [Optional] Configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you wish to integrate TheAccessHub Admin Tool into your sign-up policy flow.
+- (Optional:) A configured Azure AD B2C environment for using [custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy), if you want to integrate TheAccessHub Admin Tool into your sign-up policy flow.
## Scenario description
-The TheAccessHub Admin Tool runs like any other application in Azure. It can run in either N8 Identity’s Azure subscription, or the customer’s subscription. The following architecture diagram shows the implementation.
+The TheAccessHub Admin Tool runs like any other application in Azure. It can run in either N8 Identity's Azure subscription or the customer's subscription. The following architecture diagram shows the implementation.
-![Image showing n8identity architecture diagram](./media/partner-n8identity/n8identity-architecture-diagram.png)
+![Diagram of the n8identity architecture.](./media/partner-n8identity/n8identity-architecture-diagram.png)
|Step | Description |
|:--| :--|
-| 1. | User arrives at a login page. Users select sign-up to create a new account and enter information into the page. Azure AD B2C collects the user attributes.
-| 2. | Azure AD B2C calls the TheAccessHub Admin Tool and passes on the user attributes
+| 1. | Each user arrives at a login page. The user creates a new account and enters information on the page. Azure AD B2C collects the user attributes.
+| 2. | Azure AD B2C calls the TheAccessHub Admin Tool and passes on the user attributes.
| 3. | TheAccessHub Admin Tool checks your existing database for current user information.
-| 4. | The user record is synced from the database to TheAccessHub Admin Tool.
+| 4. | User records are synced from the database to TheAccessHub Admin Tool.
| 5. | TheAccessHub Admin Tool shares the data with the delegated CSR/helpdesk admin.
| 6. | TheAccessHub Admin Tool syncs the user records with Azure AD B2C.
-| 7. |Based on the success/failure response from the TheAccessHub Admin Tool, Azure AD B2C sends a customized welcome email to the user.
+| 7. |Based on the success/failure response from the TheAccessHub Admin Tool, Azure AD B2C sends a customized welcome email to users.
-## Create a Global Admin in your Azure AD B2C tenant
+## Create a Global Administrator in your Azure AD B2C tenant
-The TheAccessHub Admin Tool requires permissions to act on behalf of a Global Administrator to read user information and conduct changes in your Azure AD B2C tenant. Changes to your regular administrators won’t impact TheAccessHub Admin Tool’s ability to interact with the tenant.
+The TheAccessHub Admin Tool requires permissions to act on behalf of a Global Administrator to read user information and conduct changes in your Azure AD B2C tenant. Changes to your regular administrators won't affect TheAccessHub Admin Tool's ability to interact with the tenant.
-To create a Global Administrator, follow these steps:
+To create a Global Administrator:
-1. In the Azure portal, sign into your Azure AD B2C tenant as an administrator. Navigate to **Azure Active Directory** > **Users**
-2. Select **New User**
-3. Choose **Create User** to create a regular directory user and not a customer
-4. Complete the Identity information form
+1. In the Azure portal, sign in to your Azure AD B2C tenant as an administrator. Go to **Azure Active Directory** > **Users**.
+2. Select **New User**.
+3. Choose **Create User** to create a regular directory user and not a customer.
+4. Complete the identity information form:
- a. Enter the username such as ‘theaccesshub’
+ a. Enter the username, such as **theaccesshub**.
- b. Enter the name such as ‘TheAccessHub Service Account’
+ b. Enter the account name, such as **TheAccessHub Service Account**.
-5. Select **Show Password** and copy the initial password for later use
-6. Assign the Global Administrator role
+5. Select **Show Password** and copy the initial password for later use.
+6. Assign the Global Administrator role:
- a. Select the user’s current roles **User** to change it
+ a. For **User**, select the user's current role to change it.
- b. Check the record for Global Administrator
+ b. Select the **Global Administrator** record.
- c. **Select** > **Create**
+ c. Select **Create**.
## Connect TheAccessHub Admin Tool with your Azure AD B2C tenant
-TheAccessHub Admin Tool uses Microsoft’s Graph API to read and make changes to your directory. It acts as a Global Administrator in your tenant. Additional permission is needed by TheAccessHub Admin Tool, which you can grant from within the tool.
+TheAccessHub Admin Tool uses the Microsoft Graph API to read and make changes to your directory. It acts as a Global Administrator in your tenant. TheAccessHub Admin Tool needs additional permission, which you can grant from within the tool.
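For background only (the tool grants itself access through the **Authorize Connection** flow described below), a directory read through Microsoft Graph looks roughly like the following Python sketch. It uses the `msal` library with client credentials; the tenant, IDs, and permission grant are placeholder assumptions, not part of TheAccessHub setup:

```python
import msal
import requests

TENANT = "contoso.onmicrosoft.com"  # placeholder tenant
CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder app registration
CLIENT_SECRET = "placeholder-secret"  # assumes an app with User.Read.All granted

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(token.get("error_description", "Token request failed"))

# The same kind of call a sync tool makes to read directory users.
users = requests.get(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token['access_token']}"},
).json()
print(len(users.get("value", [])))
```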
-To authorize TheAccessHub Admin Tool to access your directory, follow these steps:
+To authorize TheAccessHub Admin Tool to access your directory:
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Azure AD B2C Config**
+2. Go to **System Admin** > **Azure AD B2C Config**.
-3. Select **Authorize Connection**
+3. Select **Authorize Connection**.
-4. In the new window sign-in with your Global Administrator account. You may be asked to reset your password if you're signing in for the first time with the new service account.
+4. In the new window, sign in with your Global Administrator account. You might be asked to reset your password if you're signing in for the first time with the new service account.
5. Follow the prompts and select **Accept** to grant TheAccessHub Admin Tool the requested permissions.
-## Configure a new CSR user using your enterprise identity
+## Configure a new CSR user by using your enterprise identity
-Create a CSR/Helpdesk user who accesses TheAccessHub Admin Tool using their existing enterprise Azure Active Directory credentials.
+Create a CSR/Helpdesk user who accesses TheAccessHub Admin Tool by using their existing enterprise Azure Active Directory credentials.
-To configure CSR/Helpdesk user with Single Sign-on (SSO), the following steps are recommended:
+To configure a CSR/Helpdesk user with single sign-on (SSO), we recommend the following steps:
-1. Log into TheAccessHub Admin Tool using credentials provided by N8 Identity.
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **Manager Tools** > **Manage Colleagues**
+2. Go to **Manager Tools** > **Manage Colleagues**.
-3. Select **Add Colleague**
+3. Select **Add Colleague**.
-4. Select **Colleague Type Azure Administrator**
+4. For **Colleague Type**, select **Azure Administrator**.
-5. Enter the colleague’s profile information
+5. Enter the colleague's profile information:
- a. Choosing a Home Organization will control who has permission to manage this user.
+ a. Choose a home organization to control who has permission to manage this user.
- b. For Login ID/Azure AD User Name supply the User Principal Name from the user’s Azure Active Directory account.
+ b. For **Login ID/Azure AD User Name**, supply the user principal name from the user's Azure Active Directory account.
- c. On the TheAccessHub Roles tab, check the managed role Helpdesk. It will allow the user to access the manage colleagues view. The user will still need to be placed into a group or be made an organization owner to act on customers.
+ c. On the **TheAccessHub Roles** tab, select the managed role **Helpdesk**. It will allow the user to access the **Manage Colleagues** view. The user will still need to be placed into a group or be made an organization owner to act on customers.
6. Select **Submit**.
-## Configure a new CSR user using a new identity
+## Configure a new CSR user by using a new identity
-Create a CSR/Helpdesk user who will access TheAccessHub Admin Tool with a new local credential unique to TheAccessHub Admin Tool. This will be used mainly by organizations that don't use an Azure AD for their enterprise.
+Create a CSR/Helpdesk user who will access TheAccessHub Admin Tool by using a new local credential that's unique to the tool. This user will be used mainly by organizations that don't use Azure Active Directory.
-To [setup a CSR/Helpdesk](https://youtu.be/iOpOI2OpnLI) user without SSO, follow these steps:
+To [set up a CSR/Helpdesk user](https://youtu.be/iOpOI2OpnLI) without SSO:
-1. Log into TheAccessHub Admin Tool using credentials provided by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **Manager Tools** > **Manage Colleagues**
+2. Go to **Manager Tools** > **Manage Colleagues**.
-3. Select **Add Colleague**
+3. Select **Add Colleague**.
-4. Select **Colleague Type Local Administrator**
+4. For **Colleague Type**, select **Local Administrator**.
-5. Enter the colleague’s profile information
+5. Enter the colleague's profile information:
- a. Choosing a Home Organization will control who has permission to manage this user.
+ a. Choose a home organization to control who has permission to manage this user.
- b. On the **TheAccessHub Roles** tab, select the managed role **Helpdesk**. It will allow the user to access the manage colleagues view. The user will still need to be placed into a group or be made an organization owner to act on customers.
+ b. On the **TheAccessHub Roles** tab, select the managed role **Helpdesk**. It will allow the user to access the **Manage Colleagues** view. The user will still need to be placed into a group or be made an organization owner to act on customers.
-6. Copy the **Login ID/Email** and **One Time Password** attributes. Provide it to the new user. They'll use these credentials to log in to TheAccessHub Admin Tool. The user will be prompted to enter a new password on their first login.
+6. Copy the **Login ID/Email** and **One Time Password** attributes. Provide them to the new user. The user will use these credentials to log in to TheAccessHub Admin Tool. The user will be prompted to enter a new password on first login.
-7. Select **Submit**
+7. Select **Submit**.
## Configure partitioned CSR administration
-Permissions to manage customer and CSR/Helpdesk users in TheAccessHub Admin Tool are managed with the use of an organization hierarchy. All colleagues and customers have a home organization where they reside. Specific colleagues or groups of colleagues can be assigned as owners of organizations. Organization owners can manage (make changes to) colleagues and customers in organizations or suborganizations they own. To allow multiple colleagues to manage a set of users, a group can be created with many members. The group can then be assigned as an organization owner and all of the group’s members can manage colleagues and customers in the organization.
+Permissions to manage customer and CSR/Helpdesk users in TheAccessHub Admin Tool are managed through an organization hierarchy. All colleagues and customers have a home organization where they reside. You can assign specific colleagues or groups of colleagues as owners of organizations.
+
+Organization owners can manage (make changes to) colleagues and customers in organizations or suborganizations that they own. To allow multiple colleagues to manage a set of users, you can create a group that has many members. You can then assign the group as an organization owner. All of the group's members can then manage colleagues and customers in the organization.
### Create a new group
-1. Log into TheAccessHub Admin Tool using **credentials** provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **Organization > Manage Groups**
+2. Go to **Organization > Manage Groups**.
-3. Select > **Add Group**
+3. Select **Add Group**.
-4. Enter a **Group name**, **Group description**, and **Group owner**
+4. Enter values for **Group name**, **Group description**, and **Group owner**.
-5. Search for and check the boxes on the colleagues you want to be members of the group then select >**Add**
+5. Search for and select the check boxes for the colleagues you want to be members of the group, and then select **Add**.
6. At the bottom of the page, you can see all members of the group.
-7. If needed members can be removed by selecting the **x** at the end of the row
+ If necessary, you can remove members by selecting the **x** at the end of the row.
-8. Select **Submit**
+7. Select **Submit**.
### Create a new organization
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to Organization > **Manage Organizations**
+2. Go to **Organization** > **Manage Organizations**.
-3. Select > **Add Organization**
+3. Select **Add Organization**.
-4. Supply an **Organization name**, **Organization owner**, and **Parent organization**.
+4. Supply values for **Organization name**, **Organization owner**, and **Parent organization**:
- a. The organization name is ideally a value that corresponds to your customer data. When loading colleague and customer data, if you supply the name of the organization in the load, the colleague can be automatically placed into the organization.
+ a. The organization name is ideally a value that corresponds to your customer data. When you're loading colleague and customer data, if you supply the name of the organization in the load, the colleague can be automatically placed into the organization.
- b. The owner represents the person or group who will manage the customers and colleagues in this organization and any suborganization within.
+ b. The owner represents the person or group that will manage the customers and colleagues in this organization and any suborganization within it.
- c. The parent organization indicates which other organization is inherently, also responsible for this organization.
+ c. The parent organization indicates which other organization is also responsible for this organization.
5. Select **Submit**.

### Modify the hierarchy via the tree view
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
-
-2. Navigate to **Manager Tools** > **Tree View**
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-3. In this representation, you can visualize which colleagues and groups can manage which organizations.
+2. Go to **Manager Tools** > **Tree View**.
-4. The hierarchies can be modified by dragging organizations overtop organizations you want them to be parented by.
+3. In this representation, you can visualize which colleagues and groups can manage which organizations. Modify the hierarchy by dragging organizations into parent organizations.
5. Select **Save** when you're finished altering the hierarchy.
-## Customize welcome notification
+## Customize the welcome notification
-While you're using TheAccessHub to migrate users from a previous authentication solution into Azure AD B2C, you may want to customize the user welcome notification, which is sent to the user by TheAccessHub during migration. This message normally includes the link for the customer to set a new password in the Azure AD B2C directory.
+While you're using TheAccessHub Admin Tool to migrate users from a previous authentication solution into Azure AD B2C, you might want to customize the user welcome notification. TheAccessHub Admin Tool sends this notification to users during migration. This message normally includes a link for users to set a new password in the Azure AD B2C directory.
To customize the notification:
-1. Log into TheAccessHub using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Notifications**
+2. Go to **System Admin** > **Notifications**.
-3. Select the **Create Colleague template**
+3. Select the **Create Colleague** template.
-4. Select **Edit**
+4. Select **Edit**.
-5. Alter the Message and Template fields as necessary. The Template field is HTML aware and can send HTML formatted notifications to customer emails.
+5. Alter the **Message** and **Template** fields as necessary. The **Template** field is HTML aware and can send HTML-formatted email notifications to customers.
-6. Select **Save** when finished.
+6. Select **Save** when you're finished.
## Migrate data from external data sources to Azure AD B2C
-Using TheAccessHub Admin Tool, you can import data from various databases, LDAPs, and CSV files and then push that data to your Azure AD B2C tenant. It's done by loading data into the Azure AD B2C user colleague type within TheAccessHub Admin Tool. If the source of data isn't Azure itself, the data will be placed into both TheAccessHub Admin Tool and Azure AD B2C. If the source of your external data isn't a simple .csv file on your machine, set up a data source before doing the data load. The below steps describe creating a data source and doing the data load.
+By using TheAccessHub Admin Tool, you can import data from various databases, LDAPs, and .csv files and then push that data to your Azure AD B2C tenant. You migrate the data by loading it into the Azure AD B2C user colleague type within TheAccessHub Admin Tool.
+
+If the source of data isn't Azure itself, the data will be placed into both TheAccessHub Admin Tool and Azure AD B2C. If the source of your external data isn't a simple .csv file on your machine, set up a data source before doing the data load. The following steps describe creating a data source and loading the data.
### Configure a new data source
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Data Sources**
+2. Go to **System Admin** > **Data Sources**.
-3. Select **Add Data Source**
+3. Select **Add Data Source**.
-4. Supply a **Name** and **Type** for this data source
+4. Supply **Name** and **Type** values for this data source.
-5. Fill in the form data, depending on your data source type:
+5. Fill in the form data, depending on your data source type.
- **For databases**
+ For databases:
- a. **Type** – Database
+ a. For **Type**, enter **Database**.
- b. **Database type** – Select a database from one of the supported database types.
+ b. For **Database type**, select a database from one of the supported database types.
- c. **Connection URL** – Enter a well-formatted JDBC connection string. Such as: ``jdbc:postgresql://myhost.com:5432/databasename``
+ c. For **Connection URL**, enter a well-formatted JDBC connection string, such as `jdbc:postgresql://myhost.com:5432/databasename` (see the connection sketch after these steps).
- d. **Username** – Enter the username for accessing the database
+ d. For **Username**, enter the username for accessing the database.
- e. **Password** – Enter the password for accessing the database
+ e. For **Password**, enter the password for accessing the database.
- f. **Query** – Enter the SQL query to extract the customer details. Such as: ``SELECT * FROM mytable;``
+ f. For **Query**, enter the SQL query to extract the customer details, such as `SELECT * FROM mytable;`.
- g. Select **Test Connection**, you'll see a sample of your data to ensure the connection is working.
+ g. Select **Test Connection**. You'll see a sample of your data to ensure that the connection is working.
- **For LDAPs**
+ For LDAPs:
- a. **Type** – LDAP
+ a. For **Type**, enter **LDAP**.
- b. **Host** – Enter the hostname or IP for machine in which the LDAP server is running. Such as: ``mysite.com``
+ b. For **Host**, enter the host name or IP address for the machine on which the LDAP server is running, such as `mysite.com`.
- c. **Port** – Enter the port number in which the LDAP server is listening.
+ c. For **Port**, enter the port number on which the LDAP server is listening.
- d. **SSL** – Check the box if TheAccessHub Admin Tool should communicate to the LDAP securely using SSL. Using SSL is highly recommended.
+ d. For **SSL**, select the box if TheAccessHub Admin Tool should communicate to the LDAP securely by using SSL. We highly recommend using SSL.
- e. **Login DN** – Enter the DN of the user account to log in and do the LDAP search
+ e. For **Login DN**, enter the distinguished name (DN) of the user account to log in and do the LDAP search.
- f. **Password** – Enter the password for the user
+ f. For **Password**, enter the password for the user.
- g. **Base DN** – Enter the DN at the top of the hierarchy in which you want to do the search
+ g. For **Base DN**, enter the DN at the top of the hierarchy in which you want to do the search.
- h. **Filter** – Enter the LDAP filter string, which will obtain your customer records
+ h. For **Filter**, enter the LDAP filter string that will obtain your customer records (for example, `(objectClass=inetOrgPerson)`).
- i. **Attributes** – Enter a comma-separated list of attributes from your customer records to pass to TheAccessHub Admin Tool
+ i. For **Attributes**, enter a comma-separated list of attributes from your customer records to pass to TheAccessHub Admin Tool.
- j. Select the **Test Connection**, you'll see a sample of your data to ensure the connection is working.
+ j. Select **Test Connection**. You'll see a sample of your data to ensure that the connection is working.
- **For OneDrive**
+ For OneDrive:
- a. **Type** – OneDrive for Business
+ a. For **Type**, select **OneDrive for Business**.
- b. Select **Authorize Connection**
+ b. Select **Authorize Connection**.
- c. A new window will prompt you to log in to **OneDrive**, login with a user with read access to your OneDrive account. TheAccessHub Admin Tool, will act for this user to read CSV load files.
+ c. A new window prompts you to sign in to OneDrive. Sign in with a user that has read access to your OneDrive account. TheAccessHub Admin Tool will act for this user to read .csv load files.
d. Follow the prompts and select **Accept** to grant TheAccessHub Admin Tool the requested permissions.
-6. Select **Save** when finished.
+6. Select **Save** when you're finished.
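If you'd like to sanity-check database connection details independently before entering them in the form above, a quick test outside the tool can help. A minimal Python sketch, assuming a PostgreSQL source and the `psycopg2` driver (TheAccessHub itself connects over JDBC; this is only an independent check with placeholder credentials):

```python
import psycopg2

# The same hypothetical details as the JDBC example in the steps above.
conn = psycopg2.connect(
    host="myhost.com",
    port=5432,
    dbname="databasename",
    user="loaduser",
    password="example-password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM mytable;")  # the query from the Query field
    for row in cur.fetchmany(5):  # peek at a few records, like Test Connection does
        print(row)
conn.close()
```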
### Synchronize data from your data source into Azure AD B2C
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
+
+2. Go to **System Admin** > **Data Synchronization**.
-2. Navigate to **System Admin** > **Data Synchronization**
+3. Select **New Load**.
-3. Select **New Load**
+4. For **Colleague Type**, select **Azure AD B2C User**.
-4. Select the **Colleague Type** Azure AD B2C User
+5. Select **Source**. In the pop-up dialog, select your data source. If you created a OneDrive data source, also select the file.
-5. Select **Source**, in the pop-up dialog, select your data source. If you created a OneDrive data source, also select the file.
+6. If you don't want to create new customer accounts with this load, change the first policy (**IF colleague not found in TheAccessHub THEN**) to **Do Nothing**.
-6. If you don’t want to create new customer accounts with this load, then change the first policy: **IF colleague not found in TheAccessHub THEN** to **Do Nothing**
+7. If you don't want to update existing customer accounts with this load, change the second policy (**IF source and TheAccessHub data mismatch THEN**) to **Do Nothing**.
-7. If you don’t want to update existing customer accounts with this load, then change the second policy **IF source and TheAccessHub data mismatch THEN** to **Do Nothing**
+8. Select **Next**.
-8. Select **Next**
+9. In **Search-Mapping configuration**, you identify how to correlate load records with customers already loaded into TheAccessHub Admin Tool.
-9. In the **Search-Mapping configuration**, we identify how to correlate load records with customers already loaded into TheAccessHub Admin Tool. Choose one or more identifying attributes in the source. Match the attributes with an attribute in TheAccessHub Admin Tool that holds the same values. If a match is found, then the existing record will be overridden; otherwise, a new customer will be created. You can sequence a number of these checks. For example, you could check email first, and then first and last name.
+ Choose one or more identifying attributes in the source. Match the attributes with an attribute in TheAccessHub Admin Tool that holds the same values. If a match is found, the existing record will be overridden. Otherwise, a new customer will be created.
+
+ You can sequence a number of these checks. For example, you could check email first, and then check first and last name. (A sketch of this matching logic appears after these steps.)
-10. On the left-hand side menu, select **Data Mapping**.
+10. On the left-side menu, select **Data Mapping**.
-11. In the Data-Mapping configuration, assign which TheAccessHub Admin Tool attributes should be populated from your source attributes. No need to map all the attributes. Unmapped attributes will remain unchanged for existing customers.
+11. In **Data-Mapping configuration**, assign the TheAccessHub Admin Tool attributes that should be populated from your source attributes. There's no need to map all the attributes. Unmapped attributes will remain unchanged for existing customers.
-12. If you map to the attribute org_name with a value that is the name of an existing organization, then new customers created will be placed in that organization.
+12. If you map to the attribute `org_name` with a value that is the name of an existing organization, newly created customers will be placed in that organization.
-13. Select **Next**
+13. Select **Next**.
-14. A Daily/Weekly or Monthly schedule may be specified if this load should be reoccurring. Otherwise keep the default **Now**.
+14. If you want this load to be recurring, specify a **Daily/Weekly** or **Monthly** schedule. Otherwise, keep the default of **Now**.
-15. Select **Submit**
+15. Select **Submit**.
-16. If the **Now schedule** was selected, a new record will be added to the Data Synchronizations screen immediately. Once the validation phase has reached 100%, select the **new record** to see the expected outcome for the load. For scheduled loads, these records will only appear after the scheduled time.
+16. If you selected the **Now** schedule, a new record will be added to the **Data Synchronizations** screen immediately. After the validation phase has reached 100 percent, select the new record to see the expected outcome for the load. For scheduled loads, these records will appear only after the scheduled time.
-17. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the **More** menu to remove the load. You can then correct the source data or load mappings and try again. Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Finally, you can continue with any errors and resolve them later as **Support Interventions** in TheAccessHub Admin Tool.
+17. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the **More** menu to remove the load. You can then correct the source data or load mappings and try again.
-18. When the **Data Synchronization** record becomes 100% on the load phase, all the changes resulting from the load will have been initiated. Customers should begin appearing or receiving changes in Azure AD B2C.
+ Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Another option is to continue with any errors and resolve them later as **Support Interventions** in TheAccessHub Admin Tool.
+
+18. When the **Data Synchronization** record becomes 100 percent on the load phase, all the changes resulting from the load have been initiated. Customers should begin appearing or receiving changes in Azure AD B2C.
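The sequenced matching in step 9 behaves conceptually like the following sketch (illustrative only, with hypothetical attribute names; it isn't TheAccessHub's actual code):

```python
def find_existing_customer(record, customers):
    """Try identifying attributes in order of precedence; first match wins."""
    checks = [
        lambda c: c.get("email") == record.get("email"),
        lambda c: (c.get("first_name"), c.get("last_name"))
                  == (record.get("first_name"), record.get("last_name")),
    ]
    for check in checks:
        for customer in customers:
            if check(customer):
                return customer  # this existing record will be overridden
    return None  # no match: a new customer will be created
```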
## Synchronize Azure AD B2C customer data
-As a one-time or ongoing operation, TheAccessHub Admin Tool can synchronize all the customer information from Azure AD B2C into TheAccessHub Admin Tool. This ensures that CSR/Helpdesk administrators are seeing up-to-date customer information.
+As a one-time or ongoing operation, TheAccessHub Admin Tool can synchronize all the customer information from Azure AD B2C into TheAccessHub Admin Tool. This operation ensures that CSR/Helpdesk administrators see up-to-date customer information.
To synchronize data from Azure AD B2C into TheAccessHub Admin Tool:
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Data Synchronization**
+2. Go to **System Admin** > **Data Synchronization**.
-3. Select **New Load**
+3. Select **New Load**.
-4. Select the **Colleague Type** Azure AD B2C User
+4. For **Colleague Type**, select **Azure AD B2C User**.
5. For the **Options** step, leave the defaults.
-6. Select **Next**
+6. Select **Next**.
+
+7. For the **Data Mapping & Search** step, leave the defaults. Exception: if you map to the attribute `org_name` with a value that is the name of an existing organization, newly created customers will be placed in that organization.
-7. For the **Data Mapping & Search** step, leave the defaults. Except if you map to the attribute **org_name** with a value that is the name of an existing organization, then new customers created will be placed in that organization.
+8. Select **Next**.
-8. Select **Next**
+9. If you want this load to be recurring, specify a **Daily/Weekly** or **Monthly** schedule. Otherwise, keep the default of **Now**. We recommend syncing from Azure AD B2C on a regular basis.
-9. A Daily/Weekly or Monthly schedule may be specified if this load should be reoccurring. Otherwise keep the default: **Now**. We recommend syncing from Azure AD B2C on a regular basis.
+10. Select **Submit**.
-10. Select **Submit**
+11. If you selected the **Now** schedule, a new record will be added to the **Data Synchronizations** screen immediately. After the validation phase has reached 100 percent, select the new record to see the expected outcome for the load. For scheduled loads, these records will appear only after the scheduled time.
-11. If the **Now** schedule was selected, a new record will be added to the Data Synchronizations screen immediately. Once the validation phase has reached 100%, select the new record to see the expected outcome for the load. For scheduled loads, these records will only appear after the scheduled time.
+12. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the **More** menu to remove the load. You can then correct the source data or load mappings and try again.
-12. If there are no errors, select **Run** to commit the changes. Otherwise, select **Remove** from the More menu to remove the load. You can then correct the source data or load mappings and try again. Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Finally, you can continue with any errors and resolve them later as Support Interventions in TheAccessHub Admin Tool.
+ Instead, if the number of errors is small, you can manually update the records and select **Update** on each record to make corrections. Another option is to continue with any errors and resolve them later as **Support Interventions** in TheAccessHub Admin Tool.
-13. When the **Data Synchronization** record becomes 100% on the load phase, all the changes resulting from the load will have been initiated.
+13. When the **Data Synchronization** record becomes 100 percent on the load phase, all the changes resulting from the load have been initiated.
## Configure Azure AD B2C policies
-Occasionally syncing TheAccessHub Admin Tool is limited in its ability to keep its state up to date with Azure AD B2C. We can leverage TheAccessHub Admin Tool’s API and Azure AD B2C policies to inform TheAccessHub Admin Tool of changes as they happen. This solution requires technical knowledge of [Azure AD B2C custom policies](./user-flow-overview.md). In the next section, we'll give you an example policy steps and a secure certificate to notify TheAccessHub Admin Tool of new accounts in your Sign-Up custom policies.
+Occasional syncing of TheAccessHub Admin Tool limits the tool's ability to keep its state up to date with Azure AD B2C. You can use TheAccessHub Admin Tool's API and Azure AD B2C policies to inform TheAccessHub Admin Tool of changes as they happen. This solution requires technical knowledge of [Azure AD B2C custom policies](./user-flow-overview.md).
+
+The following procedures give you example policy steps and a secure certificate to notify TheAccessHub Admin Tool of new accounts in your sign-up custom policies.
-### Create a secure credential to invoke TheAccessHub Admin Tool’s API
+### Create a secure credential to invoke TheAccessHub Admin Tool's API
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Admin Tools** > **API Security**
+2. Go to **System Admin** > **Admin Tools** > **API Security**.
-3. Select **Generate**
+3. Select **Generate**.
-4. Copy the **Certificate Password**
+4. Copy the **Certificate Password**.
5. Select **Download** to get the client certificate.
-6. Follow this [tutorial](./secure-rest-api.md#https-client-certificate-authentication ) to add the client certificate into Azure AD B2C.
+6. Follow [this tutorial](./secure-rest-api.md#https-client-certificate-authentication) to add the client certificate into Azure AD B2C.
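To confirm that the downloaded certificate works for client (mutual TLS) authentication, you can make a test call outside the policy. A hedged Python sketch; the endpoint URL is hypothetical, and it assumes you first convert the .pfx to PEM:

```python
import requests

# Convert the downloaded .pfx first, entering the certificate password you copied:
#   openssl pkcs12 -in theaccesshub.pfx -out theaccesshub.pem -nodes
response = requests.get(
    "https://theaccesshub.example.com/api/status",  # hypothetical API endpoint
    cert="theaccesshub.pem",  # client certificate presented in the TLS handshake
    timeout=30,
)
print(response.status_code)
```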
### Retrieve your custom policy examples
-1. Log into TheAccessHub Admin Tool using credentials provided to you by N8 Identity
+1. Log in to TheAccessHub Admin Tool by using the credentials that N8 Identity has provided.
-2. Navigate to **System Admin** > **Admin Tools** > **Azure B2C Policies**
+2. Go to **System Admin** > **Admin Tools** > **Azure B2C Policies**.
-3. Supply your Azure AD B2C tenant domain and the two Identity Experience Framework IDs from your Identity Experience Framework configuration
+3. Supply your Azure AD B2C tenant domain and the two Identity Experience Framework IDs from your Identity Experience Framework configuration.
-4. Select **Save**
+4. Select **Save**.
-5. Select **Download** to get a zip file with basic policies that add customers into TheAccessHub Admin Tool as customers sign up.
+5. Select **Download** to get a .zip file with basic policies that add customers into TheAccessHub Admin Tool as customers sign up.
-6. Follow this [tutorial](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to get started with designing custom policies in Azure AD B2C.
+6. Follow [this tutorial](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to get started with designing custom policies in Azure AD B2C.
## Next steps
-For additional information, review the following articles:
+For more information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md
Previously updated : 05/03/2021 Last updated : 06/08/2022
If the type of authentication is set to `ApiKeyHeader`, the **CryptographicKeys*
| The name of the HTTP header, such as `x-functions-key`, or `x-api-key`. | Yes | The key that is used to authenticate. |

> [!NOTE]
-> At this time, Azure AD B2C supports only one HTTP header for authentication. If your RESTful call requires multiple headers, such as a client ID and client secret, you will need to proxy the request in some manner.
+> At this time, Azure AD B2C supports only one HTTP header for authentication. If your RESTful call requires multiple headers, such as a client ID and client secret value, you will need to proxy the request in some manner.
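One way to handle the multi-header case in the note is a small relay: Azure AD B2C authenticates to the relay with the single supported header, and the relay adds the remaining headers before forwarding. A minimal Flask sketch, with hypothetical header names and downstream URL:

```python
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
DOWNSTREAM = "https://api.example.com/signup"  # hypothetical API requiring two headers

@app.route("/proxy/signup", methods=["POST"])
def proxy_signup():
    # Azure AD B2C authenticates to the proxy with its one supported header...
    if request.headers.get("x-api-key") != "expected-proxy-key":
        return jsonify({"error": "unauthorized"}), 401
    # ...and the proxy supplies the header pair the downstream API expects.
    resp = requests.post(
        DOWNSTREAM,
        json=request.get_json(),
        headers={"client-id": "placeholder-id", "client-secret": "placeholder-secret"},
        timeout=30,
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5000)
```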
```xml
<TechnicalProfile Id="REST-API-SignUp">
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **CryptographicKeys** element contains the following attributes:
| Attribute |Required | Description |
| --- | --- | --- |
| SamlMessageSigning |Yes | The X509 certificate (RSA key set) to use to sign SAML messages. Azure AD B2C uses this key to sign the requests and send them to the identity provider. |
-| SamlAssertionDecryption |No | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. |
+| SamlAssertionDecryption |No* | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. <br/><br/> * Required if the external IdP encrypts SAML assertions.|
| MetadataSigning |No | The X509 certificate (RSA key set) to use to sign SAML metadata. Azure AD B2C uses this key to sign the metadata. |

## Next steps
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
Previously updated : 04/05/2022 Last updated : 06/08/2022 zone_pivot_groups: b2c-policy-type
For a client credentials flow, you need to create an application secret. The cli
#### Create Azure AD B2C policy keys
-You need to store the client ID and the client secret that you previously recorded in your Azure AD B2C tenant.
+You need to store the client ID and the client secret value that you previously recorded in your Azure AD B2C tenant.
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
active-directory-b2c View Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md
Previously updated : 02/20/2020 Last updated : 06/08/2022
You can try this script in the [Azure Cloud Shell](overview.md). Be sure to upda
# Constants
$ClientID = "your-client-application-id-here"       # Insert your application's client ID, a GUID
-$ClientSecret = "your-client-application-secret-here" # Insert your application's client secret
+$ClientSecret = "your-client-application-secret-here" # Insert your application's client secret value
$tenantdomain = "your-b2c-tenant.onmicrosoft.com"   # Insert your Azure AD B2C tenant domain name
$loginURL = "https://login.microsoftonline.com"
active-directory About Microsoft Identity Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/about-microsoft-identity-platform.md
Title: Evolution of Microsoft identity platform - Azure
+ Title: Evolution of Microsoft identity platform
description: Learn about Microsoft identity platform, an evolution of the Azure Active Directory (Azure AD) identity service and developer platform.
The [Microsoft identity platform](../develop/index.yml) is an evolution of the A
Many developers have previously worked with the Azure AD v1.0 platform to authenticate work and school accounts (provisioned by Azure AD) by requesting tokens from the Azure AD v1.0 endpoint, using Azure AD Authentication Library (ADAL), Azure portal for application registration and configuration, and the Microsoft Graph API for programmatic application configuration.
-With the unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application. For several platforms, the fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL is simple to use, provides great single sign-on (SSO) experiences for your users, helps you achieve high reliability and performance, and is developed using Microsoft Secure Development Lifecycle (SDL). When calling APIs, you can configure your application to take advantage of incremental consent, which allows you to delay the request for consent for more invasive scopes until the application’s usage warrants this at runtime. MSAL also supports Azure Active Directory B2C, so your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs.
+With the unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application. For several platforms, the fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. MSAL is simple to use, provides great single sign-on (SSO) experiences for your users, helps you achieve high reliability and performance, and is developed using Microsoft Secure Development Lifecycle (SDL). When calling APIs, you can configure your application to take advantage of incremental consent, which allows you to delay the request for consent for more invasive scopes until the application's usage warrants this at runtime. MSAL also supports Azure Active Directory B2C, so your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs.
With Microsoft identity platform, expand your reach to these kinds of users:
The following diagram shows the Microsoft identity experience at a high level, i
### App registration experience
-The Azure portal **[App registrations](https://go.microsoft.com/fwlink/?linkid=2083908)** experience is the one portal experience for managing all applications you’ve integrated with Microsoft identity platform. If you have been using the Application Registration Portal, start using the Azure portal app registration experience instead.
+The Azure portal **[App registrations](https://go.microsoft.com/fwlink/?linkid=2083908)** experience is the one portal experience for managing all applications you've integrated with Microsoft identity platform. If you have been using the Application Registration Portal, start using the Azure portal app registration experience instead.
-For integration with Azure AD B2C (when authenticating social or local identities), you’ll need to register your application in an Azure AD B2C tenant. This experience is also part of the Azure portal.
+For integration with Azure AD B2C (when authenticating social or local identities), you'll need to register your application in an Azure AD B2C tenant. This experience is also part of the Azure portal.
Use the [Application API](/graph/api/resources/application) to programmatically configure your applications integrated with Microsoft identity platform for authenticating any Microsoft identity.
active-directory Active Directory Devhowto Adal Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-devhowto-adal-error-handling.md
Title: ADAL client app error handling best practices | Azure
+ Title: ADAL client app error handling best practices
description: Provides error handling guidance and best practices for ADAL client applications.
active-directory App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/app-types.md
Title: Application types in v1.0 | Azure
+ Title: Application types in v1.0
description: Describes the types of apps and scenarios supported by the Azure Active Directory v2.0 endpoint.
active-directory Azure Ad Endpoint Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-endpoint-comparison.md
Title: Why update to Microsoft identity platform (v2.0) | Azure
+ Title: Why update to Microsoft identity platform (v2.0)
description: Know the differences between the Microsoft identity platform (v2.0) endpoint and the Azure Active Directory (Azure AD) v1.0 endpoint, and learn the benefits of updating to v2.0.
active-directory V1 Authentication Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-authentication-scenarios.md
Title: Azure AD for developers (v1.0) | Azure
+ Title: Azure AD for developers (v1.0)
description: Learn authentication basics for Azure AD for developers (v1.0) such as the app model, API, provisioning, and the most common authentication scenarios. documentationcenter: dev-center-name
active-directory Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/videos.md
Title: Azure ADAL to MSAL migration videos | Azure
+ Title: Azure ADAL to MSAL migration videos
description: Videos that help you migrate from the Azure Active Directory developer platform to the Microsoft identity platform
Last updated 02/12/2020
active-directory Multi Service Web App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-app.md
Title: Tutorial - Web app accesses Microsoft Graph as the app| Azure
+ Title: Tutorial - Web app accesses Microsoft Graph as the app
description: In this tutorial, you learn how to access data in Microsoft Graph by using managed identities.
active-directory Request Custom Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/request-custom-claims.md
Title: Request custom claims (MSAL iOS/macOS) | Azure
+ Title: Request custom claims (MSAL iOS/macOS)
description: Learn how to request custom claims.
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Title: Microsoft identity platform overview - Azure
+ Title: Microsoft identity platform overview
description: Learn about the components of the Microsoft identity platform and how they can help you build identity and access management (IAM) support into your applications.
There are several components that make up the Microsoft identity platform:
- **Application configuration API and PowerShell**: Programmatic configuration of your applications through the Microsoft Graph API and PowerShell so you can automate your DevOps tasks. - **Developer content**: Technical documentation including quickstarts, tutorials, how-to guides, and code samples.
-For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You donΓÇÖt need to implement such functionality yourself: applications integrated with the Microsoft identity platform natively take advantage of such innovations.
+For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You don't need to implement such functionality yourself: applications integrated with the Microsoft identity platform natively take advantage of such innovations.
With the Microsoft identity platform, you can write code once and reach any user. You can build an app once and have it work across many platforms, or build an app that functions as a client as well as a resource application (API).
active-directory Conditional Access Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/conditional-access-exclusion.md
Title: Manage users excluded from Conditional Access policies - Azure AD
+ Title: Manage users excluded from Conditional Access policies
description: Learn how to use Azure Active Directory (Azure AD) access reviews to manage users that have been excluded from Conditional Access policies documentationcenter: ''
active-directory How To Connect Fed Sha256 Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-sha256-guidance.md
Title: Change signature hash algorithm for Microsoft 365 relying party trust - Azure
+ Title: Change signature hash algorithm for Microsoft 365 relying party trust
description: This page provides guidelines for changing SHA algorithm for federation trust with Microsoft 365. keywords: SHA1,SHA256,M365,federation,aadconnect,adfs,ad fs,change sha,federation trust,relying party trust
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-single-adfs-multitenant-federation.md
Title: Federating multiple Azure AD with single AD FS - Azure
+ Title: Federating multiple Azure AD with single AD FS
description: In this document, you will learn how to federate multiple Azure AD with a single AD FS. keywords: federate, ADFS, AD FS, multiple tenants, single AD FS, one ADFS, multi-tenant federation, multi-forest adfs, aad connect, federation, cross-tenant federation
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
-While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having manage any credetials.
+While developers can securely store the secrets in [Azure Key Vault](../../key-vault/general/overview.md), services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Azure Active Directory for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication. Applications can use managed identities to obtain Azure AD tokens without having to manage any credentials.
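For example, code running on an Azure VM can request a token from the Azure Instance Metadata Service without storing any secret. A minimal sketch (the Key Vault resource URI is an illustrative choice):

```bash
# Request an Azure AD access token for Key Vault from the local
# Instance Metadata Service; no credentials are stored or passed.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
```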
The following video shows how you can use managed identities:
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
Title: Tutorial`:` Use a managed identity to access Azure Key Vault - Linux - Azure AD
+ Title: "Tutorial: Use a managed identity to access Azure Key Vault - Linux"
description: A tutorial that walks you through the process of using a Linux VM system-assigned managed identity to access Azure Resource Manager. documentationcenter: ''
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Title: Use managed identities from a virtual machine to access Cosmos DB | Microsoft Docs
+ Title: Use managed identities from a virtual machine to access Cosmos DB
description: Learn how to use managed identities with Windows VMs using the Azure portal, CLI, PowerShell, Azure Resource Manager template
Depending on your API version, you have to take [different steps](qs-configure-t
```json "variables": {
- "identityName": "my-user-assigned"
-
- },
+ "identityName": "my-user-assigned"
+
+ },
``` Under the resources element, add the following entry to assign a user-assigned managed identity to your VM. Be sure to replace ```<identityName>``` with the name of the user-assigned managed identity you created.
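A sketch of what such an entry can look like (hedged; the exact shape depends on your template's API version, and `<identityName>` is the placeholder from the step above):

```json
"identity": {
    "type": "UserAssigned",
    "userAssignedIdentities": {
        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', '<identityName>')]": {}
    }
}
```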
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
az aks nodepool add \
--cluster-name myAKSCluster \ --resource-group myResourceGroup \ --name myNodepool \
- --enable-custom-ca-trust
+ --enable-custom-ca-trust \
+ --os-type Linux
``` ## Configure an existing nodepool to use a custom CA
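A minimal sketch, assuming the same `--enable-custom-ca-trust` flag is accepted by the update command for an existing nodepool:

```azurecli
az aks nodepool update \
    --cluster-name myAKSCluster \
    --resource-group myResourceGroup \
    --name myNodepool \
    --enable-custom-ca-trust
```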
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
This article assumes that you have an existing AKS cluster. If you need an AKS c
You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-The AKS cluster cluster identity needs permission to manage network resources if you use an existing subnet or resource group. For information see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][use-kubenet] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][advanced-networking]. If you are configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the the AKS cluster identity also has read access to that subnet.
+The AKS cluster identity needs permission to manage network resources if you use an existing subnet or resource group. For information, see [Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)][use-kubenet] or [Configure Azure CNI networking in Azure Kubernetes Service (AKS)][advanced-networking]. If you are configuring your load balancer to use an [IP address in a different subnet][different-subnet], ensure the AKS cluster identity also has read access to that subnet.
For more information on permissions, see [Delegate AKS access to other Azure resources][aks-sp].
internal-app LoadBalancer 10.0.184.168 10.240.0.25 80:30225/TCP 4m
For more information on configuring your load balancer in a different subnet, see [Specify a different subnet][different-subnet].
+## Connect Azure Private Link service to internal load balancer (Preview)
+
+To attach an Azure Private Link Service to an internal load balancer, create a service manifest named `internal-lb-pls.yaml` with the service type *LoadBalancer* and the *azure-load-balancer-internal* and *azure-pls-create* annotations, as shown in the example below. For more options, refer to the [Azure Private Link Service Integration](https://kubernetes-sigs.github.io/cloud-provider-azure/development/design-docs/pls-integration/) design document.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: internal-app
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+ service.beta.kubernetes.io/azure-pls-create: "true"
+spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: internal-app
+```
+
+Deploy the internal load balancer using the [kubectl apply][kubectl-apply] command, and specify the name of your YAML manifest:
+
+```console
+kubectl apply -f internal-lb-pls.yaml
+```
+
+An Azure load balancer is created in the node resource group and connected to the same virtual network as the AKS cluster.
+
+When you view the service details, the IP address of the internal load balancer is shown in the *EXTERNAL-IP* column. In this context, *External* refers to the external interface of the load balancer, not to a public, external IP address. It may take a minute or two for the IP address to change from *\<pending\>* to an actual internal IP address, as shown in the following example:
+
+```
+$ kubectl get service internal-app
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+internal-app LoadBalancer 10.125.17.53 10.125.0.66 80:30430/TCP 64m
+```
+
+Additionally, a Private Link Service object is created that connects to the frontend IP configuration of the load balancer associated with the Kubernetes service. Details of the Private Link Service object can be retrieved as shown in the following example:
+```
+$ AKS_MC_RG=$(az aks show -g myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
+$ az network private-link-service list -g ${AKS_MC_RG} --query "[].{Name:name,Alias:alias}" -o table
+
+Name    Alias
+------  ----------------------------------------------------------------------
+pls-xyz pls-xyz.abc123-defg-4hij-56kl-789mnop.eastus2.azure.privatelinkservice
+
+```
+
+### Create a Private Endpoint to the Private Link Service
+
+A Private Endpoint allows you to privately connect to your Kubernetes service object via the Private Link Service created above. To do so, follow the example shown below:
+
+```azurecli
+$ AKS_PLS_ID=$(az network private-link-service list -g ${AKS_MC_RG} --query "[].id" -o tsv)
+$ az network private-endpoint create \
+ -g myOtherResourceGroup \
+ --name myAKSServicePE \
+ --vnet-name myOtherVNET \
+ --subnet pe-subnet \
+ --private-connection-resource-id ${AKS_PLS_ID} \
+ --connection-name connectToMyK8sService
+```
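+
+To confirm connectivity, you can look up the private IP address assigned to the endpoint's network interface. A sketch that reuses the resource names from the example above:
+
+```azurecli
+# Find the NIC created for the private endpoint, then read its private IP address.
+PE_NIC_ID=$(az network private-endpoint show -g myOtherResourceGroup --name myAKSServicePE --query 'networkInterfaces[0].id' -o tsv)
+az network nic show --ids ${PE_NIC_ID} --query 'ipConfigurations[0].privateIpAddress' -o tsv
+```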
+ ## Use private networks When you create your AKS cluster, you can specify advanced networking settings. This approach lets you deploy the cluster into an existing Azure virtual network and subnets. One scenario is to deploy your AKS cluster into a private network connected to your on-premises environment and run services only accessible internally. For more information, see configure your own virtual network subnets with [Kubenet][use-kubenet] or [Azure CNI][advanced-networking].
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
Access policies determine which identities can use the authorization that the ac
### Process flow for creating authorizations
-The following image shows the process flow for creating an authorization in API Management using the grant type authorization code. For public preview no API documentation is available. Please see [this](https://aka.ms/apimauthorizations/postmancollection) Postman collection.
+The following image shows the process flow for creating an authorization in API Management using the authorization code grant type. During public preview, no API documentation is available.
:::image type="content" source="media/authorizations-overview/get-token.svg" alt-text="Process flow for creating authorizations" border="false":::
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
For example, insert the policy fragment named *ForwardContext* in the inbound po
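As a sketch, a fragment is referenced from a policy definition with the `include-fragment` element (the fragment ID shown is the example name above):

```xml
<inbound>
    <base />
    <!-- Inserts the contents of the ForwardContext policy fragment at this point -->
    <include-fragment fragment-id="ForwardContext" />
</inbound>
```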
## Manage policy fragments
-After creating a policy fragment, you can view and update policy properties, or delete the policy at any time.
+After creating a policy fragment, you can view and update its properties or delete the policy fragment at any time.
-**To view properties of a fragment:**
+**To view properties of a policy fragment:**
1. In the left navigation of your API Management instance, under **APIs**, select **Policy fragments**. Select the name of your fragment. 1. On the **Overview** page, review the **Policy document references** to see the policy definitions that include the fragment.
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 05/10/2022 Last updated : 06/03/2022
page lists the **compliance domains** and **security controls** for Azure App Se
assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard. +
+## Release notes
+
+### June 2022
+
+- Deprecation of policy "API App should only be accessible over HTTPS"
+- Rename of policy "Web Application should only be accessible over HTTPS" to "App Service apps should only be accessible over HTTPS"
+- Update scope of policy "App Service apps should only be accessible over HTTPS" to include all app types except Function apps
+- Update scope of policy "App Service apps should only be accessible over HTTPS" to include slots
+- Update scope of policy "Function apps should only be accessible over HTTPS" to include slots
+- Update logic of policy "App Service apps should use a SKU that supports private link" to include checks on App Service plan tier or name so that the policy supports Terraform deployments
+- Update list of supported SKUs of policy "App Service apps should use a SKU that supports private link" to include the Basic and Standard tiers
## Next steps
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-key-vault-common-errors.md
# Common key vault errors in Azure Application Gateway
-Application Gateway enables customers to securely store TLS certificates in Azure Key Vault. When using a Key Vault resource, it is important that the gateway always has access to the linked key vault. If your Application Gateway is unable to fetch the certificate, the associated HTTPS listeners will be placed in a disabled state. [Learn more](../application-gateway/disabled-listeners.md).
+Application Gateway enables customers to securely store TLS certificates in Azure Key Vault. When using a key vault resource, it is important that the gateway always has access to the linked key vault. If your Application Gateway is unable to fetch the certificate, the associated HTTPS listeners will be placed in a disabled state. [Learn more](../application-gateway/disabled-listeners.md).
-This article helps you understand the details of key vault error codes you might encounter, including what is causing these errors. This article also contains steps to resolve such misconfigurations.
+This article helps you understand the details of the error codes and the steps to resolve such key vault misconfigurations.
> [!TIP] > Use a secret identifier that doesn't specify a version. This way, Azure Application Gateway will automatically rotate the certificate, if a newer version is available in Azure Key Vault. An example of a secret URI without a version is: `https://myvault.vault.azure.net/secrets/mysecret/`. ## List of error codes and their details
-The following sections cover various errors you might encounter. You can find the details in Azure Advisor, and use this troubleshooting article to fix the problems. For more information, see [Create Azure Advisor alerts on new recommendations by using the Azure portal](../advisor/advisor-alerts-portal.md).
+The following sections describe the various errors you might encounter. You can verify whether your gateway has any such problem by visiting [**Azure Advisor**](./key-vault-certs.md#investigating-and-resolving-key-vault-errors) for your account, and then use this troubleshooting article to fix the problem. We recommend configuring Azure Advisor alerts to stay informed when a key vault problem is detected for your gateway.
> [!NOTE] > Azure Application Gateway generates logs for key vault diagnostics every four hours. If the diagnostic continues to show the error after you have fixed the configuration, you might have to wait for the logs to be refreshed.
The following sections cover various errors you might encounter. You can find th
[comment]: # (Error Code 1) ### Error code: UserAssignedIdentityDoesNotHaveGetPermissionOnKeyVault
-**Description:** The associated user-assigned managed identity doesn't have the "Get" permission.
+**Description:** The associated user-assigned managed identity doesn't have the required permission.
-**Resolution:** Configure the access policy of Key Vault to grant the user-assigned managed identity this permission on secrets.
-1. Go to the linked key vault in the Azure portal.
-1. Open the **Access policies** pane.
-1. For **Permission model**, select **Vault access policy**.
-1. Under **Secret Management Operations**, select the **Get** permission.
-1. Select **Save**.
+**Resolution:** Configure the access policies of your key vault to grant the user-assigned managed identity the required permission on secrets. You can do so in either of the following ways:
+
+ **Vault access policy**
+ 1. Go to the linked key vault in the Azure portal.
+ 1. Open the **Access policies** blade.
+ 1. For **Permission model**, select **Vault access policy**.
+ 1. Under **Secret Management Operations**, select the **Get** permission.
+ 1. Select **Save**.
:::image type="content" source="./media/application-gateway-key-vault-common-errors/no-get-permssion-for-managed-identity.png " alt-text=" Screenshot that shows how to resolve the Get permission error."::: For more information, see [Assign a Key Vault access policy by using the Azure portal](../key-vault/general/assign-access-policy-portal.md).
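Equivalently, a minimal Azure CLI sketch (the placeholder names are assumptions):

```azurecli
# Grant the user-assigned managed identity the Get permission on secrets.
az keyvault set-policy \
    --name <your-key-vault-name> \
    --object-id <managed-identity-principal-id> \
    --secret-permissions get
```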
+ **Azure role-based access control**
+ 1. Go to the linked key vault in the Azure portal.
+ 1. Open the **Access policies** blade.
+ 1. For **Permission model**, select **Azure role-based access control**.
+ 1. Then, navigate to the **Access Control (IAM)** blade to configure permissions.
+ 1. **Add role assignment** for your managed identity by choosing the following:<br>
+ a. **Role**: Key Vault Secrets User<br>
+ b. **Assign access to**: Managed identity<br>
+ c. **Members**: select the user-assigned managed identity that you've associated with your application gateway.<br>
+ 1. Select **Review + assign**.
+
+For more information, see [Azure role-based access control in Key Vault](../key-vault/general/rbac-guide.md).
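+
+The same role assignment can be scripted. A sketch with placeholder values:
+
+```azurecli
+# Assign the Key Vault Secrets User role to the managed identity at the vault scope.
+az role assignment create \
+    --role "Key Vault Secrets User" \
+    --assignee-object-id <managed-identity-principal-id> \
+    --assignee-principal-type ServicePrincipal \
+    --scope <key-vault-resource-id>
+```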
+
+> [!NOTE]
+> Portal support for adding a new key vault-based certificate is currently not available when using **Azure role-based access control**. You can accomplish it by using an ARM template, the Azure CLI, or PowerShell. Visit [this page](./key-vault-certs.md#key-vault-azure-role-based-access-control-permission-model) for guidance.
+ [comment]: # (Error Code 2) ### Error code: SecretDisabled
On the other hand, if a certificate object is permanently deleted, you will need
**Description:** The associated user-assigned managed identity has been deleted.
-**Resolution:** To use the identity again:
-1. Re-create a managed identity with the same name that was used previously, and under the same resource group. Resource activity logs contain more details.
-1. After you create the identity, go to **Application Gateway - Access Control (IAM)**. Assign the identity the **Reader** role, at a minimum.
-1. Finally, go to the desired Key Vault resource, and set its access policies to grant **Get** secret permissions for this new managed identity.
-
-For more information, see [How integration works](./key-vault-certs.md#how-integration-works).
+**Resolution:** Create a new managed identity and use it with the key vault.
+1. Re-create a managed identity with the same name that was previously used, in the same resource group. (**Tip**: Refer to the resource activity logs for naming details.)
+1. Go to the desired key vault resource, and set its access policies to grant this new managed identity the required permission. You can follow the same steps as mentioned under [UserAssignedIdentityDoesNotHaveGetPermissionOnKeyVault](./application-gateway-key-vault-common-errors.md#error-code-userassignedidentitydoesnothavegetpermissiononkeyvault).
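+
+As a sketch, the identity can be re-created with the Azure CLI (the names are placeholders):
+
+```azurecli
+# Re-create the identity with the original name in the original resource group.
+az identity create --name <previous-identity-name> --resource-group <original-resource-group>
+```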
[comment]: # (Error Code 5) ### Error code: KeyVaultHasRestrictedAccess
Select **Manage deleted vaults**. From here, you can find the deleted Key Vault
These troubleshooting articles might be helpful as you continue to use Application Gateway:
+- [Understanding and fixing disabled listeners](disabled-listeners.md)
- [Azure Application Gateway Resource Health overview](resource-health-overview.md)-- [Troubleshoot Azure Application Gateway session affinity issues](how-to-troubleshoot-application-gateway-session-affinity-issues.md)+
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Title: Layouts - Form Recognizer
-description: Learn concepts related to Layout API analysis with Form Recognizer APIΓÇöusage and limits.
+description: Learn concepts related to the Form Recognizer Layout API, including REST API usage and limits.
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
| Layout | ✔ | ✔ | ✔ | ✔ | ✔ | **Supported paragraph roles**:
+The paragraph roles are best used with unstructured documents, structured documents, and forms. Roles help analyze the structure of the extracted content for better semantic search and analysis.
* title * sectionHeading
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
* pageFooter * pageNumber
-For a richer semantic analysis, paragraph roles are best used with unstructured documents to better understand the layout of the extracted content.
- ## Development options The following tools are supported by Form Recognizer v2.1:
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Try extracting text from forms and documents using the Form Recognizer Studio. Y
### Form Recognizer Studio (preview) > [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API. The latest service preview is currently not enabled for analyzing Microsoft Word, Excel, PowerPoint, and HTML file formats using the Form Recognizer Studio.
+> Currently, Form Recognizer Studio doesn't support Microsoft Word, Excel, PowerPoint, and HTML file formats in the Read preview.
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
Try extracting text from forms and documents using the Form Recognizer Studio. Y
## Input requirements
-* Supported file formats: These include JPEG/JPG, PNG, BMP, TIFF, PDF (text-embedded or scanned). Additionally, Microsoft Word, Excel, PowerPoint, and HTML files are supported with the Read API in **2022-06-30-preview**.
+* Supported file formats: These include JPEG/JPG, PNG, BMP, TIFF, PDF (text-embedded or scanned). Additionally, the newest API version `2022-06-30-preview` supports Microsoft Word (DOCX), Excel (XLS), PowerPoint (PPT), and HTML files.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier. * Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. Get started with exploring the pre-trained models with sample documents or your own. Create projects to build custom template models and reference the models in your applications using the [Python SDK preview](try-v3-python-sdk.md) and other quickstarts. ## Prerequisites for new users
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports. * [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-After you've completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com). In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
+After you've completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com).
+
+In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
+
+ :::image border="true" type="content" source="../media/quickstarts/form-recognizer-general-document-demo-preview3.gif" alt-text="Selecting the General Document API to analyze a document in the Form Recognizer Studio.":::
1. Select a Form Recognizer service feature from the Studio home page.
-1. This is a one-time step unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
+1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
1. Select the Analyze command to run analysis on the sample document or try your document by using the Add command.
-1. Observe the highlighted extracted content in the document view. Hover your move over the keys and values to see details.
- 1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
-1. Show and hide the text, tables, and selection marks layers to focus on each one of them at a time.
+1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
-1. In the output section's Result tab, browse the JSON output to understand the service response format. Copy and download to jumpstart integration.
+1. In the output section's Result tab, browse the JSON output to understand the service response format.
+1. In the Code tab, browse the sample code for integration. Copy and download to get started.
## Additional prerequisites for custom projects
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
### Configure CORS
-[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
+[CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
-1. Select the CORS blade for the storage account.
+1. Select the CORS tab for the storage account.
:::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
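For reference, a hedged Azure CLI sketch of an equivalent CORS rule (the Studio origin and liberal header settings are assumptions; tighten them to your security requirements):

```azurecli
az storage cors add \
    --services b \
    --origins https://formrecognizer.appliedai.azure.com \
    --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
    --allowed-headers '*' \
    --exposed-headers '*' \
    --max-age 120 \
    --account-name <your-storage-account>
```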
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Before you run the cURL command, make the following changes:
#### POST request ```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
``` #### Reference table
After you've called the [**Analyze document**](https://westus.dev.cognitive.micr
```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
```
applied-ai-services Resource Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/resource-customer-stories.md
The following customers and partners have adopted Form Recognizer across a wide
||-|-| | **Acumatica** | [**Acumatica**](https://www.acumatica.com/) is a technology provider that develops cloud and browser-based enterprise resource planning (ERP) software for small and medium-sized businesses (SMBs). To bring expense claims into the modern age, Acumatica incorporated Form Recognizer into its native application. The Form Recognizer's prebuilt-receipt API and machine learning capabilities are used to automatically extract data from receipts. Acumatica's customers can file multiple, error-free claims in a matter of seconds, freeing up more time to focus on other important tasks. | [Customer story](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure) | | **Air Canada** | In September 2021, [**Air Canada**](https://www.aircanada.com/) was tasked with verifying the COVID-19 vaccination status of thousands of worldwide employees in only two months. After realizing manual verification would be too costly and complex within the time constraint, Air Canada turned to its internal AI team for an automated solution. The AI team partnered with Microsoft and used Form Recognizer to roll out a fully functional, accurate solution within weeks. This partnership met the government mandate on time and saved thousands of hours of manual work. | [Customer story](https://customers.microsoft.com/story/1505667713938806113-air-canada-travel-transportation-azure-form-recognizer)|
-|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) is operates under the umbrella of Arkas Holding, Turkey's leading holding institution and operating in 23 countries. During the COVID-19 crisis, Arkas Logistics has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
+|**Arkas Logistics** | [**Arkas Logistics**](http://www.arkaslojistik.com.tr/) operates under the umbrella of Arkas Holding, Turkey's leading holding institution, which operates in 23 countries. During the COVID-19 crisis, the company has been able to provide outstanding, complete logistical services thanks to its focus on contactless operation and digitalization steps. Form Recognizer powers a solution that maintains the continuity of the supply chain and allows for uninterrupted service. | [Customer story](https://customers.microsoft.com/story/842149-arkas-logistics-transportation-azure-en-turkey ) |
|**Automation Anywhere**| [**Automation Anywhere**](https://www.automationanywhere.com/) is on a singular and unwavering mission to democratize automation by liberating teams from mundane, repetitive tasks, and allowing more time for innovation and creativity with cloud-native robotic process automation (RPA)software. To protect the citizens of the United Kingdom, healthcare providers must process tens of thousands of COVID-19 tests daily, each one accompanied by a form for the World Health Organization (WHO). Manually completing and processing these forms would potentially slow testing and divert resources away from patient care. In response, Automation Anywhere built an AI-powered bot to help a healthcare provider automatically process and submit the COVID-19 test forms at scale. | [Customer story](https://customers.microsoft.com/story/811346-automation-anywhere-partner-professional-services-azure-cognitive-services) | |**AvidXchange**| [**AvidXchange**](https://www.avidxchange.com/) has developed an accounts payable automation solution applying Form Recognizer. AvidXchange partners with Azure Cognitive Services to deliver an accounts payable automation solution for the middle market. Customers benefit from faster invoice processing times and increased accuracy to ensure their suppliers are paid the right amount, at the right time. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**Blue Prism**| [**Blue Prism**](https://www.blueprism.com/) Decipher is an AI-powered document processing capability that's directly embedded into the company's connected-RPA platform. Decipher works with Form Recognizer to help organizations process forms faster and with less human effort. One of Blue Prism's customers has been testing the solution to automate invoice handling as part of its procurement process. | [Customer story](https://customers.microsoft.com/story/737482-blue-prism-partner-professional-services-azure) |
The following customers and partners have adopted Form Recognizer across a wide
|**GEP**| [**GEP**](https://www.gep.com/) has developed an invoice processing solution for a client using Form Recognizer. "GEP combined their AI solution with Azure Form Recognizer to automate the processing of 4,000 invoices a day for a client, saving them tens of thousands of hours of manual effort. This collaborative effort improved accuracy, controls, and compliance on a global scale." Sarateudu Sethi, GEP's Vice President of Artificial Intelligence. | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**HCA Healthcare**| [**HCA Healthcare**](https://hcahealthcare.com/) is one of the nation's leading providers of healthcare with over 180 hospitals and 2,000 sites-of-care located throughout the United States and serving approximately 35 million patients each year. Currently, they're using Azure Form Recognizer to simplify and improve the patient onboarding experience and reduce administrative time spent entering repetitive data into the care center's system. | [Customer story](https://customers.microsoft.com/story/1404891793134114534-hca-healthcare-healthcare-provider-azure)| |**Icertis**| [**Icertis**](https://www.icertis.com/) is a Software as a Service (SaaS) provider headquartered in Bellevue, Washington. Icertis digitally transforms the contract management process with a cloud-based, AI-powered, contract lifecycle management solution. Azure Form Recognizer enables Icertis Contract Intelligence to take key-value pairs embedded in contracts and create structured data understood and operated upon by machine algorithms. Through these and other powerful Azure Cognitive and AI services, Icertis empowers customers in every industry to improve business in multiple ways: optimized manufacturing operations, added agility to retail strategies, reduced risk in IT services, and faster delivery of life-saving pharmaceutical products. | [Blog](https://cloudblogs.microsoft.com/industry-blog/en-in/unicorn/2022/01/12/how-icertis-built-a-contract-management-solution-using-azure-form-recognizer/)|
-|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. Instabase then brings this data into business workflows as organized information. The platform provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. Instabase applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
+|**Instabase**| [**Instabase**](https://instabase.com/) is a horizontal application platform that provides best-in-class machine learning processes to help retrieve, organize, identify, and understand complex masses of unorganized data. The application platform then brings this data into business workflows as organized information. This workflow provides a repository of integrative applications to orchestrate and harness that information with the means to rapidly extend and enhance them as required. The applications are fully containerized for widespread, infrastructure-agnostic deployment. | [Customer story](https://customers.microsoft.com/en-gb/story/1376278902865681018-instabase-partner-professional-services-azure)|
|**Northern Trust**| [**Northern Trust**](https://www.northerntrust.com/) is a leading provider of wealth management, asset servicing, asset management, and banking to corporations, institutions, families, and individuals. As part of its initiative to digitize alternative asset servicing, Northern Trust has launched an AI-powered solution to extract unstructured investment data from alternative asset documents and make it accessible and actionable for asset-owner clients. Azure Applied AI services accelerate time-to-value for enterprises building AI solutions. This proprietary solution transforms crucial information from various unstructured formats into digital, actionable insights for investment teams. | [Customer story](https://www.businesswire.com/news/home/20210914005449/en/Northern-Trust-Automates-Data-Extraction-from-Alternative-Asset-Documentation)|
+|**Old Mutual**| [**Old Mutual**](https://www.oldmutual.co.za/) is Africa's leading financial services group with a comprehensive range of investment capabilities. They're the industry leader in retirement fund solutions, investments, asset management, group risk benefits, insurance, and multi-fund management. The Old Mutual team used Microsoft Natural Language Processing and Optical Character Recognition to provide the basis for automating key customer transactions received via emails. It also offered an opportunity to identify incomplete customer requests in order to nudge customers to the correct digital channels. Old Mutual's extensible solution technology was further developed as a microservice to be consumed by any enterprise application through a secure API management layer. | [Customer story](https://customers.microsoft.com/en-us/story/1507561807660098567-old-mutual-banking-capital-markets-azure-en-south-africa)|
|**Standard Bank**| [**Standard Bank of South Africa**](https://www.standardbank.co.za/southafrica/personal/home) is Africa's largest bank by assets. Standard Bank is headquartered in Johannesburg, South Africa, and has more than 150 years of trade experience in Africa and beyond. When manual due diligence in cross-border transactions began absorbing too much staff time, the bank decided it needed a new way forward. Standard Bank uses Form Recognizer to significantly reduce its cross-border payments registration and processing time. | [Customer story](https://customers.microsoft.com/en-hk/story/1395059149522299983-standard-bank-of-south-africa-banking-capital-markets-azure-en-south-africa)| | **WEX**| [**WEX**](https://www.wexinc.com/) has developed a tool to process Explanation of Benefits documents using Form Recognizer. "The technology is truly amazing. I was initially worried that this type of solution wouldn't be feasible, but I soon realized that Form Recognizer can read virtually any document with accuracy." Matt Dallahan, Senior Vice President of Product Management and Strategy | [Blog](https://techcommunity.microsoft.com/t5/azure-ai/form-recognizer-now-reads-more-languages-processes-ids-and/ba-p/2179428)| |**Wilson Allen** | [**Wilson Allen**](https://wilsonallen.com/) took advantage of AI container support for Azure Cognitive Services and created a powerful AI solution that help firms around the world find unprecedented levels of insight in previously siloed and unstructured data. Its clients can use this data to support business development and foster client relationships. | [Customer story](https://customers.microsoft.com/story/814361-wilson-allen-partner-professional-services-azure)|
-|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the Zelros platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the Zelros platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
+|**Zelros**| [**Zelros**](http://www.zelros.com/) offers AI-powered software for the insurance industry. Insurers use the platform to take in forms and seamlessly manage customer enrollment and claims filing. The company combined its technology with Form Recognizer to automatically pull key-value pairs and text out of documents. When insurers use the platform, they can quickly process paperwork, ensure high accuracy, and redirect thousands of hours previously spent on manual data extraction toward better service. | [Customer story](https://customers.microsoft.com/story/816397-zelros-insurance-azure)|
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (
Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [Trusted launch](../virtual-machines/trusted-launch.md) for their virtual machines (VMs). When the VM is Secure Boot and vTPM enabled with the guest attestation extension installed, vTPM measurements get submitted to Azure Attestation periodically for monitoring of boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through Alerts and Recommendations.
-## Azure Attestation can run in a TEE
+## Azure Attestation runs in a TEE
Azure Attestation is critical to Confidential Computing scenarios, as it performs the following actions:
Azure Attestation is critical to Confidential Computing scenarios, as it perform
- Manages and stores tenant-specific policies. - Generates and signs a token that is used by relying parties to interact with the enclave.
-Azure Attestation is built to run in two types of environments:
-- Azure Attestation running in an SGX enabled TEE.-- Azure Attestation running in a non-TEE.-
-Azure Attestation customers have expressed a requirement for Microsoft to be operationally out of trusted computing base (TCB). This is to prevent Microsoft entities such as VM admins, host admins, and Microsoft developers from modifying attestation requests, policies, and Azure Attestation-issued tokens. Azure Attestation is also built to run in TEE, where features of Azure Attestation like quote validation, token generation, and token signing are moved into an SGX enclave.
+To keep Microsoft operationally out of the trusted computing base (TCB), critical operations of Azure Attestation like quote validation, token generation, policy evaluation, and token signing are moved into an SGX enclave.
## Why use Azure Attestation
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
To help troubleshoot issues with your runbooks running on a hybrid runbook worke
## Next steps
+* For more information on Hybrid Runbook Worker, see [Automation Hybrid Runbook Worker](automation-hybrid-runbook-worker.md).
* If your runbooks aren't completing successfully, review the troubleshooting guide for [runbook execution failures](troubleshoot/hybrid-runbook-worker.md#runbook-execution-fails). * For more information on PowerShell, including language reference and learning modules, see [PowerShell Docs](/powershell/scripting/overview). * Learn about [using Azure Policy to manage runbook execution](enforce-job-execution-hybrid-worker.md) with Hybrid Runbook Workers.
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Title: Azure Automation Hybrid Runbook Worker overview
-description: This article provides an overview of the Hybrid Runbook Worker, which you can use to run runbooks on machines in your local datacenter or cloud provider.
+description: Learn about the Hybrid Runbook Worker, including how to install it and run runbooks on machines in your local datacenter or cloud provider.
Last updated 11/11/2021
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: - The latest releases
+- New features
+- Improvements to existing features
- Known issues - Bug fixes + This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md
Learn more about backup to URL here:
RESTORE DATABASE <database name> FROM URL = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/<file name>.bak' WITH MOVE 'Test' to '/var/opt/mssql/data/<file name>.mdf' ,MOVE 'Test_log' to '/var/opt/mssql/data/<file name>.ldf'
- ,RECOVERY
- ,REPLACE
- ,STATS = 5;
+ ,RECOVERY;
GO ```
Prepare and run the RESTORE command to restore the backup file to the Azure SQL
RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/<file name>.bak' WITH MOVE '<database name>' to '/var/opt/mssql/data/<file name>.mdf' ,MOVE '<database name>_log' to '/var/opt/mssql/data/<file name>_log.ldf'
-,RECOVERY
-,REPLACE
-,STATS = 5;
+,RECOVERY;
GO ```
Example:
RESTORE DATABASE test FROM DISK = '/var/opt/mssql/data/test.bak' WITH MOVE 'test' to '/var/opt/mssql/data/test.mdf' ,MOVE 'test_log' to '/var/opt/mssql/data/test_log.ldf'
-,RECOVERY
-,REPLACE
-,STATS = 5;
+,RECOVERY;
GO ```
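Once the restore completes, a quick check confirms the database is online (a sketch; the database name matches the example above):

```sql
SELECT name, state_desc FROM sys.databases WHERE name = 'test';
GO
```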
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
description: Understand the default Redis configuration for Azure Cache for Redi
Previously updated : 03/22/2022 Last updated : 06/07/2022 -+
Use the **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemo
For more information about `maxmemory` policies, see [Eviction policies](https://redis.io/topics/lru-cache#eviction-policies).
-The **maxmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
+The **maxmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
-The **maxfragmentationmemory-reserved** setting configures the amount of memory, in MB per instance in a cluster, that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
+The **maxfragmentationmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Available metrics and reporting intervals](cache-how-to-monitor.md#available-metrics-and-reporting-intervals).
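If you prefer scripting over the portal, a sketch using the Azure CLI's generic update arguments (the cache name is a placeholder; values are interpreted in megabytes):

```azurecli
az redis update \
    --name myCache \
    --resource-group myResourceGroup \
    --set "redisConfiguration.maxmemory-reserved"="200" "redisConfiguration.maxfragmentationmemory-reserved"="200"
```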
The settings in the **Administration** section allow you to perform the followin
### Import/Export
-Import/Export is an Azure Cache for Redis data management operation, which allows you to import and export data in the cache by importing and exporting an Azure Cache for Redis Database (RDB) snapshot from a premium cache to a page blob in an Azure Storage Account. Import/Export enables you to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
+Import/Export is an Azure Cache for Redis data management operation that allows you to import and export data in the cache. You can import and export an Azure Cache for Redis Database (RDB) snapshot from a premium cache to a page blob in an Azure Storage Account. Use Import/Export to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
Import can be used to bring Redis compatible RDB files from any Redis server running in any cloud or environment, including Redis running on Linux, Windows, or any cloud provider such as Amazon Web Services and others. Importing data is an easy way to create a cache with pre-populated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory, and then inserts the keys into the cache.
-Export allows you to export the data stored in Azure Cache for Redis to Redis compatible RDB files. You can use this feature to move data from one Azure Cache for Redis instance to another or to another Redis server. During the export process, a temporary file is created on the VM that hosts the Azure Cache for Redis server instance, and the file is uploaded to the designated storage account. When the export operation completes with either a status of success or failure, the temporary file is deleted.
+Export allows you to export the data stored in Azure Cache for Redis to Redis compatible RDB files. You can use this feature to move data from one Azure Cache for Redis instance to another or to another Redis server. During the export process, a temporary file is created on the VM that hosts the Azure Cache for Redis server instance. The temporary file is uploaded to the designated storage account. When the export operation completes with either a status of success or failure, the temporary file is deleted.
> [!IMPORTANT] > Import/Export is only available for Premium tier caches. For more information and instructions, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
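If you script cache management, the portal isn't required for this operation. A minimal sketch, assuming a premium cache and an exported blob addressable by a SAS URL; `myPremiumCache`, `myGroup`, and the URL are placeholder values.

```azurecli
# Import previously exported RDB blobs into a premium cache.
# --files takes one or more SAS URLs granting read access to the blobs.
az redis import --name myPremiumCache --resource-group myGroup \
    --files "https://mystorageaccount.blob.core.windows.net/cachesaves/myexport.rdb?<sas-token>"
```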
New Azure Cache for Redis instances are configured with the following default Re
| | | |
| `databases` |16 |The default number of databases is 16, but you can configure a different number based on the pricing tier.<sup>1</sup> The default database is DB 0; you can select a different one on a per-connection basis using `connection.GetDatabase(dbid)` where `dbid` is a number between `0` and `databases - 1`. |
| `maxclients` |Depends on the pricing tier<sup>2</sup> |This value is the maximum number of connected clients allowed at the same time. Once the limit is reached, Redis closes all the new connections, returning a 'max number of clients reached' error. |
-| `maxmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
-| `maxfragmentationmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they are re-evaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
+| `maxmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they're reevaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
+| `maxfragmentationmemory-reserved` | 10% of `maxmemory` | The allowed range for `maxfragmentationmemory-reserved` is 10% - 60% of `maxmemory`. If you try to set these values lower than 10% or higher than 60%, they're reevaluated and set to the 10% minimum and 60% maximum. The values are rendered in megabytes. |
| `maxmemory-policy` |`volatile-lru` | Maxmemory policy is the setting used by the Redis server to select what to remove when `maxmemory` (the size of the cache that you selected when you created the cache) is reached. With Azure Cache for Redis, the default setting is `volatile-lru`. This setting removes the keys with an expiration set using an LRU algorithm. This setting can be configured in the Azure portal. For more information, see [Memory policies](#memory-policies). |
| `maxmemory-samples` |3 |To save memory, LRU and minimal TTL algorithms are approximated algorithms instead of precise algorithms. By default, Redis checks three keys and picks the one that was used least recently. |
| `lua-time-limit` |5,000 |Max execution time of a Lua script in milliseconds. If the maximum execution time is reached, Redis logs that a script is still in execution after the maximum allowed time, and starts to reply to queries with an error. |
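To verify which values a running cache actually uses, you can read the `redisConfiguration` block off the resource itself. A small sketch with placeholder names:

```azurecli
# Print the effective Redis configuration values for an existing cache.
az redis show --name myCache --resource-group myGroup --query "redisConfiguration"
```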
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Previously updated : 07/31/2017 Last updated : 06/07/2022
This article provides a guide for importing and exporting data with Azure Cache
> [!IMPORTANT] > Import/Export is only available for [Premium tier](cache-overview.md#service-tiers) caches.
->
->
## Import
Use import to bring Redis compatible RDB files from any Redis server running in
1. To import one or more exported cache blobs, [browse to your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the Azure portal and select **Import data** from the **Resource menu**.

   ![Import data](./media/cache-how-to-import-export-data/cache-import-data.png)

2. Select **Choose Blob(s)** and select the storage account that contains the data to import.

   ![Choose storage account](./media/cache-how-to-import-export-data/cache-import-choose-storage-account.png)

3. Select the container that contains the data to import.

   ![Choose container](./media/cache-how-to-import-export-data/cache-import-choose-container.png)

4. Select one or more blobs to import by selecting the area to the left of the blob name, and then **Select**.

   ![Choose blobs](./media/cache-how-to-import-export-data/cache-import-choose-blobs.png)

5. Select **Import** to begin the import process.

> [!IMPORTANT]
Export allows you to export the data stored in Azure Cache for Redis to Redis co
   > ![Storage account](./media/cache-how-to-import-export-data/cache-export-data-choose-account.png)

3. Choose the blob container you want, then **Select**. To use a new container, select **Add Container** to add it first and then select it from the list.

   ![On Containers for contoso55, the + Container option is highlighted. There is one container in the list, cachesaves, and it is selected and highlighted. The Selection option is selected and highlighted.](./media/cache-how-to-import-export-data/cache-export-data-container.png)

4. Type a **Blob name prefix** and select **Export** to start the export process. The blob name prefix is used to prefix the names of files generated by this export operation.

   ![Export](./media/cache-how-to-import-export-data/cache-export-data.png)
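The same export can be scripted. A minimal sketch, assuming the container SAS URL grants write permissions; all names shown are placeholders.

```azurecli
# Export cache data as RDB blobs whose names start with the given prefix.
# --container takes a SAS URL for the destination blob container.
az redis export --name myPremiumCache --resource-group myGroup \
    --prefix myexport \
    --container "https://mystorageaccount.blob.core.windows.net/cachesaves?<sas-token>"
```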
Import/Export is available only in the premium pricing tier.
### Can I import data from any Redis server?
-Yes, you can importing data exported from Azure Cache for Redis instances, and you can import RDB files from any Redis server running in any cloud or environment. The environments include Linux, Windows, or cloud providers such as Amazon Web Services. To do import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. For example, you might want to export the data from your production cache and import it into a cache that is used as part of a staging environment for testing or migration.
+Yes, you can import data that was exported from Azure Cache for Redis instances. You can import RDB files from any Redis server running in any cloud or environment, including Linux, Windows, or cloud providers such as Amazon Web Services. To import this data, upload the RDB file from the Redis server you want into a page or block blob in an Azure Storage Account. Then, import it into your premium Azure Cache for Redis instance. For example, you might want to export the data from your production cache and import it into a cache that is used as part of a staging environment for testing or migration.
> [!IMPORTANT] > To successfully import data exported from Redis servers other than Azure Cache for Redis when using a page blob, the page blob size must be aligned on a 512 byte boundary. For sample code to perform any required byte padding, see [Sample page blob upload](https://github.com/JimRoberts-MS/SamplePageBlobUpload).
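If you stage the RDB file yourself, the upload can also be done with the Azure CLI. A sketch with placeholder names; it assumes the file has already been padded to a 512-byte boundary as described above.

```azurecli
# Upload a 512-byte aligned RDB file as a page blob, ready for import.
az storage blob upload --account-name mystorageaccount \
    --container-name cachesaves --name dump.rdb \
    --file ./dump.rdb --type page
```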
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
Title: Configure data persistence - Premium Azure Cache for Redis description: Learn how to configure and manage data persistence for your Premium tier Azure Cache for Redis instances - Last updated 05/17/2022+ # Configure data persistence for a Premium Azure Cache for Redis instance
Azure Cache for Redis offers Redis persistence using the Redis database (RDB) an
- **RDB persistence** - When you use RDB persistence, Azure Cache for Redis persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account. The configurable backup frequency determines how often to persist the snapshot. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the most recent snapshot. Learn more about the [advantages](https://redis.io/topics/persistence#rdb-advantages) and [disadvantages](https://redis.io/topics/persistence#rdb-disadvantages) of RDB persistence. - **AOF persistence** - When you use AOF persistence, Azure Cache for Redis saves every write operation to a log. The log is saved at least once per second into an Azure Storage account. If a catastrophic event occurs that disables both the primary and replica cache, the cache is reconstructed using the stored write operations. Learn more about the [advantages](https://redis.io/topics/persistence#aof-advantages) and [disadvantages](https://redis.io/topics/persistence#aof-disadvantages) of AOF persistence.
-Azure Cache for Redis persistence features are intended to be used to restore data after data loss, not importing it to a new cache. You cannot import from AOF page blob backups to a new cache. To export data for importing back to a new cache, use the export RDB feature or automatic recurring RDB export. For more information on importing to a new cache, see [Import](cache-how-to-import-export-data.md#import).
+Azure Cache for Redis persistence features are intended to be used to restore data after data loss, not importing it to a new cache. You can't import from AOF page blob backups to a new cache. To export data for importing back to a new cache, use the export RDB feature or automatic recurring RDB export. For more information on importing to a new cache, see [Import](cache-how-to-import-export-data.md#import).
> [!NOTE] > Importing from AOF page blob backups to a new cache is not a supported option.
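Persistence can also be turned on when a Premium cache is created, through `redisConfiguration` keys. The following is a hedged sketch assuming the documented `rdb-backup-*` keys; the cache name, resource group, and connection string are placeholders.

```azurecli
# Create a Premium cache with RDB persistence, snapshotting every 60 minutes.
az redis create --name myPremiumCache --resource-group myGroup \
    --location eastus --sku Premium --vm-size p1 \
    --redis-configuration '{"rdb-backup-enabled":"true","rdb-backup-frequency":"60","rdb-storage-connection-string":"<storage-connection-string>"}'
```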
Persistence writes Redis data into an Azure Storage account that you own and man
> [!NOTE] > > Azure Storage automatically encrypts data when it is persisted. You can use your own keys for the encryption. For more information, see [Customer-managed keys with Azure Key Vault](../storage/common/storage-service-encryption.md).
->
->
## Set up data persistence
Persistence writes Redis data into an Azure Storage account that you own and man
:::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Create resource.":::
-2. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
+2. On the **Create a resource** page, select **Databases** and then select **Azure Cache for Redis**.
:::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
Previously updated : 08/11/2020 Last updated : 06/07/2022+ # Enable zone redundancy for Azure Cache for Redis+ In this article, you'll learn how to configure a zone-redundant Azure Cache instance using the Azure portal. Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/availability.md) and are highly available, they're susceptible to datacenter-level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../availability-zones/az-overview.md). It provides higher resilience and availability.
Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in r
> Data transfer between Azure Availability Zones will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/). ## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
## Create a cache+ To create a cache, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.
To create a cache, follow these steps:
1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**. :::image type="content" source="media/cache-create/new-cache-menu.png" alt-text="Select Azure Cache for Redis.":::
-
+ 1. On the **Basics** page, configure the settings for your new cache.
-
+ | Setting | Suggested value | Description | | | - | -- |
- | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
- | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
+ | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
+ | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
| **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
| **Cache type** | Select a [Premium or Enterprise tier](https://azure.microsoft.com/pricing/details/cache/) cache. | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-
+ 1. On the **Advanced** page, for a Premium tier cache, choose **Replica count**.
-
+ :::image type="content" source="media/cache-how-to-multi-replicas/create-multi-replicas.png" alt-text="Replica count":::
-1. Select **Availability zones**.
-
+1. Select **Availability zones**.
+ :::image type="content" source="media/cache-how-to-zone-redundancy/create-zones.png" alt-text="Availability zones"::: 1. Configure your settings for clustering and/or RDB persistence.
To create a cache, follow these steps:
> Zone redundancy doesn't support AOF persistence or work with geo-replication currently. >
-1. Select **Create**.
-
- It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
-
+1. Select **Create**.
+
+ It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+ > [!NOTE]
- > Availability zones can't be changed or enabled after a cache is created.
- >
+ > Availability zones can't be changed or enabled after a cache is created.
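The portal flow above can be reproduced from the command line. A minimal sketch with placeholder names: `--zones` picks the Availability Zones and `--replicas-per-master` adds the extra replicas needed to span more than two zones.

```azurecli
# Create a Premium cache spread across three Availability Zones.
az redis create --name myZrCache --resource-group myGroup \
    --location eastus2 --sku Premium --vm-size p1 \
    --zones 1 2 3 --replicas-per-master 2
```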
## Zone Redundancy FAQ
Zone redundancy is available only in Azure regions that have Availability Zones.
### Why can't I select all three zones during cache create?
-A Premium cache has one primary and one replica nodes by default. To configure zone redundancy for more than two Availability Zones, you need to add [more replicas](cache-how-to-multi-replicas.md) to the cache you're creating.
+A Premium cache has one primary and one replica node by default. To configure zone redundancy for more than two Availability Zones, you need to add [more replicas](cache-how-to-multi-replicas.md) to the cache you're creating.
### Can I update my existing Premium cache to use zone redundancy?
-No, this is not supported currently.
+No, this isn't supported currently.
### How much does it cost to replicate my data across Azure Availability Zones?
-When using zone redundancy, configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+When using zone redundancy configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
## Next Steps+ Learn more about Azure Cache for Redis features.
-> [!div class="nextstepaction"]
-> [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
Title: Create a JavaScript function using Visual Studio Code - Azure Functions description: Learn how to create a JavaScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/18/2021 Last updated : 06/07/2022 adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
Before you get started, make sure you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-+ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/) (preview). Use the `node --version` command to check your version.
++ [Node.js 14.x](https://nodejs.org/en/download/releases/) or [Node.js 16.x](https://nodejs.org/en/download/releases/). Use the `node --version` command to check your version. + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | [Cognitive
-| [Cognitive
+| [Cognitive
| [Cognitive | [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | | | | [Cognitive
-| [Cognitive
+| [Cognitive
| [Cognitive | [Cognitive | [Cognitive
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
Response:
} ] ```
-For more information, see [public documentation](../cognitive-services/Face/index.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
+For more information, see [public documentation](../cognitive-services/computer-vision/index-identity.yml), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
## Text Analytics
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
For AI and machine learning services availability in Azure Government, see [Prod
- Configure encryption at rest of content in Cognitive Services Custom Vision [using customer-managed keys in Azure Key Vault](../cognitive-services/custom-vision-service/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
-### [Cognitive
+### [Cognitive
- Configure encryption at rest of content in the Face service by [using customer-managed keys in Azure Key Vault](../cognitive-services/face/encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault).
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md
Title: How to secure an application in Microsoft Azure Maps with SAS token
+ Title: How to secure an Azure Maps application with a SAS token
-description: This article describes how to configure an application to be secured with SAS token authentication.
+description: Create an Azure Maps account secured with SAS token authentication.
Previously updated : 01/05/2022 Last updated : 06/08/2022
-custom.ms: subject-rbac-steps
+
-# Secure an application with SAS token
+# Secure an Azure Maps account with a SAS token
-This article describes how to create an Azure Maps account with a SAS token that can be used to call the Azure Maps REST API.
+This article describes how to create an Azure Maps account with a securely stored SAS token you can use to call the Azure Maps REST API.
## Prerequisites
-This scenario assumes:
+- An Azure subscription. If you don't already have an Azure account, [sign up for a free one](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- **Owner** role permission on the Azure subscription. You need the **Owner** permissions to:
-- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.-- The current user must have subscription `Owner` role permissions on the Azure subscription to create an [Azure Key Vault](../key-vault/general/basic-concepts.md), user-assigned managed identity, assign the managed identity a role, and create an Azure Maps account.-- Azure CLI is installed to deploy the resources. Read more on [How to install the Azure CLI](/cli/azure/install-azure-cli).-- The current user is signed-in to Azure CLI with an active Azure subscription using `az login`.
+ - Create a key vault in [Azure Key Vault](../key-vault/general/basic-concepts.md).
+ - Create a user-assigned managed identity.
+ - Assign the managed identity a role.
+ - Create an Azure Maps account.
-## Scenario: SAS token
+- [Azure CLI installed](/cli/azure/install-azure-cli) to deploy the resources.
-Applications that use SAS token authentication should store the keys in a secure store. A SAS token is a credential that grants the level of access specified during its creation to anyone who holds it, until the token expires or access is revoked. This scenario describes how to safely store your SAS token as a secret in Azure Key Vault and distribute the SAS token into a public client. Events in an applicationΓÇÖs lifecycle may generate new SAS tokens without interrupting active connections using existing tokens. To understand how to configure Azure Key Vault, see the [Azure Key Vault developer's guide](../key-vault/general/developers-guide.md).
+## Example scenario: SAS token secure storage
-The following sample scenario will perform the steps outlined below with two Azure Resource Manager (ARM) template deployments:
+A SAS token credential grants the access level it specifies to anyone who holds it, until the token expires or access is revoked. Applications that use SAS token authentication should store the keys securely.
-- Create an Azure Key Vault.-- Create a user-assigned managed identity.-- Assign Azure RBAC `Azure Maps Data Reader` role to the user-assigned managed identity.-- Create a map account with a CORS configuration and attach the user-assigned managed identity.-- Create and save a SAS token into the Azure Key Vault-- Retrieve the SAS token secret from Azure Key Vault.-- Create an Azure Maps REST API request using the SAS token.
+This scenario safely stores a SAS token as a secret in Key Vault, and distributes the token into a public client. Application lifecycle events can generate new SAS tokens without interrupting active connections that use existing tokens.
-When completed, you should see output from Azure Maps `Search Address (Non-Batch)` REST API results on PowerShell with Azure CLI. The Azure resources will be deployed with permissions to connect to the Azure Maps account with controls for maximum rate limit, allowed regions, `localhost` configured CORS policy, and Azure RBAC.
+For more information about configuring Key Vault, see the [Azure Key Vault developer's guide](../key-vault/general/developers-guide.md).
-### Azure resource deployment with Azure CLI
+The following example scenario uses two Azure Resource Manager (ARM) template deployments to do the following steps:
-The following steps describe how to create and configure an Azure Maps account with SAS token authentication. The Azure CLI is assumed to be running in a PowerShell instance.
+1. Create a key vault.
+1. Create a user-assigned managed identity.
+1. Assign Azure role-based access control (RBAC) **Azure Maps Data Reader** role to the user-assigned managed identity.
+1. Create an Azure Maps account with a [Cross Origin Resource Sharing (CORS) configuration](azure-maps-authentication.md#cross-origin-resource-sharing-cors), and attach the user-assigned managed identity.
+1. Create and save a SAS token in the Azure key vault.
+1. Retrieve the SAS token secret from the key vault.
+1. Create an Azure Maps REST API request that uses the SAS token.
-1. Register Key Vault, Managed Identities, and Azure Maps for your subscription
+When you finish, you should see Azure Maps `Search Address (Non-Batch)` REST API results on PowerShell with Azure CLI. The Azure resources deploy with permissions to connect to the Azure Maps account. There are controls for maximum rate limit, allowed regions, `localhost` configured CORS policy, and Azure RBAC.
- ```azurecli
- az provider register --namespace Microsoft.KeyVault
- az provider register --namespace Microsoft.ManagedIdentity
- az provider register --namespace Microsoft.Maps
- ```
+## Azure resource deployment with Azure CLI
+
+The following steps describe how to create and configure an Azure Maps account with SAS token authentication. In this example, Azure CLI runs in a PowerShell instance.
+
+1. Sign in to your Azure subscription with `az login`.
+
+1. Register Key Vault, Managed Identities, and Azure Maps for your subscription.
+
+ ```azurecli
+ az provider register --namespace Microsoft.KeyVault
+ az provider register --namespace Microsoft.ManagedIdentity
+ az provider register --namespace Microsoft.Maps
+ ```
-1. Retrieve your Azure AD object ID
+1. Retrieve your Azure Active Directory (Azure AD) object ID.
```azurecli $id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id") ```
-1. Create a template file `prereq.azuredeploy.json` with the following content.
+1. Create a template file named *prereq.azuredeploy.json* with the following content:
```json {
The following steps describe how to create and configure an Azure Maps account w
"objectId": { "type": "string", "metadata": {
- "description": "Specifies the object ID of a user, service principal or security group in the Azure Active Directory tenant for the vault. The object ID must be unique for the set of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets."
+ "description": "Specifies the object ID of a user, service principal, or security group in the Azure AD tenant for the vault. The object ID must be unique for the set of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets."
} }, "secretsPermissions": {
The following steps describe how to create and configure an Azure Maps account w
```
-1. Deploy the prerequisite resources. Make sure to pick the location where the Azure Maps accounts is enabled.
+1. Deploy the prerequisite resources defined in the template you created in the previous step. Supply your own value for `<group-name>`. Make sure to use the same `location` as the Azure Maps account.
- ```azurecli
- az group create --name {group-name} --location "East US"
- $outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
- ```
+ ```azurecli
+ az group create --name <group-name> --location "East US"
+ $outputs = $(az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+ ```
-1. Create a template file `azuredeploy.json` to provision the Map account, role assignment, and SAS token.
+1. Create a template file *azuredeploy.json* to provision the Azure Maps account, role assignment, and SAS token.
```json {
The following steps describe how to create and configure an Azure Maps account w
"type": "string", "defaultValue": "[guid(resourceGroup().id)]", "metadata": {
- "description": "Input string for new GUID associated with assigning built in role types"
+ "description": "Input string for new GUID associated with assigning built in role types."
} }, "startDateTime": { "type": "string", "defaultValue": "[utcNow('u')]", "metadata": {
- "description": "Current Universal DateTime in ISO 8601 'u' format to be used as start of the SAS token."
+ "description": "Current Universal DateTime in ISO 8601 'u' format to use as the start of the SAS token."
} }, "duration" : { "type": "string", "defaultValue": "P1Y", "metadata": {
- "description": "The duration of the SAS token, P1Y is maximum, ISO 8601 format is expected."
+ "description": "The duration of the SAS token. P1Y is maximum, ISO 8601 format is expected."
} }, "maxRatePerSecond": {
The following steps describe how to create and configure an Azure Maps account w
"defaultValue": [], "maxLength": 10, "metadata": {
- "description": "The specified application's web host header origins (example: https://www.azure.com) which the Maps account allows for Cross Origin Resource Sharing (CORS)."
+ "description": "The specified application's web host header origins (example: https://www.azure.com) which the Azure Maps account allows for CORS."
} }, "allowedRegions": { "type": "array", "defaultValue": [], "metadata": {
- "description": "The specified SAS token allowed locations which the token may be used."
+ "description": "The specified SAS token allowed locations where the token may be used."
} } },
The following steps describe how to create and configure an Azure Maps account w
} ```
-1. Deploy the template using ID parameters from the Azure Key Vault and managed identity resources created in the previous step. Note that when creating the SAS token, the `allowedRegions` parameter is set to `eastus`, `westus2`, and `westcentralus`. We use these locations because we plan to make HTTP requests to the `us.atlas.microsoft.com` endpoint.
+1. Deploy the template with the ID parameters from the Key Vault and managed identity resources you created in the previous step. Supply your own value for `<group-name>`. When creating the SAS token, you set the `allowedRegions` parameter to `eastus`, `westus2`, and `westcentralus`. You can then use these locations to make HTTP requests to the `us.atlas.microsoft.com` endpoint.
- > [!IMPORTANT]
- > We save the SAS token into the Azure Key Vault to prevent its credentials from appearing in the Azure deployment logs. The Azure Key Vault SAS token secret's `tags` also contain the start, expiry, and signing key name to help understand when the SAS token will expire.
+ > [!IMPORTANT]
+ > You save the SAS token in the key vault to prevent its credentials from appearing in the Azure deployment logs. The SAS token secret's `tags` also contain the start, expiry, and signing key name, to show when the SAS token will expire.
- ```azurecli
- az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
- ```
+ ```azurecli
+ az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+ ```
-1. Locate, then save a copy of the single SAS token secret from Azure Key Vault.
+1. Locate and save a copy of the single SAS token secret from Key Vault.
- ```azurecli
- $secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
- $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
- ```
+ ```azurecli
+ $secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
+ $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
+ ```
-1. Test the SAS Token by making a request to an Azure Maps endpoint. We specify the `us.atlas.microsoft.com` to ensure that our request will be routed to the US geography because our SAS Token has allowed regions within the geography.
+1. Test the SAS token by making a request to an Azure Maps endpoint. This example specifies the `us.atlas.microsoft.com` endpoint to ensure your request routes to the US geography, because your SAS token allows regions within that geography.
```azurecli
- az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+ az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
```
-## Complete example
+## Complete script example
-In the current directory of the PowerShell session you should have:
+To run the complete example, the following template files must be in the same directory as the current PowerShell session:
-- `prereq.azuredeploy.json` This creates the Key Vault and managed identity.-- `azuredeploy.json` This creates the Azure Maps account and configures the role assignment and managed identity, then stores the SAS Token into the Azure Key Vault.
+- *prereq.azuredeploy.json* to create the key vault and managed identity.
+- *azuredeploy.json* to create the Azure Maps account, configure the role assignment and managed identity, and store the SAS token in the key vault.
```powershell az login
az provider register --namespace Microsoft.ManagedIdentity
az provider register --namespace Microsoft.Maps $id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id")
-az group create --name {group-name} --location "East US"
-$outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
-az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+az group create --name <group-name> --location "East US"
+$outputs = $(az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+az deployment group create --name ExampleDeployment --resource-group <group-name> --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
$secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv) $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
-az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+```
+
+## Real-world example
+
+You can run requests to Azure Maps APIs from most clients, like C#, Java, or JavaScript. [Postman](https://learning.postman.com/docs/sending-requests/generate-code-snippets) converts an API request into a basic client code snippet in almost any programming language or framework you choose. You can use this generated code snippet in your front-end applications.
+
+The following small JavaScript code example shows how you could use your SAS token with the JavaScript [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#supplying_request_options) to get and return Azure Maps information. The example uses the Azure Maps [Get Search Address](/rest/api/maps/search/get-search-address) API version 1.0. Supply your own value for `<your SAS token>`.
+
+For this sample to work, make sure to run it from within the same origin as the `allowedOrigins` for the API call. For example, if you provide `https://contoso.com` as the `allowedOrigins` in the API call, the HTML page that hosts the JavaScript script should be `https://contoso.com`.
+
+```javascript
+async function getData(url = 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052') {
+ const response = await fetch(url, {
+ method: 'GET',
+ mode: 'cors',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'Authorization': 'jwt-sas <your SAS token>',
+ }
+ });
+ return response.json(); // parses JSON response into native JavaScript objects
+}
+
+getData('https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=1 Microsoft Way, Redmond, WA 98052')
+  .then(data => {
+    console.log(data); // JSON data parsed by the `response.json()` call
+ });
``` ## Clean up resources
az group delete --name <group-name>
## Next steps
-For more detailed examples:
+Deploy a quickstart ARM template to create an Azure Maps account that uses a SAS token:
+> [!div class="nextstepaction"]
+> [Create an Azure Maps account](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.maps/maps-use-sas)
+
+For more detailed examples, see:
> [!div class="nextstepaction"] > [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md)
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
To enable telemetry collection with Application Insights, only the Application s
|App setting name | Definition | Value | |--|:|-:| |ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux |
-|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to insure optimal performance. | `disabled` or `recommended`. |
+|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to ensure optimal performance. | `disabled` or `recommended`. |
|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with Application Insights SDK. Loads the extension side-by-side with the SDK and uses it to send telemetry (disables the Application Insights SDK). |`1`|
To enable telemetry collection with Application Insights, only the Application s
### Upgrade from versions 2.8.9 and up
-Upgrading from version 2.8.9 happens automatically, without any additional actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
+Upgrading from version 2.8.9 happens automatically, without any extra actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`. ### Upgrade from versions 1.0.0 - 2.6.5
Below is our step-by-step troubleshooting guide for extension/agent based monito
If a similar value isn't present, it means the application isn't currently running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
- - Confirm that `IKeyExists` is `true`
- If it is `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey guid to your application settings.
+ - Confirm that `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
- - In case your application refers to any Application Insights packages, for example if you've previously instrumented (or attempted to instrument) your app with the [ASP.NET Core SDK](./asp-net-core.md), enabling the App Service integration may not take effect and the data may not appear in Application Insights. To fix the issue, in portal turn on "Interop with Application Insights SDK" and you'll start seeing the data in Application Insights.
+ - If your application refers to any Application Insights packages, enabling the App Service integration may not take effect and the data may not appear in Application Insights. An example would be if you've previously instrumented, or attempted to instrument, your app with the [ASP.NET Core SDK](./asp-net-core.md). To fix the issue, in portal turn on "Interop with Application Insights SDK" and you'll start seeing the data in Application Insights.
- > [!IMPORTANT] > This functionality is in preview
Below is our step-by-step troubleshooting guide for extension/agent based monito
# [Linux](#tab/linux)
-1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
-2. Navigate to */home\LogFiles\ApplicationInsights\status* and open *status_557de146e7fa_27_1.json*.
-
- Confirm that `AppAlreadyInstrumented` is set to false, `AiHostingStartupLoaded` to true and `IKeyExists` to true.
-
- Below is an example of the JSON file:
-
- ```json
- "AppType":".NETCoreApp,Version=v6.0",
-
- "MachineName":"557de146e7fa",
-
- "PID":"27",
-
- "AppDomainId":"1",
-
- "AppDomainName":"dotnet6demo",
-
- "InstrumentationEngineLoaded":false,
-
- "InstrumentationEngineExtensionLoaded":false,
-
- "HostingStartupBootstrapperLoaded":true,
-
- "AppAlreadyInstrumented":false,
-
- "AppDiagnosticSourceAssembly":"System.Diagnostics.DiagnosticSource, Version=6.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51",
-
- "AiHostingStartupLoaded":true,
-
- "IKeyExists":true,
-
- "IKey":"00000000-0000-0000-0000-000000000000",
-
- "ConnectionString":"InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://westus-0.in.applicationinsights.azure.com/"
-
- ```
-
- If `AppAlreadyInstrumented` is true this indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off.
+1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
+1. Browse to `https://<your-site-name>.scm.azurewebsites.net/ApplicationInsights`.
+1. Within this site, confirm:
+ * The status source exists and looks like: `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`
+ * `Auto-Instrumentation enabled successfully`, is displayed. If a similar value isn't present, it means the application isn't running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available.
+ * `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings.
+ ##### No Data
Below is our step-by-step troubleshooting guide for extension/agent based monito
-#### Default website deployed with web apps does not support automatic client-side monitoring
+#### Default website deployed with web apps doesn't support automatic client-side monitoring
-When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads a ASP.NET managed web part in IIS. This allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
+When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This behavior allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring.
If you wish to test out codeless server and client-side monitoring for ASP.NET Core in an Azure App Services web app, we recommend following the official guides for [creating a ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md). Then use the instructions in the current article to enable monitoring. [!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)]
-### PHP and WordPress are not supported
+### PHP and WordPress aren't supported
PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. However, you can manually instrument client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your web pages using the [JavaScript SDK](./javascript.md).
The table below provides a more detailed explanation of what these values mean,
|Problem Value |Explanation |Fix | |- |-||
-| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio may add references to `Microsoft.ApplicationInsights`. |
+| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio reference `Microsoft.ApplicationInsights`. |
|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of Microsoft.ApplicationsInsights dll in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). |
-|`IKeyExists:false`|This value indicates that the instrumentation key is not present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings. |
+|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings. |
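To add the missing settings from a script instead of the portal, the standard App Service settings command applies. A sketch; the web app name, resource group, and key values are placeholders.

```azurecli
# Add the settings the Application Insights extension expects.
az webapp config appsettings set --name myWebApp --resource-group myGroup \
    --settings APPINSIGHTS_INSTRUMENTATIONKEY=<ikey-guid> \
    "APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=<ikey-guid>"
```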
## Release notes
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-failure-diagnostics.md
Title: Smart Detection - failure anomalies, in Application Insights | Microsoft Docs
+ Title: Smart Detection of Failure Anomalies in Application Insights | Microsoft Docs
description: Alerts you to unusual changes in the rate of failed requests to your web app, and provides diagnostic analysis. No configuration is needed. Last updated 12/18/2018
This feature works for any web app, hosted in the cloud or on your own servers, that generates application request or dependency data. For example, it works if you have a worker role that calls [TrackRequest()](./api-custom-events-metrics.md#trackrequest) or [TrackDependency()](./api-custom-events-metrics.md#trackdependency).
-After setting up [Application Insights for your project](./app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of failure anomalies takes 24 hours to learn the normal behavior of your app, before it is switched on and can send alerts.
+After setting up [Application Insights for your project](./app-insights-overview.md), and if your app generates a certain minimum amount of data, Smart Detection of Failure Anomalies takes 24 hours to learn the normal behavior of your app, before it is switched on and can send alerts.
Here's a sample alert:
Click the alert to configure it.
:::image type="content" source="./media/proactive-failure-diagnostics/032.png" alt-text="Rule configuration screen." lightbox="./media/proactive-failure-diagnostics/032.png":::
-Notice that you can disable or delete a Failure Anomalies alert rule, but you can't create another one on the same Application Insights resource.
+## Delete alerts
+
+You can disable or delete a Failure Anomalies alert rule, but once deleted you can't create another one for the same Application Insights resource.
+
+Notice that if you delete an Application Insights resource, the associated Failure Anomalies alert rule doesn't get deleted automatically. You can do so manually on the Alert rules page or with the following Azure CLI command:
+
+```azurecli
+az resource delete --ids <Resource ID of Failure Anomalies alert rule>
+```
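If you don't have the rule's resource ID at hand, you can list candidates first. A hedged sketch: it assumes Failure Anomalies rules are exposed as the `Microsoft.AlertsManagement/smartDetectorAlertRules` resource type, and `myGroup` is a placeholder.

```azurecli
# List Failure Anomalies alert rule IDs in a resource group.
az resource list --resource-group myGroup \
    --resource-type "Microsoft.AlertsManagement/smartDetectorAlertRules" \
    --query "[].id" --output tsv
```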
## Example of Failure Anomalies alert webhook payload
Click **Alerts** in the Application Insights resource page to get to the most re
:::image type="content" source="./media/proactive-failure-diagnostics/070.png" alt-text="Alerts summary." lightbox="./media/proactive-failure-diagnostics/070.png"::: ## What's the difference ...
-Smart Detection of failure anomalies complements other similar but distinct features of Application Insights.
+Smart Detection of Failure Anomalies complements other similar but distinct features of Application Insights.
-* [metric alerts](../alerts/alerts-log.md) are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates, page load times, and so on. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of failure anomalies covers a small range of critical metrics (currently only failed request rate), designed to notify you in near real-time manner once your web app's failed request rate increases compared to web app's normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response changes in the behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues.
+* [metric alerts](../alerts/alerts-log.md) are set by you and can monitor a wide range of metrics such as CPU occupancy, request rates, page load times, and so on. You can use them to warn you, for example, if you need to add more resources. By contrast, Smart Detection of Failure Anomalies covers a small range of critical metrics (currently only failed request rate), designed to notify you in near real time once your web app's failed request rate increases compared to the web app's normal behavior. Unlike metric alerts, Smart Detection automatically sets and updates thresholds in response to changes in the behavior. Smart Detection also starts the diagnostic work for you, saving you time in resolving issues.
-* [Smart Detection of performance anomalies](proactive-performance-diagnostics.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of failure anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for failure anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected.
+* [Smart Detection of performance anomalies](proactive-performance-diagnostics.md) also uses machine intelligence to discover unusual patterns in your metrics, and no configuration by you is required. But unlike Smart Detection of Failure Anomalies, the purpose of Smart Detection of performance anomalies is to find segments of your usage manifold that might be badly served - for example, by specific pages on a specific type of browser. The analysis is performed daily, and if any result is found, it's likely to be much less urgent than an alert. By contrast, the analysis for Failure Anomalies is performed continuously on incoming application data, and you will be notified within minutes if server failure rates are greater than expected.
## If you receive a Smart Detection alert *Why have I received this alert?*
These diagnostic tools help you inspect the data from your app:
Smart detections are automatic. But maybe you'd like to set up some more alerts? * [Manually configured metric alerts](../alerts/alerts-log.md)
-* [Availability web tests](./monitor-web-app-availability.md)
+* [Availability web tests](./monitor-web-app-availability.md)
azure-monitor Change Analysis Custom Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-custom-filters.md
Last updated 05/09/2022
+ms.reviewer: cawa
# Navigate to a change using custom filters in Change Analysis
azure-monitor Change Analysis Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-powershell.md
ms.devlang: azurepowershell
Last updated 04/11/2022
+ms.reviewer: cawa
# Azure PowerShell for Change Analysis in Azure Monitor (preview)
azure-monitor Change Analysis Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-query.md
ms.contributor: cawa
Last updated 05/12/2022
+ms.reviewer: cawa
# Pin and share a Change Analysis query to the Azure dashboard
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
ms.contributor: cawa
Last updated 05/20/2022 -+ # Use Change Analysis in Azure Monitor (preview)
azure-monitor Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cli-samples.md
- Title: Azure Monitor CLI samples
-description: Sample CLI commands for Azure Monitor features. Azure Monitor is a Microsoft Azure service, which allows you to send alert notifications, call web URLs based on values of configured telemetry data, and autoscale Cloud Services, Virtual Machines, and Web Apps.
--- Previously updated : 05/16/2018 ----
-# Azure Monitor CLI samples
-This article shows you sample command-line interface (CLI) commands to help you access Azure Monitor features. Azure Monitor allows you to autoscale Cloud Services, Virtual Machines, and Web Apps and to send alert notifications or call web URLs based on values of configured telemetry data.
-
-## Prerequisites
-
-If you haven't already installed the Azure CLI, follow the instructions for [Install the Azure CLI](/cli/azure/install-azure-cli). You can also use [Azure Cloud Shell](/azure/cloud-shell) to run the CLI as an interactive experience in your browser. See a full reference of all available commands in the [Azure Monitor CLI reference](/cli/azure/monitor).
-
-## Log in to Azure
-The first step is to log in to your Azure account.
-
-```azurecli
-az login
-```
-
-After running this command, you have to sign in via the instructions on the screen. All commands work in the context of your default subscription.
-
-List the details of your current subscription.
-
-```azurecli
-az account show
-```
-
-Change working context to a different subscription.
-
-```azurecli
-az account set -s <Subscription ID or name>
-```
-
-View a list of all supported Azure Monitor commands.
-
-```azurecli
-az monitor -h
-```
-
-## View activity log
-
-View a list of activity log events.
-
-```azurecli
-az monitor activity-log list
-```
-
-View all available options.
-
-```azurecli
-az monitor activity-log list -h
-```
-
-List logs by a resourceGroup.
-
-```azurecli
-az monitor activity-log list --resource-group <group name>
-```
-
-List logs by caller.
-
-```azurecli
-az monitor activity-log list --caller myname@company.com
-```
-
-List logs by caller on a resource type, within a date range.
-
-```azurecli
-az monitor activity-log list --resource-provider Microsoft.Web \
- --caller myname@company.com \
- --start-time 2016-03-08T00:00:00Z \
- --end-time 2016-03-16T00:00:00Z
-```
-
-## Work with alerts
-> [!NOTE]
-> Only alerts (classic) are supported in the CLI at this time.
-
-### Get alert (classic) rules in a resource group
-
-```azurecli
-az monitor activity-log alert list --resource-group <group name>
-az monitor activity-log alert show --resource-group <group name> --name <alert name>
-```
-
-### Create a metric alert (classic) rule
-
-```azurecli
-az monitor alert create --name <alert name> --resource-group <group name> \
- --action email <email1 email2 ...> \
- --action webhook <URI> \
- --target <target object ID> \
- --condition "<METRIC> {>,>=,<,<=} <THRESHOLD> {avg,min,max,total,last} ##h##m##s"
-```
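-
-For illustration, here's a hypothetical concrete version of the template above; the alert name, resource group, email address, and target resource ID are placeholders, not values from this article:
-
-```azurecli
-# Alert when average CPU exceeds 90% over a 5-minute window.
-az monitor alert create --name cpu_gt_90 --resource-group myrg1 \
-    --action email admin@contoso.com \
-    --target /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Compute/virtualMachines/myvm1 \
-    --condition "Percentage CPU > 90 avg 5m"
-```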
-
-### Delete an alert (classic) rule
-
-```azurecli
-az monitor alert delete --name <alert name> --resource-group <group name>
-```
-
-## Log profiles
-
-Use the information in this section to work with log profiles.
-
-### Get a log profile
-
-```azurecli
-az monitor log-profiles list
-az monitor log-profiles show --name <profile name>
-```
-
-### Add a log profile with retention
-
-```azurecli
-az monitor log-profiles create --name <profile name> --location <location of profile> \
- --locations <locations to monitor activity in: location1 location2 ...> \
- --categories <categoryName1 categoryName2 ...> \
- --days <# days to retain> \
- --enabled true \
- --storage-account-id <storage account ID to store the logs in>
-```
-
-### Add a log profile with retention and EventHub
-
-```azurecli
-az monitor log-profiles create --name <profile name> --location <location of profile> \
- --locations <locations to monitor activity in: location1 location2 ...> \
- --categories <categoryName1 categoryName2 ...> \
- --days <# days to retain> \
- --enabled true \
- --storage-account-id <storage account ID to store the logs in> \
- --service-bus-rule-id <service bus rule ID to stream to>
-```
-
-### Remove a log profile
-
-```azurecli
-az monitor log-profiles delete --name <profile name>
-```
-
-## Diagnostics
-
-Use the information in this section to work with diagnostic settings.
-
-### Get a diagnostic setting
-
-```azurecli
-az monitor diagnostic-settings list --resource <target resource ID>
-```
-
-### Create a diagnostic setting
-
-```azurecli
-az monitor diagnostic-settings create --name <diagnostic name> \
- --storage-account <storage account ID> \
- --resource <target resource object ID> \
- --logs '[
- {
- "category": <category name>,
- "enabled": true,
- "retentionPolicy": {
- "days": <# days to retain>,
- "enabled": true
- }
- }]'
-```
-
-### Delete a diagnostic setting
-
-```azurecli
-az monitor diagnostic-settings delete --name <diagnostic name> \
- --resource <target resource ID>
-```
-
-## Autoscale
-
-Use the information in this section to work with autoscale settings. You need to modify these examples.
-
-### Get autoscale settings for a resource group
-
-```azurecli
-az monitor autoscale list --resource-group <group name>
-```
-
-### Get autoscale settings by name in a resource group
-
-```azurecli
-az monitor autoscale show --name <settings name> --resource-group <group name>
-```
-
-### Set autoscale settings
-
-```azurecli
-az monitor autoscale create --name <settings name> --resource-group <group name> \
- --count <# instances> \
- --resource <target resource ID>
-```
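-
-The setting above only fixes the instance count. Scale rules are attached separately; the following is a minimal sketch, assuming the `az monitor autoscale rule create` command available in recent Azure CLI versions (all names are placeholders):
-
-```azurecli
-# Scale out by one instance when average CPU exceeds 75% over 5 minutes.
-az monitor autoscale rule create --resource-group <group name> \
-    --autoscale-name <settings name> \
-    --condition "Percentage CPU > 75 avg 5m" \
-    --scale out 1
-```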
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
Title: Configure Container insights agent data collection | Microsoft Docs
description: This article describes how you can configure the Container insights agent to control stdout/stderr and environment variables log collection. Last updated 10/09/2020+ # Configure agent data collection for Container insights
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
Title: Kubernetes monitoring with Container insights | Microsoft Docs
description: This article describes how you can view and analyze the performance of a Kubernetes cluster with Container insights. Last updated 03/26/2020+ # Monitor your Kubernetes cluster performance with Container insights
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Title: Monitoring cost for Container insights | Microsoft Docs
description: This article describes the monitoring cost for metrics & inventory data collected by Container insights to help customers manage their usage and associated costs. Last updated 05/29/2020+ # Understand monitoring costs for Container insights
azure-monitor Container Insights Deployment Hpa Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-deployment-hpa-metrics.md
Title: Deployment & HPA metrics with Container insights | Microsoft Docs
description: This article describes what deployment & HPA (Horizontal pod autoscaler) metrics are collected with Container insights. Last updated 08/09/2020+ # Deployment & HPA metrics with Container insights
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Title: Enable AKS Monitoring Addon using Azure Policy
description: Describes how to enable AKS Monitoring Addon using Azure Custom Policy. Last updated 02/04/2021+ # Enable AKS monitoring addon using Azure Policy
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.+ # Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS)
Last updated 05/24/2022 + # Enable monitoring of Azure Kubernetes Service (AKS) cluster already deployed
azure-monitor Container Insights Enable New Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-new-cluster.md
Last updated 05/24/2022 ms.devlang: azurecli+ # Enable monitoring of a new Azure Kubernetes Service (AKS) cluster
azure-monitor Container Insights Gpu Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-gpu-monitoring.md
Title: Configure GPU monitoring with Container insights
description: This article describes how you can configure monitoring Kubernetes clusters with NVIDIA and AMD GPU enabled nodes with Container insights. Last updated 05/24/2022+ # Configure GPU monitoring with Container insights
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
Title: Configure Hybrid Kubernetes clusters with Container insights | Microsoft
description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environment. Last updated 06/30/2020+ # Configure hybrid Kubernetes clusters with Container insights
azure-monitor Container Insights Livedata Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-deployments.md
description: This article describes the real-time view of Kubernetes Deployments
Last updated 10/15/2019 + # How to view Deployments (preview) in real-time
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
description: This article describes the real-time view of metrics without using
Last updated 05/24/2022 + # How to view metrics in real-time
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
description: This article describes the real-time view of Kubernetes logs, event
Last updated 05/24/2022 + # How to view Kubernetes logs, events, and pod metrics in real-time
azure-monitor Container Insights Livedata Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-setup.md
description: This article describes how to set up the real-time view of containe
Last updated 05/24/2022 + # How to configure Live Data in Container insights
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
Title: Log alerts from Container insights | Microsoft Docs
description: This article describes how to create custom log alerts for memory and CPU utilization from Container insights. Last updated 07/29/2021+
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: How to query logs from Container insights
description: Container insights collects metrics and log data and this article describes the records and includes sample queries. Last updated 07/19/2021+
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Title: Configure ContainerLogv2 schema (preview) for Container Insights
description: Switch your ContainerLog table to the ContainerLogv2 schema - Last updated 05/11/2022+ # Enable ContainerLogV2 schema (preview)
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Title: How to manage the Container insights agent | Microsoft Docs
description: This article describes managing the most common maintenance tasks with the containerized Log Analytics agent used by Container insights. Last updated 07/21/2020-+ # How to manage the Container insights agent
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Title: Metric alerts from Container insights
description: This article reviews the recommended metric alerts available from Container insights in public preview. Last updated 05/24/2022-+ # Recommended metric alerts (preview) from Container insights
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Title: Enable Container insights
description: This article describes how to enable and configure Container insights so that you can understand how your container is performing and what performance-related issues have been identified. Last updated 05/24/2022+ # Enable Container insights
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
description: This article describes how you can stop monitoring of your hybrid K
Last updated 05/24/2022 -+ # How to stop monitoring your hybrid cluster
azure-monitor Container Insights Optout Openshift V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v3.md
description: This article describes how you can stop monitoring of your Azure Re
Last updated 05/24/2022 +
azure-monitor Container Insights Optout Openshift V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-openshift-v4.md
Title: How to stop monitoring your Azure and Red Hat OpenShift v4 cluster | Micr
description: This article describes how you can stop monitoring of your Azure Red Hat OpenShift and Red Hat OpenShift version 4 cluster with Container insights. Last updated 05/24/2022+
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
Last updated 05/24/2022 ms.devlang: azurecli+
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
description: This article describes Container insights that monitors AKS Contain
Last updated 09/08/2020-+ # Container insights overview
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Title: Configure PV monitoring with Container insights | Microsoft Docs
description: This article describes how you can configure monitoring Kubernetes clusters with persistent volumes with Container insights. Last updated 05/24/2022+ # Configure PV monitoring with Container insights
azure-monitor Container Insights Prometheus Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-integration.md
Title: Configure Container insights Prometheus Integration | Microsoft Docs
description: This article describes how you can configure the Container insights agent to scrape metrics from Prometheus with your Kubernetes cluster. Last updated 04/22/2020+ # Configure scraping of Prometheus metrics with Container insights
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
description: Describes the region mappings supported between Container insights,
Last updated 05/27/2022 + # Region mappings supported by Container insights
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
Title: Reports in Container insights
description: Describes reports available to analyze data collected by Container insights. Last updated 05/24/2022+ # Reports in Container insights
azure-monitor Container Insights Transition Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-hybrid.md
description: "Learn how to migrate from using script-based hybrid monitoring solutions to Container Insights on Azure Arc-enabled Kubernetes clusters"+ # Transition to using Container Insights on Azure Arc-enabled Kubernetes
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
description: "Learn how to migrate from using the legacy OMS solution to monitoring your containers using Container Insights"+ # Transition from the Container Monitoring Solution to using Container Insights
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
Title: How to Troubleshoot Container insights | Microsoft Docs
description: This article describes how you can troubleshoot and resolve issues with Container insights. Last updated 05/24/2022+
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-update-metrics.md
description: This article describes how you update Container insights to enable
Last updated 10/09/2020 +
azure-monitor Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/containers.md
Last updated 07/06/2020+
azure-monitor Resource Manager Container Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/resource-manager-container-insights.md
Title: Resource Manager template samples for Container insights description: Sample Azure Resource Manager templates to deploy and configure Container insights. - Last updated 05/05/2022+
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/continuous-monitoring.md
description: Describes specific steps for using Azure Monitor to enable Continuo
Previously updated : 10/12/2018 Last updated : 06/07/2022
azure-monitor Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/powershell-samples.md
- Title: Azure Monitor PowerShell samples
-description: Use PowerShell to access Azure Monitor features such as autoscale, alerts, webhooks and searching Activity logs.
--- Previously updated : 2/14/2018 ----
-# Azure Monitor PowerShell samples
-This article shows you sample PowerShell commands to help you access Azure Monitor features.
-
-> [!NOTE]
-> Azure Monitor is the new name for what was called "Azure Insights" until Sept 25th, 2016. However, the namespaces and thus the following commands still contain the word *insights*.
--
-## Set up PowerShell
-If you haven't already, set up PowerShell to run on your computer. For more information, see [How to Install and Configure PowerShell](/powershell/azure/).
-
-## Examples in this article
-The examples in the article illustrate how you can use Azure Monitor cmdlets. You can also review the entire list of Azure Monitor PowerShell cmdlets at [Azure Monitor (Insights) Cmdlets](/powershell/module/az.applicationinsights).
-
-## Sign in and use subscriptions
-First, log in to your Azure subscription.
-
-```powershell
-Connect-AzAccount
-```
-
-You'll see a sign-in screen. Once you sign in, your Account, TenantID, and default Subscription ID are displayed. All the Azure cmdlets work in the context of your default subscription. To view the list of subscriptions you have access to, use the following command:
-
-```powershell
-Get-AzSubscription
-```
-
-To see your working context (which subscription your commands are run against), use the following command:
-
-```powershell
-Get-AzContext
-```
-To change your working context to a different subscription, use the following command:
-
-```powershell
-Set-AzContext -SubscriptionId <subscriptionid>
-```
--
-## Retrieve Activity log
-Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet. The following are some common examples. The Activity Log holds the last 90 days of operations. Using dates before this time results in an error message.
-
-Check the current date/time to verify what times to use in the commands below:
-```powershell
-Get-Date
-```
-
-Get log entries from this time/date to present:
-
-```powershell
-Get-AzLog -StartTime 2019-03-01T10:30
-```
-
-Get log entries between a time/date range:
-
-```powershell
-Get-AzLog -StartTime 2019-01-01T10:30 -EndTime 2019-01-01T11:30
-```
-
-Get log entries from a specific resource group:
-
-```powershell
-Get-AzLog -ResourceGroup 'myrg1'
-```
-
-Get log entries from a specific resource provider between a time/date range:
-
-```powershell
-Get-AzLog -ResourceProvider 'Microsoft.Web' -StartTime 2015-01-01T10:30 -EndTime 2015-01-01T11:30
-```
-
-Get all log entries with a specific caller:
-
-```powershell
-Get-AzLog -Caller 'myname@company.com'
-```
-
-The following command retrieves the last 1000 events from the activity log:
-
-```powershell
-Get-AzLog -MaxRecord 1000
-```
-
-`Get-AzLog` supports many other parameters. See the `Get-AzLog` reference for more information.
-
-> [!NOTE]
-> `Get-AzLog` only provides 15 days of history. Using the **-MaxRecord** parameter allows you to query the last N events, beyond 15 days. To access events older than 15 days, use the REST API or SDK (C# sample using the SDK). If you do not include **StartTime**, then the default value is **EndTime** minus one hour. If you do not include **EndTime**, then the default value is the current time. All times are in UTC.
->
->
-
-## Retrieve alerts history
-To view all alert events, you can query the Azure Resource Manager logs using the following examples.
-
-```powershell
-Get-AzLog -Caller "Microsoft.Insights/alertRules" -DetailedOutput -StartTime 2015-03-01
-```
-
-To view the history for a specific alert rule, you can use the `Get-AzAlertHistory` cmdlet, passing in the resource ID of the alert rule.
-
-```powershell
-Get-AzAlertHistory -ResourceId /subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/alertrules/myalert -StartTime 2016-03-1 -Status Activated
-```
-
-The `Get-AzAlertHistory` cmdlet supports various parameters. For more information, see [Get-AlertHistory](/previous-versions/azure/mt282453(v=azure.100)).
-
-## Retrieve information on alert rules
-All of the following commands act on a Resource Group named "montest".
-
-View all the properties of the alert rule:
-
-```powershell
-Get-AzAlertRule -Name simpletestCPU -ResourceGroup montest -DetailedOutput
-```
-
-Retrieve all alerts on a resource group:
-
-```powershell
-Get-AzAlertRule -ResourceGroup montest
-```
-
-Retrieve all alert rules set for a target resource. For example, all alert rules set on a VM.
-
-```powershell
-Get-AzAlertRule -ResourceGroup montest -TargetResourceId /subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig
-```
-
-`Get-AzAlertRule` supports other parameters. See [Get-AlertRule](/previous-versions/azure/mt282459(v=azure.100)) for more information.
-
-## Create metric alerts
-You can use the `Add-AzMetricAlertRule` cmdlet to create, update, or disable an alert rule.
-
-You can create email and webhook properties using `New-AzAlertRuleEmail` and `New-AzAlertRuleWebhook`, respectively. In the Alert rule cmdlet, assign these properties as actions to the **Actions** property of the Alert Rule.
-
-The following table describes the parameters and values used to create an alert using a metric.
-
-| parameter | value |
-| | |
-| Name |simpletestdiskwrite |
-| Location of this alert rule |East US |
-| ResourceGroup |montest |
-| TargetResourceId |/subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig |
-| MetricName of the alert that is created |\PhysicalDisk(_Total)\Disk Writes/sec. See the `Get-AzMetricDefinition` cmdlet about how to retrieve the exact metric names |
-| operator |GreaterThan |
-| Threshold value (count/sec in for this metric) |1 |
-| WindowSize (hh:mm:ss format) |00:05:00 |
-| aggregator (statistic of the metric, which uses Average count, in this case) |Average |
-| custom emails (string array) |'foo@example.com','bar@example.com' |
-| send email to owners, contributors and readers |-SendToServiceOwners |
-
-Create an Email action
-
-```powershell
-$actionEmail = New-AzAlertRuleEmail -CustomEmail myname@company.com
-```
-
-Create a Webhook action
-
-```powershell
-$actionWebhook = New-AzAlertRuleWebhook -ServiceUri https://example.com?token=mytoken
-```
-
-Create the alert rule on the CPU% metric on a classic VM
-
-```powershell
-Add-AzMetricAlertRule -Name vmcpu_gt_1 -Location "East US" -ResourceGroup myrg1 -TargetResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.ClassicCompute/virtualMachines/my_vm1 -MetricName "Percentage CPU" -Operator GreaterThan -Threshold 1 -WindowSize 00:05:00 -TimeAggregationOperator Average -Action $actionEmail, $actionWebhook -Description "alert on CPU > 1%"
-```
-
-Retrieve the alert rule
-
-```powershell
-Get-AzAlertRule -Name vmcpu_gt_1 -ResourceGroup myrg1 -DetailedOutput
-```
-
-The Add alert cmdlet also updates the rule if an alert rule already exists for the given properties. To disable an alert rule, include the parameter **-DisableRule**.
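-
-As a second sketch, the disk-write alert described in the parameter table earlier could be created the same way. The values are taken from that table; treat the exact metric name string as an assumption based on the table entry:
-
-```powershell
-# Email action for the custom recipients listed in the table, also notifying service owners.
-$actionEmail = New-AzAlertRuleEmail -CustomEmail 'foo@example.com','bar@example.com' -SendToServiceOwners
-
-# Alert when \PhysicalDisk(_Total)\Disk Writes/sec exceeds 1, averaged over a 5-minute window.
-Add-AzMetricAlertRule -Name simpletestdiskwrite -Location "East US" -ResourceGroup montest -TargetResourceId /subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig -MetricName "\PhysicalDisk(_Total)\Disk Writes/sec" -Operator GreaterThan -Threshold 1 -WindowSize 00:05:00 -TimeAggregationOperator Average -Action $actionEmail -Description "alert on disk writes > 1/sec"
-```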
-
-## Get a list of available metrics for alerts
-You can use the `Get-AzMetricDefinition` cmdlet to view the list of all metrics for a specific resource.
-
-```powershell
-Get-AzMetricDefinition -ResourceId <resource_id>
-```
-
-The following example generates a table with the metric Name and the Unit for it.
-
-```powershell
-Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
-```
-
-A full list of available options for `Get-AzMetricDefinition` is available at [Get-MetricDefinitions](/previous-versions/azure/mt282458(v=azure.100)).
-
-## Create and manage Activity Log alerts
-You can use the `Set-AzActivityLogAlert` cmdlet to set an Activity Log alert. An Activity Log alert requires that you first define your conditions as a dictionary of conditions, then create an alert that uses those conditions.
-
-```powershell
-
-$condition1 = New-AzActivityLogAlertCondition -Field 'category' -Equal 'Administrative'
-$condition2 = New-AzActivityLogAlertCondition -Field 'operationName' -Equal 'Microsoft.Compute/virtualMachines/write'
-$additionalWebhookProperties = New-Object "System.Collections.Generic.Dictionary``2[System.String,System.String]"
-$additionalWebhookProperties.Add('customProperty', 'someValue')
-$actionGrp1 = New-AzActionGroup -ActionGroupId '/subscriptions/<subid>/providers/Microsoft.Insights/actiongr1' -WebhookProperty $additionalWebhookProperties
-Set-AzActivityLogAlert -Location 'Global' -Name 'alert on VM create' -ResourceGroupName 'myResourceGroup' -Scope '/subscriptions/<subid>' -Action $actionGrp1 -Condition $condition1, $condition2
-
-```
-
-The additional webhook properties are optional. You can get back the contents of an Activity Log Alert using `Get-AzActivityLogAlert`.
-
-## Create and manage AutoScale settings
-
-> [!NOTE]
-> For Cloud Services (Microsoft.ClassicCompute), autoscale supports a time grain of 5 minutes (PT5M). For other services, autoscale supports a minimum time grain of 1 minute (PT1M).
-
-A resource (a Web app, VM, Cloud Service, or Virtual Machine Scale Set) can have only one autoscale setting configured for it.
-However, each autoscale setting can have multiple profiles. For example, one for a performance-based scale profile and a second one for a schedule-based profile. Each profile can have multiple rules configured on it. For more information about Autoscale, see [How to Autoscale an Application](../cloud-services/cloud-services-how-to-scale-portal.md).
-
-Here are the steps to use:
-
-1. Create rule(s).
-2. Create profile(s) mapping the rules that you created previously to the profiles.
-3. Optional: Create notifications for autoscale by configuring webhook and email properties.
-4. Create an autoscale setting with a name on the target resource by mapping the profiles and notifications that you created in the previous steps.
-
-The following examples show you how to create an Autoscale setting for a Windows-based Virtual Machine Scale Set by using the CPU utilization metric.
-
-First, create a rule to scale out, with an instance count increase.
-
-```powershell
-$rule1 = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/s1/resourceGroups/big2/providers/Microsoft.Compute/virtualMachineScaleSets/big2 -Operator GreaterThan -MetricStatistic Average -Threshold 60 -TimeGrain 00:01:00 -TimeWindow 00:10:00 -ScaleActionCooldown 00:10:00 -ScaleActionDirection Increase -ScaleActionValue 1
-```
-
-Next, create a rule to scale in, with an instance count decrease.
-
-```powershell
-$rule2 = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId /subscriptions/s1/resourceGroups/big2/providers/Microsoft.Compute/virtualMachineScaleSets/big2 -Operator LessThan -MetricStatistic Average -Threshold 30 -TimeGrain 00:01:00 -TimeWindow 00:10:00 -ScaleActionCooldown 00:10:00 -ScaleActionDirection Decrease -ScaleActionValue 1
-```
-
-Then, create a profile for the rules.
-
-```powershell
-$profile1 = New-AzAutoscaleProfile -DefaultCapacity 2 -MaximumCapacity 10 -MinimumCapacity 2 -Rules $rule1,$rule2 -Name "My_Profile"
-```
-
-Create a webhook property.
-
-```powershell
-$webhook_scale = New-AzAutoscaleWebhook -ServiceUri "https://example.com?mytoken=mytokenvalue"
-```
-
-Create the notification property for the autoscale setting, including email and the webhook that you created previously.
-
-```powershell
-$notification1 = New-AzAutoscaleNotification -CustomEmails ashwink@microsoft.com -SendEmailToSubscriptionAdministrators -SendEmailToSubscriptionCoAdministrators -Webhooks $webhook_scale
-```
-
-Finally, create the autoscale setting to add the profile that you created previously.
-
-```powershell
-Add-AzAutoscaleSetting -Location "East US" -Name "MyScaleVMSSSetting" -ResourceGroup big2 -TargetResourceId /subscriptions/s1/resourceGroups/big2/providers/Microsoft.Compute/virtualMachineScaleSets/big2 -AutoscaleProfiles $profile1 -Notifications $notification1
-```
-
-For more information about managing Autoscale settings, see [Get-AutoscaleSetting](/previous-versions/azure/mt282461(v=azure.100)).
-
-## Autoscale history
-The following example shows you how you can view recent autoscale and alert events. Use the activity log search to view the autoscale history.
-
-```powershell
-Get-AzLog -Caller "Microsoft.Insights/autoscaleSettings" -DetailedOutput -StartTime 2015-03-01
-```
-
-You can use the `Get-AzAutoScaleHistory` cmdlet to retrieve AutoScale history.
-
-```powershell
-Get-AzAutoScaleHistory -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/microsoft.insights/autoscalesettings/myScaleSetting -StartTime 2016-03-15 -DetailedOutput
-```
-
-For more information, see [Get-AutoscaleHistory](/previous-versions/azure/mt282464(v=azure.100)).
-
-### View details for an autoscale setting
-You can use the `Get-AzAutoscaleSetting` cmdlet to retrieve more information about the autoscale setting.
-
-The following example shows details about all autoscale settings in the resource group 'myrg1'.
-
-```powershell
-Get-AzAutoscalesetting -ResourceGroup myrg1 -DetailedOutput
-```
-
-The following example shows details about all autoscale settings in the resource group 'myrg1' and specifically the autoscale setting named 'MyScaleVMSSSetting'.
-
-```powershell
-Get-AzAutoscalesetting -ResourceGroup myrg1 -Name MyScaleVMSSSetting -DetailedOutput
-```
-
-### Remove an autoscale setting
-You can use the `Remove-AzAutoscaleSetting` cmdlet to delete an autoscale setting.
-
-```powershell
-Remove-AzAutoscalesetting -ResourceGroup myrg1 -Name MyScaleVMSSSetting
-```
-
-## Manage log profiles for activity log
-You can create a *log profile* and export data from your activity log to a storage account and you can configure data retention for it. Optionally, you can also stream the data to your Event Hub. This feature is currently in Preview and you can only create one log profile per subscription. You can use the following cmdlets with your current subscription to create and manage log profiles. You can also choose a particular subscription. Although PowerShell defaults to the current subscription, you can always change that using `Set-AzContext`. You can configure activity log to route data to any storage account or Event Hub within that subscription. Data is written as blob files in JSON format.
-
-### Get a log profile
-To fetch your existing log profiles, use the `Get-AzLogProfile` cmdlet.
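-
-For example, to list all log profiles in the current subscription:
-
-```powershell
-Get-AzLogProfile
-```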
-
-### Add a log profile without data retention
-```powershell
-Add-AzLogProfile -Name my_log_profile_s1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -Location global,westus,eastus,northeurope,westeurope,eastasia,southeastasia,japaneast,japanwest,northcentralus,southcentralus,eastus2,centralus,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia
-```
-
-### Remove a log profile
-```powershell
-Remove-AzLogProfile -name my_log_profile_s1
-```
-
-### Add a log profile with data retention
-You can specify the **-RetentionInDays** property with the number of days, as a positive integer, for which the data is retained.
-
-```powershell
-Add-AzLogProfile -Name my_log_profile_s1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -Location global,westus,eastus,northeurope,westeurope,eastasia,southeastasia,japaneast,japanwest,northcentralus,southcentralus,eastus2,centralus,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia -RetentionInDays 90
-```
-
-### Add log profile with retention and EventHub
-In addition to routing your data to a storage account, you can also stream it to an Event Hub. In this preview release the storage account configuration is mandatory, but the Event Hub configuration is optional.
-
-```powershell
-Add-AzLogProfile -Name my_log_profile_s1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/my_storage -serviceBusRuleId /subscriptions/s1/resourceGroups/Default-ServiceBus-EastUS/providers/Microsoft.ServiceBus/namespaces/mytestSB/authorizationrules/RootManageSharedAccessKey -Location global,westus,eastus,northeurope,westeurope,eastasia,southeastasia,japaneast,japanwest,northcentralus,southcentralus,eastus2,centralus,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia -RetentionInDays 90
-```
-
-## Configure diagnostics logs
-Many Azure services provide additional logs and telemetry that can be archived to a storage account, streamed to an event hub, or sent to a Log Analytics workspace.
-
-The operation can only be performed at a resource level. The storage account or event hub should be present in the same region as the target resource where the diagnostics setting is configured.
-
-### Get diagnostic setting
-```powershell
-Get-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp
-```
-
-Disable diagnostic setting
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp -StorageAccountId /subscriptions/s1/resourceGroups/Default-Storage-WestUS/providers/Microsoft.Storage/storageAccounts/mystorageaccount -Enabled $false
-```
-
-Enable diagnostic setting without retention
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp -StorageAccountId /subscriptions/s1/resourceGroups/Default-Storage-WestUS/providers/Microsoft.Storage/storageAccounts/mystorageaccount -Enabled $true
-```
-
-Enable diagnostic setting with retention
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Logic/workflows/andy0315logicapp -StorageAccountId /subscriptions/s1/resourceGroups/Default-Storage-WestUS/providers/Microsoft.Storage/storageAccounts/mystorageaccount -Enabled $true -RetentionEnabled $true -RetentionInDays 90
-```
-
-Enable diagnostic setting with retention for a specific log category
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/insights-integration/providers/Microsoft.Network/networkSecurityGroups/viruela1 -StorageAccountId /subscriptions/s1/resourceGroups/myrg1/providers/Microsoft.Storage/storageAccounts/sakteststorage -Categories NetworkSecurityGroupEvent -Enabled $true -RetentionEnabled $true -RetentionInDays 90
-```
-
-Enable diagnostic setting for Event Hubs
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/insights-integration/providers/Microsoft.Network/networkSecurityGroups/viruela1 -serviceBusRuleId /subscriptions/s1/resourceGroups/Default-ServiceBus-EastUS/providers/Microsoft.ServiceBus/namespaces/mytestSB/authorizationrules/RootManageSharedAccessKey -Enabled $true
-```
-
-Enable diagnostic setting for Log Analytics
-
-```powershell
-Set-AzDiagnosticSetting -ResourceId /subscriptions/s1/resourceGroups/insights-integration/providers/Microsoft.Network/networkSecurityGroups/viruela1 -WorkspaceId /subscriptions/s1/resourceGroups/insights-integration/providers/microsoft.operationalinsights/workspaces/myWorkspace -Enabled $true
-
-```
-
-Note that the WorkspaceId property takes the *resource ID* of the workspace. You can obtain the resource ID of your Log Analytics workspace using the following command:
-
-```powershell
-(Get-AzOperationalInsightsWorkspace).ResourceId
-
-```
-
-These commands can be combined to send data to multiple destinations.
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
## Application Insights ## Next Steps - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
+- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
azure-monitor Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md
description: Describes recent terminology changes made to Azure monitoring servi
Previously updated : 10/08/2019 Last updated : 06/07/2022
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-security.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
Last updated 06/21/2021+
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Last updated 06/02/2021+
azure-monitor Resource Manager Vminsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/resource-manager-vminsights.md
Last updated 05/18/2020+
azure-monitor Service Map Scom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map-scom.md
Last updated 07/12/2019+
azure-monitor Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map.md
Last updated 07/24/2019+
azure-monitor Tutorial Monitor Vm Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert.md
Last updated 11/04/2021+ # Tutorial: Create alert when Azure virtual machine is unavailable
azure-monitor Tutorial Monitor Vm Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-enable.md
Last updated 11/04/2021+ # Tutorial: Enable monitoring for Azure virtual machine
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
Last updated 11/08/2021+ # Tutorial: Collect guest logs and metrics from Azure virtual machine
azure-monitor Vminsights Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-workbooks.md
description: Simplify complex reporting with predefined and custom parameterized
Previously updated : 03/12/2020 Last updated : 05/27/2022
VM insights includes the following workbooks. You can use these workbooks or use
| Failed Connections | Display the count of failed connections on your monitored VMs, the failure trend, and if the percentage of failures is increasing over time. | | Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. | | TCP Traffic | A ranked report for your monitored VMs and their sent, received, and total network traffic in a grid and displayed as a trend line. |
-| Traffic Comparison | This workbooks lets you compare network traffic trends for a single machine or a group of machines. |
+| Traffic Comparison | This workbook lets you compare network traffic trends for a single machine or a group of machines. |
## Creating a new workbook A workbook is made up of sections consisting of independently editable charts, tables, text, and input controls. To better understand workbooks, let's start by opening a template and walking through the creation of a custom workbook.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to the **Monitor** menu in the Azure portal.
-2. Select **Virtual Machines**.
+2. Select a virtual machine.
-3. From the list, select a VM.
+3. On the VM insights page, select **Performance** or **Maps** tab and then select **View Workbooks** from the link on the page. From the drop-down list, select **Go to Gallery**.
-4. On the VM page, in the **Monitoring** section, select **Insights**.
-
-5. On the VM insights page, select **Performance** or **Maps** tab and then select **View Workbooks** from the link on the page. From the drop-down list, select **Go to Gallery**.
-
- ![Screenshot of workbook drop-down list](media/vminsights-workbooks/workbook-dropdown-gallery-01.png)
+ :::image type="content" source="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" lightbox="media/vminsights-workbooks/workbook-dropdown-gallery-01.png" alt-text="Screenshot of workbook drop-down list in V M insights.":::
This launches the workbook gallery with a number of prebuilt workbooks to help you get started. 7. Create a new workbook by selecting **New**.
- ![Screenshot of workbook gallery](media/vminsights-workbooks/workbook-gallery-01.png)
## Editing workbook sections
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
Title: Azure Video Indexer scenes, shots, and keyframes description: This topic gives an overview of the Azure Video Indexer scenes, shots, and keyframes. Previously updated : 07/05/2019 Last updated : 06/07/2022
To extract high-resolution keyframes for your video, you must first upload and i
#### With the Azure Video Indexer website
-To extract keyframes using the Azure Video Indexer website, upload and index your video. Once the indexing job is complete, click on the **Download** button and select **Artifacts (ZIP)**. This will download the artifacts folder to your computer.
+To extract keyframes using the Azure Video Indexer website, upload and index your video. Once the indexing job is complete, click on the **Download** button and select **Artifacts (ZIP)**. This will download the artifacts folder to your computer (make sure to view the warning regarding artifacts below). Unzip and open the folder. In the *_KeyframeThumbnail* folder, you'll find all of the keyframes that were extracted from your video.
![Screenshot that shows the "Download" drop-down with "Artifacts" selected.](./media/scenes-shots-keyframes/extracting-keyframes2.png)
-Unzip and open the folder. In the *_KeyframeThumbnail* folder, and you will find all of the keyframes that were extracted from your video.
#### With the Azure Video Indexer API
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
-> [!TIP]
-> The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
- To visually examine the video's insights, press the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website. ![Screenshot of the Insights tab in Azure Video Indexer.](./media/video-indexer-output-json/video-indexer-summarized-insights.png)
-When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false` to save time and reduce response length.
+When indexing with an API and the response status is OK, you get a detailed JSON output as the response content. When calling the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, we recommend passing `&includeSummarizedInsights=false`.
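+
+As a rough illustrative sketch (not from this article), such a call could look like the following; the endpoint shape follows the API developer portal, and the location, account ID, video ID, and access token are placeholders:
+
+```powershell
+# Request the index JSON, skipping the summarized insights section.
+Invoke-RestMethod -Method Get -Uri "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/Index?accessToken=<access-token>&includeSummarizedInsights=false"
+```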
+ This article examines the Azure Video Indexer output (JSON content). For information about what features and insights are available to you, see [Azure Video Indexer insights](video-indexer-overview.md#video-insights). > [!NOTE] > All the access tokens in Azure Video Indexer expire in one hour.
-## Get the insights
+## Get the insights using the website
To get insights produced on the website or the Azure portal: 1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in. 1. Find a video whose output you want to examine. 1. Press **Play**.
-1. Select the **Insights** tab to get summarized insights. Or select the **Timeline** tab to filter the relevant insights.
-1. Download artifacts and what's in them.
+1. Choose the **Insights** tab.
+2. Select which insights you want to view (under the **View** drop-down).
+3. Go to the **Timeline** tab to see timestamped transcript lines.
+4. Select **Download** > **Insights (JSON)** to get the insights output file.
+5. If you want to download artifacts, be aware of the following:
+
+ [!INCLUDE [artifacts](./includes/artifacts.md)]
For more information, see [View and edit video insights](video-indexer-view-edit.md).
-To get insights produced by the API:
+## Get insights produced by the API
+
+To retrieve the JSON file (OCR, face, keyframe, etc.) or an artifact type, call the [Get Video Index API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).
+
+This API returns only a URL that links to the specific resource type you request. You must make an additional GET request to this URL to download the artifact. The file type for each artifact varies depending on the artifact:
+
+### JSON
+
+* OCR
+* Faces
+* VisualContentModeration
+* LanguageDetection
+* MultiLanguageDetection
+* Metadata
+* Emotions
+* TextualContentModeration
+* AudioEffects
+* ObservedPeople
+* Labels
-- To retrieve the JSON file, call the [Get Video Index API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index).-- If you're interested in specific artifacts, call the [Get Video Artifact Download URL API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
+### Zip file containing JPG images
- In the API call, specify the requested artifact type (for example, OCR, face, or keyframe).
+* KeyframesThumbnails
+* FacesThumbnails
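+
+As an illustrative sketch of the two-step retrieval described above, assuming the endpoint shape of the Get Video Artifact Download URL operation in the API portal (all identifiers are placeholders):
+
+```powershell
+# Step 1: request a download URL for one artifact type (here, OCR).
+$artifactUrl = Invoke-RestMethod -Method Get -Uri "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/ArtifactUrl?type=Ocr&accessToken=<access-token>"
+
+# Step 2: a second GET against the returned URL downloads the artifact itself.
+Invoke-RestMethod -Method Get -Uri $artifactUrl -OutFile "ocr.json"
+```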
## Root elements of the insights
To get insights produced by the API:
|`isEditable`|Indicates whether the current user is authorized to edit the playlist.| |`isBase`|Indicates whether the playlist is a base playlist (a video) or a playlist made of other videos (derived).| |`durationInSeconds`|The total duration of the playlist.|
-|`summarizedInsights`|Contains one [summarized insight](#summarizedinsights).
+|`summarizedInsights`|Contains one [summarized insight](#summary-of-the-insights).
|`videos`|A list of [videos](#videos) that construct the playlist.<br/>If this playlist is constructed of time ranges of other videos (derived), the videos in this list will contain only data from the included time ranges.| ```json
To get insights produced by the API:
} ```
-## summarizedInsights
+## Summary of the insights
This section shows a summary of the insights.
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
Title: View and edit Azure Video Indexer insights description: This article demonstrates how to view and edit Azure Video Indexer insights.- Previously updated : 05/15/2019 Last updated : 06/07/2022
This topic shows you how to view and edit the Azure Video Indexer insights of a
2. Find a video from which you want to create your Azure Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md). 3. Press **Play**.
- The page shows the video's summarized insights.
+ The page shows the video's insights.
![Insights](./media/video-indexer-view-edit/video-indexer-summarized-insights.png)
-4. View the summarized insights of the video.
+4. View the insights of the video.
Summarized insights show an aggregated view of the data: faces, keywords, sentiments. For example, you can see the faces of people, the time ranges in which each face appears, and the percentage of time it's shown.
+ [!INCLUDE [insights](./includes/insights.md)]
+
+ Select the **Timeline** tab to see transcripts with timelines and other information that you can choose from the **View** drop-down.
+ The player and the insights are synchronized. For example, if you click a keyword or the transcript line, the player brings you to that moment in the video. You can achieve the player/insights view and synchronization in your application. For more information, see [Embed Azure Indexer widgets into your application](video-indexer-embed-widgets.md).
+ If you want to download artifact files, be aware of the following:
+
+ [!INCLUDE [artifacts](./includes/artifacts.md)]
+
+ For more information, see [Insights output](video-indexer-output-json-v2.md).
+
## Next steps [Use your videos' deep insights](use-editor-create-project.md)
azure-web-pubsub Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview.md
Last updated 11/08/2021
-# What is Azure Web PubSub service?
+# What is Azure Web PubSub service?
The Azure Web PubSub Service helps you easily build real-time messaging web applications using WebSockets and the publish-subscribe pattern. This real-time functionality allows publishing content updates between the server and connected clients (for example, a single-page web application or mobile application). Clients don't need to poll for the latest updates or submit new HTTP requests for updates.
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql.md
Title: Back up Azure Database for PostgreSQL description: Learn about Azure Database for PostgreSQL backup with long-term retention Previously updated : 02/25/2022 Last updated : 06/07/2022
You can configure backup on multiple databases across multiple Azure PostgreSQL
1. **Select Azure PostgreSQL databases to back up**: Choose one of the Azure PostgreSQL servers across subscriptions if they're in the same region as that of the vault. Expand the arrow to see the list of databases within a server. >[!Note]
- >You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
+ >- You don't need to back up the databases *azure_maintenance* and *azure_sys*. Additionally, you can't back up a database already backed-up to a Backup vault.
+ >- Backup of Azure PostgreSQL servers with Private endpoint enabled is currently not supported.
:::image type="content" source="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-inline.png" alt-text="Screenshot showing the option to select an Azure PostgreSQL database." lightbox="./media/backup-azure-database-postgresql/select-azure-postgresql-databases-to-back-up-expanded.png":::
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 05/06/2022 Last updated : 06/08/2022
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features: - Supports *Multiple Backups Per Day* (in preview).-- Instant Restore tier is zonally redundant using Zone-redundant storage (ZRS) resiliency. See the [pricing details for Enhanced policy storage here](https://azure.microsoft.com/pricing/details/managed-disks/).
+- Instant Restore tier is zonally redundant using Zone-redundant storage (ZRS) resiliency. See the [pricing details for Managed Disk Snapshots](https://azure.microsoft.com/pricing/details/managed-disks/).
:::image type="content" source="./media/backup-azure-vms-enhanced-policy/enhanced-backup-policy-settings.png" alt-text="Screenshot showing the enhanced backup policy options.":::
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
Title: Multi-user authorization using Resource Guard description: An overview of Multi-user authorization using Resource Guard. Previously updated : 05/05/2022 Last updated : 06/08/2022
The following table lists the operations defined as critical operations and can
Disable soft delete | Mandatory
Disable MUA protection | Mandatory
-Modify backup policy | Optional: Can be excluded
-Modify protection | Optional: Can be excluded
-Stop protection | Optional: Can be excluded
+Modify backup policy (reduced retention) | Optional: Can be excluded
+Modify protection (reduced retention) | Optional: Can be excluded
+Stop protection with delete data | Optional: Can be excluded
Change MARS security PIN | Optional: Can be excluded ### Concepts and process
certification How To Indirectly Connected Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-indirectly-connected-devices.md
# Mandatory fields. Title: Certifing device bundles and indirectly connected devices
+ Title: Certify bundled or indirectly connected devices
-description: See how to submit an indirectly connected device for certification.
+description: Learn how to submit a bundled or indirectly connected device for Azure Certified Device certification. See how to configure dependencies and components.
Previously updated : 02/23/2021 Last updated : 06/07/2022 -+ # Optional fields. Don't forget to remove # if you need a field.
-#
# # # Device bundles and indirectly connected devices
-To support devices that interact with Azure through a device, SaaS or PaaS offerings, our submission portal (https://certify.azure.com/), and device catalog (https://devicecatalog.azure.com) enable concepts of bundling and dependencies to promote and enable these device combinations access to our Azure Certified Device program.
+Many devices interact with Azure indirectly. Some communicate through another device, such as a gateway. Others connect through software as a service (SaaS) or platform as a service (PaaS) offerings.
+
+The [submission portal](https://certify.azure.com/) and [device catalog](https://devicecatalog.azure.com) offer support for indirectly connected devices:
+
+- By listing dependencies in the portal, you can specify that your device needs another device or service to connect to Azure.
+- By adding components, you can indicate that your device is part of a bundle.
+
+This functionality gives indirectly connected devices access to the Azure Certified Device program.
-Depending on your product line and services offered, your situation may require a combination of these steps:
+Depending on your product line and the services that you offer or use, your situation might require a combination of dependencies and bundling. The Azure Edge Certification Portal provides a way for you to list dependencies and additional components.
-![Create project dependencies](./media/indirect-connected-device/picture-1.png )
## Sensors and indirect devices
-Many sensors require a device to connect to Azure. In addition, you may have multiple compatible devices that will work with the sensor device. **To accommodate these scenarios, you must first certify the device(s) before certifying the sensor that will pass information through them.**
-Example matrix of submission combinations
-![Submission example](./media/indirect-connected-device/picture-2.png )
+Many sensors require a device to connect to Azure. In addition, you might have multiple compatible devices that work with the sensor. **To accommodate these scenarios, certify the devices before you certify the sensor that passes information through them.**
+
+The following matrix provides some examples of submission combinations:
++
+To certify a sensor that requires a separate device:
+
+1. Go to the [Azure Certified Device portal](https://certify.azure.com) to certify the device and publish it to the Azure Certified Device catalog. If you have multiple, compatible pass-through devices, as in the earlier example, submit them separately for certification and catalog publication.
-To certify your sensor, which requires a separate device:
-1. First, [certify the device](https://certify.azure.com) and publish to the Azure Certified Device Catalog
- - If you have multiple, compatible passthrough devices (as in the example above), Submit them separately for certification and publish to the catalog as well
-2. With the sensor connected through the device, submit the sensor for certification
- * In the "Dependencies" tab of the "Device details" section, set the following values
- * Dependency type = "Hardware gateway"
- * Dependency URL = "URL link to the device on the device catalog"
- * Used during testing = "Yes"
- * Add any Customer-facing comments that should be provided to a user who sees the product description in the device catalog. (example: "Series 100 devices are required for sensors to connect to Azure")
+1. With the sensor connected through the device, submit the sensor for certification. In the **Dependencies** tab of the **Device details** section, set the following values:
-3. If you have more devices you would like added as optional for this device, you can select "+ Add additional dependency". Then follow the same guidance and note that it was not used during testing. In the Customer-facing comments, ensure your customers are aware that other devices associated with this sensor are available (as an alternative to the device that was used during testing).
+ - **Dependency type**: Select **Hardware gateway**.
+ - **Dependency URL**: Enter the URL of the device in the device catalog.
+ - **Used during testing**: Select **Yes**.
+ - **Customer-facing comments**: Enter any comments that you'd like to provide to a user who sees the product description in the device catalog. For example, you might enter **Series 100 devices are required for sensors to connect to Azure**.
-![Alt text](./media/indirect-connected-device/picture-3.png "Hardware dependency type")
+1. If you'd like to add more devices as optional for this device:
+
+ 1. Select **Add additional dependency**.
+ 1. Enter **Dependency type** and **Dependency URL** values.
+ 1. For **Used during testing**, select **No**.
+ 1. For **Customer-facing comments**, enter a comment that informs your customers that other devices are available as alternatives to the device that was used during testing.
+ ## PaaS and SaaS offerings
-As part of your product portfolio, you may have devices that you certify, but your device also requires other services from your company or other third-party companies. To add this dependency, follow these steps:
-1. Start the submission process for your device
-2. In the "Dependencies" tab, set the following values
- - Dependency type = "Software service"
- - Service name = "[your product name]"
- - Dependency URL = "URL link to a product page that describes the service"
- - Add any customer facing comments that should be provided to a user who sees the product description in the Azure Certified Device Catalog
-3. If you have other software, services or hardware dependencies you would like added as optional for this device, you can select "+ Add additional dependency" and follow the same guidance.
-![Software dependency type](./media/indirect-connected-device/picture-4.png )
+As part of your product portfolio, you might certify a device that requires services from your company or third-party companies. To add this type of dependency:
+
+1. Go to the [Azure Certified Device portal](https://certify.azure.com) and start the submission process for your device.
+
+1. In the **Dependencies** tab, enter the following values:
+
+ - **Dependency type**: Select **Software service**.
+ - **Service name**: Enter the name of your product.
+ - **Dependency URL**: Enter the URL of a product page that describes the service.
+ - **Customer-facing comments**: Enter any comments that you'd like to provide to a user who sees the product description in the Azure Certified Device catalog.
+
+1. If you have other software, services, or hardware dependencies that you'd like to add as optional for this device, select **Add additional dependency** and enter the required information.
+ ## Bundled products
-Bundled product listings are simply the successful certification of a device with other components that will be sold as part of the bundle in one product listing. You have the ability to submit a device that includes extra components such as a temperature sensor and a camera sensor (#1) or you could submit a touch sensor that includes a passthrough device (#2). Through the "Component" feature, you have the ability to add multiple components to your listing.
-If you intend to do this, you format the product listing image to indicate this product comes with other components. In addition, if your bundle requires additional services to certify, you will need to identify those through the services dependency.
-Example matrix of bundled products
+With bundled product listings, a device is successfully certified in the Azure Certified Device program with other components. The device and the components are then sold together under one product listing.
+
+The following matrix provides some examples of bundled products. You can submit a device that includes extra components such as a temperature sensor and a camera sensor, as in submission example 1. You can also submit a touch sensor that includes a pass-through device, as in submission example 2.
-![Bundle submission example](./media/indirect-connected-device/picture-5.png )
-For a more detailed description on how to use the component functionality in the Azure Certified Device portal, see our [help documentation](./how-to-using-the-components-feature.md).
+Use the component feature to add multiple components to your listing. Format the product listing image to indicate that your product comes with other components. If your bundle requires additional services for certification, identify those services through service dependencies.
-If a device is a passthrough device with a separate sensor in the same product, create one component to reflect the passthrough device, and another component to reflect the sensor. Components can be added to your project in the Product details tab of the Device details section:
+For a more detailed description of how to use the component functionality in the Azure Certified Device portal, see [Add components on the portal](./how-to-using-the-components-feature.md).
-![Adding components](./media/indirect-connected-device/picture-6.png )
+If a device is a pass-through device with a separate sensor in the same product, create one component to reflect the pass-through device, and another component to reflect the sensor. As the following screenshot shows, you can add components to your project in the **Product details** tab of the **Device details** section:
-For the passthrough device, set the Component type as a Customer Ready Product, and fill in the other fields as relevant for your product. Example:
-![Component details](./media/indirect-connected-device/picture-7.png )
+Configure the pass-through device first. For **Component type**, select **Customer Ready Product**. Enter the other values, as relevant for your product. The following screenshot provides an example:
-For the sensor, add a second component, setting the Component type as Peripheral and Attachment method as Discrete. Example:
-![Second component details](./media/indirect-connected-device/picture-8.png )
+For the sensor, add a second component. For **Component type**, select **Peripheral**. For **Attachment method**, select **Discrete**. The following screenshot provides an example:
-Once the Sensor component has been created, Edit the details, navigate to the Sensors tab, and then add the sensor details. Example:
-![Sensor details](./media/indirect-connected-device/picture-9.png )
+After you've created the sensor component, enter its information. Then go to the **Sensors** tab and enter detailed sensor information, as the following screenshot shows.
-Complete your projects details and Submit your device for certification as normal.
+Complete the rest of your project's details, and then submit your device for certification as usual.
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/available-sizes.md
This article describes the available virtual machine sizes for Cloud Services (e
|[G](../virtual-machines/sizes-previous-gen.md?bc=%2fazure%2fvirtual-machines%2flinux%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#g-series) | 180-240* | |[H](../virtual-machines/h-series.md) | 290 - 300* | + >[!NOTE] > ACUs marked with a * use Intel® Turbo technology to increase CPU frequency and provide a performance boost. The amount of the boost can vary based on the VM size, workload, and other workloads running on the same host. + ## Configure sizes for Cloud Services (extended support) You can specify the virtual machine size of a role instance as part of the service model in the service definition file. The size of the role determines the number of CPU cores, memory capacity and the local file system size.
For example, setting the web role instance size to `Standard_D2`:
```xml
<WorkerRole name="Worker1" vmsize="Standard_D2">
</WorkerRole>
```
+>[!IMPORTANT]
+> Microsoft Azure has introduced newer generations of high-performance computing (HPC), general purpose, and memory-optimized virtual machines (VMs). For this reason, we recommend that you migrate workloads from the original H-series and H-series Promo VMs to our newer offerings by August 31, 2022. Azure [HC](../virtual-machines/hc-series.md), [HBv2](../virtual-machines/hbv2-series.md), [HBv3](../virtual-machines/hbv3-series.md), [Dv4](../virtual-machines/dv4-dsv4-series.md), [Dav4](../virtual-machines/dav4-dasv4-series.md), [Ev4](../virtual-machines/ev4-esv4-series.md), and [Eav4](../virtual-machines/eav4-easv4-series.md) VMs have greater memory bandwidth, improved networking capabilities, and better cost and performance across various HPC workloads.
## Change the size of an existing role
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-sizes-specs.md
In addition to the substantial CPU power, the H-series offers diverse options fo
\*RDMA capable
+>[!IMPORTANT]
+> Microsoft Azure has introduced newer generations of high-performance computing (HPC), general purpose, and memory-optimized virtual machines (VMs). For this reason, we recommend that you migrate workloads from the original H-series and H-series Promo VMs to our newer offerings by August 31, 2022. Azure [HC](../virtual-machines/hc-series.md), [HBv2](../virtual-machines/hbv2-series.md), [HBv3](../virtual-machines/hbv3-series.md), [Dv4](../virtual-machines/dv4-dsv4-series.md), [Dav4](../virtual-machines/dav4-dasv4-series.md), [Ev4](../virtual-machines/ev4-esv4-series.md), and [Eav4](../virtual-machines/eav4-easv4-series.md) VMs have greater memory bandwidth, improved networking capabilities, and better cost and performance across various HPC workloads.
+
+ On August 31, 2022, we're retiring the following H-series Azure VM sizes:
+
+- H8
+- H8m
+- H16
+- H16r
+- H16m
+- H16mr
+- H8 Promo
+- H8m Promo
+- H16 Promo
+- H16r Promo
+- H16m Promo
+- H16mr Promo
+ ## Configure sizes for Cloud Services You can specify the Virtual Machine size of a role instance as part of the service model described by the [service definition file](cloud-services-model-and-package.md#csdef). The size of the role determines the number of CPU cores, the memory capacity, and the local file system size that is allocated to a running instance. Choose the role size based on your application's resource requirement.
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
Title: Best practices for using the Anomaly Detector Multivariate API
+ Title: Best practices for using the Multivariate Anomaly Detector API
description: Best practices for using the Anomaly Detector multivariate APIs to apply anomaly detection to your time series data.
Previously updated : 04/01/2021 Last updated : 06/07/2022 keywords: anomaly detection, machine learning, algorithms
-# Best practices for using the Anomaly Detector multivariate API
+# Best practices for using the Multivariate Anomaly Detector API
This article provides guidance on recommended practices to follow when you use the multivariate Anomaly Detector (MVAD) APIs. In this tutorial, you'll:
Follow the instructions in this section to avoid errors while using MVAD. If you
## Data engineering
-Now you're able to run the your code with MVAD APIs without any error. What could be done to improve your model accuracy?
+Now you're able to run your code with MVAD APIs without any errors. What can you do to improve your model accuracy?
### Data quality
-* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It is hard for the model to learn these types of patterns if the training data is full of anomalies. An empirical threshold of abnormal rate is **1%** and below for good accuracy.
-* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learnt as normal patterns. That may result in real (not missing) data points being detected as anomalies.
- However, there are cases when a high missing ratio is acceptable. For example, if you have two variables (time series) in a group using `Outer` mode to align their timestamps. One of them has one-minute granularity, the other one has hourly granularity. Then the hourly variable by nature has at least 59 / 60 = 98.33% missing data points. In such cases, it's fine to fill the hourly variable using the only value available (not missing) if it typically does not fluctuate too much.
+* As the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It's hard for the model to learn these types of patterns if the training data is full of anomalies. An empirical threshold of abnormal rate is **1%** and below for good accuracy.
+* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learned as normal patterns. That may result in real (not missing) data points being detected as anomalies.
+ ### Data quantity * The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **15,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, in cases when you're not able to accrue that much data, we still encourage you to experiment with less data and see if the compromised accuracy is still acceptable. * Every time you call the inference API, you need to ensure that the source data file contains just enough data points. That is normally `slidingWindow` + the number of data points that **really** need inference results. For example, in a streaming case where each call infers on **ONE** new timestamp, the data file could contain only the leading `slidingWindow` plus **ONE** data point; then you could move on and create another zip file with the same number of data points (`slidingWindow` + 1) but moving ONE step to the "right" side and submit it for another inference job.
- Anything beyond that or "before" the leading sliding window will not impact the inference result at all and may only cause performance downgrade.Anything below that may lead to an `NotEnoughInput` error.
+ Anything beyond that or "before" the leading sliding window won't impact the inference result at all and may only cause performance downgrade. Anything below that may lead to a `NotEnoughInput` error.
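As a minimal sketch of that sizing rule (assuming pandas and a DataFrame sorted by timestamp; the window length is the example's 1,440):

```python
# Sketch: trim a streaming input to exactly slidingWindow leading points
# plus the one new timestamp to be inferenced (pandas assumed).
import pandas as pd

SLIDING_WINDOW = 1440   # value configured at training time
N_TARGETS = 1           # streaming case: one new timestamp per call

def trim_for_inference(df: pd.DataFrame) -> pd.DataFrame:
    required = SLIDING_WINDOW + N_TARGETS
    if len(df) < required:
        raise ValueError(f"need {required} points, got {len(df)}")
    # Extra leading history only slows the upload; it doesn't change results.
    return df.tail(required)
```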
### Timestamp round-up
-In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here is a simple example.
+In a group of variables (time series), each variable may be collected from an independent source. The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here's a simple example.
*Variable-1*
In a group of variables (time series), each variable may be collected from an in
| 12:01:34 | 1.7 | | 12:02:04 | 2.0 |
-We have two variables collected from two sensors which send one data point every 30 seconds. However, the sensors are not sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD will take into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
+We have two variables collected from two sensors, each of which sends one data point every 30 seconds. However, the sensors aren't sending data points at a strictly even frequency: sometimes earlier and sometimes later. Because MVAD takes into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment.
Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table will be
Let's see what happens if they're not pre-processed. If we set `alignMode` to be
| 12:02:04 | `nan` | 2.0 | | 12:02:08 | 1.3 | `nan` |
-`nan` indicates missing values. Obviously, the merged table is not what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model cannot extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table will be empty as there is no common timestamp in variable 1 and variable 2.
+`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table will be empty as there's no common timestamp in variable 1 and variable 2.
Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are
Now the merged table is more reasonable.
Values of different variables at close timestamps are well aligned, and the MVAD model can now extract correlation information.
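One way to do that rounding, sketched here with pandas (an assumption; any tool that rounds timestamps to a fixed frequency works):

```python
# Sketch: round each variable's timestamps to its 30-second frequency
# before aligning the series (pandas assumed; sample data is illustrative).
import pandas as pd

variable_1_raw = pd.DataFrame({
    "timestamp": ["2021-01-01 12:00:01", "2021-01-01 12:00:35",
                  "2021-01-01 12:01:02", "2021-01-01 12:01:31"],
    "value": [1.0, 1.5, 0.9, 2.2],
})

def round_timestamps(df: pd.DataFrame, freq: str = "30s") -> pd.DataFrame:
    out = df.copy()
    out["timestamp"] = pd.to_datetime(out["timestamp"]).dt.round(freq)
    # If two readings round to the same slot, keep the latest one.
    return out.drop_duplicates("timestamp", keep="last")

print(round_timestamps(variable_1_raw))
```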
+### Limitations
+
+There are some limitations in both the training and inference APIs. Be aware of these limitations to avoid errors.
+
+#### General limitations
+* Sliding window: 28-2,880 timestamps; the default is 300. For periodic data, set the sliding window to the length of 2-4 cycles.
+* API calls: At most 20 API calls per minute.
+* Number of variables: For training and asynchronous inference, at most 301 variables.
+
+#### Training limitations
+* Timestamps: At most 1,000,000. Too few timestamps may decrease model quality. We recommend more than 15,000 timestamps.
+* Granularity: The minimum granularity is `per_second`.
+
+#### Asynchronous inference limitations
+* Timestamps: At most 20,000; at least one sliding window length.
+
+#### Synchronous inference limitations
+* Timestamps: At most 2,880; at least one sliding window length.
+* Detecting timestamps: From 1 to 10.
+
+## Model quality
+
+### How to deal with false positives and false negatives in real scenarios?
+We have provided `severity`, which indicates the significance of anomalies. False positives may be filtered out by setting up a threshold on the severity. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases, a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data, and anomalies may bias the model. Proper data cleaning may thus help reduce false negatives.
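For example, a post-processing filter like this sketch (the field names follow the inference response; the threshold value is an assumption to tune per scenario) suppresses low-severity detections:

```python
# Sketch: keep only anomalies whose severity clears a tunable threshold.
SEVERITY_THRESHOLD = 0.3   # assumption: tune on your own data

def filter_anomalies(results):
    """results: per-timestamp dicts with 'isAnomaly' and 'severity' fields."""
    return [r for r in results
            if r["isAnomaly"] and r["severity"] >= SEVERITY_THRESHOLD]

results = [
    {"timestamp": "2021-01-02T00:00:00Z", "isAnomaly": True,  "severity": 0.52},
    {"timestamp": "2021-01-02T00:01:00Z", "isAnomaly": True,  "severity": 0.05},
    {"timestamp": "2021-01-02T00:02:00Z", "isAnomaly": False, "severity": 0.0},
]
print(filter_anomalies(results))   # only the 0.52-severity point survives
```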
+
+### How to estimate which model is best to use according to training loss and validation loss?
+Generally speaking, it's hard to decide which model is best without a labeled dataset. However, you can use the training and validation losses to make a rough estimate and discard bad models. First, observe whether the training losses converge; divergent losses often indicate poor model quality. Second, loss values may help identify whether underfitting or overfitting occurs; models that underfit or overfit may not have the desired performance. Third, although the definition of the loss function doesn't reflect detection performance directly, loss values may be an auxiliary tool for estimating model quality. A low loss value is a necessary condition for a good model, so you may discard models with high loss values.
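A coarse screen along those lines might look like this sketch (all thresholds are illustrative assumptions, not service-defined values):

```python
# Sketch: discard models whose loss curves look divergent or overfit.
def losses_look_reasonable(train_losses, val_losses,
                           max_final_loss=1.0, max_gap_ratio=2.0):
    converging = train_losses[-1] < train_losses[0]      # not divergent
    low_enough = train_losses[-1] <= max_final_loss      # drop high-loss models
    # A validation loss far above the training loss suggests overfitting.
    not_overfit = val_losses[-1] <= max_gap_ratio * train_losses[-1]
    return converging and low_enough and not_overfit
```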
++ ## Common pitfalls Apart from the [error code table](./troubleshoot.md), we've learned from customers like you some common pitfalls while using MVAD APIs. This table will help you to avoid these issues. | Pitfall | Consequence |Explanation and solution | | | -- | -- |
-| Timestamps in training data and/or inference data were not rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results are not as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
+| Timestamps in training data and/or inference data weren't rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results aren't as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
| Too many anomalous data points in the training data | Model accuracy is impacted negatively because it treats anomalous data points as normal patterns during training. | Empirically, keeping the abnormal rate at or below **1%** will help. | | Too little training data | Model accuracy is compromised. | Empirically, training an MVAD model requires 15,000 or more data points (timestamps) per variable to keep a good accuracy.|
-| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noises. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
+| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that aren't severe and (optionally) use grouping to check the duration of the anomalies to suppress random noises. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
| Sub-folders are zipped into the data file for training or inference. | The csv data files inside sub-folders are ignored during training and/or inference. | No sub-folders are allowed in the zip file. Please refer to [Folder structure](#folder-structure) for details. | | Too much data in the inference data file: for example, compressing all historical data in the inference data zip file | You may not see any errors but you'll experience degraded performance when you try to upload the zip file to Azure Blob as well as when you try to run inference. | Please refer to [Data quantity](#data-quantity) for details. |
-| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You will get a "resource not found" error while calling the MVAD APIs. | During preview stage, MVAD is available on limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request for specific regions. |
+| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You'll get a "resource not found" error while calling the MVAD APIs. | During preview stage, MVAD is available on limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request for specific regions. |
## FAQ
Apart from the [error code table](./troubleshoot.md), we've learned from custome
Let's use two examples to learn how MVAD's sliding window works. Suppose you have set `slidingWindow` = 1,440, and your input data is at one-minute granularity.
-* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps. Because, MVAD will take the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow` or 1,440 in this case. 1,440 = 60 * 24, so your input data must start from at latest "2021-01-01T00:00:00Z".
+* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps. Because MVAD will take the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow` or 1,440 in this case. 1,440 = 60 * 24, so your input data must start from at latest "2021-01-01T00:00:00Z".
* **Batch scenario**: You have multiple target data points to predict. Your `endTime` will be greater than your `startTime`. Inference in such scenarios is performed in a "moving window" manner. For example, MVAD will use data from `2021-01-01T00:00:00Z` to `2021-01-01T23:59:00Z` (inclusive) to determine whether data at `2021-01-02T00:00:00Z` is anomalous. Then it moves forward and uses data from `2021-01-01T00:01:00Z` to `2021-01-02T00:00:00Z` (inclusive) to determine whether data at `2021-01-02T00:01:00Z` is anomalous. It moves on in the same manner (taking 1,440 data points to compare) until the last timestamp specified by `endTime` (or the actual latest timestamp). Therefore, your inference data source must contain data starting from `startTime` - `slidingWindow` and ideally contain a total of `slidingWindow` + (`endTime` - `startTime`) data points.
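In code, the required data range for that batch example works out as follows (plain datetime arithmetic; the one-minute granularity is the example's assumption):

```python
# Sketch: compute the span the inference file must cover for the batch case.
from datetime import datetime, timedelta

sliding_window = 1440
granularity = timedelta(minutes=1)   # the example's one-minute data

start_time = datetime.fromisoformat("2021-01-02T00:00:00+00:00")
end_time = datetime.fromisoformat("2021-01-02T12:00:00+00:00")

earliest_needed = start_time - sliding_window * granularity
total_points = sliding_window + int((end_time - start_time) / granularity)
print(earliest_needed)   # 2021-01-01 00:00:00+00:00
print(total_points)      # 1440 + 720 = 2160 data points
```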
-### Why only accepting zip files for training and inference?
+### Why does the service only accept zip files for training and inference when sending data asynchronously?
-We use zip files because in batch scenarios, we expect the size of both training and inference data would be very large and cannot be put in the HTTP request body. This allows users to perform batch inference on historical data either for model validation or data analysis.
+We use zip files because, in batch scenarios, the training and inference data can be very large and can't be put in the HTTP request body. Zip files also let users perform batch inference on historical data, either for model validation or for data analysis.
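A packaging sketch like the following (standard library only; file names are placeholders) writes the per-variable CSV files at the root of the zip, with no sub-folders, which is what the APIs expect:

```python
# Sketch: zip per-variable CSV files at the archive root. Sub-folders
# would be ignored during training and inference, so avoid them.
import zipfile
from pathlib import Path

csv_files = ["variable_1.csv", "variable_2.csv"]   # placeholder file names

with zipfile.ZipFile("inference_data.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in csv_files:
        # arcname keeps only the file name, so nothing lands in a sub-folder.
        zf.write(path, arcname=Path(path).name)
```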
However, this might be somewhat inconvenient for streaming inference and for high frequency data. We have a plan to add a new API specifically designed for streaming inference that users can pass data in the request body. ### What's the difference between `severity` and `score`?
-Normally we recommend you use `severity` as the filter to sift out 'anomalies' that are not so important to your business. Depending on your scenario and data pattern, those anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes.
+Normally we recommend that you use `severity` as the filter to sift out 'anomalies' that aren't so important to your business. Depending on your scenario and data pattern, those anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes.
In cases where you've found a need of more sophisticated rules than thresholds against `severity` or duration of continuous high `severity` values, you may want to use `score` to build more powerful filters. Understanding how MVAD is using `score` to determine anomalies may help:
-We consider whether a data point is anomalous from both global and local perspective. If `score` at a timestamp is higher than a certain threshold, then the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher in a segment, it is also marked as an anomaly.
+We consider whether a data point is anomalous from both global and local perspective. If `score` at a timestamp is higher than a certain threshold, then the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher in a segment, it's also marked as an anomaly.
+ ## Next steps * [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md).
-* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
+* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
cognitive-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/Tutorials/build-enrollment-app.md
+
+ Title: Build a React app to add users to a Face service
+
+description: Learn how to set up your development environment and deploy a Face app to get consent from customers.
++++++ Last updated : 11/17/2020+++
+# Build a React app to add users to a Face service
+
+This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users to a face recognition service and for acquiring high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identity verification, attendance tracking, or a personalization kiosk, based on users' face data.
+
+When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
+
+The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
+
+## Prerequisites
+
+* An Azure subscription – [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+ * You'll need the key and endpoint from the resource you created to connect your application to Face API.
+ * For local development and testing only, you can keep the API key and endpoint in environment variables. For final deployment, store the API key in a secure location and never in the code or environment variables.
+
+### Important security considerations
+* For local development and initial limited testing, it's acceptable (although not a best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely, which likely involves using an intermediate service to validate a user token generated during login.
+* Never store the API key or endpoint in code or commit them to a version control system (for example, Git). If that happens by mistake, immediately generate a new API key and endpoint and revoke the previous ones.
+* As a best practice, consider having separate API keys for development and production.
+
+## Set up the development environment
+
+#### [Android](#tab/android)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select your development OS and **Android** as the target OS. Complete the sections **Installing dependencies** and **Android development environment**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/).
+1. Retrieve your Face API endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+1. Run the app using either the Android Virtual Device emulator from Android Studio, or your own Android device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+
+#### [iOS](#tab/ios)
+
+1. Clone the git repository for the [sample app](https://github.com/azure-samples/cognitive-services-FaceAPIEnrollmentSample).
+1. To set up your development environment, follow the <a href="https://reactnative.dev/docs/environment-setup" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>. Select **React Native CLI Quickstart**. Select **macOS** as your development OS and **iOS** as the target OS. Complete the section **Installing dependencies**.
+1. Download your preferred text editor such as [Visual Studio Code](https://code.visualstudio.com/). You will also need to download Xcode.
+1. Retrieve your Face API endpoint and key in the Azure portal under the **Overview** tab of your resource. Don't check in your Face API key to your remote repository.
+1. Run the app using either a simulated device from Xcode, or your own iOS device. To test your app on a physical device, follow the relevant <a href="https://reactnative.dev/docs/running-on-device" title="React Native documentation" target="_blank">React Native documentation <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
+++
+## Create a user add experience
+
+Now that you have set up the sample app, you can tailor it to your own needs.
+
+For example, you may want to add situation-specific information on your consent page:
+
+> [!div class="mx-imgBorder"]
+> ![app consent page](../media/enrollment-app/1-consent-1.jpg)
+
+Many face recognition issues are caused by low-quality reference images. Some factors that can degrade model performance are:
+* Face size (faces that are distant from the camera)
+* Face orientation (faces turned or tilted away from camera)
+* Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
+* Occlusion (partially hidden or obstructed faces), including accessories like hats or thick-rimmed glasses
+* Blur (such as by rapid face movement when the photograph was taken).
+
+The service provides image quality checks to help you decide, based on the above factors, whether an image is of sufficient quality to add the user or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality, show user interface messages that help the user capture a higher-quality image, select the highest-quality frames, and add the detected face to the Face API service.
++
+> [!div class="mx-imgBorder"]
+> ![app image capture instruction page](../media/enrollment-app/4-instruction.jpg)
+
+Notice that the app also offers functionality for deleting the user's information and the option to add the user again.
+
+> [!div class="mx-imgBorder"]
+> ![profile management page](../media/enrollment-app/10-manage-2.jpg)
+
+To extend the app's functionality to cover the full experience, read the [overview](../enrollment-overview.md) for additional features to implement and best practices.
+
+## Deploy the app
+
+#### [Android](#tab/android)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../cognitive-services-security.md?tabs=command-line%2ccsharp).
+
+When you're ready to release your app for production, you'll generate a release-ready APK file, which is the package file format for Android apps. This APK file must be signed with a private key. With this release build, you can begin distributing the app to your devices directly.
+
+Follow the <a href="https://developer.android.com/studio/publish/preparing#publishing-build" title="Prepare for release" target="_blank">Prepare for release <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn how to generate a private key, sign your application, and generate a release APK.
+
+Once you've created a signed APK, see the <a href="https://developer.android.com/studio/publish" title="Publish your app" target="_blank">Publish your app <span class="docon docon-navigate-external x-hidden-focus"></span></a> documentation to learn more about how to release your app.
+
+#### [iOS](#tab/ios)
+
+First, make sure that your app is ready for production deployment: remove any keys or secrets from the app code and make sure you have followed the [security best practices](../../cognitive-services-security.md?tabs=command-line%2ccsharp). To prepare for distribution, you will need to create an app icon, a launch screen, and configure deployment info settings. Follow the [documentation from Xcode](https://developer.apple.com/documentation/Xcode/preparing_your_app_for_distribution) to prepare your app for distribution.
+
+When you're ready to release your app for production, you'll build an archive of your app. Follow the [Xcode documentation](https://developer.apple.com/documentation/Xcode/distributing_your_app_for_beta_testing_and_releases) on how to create an archive build and options for distributing your app.
+++
+## Next steps
+
+In this guide, you learned how to set up your development environment and get started with the sample app. If you're new to React Native, you can read their [getting started docs](https://reactnative.dev/docs/getting-started) to learn more background information. It also may be helpful to familiarize yourself with [Face API](../overview-identity.md). Read the other sections on adding users before you begin development.
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
keywords: on-premises, OCR, Docker, container
Containers enable you to run the Computer Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run Computer Vision containers.
-The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md).
+The *Read* OCR container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
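Once the container is running, requests go to your own endpoint rather than the cloud one. As a rough sketch (the host, port, and API path are assumptions based on the cloud Read 3.2 REST API; check your container's own documentation for the exact route), a call submits an image and then polls the returned operation URL:

```python
# Sketch: call a locally hosted Read OCR container (assumes it listens on
# http://localhost:5000 and mirrors the cloud v3.2 read route).
import time
import requests

base = "http://localhost:5000/vision/v3.2/read"

with open("sample.jpg", "rb") as f:
    resp = requests.post(f"{base}/analyze",
                         headers={"Content-Type": "application/octet-stream"},
                         data=f.read())
resp.raise_for_status()
op_url = resp.headers["Operation-Location"]   # where results will appear

while True:
    result = requests.get(op_url).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)
print(result)
```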
## What's new The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you are an existing customer, please follow the [download instructions](#docker-pull-for-the-read-ocr-container) to get started.
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Computer Vision can detect human faces within an image and generate rectangle coordinates for each detected face. > [!NOTE]
-> This feature is also offered by the Azure [Face](../face/index.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
+> This feature is also offered by the Azure [Face](./index-identity.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
## Face detection examples
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
+
+ Title: "Face detection and attributes concepts"
+
+description: Learn more about face detection; face detection is the action of locating human faces in an image and optionally returning different kinds of face-related data.
+++++++ Last updated : 10/27/2021+++
+# Face detection and attributes
+
+This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.
+
+You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
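As a quick illustration, this hedged Python sketch calls the detect operation over REST with `requests`; the endpoint, key, and image URL are placeholders:

```python
# Sketch: detect faces in a remote image via the Face - Detect REST call
# (the endpoint, key, and image URL below are placeholders).
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-face-api-key>"

resp = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "returnFaceId": "true",
        "returnFaceLandmarks": "true",
        "detectionModel": "detection_03",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/photo.jpg"},
)
resp.raise_for_status()
for face in resp.json():
    # Largest faces come first; each entry carries its rectangle and ID.
    print(face["faceId"], face["faceRectangle"])
```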
+
+## Face rectangle
+
+Each detected face corresponds to a `faceRectangle` field in the response. This is a set of pixel coordinates for the left, top, width, and height of the detected face. Using these coordinates, you can get the location and size of the face. In the API response, faces are listed in size order from largest to smallest.
+
+## Face ID
+
+The face ID is a unique identifier string for each detected face in an image. You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+
+## Face landmarks
+
+Face landmarks are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. By default, there are 27 predefined landmark points. The following figure shows all 27 points:
+
+![A face diagram with all 27 landmarks labeled](./media/landmarks.1.jpg)
+
+The coordinates of the points are returned in units of pixels.
+
+The Detection_03 model currently has the most accurate landmark detection. The eye and pupil landmarks it returns are precise enough to enable gaze tracking of the face.
+
+## Attributes
+
+Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
+
+* **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
+* **Age**. The estimated age in years of a particular face.
+* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
+* **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
+* **Exposure**. The exposure of the face in the image. This attribute returns a value between zero and one and an informal rating of underExposure, goodExposure, or overExposure.
+* **Facial hair**. The estimated facial hair presence and the length for the given face.
+* **Gender**. The estimated gender of the given face. Possible values are male, female, and genderless.
+* **Glasses**. Whether the given face has eyeglasses. Possible values are NoGlasses, ReadingGlasses, Sunglasses, and SwimmingGoggles.
+* **Hair**. The hair type of the face. This attribute shows whether the hair is visible, whether baldness is detected, and what hair colors are detected.
+* **Head pose**. The face's orientation in 3D space. This attribute is described by the roll, yaw, and pitch angles in degrees, which are defined according to the [right-hand rule](https://en.wikipedia.org/wiki/Right-hand_rule). The order of three angles is roll-yaw-pitch, and each angle's value range is from -180 degrees to 180 degrees. 3D orientation of the face is estimated by the roll, yaw, and pitch angles in order. See the following diagram for angle mappings:
+
+ ![A head with the pitch, roll, and yaw axes labeled](./media/headpose.1.jpg)
+
+ For more details on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
+* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
+* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
+* **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
+* **Occlusion**. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
+* **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
+* **QualityForRecognition**. The overall image quality, which indicates whether the image used in detection is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
+ >[!NOTE]
+ > The availability of each attribute depends on the detection model specified. The QualityForRecognition attribute also depends on the recognition model: it's currently available only when you use a combination of detection model detection_01 or detection_03 and recognition model recognition_03 or recognition_04.
+
+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.
+
+## Input data
+
+Use the following tips to make sure that your input images give the most accurate detection results:
+
+* The supported input image formats are JPEG, PNG, GIF (the first frame), and BMP.
+* The image file size should be no larger than 6 MB.
+* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they're larger than the minimum detectable face size.
+* The maximum detectable face size is 4096 x 4096 pixels.
+* Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
+* Some faces might not be recognized because of technical challenges, such as:
+ * Images with extreme lighting, for example, severe backlighting.
+ * Obstructions that block one or both eyes.
+ * Differences in hair type or facial hair.
+ * Changes in facial appearance because of age.
+ * Extreme facial expressions.
+
+### Input data with orientation information
+
+Some input images with JPEG format might contain orientation information in Exchangeable image file format (Exif) metadata. If Exif orientation is available, images will be automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face will be estimated based on the rotated image.
+
+To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most image visualization tools auto-rotate the image according to its Exif orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
+
+![Two face images with and without rotation](./media/image-rotation.png)
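If your tool doesn't auto-rotate, a small sketch like this (assuming the Pillow package) applies the Exif orientation before you draw rectangles:

```python
# Sketch: normalize a JPEG's orientation from its Exif tag (Pillow assumed).
from PIL import Image, ImageOps

img = Image.open("photo.jpg")
upright = ImageOps.exif_transpose(img)   # applies and clears the Exif tag
upright.save("photo-upright.jpg")
```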
+
+### Video input
+
+If you're detecting faces from a video feed, you may be able to improve performance by adjusting certain settings on your video camera:
+
+* **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
+* **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.
+* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This will result in clearer video frames.
+
+ >[!NOTE]
+ > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use.
+
+## Next steps
+
+Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image.
+
+* [Call the detect API](./how-to/identity-detect-faces.md)
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
+
+ Title: "Face recognition concepts"
+
+description: Learn the concept of Face recognition, its related operations, and the underlying data structures.
+++++++ Last updated : 10/27/2021+++
+# Face recognition concepts
+
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, Face recognition refers to the method of verifying or identifying an individual by their face.
+
+Verification is one-to-one matching that takes two faces and returns whether they are the same face, and identification is one-to-many matching that takes a single face as input and returns a set of matching candidates. Face recognition is important in implementing the identity verification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
+
+## Related data structures
+
+The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription, but name fields may be duplicated.
+
+|Name|Description|
+|:--|:--|
+|DetectedFace| This single face representation is retrieved by the [face detection](./how-to/identity-detect-faces.md) operation. Its ID expires 24 hours after it's created.|
+|PersistedFace| When DetectedFace objects are added to a group, such as FaceList or Person, they become PersistedFace objects. They can be [retrieved](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c) at any time and don't expire.|
+|[FaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b) or [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc)| This data structure is an assorted list of PersistedFace objects. A FaceList has a unique ID, a name string, and optionally a user data string.|
+|[Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c)| This data structure is a list of PersistedFace objects that belong to the same person. It has a unique ID, a name string, and optionally a user data string.|
+|[PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d)| This data structure is an assorted list of Person objects. It has a unique ID, a name string, and optionally a user data string. A PersonGroup must be [trained](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) before it can be used in recognition operations.|
+|PersonDirectory | This data structure is like **LargePersonGroup** but offers additional storage capacity and other added features. For more information, see [Use the PersonDirectory structure](./how-to/use-persondirectory.md).|
+
+## Recognition operations
+
+This section details how the underlying operations use the above data structures to identify and verify a face.
+
+### PersonGroup creation and training
+
+You need to create a [PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) to store the set of people to match against. PersonGroups hold [Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) objects, which each represent an individual person and hold a set of face data belonging to that person.
+
+The [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) operation prepares the data set to be used in face data comparisons.
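+
+As a quick illustration, here's a minimal .NET SDK sketch (assuming an authenticated `faceClient`; the group ID, name, and image URL are placeholders) that creates a PersonGroup, adds one Person with a face, and trains the group:
+
+```csharp
+await faceClient.PersonGroup.CreateAsync("my-group-id", "My Group");
+Person person = await faceClient.PersonGroupPerson.CreateAsync("my-group-id", "Anna");
+await faceClient.PersonGroupPerson.AddFaceFromUrlAsync("my-group-id", person.PersonId, "<face-image-url>");
+// Train the group so it can be used in Identify calls.
+await faceClient.PersonGroup.TrainAsync("my-group-id");
+```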
+
+### Identification
+
+The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
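+
+For example, a hedged .NET SDK sketch (assuming an authenticated `faceClient`, a trained group, and a placeholder `faceIds` list of detected face IDs) might look like this:
+
+```csharp
+// Identify the detected faces against the PersonGroup and print the top
+// candidate for each face.
+IList<IdentifyResult> results = await faceClient.Face.IdentifyAsync(faceIds, "my-group-id");
+foreach (IdentifyResult result in results)
+{
+    if (result.Candidates.Count > 0)
+    {
+        IdentifyCandidate top = result.Candidates[0];
+        Console.WriteLine($"Face {result.FaceId}: person {top.PersonId}, confidence {top.Confidence}.");
+    }
+}
+```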
+
+### Verification
+
+The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. You can optionally pass in the PersonGroup to which the candidate Person belongs, which improves API performance.
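+
+A similar hedged sketch for verification (the face and person IDs are placeholders):
+
+```csharp
+// Check whether a detected face belongs to a given Person in a PersonGroup.
+VerifyResult result = await faceClient.Face.VerifyFaceToPersonAsync(
+    detectedFaceId, candidatePersonId, "my-group-id");
+Console.WriteLine(result.IsIdentical
+    ? $"Same person, confidence {result.Confidence}."
+    : $"Different person, confidence {result.Confidence}.");
+```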
+
+## Input data
+
+Use the following tips to ensure that your input images give the most accurate recognition results:
+
+* The supported input image formats are JPEG, PNG, GIF (the first frame), and BMP.
+* Image file size should be no larger than 6 MB.
+* When you create Person objects, use photos taken from different angles and in different lighting conditions.
+* Some faces might not be recognized because of technical challenges, such as:
+ * Images with extreme lighting, for example, severe backlighting.
+ * Obstructions that block one or both eyes.
+ * Differences in hair type or facial hair.
+ * Changes in facial appearance because of age.
+ * Extreme facial expressions.
+* You can use the `qualityForRecognition` attribute in the [face detection](./how-to/identity-detect-faces.md) operation (with applicable detection models) as a general guideline of whether the image is likely to be of sufficient quality to attempt face recognition. Only `"high"`-quality images are recommended for person enrollment, and quality at or above `"medium"` is recommended for identification scenarios; see the sketch after this list.
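+
+A hedged sketch of that check (assuming a Face SDK version that exposes `FaceAttributeType.QualityForRecognition`; `imageUrl` is a placeholder):
+
+```csharp
+// Request the quality attribute during detection, then keep only faces of
+// high quality for enrollment.
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl,
+    recognitionModel: "recognition_04",
+    detectionModel: "detection_03",
+    returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
+
+foreach (DetectedFace face in faces)
+{
+    if (face.FaceAttributes.QualityForRecognition == QualityForRecognition.High)
+    {
+        Console.WriteLine($"Face {face.FaceId} is suitable for enrollment.");
+    }
+}
+```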
+
+## Next steps
+
+Now that you're familiar with face recognition concepts, learn how to write a script that identifies faces against a trained PersonGroup.
+
+* [Face client library quickstart](./quickstarts-sdk/identity-client-library.md)
cognitive-services Enrollment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/enrollment-overview.md
+
+ Title: Best practices for adding users to a Face service
+
+description: Learn about the process of Face enrollment to register users in a face recognition service.
+ Last updated : 09/27/2021
+# Best practices for adding users to a Face service
+
+In order to use the Cognitive Services Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.
+
+## Meaningful consent
+
+One of the key purposes of an enrollment application for facial recognition is to give users the opportunity to consent to the use of images of their face for specific purposes, such as access to a worksite. Because facial recognition technologies may be perceived as collecting sensitive personal data, it's especially important to ask for consent in a way that is both transparent and respectful. Consent is meaningful to users when it empowers them to make the decision that they feel is best for them.
+
+Based on Microsoft user research, Microsoft's Responsible AI principles, and [external research](ftp://ftp.cs.washington.edu/tr/2000/12/UW-CSE-00-12-02.pdf), we have found that consent is meaningful when it offers the following to users enrolling in the technology:
+
+* Awareness: Users should have no doubt when they are being asked to provide their face template or enrollment photos.
+* Understanding: Users should be able to accurately describe in their own words what they are being asked for, by whom, to what end, and with what assurances.
+* Freedom of choice: Users should not feel coerced or manipulated when choosing whether to consent and enroll in facial recognition.
+* Control: Users should be able to revoke their consent and delete their data at any time.
+
+This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario.
+
+> [!NOTE]
+> It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices.
+
+## Application development
+
+Before you design an enrollment flow, think about how the application you're building can uphold the promises you make to users about how their data is protected. The following recommendations can help you build an enrollment experience that includes responsible approaches to securing personal data, managing users' privacy, and ensuring that the application is accessible to all users.
+
+|Category | Recommendations |
+|||
+|Hardware | Consider the camera quality of the enrollment device. |
+|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
+|Security | Cognitive Services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Cognitive Services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. |
+|User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. |
+|Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. |
+
+## Next steps
+
+Follow the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide to get started with a sample enrollment app. Then customize it or write your own app to suit the needs of your product.
cognitive-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/add-faces.md
+
+ Title: "Example: Add faces to a PersonGroup - Face"
+
+description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure Cognitive Services Face service.
+ Last updated : 04/10/2019
+ms.devlang: csharp
+++
+# Add faces to a PersonGroup
+
+This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure Cognitive Services Face .NET client library.
+
+## Step 1: Initialization
+
+The following code declares several variables and implements a helper function to schedule the face add requests:
+
+- `PersonCount` is the total number of persons.
+- `CallLimitPerSecond` is the maximum calls per second according to the subscription tier.
+- `_timeStampQueue` is a Queue to record the request timestamps.
+- `await WaitCallLimitPerSecondAsync()` waits until it's valid to send the next request.
+
+```csharp
+const int PersonCount = 10000;
+const int CallLimitPerSecond = 10;
+static Queue<DateTime> _timeStampQueue = new Queue<DateTime>(CallLimitPerSecond);
+// Use SemaphoreSlim rather than Monitor: the lock is held across an await,
+// and Monitor.Exit throws if the continuation resumes on another thread.
+static SemaphoreSlim _queueLock = new SemaphoreSlim(1, 1);
+
+static async Task WaitCallLimitPerSecondAsync()
+{
+    await _queueLock.WaitAsync();
+    try
+    {
+        if (_timeStampQueue.Count >= CallLimitPerSecond)
+        {
+            TimeSpan timeInterval = DateTime.UtcNow - _timeStampQueue.Peek();
+            if (timeInterval < TimeSpan.FromSeconds(1))
+            {
+                await Task.Delay(TimeSpan.FromSeconds(1) - timeInterval);
+            }
+            _timeStampQueue.Dequeue();
+        }
+        _timeStampQueue.Enqueue(DateTime.UtcNow);
+    }
+    finally
+    {
+        _queueLock.Release();
+    }
+}
+```
+
+## Step 2: Authorize the API call
+
+When you use a client library, you must pass your key to the constructor of the **FaceClient** class. For example:
+
+```csharp
+private readonly IFaceClient faceClient = new FaceClient(
+ new ApiKeyServiceClientCredentials("<SubscriptionKey>"),
+ new System.Net.Http.DelegatingHandler[] { });
+```
+
+To get the key, open your Face resource in the Azure portal and go to the **Keys and Endpoint** page. For more information, see [Subscriptions](https://www.microsoft.com/cognitive-services/sign-up).
+
+## Step 3: Create the PersonGroup
+
+A PersonGroup named "MyPersonGroup" is created to hold the persons.
+The request time is enqueued to `_timeStampQueue` so that it counts toward the overall rate limit.
+
+```csharp
+const string personGroupId = "mypersongroupid";
+const string personGroupName = "MyPersonGroup";
+_timeStampQueue.Enqueue(DateTime.UtcNow);
+await faceClient.PersonGroup.CreateAsync(personGroupId, personGroupName);
+```
+
+## Step 4: Create the persons for the PersonGroup
+
+Persons are created concurrently, and `await WaitCallLimitPerSecondAsync()` is also applied to avoid exceeding the call limit.
+
+```csharp
+Person[] persons = new Person[PersonCount];
+// Parallel.For doesn't await async lambdas, so collect the tasks explicitly
+// and wait for all of them to finish.
+var createTasks = new List<Task>();
+for (int i = 0; i < PersonCount; i++)
+{
+    int index = i;
+    createTasks.Add(Task.Run(async () =>
+    {
+        await WaitCallLimitPerSecondAsync();
+
+        string personName = $"PersonName#{index}";
+        persons[index] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
+    }));
+}
+await Task.WhenAll(createTasks);
+```
+
+## Step 5: Add faces to the persons
+
+Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially.
+Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation.
+
+```csharp
+var addFaceTasks = new List<Task>();
+for (int i = 0; i < PersonCount; i++)
+{
+    int index = i;
+    addFaceTasks.Add(Task.Run(async () =>
+    {
+        Guid personId = persons[index].PersonId;
+        string personImageDir = @"/path/to/person/i/images";
+
+        // Faces for one person are added sequentially within this task.
+        foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
+        {
+            await WaitCallLimitPerSecondAsync();
+
+            using (Stream stream = File.OpenRead(imagePath))
+            {
+                await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
+            }
+        }
+    }));
+}
+await Task.WhenAll(addFaceTasks);
+```
+
+## Summary
+
+In this guide, you learned the process of creating a PersonGroup with a massive number of persons and faces. Several reminders:
+
+- This strategy also applies to FaceLists and LargePersonGroups.
+- Adding faces to, or deleting faces from, different FaceLists or different persons in a LargePersonGroup can be processed concurrently.
+- Adding faces to, or deleting faces from, one specific FaceList or person in a LargePersonGroup must be done sequentially.
+- For simplicity, exception handling is omitted in this guide. For more robustness, apply a proper retry policy, such as the sketch that follows.
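+
+For example, a minimal retry sketch (not part of the original sample; `CallWithRetryAsync` is an illustrative helper) that backs off when the service returns HTTP 429:
+
+```csharp
+static async Task<T> CallWithRetryAsync<T>(Func<Task<T>> action, int maxAttempts = 3)
+{
+    for (int attempt = 1; ; attempt++)
+    {
+        try
+        {
+            return await action();
+        }
+        catch (APIErrorException e) when (
+            (int)e.Response.StatusCode == 429 && attempt < maxAttempts)
+        {
+            // Back off: 2, 4, 8, ... seconds.
+            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
+        }
+    }
+}
+```
+
+You might then wrap each call, for example: `persons[index] = await CallWithRetryAsync(() => faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName));`.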
+
+The following features were explained and demonstrated:
+
+- Create PersonGroups by using the [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) API.
+- Create persons by using the [PersonGroup Person - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) API.
+- Add faces to persons by using the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API.
+
+## Next steps
+
+In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
+
+- [Use the PersonDirectory structure](use-persondirectory.md)
cognitive-services Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/analyze-video.md
+
+ Title: Analyze videos in near real time - Computer Vision
+
+description: Learn how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API.
+ Last updated : 09/09/2019
+ms.devlang: csharp
+++
+# Analyze videos in near real time
+
+This article demonstrates how to perform near real-time analysis on frames that are taken from a live video stream by using the Computer Vision API. The basic elements of such an analysis are:
+
+- Acquiring frames from a video source.
+- Selecting which frames to analyze.
+- Submitting these frames to the API.
+- Consuming each analysis result that's returned from the API call.
+
+The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+
+## Approaches to running near real-time analysis
+
+You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
+
+### Design an infinite loop
+
+The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, you grab a frame, analyze it, and then consume the result:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+ }
+}
+```
+
+If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
+
+### Allow the API calls to run in parallel
+
+Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ var t = Task.Run(async () =>
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+        });
+ }
+}
+```
+
+With this approach, you launch each analysis in a separate task. The task can run in the background while you continue grabbing new frames. The approach avoids blocking the main thread as you wait for an API call to return. However, the approach can present certain disadvantages:
+* It sacrifices some of the guarantees that the simple version provided. That is, multiple API calls might occur in parallel, and the results might get returned in the wrong order.
+* It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe.
+* Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
+
+### Design a producer-consumer system
+
+For your final approach, designing a "producer-consumer" system, you build a producer thread that looks similar to your previously mentioned infinite loop. However, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
+
+```csharp
+// Queue that will contain the API call tasks.
+var taskQueue = new BlockingCollection<Task<ResultWrapper>>();
+
+// Producer thread.
+while (true)
+{
+ // Grab a frame.
+ Frame f = GrabFrame();
+
+ // Decide whether to analyze the frame.
+ if (ShouldAnalyze(f))
+ {
+ // Start a task that will run in parallel with this thread.
+ var analysisTask = Task.Run(async () =>
+ {
+ // Put the frame, and the result/exception into a wrapper object.
+ var output = new ResultWrapper(f);
+ try
+ {
+ output.Analysis = await Analyze(f);
+ }
+ catch (Exception e)
+ {
+ output.Exception = e;
+ }
+ return output;
+        });
+
+ // Push the task onto the queue.
+ taskQueue.Add(analysisTask);
+ }
+}
+```
+
+You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
+
+```csharp
+// Consumer thread.
+while (true)
+{
+ // Get the oldest task.
+ Task<ResultWrapper> analysisTask = taskQueue.Take();
+
+ // Wait until the task is completed.
+ var output = await analysisTask;
+
+ // Consume the exception or result.
+ if (output.Exception != null)
+ {
+ throw output.Exception;
+ }
+ else
+ {
+ ConsumeResult(output.Analysis);
+ }
+}
+```
+
+## Implement the solution
+
+### Get started quickly
+
+To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+
+The library contains the `FrameGrabber` class, which implements the previously discussed producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
+
+To illustrate some of the possibilities, we've provided two sample apps that use the library.
+
+The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is reproduced in the following code:
+
+```csharp
+using System;
+using System.Linq;
+using Microsoft.Azure.CognitiveServices.Vision.Face;
+using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
+using VideoFrameAnalyzer;
+
+namespace BasicConsoleSample
+{
+ internal class Program
+ {
+ const string ApiKey = "<your API key>";
+ const string Endpoint = "https://<your API region>.api.cognitive.microsoft.com";
+
+ private static async Task Main(string[] args)
+ {
+ // Create grabber.
+ FrameGrabber<DetectedFace[]> grabber = new FrameGrabber<DetectedFace[]>();
+
+ // Create Face Client.
+ FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(ApiKey))
+ {
+ Endpoint = Endpoint
+ };
+
+ // Set up a listener for when we acquire a new frame.
+ grabber.NewFrameProvided += (s, e) =>
+ {
+ Console.WriteLine($"New frame acquired at {e.Frame.Metadata.Timestamp}");
+ };
+
+ // Set up a Face API call.
+ grabber.AnalysisFunction = async frame =>
+ {
+ Console.WriteLine($"Submitting frame acquired at {frame.Metadata.Timestamp}");
+ // Encode image and submit to Face service.
+ return (await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"))).ToArray();
+ };
+
+ // Set up a listener for when we receive a new result from an API call.
+ grabber.NewResultAvailable += (s, e) =>
+ {
+ if (e.TimedOut)
+ Console.WriteLine("API call timed out.");
+ else if (e.Exception != null)
+ Console.WriteLine("API call threw an exception.");
+ else
+ Console.WriteLine($"New result received for frame acquired at {e.Frame.Metadata.Timestamp}. {e.Analysis.Length} faces detected");
+ };
+
+ // Tell grabber when to call the API.
+ // See also TriggerAnalysisOnPredicate
+ grabber.TriggerAnalysisOnInterval(TimeSpan.FromMilliseconds(3000));
+
+ // Start running in the background.
+ await grabber.StartProcessingCameraAsync();
+
+ // Wait for key press to stop.
+ Console.WriteLine("Press any key to stop...");
+ Console.ReadKey();
+
+ // Stop, blocking until done.
+ await grabber.StopProcessingAsync();
+ }
+ }
+}
+```
+
+The second sample app is a bit more interesting. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
+
+In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure Cognitive Services.
+
+By using this approach, you can visualize the detected face immediately. You can then update the emotions later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Cognitive Services APIs can be used to augment this processing with more advanced analysis when necessary.
+
+![The LiveCameraSample app displaying an image with tags](../../Video/Images/FramebyFrame.jpg)
+
+### Integrate the samples into your codebase
+
+To get started with this sample, do the following:
+
+1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
+2. Create resources for Computer Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
+ - [Computer Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
+ - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
+ After the resources are deployed, click **Go to resource** to collect your key and endpoint for each resource.
+3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
+4. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
+ - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
+ - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The keys are persisted across sessions as user data.
+
+When you're ready to integrate the samples, reference the VideoFrameAnalyzer library from your own projects.
+
+The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure Cognitive Services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure Cognitive Services.
+
+## Summary
+
+In this article, you learned how to run near real-time analysis on live video streams by using the Face and Computer Vision services. You also learned how you can use our sample code to get started.
+
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/). To provide broader API feedback, go to our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
+
cognitive-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image.md
+
+ Title: Call the Image Analysis API
+
+description: Learn how to call the Image Analysis API and configure its behavior.
+ Last updated : 04/11/2022
+# Call the Image Analysis API
+
+This article demonstrates how to call the Image Analysis API to return information about an image's visual features. It also shows you how to parse the returned information using the client SDKs or REST API.
+
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you're using a client SDK, you'll also need to authenticate a client object. If you haven't done these steps, follow the [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
+
+## Submit data to the service
+
+The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
+
+#### [REST](#tab/rest)
+
+When analyzing a local image, you put the binary image data in the HTTP request body. For a remote image, you specify the image's URL by formatting the request body like this: `{"url":"http://example.com/images/test.jpg"}`.
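+
+If you're calling the REST API from C#, a hedged sketch (endpoint and key are placeholders; requires the `System.Net.Http` and `System.Text` namespaces) might look like this:
+
+```csharp
+// Analyze a remote image by sending its URL in the JSON request body.
+var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");
+
+var body = new StringContent(
+    "{\"url\":\"http://example.com/images/test.jpg\"}",
+    Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.PostAsync(
+    "https://<your-endpoint>/vision/v3.2/analyze?visualFeatures=Description,Tags", body);
+Console.WriteLine(await response.Content.ReadAsStringAsync());
+```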
+
+#### [C#](#tab/csharp)
+
+In your main class, save a reference to the URL of the image you want to analyze.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze_url)]
+
+#### [Java](#tab/java)
+
+In your main class, save a reference to the URL of the image you want to analyze.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_urlimage)]
+
+#### [JavaScript](#tab/javascript)
+
+In your main function, save a reference to the URL of the image you want to analyze.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_describe_image)]
+
+#### [Python](#tab/python)
+
+Save a reference to the URL of the image you want to analyze.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_remoteimage)]
++++
+## Determine how to process the data
+
+### Select visual features
+
+The Analyze API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The examples below add all of the available visual features, but for practical usage you'll likely only need one or two.
+
+#### [REST](#tab/rest)
+
+You can specify which features you want to use by setting the URL query parameters of the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). A parameter can have multiple values, separated by commas. Each feature you specify will require more computation time, so only specify what you need.
+
+|URL parameter | Value | Description|
+|||--|
+|`visualFeatures`|`Adult` | detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content ("racy" content) is also detected.|
+|`visualFeatures`|`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
+|`visualFeatures`|`Categories` | categorizes image content according to a taxonomy defined in documentation. This value is the default value of `visualFeatures`.|
+|`visualFeatures`|`Color` | determines the accent color, dominant color, and whether an image is black and white.|
+|`visualFeatures`|`Description` | describes the image content with a complete sentence in supported languages.|
+|`visualFeatures`|`Faces` | detects whether faces are present. If present, generates coordinates, gender, and age.|
+|`visualFeatures`|`ImageType` | detects if image is clip art or a line drawing.|
+|`visualFeatures`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`visualFeatures`|`Tags` | tags the image with a detailed list of words related to the image content.|
+|`details`| `Celebrities` | identifies celebrities if detected in the image.|
+|`details`|`Landmarks` |identifies landmarks if detected in the image.|
+
+A populated URL might look like this:
+
+`https://{endpoint}/vision/v3.2/analyze?visualFeatures=Description,Tags&details=Celebrities`
+
+#### [C#](#tab/csharp)
+
+Define your new method for image analysis. Add the code below, which specifies visual features you'd like to extract in your analysis. See the **[VisualFeatureTypes](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes)** enum for a complete list.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_visualfeatures)]
++
+#### [Java](#tab/java)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.visualfeaturetypes) enum for a complete list.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_features_remote)]
+
+#### [JavaScript](#tab/javascript)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/javascript/api/@azure/cognitiveservices-computervision/visualfeaturetypes) enum for a complete list.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_features_remote)]
+
+#### [Python](#tab/python)
+
+Specify which visual features you'd like to extract in your analysis. See the [VisualFeatureTypes](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.models.visualfeaturetypes) enum for a complete list.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_features_remote)]
+++++
+### Specify languages
+
+You can also specify the language of the returned data.
+
+#### [REST](#tab/rest)
+
+The following URL query parameter specifies the language. The default value is `en`.
+
+|URL parameter | Value | Description|
+|||--|
+|`language`|`en` | English|
+|`language`|`es` | Spanish|
+|`language`|`ja` | Japanese|
+|`language`|`pt` | Portuguese|
+|`language`|`zh` | Simplified Chinese|
+
+A populated URL might look like this:
+
+`https://{endpoint}/vision/v3.2/analyze?visualFeatures=Description,Tags&details=Celebrities&language=en`
+
+#### [C#](#tab/csharp)
+
+Use the *language* parameter of [AnalyzeImageAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.computervision.computervisionclientextensions.analyzeimageasync#microsoft-azure-cognitiveservices-vision-computervision-computervisionclientextensions-analyzeimageasync(microsoft-azure-cognitiveservices-vision-computervision-icomputervisionclient-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-visualfeaturetypes))))-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-details))))-system-string-system-collections-generic-ilist((system-nullable((microsoft-azure-cognitiveservices-vision-computervision-models-descriptionexclude))))-system-string-system-threading-cancellationtoken)) call to specify a language. A method call that specifies a language might look like the following.
+
+```csharp
+ImageAnalysis results = await client.AnalyzeImageAsync(imageUrl, visualFeatures: features, language: "en");
+```
+
+#### [Java](#tab/java)
+
+Use the [AnalyzeImageOptionalParameter](/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.models.analyzeimageoptionalparameter) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
++
+```java
+ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
+ .withVisualFeatures(featuresToExtractFromLocalImage)
+ .language("en")
+ .execute();
+```
+
+#### [JavaScript](#tab/javascript)
+
+Use the **language** property of the [ComputerVisionClientAnalyzeImageOptionalParams](/javascript/api/@azure/cognitiveservices-computervision/computervisionclientanalyzeimageoptionalparams) input in your Analyze call to specify a language. A method call that specifies a language might look like the following.
+
+```javascript
+const result = (await computerVisionClient.analyzeImage(imageURL,{visualFeatures: features, language: 'en'}));
+```
+
+#### [Python](#tab/python)
+
+Use the *language* parameter of your [analyze_image](/python/api/azure-cognitiveservices-vision-computervision/azure.cognitiveservices.vision.computervision.operations.computervisionclientoperationsmixin#azure-cognitiveservices-vision-computervision-operations-computervisionclientoperationsmixin-analyze-image) call to specify a language. A method call that specifies a language might look like the following.
+
+```python
+results_remote = computervision_client.analyze_image(remote_image_url , remote_image_features, remote_image_details, 'en')
+```
++++
+## Get results from the service
+
+This section shows you how to parse the results of the API call. It includes the API call itself.
+
+> [!NOTE]
+> **Scoped API calls**
+>
+> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `https://{endpoint}/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
+
+#### [REST](#tab/rest)
+
+The service returns a `200` HTTP response, and the body contains the returned data in the form of a JSON string. The following text is an example of a JSON response.
+
+```json
+{
+ "tags":[
+ {
+ "name":"outdoor",
+ "score":0.976
+ },
+ {
+ "name":"bird",
+ "score":0.95
+ }
+ ],
+ "description":{
+ "tags":[
+ "outdoor",
+ "bird"
+ ],
+ "captions":[
+ {
+ "text":"partridge in a pear tree",
+ "confidence":0.96
+ }
+ ]
+ }
+}
+```
+
+See the following table for explanations of the fields in this example:
+
+Field | Type | Content
+|||
+tags | `object` | The top-level object for an array of tags.
+tags[].name | `string` | The keyword from the tags classifier.
+tags[].score | `number` | The confidence score, between 0 and 1.
+description | `object` | The top-level object for an image description.
+description.tags[] | `string` | The list of tags. If there is insufficient confidence in the ability to produce a caption, the tags might be the only information available to the caller.
+description.captions[].text | `string` | A phrase describing the image.
+description.captions[].confidence | `number` | The confidence score for the phrase.
+
+### Error codes
+
+See the following list of possible errors and their causes:
+
+* 400
+ * `InvalidImageUrl` - Image URL is badly formatted or not accessible.
+ * `InvalidImageFormat` - Input data is not a valid image.
+ * `InvalidImageSize` - Input image is too large.
+ * `NotSupportedVisualFeature` - Specified feature type isn't valid.
+ * `NotSupportedImage` - Unsupported image, for example child pornography.
+ * `InvalidDetails` - Unsupported `detail` parameter value.
+ * `NotSupportedLanguage` - The requested operation isn't supported in the language specified.
+ * `BadArgument` - More details are provided in the error message.
+* 415 - Unsupported media type error. The Content-Type isn't in the allowed types:
+ * For an image URL, Content-Type should be `application/json`
+ * For a binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data`
+* 500
+ * `FailedToProcess`
+ * `Timeout` - Image processing timed out.
+ * `InternalServerError`
++
+#### [C#](#tab/csharp)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/ComputerVision/ImageAnalysisQuickstart.cs?name=snippet_analyze)]
+
+#### [Java](#tab/java)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-java[](~/cognitive-services-quickstart-code/java/ComputerVision/src/main/java/ImageAnalysisQuickstart.java?name=snippet_analyze)]
+
+#### [JavaScript](#tab/javascript)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-javascript[](~/cognitive-services-quickstart-code/javascript/ComputerVision/ImageAnalysisQuickstart.js?name=snippet_analyze)]
+
+#### [Python](#tab/python)
+
+The following code calls the Image Analysis API and prints the results to the console.
+
+[!code-python[](~/cognitive-services-quickstart-code/python/ComputerVision/ImageAnalysisQuickstart.py?name=snippet_analyze)]
++++
+> [!TIP]
+> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker).
++
+## Next steps
+
+* Explore the [concept articles](../concept-object-detection.md) to learn more about each feature.
+* See the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn more about the API functionality.
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
+
+ Title: How to call the Read API
+
+description: Learn how to call the Read API and configure its behavior in detail.
+ Last updated : 02/05/2022
+# Call the Read API
+
+In this guide, you'll learn how to call the Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
+
+## Determine how to process the data (optional)
+
+### Specify the OCR model
+
+By default, the service will use the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
+
+When using the Read operation, use the following values for the optional `model-version` parameter.
+
+|Value| Model used |
+|:--|:-|
+| Not provided | Latest GA model |
+| latest | Latest GA model|
+| [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text, along with several quality and performance enhancements |
+| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic, and related languages. For handwritten text, adds support for Japanese and Korean. |
+| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. |
+| 2021-04-12 | 2021 GA model |
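+
+For example, a populated request URL that pins the model version might look like this:
+
+`https://{endpoint}/vision/v3.2/read/analyze?model-version=2022-04-30`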
+
+### Input language
+
+By default, the service extracts all text from your images or documents including mixed languages. The [Read operation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) has an optional request parameter for language. Only provide a language code if you want to force the document to be processed as that specific language. Otherwise, the service may return incomplete and incorrect text.
+
+### Natural reading order output (Latin languages only)
+
+By default, the service outputs the text lines in left-to-right order. Optionally, with the `readingOrder` request parameter, use `natural` for a more human-friendly reading order, as shown in the following example. This feature is only supported for Latin languages.
++
+### Select page(s) or page ranges for text extraction
+
+By default, the service extracts text from all pages in the documents. Optionally, use the `pages` request parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
++
+## Submit data to the service
+
+You submit either a local image or a remote image to the Read API. For local, you put the binary image data in the HTTP request body. For remote, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
+
+The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
+
+`https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
+
+The call returns with a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Operation ID to be used in the next step.
+
+|Response header| Example value |
+|:--|:-|
+|Operation-Location | `https://cognitiveservice/vision/v3.2/read/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
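+
+For example, a hedged C# sketch of this step (endpoint and key are placeholders; requires the `System.Linq`, `System.Net.Http`, and `System.Text` namespaces) might look like this:
+
+```csharp
+// Submit a remote image URL to the Read operation and capture the
+// Operation-Location header for the next step.
+var client = new HttpClient();
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");
+
+var body = new StringContent(
+    "{\"url\":\"http://example.com/images/test.jpg\"}",
+    Encoding.UTF8, "application/json");
+
+HttpResponseMessage response = await client.PostAsync(
+    "https://<your-endpoint>/vision/v3.2/read/analyze", body);
+response.EnsureSuccessStatusCode();
+
+string operationLocation = response.Headers.GetValues("Operation-Location").First();
+```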
+
+> [!NOTE]
+> **Billing**
+>
+> The [Computer Vision pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) page includes the pricing tier for Read. Each analyzed image or page is one transaction. If you call the operation with a PDF or TIFF document containing 100 pages, the Read operation will count it as 100 transactions and you will be billed for 100 transactions. If you made 50 calls to the operation and each call submitted a document with 100 pages, you will be billed for 50 x 100 = 5,000 transactions.
++
+## Get results from the service
+
+The second step is to call the [Get Read Results](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
+
+`https://{endpoint}/vision/v3.2/read/analyzeResults/{operationId}`
+
+It returns a JSON response that contains a **status** field with the following possible values.
+
+|Value | Meaning |
+|:--|:-|
+| `notStarted`| The operation has not started. |
+| `running`| The operation is being processed. |
+| `failed`| The operation has failed. |
+| `succeeded`| The operation has succeeded. |
+
+You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 1 to 2 seconds to avoid exceeding the requests per second (RPS) rate.
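+
+A hedged polling sketch (continuing from the submit example above; requires `System.Text.Json`) might look like this:
+
+```csharp
+// Poll the analyzeResults URL until the operation completes, waiting one
+// second between calls to stay under the RPS limit.
+string status;
+string json;
+do
+{
+    await Task.Delay(TimeSpan.FromSeconds(1));
+    json = await client.GetStringAsync(operationLocation);
+    using JsonDocument doc = JsonDocument.Parse(json);
+    status = doc.RootElement.GetProperty("status").GetString();
+} while (status == "notStarted" || status == "running");
+
+Console.WriteLine(json); // Full JSON result, including the extracted text
+```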
+
+> [!NOTE]
+> The free tier limits the request rate to 20 calls per minute. The paid tier allows 10 requests per second (RPS), which can be increased upon request. Note your Azure resource identifier and region, and open an Azure support ticket or contact your account team to request a higher RPS rate.
+
+When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores.
+
+> [!NOTE]
+> The data submitted to the `Read` operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
+
+### Sample JSON output
+
+See the following example of a successful JSON response:
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2021-02-04T06:32:08.2752706+00:00",
+ "lastUpdatedDateTime": "2021-02-04T06:32:08.7706172+00:00",
+ "analyzeResult": {
+ "version": "3.2",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 2.1243,
+ "width": 502,
+ "height": 252,
+ "unit": "pixel",
+ "lines": [
+ {
+ "boundingBox": [
+ 58,
+ 42,
+ 314,
+ 59,
+ 311,
+ 123,
+ 56,
+ 121
+ ],
+ "text": "Tabs vs",
+ "appearance": {
+ "style": {
+ "name": "handwriting",
+ "confidence": 0.96
+ }
+ },
+ "words": [
+ {
+ "boundingBox": [
+ 68,
+ 44,
+ 225,
+ 59,
+ 224,
+ 122,
+ 66,
+ 123
+ ],
+ "text": "Tabs",
+ "confidence": 0.933
+ },
+ {
+ "boundingBox": [
+ 241,
+ 61,
+ 314,
+ 72,
+ 314,
+ 123,
+ 239,
+ 122
+ ],
+ "text": "vs",
+ "confidence": 0.977
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Handwritten classification for text lines (Latin languages only)
+
+The response includes a classification of whether each text line is in handwriting style, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
++
+## Next steps
+
+- Get started with the [OCR (Read) REST API or client library quickstarts](../quickstarts-sdk/client-library.md).
+- Learn about the [Read 3.2 REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/find-similar-faces.md
+
+ Title: "Find similar faces"
+
+description: Use the Face service to find similar faces (face search by image).
+ Last updated : 05/05/2022
+# Find similar faces
+
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
+
+## Set up sample URL
+
+This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path.
+
+```
+"https://csdx.blob.core.windows.net/resources/Face/media/"
+```
+
+## Detect faces for comparison
+
+You need to detect faces in images before you can compare them. In this guide, the following remote image, called *findsimilar.jpg*, will be used as the source:
+
+![Photo of a man who is smiling.](../media/quickstarts/find-similar.jpg)
+
+#### [C#](#tab/csharp)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_face_detect_recognize)]
+
+The following code uses the above method to get face data from a series of images.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_loadfaces)]
++
+#### [JavaScript](#tab/javascript)
+
+The following face detection method is optimized for comparison operations. It doesn't extract detailed face attributes, and it uses an optimized recognition model.
++
+The following code uses the above method to get face data from a series of images.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your key and endpoint where appropriate. Then run the command to detect one of the target faces.
++
+Find the `"faceId"` value in the JSON response and save it to a temporary location. Then, call the above command again for these other image URLs, and save their face IDs as well. You'll use these IDs as the target group of faces from which to find a similar face.
++
+Finally, detect the single source face that you'll use for matching, and save its ID. Keep this ID separate from the others.
++++
+## Find and print matches
+
+In this guide, the face detected in the *Family1-Dad1.jpg* image should be returned as the face that's similar to the source image face.
+
+![Photo of a man who is smiling; this is the same person as the previous image.](../media/quickstarts/family-1-dad-1.jpg)
+
+#### [C#](#tab/csharp)
+
+The following code calls the Find Similar API on the saved list of faces.
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar)]
+
+The following code prints the match details to the console:
+
+[!code-csharp[](~/cognitive-services-quickstart-code/dotnet/Face/FaceQuickstart.cs?name=snippet_find_similar_print)]
+
+#### [JavaScript](#tab/javascript)
+
+The following method takes a set of target faces and a single source face. Then, it compares them and finds all the target faces that are similar to the source face. Finally, it prints the match details to the console.
+++
+#### [REST API](#tab/rest)
+
+Copy the following cURL command and insert your key and endpoint where appropriate.
++
+Paste in the following JSON content for the `body` value:
++
+Copy the source face ID value into the `"faceId"` field, and copy the other face IDs, separated by commas, as terms in the `"faceIds"` array.
+
+Run the command, and the returned JSON should show the correct face ID as a similar match.
+++
+## Next steps
+
+In this guide, you learned how to call the Find Similar API to do a face search by similarity in a larger group of faces. Next, learn more about the different recognition models available for face comparison operations.
+
+* [Specify a face recognition model](specify-recognition-model.md)
cognitive-services Identity Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-analyze-video.md
+
+ Title: "Example: Real-time video analysis - Face"
+
+description: Use the Face service to perform near-real-time analysis on frames taken from a live video stream.
+ Last updated : 03/01/2018
+ms.devlang: csharp
+++
+# Example: How to Analyze Videos in Real-time
+
+This guide will demonstrate how to perform near-real-time analysis on frames taken from a live video stream. The basic components in such a system are:
+
+- Acquire frames from a video source
+- Select which frames to analyze
+- Submit these frames to the API
+- Consume each analysis result that is returned from the API call
+
+These samples are written in C# and the code can be found on GitHub here: [https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/).
+
+## The Approach
+
+There are multiple ways to solve the problem of running near-real-time analysis on video streams. We will start by outlining three approaches in increasing levels of sophistication.
+
+### A Simple Approach
+
+The simplest design for a near-real-time analysis system is an infinite loop, where each iteration grabs a frame, analyzes it, and then consumes the result:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+ }
+}
+```
+
+If our analysis consisted of a lightweight client-side algorithm, this approach would be suitable. However, when analysis happens in the cloud, the latency involved means that an API call might take several seconds. During this time, we are not capturing images, and our thread is essentially doing nothing. Our maximum frame-rate is limited by the latency of the API calls.
+
+### Parallelizing API Calls
+
+While a simple single-threaded loop makes sense for a lightweight client-side algorithm, it doesn't fit well with the latency involved in cloud API calls. The solution to this problem is to allow the long-running API calls to execute in parallel with the frame-grabbing. In C#, we could achieve this using Task-based parallelism, for example:
+
+```csharp
+while (true)
+{
+ Frame f = GrabFrame();
+ if (ShouldAnalyze(f))
+ {
+ var t = Task.Run(async () =>
+ {
+ AnalysisResult r = await Analyze(f);
+ ConsumeResult(r);
+        });
+ }
+}
+```
+
+This code launches each analysis in a separate Task, which can run in the background while we continue grabbing new frames. This method avoids blocking the main thread while we wait for an API call to return, but it loses some of the guarantees that the simple version provided. Multiple API calls might occur in parallel, and the results might be returned in the wrong order. It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which is dangerous if the function is not thread-safe. Finally, this simple code doesn't keep track of the Tasks that get created, so exceptions silently disappear. Therefore, the final step is to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order.
+
+### A Producer-Consumer Design
+
+In our final "producer-consumer" system, we have a producer thread that looks similar to our previous infinite loop. However, instead of consuming analysis results as soon as they are available, the producer simply puts the tasks into a queue to keep track of them.
+
+```csharp
+// Queue that will contain the API call tasks.
+var taskQueue = new BlockingCollection<Task<ResultWrapper>>();
+
+// Producer thread.
+while (true)
+{
+ // Grab a frame.
+ Frame f = GrabFrame();
+
+ // Decide whether to analyze the frame.
+ if (ShouldAnalyze(f))
+ {
+ // Start a task that will run in parallel with this thread.
+ var analysisTask = Task.Run(async () =>
+ {
+ // Put the frame, and the result/exception into a wrapper object.
+ var output = new ResultWrapper(f);
+ try
+ {
+ output.Analysis = await Analyze(f);
+ }
+ catch (Exception e)
+ {
+ output.Exception = e;
+ }
+ return output;
+        });
+
+ // Push the task onto the queue.
+ taskQueue.Add(analysisTask);
+ }
+}
+```
+
+We also have a consumer thread that takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, we can guarantee that results get consumed one at a time, in the correct order, without limiting the maximum frame-rate of the system.
+
+```csharp
+// Consumer thread.
+while (true)
+{
+ // Get the oldest task.
+ Task<ResultWrapper> analysisTask = taskQueue.Take();
+
+ // Await until the task is completed.
+ var output = await analysisTask;
+
+ // Consume the exception or result.
+ if (output.Exception != null)
+ {
+ throw output.Exception;
+ }
+ else
+ {
+ ConsumeResult(output.Analysis);
+ }
+}
+```
+
+## Implementing the Solution
+
+### Getting Started
+
+To get your app up and running as quickly as possible, you will use a flexible implementation of the system described above. To access the code, go to [https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis).
+
+The library contains the class FrameGrabber, which implements the producer-consumer system discussed above to process video frames from a webcam. The user can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired or a new analysis result is available.
+
+To illustrate some of the possibilities, there are two sample apps that use the library. The first is a simple console app that grabs frames from the default webcam and submits them to the Face service for face detection; a condensed sketch of it appears below.
++
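+The sketch assumes the member names exposed by the VideoFrameAnalyzer library in that repo (`AnalysisFunction`, `NewResultAvailable`, `TriggerAnalysisOnInterval`, and `StartProcessingCameraAsync`); check the sample source for the exact API and error handling.
+
+```csharp
+// A condensed sketch of the console sample, under the assumptions noted above.
+FrameGrabber<DetectedFace[]> grabber = new FrameGrabber<DetectedFace[]>();
+
+FaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials("<your-face-key>"))
+{
+    Endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+};
+
+// The function the grabber calls for each frame it decides to analyze.
+grabber.AnalysisFunction = async frame =>
+{
+    // Encode the frame as JPEG and submit it for face detection.
+    return (await faceClient.Face.DetectWithStreamAsync(frame.Image.ToMemoryStream(".jpg"))).ToArray();
+};
+
+// Print a line whenever a new result arrives from the service.
+grabber.NewResultAvailable += (s, e) =>
+{
+    if (e.Exception == null)
+    {
+        Console.WriteLine($"{e.Analysis.Length} face(s) detected");
+    }
+};
+
+// Submit one frame for analysis every three seconds, then start the camera.
+grabber.TriggerAnalysisOnInterval(TimeSpan.FromSeconds(3));
+await grabber.StartProcessingCameraAsync();
+
+Console.ReadKey();
+await grabber.StopProcessingAsync();
+```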
+The second sample app is a bit more interesting, and it allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video; on the right side, it shows the most recent API result overlaid on the corresponding frame.
+
+In most modes, there will be a visible delay between the live video on the left, and the visualized analysis on the right. This delay is the time taken to make the API call. One exception is the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer using OpenCV, before submitting any images to Cognitive Services. This way, we can visualize the detected face immediately and then update the emotions once the API call returns. This is an example of a "hybrid" approach, where the client can perform some simple processing, and Cognitive Services APIs can augment this with more advanced analysis when necessary.
+
+![HowToAnalyzeVideo](../../Video/Images/FramebyFrame.jpg)
+
+### Integrating into your codebase
+
+To get started with this sample, follow these steps:
+
+1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
+2. Create resources for Computer Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
+ - [Computer Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
+ - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
+ After the resources are deployed, click **Go to resource** to collect your key and endpoint for each resource.
+3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
+4. Open the sample in Visual Studio, and build and run the sample applications:
+ - For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs).
+ - For LiveCameraSample, the keys should be entered into the Settings pane of the app. They will be persisted across sessions as user data.
+
+
+When you're ready to integrate, **reference the VideoFrameAnalyzer library from your own projects**.
+
+## Summary
+
+In this guide, you learned how to run near-real-time analysis on live video streams using the Face, Computer Vision, and Emotion APIs, and how to use our sample code to get started.
+
+Feel free to provide feedback and suggestions in the [GitHub repository](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) or, for broader API feedback, on our [UserVoice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) site.
+
+## Related Topics
+- [Call the detect API](identity-detect-faces.md)
cognitive-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-detect-faces.md
+
+ Title: "Call the Detect API - Face"
+
+description: This guide demonstrates how to use face detection to extract attributes like age, emotion, or head pose from a given image.
+++++++ Last updated : 08/04/2021+
+ms.devlang: csharp
+++
+# Call the Detect API
+
+This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
+
+The code snippets in this guide are written in C# by using the Azure Cognitive Services Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
++
+## Setup
+
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, with a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
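+For example, a minimal sketch of that setup (the key and resource name are placeholders):
+
+```csharp
+// Construct the client with your own Face key and endpoint.
+IFaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials("<your-face-key>"))
+{
+    Endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
+};
+```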
+
+## Submit data to the service
+
+To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
++
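+For example, a minimal sketch of each input type (the image URL and file path are placeholders):
+
+```csharp
+// Detect from a publicly accessible URL.
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync("https://<your-storage>/face.jpg");
+
+// Or detect from a local file's byte stream.
+using (FileStream stream = File.OpenRead(@"C:\images\face.jpg"))
+{
+    faces = await faceClient.Face.DetectWithStreamAsync(stream);
+}
+```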
+You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for their unique IDs and a rectangle that gives the pixel coordinates of the face. This way, you can tell which face ID maps to which face in the original image.
++
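+For example, a short sketch that maps each face ID to its rectangle:
+
+```csharp
+foreach (DetectedFace face in faces)
+{
+    FaceRectangle rect = face.FaceRectangle;
+    Console.WriteLine($"Face {face.FaceId}: left={rect.Left}, top={rect.Top}, width={rect.Width}, height={rect.Height}");
+}
+```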
+For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
+
+## Determine how to process the data
+
+This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need, because each added feature increases the time the operation takes to complete.
+
+### Get face landmarks
+
+[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
++
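+For example, a sketch of such a call (`imageUrl` is a placeholder):
+
+```csharp
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl,
+    returnFaceLandmarks: true,
+    detectionModel: DetectionModel.Detection01);
+```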
+### Get face attributes
+
+Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section.
+
+To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
+++
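+For example, a sketch that requests three attributes (`imageUrl` is a placeholder):
+
+```csharp
+// Request only the attributes you need. Depending on your SDK version,
+// this parameter may instead be typed as IList<FaceAttributeType?>.
+var attributeTypes = new List<FaceAttributeType>
+{
+    FaceAttributeType.Age,
+    FaceAttributeType.Emotion,
+    FaceAttributeType.HeadPose
+};
+
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl,
+    returnFaceAttributes: attributeTypes,
+    detectionModel: DetectionModel.Detection01);
+```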
+## Get results from the service
+
+### Face landmark results
+
+The following code demonstrates how you might retrieve the locations of the nose and pupils:
++
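+For example, a sketch that reads a few landmark coordinates from the first detected face:
+
+```csharp
+FaceLandmarks landmarks = faces[0].FaceLandmarks;
+
+Console.WriteLine($"Nose tip: ({landmarks.NoseTip.X}, {landmarks.NoseTip.Y})");
+Console.WriteLine($"Left pupil: ({landmarks.PupilLeft.X}, {landmarks.PupilLeft.Y})");
+Console.WriteLine($"Right pupil: ({landmarks.PupilRight.X}, {landmarks.PupilRight.Y})");
+```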
+You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:
++
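+For example, a sketch of that calculation, using the inner eye and lip landmarks to approximate the two centers:
+
+```csharp
+FaceLandmarks landmarks = faces[0].FaceLandmarks;
+
+// Approximate the center of the mouth and the center of the two eyes.
+double mouthCenterX = (landmarks.UpperLipBottom.X + landmarks.UnderLipTop.X) / 2;
+double mouthCenterY = (landmarks.UpperLipBottom.Y + landmarks.UnderLipTop.Y) / 2;
+double eyesCenterX = (landmarks.EyeLeftInner.X + landmarks.EyeRightInner.X) / 2;
+double eyesCenterY = (landmarks.EyeLeftInner.Y + landmarks.EyeRightInner.Y) / 2;
+
+// The face's "up" direction runs from the mouth center toward the eye center.
+double faceDirectionX = eyesCenterX - mouthCenterX;
+double faceDirectionY = eyesCenterY - mouthCenterY;
+```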
+When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
++
+### Face attribute results
+
+The following code shows how you might retrieve the face attribute data that you requested in the original call.
++
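+For example, a sketch that reads back the attributes requested in the earlier detect call:
+
+```csharp
+FaceAttributes attributes = faces[0].FaceAttributes;
+
+Console.WriteLine($"Age: {attributes.Age}");
+Console.WriteLine($"Happiness: {attributes.Emotion.Happiness}");
+Console.WriteLine($"Head pose: roll={attributes.HeadPose.Roll}, yaw={attributes.HeadPose.Yaw}, pitch={attributes.HeadPose.Pitch}");
+```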
+To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide.
+
+## Next steps
+
+In this guide, you learned how to use the various functionalities of face detection and analysis. Next, integrate these features into an app to add face data from users.
+
+- [Tutorial: Add users to a Face service](../enrollment-overview.md)
+
+## Related articles
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/migrate-face-data.md
+
+ Title: "Migrate your face data across subscriptions - Face"
+
+description: This guide shows you how to migrate your stored face data from one Face subscription to another.
+++++++ Last updated : 02/22/2021+
+ms.devlang: csharp
+++
+# Migrate your face data to a different Face subscription
+
+This guide shows you how to move face data, such as a saved PersonGroup object with faces, to a different Azure Cognitive Services Face subscription. To move the data, you use the Snapshot feature. This way you avoid having to repeatedly build and train a PersonGroup or FaceList object when you move or expand your operations. For example, perhaps you created a PersonGroup object with a free subscription and now want to migrate it to your paid subscription. Or you might need to sync face data across subscriptions in different regions for a large enterprise operation.
+
+This same migration strategy also applies to LargePersonGroup and LargeFaceList objects. If you aren't familiar with the concepts in this guide, see their definitions in the [Face recognition concepts](../concept-face-recognition.md) guide. This guide uses the Face .NET client library with C#.
+
+> [!WARNING]
+> The Snapshot feature might move your data outside the geographic region you originally selected. Data might move to West US, West Europe, and Southeast Asia regions.
+
+## Prerequisites
+
+You need the following items:
+
+- Two Face keys, one with the existing data and one to migrate to. To subscribe to the Face service and get your key, follow the instructions in [Create a Cognitive Services account](../../cognitive-services-apis-create-account.md).
+- The Face subscription ID string that corresponds to the target subscription. To find it, select **Overview** in the Azure portal.
+- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
+
+## Create the Visual Studio project
+
+This guide uses a simple console app to run the face data migration. For a full implementation, see the [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) on GitHub.
+
+1. In Visual Studio, create a new Console app .NET Framework project. Name it **FaceApiSnapshotSample**.
+1. Get the required NuGet packages. Right-click your project in the Solution Explorer, and select **Manage NuGet Packages**. Select the **Browse** tab, and select **Include prerelease**. Find and install the following package:
+   - [Microsoft.Azure.CognitiveServices.Vision.Face 2.3.0-preview](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/2.3.0-preview)
+
+## Create face clients
+
+In the **Main** method in *Program.cs*, create two [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) instances for your source and target subscriptions. This example uses a Face subscription in the East Asia region as the source and a West US subscription as the target. This example demonstrates how to migrate data from one Azure region to another.
++
+```csharp
+var FaceClientEastAsia = new FaceClient(new ApiKeyServiceClientCredentials("<East Asia Key>"))
+ {
+        Endpoint = "https://southeastasia.api.cognitive.microsoft.com/"
+ };
+
+var FaceClientWestUS = new FaceClient(new ApiKeyServiceClientCredentials("<West US Key>"))
+ {
+ Endpoint = "https://westus.api.cognitive.microsoft.com/"
+ };
+```
+
+Fill in the key values and endpoint URLs for your source and target subscriptions.
++
+## Prepare a PersonGroup for migration
+
+You need the ID of the PersonGroup in your source subscription to migrate it to the target subscription. Use the [PersonGroupOperationsExtensions.ListAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperationsextensions.listasync) method to retrieve a list of your PersonGroup objects. Then get the [PersonGroup.PersonGroupId](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.persongroup.persongroupid#Microsoft_Azure_CognitiveServices_Vision_Face_Models_PersonGroup_PersonGroupId) property. This process looks different based on what PersonGroup objects you have. In this guide, the source PersonGroup ID is stored in `personGroupId`.
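+For example, a sketch that lists the groups and takes the first one (your selection logic will differ):
+
+```csharp
+IList<PersonGroup> personGroups = await FaceClientEastAsia.PersonGroup.ListAsync();
+
+foreach (PersonGroup group in personGroups)
+{
+    Console.WriteLine($"{group.PersonGroupId}: {group.Name}");
+}
+
+string personGroupId = personGroups[0].PersonGroupId;
+```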
+
+> [!NOTE]
+> The [sample code](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample) creates and trains a new PersonGroup to migrate. In most cases, you should already have a PersonGroup to use.
+
+## Take a snapshot of a PersonGroup
+
+A snapshot is temporary remote storage for certain Face data types. It functions as a kind of clipboard to copy data from one subscription to another. First, you take a snapshot of the data in the source subscription. Then you apply it to a new data object in the target subscription.
+
+Use the source subscription's FaceClient instance to take a snapshot of the PersonGroup. Use [TakeAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperationsextensions.takeasync) with the PersonGroup ID and the target subscription's ID. If you have multiple target subscriptions, add them as array entries in the third parameter.
+
+```csharp
+var takeSnapshotResult = await FaceClientEastAsia.Snapshot.TakeAsync(
+ SnapshotObjectType.PersonGroup,
+ personGroupId,
+ new[] { "<Azure West US Subscription ID>" /* Put other IDs here, if multiple target subscriptions wanted */ });
+```
+
+> [!NOTE]
+> The process of taking and applying snapshots doesn't disrupt any regular calls to the source or target PersonGroups or FaceLists. However, don't make simultaneous calls that change the source object, such as [FaceList management calls](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.facelistoperations) or the [PersonGroup Train](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.persongroupoperations) call. The snapshot operation might run before or after those operations or might encounter errors.
+
+## Retrieve the snapshot ID
+
+The method used to take snapshots is asynchronous, so you must wait for its completion. Snapshot operations can't be canceled. First, retrieve an operation ID by parsing the `OperationLocation` field. Then pass the ID to the `WaitForOperation` helper method, which polls the operation status every 100 ms until it finishes.
+
+```csharp
+var takeOperationId = Guid.Parse(takeSnapshotResult.OperationLocation.Split('/')[2]);
+var operationStatus = await WaitForOperation(FaceClientEastAsia, takeOperationId);
+```
+
+A typical `OperationLocation` value looks like this:
+
+```csharp
+"/operations/a63a3bdd-a1db-4d05-87b8-dbad6850062a"
+```
+
+The `WaitForOperation` helper method is here:
+
+```csharp
+/// <summary>
+/// Waits for the take/apply operation to complete and returns the final operation status.
+/// </summary>
+/// <returns>The final operation status.</returns>
+private static async Task<OperationStatus> WaitForOperation(IFaceClient client, Guid operationId)
+{
+ OperationStatus operationStatus = null;
+ do
+ {
+ if (operationStatus != null)
+ {
+ Thread.Sleep(TimeSpan.FromMilliseconds(100));
+ }
+
+ // Get the status of the operation.
+ operationStatus = await client.Snapshot.GetOperationStatusAsync(operationId);
+
+ Console.WriteLine($"Operation Status: {operationStatus.Status}");
+ }
+ while (operationStatus.Status != OperationStatusType.Succeeded
+ && operationStatus.Status != OperationStatusType.Failed);
+
+ return operationStatus;
+}
+```
+
+After the operation status shows `Succeeded`, get the snapshot ID by parsing the `ResourceLocation` field of the returned OperationStatus instance.
+
+```csharp
+var snapshotId = Guid.Parse(operationStatus.ResourceLocation.Split('/')[2]);
+```
+
+A typical `ResourceLocation` value looks like this:
+
+```csharp
+"/snapshots/e58b3f08-1e8b-4165-81df-aa9858f233dc"
+```
+
+## Apply a snapshot to a target subscription
+
+Next, create the new PersonGroup in the target subscription by using a randomly generated ID. Then use the target subscription's FaceClient instance to apply the snapshot to this PersonGroup. Pass in the snapshot ID and the new PersonGroup ID.
+
+```csharp
+var newPersonGroupId = Guid.NewGuid().ToString();
+var applySnapshotResult = await FaceClientWestUS.Snapshot.ApplyAsync(snapshotId, newPersonGroupId);
+```
++
+> [!NOTE]
+> A Snapshot object is valid for only 48 hours. Only take a snapshot if you intend to use it for data migration soon after.
+
+A snapshot apply request returns another operation ID. To get this ID, parse the `OperationLocation` field of the returned `applySnapshotResult` instance.
+
+```csharp
+var applyOperationId = Guid.Parse(applySnapshotResult.OperationLocation.Split('/')[2]);
+```
+
+The snapshot application process is also asynchronous, so again use `WaitForOperation` to wait for it to finish.
+
+```csharp
+operationStatus = await WaitForOperation(FaceClientWestUS, applyOperationId);
+```
+
+## Test the data migration
+
+After you apply the snapshot, the new PersonGroup in the target subscription is populated with the original face data. By default, training results are also copied, so the new PersonGroup is ready for face identification calls without retraining.
+
+To test the data migration, run the following operations and compare the results they print to the console:
+
+```csharp
+await DisplayPersonGroup(FaceClientEastAsia, personGroupId);
+await IdentifyInPersonGroup(FaceClientEastAsia, personGroupId);
+
+await DisplayPersonGroup(FaceClientWestUS, newPersonGroupId);
+// No need to retrain the PersonGroup before identification,
+// training results are copied by snapshot as well.
+await IdentifyInPersonGroup(FaceClientWestUS, newPersonGroupId);
+```
+
+Use the following helper methods:
+
+```csharp
+private static async Task DisplayPersonGroup(IFaceClient client, string personGroupId)
+{
+ var personGroup = await client.PersonGroup.GetAsync(personGroupId);
+ Console.WriteLine("PersonGroup:");
+ Console.WriteLine(JsonConvert.SerializeObject(personGroup));
+
+ // List persons.
+ var persons = await client.PersonGroupPerson.ListAsync(personGroupId);
+
+ foreach (var person in persons)
+ {
+ Console.WriteLine(JsonConvert.SerializeObject(person));
+ }
+
+ Console.WriteLine();
+}
+```
+
+```csharp
+private static async Task IdentifyInPersonGroup(IFaceClient client, string personGroupId)
+{
+ using (var fileStream = new FileStream("data\\PersonGroup\\Daughter\\Daughter1.jpg", FileMode.Open, FileAccess.Read))
+ {
+ var detectedFaces = await client.Face.DetectWithStreamAsync(fileStream);
+
+ var result = await client.Face.IdentifyAsync(detectedFaces.Select(face => face.FaceId.Value).ToList(), personGroupId);
+ Console.WriteLine("Test identify against PersonGroup");
+ Console.WriteLine(JsonConvert.SerializeObject(result));
+ Console.WriteLine();
+ }
+}
+```
+
+Now you can use the new PersonGroup in the target subscription.
+
+To update the target PersonGroup again in the future, create a new PersonGroup to receive the snapshot, following the steps in this guide. A single PersonGroup object can have a snapshot applied to it only once.
+
+## Clean up resources
+
+After you finish migrating face data, manually delete the snapshot object.
+
+```csharp
+await FaceClientEastAsia.Snapshot.DeleteAsync(snapshotId);
+```
+
+## Next steps
+
+Next, see the relevant API reference documentation, explore a sample app that uses the Snapshot feature, or follow a how-to guide to start using the other API operations mentioned here:
+
+- [Snapshot reference documentation (.NET SDK)](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.snapshotoperations)
+- [Face snapshot sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceApiSnapshotSample/FaceApiSnapshotSample)
+- [Add faces](add-faces.md)
+- [Call the detect API](identity-detect-faces.md)
cognitive-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/mitigate-latency.md
+
+ Title: How to mitigate latency when using the Face service
+
+description: Learn how to mitigate latency when using the Face service.
+++++ Last updated : 1/5/2021+
+ms.devlang: csharp
+++
+# How to: mitigate latency when using the Face service
+
+You may encounter latency when using the Face service. Latency refers to any kind of delay that occurs when communicating over a network. In general, possible causes of latency include:
+- The physical distance each packet must travel from source to destination.
+- Problems with the transmission medium.
+- Errors in routers or switches along the transmission path.
+- The time required by antivirus applications, firewalls, and other security mechanisms to inspect packets.
+- Malfunctions in client or server applications.
+
+This article discusses possible causes of latency that are specific to Azure Cognitive Services, and how you can mitigate those causes.
+
+> [!NOTE]
+> Azure Cognitive Services does not provide any Service Level Agreement (SLA) regarding latency.
+
+## Possible causes of latency
+
+### Slow connection between the Cognitive Service and a remote URL
+
+Some Azure services provide methods that obtain data from a remote URL that you provide. For example, when you call the [DetectWithUrlAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithUrlAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_String_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can specify the URL of an image in which the service tries to detect faces.
+
+```csharp
+var faces = await client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
+```
+
+The Face service must then download the image from the remote server. If the connection from the Face service to the remote server is slow, that will affect the response time of the Detect method.
+
+To mitigate this situation, consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
+
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+```
+
+### Large upload size
+
+Some Azure services provide methods that obtain data from a file that you upload. For example, when you call the [DetectWithStreamAsync method](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync#Microsoft_Azure_CognitiveServices_Vision_Face_FaceOperationsExtensions_DetectWithStreamAsync_Microsoft_Azure_CognitiveServices_Vision_Face_IFaceOperations_System_IO_Stream_System_Nullable_System_Boolean__System_Nullable_System_Boolean__System_Collections_Generic_IList_System_Nullable_Microsoft_Azure_CognitiveServices_Vision_Face_Models_FaceAttributeType___System_String_System_Nullable_System_Boolean__System_String_System_Threading_CancellationToken_) of the Face service, you can upload an image in which the service tries to detect faces.
+
+```csharp
+using FileStream fs = File.OpenRead(@"C:\images\face.jpg");
+System.Collections.Generic.IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(fs, detectionModel: DetectionModel.Detection02);
+```
+
+If the file to upload is large, that will impact the response time of the `DetectWithStreamAsync` method, for the following reasons:
+- It takes longer to upload the file.
+- It takes the service longer to process the file, in proportion to the file size.
+
+Mitigations:
+- Consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example:
+``` csharp
+var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+```
+- Consider uploading a smaller file.
+ - See the guidelines regarding [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
+ - For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size will increase processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080.
+ - For face recognition, reducing the face size to 200x200 pixels doesn't affect the accuracy of the recognition model.
+ - The performance of the `DetectWithUrlAsync` and `DetectWithStreamAsync` methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
+ - If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison:
+```csharp
+var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedymini-biography.jpg");
+var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedydebating-richard-nixon.jpg");
+Task.WaitAll(new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
+IEnumerable<DetectedFace> results = faces_1.Result.Concat(faces_2.Result);
+```
+
+### Slow connection between your compute resource and the Face service
+
+If your computer has a slow connection to the Face service, this will affect the response time of service methods.
+
+Mitigations:
+- When you create your Face subscription, make sure to choose the region closest to where your application is hosted.
+- If you need to call multiple service methods, consider calling them in parallel if your application design allows for it. See the previous section for an example.
+- If longer latencies affect the user experience, choose a timeout threshold (for example, a maximum of 5 seconds) before retrying the API call, as in the sketch below.
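+The following sketch shows one way to apply such a threshold, using a `CancellationTokenSource` to cancel a detect call after 5 seconds (`client` and `imageUrl` are placeholders):
+
+```csharp
+using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
+{
+    try
+    {
+        var faces = await client.Face.DetectWithUrlAsync(imageUrl, cancellationToken: cts.Token);
+    }
+    catch (OperationCanceledException)
+    {
+        // The call timed out; retry it or surface an error to the user.
+    }
+}
+```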
+
+## Next steps
+
+In this guide, you learned how to mitigate latency when using the Face service. Next, learn how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively.
+
+> [!div class="nextstepaction"]
+> [Example: Use the large-scale feature](use-large-scale.md)
+
+## Related topics
+
+- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
cognitive-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-detection-model.md
+
+ Title: How to specify a detection model - Face
+
+description: This article will show you how to choose which face detection model to use with your Azure Face application.
+++++++ Last updated : 03/05/2021+
+ms.devlang: csharp
+++
+# Specify a face detection model
+
+This guide shows you how to specify a face detection model for the Azure Face service.
+
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers have the option to specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
+
+Read on to learn how to specify the face detection model in certain face operations. The Face service uses face detection whenever it converts an image of a face into some other form of data.
+
+If you aren't sure whether you should use the latest model, skip to the [Evaluate different models](#evaluate-different-models) section to evaluate the new model and compare results using your current data set.
+
+## Prerequisites
+
+You should be familiar with the concept of AI face detection. If you aren't, see the face detection conceptual guide or how-to guide:
+
+* [Face detection concepts](../concept-face-detection.md)
+* [Call the detect API](identity-detect-faces.md)
+
+## Detect faces with specified model
+
+Face detection finds the bounding-box locations of human faces and identifies their visual landmarks. It extracts the face's features and stores them for later use in [recognition](../concept-face-recognition.md) operations.
+
+When you use the [Face - Detect] API, you can assign the model version with the `detectionModel` parameter. The available values are:
+
+* `detection_01`
+* `detection_02`
+* `detection_03`
+
+A request URL for the [Face - Detect] REST API will look like this:
+
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
+
+If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+var faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, false, false, recognitionModel: "recognition_04", detectionModel: "detection_03");
+```
+
+## Add face to Person with specified model
+
+The Face service can extract face data from an image and associate it with a **Person** object through the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API. In this API call, you can specify the detection model in the same way as in [Face - Detect].
+
+See the following code example for the .NET client library.
+
+```csharp
+// Create a PersonGroup and add a person with face detected by "detection_03" model
+string personGroupId = "mypersongroupid";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+
+string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
+
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+await faceClient.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
+```
+
+This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+
+> [!NOTE]
+> You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Face - Identify] API, for example).
+
+## Add face to FaceList with specified model
+
+You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library.
+
+```csharp
+await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+await faceClient.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
+```
+
+This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+
+> [!NOTE]
+> You don't need to use the same detection model for all faces in a **FaceList** object, and you don't need to use the same detection model when detecting new faces to compare with a **FaceList** object.
+
+## Evaluate different models
+
+The different face detection models are optimized for different tasks. See the following table for an overview of the differences.
+
+|**detection_01** |**detection_02** |**detection_03**
+||||
+|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations.
+|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations.
+|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call.
+|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call.
+
+The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
+
+## Next steps
+
+In this article, you learned how to specify the detection model to use with different Face APIs. Next, follow a quickstart to get started with face detection and analysis.
+
+* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
+* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
+
+[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
+[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
+[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
+[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
+[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
+[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
+[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
+[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
+[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
+[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
+[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
+[FaceList - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-recognition-model.md
+
+ Title: How to specify a recognition model - Face
+
+description: This article will show you how to choose which recognition model to use with your Azure Face application.
++++++ Last updated : 03/05/2021+
+ms.devlang: csharp
+++
+# Specify a face recognition model
+
+This guide shows you how to specify a face recognition model for face detection, identification, and similarity search using the Azure Face service.
+
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face recognition model they'd like to use. They can choose the model that best fits their use case.
+
+The Azure Face service has four recognition models available. The models _recognition_01_ (published 2017), _recognition_02_ (published 2019), and _recognition_03_ (published 2020) continue to be supported to ensure backward compatibility for customers using FaceLists or **PersonGroup**s created with these models. A **FaceList** or **PersonGroup** will always use the recognition model it was created with, and new faces become associated with this model when they're added. This can't be changed after creation, and customers will need to use the corresponding recognition model with the corresponding **FaceList** or **PersonGroup**.
+
+You can move to later recognition models at your own convenience; however, you'll need to create new FaceLists and PersonGroups with the recognition model of your choice.
+
+The _recognition_04_ model (published 2021) is the most accurate model currently available. If you're a new customer, we recommend using this model. _Recognition_04_ will provide improved accuracy for both similarity comparisons and person-matching comparisons. _Recognition_04_ improves recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). Now you can build safe and seamless user experiences that use the latest _detection_03_ model to detect whether an enrolled user is wearing a face cover. Then you can use the latest _recognition_04_ model to recognize their identity. Each model operates independently of the others, and a confidence threshold set for one model isn't meant to be compared across the other recognition models.
+
+Read on to learn how to specify a selected model in different Face operations while avoiding model conflicts. If you're an advanced user and would like to determine whether you should switch to the latest model, skip to the [Evaluate different models](#evaluate-different-models) section. You can evaluate the new model and compare results using your current data set.
++
+## Prerequisites
+
+You should be familiar with the concepts of AI face detection and identification. If you aren't, see these guides first:
+
+* [Face detection concepts](../concept-face-detection.md)
+* [Face recognition concepts](../concept-face-recognition.md)
+* [Call the detect API](identity-detect-faces.md)
+
+## Detect faces with specified model
+
+Face detection identifies the visual landmarks of human faces and finds their bounding-box locations. It also extracts the face's features and stores them for use in identification. All of this information forms the representation of one face.
+
+The recognition model is used when the face features are extracted, so you can specify a model version when performing the Detect operation.
+
+When using the [Face - Detect] API, assign the model version with the `recognitionModel` parameter. The available values are:
+* recognition_01
+* recognition_02
+* recognition_03
+* recognition_04
++
+Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in the response. So, a request URL for the [Face - Detect] REST API will look like this:
+
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>`
+
+If you're using the client library, you can assign the value for `recognitionModel` by passing a string representing the version. If you leave it unassigned, a default model version of `recognition_01` will be used. See the following code example for the .NET client library.
+
+```csharp
+string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+var faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, true, true, recognitionModel: "recognition_01", returnRecognitionModel: true);
+```
+
+## Identify faces with specified model
+
+The Face service can extract face data from an image and associate it with a **Person** object (through the [Add face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API call, for example), and multiple **Person** objects can be stored together in a **PersonGroup**. Then, a new face can be compared against a **PersonGroup** (with the [Face - Identify] call), and the matching person within that group can be identified.
+
+A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([PersonGroup - Create] or [LargePersonGroup - Create]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [PersonGroup - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+
+See the following code example for the .NET client library.
+
+```csharp
+// Create an empty PersonGroup with "recognition_04" model
+string personGroupId = "mypersongroupid";
+await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
+```
+
+In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features.
+
+Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Face - Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
+
+There is no change in the [Face - Identify] API; you only need to specify the model version in detection.
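+For example, a sketch of a consistent detect-then-identify flow against the group created above (`imageUrl` is a placeholder):
+
+```csharp
+// Detect with the same model the PersonGroup was created with.
+IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
+    imageUrl, recognitionModel: "recognition_04");
+
+IList<Guid> faceIds = faces.Select(face => face.FaceId.Value).ToList();
+
+// Identify takes no model parameter; consistency is established at detection.
+IList<IdentifyResult> results = await faceClient.Face.IdentifyAsync(faceIds, personGroupId);
+```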
+
+## Find similar faces with specified model
+
+You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [FaceList - Create] API or [LargeFaceList - Create]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+
+See the following code example for the .NET client library.
+
+```csharp
+await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+```
+
+This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
+
+There is no change in the [Face - Find Similar] API; you only specify the model version in detection.
+
+## Verify faces with specified model
+
+The [Face - Verify] API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
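+For example, a sketch that verifies two faces detected with the same model (`imageUrl1` and `imageUrl2` are placeholders):
+
+```csharp
+var face1 = (await faceClient.Face.DetectWithUrlAsync(imageUrl1, recognitionModel: "recognition_04")).First();
+var face2 = (await faceClient.Face.DetectWithUrlAsync(imageUrl2, recognitionModel: "recognition_04")).First();
+
+VerifyResult result = await faceClient.Face.VerifyFaceToFaceAsync(face1.FaceId.Value, face2.FaceId.Value);
+Console.WriteLine($"Same person: {result.IsIdentical} (confidence {result.Confidence})");
+```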
+
+## Evaluate different models
+
+If you'd like to compare the performances of different recognition models on your own data, you'll need to:
+1. Create four PersonGroups using _recognition_01_, _recognition_02_, _recognition_03_, and _recognition_04_ respectively.
+1. Use your image data to detect faces and register them to **Person**s within these four **PersonGroup**s.
+1. Train your PersonGroups using the PersonGroup - Train API.
+1. Test with Face - Identify on all four **PersonGroup**s and compare the results.
++
+If you normally specify a confidence threshold (a value between zero and one that determines how confident the model must be to identify a face), you may need to use different thresholds for different models. A threshold for one model isn't meant to be shared with another and won't necessarily produce the same results.
+
+## Next steps
+
+In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started with face detection.
+
+* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp)
+* [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python)
+
+[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
+[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
+[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
+[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
+[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
+[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
+[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
+[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
+[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
+[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
+[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
+[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
cognitive-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-headpose.md
+
+ Title: Use the HeadPose attribute
+
+description: Learn how to use the HeadPose attribute to automatically rotate the face rectangle or detect head gestures in a video feed.
++++++ Last updated : 02/23/2021+
+ms.devlang: csharp
+++
+# Use the HeadPose attribute
+
+In this guide, you'll see how you can use the HeadPose attribute of a detected face to enable some key scenarios.
+
+## Rotate the face rectangle
+
+The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to be able to rotate the rectangle to crop.
+
+The [Cognitive Services Face WPF (Windows Presentation Foundation)](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) sample app uses the HeadPose attribute to rotate its detected face rectangles.
+
+### Explore the sample code
+
+You can programmatically rotate the face rectangle by using the HeadPose attribute. If you specify this attribute when detecting faces (see [Call the detect API](identity-detect-faces.md)), you will be able to query it later. The following method from the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app takes a list of **DetectedFace** objects and returns a list of **[Face](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/Face.cs)** objects. **Face** here is a custom class that stores face data, including the updated rectangle coordinates. New values are calculated for **top**, **left**, **width**, and **height**, and a new field **FaceAngle** specifies the rotation.
+
+```csharp
+/// <summary>
+/// Calculate the rendering face rectangle
+/// </summary>
+/// <param name="faces">Detected face from service</param>
+/// <param name="maxSize">Image rendering size</param>
+/// <param name="imageInfo">Image width and height</param>
+/// <returns>Face structure for rendering</returns>
+public static IEnumerable<Face> CalculateFaceRectangleForRendering(IList<DetectedFace> faces, int maxSize, Tuple<int, int> imageInfo)
+{
+ var imageWidth = imageInfo.Item1;
+ var imageHeight = imageInfo.Item2;
+ var ratio = (float)imageWidth / imageHeight;
+ int uiWidth = 0;
+ int uiHeight = 0;
+ if (ratio > 1.0)
+ {
+ uiWidth = maxSize;
+ uiHeight = (int)(maxSize / ratio);
+ }
+ else
+ {
+ uiHeight = maxSize;
+ uiWidth = (int)(ratio * uiHeight);
+ }
+
+ var uiXOffset = (maxSize - uiWidth) / 2;
+ var uiYOffset = (maxSize - uiHeight) / 2;
+ var scale = (float)uiWidth / imageWidth;
+
+ foreach (var face in faces)
+ {
+ var left = (int)(face.FaceRectangle.Left * scale + uiXOffset);
+ var top = (int)(face.FaceRectangle.Top * scale + uiYOffset);
+
+ // Angle of face rectangles, default value is 0 (not rotated).
+ double faceAngle = 0;
+
+ // If head pose attributes have been obtained, re-calculate the left & top (X & Y) positions.
+ if (face.FaceAttributes?.HeadPose != null)
+ {
+ // Head pose's roll value acts directly as the face angle.
+ faceAngle = face.FaceAttributes.HeadPose.Roll;
+ var angleToPi = Math.Abs((faceAngle / 180) * Math.PI);
+
+ // _____ | / \ |
+ // |____| => |/ /|
+ // | \ / |
+ // Re-calculate the face rectangle's left & top (X & Y) positions.
+ var newLeft = face.FaceRectangle.Left +
+ face.FaceRectangle.Width / 2 -
+ (face.FaceRectangle.Width * Math.Sin(angleToPi) + face.FaceRectangle.Height * Math.Cos(angleToPi)) / 2;
+
+ var newTop = face.FaceRectangle.Top +
+ face.FaceRectangle.Height / 2 -
+ (face.FaceRectangle.Height * Math.Sin(angleToPi) + face.FaceRectangle.Width * Math.Cos(angleToPi)) / 2;
+
+ left = (int)(newLeft * scale + uiXOffset);
+ top = (int)(newTop * scale + uiYOffset);
+ }
+
+ yield return new Face()
+ {
+ FaceId = face.FaceId?.ToString(),
+ Left = left,
+ Top = top,
+ OriginalLeft = (int)(face.FaceRectangle.Left * scale + uiXOffset),
+ OriginalTop = (int)(face.FaceRectangle.Top * scale + uiYOffset),
+ Height = (int)(face.FaceRectangle.Height * scale),
+ Width = (int)(face.FaceRectangle.Width * scale),
+ FaceAngle = faceAngle,
+ };
+ }
+}
+```
+
+### Display the updated rectangle
+
+From here, you can use the returned **Face** objects in your display. The following lines from [FaceDetectionPage.xaml](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/app-samples/Cognitive-Services-Face-WPF/Sample-WPF/Controls/FaceDetectionPage.xaml) show how the new rectangle is rendered from this data:
+
+```xaml
+ <DataTemplate>
+ <Rectangle Width="{Binding Width}" Height="{Binding Height}" Stroke="#FF26B8F4" StrokeThickness="1">
+ <Rectangle.LayoutTransform>
+ <RotateTransform Angle="{Binding FaceAngle}"/>
+ </Rectangle.LayoutTransform>
+ </Rectangle>
+</DataTemplate>
+```
+
+## Detect head gestures
+
+You can detect head gestures like nodding and head shaking by tracking HeadPose changes in real time. You can use this feature as a custom liveness detector.
+
+Liveness detection is the task of determining that a subject is a real person and not an image or video representation. A head gesture detector can serve as one way to help verify liveness, because a static image of a person can't respond to a prompt to nod or shake the head.
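+
+The following is a minimal sketch, not the sample app's implementation, of one way to detect a head-shake gesture by tracking the HeadPose yaw angle across video frames. It assumes faces were detected with the headPose attribute requested; the window size and swing threshold are illustrative values that you would tune for your frame rate.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
+
+// Illustrative values; tune them for your capture frame rate.
+const int WindowSize = 30;          // roughly 1-2 seconds of frames
+const double ShakeThreshold = 20.0; // degrees of left-right yaw swing
+var recentYaws = new Queue<double>();
+
+// Call this for each detected face from each video frame.
+void OnFrame(DetectedFace face)
+{
+    if (face.FaceAttributes?.HeadPose == null) return;
+
+    recentYaws.Enqueue(face.FaceAttributes.HeadPose.Yaw);
+    if (recentYaws.Count > WindowSize) recentYaws.Dequeue();
+
+    // A large yaw swing within the window suggests a head shake.
+    if (recentYaws.Max() - recentYaws.Min() > ShakeThreshold)
+    {
+        Console.WriteLine("Head shake detected.");
+        recentYaws.Clear();
+    }
+}
+```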
+
+> [!CAUTION]
+> To detect head gestures in real time, you'll need to call the Face API at a high rate (more than once per second). If you have a free-tier (F0) subscription, this call rate won't be possible. If you have a paid-tier subscription, make sure you've calculated the costs of making rapid API calls for head gesture detection.
+
+See the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/FaceAPIHeadPoseSample) on GitHub for a working example of head gesture detection.
+
+## Next steps
+
+See the [Cognitive Services Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
cognitive-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-large-scale.md
+
+ Title: "Example: Use the Large-Scale feature - Face"
+
+description: This guide is an article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects.
+++++++ Last updated : 05/01/2019+
+ms.devlang: csharp
+++
+# Example: Use the large-scale feature
+
+This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
+
+LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
+
+The samples are written in C# by using the Azure Cognitive Services Face client library.
+
+> [!NOTE]
+> To keep Face search performant for Identification and FindSimilar at large scale, a Train operation preprocesses the LargeFaceList and LargePersonGroup. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, you can still perform Identification and FindSimilar if a training operation succeeded before. The drawback is that newly added persons and faces don't appear in the results until a new training operation completes.
+
+## Step 1: Initialize the client object
+
+When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. For example:
+
+```csharp
+string subscriptionKey = "<Key>";
+// Use your own subscription endpoint corresponding to the key.
+string subscriptionEndpoint = "https://westus.api.cognitive.microsoft.com";
+IFaceClient faceClient = new FaceClient(
+    new ApiKeyServiceClientCredentials(subscriptionKey),
+    new System.Net.Http.DelegatingHandler[] { });
+faceClient.Endpoint = subscriptionEndpoint;
+```
+
+To get the key with its corresponding endpoint, go to the Azure Marketplace from the Azure portal.
+For more information, see [Subscriptions](https://azure.microsoft.com/services/cognitive-services/directory/vision/).
+
+## Step 2: Code migration
+
+This section focuses on how to migrate PersonGroup or FaceList implementation to LargePersonGroup or LargeFaceList. Although LargePersonGroup or LargeFaceList differs from PersonGroup or FaceList in design and internal implementation, the API interfaces are similar for backward compatibility.
+
+Data migration isn't supported. You re-create the LargePersonGroup or LargeFaceList instead.
+
+### Migrate a PersonGroup to a LargePersonGroup
+
+Migration from a PersonGroup to a LargePersonGroup is simple. They share exactly the same group-level operations.
+
+For PersonGroup- or person-related implementation, it's necessary to change only the API paths or SDK class/module to LargePersonGroup and LargePersonGroup Person.
+
+Add all of the faces and persons from the PersonGroup to the new LargePersonGroup. For more information, see [Add faces](add-faces.md).
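+
+A minimal sketch of this re-creation step, assuming the .NET Face client library, might look like the following. It assumes the original face images are still available in your own store, because persisted face data can't be exported from the old PersonGroup; `personGroupId`, `largePersonGroupId`, and the `GetImagePathsFor()` helper are placeholders.
+
+```csharp
+// List the persons in the old PersonGroup and re-create each one in the
+// new LargePersonGroup. GetImagePathsFor() is a hypothetical helper that
+// returns your stored source images for a person.
+var persons = await faceClient.PersonGroupPerson.ListAsync(personGroupId);
+foreach (var person in persons)
+{
+    var newPerson = await faceClient.LargePersonGroupPerson.CreateAsync(
+        largePersonGroupId, person.Name, person.UserData);
+
+    foreach (var imagePath in GetImagePathsFor(person))
+    {
+        using (Stream stream = File.OpenRead(imagePath))
+        {
+            await faceClient.LargePersonGroupPerson.AddFaceFromStreamAsync(
+                largePersonGroupId, newPerson.PersonId, stream);
+        }
+    }
+}
+```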
+
+### Migrate a FaceList to a LargeFaceList
+
+| FaceList APIs | LargeFaceList APIs |
+|::|::|
+| Create | Create |
+| Delete | Delete |
+| Get | Get |
+| List | List |
+| Update | Update |
+| - | Train |
+| - | Get Training Status |
+
+The preceding table compares the list-level operations of FaceList and LargeFaceList. As shown, LargeFaceList adds two operations, Train and Get Training Status, that FaceList doesn't have. Training the LargeFaceList is a precondition of the
+[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for FaceList. The following snippet is a helper function that waits for the training of a LargeFaceList:
+
+```csharp
+/// <summary>
+/// Helper function to train LargeFaceList and wait for finish.
+/// </summary>
+/// <remarks>
+/// The time interval can be adjusted considering the following factors:
+/// - The training time which depends on the capacity of the LargeFaceList.
+/// - The acceptable latency for getting the training status.
+/// - The call frequency and cost.
+///
+/// Estimated training time for LargeFaceList in different scale:
+/// - 1,000 faces cost about 1 to 2 seconds.
+/// - 10,000 faces cost about 5 to 10 seconds.
+/// - 100,000 faces cost about 1 to 2 minutes.
+/// - 1,000,000 faces cost about 10 to 30 minutes.
+/// </remarks>
+/// <param name="largeFaceListId">The Id of the LargeFaceList for training.</param>
+/// <param name="timeIntervalInMilliseconds">The time interval for getting training status in milliseconds.</param>
+/// <returns>A task of waiting for LargeFaceList training finish.</returns>
+private static async Task TrainLargeFaceList(
+ string largeFaceListId,
+ int timeIntervalInMilliseconds = 1000)
+{
+    // Trigger a train call.
+    await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
+
+    // Wait for the training to finish.
+    while (true)
+    {
+        await Task.Delay(timeIntervalInMilliseconds);
+        var status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);
+
+        if (status.Status == TrainingStatusType.Running)
+        {
+            continue;
+        }
+        else if (status.Status == TrainingStatusType.Succeeded)
+        {
+            break;
+        }
+        else
+        {
+            throw new Exception("The train operation failed!");
+ }
+ }
+}
+```
+
+Previously, a typical use of FaceList with added faces and FindSimilar looked like the following:
+
+```csharp
+// Create a FaceList.
+const string FaceListId = "myfacelistid_001";
+const string FaceListName = "MyFaceListDisplayName";
+const string ImageDir = @"/path/to/FaceList/images";
+faceClient.FaceList.CreateAsync(FaceListId, FaceListName).Wait();
+
+// Add Faces to the FaceList.
+Parallel.ForEach(
+ Directory.GetFiles(ImageDir, "*.jpg"),
+ async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.FaceList.AddFaceFromStreamAsync(FaceListId, stream);
+ }
+ });
+
+// Perform FindSimilar.
+const string QueryImagePath = @"/path/to/query/image";
+var results = new List<IList<SimilarFace>>();
+using (Stream stream = File.OpenRead(QueryImagePath))
+{
+    var faces = await faceClient.Face.DetectWithStreamAsync(stream);
+    foreach (var face in faces)
+    {
+        results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId.Value, faceListId: FaceListId, maxNumOfCandidatesReturned: 20));
+ }
+}
+```
+
+When you migrate it to a LargeFaceList, it becomes the following:
+
+```csharp
+// Create a LargeFaceList.
+const string LargeFaceListId = "mylargefacelistid_001";
+const string LargeFaceListName = "MyLargeFaceListDisplayName";
+const string ImageDir = @"/path/to/FaceList/images";
+faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName).Wait();
+
+// Add Faces to the LargeFaceList.
+Parallel.ForEach(
+ Directory.GetFiles(ImageDir, "*.jpg"),
+ async imagePath =>
+ {
+ using (Stream stream = File.OpenRead(imagePath))
+ {
+ await faceClient.LargeFaceList.AddFaceFromStreamAsync(LargeFaceListId, stream);
+ }
+ });
+
+// Train() is a new operation for LargeFaceList.
+// You must call it before FindSimilarAsync() so that the newly added faces are searchable.
+await TrainLargeFaceList(LargeFaceListId);
+
+// Perform FindSimilar.
+const string QueryImagePath = @"/path/to/query/image";
+var results = new List<IList<SimilarFace>>();
+using (Stream stream = File.OpenRead(QueryImagePath))
+{
+    var faces = await faceClient.Face.DetectWithStreamAsync(stream);
+    foreach (var face in faces)
+    {
+        results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId.Value, largeFaceListId: LargeFaceListId));
+ }
+}
+```
+
+As previously shown, the data management and the FindSimilar part are almost the same. The only exception is that a fresh preprocessing Train operation must complete in the LargeFaceList before FindSimilar works.
+
+## Step 3: Train suggestions
+
+Although the Train operation speeds up [FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)
+and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), training itself takes time, especially at large scale. The estimated training time at different scales is listed in the following table.
+
+| Scale for faces or persons | Estimated training time |
+|::|::|
+| 1,000 | 1-2 sec |
+| 10,000 | 5-10 sec |
+| 100,000 | 1-2 min |
+| 1,000,000 | 10-30 min |
+
+To better utilize the large-scale feature, we recommend the following strategies.
+
+### Step 3.1: Customize time interval
+
+As shown in `TrainLargeFaceList()`, there's a time interval in milliseconds between successive polls of the training status. For a LargeFaceList with more faces, a larger interval reduces the call count and cost. Customize the time interval according to the expected capacity of the LargeFaceList.
+
+The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
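+
+For example, a call to the `TrainLargeFaceList` helper above with a sparser, 1-minute polling interval might look like this (the list ID is a placeholder):
+
+```csharp
+// Poll the training status once per minute for a large collection.
+await TrainLargeFaceList("mylargefacelistid_001", timeIntervalInMilliseconds: 60 * 1000);
+```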
+
+### Step 3.2: Small-scale buffer
+
+Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
+
+To mitigate this problem, use an extra small-scale LargePersonGroup or LargeFaceList as a buffer only for the newly added entries. Because of its smaller size, this buffer trains quickly, so searches against it become available almost immediately. Use this buffer in combination with the master LargePersonGroup or LargeFaceList by running the master training on a sparser schedule; for example, in the middle of the night or once per day.
+
+An example workflow:
+
+1. Create a master LargePersonGroup or LargeFaceList, which is the master collection. Create a buffer LargePersonGroup or LargeFaceList, which is the buffer collection. The buffer collection is only for newly added persons or faces.
+1. Add new persons or faces to both the master collection and the buffer collection.
+1. Only train the buffer collection with a short time interval to ensure that the newly added entries take effect.
+1. Call Identification or FindSimilar against both the master collection and the buffer collection. Merge the results, as shown in the sketch after this list.
+1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection.
+1. Delete the old buffer collection after the Train operation finishes on the master collection.
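+
+The following is a minimal sketch of step 4, assuming the .NET Face client library; `masterGroupId`, `bufferGroupId`, and `faceIds` are placeholders.
+
+```csharp
+// Requires: using System.Linq;
+// Identify against both the master and the buffer collections.
+var masterResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: masterGroupId);
+var bufferResults = await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: bufferGroupId);
+
+// Merge the results per face, keeping the highest-confidence candidates first.
+var merged = masterResults.Zip(bufferResults, (m, b) => new
+{
+    m.FaceId,
+    Candidates = m.Candidates.Concat(b.Candidates)
+        .OrderByDescending(c => c.Confidence)
+        .ToList()
+});
+```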
+
+### Step 3.3: Standalone training
+
+If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
+
+Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a LargePersonGroup by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is:
+
+```csharp
+private static void Main()
+{
+ // Create a LargePersonGroup.
+ const string LargePersonGroupId = "mylargepersongroupid_001";
+ const string LargePersonGroupName = "MyLargePersonGroupDisplayName";
+ faceClient.LargePersonGroup.CreateAsync(LargePersonGroupId, LargePersonGroupName).Wait();
+
+ // Set up standalone training at regular intervals.
+ const int TimeIntervalForStatus = 1000 * 60; // 1-minute interval for getting training status.
+ const double TimeIntervalForTrain = 1000 * 60 * 60; // 1-hour interval for training.
+ var trainTimer = new Timer(TimeIntervalForTrain);
+ trainTimer.Elapsed += (sender, args) => TrainTimerOnElapsed(LargePersonGroupId, TimeIntervalForStatus);
+ trainTimer.AutoReset = true;
+ trainTimer.Enabled = true;
+
+ // Other operations like creating persons, adding faces, and identification, except for Train.
+ // ...
+}
+
+private static void TrainTimerOnElapsed(string largePersonGroupId, int timeIntervalInMilliseconds)
+{
+ TrainLargePersonGroup(largePersonGroupId, timeIntervalInMilliseconds).Wait();
+}
+```
+
+For more information about data management and identification-related implementations, see [Add faces](add-faces.md).
+
+## Summary
+
+In this guide, you learned how to migrate the existing PersonGroup or FaceList code, not data, to the LargePersonGroup or LargeFaceList:
+
+- LargePersonGroup and LargeFaceList work similarly to PersonGroup and FaceList, except that LargeFaceList requires the Train operation, which FaceList doesn't.
+- Choose a suitable Train strategy for dynamic data updates in large-scale data sets.
+
+## Next steps
+
+Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup.
+
+- [Add faces](add-faces.md)
+- [Face client library quickstart](../quickstarts-sdk/identity-client-library.md)
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-persondirectory.md
+
+ Title: "Example: Use the PersonDirectory structure - Face"
+
+description: Learn how to use the PersonDirectory data structure to store face and person data at greater capacity and with other new features.
+++++++ Last updated : 04/22/2021+
+ms.devlang: csharp
+++
+# Use the PersonDirectory structure
+
+To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory.
+
+Currently, the Face API offers the **LargePersonGroup** structure, which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+
+Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically.
+
+## Prerequisites
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below.
+ * You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Add Persons to the PersonDirectory
+**Persons** are the base enrollment units in the **PersonDirectory**. Once you add a **Person** to the directory, you can add up to 248 face images to that **Person**, per recognition model. Then you can identify faces against them using varying scopes.
+
+### Create the Person
+To create a **Person**, you need to call the **CreatePerson** API and provide a name or userData property value.
+
+```csharp
+using Newtonsoft.Json;
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+using System.Threading.Tasks;
+
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var addPersonUri = "https://{endpoint}/face/v1.0-preview/persons";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example Person");
+body.Add("userData", "User defined data");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(addPersonUri, content);
+}
+```
+
+The CreatePerson call will return a generated ID for the **Person** and an operation location. The **Person** data will be processed asynchronously, so you use the operation location to fetch the results.
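+
+As a minimal sketch, you might read the generated ID from the response body like this (the `personId` property name follows the response shape of the call above):
+
+```csharp
+// Read the generated Person ID from the CreatePerson response body.
+var created = JsonConvert.DeserializeAnonymousType(
+    await response.Content.ReadAsStringAsync(), new { personId = "" });
+string personId = created.personId;
+```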
+
+### Wait for asynchronous operation completion
+You'll need to query the async operation status using the returned operation location string to check the progress.
+
+First, you should define a data model like the following to handle the status response.
+
+```csharp
+using System.Runtime.Serialization;
+
+[DataContract]
+public class AsyncStatus
+{
+ [DataMember(Name = "status")]
+ public string Status { get; set; }
+
+ [DataMember(Name = "createdTime")]
+ public DateTime CreatedTime { get; set; }
+
+ [DataMember(Name = "lastActionTime")]
+ public DateTime? LastActionTime { get; set; }
+
+ [DataMember(Name = "finishedTime", EmitDefaultValue = false)]
+ public DateTime? FinishedTime { get; set; }
+
+ [DataMember(Name = "resourceLocation", EmitDefaultValue = false)]
+ public string ResourceLocation { get; set; }
+
+ [DataMember(Name = "message", EmitDefaultValue = false)]
+ public string Message { get; set; }
+}
+```
+
+Using the HttpResponseMessage from above, you can then poll the URL and wait for results.
+
+```csharp
+string operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+
+// Requires: using System.Diagnostics;
+Stopwatch s = Stopwatch.StartNew();
+string status = "notstarted";
+do
+{
+    // Pause between polls while the operation is still pending.
+    if (status == "running" || status == "notstarted")
+ {
+ await Task.Delay(500);
+ }
+
+ var operationResponseMessage = await client.GetAsync(operationLocation);
+
+ var asyncOperationObj = JsonConvert.DeserializeObject<AsyncStatus>(await operationResponseMessage.Content.ReadAsStringAsync());
+ status = asyncOperationObj.Status;
+
+} while ((status == "running" || status == "notstarted") && s.Elapsed < TimeSpan.FromSeconds(30));
+```
++
+Once the status returns as "succeeded", the **Person** object is considered added to the directory.
+
+> [!NOTE]
+> The asynchronous operation from the Create **Person** call doesn't have to show a "succeeded" status before faces can be added to the **Person**, but it does need to complete before the **Person** can be added to a **DynamicPersonGroup** (see the Create and update a **DynamicPersonGroup** section below) or compared in an Identify call. Verify calls work immediately after faces are successfully added to the **Person**.
++
+### Add faces to Persons
+
+Once you have the **Person** ID from the Create Person call, you can add up to 248 face images to a **Person** per recognition model. Specify the recognition model (and optionally the detection model) to use in the call, as data under each recognition model will be processed separately inside the **PersonDirectory**.
+
+The currently supported recognition models are:
+* `Recognition_02`
+* `Recognition_03`
+* `Recognition_04`
+
+Additionally, if the image contains multiple faces, you'll need to specify the rectangle bounding box for the face that is the intended target. The following code adds faces to a **Person** object.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+// Optional query strings for more fine grained face control
+var queryString = "userData={userDefinedData}&targetFace={left,top,width,height}&detectionModel={detectionModel}";
+var uri = "https://{endpoint}/face/v1.0-preview/persons/{personId}/recognitionModels/{recognitionModel}/persistedFaces?" + queryString;
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("url", "{image url}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+After the Add Faces call, the face data will be processed asynchronously, and you'll need to wait for the success of the operation in the same manner as before.
+
+When the operation for the face addition finishes, the data will be ready for use in Identify calls.
+
+## Create and update a **DynamicPersonGroup**
+
+**DynamicPersonGroups** are collections of references to **Person** objects within a **PersonDirectory**; they're used to create subsets of the directory. A common use is when you want fewer false positives and increased accuracy in an Identify operation by limiting the scope to just the **Person** objects you expect to match. Practical use cases include managing access to a specific building within a larger campus or organization. For example, the organization directory may contain 5 million individuals, but you only need to search a specific 800 people for a particular building, so you create a **DynamicPersonGroup** that contains those individuals.
+
+If you've used a **PersonGroup** before, take note of two major differences:
+* Each **Person** inside a **DynamicPersonGroup** is a reference to the actual **Person** in the **PersonDirectory**, meaning that it's not necessary to recreate a **Person** in each group.
+* As mentioned in previous sections, there is no need to make Train calls, as the face data is processed at the Directory level automatically.
+
+### Create the group
+
+To create a **DynamicPersonGroup**, you need to provide a group ID with alphanumeric or dash characters. This ID will function as the unique identifier for all usage purposes of the group.
+
+There are two ways to initialize a group collection. You can create an empty group initially, and populate it later:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example DynamicPersonGroup");
+body.Add("userData", "User defined data");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PutAsync(uri, content);
+}
+```
+
+This process is immediate and there is no need to wait for any asynchronous operations to succeed.
+
+Alternatively, you can create it with a set of **Person** IDs so that it contains those references from the beginning, by providing the set in the _addPersonIds_ argument:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example DynamicPersonGroup");
+body.Add("userData", "User defined data");
+body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PutAsync(uri, content);
+
+ // Async operation location to query the completion status from
+    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+}
+```
+
+> [!NOTE]
+> As soon as the call returns, the created **DynamicPersonGroup** will be ready to use in an Identify call, with any **Person** references provided in the process. The completion status of the returned operation ID, on the other hand, indicates the update status of the person-to-group relationship.
+
+### Update the DynamicPersonGroup
+
+After the initial creation, you can add and remove **Person** references from the **DynamicPersonGroup** with the Update Dynamic Person Group API. To add **Person** objects to the group, list the **Person** IDs in the _addPersonIds_ argument. To remove **Person** objects, list them in the _removePersonIds_ argument. Both adding and removing can be performed in a single call:
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/dynamicpersongroups/{dynamicPersonGroupId}";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("name", "Example Dynamic Person Group updated");
+body.Add("userData", "User defined data updated");
+body.Add("addPersonIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("removePersonIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PatchAsync(uri, content);
+
+ // Async operation location to query the completion status from
+    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();
+}
+```
+
+Once the call returns, the updates to the collection will be reflected when the group is queried. As with the creation API, the returned operation indicates the update status of the person-to-group relationships for any **Person** objects involved in the update. You don't need to wait for the completion of the operation before making further Update calls to the group.
+
+## Identify faces in a PersonDirectory
+
+The most common way to use face data in a **PersonDirectory** is to compare the enrolled **Person** objects against a given face and identify the most likely candidate it belongs to. Multiple faces can be provided in the request, and each will receive its own set of comparison results in the response.
+
+In **PersonDirectory**, there are three types of scopes each face can be identified against:
+
+### Scenario 1: Identify against a DynamicPersonGroup
+
+Specifying the _dynamicPersonGroupId_ property in the request compares the face against every **Person** referenced in the group. Only a single **DynamicPersonGroup** can be identified against in a call.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+// Optional query strings for more fine grained face control
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("dynamicPersonGroupId", "{dynamicPersonGroupIdToIdentifyIn}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+### Scenario 2: Identify against a specific list of persons
+
+You can also specify a list of **Person** IDs in the _personIds_ property to compare the face against each of them.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"{guid1}", "{guid2}", …});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+### Scenario 3: Identify against the entire **PersonDirectory**
+
+Providing a single asterisk in the _personIds_ property in the request compares the face against every single **Person** enrolled in the **PersonDirectory**.
+
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/identify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceIds", new List<string>{"{guid1}", "{guid2}", …});
+body.Add("personIds", new List<string>{"*"});
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+For all three scenarios, the identification only compares the incoming face against faces whose AddPersonFace call returned a "succeeded" response.
+
+## Verify faces against persons in the **PersonDirectory**
+
+With a face ID returned from a detection call, you can verify if the face belongs to a specific **Person** enrolled inside the **PersonDirectory**. Specify the **Person** using the _personId_ property.
+
+```csharp
+var client = new HttpClient();
+
+// Request headers
+client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
+
+var uri = "https://{endpoint}/face/v1.0-preview/verify";
+
+HttpResponseMessage response;
+
+// Request body
+var body = new Dictionary<string, object>();
+body.Add("faceId", "{guid1}");
+body.Add("personId", "{guid1}");
+byte[] byteData = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));
+
+using (var content = new ByteArrayContent(byteData))
+{
+ content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ response = await client.PostAsync(uri, content);
+}
+```
+
+The response will contain a Boolean value indicating whether the service considers the new face to belong to the same **Person**, and a confidence score for the prediction.
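+
+A minimal sketch of reading that response, assuming the `isIdentical` and `confidence` property names from the response body:
+
+```csharp
+// Read the Boolean match result and the confidence score.
+var verifyResult = JsonConvert.DeserializeAnonymousType(
+    await response.Content.ReadAsStringAsync(),
+    new { isIdentical = false, confidence = 0.0 });
+Console.WriteLine($"Same person: {verifyResult.isIdentical} (confidence: {verifyResult.confidence})");
+```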
+
+## Next steps
+
+In this guide, you learned how to use the **PersonDirectory** structure to store face and person data for your Face app. Next, learn the best practices for adding your users' face data.
+
+* [Best practices for adding users](../enrollment-overview.md)
cognitive-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-api-reference.md
+
+ Title: API Reference - Face
+
+description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs.
+++++++ Last updated : 02/17/2021+++
+# Face API reference list
+
+Azure Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories:
+
+- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).
+- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
cognitive-services Identity Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-encrypt-data-at-rest.md
+
+ Title: Face service encryption of data at rest
+
+description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Face, and how to enable and manage CMK.
++++++ Last updated : 08/28/2020++
+#Customer intent: As a user of the Face service, I want to learn how encryption at rest works.
++
+# Face service encryption of data at rest
+
+The Face service automatically encrypts your data when persisted to the cloud. The Face service encryption protects your data and helps you to meet your organizational security and compliance commitments.
++
+> [!IMPORTANT]
+> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Face Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Face service, you will need to create a new Face resource and select E0 as the Pricing Tier. Once your Face resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
++
+## Next steps
+
+* For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
+* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [Cognitive Services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
You can use Computer Vision Spatial Analysis to ingest streaming video from came
<!--This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
* The [conceptual articles](tbd) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.-->
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/language-support.md
The Computer Vision [Read API](./overview-ocr.md#read-api) supports many languag
> > `Read` OCR's deep-learning-based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-See [How to specify the `Read` model](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
+See [How to specify the `Read` model](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the new languages.
### Handwritten text
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Japanese |`ja`|✅ | ✅| ✅|||||| |✅|✅| |Kazakh |`kk`| | ✅| |||||| ||| |Korean |`ko`| | ✅| |||||| |||
-|Lithuanian |`It`| | ✅| |||||| |||
-|Latvian |`Iv`| | ✅| |||||| |||
+|Lithuanian |`lt`| | ✅| |||||| |||
+|Latvian |`lv`| | ✅| |||||| |||
|Macedonian |`mk`| | ✅| |||||| ||| |Malay Malaysia |`ms`| | ✅| |||||| ||| |Norwegian (Bokmal) |`nb`| | ✅| |||||| |||
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Polish |`pl`| | ✅| |||||| ||| |Dari |`prs`| | ✅| |||||| ||| | Portuguese-Brazil|`pt-BR`| | ✅| |||||| |||
-| Portuguese-Portugal |`pt`/`pt-PT`|✅ | ✅| ✅|||||| |✅|✅|
+| Portuguese-Portugal |`pt`|✅ | ✅| ✅|||||| |✅|✅|
+| Portuguese-Portugal |`pt-PT`| | ✅| |||||| |||
|Romanian |`ro`| | ✅| |||||| ||| |Russian |`ru`| | ✅| |||||| ||| |Slovak |`sk`| | ✅| |||||| |||
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Turkish |`tr`| | ✅| |||||| ||| |Ukrainian |`uk`| | ✅| |||||| ||| |Vietnamese |`vi`| | ✅| |||||| |||
-|Chinese Simplified |`zh`/ `zh-Hans`|✅ | ✅| ✅|||||| |✅|✅|
+|Chinese Simplified |`zh`|✅ | ✅| ✅|||||| |✅|✅|
+|Chinese Simplified |`zh-Hans`| | ✅| |||||| |||
|Chinese Traditional |`zh-Hant`| | ✅| |||||| |||
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
+
+ Title: What is the Azure Face service?
+
+description: The Azure Face service provides AI algorithms that you use to detect, recognize, and analyze human faces in images.
++++++ Last updated : 02/28/2022++
+keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
+#Customer intent: As the developer of an app that deals with images of humans, I want to learn what the Face service does so I can determine if I should use its features.
++
+# What is the Azure Face service?
+
+> [!WARNING]
+> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it.
+
+The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
+
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
+
+## Example use cases
+
+**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+
+**Touchless access control**: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
+
+**Face redaction**: Redact or blur detected faces of people recorded in a video to protect their privacy.
++
+## Face detection and analysis
+
+Face detection is required as a first step in all the other scenarios. The Detect API detects human faces in an image and returns the rectangle coordinates of their locations. It also returns a unique ID that represents the stored face data. This is used in later operations to identify or verify faces.
+
+Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users to take off their sunglasses if they're wearing sunglasses.
+
+> [!NOTE]
+> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to use other Face operations like Identify, Verify, Find Similar, or Face grouping, you should use this service instead.
+
+For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
++
+## Identity verification
+
+Modern enterprises and apps can use the Face identification and Face verification operations to verify that a user is who they claim to be.
+
+### Identification
+
+Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building or airport access to a certain group of people or verifying the user of a device.
+
+The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
+
+![A grid with three columns for different people, each with three rows of face images](./media/person.group.clare.jpg)
+
+After you create and train a group, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+
+### Verification
+
+The verification operation answers the question, "Do these two faces belong to the same person?".
+
+Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify that they're the same individual. Verification can be used for Identity Verification, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID.
+
+For more information about identity verification, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
++
+## Find similar faces
+
+The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+
+The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
+
+The following example shows the target face:
+
+![A woman smiling](./media/FaceFindSimilar.QueryFace.jpg)
+
+And these images are the candidate faces:
+
+![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg)
+
+To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) reference documentation.
+
+## Group faces
+
+The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
+
+All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
+
+## Data privacy and security
+
+As with all of the Cognitive Services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. For more information, see the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center.
+
+## Next steps
+
+Follow a quickstart to code the basic components of a face recognition app in the language of your choice.
+
+- [Client library quickstart](quickstarts-sdk/identity-client-library.md).
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
You can use Image Analysis through a client library SDK or by calling the [REST
This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
Generate a description of an entire image in human-readable language, using comp
### Detect faces
-Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](../face/index.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
+Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face.<br/>Computer Vision provides a subset of the [Face](./index-identity.yml) service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection. [Detect faces](concept-detecting-faces.md)
### Detect image types
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
Optical character recognition (OCR) allows you to extract printed or handwritten
This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./Vision-API-How-to-Topics/call-read-api.md) contain instructions for using the service in more specific or customized ways.
-<!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.
+* The [how-to guides](./how-to/call-read-api.md) contain instructions for using the service in more specific or customized ways.
+<!--* The [conceptual articles](how-to/call-read-api.md) provide in-depth explanations of the service's functionality and features.
* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. --> ## Read API
OCR for print text includes support for English, French, German, Italian, Portug
OCR for handwritten text includes support for English, Chinese Simplified, French, German, Italian, Japanese, Korean, Portuguese, Spanish languages.
-See [How to specify the model version](./Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to use the preview languages and features. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
+See [How to specify the model version](./how-to/call-read-api.md#determine-how-to-process-the-data-optional) to use the preview languages and features. Refer to the full list of [OCR-supported languages](./language-support.md#optical-character-recognition-ocr).
## Key features
The Read API includes the following features.
* Handwriting classification for text lines (Latin only) * Available as Distroless Docker container for on-premises deployment
-Learn [how to use the OCR features](./vision-api-how-to-topics/call-read-api.md).
+Learn [how to use the OCR features](./how-to/call-read-api.md).
## Use the cloud API or deploy on-premises

The Read 3.x cloud APIs are the preferred option for most customers because of ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Azure's Computer Vision service gives you access to advanced algorithms that pro
|||
| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
|[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library.md) to get started.|
+| [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. |
| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|

## Computer Vision for digital asset management
cognitive-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md
+
+ Title: 'Quickstart: Use the Face client library'
+
+description: The Face API offers client libraries that make it easy to detect, find similar, identify, verify, and more.
+++
+zone_pivot_groups: programming-languages-set-face
+++ Last updated : 09/27/2021+
+ms.devlang: csharp, golang, javascript, python
+
+keywords: face search by image, facial recognition search, facial recognition, face recognition app
++
+# Quickstart: Use the Face client library
++++++++++++
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Title: What's new in Computer Vision?
-description: This article contains news about Computer Vision.
+description: Stay up to date on recent releases and updates to Azure Computer Vision.
Previously updated : 05/02/2022 Last updated : 05/25/2022

# What's new in Computer Vision
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
## May 2022
Computer Vision's [OCR (Read) API](overview-ocr.md) latest model with [164 suppo
* Performance and latency improvements.
* Available as [cloud service](overview-ocr.md#read-api) and [Docker container](computer-vision-how-to-install-containers.md).
-See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the GA model.
> [!div class="nextstepaction"] > [Get Started with the Read API](./quickstarts-sdk/client-library.md)
Computer Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages
* Enhancements including better support for extracting handwritten dates, amounts, names, and single character boxes.
* General performance and AI quality improvements
-See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
> [!div class="nextstepaction"] > [Get Started with the Read API](./quickstarts-sdk/client-library.md)
+### New Quality Attribute in Detection_01 and Detection_03
+* To help system builders and their customers capture high-quality images, which are necessary for high-quality outputs from Face API, we're introducing a new quality attribute, **QualityForRecognition**, to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combination of detection models `detection_01` or `detection_03` and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment, and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concept-face-detection.md), and see how to use it with the [quickstart](./quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio).
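For illustration, here's a minimal sketch of such a detect call in Python with `requests`. The endpoint, key, and image URL are placeholders rather than values from this article; treat the snippet as a sketch, not the documented quickstart code.

```python
# Sketch: request the qualityForRecognition attribute from Face - Detect.
# The endpoint, key, and image URL below are placeholders.
import requests

resp = requests.post(
    "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",      # quality needs detection_01 or detection_03
        "recognitionModel": "recognition_04",  # and recognition_03 or recognition_04
        "returnFaceAttributes": "qualityForRecognition",
    },
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={"url": "https://example.com/photo.jpg"},
)
resp.raise_for_status()
for face in resp.json():
    # Informal rating: "low", "medium", or "high"
    print(face["faceId"], face["faceAttributes"]["qualityForRecognition"])
```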
+ ## September 2021
Computer Vision's [OCR (Read) API](overview-ocr.md) expands [supported languages
* Enhancements for processing digital PDFs and Machine Readable Zone (MRZ) text in identity documents.
* General performance and AI quality improvements
-See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
+See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-data-optional) to learn how to use the new preview features.
> [!div class="nextstepaction"] > [Get Started with the Read API](./quickstarts-sdk/client-library.md)
See the [OCR how-to guide](Vision-API-How-to-Topics/call-read-api.md#determine-h
The [latest version (v3.2)](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
+## July 2021
+
+### New HeadPose and Landmarks improvements for Detection_03
+
+* The Detection_03 model has been updated to support facial landmarks.
+* The landmarks feature in Detection_03 is much more precise, especially in the eyeball landmarks which are crucial for gaze tracking.
+
## May 2021

### Spatial Analysis container update
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
The Computer Vision API v3.2 is now generally available with the following updates:
-* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./vision-api-how-to-topics/howtocallvisionapi.md) and [overview](./overview-image-analysis.md) to learn more.
-* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./vision-api-how-to-topics/howtocallvisionapi.md) and [overview](./overview-image-analysis.md) to learn more.
+* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
+* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
* [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.

> [!div class="nextstepaction"]
> [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+### PersonDirectory data structure
+
+* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
+* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md).
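As a rough, hedged sketch of enrollment against **PersonDirectory** in Python with `requests`: the `v1.0-preview` routes, body fields, endpoint, and key below are assumptions about the preview REST surface, so verify them against the current reference before use.

```python
# Sketch only: enroll an identity in PersonDirectory. The preview routes and
# field names here are assumptions -- confirm against the API reference.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-key>"}             # placeholder

# 1. Create a Person record; the service generates its unique ID.
person = requests.post(
    f"{ENDPOINT}/face/v1.0-preview/persons",
    headers=HEADERS,
    json={"name": "Ana Bowman", "userData": "employee-4521"},
).json()

# 2. Add a face to that Person. Unlike LargePersonGroup, no Train call is
#    needed afterward: the directory updates automatically.
requests.post(
    f"{ENDPOINT}/face/v1.0-preview/persons/{person['personId']}"
    "/recognitionModels/recognition_04/persistedFaces",
    params={"detectionModel": "detection_03"},
    headers=HEADERS,
    json={"url": "https://example.com/ana.jpg"},  # placeholder image URL
)
```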
+
## March 2021

### Computer Vision 3.2 Public Preview update
The Computer Vision Read API v3.2 public preview, available as cloud service and
* Extract text only for selected pages of a multi-page document.
* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
-See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
> [!div class="nextstepaction"] > [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) +
+### New Face API detection model
+* The new Detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+### New detectable Face attributes
+* The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+### New Face API Recognition Model
+* The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). Note that we recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See [Specify a face recognition model](./how-to/specify-recognition-model.md) for more details.
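A hedged sketch of how the Detection 03 and Recognition 04 models just described might be combined in a single detect call follows. The endpoint, key, and image URL are placeholders, and `mask` as the attribute's wire name (with its response fields) is an assumption to verify against the reference.

```python
# Sketch: combine the mask-aware Detection 03 model with Recognition 04.
# "mask" as the returnFaceAttributes value is an assumption -- verify it.
import requests

resp = requests.post(
    "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",
        "recognitionModel": "recognition_04",
        "returnFaceAttributes": "mask",  # assumed wire name for the face mask attribute
    },
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={"url": "https://example.com/masked-face.jpg"},  # placeholder
)
for face in resp.json():
    mask = face["faceAttributes"]["mask"]
    print(mask.get("type"), mask.get("noseAndMouthCovered"))
```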
+
## January 2021

### Spatial Analysis container update
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
* Added support for auto recalibration (disabled by default) via the `enable_recalibration` parameter. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
* Added camera calibration parameters to the `DETECTOR_NODE_CONFIG`. Refer to [Spatial Analysis operations](./spatial-analysis-operations.md) for details.
+### Mitigate latency
+* The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](./how-to/mitigate-latency.md).
+
+## December 2020
+### Customer configuration for Face ID storage
+* While the Face Service does not store customer images, the extracted face feature(s) will be stored on the server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time range for cached Face IDs is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
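For illustration, a minimal sketch of setting that parameter on a detect call in Python with `requests`; the endpoint, key, and image URL are placeholders.

```python
# Sketch: cache the returned face ID for 60 seconds instead of the default
# 24 hours. Endpoint, key, and image URL below are placeholders.
import requests

resp = requests.post(
    "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect",
    params={"faceIdTimeToLive": 60},  # seconds; any value from 60 to 86400
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={"url": "https://example.com/photo.jpg"},
)
print(resp.json())
```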
+
+## November 2020
+### Sample Face enrollment app
+* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](Tutorials/build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
+
## October 2020

### Computer Vision API v3.1 GA
The Computer Vision Read API v3.1 public preview adds these capabilities:
* This preview version of the Read API supports English, Dutch, French, German, Italian, Japanese, Portuguese, Simplified Chinese, and Spanish languages.
-See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
> [!div class="nextstepaction"] > [Learn more about Read API v3.1 Public Preview 2](https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005)
+## August 2020
+### Customer-managed encryption of data at rest
+* The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There's also a new option to manage your subscription with your own keys, called customer-managed keys (CMK). More details can be found at [Customer-managed keys](./identity-encrypt-data-at-rest.md).
+
## July 2020

### Read API v3.1 Public Preview with OCR for Simplified Chinese
The Computer Vision Read API v3.1 public preview adds support for Simplified Chi
* This preview version of the Read API supports English, Dutch, French, German, Italian, Portuguese, Simplified Chinese, and Spanish languages.
-See the [Read API how-to guide](Vision-API-How-to-Topics/call-read-api.md) to learn more.
+See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
> [!div class="nextstepaction"] > [Learn more about Read API v3.1 Public Preview 1](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005)
Computer Vision API v3.0 entered General Availability, with updates to the Read
See the [OCR overview](overview-ocr.md) to learn more.
+## April 2020
+### New Face API Recognition Model
+* The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](./how-to/specify-recognition-model.md).
+
## March 2020

* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure Cognitive Services security](../cognitive-services-security.md).
You now can use version 3.0 of the Read API to extract printed or handwritten te
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/REST/CSharp-hand-text.md?tabs=version-3) to get started using the 3.0 API.
+
+## June 2019
+
+### New Face API detection model
+* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b), and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in the `detectionModel` parameter. More details in [How to specify a detection model](how-to/specify-detection-model.md).
+
+## April 2019
+
+### Improved attribute accuracy
+* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated, with the `pitch` value now enabled. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
+### Improved processing speeds
+* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
+
+## March 2019
+
+### New Face API recognition model
+* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244), and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in the `recognitionModel` parameter. More details in [How to specify a recognition model](how-to/specify-recognition-model.md).
+
+## January 2019
+
+### Face Snapshot feature
+* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get). More details in [How to Migrate your face data to a different Face subscription](how-to/migrate-face-data.md).
+
+## October 2018
+
+### API messages
+* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
+
+## May 2018
+
+### Improved attribute accuracy
+* Significantly improved the `gender` attribute, and also improved the `age`, `glasses`, `facialHair`, `hair`, and `makeup` attributes. Use them through the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+### Increased file size limit
+* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
+
+## March 2018
+
+### New data structure
+* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to use the large-scale feature](how-to/use-large-scale.md).
+* Increased the [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter from [1, 5] to [1, 100], with a default of 10.
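A minimal sketch of an Identify call using the larger candidate limit follows; the group ID, face ID, endpoint, and key are placeholders.

```python
# Sketch: Face - Identify requesting up to 100 candidates per face (default 10).
# Group ID, face ID, endpoint, and key below are placeholders.
import requests

resp = requests.post(
    "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/identify",
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={
        "largePersonGroupId": "<your-large-person-group>",
        "faceIds": ["<face-id-from-a-detect-call>"],
        "maxNumOfCandidatesReturned": 100,  # previously capped at 5
    },
)
print(resp.json())
```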
+
+## May 2017
+
+### New detectable Face attributes
+* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
+* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.
+
+## March 2017
+
+### New detectable Face attribute
+* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+### Fixed issues
+* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+* The detectable face size is set to ensure it's strictly between 36x36 and 4096x4096 pixels.
+
+## November 2016
+### New subscription tier
+* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
+
+## October 2016
+### API messages
+* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+
+## July 2016
+### New features
+* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
+* Added an optional `mode` parameter to [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), enabling selection of two working modes: `matchPerson` and `matchFace`. The default is `matchPerson`.
+* Added an optional `confidenceThreshold` parameter to [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) so users can set the threshold that determines whether a face belongs to a Person object.
+* Added optional `start` and `top` parameters to [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) so users can specify the starting point and the number of PersonGroups to list.
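For illustration, a hedged sketch of a Find Similar call using the `mode` parameter above; the face ID, face list ID, endpoint, and key are placeholders.

```python
# Sketch: Face - Find Similar in matchFace mode (matchPerson is the default).
# The face ID, face list ID, endpoint, and key below are placeholders.
import requests

resp = requests.post(
    "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/findsimilars",
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={
        "faceId": "<query-face-id>",
        "faceListId": "<your-face-list>",
        "mode": "matchFace",  # match any similar face, not just the same person
    },
)
print(resp.json())
```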
+
+## V1.0 changes from V0
+
+* Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
+ [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+* Updated the minimum detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels won't be detected.
+* Deprecated the PersonGroup and Person data in Face V0. That data can't be accessed with the Face V1.0 service.
+* Deprecated the V0 endpoint of Face API on June 30, 2016.
++
## Cognitive Service updates

[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/ReleaseNotes.md
- Title: What's new in Azure Face service?-
-description: Stay up to date on recent releases and updates to the Azure Face service.
------- Previously updated : 09/27/2021----
-# What's new in Azure Face service?
-
-The Azure Face service is updated on an ongoing basis. Use this article to stay up to date with new features, enhancements, fixes, and documentation updates.
-
-## February 2022
-
-### New Quality Attribute in Detection_01 and Detection_03
-* To help system builders and their customers capture high quality images which are necessary for high quality outputs from Face API, we're introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combinations of detection models `detection_01` or `detection_03`, and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concepts/face-detection.md) and see how to use it with [QuickStart](./quickstarts/client-libraries.md?pivots=programming-language-csharp&tabs=visual-studio).
--
-## July 2021
-
-### New HeadPose and Landmarks improvements for Detection_03
-
-* The Detection_03 model has been updated to support facial landmarks.
-* The landmarks feature in Detection_03 is much more precise, especially in the eyeball landmarks which are crucial for gaze tracking.
--
-## April 2021
-
-### PersonDirectory data structure
-
-* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
-* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](Face-API-How-to-Topics/use-persondirectory.md).
--
-## February 2021
-
-### New Face API detection model
-* The new Detection 03 model is the most accurate detection model currently available. If you're a new a customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./face-api-how-to-topics/specify-detection-model.md) for more details.
-### New detectable Face attributes
-* The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./face-api-how-to-topics/specify-detection-model.md) for more details.
-### New Face API Recognition Model
-* The new Recognition 04 model is the most accurate recognition model currently available. If you're a new customer, we recommend using this model for verification and identification. It improves upon the accuracy of Recognition 03, including improved recognition for users wearing face covers (surgical masks, N95 masks, cloth masks). Note that we recommend against enrolling images of users wearing face covers as this will lower recognition quality. Now customers can build safe and seamless user experiences that detect whether a user is wearing a face cover with the latest Detection 03 model, and recognize them with the latest Recognition 04 model. See [Specify a face recognition model](./face-api-how-to-topics/specify-recognition-model.md) for more details.
--
-## January 2021
-### Mitigate latency
-* The Face team published a new article detailing potential causes of latency when using the service and possible mitigation strategies. See [Mitigate latency when using the Face service](./face-api-how-to-topics/how-to-mitigate-latency.md).
-
-## December 2020
-### Customer configuration for Face ID storage
-* While the Face Service does not store customer images, the extracted face feature(s) will be stored on server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time ranges for Face IDs being cached is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
-
-## November 2020
-### Sample Face enrollment app
-* The team published a sample Face enrollment app to demonstrate best practices for establishing meaningful consent and creating high-accuracy face recognition systems through high-quality enrollments. The open-source sample can be found in the [Build an enrollment app](build-enrollment-app.md) guide and on [GitHub](https://github.com/Azure-Samples/cognitive-services-FaceAPIEnrollmentSample), ready for developers to deploy or customize.
-
-## August 2020
-### Customer-managed encryption of data at rest
-* The Face service automatically encrypts your data when persisting it to the cloud. The Face service encryption protects your data to help you meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. There is also a new option to manage your subscription with your own keys called customer-managed keys (CMK). More details can be found at [Customer-managed keys](./encrypt-data-at-rest.md).
-
-## April 2020
-### New Face API Recognition Model
-* The new recognition 03 model is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition 03 will provide improved accuracy for both similarity comparisons and person-matching comparisons. More details can be found at [Specify a face recognition model](./face-api-how-to-topics/specify-recognition-model.md).
-
-## June 2019
-
-### New Face API detection model
-* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](Face-API-How-to-Topics/specify-detection-model.md).
-
-## April 2019
-
-### Improved attribute accuracy
-* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Improved processing speeds
-* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
-
-## March 2019
-
-### New Face API recognition model
-* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](Face-API-How-to-Topics/specify-recognition-model.md).
-
-## January 2019
-
-### Face Snapshot feature
-* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get). More details in [How to Migrate your face data to a different Face subscription](Face-API-How-to-Topics/how-to-migrate-face-data.md).
-
-## October 2018
-
-### API messages
-* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
-
-## May 2018
-
-### Improved attribute accuracy
-* Improved `gender` attribute significantly and also improved `age`, `glasses`, `facialHair`, `hair`, `makeup` attributes. Use them through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Increased file size limit
-* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
-
-## March 2018
-
-### New data structure
-* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to use the large-scale feature](Face-API-How-to-Topics/how-to-use-large-scale.md).
-* Increased [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter from [1, 5] to [1, 100] and default to 10.
-
-## May 2017
-
-### New detectable Face attributes
-* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
-* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup.
-
-## March 2017
-
-### New detectable Face attribute
-* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-### Fixed issues
-* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
-* The detectable face size is set to ensure it is strictly between 36x36 to 4096x4096 pixels.
-
-## November 2016
-### New subscription tier
-* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
-
-## October 2016
-### API messages
-* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
-
-## July 2016
-### New features
-* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
-* Added optional `mode` parameter enabling selection of two working modes: `matchPerson` and `matchFace` in [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and default is `matchPerson`.
-* Added optional `confidenceThreshold` parameter for user to set the threshold of whether one face belongs to a Person object in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Added optional `start` and `top` parameters in [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) to enable user to specify the start point and the total PersonGroups number to list.
-
-## V1.0 changes from V0
-
-* Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
- [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
-* Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected.
-* Deprecated the PersonGroup and Person data in Face V0. Those data cannot be accessed with the Face V1.0 service.
-* Deprecated the V0 endpoint of Face API on June 30, 2016.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
The issues are divided into three types. Refer to the following tables to check
**Auto-rejected**
-Data with these errors will not be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
+Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
| Category | Name | Description |
| --- | --- | --- |
Unresolved errors listed in the next table affect the quality of training, but d
After you validate your data files, you can use them to build your Custom Neural Voice model.
-1. On the **Train model** tab, select **Train model** to create a voice model with the data you've uploaded.
+1. On the **Train model** tab, select **Train a new model** to create a voice model with the data you've uploaded.
-1. Select the neural training method for your model and target language. By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language for your voice model. For more information, see [language support for Custom Neural Voice](language-support.md#custom-neural-voice). Also see information about [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for neural training.
+1. Select the neural training method for your model and target language.
+
+ By default, your voice model is trained in the same language as your training data. You can also choose to create a secondary language for your voice model. For more information, see [language support for Custom Neural Voice](language-support.md#custom-neural-voice). Also see information about [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for neural training.
1. Choose the data you want to use for training, and specify a speaker file.
After you validate your data files, you can use them to build your Custom Neural
>- To create a custom neural voice, select at least 300 utterances.
>- To train a neural voice, you must specify a voice talent profile. This profile must provide the audio consent file of the voice talent, acknowledging the use of his or her speech data to train a custom neural voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply for access](https://aka.ms/customneural).
-1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
+1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script, including up to 100 utterances. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
```
This is the waistline, and it's falling.
After you validate your data files, you can use them to build your Custom Neural
> [!NOTE]
> Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
- The **Train model** table displays a new entry that corresponds to this newly created model. The table also displays the status: processing, succeeded, or failed. The status reflects the process of converting your data to a voice model, as shown in this table:
+ The **Train model** table displays a new entry that corresponds to this newly created model.
+
+ When the model is training, you can select **Cancel training** to cancel your voice model. You're not charged for this canceled training.
+
+ :::image type="content" source="media/custom-voice/cnv-cancel-training.png" alt-text="Screenshot that shows how to cancel training for a model.":::
+
+ The table displays the status: processing, succeeded, failed, and canceled. The status reflects the process of converting your data to a voice model, as shown in this table:
| State | Meaning |
| --- | --- |
| Processing | Your voice model is being created. |
| Succeeded | Your voice model has been created and can be deployed. |
| Failed | Your voice model has failed in training. The cause of the failure might be, for example, unseen data problems or network issues. |
+ | Canceled | The training for your voice model was canceled. |
Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice.

> [!NOTE]
- > Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
+ > Standard subscription (S0) users can train four voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
1. After you finish training the model successfully, you can review the model details.
The quality of the voice depends on many factors, such as:
- The accuracy of the transcript file.
- How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
+### Rename your model
+
+If you want to rename the model you built, you can select **Clone model** to create a clone of the model with a new name in the current project.
++
+Enter the new name in the **Clone voice model** window, then select **Submit**. The text 'Neural' will be automatically added as a suffix to your new model name.
++
+### Test your voice model
+
+After you've trained your voice model, you can test the model on the model details page. Select **DefaultTests** under **Testing** to listen to the sample audios. The default test samples include 100 sample audios generated automatically during training to help you test the model. In addition to these 100 audios provided by default, your own test script (at most 100 utterances) provided during training is also added to the **DefaultTests** set. You're not charged for the testing with **DefaultTests**.
++
+If you want to upload your own test scripts to further test your model, select **Add test scripts** to upload your own test script.
++
+Before uploading a test script, check the [test script requirements](#train-your-custom-neural-voice-model). You'll be charged for the additional testing with batch synthesis, based on the number of billable characters. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+In the **Add test scripts** window, select **Browse for a file** to choose your own script, then select **Add** to upload it.
++
For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).

> [!NOTE]
-> Custom Neural Voice training is only available in the three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from the three regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
+> Custom Neural Voice training is only available in some regions. But you can easily copy a neural voice model from these regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
## Next steps
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
Cognitive Services for Big Data can use services from any region in the world, a
|Service Name|Service Description| |:--|:| |[Computer Vision](../computer-vision/index.yml "Computer Vision")| The Computer Vision service provides you with access to advanced algorithms for processing images and returning information. |
-|[Face](../face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. |
+|[Face](../computer-vision/index-identity.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. |
### Speech
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/best-practices.md
+
+ Title: Project best practices
+description: Best practices for Question Answering
+++++
+recommendations: false
Last updated : 06/03/2022++
+# Project best practices
+
+The following list of QnA pairs represents a project (knowledge base) and is used to highlight best practices when authoring in custom question answering.
+
+|Question |Answer |
+|-|-|
+|I want to buy a car. |There are three options for buying a car. |
+|I want to purchase software license. |Software licenses can be purchased online at no cost. |
+|How to get access to WPA? |WPA can be accessed via the company portal. |
+|What is the price of Microsoft stock?|$200. |
+|How do I buy Microsoft Services? |Microsoft services can be bought online. |
+|I want to sell car. |Please send car pictures and documents. |
+|How do I get an identification card? |Apply via company portal to get an identification card.|
+|How do I use WPA? |WPA is easy to use with the provided manual. |
+|What is the utility of WPA? |WPA provides a secure way to access company resources. |
+
+## When should you add alternate questions to a QnA?
+
+- Question answering employs a transformer-based ranker that takes care of user queries that are semantically similar to questions in the knowledge base. For example, consider the following question answer pair:
+
+ **Question: "What is the price of Microsoft Stock?"**
+
+ **Answer: "$200".**
+
+ The service can return expected responses for semantically similar queries such as:
+
+ "How much is Microsoft stock worth?"
+
+ "How much is Microsoft's share value?"
+
+ "How much does a Microsoft share cost?"
+
+ "What is the market value of Microsoft stock?"
+
+ "What is the market value of a Microsoft share?"
+
+ However, the confidence score with which the system returns the correct response will vary based on the input query and how different it is from the original question answer pair.
+
+- There are certain scenarios that require the customer to add an alternate question. When a query doesn't return the correct answer even though the answer is present in the knowledge base, we advise adding that query as an alternate question to the intended QnA pair.
+
+## How many alternate questions per QnA is optimal?
+
+- Users can add up to 10 alternate questions depending on their scenario. Alternate questions beyond the first 10 aren't considered by our core ranker. However, they are evaluated in the other processing layers, resulting in better output overall. All the alternate questions will be considered in the preprocessing step to look for an exact match.
+
+- Semantic understanding in question answering should be able to take care of similar alternate questions.
+
+- The return on investment will start diminishing once you exceed 10 questions. Even if you're adding more than 10 alternate questions, try to make the initial 10 questions as semantically dissimilar as possible so that all intents for the answer are captured by these 10 questions. For the knowledge base above, in QnA #1, adding alternate questions such as "How can I buy a car?" and "I wanna buy a car." isn't required. Whereas adding alternate questions such as "How to purchase a car." and "What are the options for buying a vehicle?" can be useful.
+
+## When to add synonyms to a knowledge base
+
+- Question answering provides the flexibility to use synonyms at the knowledge base level, unlike QnA Maker where synonyms are shared across knowledge bases for the entire service.
+
+- For better relevance, the customer needs to provide a list of acronyms that the end user intends to use interchangeably. For instance, the following is a list of acceptable acronyms:
+
+ MSFT - Microsoft
+
+ ID - Identification
+
+ ETA - Estimated time of Arrival
+
+- Apart from acronyms, if you think your words are similar in the context of a particular domain and generic language models won't consider them similar, it's better to add them as synonyms. For instance, if an auto company producing a car model X receives queries such as "my car's audio isn't working" and the knowledge base has questions on "fixing audio for car X", then we need to add "X" and "car" as synonyms.
+
+- The transformer-based model already takes care of most of the common synonym cases, such as Purchase - Buy, Sell - Auction, and Price - Value. For example, consider the following QnA pair: Q: "What is the price of Microsoft Stock?" A: "$200".
+
+If we receive user queries like "Microsoft stock value", "Microsoft share value", "Microsoft stock worth", "Microsoft share worth", "stock value", and so on, the service should be able to return the correct answer even though these queries have words like "share", "value", and "worth", which aren't originally present in the knowledge base.
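+
+To illustrate, here's a minimal `az rest` sketch of defining synonyms programmatically. This is a hedged example rather than an official sample: the endpoint shape and `api-version` follow the custom question answering authoring API's update synonyms operation, and the resource and project names are illustrative placeholders.
+
+```azurecli
+az rest --method put \
+  --skip-authorization-header \
+  --url "https://<your-language-resource>.cognitiveservices.azure.com/language/query-knowledgebases/projects/<project-name>/synonyms?api-version=2021-10-01" \
+  --headers "Ocp-Apim-Subscription-Key=<your-resource-key>" \
+  --body '{"value": [{"alterations": ["MSFT", "Microsoft"]}, {"alterations": ["X", "car"]}]}'
+```
+
+Each `alterations` array lists terms that the service treats as interchangeable when ranking queries against the knowledge base.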
+
+## How are lowercase/uppercase characters treated?
+
+Question answering takes casing into account, but it's intelligent enough to understand when casing should be ignored. You shouldn't see any perceivable difference due to incorrect casing.
+
+## How are QnAs prioritized for multi-turn questions?
+
+When a knowledge base has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other QnAs, for the next query we give slight preference to all the child QnAs, sibling QnAs, and grandchild QnAs, in that order. Along with any query, the [Question Answering API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous QnA ID, all the related QnAs are boosted.
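+
+For illustration, here's a minimal `az rest` sketch of passing the `context` object on a follow-up query. This is a hedged example rather than an official sample: the endpoint shape and `api-version` follow the custom question answering runtime API, and the resource, project, and ID values are illustrative placeholders.
+
+```azurecli
+# previousQnAId is the ID of the QnA that produced the last top answer
+az rest --method post \
+  --skip-authorization-header \
+  --url "https://<your-language-resource>.cognitiveservices.azure.com/language/:query-knowledgebases?projectName=<project-name>&deploymentName=production&api-version=2021-10-01" \
+  --headers "Ocp-Apim-Subscription-Key=<your-resource-key>" \
+  --body '{"question": "How do I use WPA?", "top": 3, "context": {"previousQnAId": 3, "previousUserQuery": "How to get access to WPA?"}}'
+```
+
+With this context, the service boosts the child, sibling, and grandchild QnAs of QnA ID 3 when ranking the new query.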
+
+## How are accents treated?
+
+Accents are supported for all major European languages. If the query has an incorrect accent, the confidence score might be slightly different, but the service still returns the relevant answer and takes care of minor errors by using fuzzy search.
+
+## How is punctuation in a user query treated?
+
+Punctuation is ignored in a user query before it's sent to the ranking stack, so it shouldn't impact the relevance scores. The punctuation characters that are ignored are as follows: ,?:;\"'(){}[]-+。./!*؟
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with Question Answering](../quickstart/sdk.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
* [Anomaly Detector (Preview)](./anomaly-detector/index.yml) * [Custom Vision](./custom-vision-service/index.yml)
-* [Face](./face/index.yml)
+* [Face](./computer-vision/index-identity.yml)
* [Personalizer](./personalizer/index.yml) ## Vision
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
See the tables below to learn about the services offered within those categories
|:--|:|--| |[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information.| [Computer Vision quickstart](./computer-vision/quickstarts-sdk/client-library.md)| |[Custom Vision](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. | [Custom Vision quickstart](./custom-vision-service/getting-started-build-a-classifier.md)|
-|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| [Face quickstart](./face/quickstarts/client-libraries.md)|
+|[Face](./computer-vision/index-identity.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| [Face quickstart](./face/quickstarts/client-libraries.md)|
## Speech APIs
communication-services Network Diagnostic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/developer-tools/network-diagnostic.md
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
-The Network Diagnostics Tool enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service to ensure a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://acs-network-diagnostic-tool.azurewebsites.net/). Users can quickly run a test, by pressing the start test button. The tool performs diagnostics on the network, devices, and call quality. The results of the diagnostics are directly provided through the tools UI. No sign-in required to use the tool.
+The **Network Diagnostics Tool** enables Azure Communication Services developers to ensure that their device and network conditions are optimal for connecting to the service and getting a great call experience. The tool can be found at [aka.ms/acsdiagnostics](https://azurecommdiagnostics.net/). Users can quickly run a test by selecting the start test button. The tool performs diagnostics on the network, devices, and call quality. The results of the diagnostics are provided directly through the tool's UI. No sign-in is required to use the tool. After the test, a GUID is presented, which can be provided to our support team for further help.
![Network Diagnostic Tool home screen](../media/network-diagnostic-tool.png)
If you are looking to build your own Network Diagnostic Tool or to perform deepe
## Privacy

When a user runs a network diagnostic, the tool collects and stores service and client telemetry data to verify your network conditions and ensure that they're compatible with Azure Communication Services. The telemetry collected doesn't contain personally identifiable information. The test utilizes both audio and video collected through your device for this verification. The audio and video used for the test aren't stored.+
+## Support
+
+The test provides a **unique identifier** for your test, which you can give to our support team for further help. For more information, see [help and support options](../../support.md).
## Next Steps
communication-services Messaging Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/messaging-policy.md
# Azure Communication Services Messaging Policy
-Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams and Skype. Integrate SMS messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements and industry standards to get started.
+Azure Communication Services is transforming the way our customers engage with their clients by building rich, custom communication experiences that take advantage of the same enterprise-grade services that back Microsoft Teams, Skype, and Exchange. You can easily integrate SMS and email messaging functionality into your communications solutions to reach your customers anytime and anywhere they need support. You just need to keep in mind a few messaging requirements and industry standards to get started.
We know that messaging requirements can seem daunting to learn, but they're as easy as remembering "COMS":
We developed this messaging policy to help you satisfy regulatory requirements a
### What is consent?
-Consent is an agreement between you and the message recipient that allows you to send automated messages to them. You must obtain consent before sending the first message, and you should make clear to the recipient that they're agreeing to receive messages from you. This procedure is known as receiving "prior express consent" from the individual you intend to message.
+Consent is an agreement between you and the message recipient that allows you to send application to person (A2P) messages to them. You must obtain consent before sending the first message, and you should make clear to the recipient that they're agreeing to receive messages from you. This procedure is known as receiving "prior express consent" from the individual you intend to message.
-The messages that you send must be the same type of messages that the recipient agreed to receive and should only be sent to the number that the recipient provided to you. If you intend to send informational messages, such as appointment reminders or alerts, then consent can be either written or oral. If you intend to send promotional messages, such as sales or marketing messages that promote a product or service, then consent must be written.
+The messages that you send must be the same type of messages that the recipient agreed to receive and should only be sent to the number or email address that the recipient provided to you. If you intend to send informational messages, such as appointment reminders or alerts, then consent can be either written or oral. If you intend to send promotional messages, such as sales or marketing messages that promote a product or service, then consent must be written.
### How do you obtain consent? Consent can be obtained in a variety of ways, such as: -- When a user enters their telephone number into a website,
+- When a user enters their telephone number or email address into a website,
- When a user initiates a text message exchange, or - When a user sends a sign-up keyword to your phone number.
Regardless of how consent is obtained, you and your customers must ensure that t
- Provide a "Call to Action" before obtaining consent. You and your customers should provide potential message recipients with a "call to action" that invites them to opt-in to your messaging program. The call to action should include, at a minimum: (1) the identity of the message sender, (2) clear opt-in instructions, (3) opt-out instructions, and (4) any associated messaging fees. - Consent isn't transferable or assignable. Any consent that an individual provides to you cannot be transferred or sold to an unaffiliated third party. If you collect an individual's consent for a third party, then you must clearly identify the third party to the individual. You must also state that the consent you obtained applies only to communications from the third party.-- Consent is limited in purpose. An individual who provides their number for a particular purpose consents to receive communications only for that specific purpose and from that specific message sender. Before obtaining consent, you should clearly notify the intended message recipient if you'll send recurring messages or messages from an affiliate.
+- Consent is limited in purpose. An individual who provides their number or an email address for a particular purpose consents to receive communications only for that specific purpose and from that specific message sender. Before obtaining consent, you should clearly notify the intended message recipient if you'll send recurring messages or messages from an affiliate.
### Consent best practices:
In addition to the messaging requirements discussed above, you may want to imple
- Detailed "Call to Action" information. To ensure that you obtain appropriate consent, provide - The name or description of your messaging program or product
- - The number(s) from which recipients will receive messages, and
+ - The number(s) or email address(es) from which recipients will receive messages, and
- Any applicable terms and conditions before an individual opts-in to receiving messages from you. - Accurate records of consent. You should retain records of any consent that an individual provides to you for at least four years. Records of consent can include: - Timestamps
Message recipients may revoke consent and opt-out of receiving future messages t
Ensure that message recipients can opt-out of future messages at any time. You must also offer multiple opt-out options. After a message recipient opts-out, you should not send additional messages unless the individual provides renewed consent.
-One of the most common opt-out mechanisms is to include a "STOP" keyword in the initial message of every new conversation. Be prepared to remove customers that reply with a lowercase "stop" or other common keywords, such as "unsubscribe" or "cancel." After an individual revokes consent, you should remove them from all recurring messaging campaigns unless they expressly elect to continue receiving messages from a particular program.
+One of the most common opt-out mechanisms in SMS applications is to include a "STOP" keyword in the initial message of every new conversation. Be prepared to remove customers that reply with a lowercase "stop" or other common keywords, such as "unsubscribe" or "cancel."
+
+For email, a common opt-out mechanism is to embed an unsubscribe link in every email sent to the customer. If the customer selects the unsubscribe link, you should be prepared to remove that customer's email address from your communication list.
+
+After an individual revokes consent, you should remove them from all recurring messaging campaigns unless they expressly elect to continue receiving messages from a particular program.
### Opt-out best practices:
-In addition to keywords, other common opt-out mechanisms include providing customers with a designated opt-out e-mail address, the phone number of customer support staff, or a link to unsubscribe on your webpage.
+In addition to keywords, other common opt-out mechanisms include providing customers with a designated opt-out email address, the phone number of customer support staff, or an unsubscribe link embedded in an email message you sent or available on your webpage.
-### How we handle opt-out requests:
+### How we handle opt-out requests for SMS
If an individual requests to opt out of future messages on an Azure Communication Services toll-free number, then all further traffic from that number will be automatically stopped. However, you must still ensure that you do not send additional messages for that messaging campaign from new or different numbers. If you have separately obtained express consent for a different messaging campaign, then you may continue to send messages from a different number for that campaign. Check out our FAQ section to learn more about [Opt-out handling](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/communication-services/concepts/sms/sms-faq.md#how-can-i-receive-messages-using-azure-communication-services)
+### How we handle opt-out requests for email
+
+If an individual uses the unsubscribe UI page to opt out of future messages on Azure Communication Services, you'll have to add the requested recipient's email address to the suppression list, which is used to filter recipients during the send-mail process.
+ ## Message content ### Adult content:
We reserve the right to modify the list of prohibited message content at any tim
## Spoofing
-Spoofing is the act of causing a misleading or inaccurate originating number to display on a message recipient's device. We strongly discourage you and any service provider that you use from sending spoofed messages. Spoofing shields the identity of the message sender and prevents message recipients from easily opting out of unwanted communications. We also require that you abide by all applicable spoofing laws.
+Spoofing is the act of causing a misleading or inaccurate originating number or email address to display on a message recipient's device. We strongly discourage you and any service provider that you use from sending spoofed messages. Spoofing shields the identity of the message sender and prevents message recipients from easily opting out of unwanted communications. We also require that you abide by all applicable spoofing laws.
## Final thoughts
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The SBC makes a DNS query to resolve sip.pstnhub.microsoft.com. Based on the SBC
## Media traffic: IP and Port ranges
-The media traffic flows to and from a separate service called Media Processor. At the moment of publishing, Media Processor for Communication Services can use any Azure IP address.
-Download [the full list of addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+The media traffic flows to and from a separate service in the Microsoft Cloud called Media Processor. The IP address range for media traffic is:
+- `20.202.0.0/16 (IP addresses from 20.202.0.1 to 20.202.255.254)`
-### Port range
-The port range of the Media Processors is shown in the following table:
+### Port ranges
+The port ranges of the Media Processors are shown in the following table:
|Traffic|From|To|Source port|Destination port|
|: |: |: |: |: |
The port range of the Media Processors is shown in the following table:
## Media traffic: Media processors geography
-The media traffic flows via components called media processors. Media processors are placed in the same datacenters as SIP proxies:
+Media Processors are placed in the same datacenters as SIP proxies:
- NOAM (US South Central, two in US West and US East datacenters) - Europe (UK South, France Central, Amsterdam and Dublin datacenters) - Asia (Singapore datacenter)
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration Previously updated : 06/01/2022 Last updated : 06/08/2022 tags: connectors
The SQL Server connector has different versions, based on [logic app type and ho
|--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). | | **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For operations, managed connector limits, and other information, review the [SQL Server managed connector reference](/connectors/sql). For ISE-versioned limits, review the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed connector's message limits. |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The built-in version differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. The action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). For managed connector operations, limits, and other information, review the [SQL Server managed connector reference](/connectors/sql/). <br><br>The built-in connector differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. This action can directly access Azure virtual networks with a connection string and doesn't need the on-premises data gateway. <br><br>For built-in connector operations, limits, and other information, review the [SQL Server built-in connector reference](#built-in-connector-operations). |
||||
+## Limitations
+
+For more information, review the [SQL Server managed connector reference](/connectors/sql/) or the [SQL Server built-in connector reference](#built-in-connector-operations).
+ ## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The SQL Server connector has different versions, based on [logic app type and ho
You can use the SQL Server built-in connector, which requires a connection string. To use the SQL Server managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
-For other connector requirements, review [SQL Server connector reference](/connectors/sql/).
-
-## Limitations
-
-For more information, review the [SQL Server connector reference](/connectors/sql/).
+For other connector requirements, review [SQL Server managed connector reference](/connectors/sql/).
<a name="add-sql-trigger"></a>
When you call a stored procedure by using the SQL Server connector, the returned
1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want.
+<a name="built-in-connector-operations"></a>
+
+## Built-in connector operations
++
+### Actions
+
+The SQL Server built-in connector has a single action.
+
+#### Execute Query
+
+Operation ID: `executeQuery`
+
+Runs a query against a SQL database.
+
+##### Parameters
+
+| Name | Key | Required | Type | Description |
+||--|-||-|
+| **Query** | `query` | True | Dynamic | The body for your query |
+| **Query Parameters** | `queryParameters` | False | Objects | The parameters for your query |
+||||||
+
+##### Returns
+
+The outputs from this operation are dynamic.
+
+## Built-in connector app settings
+
+The SQL Server built-in connector includes app settings on your Standard logic app resource that control various thresholds for performance, throughput, capacity, and so on. For example, you can change the default timeout value for connector operations. For more information, review [Reference for app settings - local.settings.json](../logic-apps/edit-app-settings-host-settings.md#reference-local-settings-json).
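+
+For example, here's a minimal Azure CLI sketch of changing one of these settings. The setting key and value are placeholders to look up in the reference article, and the command is an assumption based on Standard logic apps running on the Azure Functions runtime:
+
+```azurecli
+az functionapp config appsettings set \
+  --name <LOGIC_APP_NAME> \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --settings "<SETTING_NAME>=<VALUE>"
+```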
+ ## Troubleshoot problems <a name="connection-problems"></a>
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
Previously updated : 05/15/2022 Last updated : 06/07/2022
Azure Container Apps allows you to bind one or more custom domains to a containe
- Every domain name must be associated with a domain certificate. - Certificates are applied to the container app environment and are bound to individual container apps. You must have role-based access to the environment to add certificates. - [SNI domain certificates](https://wikipedia.org/wiki/Server_Name_Indication) are required.
+- Ingress must be enabled for the container app.
## Add a custom domain and certificate
-> [!NOTE]
-> If you are using a new certificate, you must have an existing [SNI domain certificate](https://wikipedia.org/wiki/Server_Name_Indication) file available to upload to Azure.
+> [!IMPORTANT]
+> If you are using a new certificate, you must have an existing [SNI domain certificate](https://wikipedia.org/wiki/Server_Name_Indication) file available to upload to Azure.
1. Navigate to your container app in the [Azure portal](https://portal.azure.com)
+1. Verify that your app has ingress enabled by selecting **Ingress** in the *Settings* section. If ingress isn't enabled, enable it with these steps (a CLI alternative is sketched after this procedure):
+
+ 1. Set *HTTP Ingress* to **Enabled**.
+ 1. Select the desired *Ingress traffic* setting.
+ 1. Enter the *Target port*.
+ 1. Select **Save**.
+ 1. Under the *Settings* section, select **Custom domains**. 1. Select the **Add custom domain** button.
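As an alternative to the portal steps for enabling ingress, you can enable ingress from the Azure CLI. The following is a minimal sketch that assumes external ingress; replace the \<placeholders\> with your values:

```azurecli
az containerapp ingress enable \
  --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --type external \
  --target-port <TARGET_PORT>
```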
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 05/10/2022 Last updated : 06/07/2022 # Dapr integration with Azure Container Apps
-The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable APIs that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once enabled in Container Apps, Dapr exposes its HTTP and gRPC APIs via a sidecar: a process that runs in tandem with each of your Container Apps.
+The Distributed Application Runtime ([Dapr][dapr-concepts]) is a set of incrementally adoptable APIs that simplify the authoring of distributed, microservice-based applications. For example, Dapr provides capabilities for enabling application intercommunication, whether through messaging via pub/sub or reliable and secure service-to-service calls. Once Dapr is enabled in Container Apps, it exposes its HTTP and gRPC APIs via a sidecar: a process that runs in tandem with each of your Container Apps.
Dapr APIs, also referred to as building blocks, are built on best practice industry standards, that:
The following Pub/sub example demonstrates how Dapr works alongside your contain
| -- | - | -- | | 1 | Container Apps with Dapr enabled | Dapr is enabled at the container app level by configuring Dapr settings. Dapr settings apply across all revisions of a given container app. | | 2 | Dapr sidecar | Fully managed Dapr APIs are exposed to your container app via the Dapr sidecar. These APIs are available through HTTP and gRPC protocols. By default, the sidecar runs on port 3500 in Container Apps. |
-| 3 | Dapr component | Dapr components can be shared by multiple container apps. Using scopes, the Dapr sidecar will determine which components to load for a given container app at runtime. |
+| 3 | Dapr component | Dapr components can be shared by multiple container apps. The Dapr sidecar uses scopes to determine which components to load for a given container app at runtime. |
### Enable Dapr
-You can define the Dapr configuration for a container app through the Azure CLI or using Infrastructure as Code templates like bicep or ARM. With the following settings, you enable Dapr on your app:
+You can define the Dapr configuration for a container app through the Azure CLI or by using Infrastructure as Code templates such as Bicep or Azure Resource Manager (ARM) templates. You can enable Dapr in your app with the following settings:
-| Field | Description |
-| -- | -- |
-| `--enable-dapr` / `enabled` | Enables Dapr on the container app. |
-| `--dapr-app-port` / `appPort` | Identifies which port your application is listening. |
-| `--dapr-app-protocol` / `appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
-| `--dapr-app-id` / `appId` | The unique ID of the application. Used for service discovery, state encapsulation, and the pub/sub consumer ID. |
+| CLI Parameter | Template field | Description |
+| -- | -- | -- |
+| `--enable-dapr` | `dapr.enabled` | Enables Dapr on the container app. |
+| `--dapr-app-port` | `dapr.appPort` | Identifies which port your application is listening on. |
+| `--dapr-app-protocol` | `dapr.appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http` or `grpc`. Default is `http`. |
+| `--dapr-app-id` | `dapr.appId` | The unique ID of the application. Used for service discovery, state encapsulation, and the pub/sub consumer ID. |
-Since Dapr settings are considered application-scope changes, new revisions aren't created when you change Dapr settings. However, when changing a Dapr setting, the container app instance and revisions are automatically restarted.
+The following example shows how to define a Dapr configuration in a template by adding it to the `properties.configuration` section of your container app's resource declaration.
+
+# [Bicep](#tab/bicep1)
+
+```bicep
+ dapr: {
+ enabled: true
+ appId: 'nodeapp'
+ appProtocol: 'http'
+ appPort: 3000
+ }
+```
+
+# [ARM](#tab/arm1)
+
+```json
+ "dapr": {
+ "enabled": true,
+ "appId": "nodeapp",
+ "appProcotol": "http",
+ "appPort": 3000
+ }
+
+```
+++
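+For reference, here's a minimal Azure CLI sketch that enables Dapr while creating a container app. The app ID, port, and placeholder names are illustrative:
+
+```azurecli
+az containerapp create \
+  --name nodeapp \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --environment <ENVIRONMENT_NAME> \
+  --image <IMAGE_NAME> \
+  --enable-dapr \
+  --dapr-app-id nodeapp \
+  --dapr-app-port 3000 \
+  --dapr-app-protocol http
+```
+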
+Since Dapr settings are considered application-scope changes, new revisions aren't created when you change Dapr settings. However, when you change Dapr settings, the container app revisions and replicas are automatically restarted.
### Configure Dapr components
Once Dapr is enabled on your container app, you're able to plug in and use the [
- Can be easily modified to point to any one of the component implementations. - Can reference secure configuration values using Container Apps secrets.
-Based on your needs, you can "plug in" certain Dapr component types like state stores, pub/sub brokers, and more. In the examples below, you will find the various schemas available for defining a Dapr component in Azure Container Apps. The Container Apps manifests differ sightly from the Dapr OSS manifests in order to simplify the component creation experience.
+Based on your needs, you can "plug in" certain Dapr component types like state stores, pub/sub brokers, and more. In the examples below, you'll find the various schemas available for defining a Dapr component in Azure Container Apps. The Container Apps manifests differ slightly from the Dapr OSS manifests in order to simplify the component creation experience.
> [!NOTE] > By default, all Dapr-enabled container apps within the same environment will load the full set of deployed components. By adding scopes to a component, you tell the Dapr sidecars for each respective container app which components to load at runtime. Using scopes is recommended for production workloads. # [YAML](#tab/yaml)
-When defining a Dapr component via YAML, you will pass your component manifest into the Azure CLI. When configuring multiple components, you will need to create a separate YAML file and run the Azure CLI command for each component.
+When defining a Dapr component via YAML, you'll pass your component manifest into the Azure CLI. When configuring multiple components, you'll need to create a separate YAML file and run the Azure CLI command for each component.
For example, deploy a `pubsub.yaml` component using the following command:
For example, deploy a `pubsub.yaml` component using the following command:
az containerapp env dapr-component set --name ENVIRONMENT_NAME --resource-group RESOURCE_GROUP_NAME --dapr-component-name pubsub --yaml "./pubsub.yaml" ```
-The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`.
+The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`.
```yaml # pubsub.yaml for Azure Service Bus component
The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with ap
This resource defines a Dapr component called `dapr-pubsub` via Bicep. The Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
-The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+The `dapr-pubsub` component is scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
```bicep resource daprComponent 'daprComponents@2022-03-01' = {
resource daprComponent 'daprComponents@2022-03-01' = {
A Dapr component is defined as a child resource of your Container Apps environment. To define multiple components, you can add a `daprComponent` resource for each Dapr component.
-This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app ids `publisher-app` and `subscriber-app`:
+This resource defines a Dapr component called `dapr-pubsub` via ARM. The `dapr-pubsub` component will be scoped to the Dapr-enabled container apps with app IDs `publisher-app` and `subscriber-app`:
```json {
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Once your Azure Blob Storage account is created, you'll create a template where
### Create Azure Resource Manager (ARM) template
-Create an ARM template to deploy a Container Apps environment including:
+Create an ARM template to deploy a Container Apps environment that includes:
* the associated Log Analytics workspace
-* Application Insights resource for distributed tracing
+* the Application Insights resource for distributed tracing
* a dapr component for the state store
-* two dapr-enabled container apps
+* the two dapr-enabled container apps
Save the following file as _hello-world.json_:
Save the following file as _hello-world.json_:
"managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments/', parameters('environment_name'))]", "configuration": { "ingress": {
- "external": true,
+ "external": false,
"targetPort": 3000 }, "dapr": {
Save the following file as _hello-world.json_:
{ "image": "dapriosamples/hello-k8s-node:latest", "name": "hello-k8s-node",
+ "env": [
+ {
+ "name": "APP_PORT",
+ "value": "3000"
+ }
+ ],
"resources": { "cpu": 0.5, "memory": "1.0Gi"
Save the following file as _hello-world.json_:
### Create Azure Bicep templates
-Create a bicep template to deploy a Container Apps environment including:
+Create a bicep template to deploy a Container Apps environment that includes:
* the associated Log Analytics workspace
-* Application Insights resource for distributed tracing
+* the Application Insights resource for distributed tracing
* a dapr component for the state store * the two dapr-enabled container apps
resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
managedEnvironmentId: environment.id configuration: { ingress: {
- external: true
+ external: false
targetPort: 3000 } dapr: {
resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
{ image: 'dapriosamples/hello-k8s-node:latest' name: 'hello-k8s-node'
+ env: [
+ {
+ name: 'APP_PORT'
+ value: '3000'
+ }
+ ]
resources: { cpu: json('0.5') memory: '1.0Gi'
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
az containerapp env dapr-component set `
-Your state store is configured using the Dapr component described in *statestore.yaml*. The component is scoped to a container app named `nodeapp` and is not available to other container apps.
+Your state store is configured using the Dapr component described in *statestore.yaml*. The component is scoped to a container app named `nodeapp` and isn't available to other container apps.
## Deploy the service application (HTTP web server)
az containerapp create \
--environment $CONTAINERAPPS_ENVIRONMENT \ --image dapriosamples/hello-k8s-node:latest \ --target-port 3000 \
- --ingress 'external' \
+ --ingress 'internal' \
--min-replicas 1 \ --max-replicas 1 \ --enable-dapr \
+ --dapr-app-id nodeapp \
--dapr-app-port 3000 \
- --dapr-app-id nodeapp
+ --env-vars 'APP_PORT=3000'
``` # [PowerShell](#tab/powershell)
az containerapp create `
--environment $CONTAINERAPPS_ENVIRONMENT ` --image dapriosamples/hello-k8s-node:latest ` --target-port 3000 `
- --ingress 'external' `
+ --ingress 'internal' `
--min-replicas 1 ` --max-replicas 1 ` --enable-dapr `
+ --dapr-app-id nodeapp `
--dapr-app-port 3000 `
- --dapr-app-id nodeapp
+ --env-vars 'APP_PORT=3000'
```
az containerapp create `
This command deploys: * the service (Node) app server on `--target-port 3000` (the app port)
-* its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
+* its accompanying Dapr sidecar configured with `--dapr-app-id nodeapp` and `--dapr-app-port 3000` for service discovery and invocation
## Deploy the client application (headless client)
az containerapp create `
-This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless there is no `--target-port` to start a server, nor is there a need to enable ingress.
+This command deploys `pythonapp` that also runs with a Dapr sidecar that is used to look up and securely call the Dapr sidecar for `nodeapp`. As this app is headless, there's no `--target-port` to start a server, nor is there a need to enable ingress.
## Verify the result
You can confirm that the services are working correctly by viewing data in your
### View Logs
-Data logged via a container app are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or with the CLI. Wait a few minutes for the analytics to arrive for the first time before you are able to query the logged data.
+Data logged via a container app are stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. You can view logs through the Azure portal or with the CLI. Wait a few minutes for the analytics to arrive for the first time before you're able to query the logged data.
Use the following CLI command to view logs on the command line.
nodeapp Got a new order! Order ID: 63 PrimaryResult 2021-10-22
## Clean up resources
-Once you are done, run the following command to delete your resource group along with all the resources you created in this tutorial.
+Once you're done, run the following command to delete your resource group along with all the resources you created in this tutorial.
# [Bash](#tab/bash)
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Title: Manage revisions in Azure Container Apps
-description: Manage revisions and traffic splitting in Azure Container Apps.
+description: Manage revisions and traffic splitting in Azure Container Apps.
Previously updated : 11/02/2021 Last updated : 06/07/2022 -
-# Manage revisions Azure Container Apps
+# Manage revisions in Azure Container Apps
-Supporting multiple revisions in Azure Container Apps allows you to manage the versioning and amount of [traffic sent to each revision](#traffic-splitting). Use the following commands to control of how your container app manages revisions.
+Supporting multiple revisions in Azure Container Apps allows you to manage the versioning of your container app. With this feature, you can activate and deactivate revisions, and control the amount of [traffic sent to each revision](#traffic-splitting). To learn more about revisions, see [Revisions in Azure Container Apps](revisions.md).
-## List
+A revision is created when you first deploy your application. New revisions are created when you [update](#updating-your-container-app) your application with [revision-scope changes](revisions.md#revision-scope-changes). You can also update your container app based on a specific revision.
-List all revisions associated with your container app with `az containerapp revision list`.
+
+This article describes the commands to manage your container app's revisions. For more information about Container Apps commands, see [`az containerapp`](/cli/azure/containerapp). For more information about commands to manage revisions, see [`az containerapp revision`](/cli/azure/containerapp/revision).
++
+## Updating your container app
+
+To update a container app, use the `az containerapp update` command. With this command you can modify environment variables, compute resources, scale parameters, and deploy a different image. If your container app update includes [revision-scope changes](revisions.md#revision-scope-changes), a new revision will be generated.
+
+You may also use a YAML file to define these and other configuration options and parameters. For more information about this command, see [`az containerapp update`](/cli/azure/containerapp#az-containerapp-update).
+
+This example updates the container image. (Replace the \<placeholders\> with your values.)
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp update \
+ --name <APPLICATION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --image <IMAGE_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp update `
+ --name <APPLICATION_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --image <IMAGE_NAME>
+```
+++
+You can also update your container app with the [Revision copy](#revision-copy) command.
+
+## Revision list
+
+List all revisions associated with your container app with `az containerapp revision list`. For more information about this command, see [`az containerapp revision list`](/cli/azure/containerapp/revision#az-containerapp-revision-list).
# [Bash](#tab/bash)
az containerapp revision list `
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision show
-## Show
+Show details about a specific revision by using `az containerapp revision show`. For more information about this command, see [`az containerapp revision show`](/cli/azure/containerapp/revision#az-containerapp-revision-show).
-Show details about a specific revision by using `az containerapp revision show`.
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision show \ --name <REVISION_NAME> \
- --app <CONTAINER_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision show \
```azurecli az containerapp revision show ` --name <REVISION_NAME> `
- --app <CONTAINER_APP_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision copy
+
+To create a new revision based on an existing revision, use the `az containerapp revision copy` command. Container Apps will use the configuration of the existing revision, which you may then modify.
-## Update
+With this command, you can modify environment variables, compute resources, scale parameters, and deploy a different image. You may also use a YAML file to define these and other configuration options and parameters. For more information regarding this command, see [`az containerapp revision copy`](/cli/azure/containerapp/revision#az-containerapp-revision-copy).
-To update a container app, use `az containerapp update`.
+This example copies the latest revision and sets the compute resource parameters. (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli
-az containerapp update \
+az containerapp revision copy \
--name <APPLICATION_NAME> \ --resource-group <RESOURCE_GROUP_NAME> \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld
+ --cpu 0.75 \
+ --memory 1.5Gi
``` # [PowerShell](#tab/powershell) ```azurecli
-az containerapp update `
+az containerapp revision copy `
--name <APPLICATION_NAME> ` --resource-group <RESOURCE_GROUP_NAME> `
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld
+ --cpu 0.75 `
+ --memory 1.5Gi
```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision activate
-## Activate
+Activate a revision by using `az containerapp revision activate`. For more information about this command, see [`az containerapp revision activate`](/cli/azure/containerapp/revision#az-containerapp-revision-activate).
-Activate a revision by using `az containerapp revision activate`.
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision activate \ --revision <REVISION_NAME> \
- --name <CONTAINER_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision activate \
```poweshell az containerapp revision activate ` --revision <REVISION_NAME> `
- --name <CONTAINER_APP_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision deactivate
-## Deactivate
+Deactivate revisions that are no longer in use with `az containerapp revision deactivate`. Deactivation stops all running replicas of a revision. For more information, see [`az containerapp revision deactivate`](/cli/azure/containerapp/revision#az-containerapp-revision-deactivate).
-Deactivate revisions that are no longer in use with `az containerapp revision deactivate`. Deactivation stops all running replicas of a revision.
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision deactivate \ --revision <REVISION_NAME> \
- --name <CONTAINER_APP_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision deactivate \
```azurecli az containerapp revision deactivate ` --revision <REVISION_NAME> `
- --name <CONTAINER_APP_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision restart
+
+This command restarts a revision. For more information about this command, see [`az containerapp revision restart`](/cli/azure/containerapp/revision#az-containerapp-revision-restart).
-## Restart
+When you modify secrets in your container app, you'll need to restart the active revisions so they can access the secrets.
-All existing container apps revisions will not have access to this secret until they are restarted
+Example: (Replace the \<placeholders\> with your values.)
# [Bash](#tab/bash) ```azurecli az containerapp revision restart \ --revision <REVISION_NAME> \
- --name <APPLICATION_NAME> \
--resource-group <RESOURCE_GROUP_NAME> ```
az containerapp revision restart \
```azurecli az containerapp revision restart ` --revision <REVISION_NAME> `
- --name <APPLICATION_NAME> `
--resource-group <RESOURCE_GROUP_NAME> ```
-As you interact with this example, replace the placeholders surrounded by `<>` with your values.
+## Revision set mode
-## Set active revision mode
+The revision mode controls whether only a single revision or multiple revisions of your container app can be simultaneously active. To set your container app to support [single revision mode](revisions.md#single-revision-mode) or [multiple revision mode](revisions.md#multiple-revision-mode), use the `az containerapp revision set-mode` command.
-Configure whether or not your container app supports multiple active revisions.
+The default setting is *single revision mode*. For more information about this command, see [`az containerapp revision set-mode`](/cli/azure/containerapp/revision#az-containerapp-revision-set-mode).
-The `activeRevisionsMode` property accepts two values:
+The mode values are `single` or `multiple`. Changing the revision mode doesn't create a new revision.
-- `multiple`: Configures the container app to allow more than one active revision.
+Example: (Replace the \<placeholders\> with your values.)
-- `single`: Automatically deactivates all other revisions when a revision is activated. Enabling `single` mode makes it so that when you create a revision-scope change and a new revision is created, any other revisions are automatically deactivated.
+# [Bash](#tab/bash)
-```json
-{
- ...
- "resources": [
- {
- ...
- "properties": {
- "configuration": {
- "activeRevisionsMode": "multiple"
- }
- }
- }]
-}
+```azurecli
+az containerapp revision set-mode \
+ --name <APPLICATION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --mode single
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision set-mode `
+ --name <APPLICATION_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --mode single
+```
+++
+## Revision labels
+
+Labels provide a unique URL that you can use to direct traffic to a revision. You can move a label between revisions to reroute traffic directed to the label's URL to a different revision. For more information about revision labels, see [Revision Labels](revisions.md#revision-labels).
+
+You can add and remove a label from a revision. For more information about the label commands, see [`az containerapp revision label`](/cli/azure/containerapp/revision/label).
+
+### Revision label add
+
+To add a label to a revision, use the [`az containerapp revision label add`](/cli/azure/containerapp/revision/label#az-containerapp-revision-label-add) command.
+
+You can only assign a label to one revision at a time, and a revision can only be assigned one label. If the revision you specify has a label, the add command will replace the existing label.
+
+This example adds a label to a revision: (Replace the \<placeholders\> with your values.)
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision label add \
+ --revision <REVISION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --label <LABEL_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision label add `
+    --revision <REVISION_NAME> `
+    --resource-group <RESOURCE_GROUP_NAME> `
+    --label <LABEL_NAME>
```
-The following configuration fragment shows how to set the `activeRevisionsMode` property. Changes made to this property require the context of the container app's full ARM template.
++
+### Revision label remove
+
+To remove a label from a revision, use the [`az containerapp revision label remove`](/cli/azure/containerapp/revision/label#az-containerapp-revision-label-remove) command.
+
+This example removes a label from a revision: (Replace the \<placeholders\> with your values.)
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision label remove \
+ --revision <REVISION_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --label <LABEL_NAME>
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision label remove `
+    --revision <REVISION_NAME> `
+    --resource-group <RESOURCE_GROUP_NAME> `
+    --label <LABEL_NAME>
+```
++ ## Traffic splitting Applied by assigning percentage values, you can decide how to balance traffic among different revisions. Traffic splitting rules are assigned by setting weights to different revisions.
-The following example shows how to split traffic between three revisions.
+The following example shows how to split traffic between three revisions.
```json {
Each revision gets traffic based on the following rules:
- 30% of the requests go to REVISION2 - 20% of the requests go to the latest revision
-The sum total of all revision weights must equal 100.
+The sum of all revision weights must equal 100.
-In this example, replace the `<REVISION*_NAME>` placeholders with revision names in your container app. You access revision names via the [list](#list) command.
+In this example, replace the `<REVISION*_NAME>` placeholders with revision names in your container app. You access revision names via the [revision list](#revision-list) command.
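+
+You can also assign weights from the Azure CLI with `az containerapp ingress traffic set`. The following is a minimal sketch; it assumes multiple revision mode, and the `latest=20` form for weighting the latest revision should be verified against the CLI reference:
+
+```azurecli
+az containerapp ingress traffic set \
+  --name <APPLICATION_NAME> \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --revision-weight <REVISION1_NAME>=50 <REVISION2_NAME>=30 latest=20
+```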
## Next steps
-> [!div class="nextstepaction"]
-> [Get started](get-started.md)
+* [Revisions in Azure Container Apps](revisions.md)
+* [Application lifecycle management in Azure Container Apps](application-lifecycle-management.md)
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
Budget alerts notify you when spending, based on usage or cost, reaches or excee
In the Azure portal, budgets are defined by cost. Using the Azure Consumption API, budgets are defined by cost or by consumption usage. Budget alerts support both cost-based and usage-based budgets. Budget alerts are generated automatically whenever the budget alert conditions are met. You can view all cost alerts in the Azure portal. Whenever an alert is generated, it's shown in cost alerts. An alert email is also sent to the people in the alert recipients list of the budget.
-If you have an Enterprise Agreement, you can [Create and edit budgets with PowerShell](tutorial-acm-create-budgets.md#create-and-edit-budgets-with-powershell). However, we recommend that you use REST APIs to create and edit budgets because CLI commands might not support the latest version of the APIs.
+If you have an Enterprise Agreement, you can [Create and edit budgets with PowerShell](tutorial-acm-create-budgets.md#create-and-edit-budgets-with-powershell). Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically.
You can use the Budget API to send email alerts in a different language. For more information, see [Supported locales for budget alert emails](manage-automation.md#supported-locales-for-budget-alert-emails).
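For orientation, a minimal sketch of the REST call might look like the following; the `api-version` and body shape here are assumptions based on the Consumption Budgets schema, so treat the linked API reference as authoritative:

```http
PUT https://management.azure.com/{scope}/providers/Microsoft.Consumption/budgets/{budgetName}?api-version=2021-10-01
Content-Type: application/json

{
  "properties": {
    "category": "Cost",
    "amount": 100.0,
    "timeGrain": "Monthly",
    "timePeriod": {
      "startDate": "2022-06-01T00:00:00Z",
      "endDate": "2023-06-01T00:00:00Z"
    },
    "notifications": {
      "actualGreaterThan80Percent": {
        "enabled": true,
        "operator": "GreaterThanOrEqualTo",
        "threshold": 80,
        "contactEmails": [ "admin@contoso.com" ]
      }
    }
  }
}
```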
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Budget integration with action groups works for action groups which have enabled
If you're an EA customer, you can create and edit budgets programmatically using the Azure PowerShell module. However, we recommend that you use REST APIs to create and edit budgets because CLI commands might not support the latest version of the APIs. > [!NOTE]
-> Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically because PowerShell and CLI aren't yet supported.
+> Customers with a Microsoft Customer Agreement should use the [Budgets REST API](/rest/api/consumption/budgets/create-or-update) to create budgets programmatically.
To download the latest version of Azure PowerShell, run the following command:
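The command itself is elided in this diff; presumably it's the standard Az module install, along the lines of:

```azurepowershell
Install-Module -Name Az -Repository PSGallery -Force
```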
cost-management-billing Ea Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md
Previously updated : 06/07/2022 Last updated : 06/08/2022
Some third-party reseller services available on Azure Marketplace now consume yo
### Partners > [!NOTE]
-> The Azure Marketplace price list feature in the EA portal is retired. The same feature is available in the Azure portal.
+> The Azure Marketplace price list feature in the EA portal is retired.
LSPs can download an Azure Marketplace price list from the price sheet page in the Azure Enterprise portal. Select the **Marketplace Price list** link in the upper right. Azure Marketplace price list shows all available services and their prices.
The following services are billed hourly under an Enterprise Agreement instead o
### Azure RemoteApp
-If you have an Enterprise Agreement, you pay for Azure RemoteApp based on your Enterprise Agreement price level. There aren't additional charges. The standard price includes an initial 40 hours. The unlimited price covers an initial 80 hours. RemoteApp stops emitting usage over 80 hours.
+If you have an Enterprise Agreement, you pay for Azure RemoteApp based on your Enterprise Agreement price level. There aren't extra charges. The standard price includes an initial 40 hours. The unlimited price covers an initial 80 hours. RemoteApp stops emitting usage over 80 hours.
## Next steps
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 04/26/2022 Last updated : 06/08/2022 # View and download your Microsoft Azure invoice
-You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Invoices are sent to the person set to receive invoices for the enrollment.
+You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. Invoices are sent to the person set to receive invoices for the enrollment.
-## When invoices are generated
+If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Direct EA administrators can [Download or view their Azure billing invoice](../manage/direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice). Indirect EA administrators can use the information at [Azure Enterprise enrollment invoices](../manage/ea-portal-enrollment-invoices.md) to download their invoice.
-An invoice is generated based on your billing account type. Invoices are created for Microsoft Online Service Program (MOSP) also called pay-as-you-go, Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) billing accounts. Invoices are also generated for Enterprise Agreement (EA) billing accounts. However, invoices for EA billing accounts aren't shown in the Azure portal.
+## Where invoices are generated
+
+An invoice is generated based on your billing account type. Invoices are created for Microsoft Online Service Program (MOSP), also called pay-as-you-go, Microsoft Customer Agreement (MCA), and Microsoft Partner Agreement (MPA) billing accounts. Invoices are also generated for Enterprise Agreement (EA) billing accounts.
To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](../manage/view-all-accounts.md).
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md
Previously updated : 09/09/2021 Last updated : 06/07/2022 # Copy data from DB2 using Azure Data Factory or Synapse Analytics
Typical properties inside the connection string:
| certificateCommonName | When you use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encryption, you must enter a value for Certificate common name. | No | > [!TIP]
-> If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is a needed package is not created for the user. By default, the service will try to create the package under the collection named as the user you used to connect to the DB2. Specify the package collection property to indicate under where you want the service to create the needed packages when querying the database.
+> If you receive an error message that states `The package corresponding to an SQL statement execution request was not found. SQLSTATE=51002 SQLCODE=-805`, the reason is that a required package wasn't created for the user. By default, the service tries to create the package under a collection named after the user you used to connect to DB2. Specify the package collection property to indicate where you want the service to create the needed packages when querying the database. If you can't determine the package collection name, try setting `packageCollection=NULLID`.
**Example:**
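The example body is elided in this diff. A minimal linked service sketch combining the properties described above might look like this (all angle-bracket values are placeholders, and `packageCollection` uses the `NULLID` fallback from the tip; see the connector article for the authoritative example):

```json
{
    "name": "Db2LinkedService",
    "properties": {
        "type": "Db2",
        "typeProperties": {
            "connectionString": "server=<server>:<port>;database=<database>;authenticationType=Basic;username=<username>;password=<password>;packageCollection=NULLID;certificateCommonName=<certificate common name>"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```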
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 04/01/2022 Last updated : 06/07/2022
The following sections provide details about properties you can use to define Da
## Linked service properties
-> [!Important]
-> Due to Azure service security and compliance request, system-assigned managed identity authentication is no longer available in REST connector for both Copy and Mapping data flow. You are recommended to migrate existing linked services that use system-managed identity authentication to user-assigned managed identity authentication or other authentication types. Please make sure the migration to be done by **September 15, 2022**. For more detailed steps about how to create, manage user-assigned managed identities, refer to [this](data-factory-service-identity.md#user-assigned-managed-identity).
- The following properties are supported for the REST linked service: | Property | Description | Required |
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-lookup-activity.md
Previously updated : 04/06/2022 Last updated : 05/31/2022 # Lookup activity in Azure Data Factory and Azure Synapse Analytics
databox-online Azure Stack Edge Gpu 2205 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2205-release-notes.md
The following table provides a summary of known issues carried over from the pre
|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | |**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available one GPU. | |**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> - Connect to the Windows VM using remote desktop protocol (RDP). <br> - Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> - If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> - While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> - After you kill the process, the process starts running again with the newer version. <br> - Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> - [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
-|**24.**|GPU VMs |Prior to this release, GPU VM lifecycle wasn't managed in the update flow. Hence, when updating to 2103 release, GPU VMs aren't stopped automatically during the update. You'll need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update, are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` before the update, are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in stopped state, higher the chances that Kubernetes will take over the GPUs. |
-|**25.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
-|**26.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
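Consolidating the item 23 workaround into a single sequence (a sketch only; `Stop-Process` is one way to kill the process the table mentions, not necessarily the method the product team intends):

```azurepowershell
Get-Process WaAppAgent                      # verify the Windows VM Guest Agent host process is running
Get-Service RdAgent | Restart-Service       # if it isn't, restart RdAgent and wait about 5 minutes
Stop-Process -Name WindowsAzureGuest -Force # kill WindowsAzureGuest.exe; it restarts at the newer version
Get-Process WindowsAzureGuestAgent | Format-List ProductVersion  # confirm version 2.7.41491.971
```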
## Next steps
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 03/23/2022 Last updated : 06/07/2022 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest update
-The current update is Update 2203. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
+The current update is Update 2205. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
-- Device software version - **2.2.1902.4561**
+- Device software version - **2.2.1983.5094**
- Kubernetes server version - **v1.21.7** - IoT Edge version: **0.1.0-beta15**-- Azure Arc version: **1.5.3**-- GPU driver version: **470.57.02**-- CUDA version: **11.4**
+- Azure Arc version: **1.6.6**
+- GPU driver version: **510.47.03**
+- CUDA version: **11.6**
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2203-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2205-release-notes.md).
-**To apply 2203 update, your device must be running 2106 or later.**
+**To apply 2205 update, your device must be running 2106 or later.**
- If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*. -- You can update to 2106 from an older version and then install 2203.
+- You can update to 2106 from an older version and then install 2205.
### Updates for a single-node vs two-node
Do the following steps to download the update from the Microsoft Update Catalog.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
-4. Select **Download**. There are two packages to download for the update. The first package will have two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has two files for the Kubernetes updates (*Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+4. Select **Download**. There are two packages to download for the update. The first package has two files for the device software updates (*SoftwareUpdatePackage.0.exe*, *SoftwareUpdatePackage.1.exe*) and the second package has three files for the Kubernetes updates (*Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe*). Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
### Install the update or the hotfix
This procedure takes around 20 minutes to complete. Perform the following steps
6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2205**.
-7. You will now update the Kubernetes software version. Select the remaining two Kubernetes files together (file with the *Kubernetes_Package.0.exe* and *Kubernetes_Package.1.exe* suffix) and repeat the above steps to apply update.
+7. You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (the files with the *Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe* suffixes) and repeat the above steps to apply the update.
![Screenshot of files selected for the Kubernetes update.](./media/azure-stack-edge-gpu-install-update/local-ui-update-7.png)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
|--|--|:-:|--| | **Attempt to create a new Linux namespace from a container detected (Preview)**<br>(K8S.NODE_NamespaceCreation) | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium | | **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
-| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container indicates that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
+| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | | **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
-| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
+| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
-| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of host/device data detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
-| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container detected execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
-| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
+| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
+| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
+| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container or directly on a Kubernetes node, has detected execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
-| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low |
+| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium | | **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low | | **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate; however, attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gaining sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
-| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
-| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
-| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low |
-| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container detected suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
-| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium |
+| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
+| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
+| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
+| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
+| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium |
| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
-| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High |
+| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low | | **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
-| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium |
-| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
-| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
+| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised system. | Execution | Medium |
+| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
+| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon by anyone with access to the relevant port. | Execution, Exploitation | Medium |
| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium | | **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High | | **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium | | **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
-| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[2](#footnote2)</sup> | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
+| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[2](#footnote2)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low | | **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low | | **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
-| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
-| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
+| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
+| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicate that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
+| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low | | **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
-| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
-| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
-| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container indicates a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
-| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container detected the download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
-| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
-| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
-| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
-| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container detected common files as a way to obfuscate their actions or for persistence. | Persistence | Medium |
-| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container detected the initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
-| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
+| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
+| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
+| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
+| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
+| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container or directly on a Kubernetes node, has detected download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
+| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
+| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible removal of files that track the user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
+| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
+| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an override for common files as a way to obfuscate actions or for persistence. | Persistence | Medium |
+| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
+| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
-| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
+| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
| **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low | | **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low |
-| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
-| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
+| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a screen capture tool. This isn't a common usage scenario for containers and could be part of an attacker's attempt to access private data. | Collection | Low |
+| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
+| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
| **SSH server is running inside a container (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
-| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container detected suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
-| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of host/device data detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
+| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
+| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium | | **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
-| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
-| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational |
-| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
-| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
+| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
+| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious attempt to access encrypted user passwords. | Persistence | Informational |
+| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
+| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, security alerts that are based on Kubernetes audit events aren't supported for GKE clusters.
defender-for-cloud Quickstart Enable Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-defender-for-cosmos.md
Title: Enable Microsoft Defender for Azure Cosmos DB
description: Learn how to enable Microsoft Defender for Azure Cosmos DB's enhanced security features. Previously updated : 02/28/2022 Last updated : 06/07/2022 # Quickstart: Enable Microsoft Defender for Azure Cosmos DB
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
### [ARM template](#tab/arm-template)
-Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/?term=cosmosdb-advanced-threat-protection-create-account).
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
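As a minimal sketch, the same template can be deployed with the Azure CLI. The template URI and the `accountName` parameter below are illustrative assumptions; take the exact values from the linked template page:

```azurecli
# Deploy a quickstart ARM template that creates an Azure Cosmos DB account
# with Microsoft Defender for Azure Cosmos DB enabled. The template URI and
# parameter name are assumptions; confirm them on the linked template page.
az deployment group create \
  --resource-group MyResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.documentdb/microsoft-defender-cosmosdb-create-account/azuredeploy.json" \
  --parameters accountName=mycosmosdbaccount
```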
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment.-- Previously updated : 04/28/2022 Last updated : 06/08/2022
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers Plan 2 |
-| Vulnerability Assessment | Registry scan | ACR, Private ACR | Preview | Γ£ô (Preview) | Agentless | Defender for Containers |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Γ£ô (Preview) | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers | | Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors. Previously updated : 11/09/2021 Last updated : 06/06/2022
If you forgot your password, select the **Recover Password** option. See [Passwo
## Activate the on-premises management console
-After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file.
+After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file. Activation files on the on-premises management console enforce the number of committed devices configured for your subscription and Defender for IoT plan. For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
-To activate the on-premises management console:
+**To activate the on-premises management console**:
1. Sign in to the on-premises management console.
After initial activation, the number of monitored devices might exceed the numbe
If this warning appears, you need to upload a [new activation file](#activate-the-on-premises-management-console).
-### Activate an expired license (versions under 10.0)
+### Activation expirations
+
+After activating an on-premises management console, you'll need to apply new activation files on both the on-premises management console and connected sensors as follows:
+
+|Location |Activation process |
+|||
+|**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#update-committed-devices-in-a-subscription) in your subscription. |
+|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](how-to-manage-individual-sensors.md#download-a-new-activation-file-for-version-221x-or-higher) from a legacy version to version 22.2.x. |
+| **Locally-managed sensors** | Apply a new activation file to locally-managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+
+For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
+
+### Activate expired licenses from versions earlier than 10.0
If you're using a version prior to 10.0, your license might expire, and the following alert appears: :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="Screenshot that shows the License has expired alert.":::
-To activate your license:
+**To activate your license**:
1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Title: Activate and set up your sensor description: This article describes how to sign in and activate a sensor console. Previously updated : 11/09/2021 Last updated : 06/06/2022
You might need to refresh your screen after uploading the CA-signed certificate.
For information about uploading a new certificate, supported certificate parameters, and working with CLI certificate commands, see [Manage individual sensors](how-to-manage-individual-sensors.md).
+### Activation expirations
+
+After activating a sensor, you'll need to apply new activation files as follows:
+
+|Location |Activation process |
+|||
+|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](how-to-manage-individual-sensors.md#download-a-new-activation-file-for-version-221x-or-higher) from a legacy version to version 22.2.x. |
+| **Locally-managed sensors** | Apply a new activation file to locally-managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+
+For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
+ ### Activate an expired license (versions under 10.0)
digital-twins How To Monitor Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-diagnostics.md
Here are the field and property descriptions for API logs.
| `ResultDescription` | String | Additional details about the event | | `DurationMs` | String | How long it took to perform the event in milliseconds | | `CallerIpAddress` | String | A masked source IP address for the event |
-| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `CorrelationId` | Guid | Unique identifier for the event |
| `ApplicationId` | Guid | Application ID used in bearer authorization | | `Level` | Int | The logging severity of the event | | `Location` | String | The region where the event took place |
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
A verified partner is a partner organization whose identity has been validated b
Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer's Azure subscription, and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest to customers an adequate window of time for configuring your system to send or receive events and for creating the channel before the authorization expires. If you attempt to create a channel without authorization, or after the authorization has expired, the channel creation fails and no resource is created on the customer's Azure subscription. > [!NOTE]
-> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 15th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
>[!IMPORTANT] > **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 15th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 15th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
+> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 30th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 30th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples include a sample expiration time in the UTC format.
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
Last updated 03/31/2022
Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release.
+## REST API version 2021-12
+This release corresponds to REST API version 2021-12-01, which includes the following features:
+
+- [Enable managed identities for system topics](enable-identity-system-topics.md)
+- [Enable managed identities for custom topics and domains](enable-identity-custom-topics-domains.md)
+- [Use managed identities to deliver events to destinations](add-identity-roles.md)
+- [Support for delivery attributes](delivery-properties.md)
+- [Storage queue - message time-to-live (TTL)](delivery-properties.md#configure-time-to-live-on-outgoing-events-to-azure-storage-queues)
+- [Azure Active Directory authentication for topics and domains, and partner namespaces](authenticate-with-active-directory.md)
+ ## REST API version 2021-10 This release corresponds to REST API version 2021-10-15-preview, which includes the following features:
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti
### FastPath and Private Link for 100Gbps ExpressRoute Direct
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypassess the ExpressRoute virtual network gateway in the data path. This is supported for connections associated to 100Gb ExpressRoute Direct circuits. To enable this, follow the below guidance:
+With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This capability is Generally Available for connections associated with 100-Gbps ExpressRoute Direct circuits. To enable this feature, follow the guidance below:
1. Send an email to **ERFastPathPL@microsoft.com**, providing the following information: * Azure Subscription ID * Virtual Network (Vnet) Resource ID
firewall Tutorial Hybrid Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal-policy.md
Previously updated : 08/26/2021 Last updated : 06/08/2022 #Customer intent: As an administrator, I want to control network access from an on-premises network to an Azure virtual network.
If you don't have an Azure subscription, create a [free account](https://azure.m
First, create the resource group to contain the resources for this tutorial: 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal home page, select **Resource groups** > **Add**.
+2. On the Azure portal home page, select **Resource groups** > **Create**.
3. For **Subscription**, select your subscription. 1. For **Resource group name**, type **FW-Hybrid-Test**. 2. For **Region**, select **(US) East US**. All resources that you create later must be in the same location.
Now, create the VNet:
1. From the Azure portal home page, select **Create a resource**. 2. In **Networking**, select **Virtual network**.
-7. For **Resource group**, select **FW-Hybrid-Test**.
+1. Select **Create**.
+1. For **Resource group**, select **FW-Hybrid-Test**.
1. For **Name**, type **VNet-Spoke**.
-2. For **Region**, select **(US) East US**.
-3. Select **Next: IP Addresses**.
-4. For **IPv4 address space**, delete the default address and type **10.6.0.0/16**.
-6. Under **Subnet name**, select **Add subnet**.
-7. For **Subnet name** type **SN-Workload**.
-8. For **Subnet address range**, type **10.6.0.0/24**.
-9. Select **Add**.
-10. Select **Review + create**.
-11. Select **Create**.
+1. For **Region**, select **(US) East US**.
+1. Select **Next: IP Addresses**.
+1. For **IPv4 address space**, delete the default address and type **10.6.0.0/16**.
+1. Under **Subnet name**, select **Add subnet**.
+1. For **Subnet name** type **SN-Workload**.
+1. For **Subnet address range**, type **10.6.0.0/24**.
+1. Select **Add**.
+1. Select **Review + create**.
+1. Select **Create**.
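For reference, here's a hedged Azure CLI equivalent of the portal steps above, using the names and address prefixes from this tutorial:

```azurecli
# CLI equivalent of the portal steps: create the spoke virtual network
# and its workload subnet in the tutorial's resource group.
az network vnet create \
  --resource-group FW-Hybrid-Test \
  --name VNet-Spoke \
  --location eastus \
  --address-prefixes 10.6.0.0/16 \
  --subnet-name SN-Workload \
  --subnet-prefixes 10.6.0.0/24
```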
## Create the on-premises virtual network
Now create a second subnet for the gateway.
2. Select **+Subnet**. 3. For **Name**, type **GatewaySubnet**. 4. For **Subnet address range** type **192.168.2.0/24**.
-5. Select **OK**.
+5. Select **Save**.
## Configure and deploy the firewall Now deploy the firewall into the firewall hub virtual network. 1. From the Azure portal home page, select **Create a resource**.
-2. In the left column, select **Networking**, and search for and then select **Firewall**.
+2. In the left column, select **Networking**, search for and select **Firewall**, and then select **Create**.
4. On the **Create a Firewall** page, use the following table to configure the firewall: |Setting |Value |
Now deploy the firewall into the firewall hub virtual network.
|Resource group |**FW-Hybrid-Test** | |Name |**AzFW01**| |Region |**East US**|
+ |Firewall tier|**Standard**|
|Firewall management|**Use a Firewall Policy to manage this firewall**| |Firewall policy|Add new:<br>**hybrid-test-pol**<br>**East US** |Choose a virtual network |Use existing:<br> **VNet-hub**|
- |Public IP address |Add new: <br>**fw-pip**. |
+ |Public IP address |Add new: <br>**fw-pip** |
5. Select **Review + create**.
Next, create a couple routes:
12. Select **Routes** in the left column. 13. Select **Add**. 14. For the route name, type **ToSpoke**.
-15. For the address prefix, type **10.6.0.0/16**.
-16. For next hop type, select **Virtual appliance**.
-17. For next hop address, type the firewall's private IP address that you noted earlier.
-18. Select **OK**.
+1. For the **Address prefix destination**, select **IP Addresses**.
+1. For the **Destination IP addresses/CIDR ranges**, type **10.6.0.0/16**.
+1. For next hop type, select **Virtual appliance**.
+1. For next hop address, type the firewall's private IP address that you noted earlier.
+1. Select **Add**.
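If you prefer the CLI, here's a hedged equivalent of the **ToSpoke** route; the route table name is a placeholder for the table created earlier in the tutorial:

```azurecli
# Send spoke-bound traffic through the firewall as a virtual appliance.
# Replace the placeholders with your route table name and the firewall's
# private IP address noted earlier.
az network route-table route create \
  --resource-group FW-Hybrid-Test \
  --route-table-name {your hub route table} \
  --name ToSpoke \
  --address-prefix 10.6.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address {firewall private IP}
```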
Now associate the route to the subnet.
Now create the default route from the spoke subnet.
8. Select **Routes** in the left column. 9. Select **Add**. 10. For the route name, type **ToHub**.
-11. For the address prefix, type **0.0.0.0/0**.
-12. For next hop type, select **Virtual appliance**.
-13. For next hop address, type the firewall's private IP address that you noted earlier.
-14. Select **OK**.
+1. For the **Address prefix destination**, select **IP Addresses**.
+1. For the **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
+1. For next hop type, select **Virtual appliance**.
+1. For next hop address, type the firewall's private IP address that you noted earlier.
+1. Select **Add**.
Now associate the route to the subnet.
Now create the spoke workload and on-premises virtual machines, and place them i
Create a virtual machine in the spoke virtual network, running IIS, with no public IP address. 1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
3. Enter these values for the virtual machine:
- - **Resource group** - Select **FW-Hybrid-Test**.
- - **Virtual machine name**: *VM-Spoke-01*.
- - **Region** - Same region that you're used previously.
- - **User name**: \<type a user name\>.
+ - **Resource group** - Select **FW-Hybrid-Test**
+ - **Virtual machine name**: *VM-Spoke-01*
+ - **Region** - Same region that you used previously
+ - **User name**: \<type a user name\>
- **Password**: \<type a password\>
-4. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)**, and **RDP (3389)**
+4. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)**, and **RDP (3389)**.
4. Select **Next: Disks**. 5. Accept the defaults and select **Next: Networking**. 6. Select **VNet-Spoke** for the virtual network and **SN-Workload** for the subnet.
Create a virtual machine in the spoke virtual network, running IIS, with no publ
### Install IIS
+After the virtual machine is created, install IIS.
+ 1. From the Azure portal, open the Cloud Shell and make sure that it's set to **PowerShell**. 2. Run the following command to install IIS on the virtual machine and change the location if necessary:
Create a virtual machine in the spoke virtual network, running IIS, with no publ
This is the virtual machine that you connect to by using Remote Desktop to the public IP address. From there, you connect to the on-premises server through the firewall. 1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
3. Enter these values for the virtual machine: - **Resource group** - Select existing, and then select **FW-Hybrid-Test**. - **Virtual machine name** - *VM-Onprem*.
This is a virtual machine that you use to connect using Remote Desktop to the pu
1. First, note the private IP address for **VM-spoke-01** virtual machine. 2. From the Azure portal, connect to the **VM-Onprem** virtual machine.
-<!2. Open a Windows PowerShell command prompt on **VM-Onprem**, and ping the private IP for **VM-spoke-01**.
- You should get a reply.>
3. Open a web browser on **VM-Onprem**, and browse to http://\<VM-spoke-01 private IP\>. You should see the **VM-spoke-01** web page:
This is a virtual machine that you use to connect using Remote Desktop to the pu
So now you've verified that the firewall rules are working:
-<!- You can ping the server on the spoke VNet.>
- You can browse the web server on the spoke virtual network. - You can connect to the server on the spoke virtual network using RDP.
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In PowerShell, run the following command: ```azurepowershell-interactive
- New-AzADServicePrincipal -ApplicationId 'ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037' -Role Contributor
+ New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037"
``` ##### Azure CLI
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- SP_ID=$(az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 --query objectId -o tsv)
- az role assignment create --assignee $SP_ID --role Contributor
+ az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037
``` #### Grant Azure Front Door access to your key vault
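As a minimal sketch of that grant, assuming the vault uses access policies (the vault name is a placeholder, and the `get` permissions shown are an assumption about what certificate retrieval requires):

```azurecli
# Allow the Azure Front Door service principal to read the certificate and
# its secret from your key vault. The vault name is a placeholder, and the
# permission set is an assumption; check the article's key vault steps.
az keyvault set-policy \
  --name {your key vault name} \
  --spn ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 \
  --certificate-permissions get \
  --secret-permissions get
```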
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- SP_ID=$(az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --query objectId -o tsv)
- az role assignment create --assignee $SP_ID --role Contributor
+ az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8
``` #### Grant Azure Front Door access to your key vault
hdinsight Apache Hadoop Connect Hive Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md
description: Use the JDBC driver from a Java application to submit Apache Hive q
Previously updated : 04/20/2020 Last updated : 06/08/2022 # Query Apache Hive through the JDBC driver in HDInsight
hdinsight Hdinsight Hadoop Create Linux Clusters With Secure Transfer Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md
description: Learn how to create HDInsight clusters with secure transfer enabled
Previously updated : 02/18/2020 Last updated : 06/08/2022 # Apache Hadoop clusters with secure transfer storage accounts in Azure HDInsight
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
description: Add custom components to HDInsight clusters by using script actions
Previously updated : 03/09/2021 Last updated : 06/08/2022 # Customize Azure HDInsight clusters by using script actions
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-private-link.md
Title: Enable Private Link on an Azure HDInsight cluster
description: Learn how to connect to an outside HDInsight cluster by using Azure Private Link. Previously updated : 10/15/2020 Last updated : 06/08/2022 # Enable Private Link on an HDInsight cluster
hdinsight Hdinsight Use External Metadata Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-external-metadata-stores.md
description: Use external metadata stores with Azure HDInsight clusters.
Previously updated : 05/05/2022 Last updated : 06/08/2022 # Use external metadata stores in Azure HDInsight
+> [!IMPORTANT]
+> The default metastore provides a basic tier Azure SQL Database with only **5 DTU and 2 GB data max size (NOT UPGRADEABLE)**! Use this for QA and testing purposes only. **For production or large workloads, we recommend migrating to an external metastore!**
+ HDInsight allows you to take control of your data and metadata with external data stores. This feature is available for [Apache Hive metastore](#custom-metastore), [Apache Oozie metastore](#apache-oozie-metastore), and [Apache Ambari database](#custom-ambari-db). The Apache Hive metastore in HDInsight is an essential part of the Apache Hadoop architecture. A metastore is the central schema repository. The metastore is used by other big data access tools such as Apache Spark, Interactive Query (LLAP), Presto, or Apache Pig. HDInsight uses an Azure SQL Database as the Hive metastore.
There are two ways you can set up a metastore for your HDInsight clusters:
## Default metastore
-> [!IMPORTANT]
-> The default metastore provides a basic tier Azure SQL Database with only **5 DTU and 2 GB data max size (NOT UPGRADEABLE)**! Use this for QA and testing purposes only. **For production or large workloads, we recommend migrating to an external metastore!**
- By default, HDInsight creates a metastore with every cluster type. You can instead specify a custom metastore. The default metastore includes the following considerations:
+* Limited resources. See the notice at the top of the page.
+ * No additional cost. HDInsight creates a metastore with every cluster type without any additional cost to you.
-* Each default metastore is part of the cluster lifecycle. When you delete a cluster, the corresponding metastore and metadata are also deleted.
+* The default metastore is part of the cluster lifecycle. When you delete a cluster, the corresponding metastore and metadata are also deleted.
-* You can't share the default metastore with other clusters.
+* The default metastore is recommended only for simple workloads that don't require multiple clusters and don't need metadata preserved beyond the cluster's lifecycle.
-* Default metastore is recommended only for simple workloads. Workloads that don't require multiple clusters and don't need metadata preserved beyond the cluster's lifecycle.
+* The default metastore can't be shared with other clusters.
## Custom metastore
hdinsight Apache Kafka Connect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connect-vpn-gateway.md
To validate connectivity to Kafka, use the following steps to create and run a P
* If you have __enabled name resolution through a custom DNS server__, replace the `kafka_broker` entries with the FQDN of the worker nodes. > [!NOTE]
- > This code sends the string `test message` to the topic `testtopic`. The default configuration of Kafka on HDInsight is to create the topic if it does not exist.
-
+ > This code sends the string `test message` to the topic `testtopic`. By default, Kafka on HDInsight doesn't create the topic if it doesn't exist. See [How to configure Apache Kafka on HDInsight to automatically create topics](./apache-kafka-auto-create-topics.md). Alternatively, you can create topics manually before producing messages, as shown in the example that follows.
+
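For example, here's a sketch of creating the topic manually from an SSH session on a cluster head node. The script path and the `$KAFKAZKHOSTS` variable are assumptions based on the typical HDInsight Kafka layout, so verify them on your cluster:

```bash
# Create 'testtopic' before producing messages. The path and $KAFKAZKHOSTS
# are assumptions based on a typical HDInsight Kafka cluster.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
    --topic testtopic \
    --zookeeper $KAFKAZKHOSTS \
    --replication-factor 3 \
    --partitions 8
```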
4. To retrieve the messages from Kafka, use the following Python code: ```python
hdinsight Apache Spark Load Data Run Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-load-data-run-query.md
description: Tutorial - Learn how to load data and run interactive queries on Sp
Previously updated : 02/12/2020 Last updated : 06/08/2022 # Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to load data into a Spark cluster, so I can run interactive SQL queries against the data.
hdinsight Apache Spark Troubleshoot Application Stops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-application-stops.md
Title: Apache Spark Streaming application stops after 24 days in Azure HDInsight
description: An Apache Spark Streaming application stops after executing for 24 days and there are no errors in the log files. Previously updated : 07/29/2019 Last updated : 06/08/2022 # Scenario: Apache Spark Streaming application stops after executing for 24 days in Azure HDInsight
Replace `<yourclustername>` with the name of your HDInsight cluster as shown in
## Next steps
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Using the REST API:
1. Use the REST API to retrieve a list of role IDs from your application: ```http
- GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
+ GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
``` The response to this request looks like the following example:
Using the REST API:
1. Use the REST API to create an API token for a role. For example, to create an API token called `operator-token` for the operator role: ```http
- PUT https://{your app subdomain}.azureiotcentral.com/api/apiToken/operator-token?api-version=1.0
+ PUT https://{your app subdomain}.azureiotcentral.com/api/apiToken/operator-token?api-version=2022-05-31
``` Request body:
Using the REST API:
You can use the REST API to list and delete API tokens in an application.
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) includes support for the new [organizations feature](howto-create-organizations.md).
- ## Use a bearer token To use a bearer token when you make a REST API call, your authorization header looks like the following example:
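Here's a sketch using curl; the token value and app subdomain are placeholders:

```bash
# Pass the bearer token in the Authorization header (placeholders shown).
curl -H "Authorization: Bearer {your bearer token}" \
  "https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31"
```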
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/devices) includes support for the new [organizations feature](howto-create-organizations.md).
- [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] ## Components and modules
In IoT Central, a module refers to an IoT Edge module running on a connected IoT
Use the following request to retrieve the components from a device called `temperature-controller-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=2022-05-31
``` The response to this request looks like the following example. The `value` array contains details of each device component:
The response to this request looks like the following example. The `value` array
Use the following request to retrieve a list of modules running on a connected IoT Edge device called `environmental-sensor-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-05-31
``` The response to this request looks like the following example. The array of modules only includes custom modules running on the IoT Edge device, not the built-in `$edgeAgent` and `$edgeHub` modules:
The response to this request looks like the following example. The array of modu
Use the following request to retrieve a list of the components in a module called `SimulatedTemperatureSensor`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=2022-05-31
``` ## Read telemetry
GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-s
Use the following request to retrieve the last known telemetry value from a device that doesn't use components. In this example, the device is called `thermostat-01` and the telemetry is called `temperature`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the last known telemetry value from a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the telemetry is called `temperature`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to retrieve the last known telemetry value from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor` and telemetry called `ambient`. The `ambient` telemetry type has temperature and humidity values: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the property values from a device that doesn't use components. In this example, the device is called `thermostat-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-05-31
``` The response to this request looks like the following example. It shows the device is reporting a single property value:
The response to this request looks like the following example. It shows the devi
Use the following request to retrieve property values from all components. In this example, the device is called `temperature-controller-01`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a property value from an individual component. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to retrieve property values from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor`: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-05-31
``` The response to this request looks like the following example:
Some properties are writable. For example, in the thermostat model the `targetTe
Use the following request to write an individual property value to a device that doesn't use components. In this example, the device is called `thermostat-01`: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=1.0
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
Use the following request to write an individual property value to a device that does use components. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=1.0
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
If the device is an IoT Edge device, use the following request to write an individual property value to a module. This example uses a device called `environmental-sensor-01`, a module called `SimulatedTemperatureSensor`, and a property called `SendInterval`: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=1.0
+PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
If you're using an IoT Edge device, use the following request to retrieve property values from a module: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/properties?api-version=2022-05-31
``` If you're using an IoT Edge device, use the following request to retrieve property values from a component in a module: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/components/{componentName}/properties?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}/modules/{moduleName}/components/{componentName}/properties?api-version=2022-05-31
``` ## Call commands
You can use the REST API to call device commands and retrieve the device history
Use the following request to call a command on a device that doesn't use components. In this example, the device is called `thermostat-01` and the command is called `getMaxMinReport`: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=1.0
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-05-31
``` The request body looks like the following example:
The response to this request looks like the following example:
To view the history for this command, use the following request: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to call a command on a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the command is called `getMaxMinReport`: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=1.0
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-05-31
``` The formats of the request payload and response are the same as for a device that doesn't use components.
The formats of the request payload and response are the same as for a device tha
To view the history for this command, use the following request: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=2022-05-31
``` > [!TIP]
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
+
+ Title: Integrate Azure IoT Central with CI/CD | Microsoft Docs
+description: Describes how to integrate IoT Central into a pipeline created with Azure Pipelines.
++ Last updated : 05/27/2022+++
+# Integrate IoT Central with Azure Pipelines for CI/CD
+
+## Overview
+
+Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. This article shows you how to automate the build, test, and deployment of IoT Central application configuration, to enable development teams to deliver reliable releases more frequently.
+
+Continuous integration starts with a commit of your code to a branch in a source code repository. Each commit is merged with commits from other developers to ensure that no conflicts are introduced. Changes are further validated by creating a build and running automated tests against that build. This process ultimately results in an artifact, or deployment bundle, to deploy to a target environment, in this case an Azure IoT Central application.
+
+Just as IoT Central is a part of your larger IoT solution, IoT Central is a part of your CI/CD pipeline. Your CI/CD pipeline should deploy your entire IoT solution and all configurations to each environment from development through to production:
++
+IoT Central is an *application platform as a service* that has different deployment requirements from *platform as a service* components. For IoT Central, you deploy configurations and device templates. These configurations and device templates are managed and integrated into your release pipeline by using APIs.
+
+While it's possible to automate IoT Central app creation, you should create an app in each environment before you develop your CI/CD pipeline.
+
+By using the Azure IoT Central REST API, you can integrate IoT Central app configurations into your release pipeline.
+
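For a flavor of those API calls, here's a sketch that lists the device templates in an app; the subdomain and API token are placeholders, and `deviceTemplates` is one of the data plane endpoints the configuration scripts work with:

```bash
# List device templates in an IoT Central application (sketch).
# The IoT Central API token goes directly into the Authorization header.
curl -H "Authorization: {your API token}" \
  "https://{your app subdomain}.azureiotcentral.com/api/deviceTemplates?api-version=2022-05-31"
```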
+This guide walks you through the creation of a new pipeline that updates an IoT Central application based on configuration files managed in GitHub. This guide has specific instructions for integrating with [Azure Pipelines](/azure/devops/pipelines/?view=azure-devops&preserve-view=true), but could be adapted to include IoT Central in any release pipeline built using tools such as Tekton, Jenkins, GitLab, or GitHub Actions.
+
+In this guide, you create a pipeline that only applies an IoT Central configuration to a single instance of an IoT Central application. You should integrate the steps into a larger pipeline that deploys your entire solution and promotes it from *development* to *QA* to *pre-production* to *production*, performing all necessary testing along the way.
+
+The scripts currently don't transfer the following settings between IoT Central instances: dashboards, views, custom settings in device templates, pricing plan, UX customizations, application image, rules, scheduled jobs, saved jobs, and enrollment groups.
+
+The scripts currently don't remove settings from the target IoT Central application that aren't present in the configuration file.
+
+## Prerequisites
+
+You need the following prerequisites to complete the steps in this guide:
+
+- Two IoT Central applications - one for your development environment and one for your production environment. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
+- Two Azure Key Vaults - one for your development environment and one for your production environment. It's best practice to have a dedicated Key Vault for each environment. To learn more, see [Create an Azure Key Vault with the Azure portal](../../key-vault/general/quick-create-portal.md).
+- A [GitHub](https://github.com/) account.
+- An Azure DevOps organization. To learn more, see [Create an Azure DevOps organization](/devops/organizations/accounts/create-organization?view=azure-devops&preserve-view=true).
+- PowerShell 7 for Windows, Mac or Linux. [Get PowerShell](/powershell/scripting/install/installing-powershell).
+- Azure Az PowerShell module installed in your PowerShell 7 environment. To learn more, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+- Visual Studio Code or another tool to edit PowerShell and JSON files. [Get Visual Studio Code](https://code.visualstudio.com/Download).
+- Git client. Download the latest version from [Git - Downloads (git-scm.com)](https://git-scm.com/downloads).
++
+## Download the sample code
+
+To get started, fork the IoT Central CI/CD GitHub repository and then clone your fork to your local machine:
+
+1. To fork the GitHub repository, open the [IoT Central CI/CD GitHub repository](https://github.com/Azure/iot-central-CICD-sample) and select **Fork**.
+
+1. Clone your fork of the repository to your local machine by opening a console or bash window and running the following command.
+
+ ```cmd\bash
+ git clone https://github.com/{your GitHub username}/iot-central-CICD-sample
+ ```
+
+## Create a service principal
+
+While Azure Pipelines can integrate directly with a key vault, your pipeline needs a service principal for some of the dynamic key vault interactions such as fetching secrets for data export destinations.
+
+To create a service principal scoped to your subscription:
+
+1. Run the following command to create a new service principal:
+
+ ```azurecli
+ az ad sp create-for-rbac -n DevOpsAccess --scopes /subscriptions/{your Azure subscription Id} --role Contributor
+ ```
+
+1. Make a note of the **password**, **appId**, and **tenant** as you need these values later.
+
+1. Add the service principal password as a secret called `SP-Password` to your production key vault:
+
+ ```azurecli
+ az keyvault secret set --name SP-Password --vault-name {your production key vault name} --value {your service principal password}
+ ```
+
+1. Give the service principal permission to read secrets from the key vault:
+
+ ```azurecli
+ az keyvault set-policy --name {your production key vault name} --secret-permissions get list --spn {the appId of the service principal}
+ ```
+
+## Generate IoT Central API tokens
+
+In this guide, your pipeline uses API tokens to interact with your IoT Central applications. It's also possible to use a service principal.
+
+> [!NOTE]
+> IoT Central API tokens expire after one year.
+
+Complete the following steps for both your development and production IoT Central apps.
+
+1. In your IoT Central app, select **Permissions** and then **API tokens**.
+1. Select **New**.
+1. Give the token a name, specify the top-level organization in your app, and set the role to **App Administrator**.
+1. Make a note of the API token from your development IoT Central application. You use it later when you run the *IoTC-Config.ps1* script.
+1. Save the generated token from the production IoT Central application as a secret called `API-Token` to the production key vault:
+
+ ```azurecli
+ az keyvault secret set --name API-Token --vault-name {your production key vault name} --value '{your production app API token}'
+ ```
+
+## Generate a configuration file
+
+These steps produce a JSON configuration file for your development environment based on an existing IoT Central application. You also download all the existing device templates from the application.
+
+1. Run the following PowerShell 7 script in the local copy of the IoT Central CI/CD repository:
+
+ ```powershell
+ cd .\iot-central-CICD-sample\PowerShell\
+ .\IoTC-Config.ps1
+ ```
+
+1. Follow the instructions to sign in to your Azure account.
+1. After you sign in, the script displays the IoTC Config options menu. The script can generate a config file from an existing IoT Central application and apply a configuration to another IoT Central application.
+1. Select option **1** to generate a configuration file.
+1. Enter the necessary parameters and press **Enter**:
+ - The API token you generated for your development IoT Central application.
+ - The subdomain of your development IoT Central application.
+ - Enter *..\Config\Dev* as the folder to store the config file and device templates.
+ - The name of your development key vault.
+
+1. The script creates a folder called *IoTC Configuration* in the *Config\Dev* folder in your local copy of the repository. This folder contains a configuration file and a folder called *Device Models* for all the device templates in your application.
+
+## Modify the configuration file
+
+Now that you have a configuration file that represents the settings for your development IoT Central application instance, make any necessary changes before you apply this configuration to your production IoT Central application instance.
+
+1. Create a copy of the *Dev* folder created previously and call it *Production*.
+1. Open IoTC-Config.json in the *Production* folder using a text editor.
+1. The file has multiple sections. However, if your application doesn't use a particular setting, that section is omitted from the file:
+
+ ```json
+ {
+ "APITokens": {
+ "value": [
+ {
+ "id": "dev-admin",
+ "roles": [
+ {
+ "role": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4"
+ }
+ ],
+ "expiry": "2023-05-31T10:47:08.53Z"
+ }
+ ]
+ },
+ "data exports": {
+ "value": [
+ {
+ "id": "5ad278d6-e22b-4749-803d-db1a8a2b8529",
+ "displayName": "All telemetry to blob storage",
+ "enabled": false,
+ "source": "telemetry",
+ "destinations": [
+ {
+ "id": "393adfc9-0ed8-45f4-aa29-25b5c96ecf63"
+ }
+ ],
+ "status": "notStarted"
+ }
+ ]
+ },
+ "device groups": {
+ "value": [
+ {
+ "id": "66f41d29-832d-4a12-9e9d-18932bee3141",
+ "displayName": "MXCHIP Getting Started Guide - All devices"
+ },
+ {
+ "id": "494dc749-0963-4ec1-89ff-e1de2228e750",
+ "displayName": "RS40 Occupancy Sensor - All devices"
+ },
+ {
+ "id": "dd87877d-9465-410b-947e-64167a7a1c39",
+ "displayName": "Cascade 500 - All devices"
+ },
+ {
+ "id": "91ceac5b-f98d-4df0-9ed6-5465854e7d9e",
+ "displayName": "Simulated devices"
+ }
+ ]
+ },
+ "organizations": {
+ "value": []
+ },
+ "roles": {
+ "value": [
+ {
+ "id": "344138e9-8de4-4497-8c54-5237e96d6aaf",
+ "displayName": "Builder"
+ },
+ {
+ "id": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4",
+ "displayName": "Administrator"
+ },
+ {
+ "id": "ae2c9854-393b-4f97-8c42-479d70ce626e",
+ "displayName": "Operator"
+ }
+ ]
+ },
+ "destinations": {
+ "value": [
+ {
+ "id": "393adfc9-0ed8-45f4-aa29-25b5c96ecf63",
+ "displayName": "Blob destination",
+ "type": "blobstorage@v1",
+ "authorization": {
+ "type": "connectionString",
+ "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourexportaccount;AccountKey=*****;EndpointSuffix=core.windows.net",
+ "containerName": "dataexport"
+ },
+ "status": "waiting"
+ }
+ ]
+ },
+ "file uploads": {
+ "connectionString": "FileUpload",
+ "container": "fileupload",
+ "sasTtl": "PT1H"
+ },
+ "jobs": {
+ "value": []
+ }
+ }
+ ```
+
+1. If your application uses file uploads, the script creates a secret in your development key vault with the value shown in the `connectionString` property. Create a secret with the same name in your production key vault that contains the connection string for your production storage account. For example:
+
+ ```azurecli
+ az keyvault secret set --name FileUpload --vault-name {your production key vault name} --value '{your production storage account connection string}'
+ ```
+
+1. If your application uses data exports, add secrets for the destinations to the production key vault. The config file doesn't contain any actual secrets for your destination; the secrets are stored in your key vault.
+1. Update the secrets in the config file with the name of the secret in your key vault.
+
+ | Destination type | Property to change |
+ | | |
+ | Service Bus queue | connectionString |
+ | Service Bus topic | connectionString |
+ | Azure Data Explorer | clientSecret |
+ | Azure Blob Storage | connectionString |
+ | Event Hubs | connectionString |
+ | Webhook No Auth | N/A |
+
+ For example:
+
+ ```json
+ "destinations": {
+ "value": [
+ {
+ "id": "393adfc9-0ed8-45f4-aa29-25b5c96ecf63",
+ "displayName": "Blob destination",
+ "type": "blobstorage@v1",
+ "authorization": {
+ "type": "connectionString",
+ "connectionString": "Storage-CS",
+ "containerName": "dataexport"
+ },
+ "status": "waiting"
+ }
+ ]
+ }
+ ```
+
+1. To upload the *Configuration* folder to your GitHub repository, run the following commands from the *IoTC-CICD-howto* folder.
+
+ ```cmd/bash
+ git add Config
+ git commit -m "Adding config directories and files"
+ git push
+ ```
+
+## Create a pipeline
+
+1. Open your Azure DevOps organization in a web browser by going to `https://dev.azure.com/{your DevOps organization}`.
+1. Select **New project** to create a new project.
+1. Give your project a name and optional description and then select **Create**.
+1. On the **Welcome to the project** page, select **Pipelines** and then **Create Pipeline**.
+1. Select **GitHub** as the location of your code.
+1. Select **Authorize AzurePipelines** to authorize Azure Pipelines to access your GitHub account.
+1. On the **Select a repository** page, select your fork of the IoT Central CI/CD GitHub repository.
+1. When prompted to log into GitHub and provide permission for Azure Pipelines to access the repository, select **Approve & install**.
+1. On the **Configure your pipeline** page, select **Starter pipeline** to get started. The *azure-pipelines.yml* file is displayed for you to edit.
+
+## Create a variable group
+
+An easy way to integrate key vault secrets into a pipeline is through variable groups. Use a variable group to ensure the right secrets are available to your deployment script. To create a variable group:
+
+1. Select **Library** in the **Pipelines** section of the menu on the left.
+1. Select **+ Variable group**.
+1. Enter `keyvault` as the name for your variable group.
+1. Enable the toggle to link secrets from an Azure key vault.
+1. Select your Azure subscription and authorize it. Then select your production key vault name.
+
+1. Select **Add** to start adding variables to the group.
+
+1. Add the following secrets:
+ - The IoT Central API Key for your production app. You called this secret `API-Token` when you created it.
+ - The password for the service principal you created previously. You called this secret `SP-Password` when you created it.
+1. Select **OK**.
+1. Select **Save** to save the variable group.
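+
+For the variable group to read secrets, the identity behind the subscription authorization needs **Get** and **List** permissions on the vault's secrets. If your vault uses access policies rather than Azure RBAC, a sketch of granting those permissions with the Azure CLI looks like the following (the app ID placeholder is an assumption; use the identity that your Azure DevOps service connection runs as):
+
+```azurecli
+az keyvault set-policy --name {your production key vault name} --spn {app id of the pipeline identity} --secret-permissions get list
+```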
+
+## Configure your pipeline
+
+Now configure the pipeline to push configuration changes to your IoT Central application:
+
+1. Select **Pipelines** in the **Pipelines** section of the menu on the left.
+1. Replace the contents of your pipeline YAML with the following YAML. The configuration assumes your production key vault contains:
+ - The API token for your production IoT Central app in a secret called `API-Token`.
+ - Your service principal password in a secret called `SP-Password`.
+
+ Replace the values for `-AppName` and `-KeyVault` with the appropriate values for your production instances.
+
+ You made a note of the `-AppId` and `-TenantId` when you created your service principal.
+
+ ```yml
+ trigger:
+ - master
+ variables:
+ - group: keyvault
+ - name: buildConfiguration
+ value: 'Release'
+ steps:
+ - task: PowerShell@2
+ displayName: 'IoT Central'
+ inputs:
+ filePath: 'PowerShell/IoTC-Task.ps1'
+ arguments: '-ApiToken "$(API-Token)" -ConfigPath "Config/Production/IoTC Configuration" -AppName "{your production IoT Central app name}" -ServicePrincipalPassword (ConvertTo-SecureString "$(SP-Password)" -AsPlainText -Force) -AppId "{your service principal app id}" -KeyVault "{your production key vault name}" -TenantId "{your tenant id}"'
+ pwsh: true
+ failOnStderr: true
+ ```
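+
+    Before you commit, you can optionally validate the script and its arguments by running the same task locally from the repository root (a sketch, assuming PowerShell 7 and the required Az modules are installed; all braced values are placeholders):
+
+    ```powershell
+    ./PowerShell/IoTC-Task.ps1 -ApiToken "{your production API token}" -ConfigPath "Config/Production/IoTC Configuration" -AppName "{your production IoT Central app name}" -ServicePrincipalPassword (ConvertTo-SecureString "{your service principal password}" -AsPlainText -Force) -AppId "{your service principal app id}" -KeyVault "{your production key vault name}" -TenantId "{your tenant id}"
+    ```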
+
+1. Select **Save and run**.
+1. The YAML file is saved to your GitHub repository, so you need to provide a commit message and then select **Save and run** again.
+
+Your pipeline is queued. It may take a few minutes before it runs.
+
+The first time you run your pipeline, you're prompted to give permissions for the pipeline to access your subscription and to access your key vault. Select **Permit** and then **Permit** again for each resource.
+
+When your pipeline job completes successfully, sign in to your production IoT Central application and verify the configuration was applied as expected.
+
+## Promote changes from development to production
+
+Now that you have a working pipeline, you can manage your IoT Central instances directly by using configuration changes. You can upload new device templates into the *Device Models* folder and make changes directly to the configuration file. This approach lets you treat your IoT Central application's configuration the same as any other code.
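+
+For example, after you edit files under the production configuration folder, pushing the change to the branch the pipeline watches is enough to trigger a deployment (a sketch that follows the same git pattern used earlier):
+
+```cmd/bash
+git add "Config/Production/IoTC Configuration"
+git commit -m "Promote configuration changes to production"
+git push
+```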
+
+## Next steps
+
+Now that you know how to integrate IoT Central configurations into your CI/CD pipelines, a suggested next step is to learn how to [Manage and monitor IoT Central from the Azure portal](howto-manage-iot-central-from-portal.md).
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create and publish a new device template. Default views are automatically generated for device templates created this way. ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+PUT https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to retrieve details of a device template from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
## Update a device template ```http
-PATCH https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+PATCH https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to delete a device template: ```http
-DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=1.0
+DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?api-version=2022-05-31
``` ## List device templates
DELETE https://{subdomain}.{baseDomain}/api/deviceTemplates/{deviceTemplateId}?a
Use the following request to retrieve a list of device templates from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-05-31
``` The response to this request looks like the following example:
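
As a concrete illustration of one of these calls (a sketch; the API token, subdomain, and base domain are placeholders), listing device templates with curl might look like this:

```bash
curl -H "Authorization: {your API token}" \
  "https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-05-31"
```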
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create a new device. ```http
-PUT https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+PUT https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` The following example shows a request body that adds a device for a device template. You can get the `template` details from the device templates page in the IoT Central application UI.
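
A minimal sketch of such a body (the values are placeholders, and the exact field set is an assumption to verify against the API reference; the `template` ID must match a published device template in your application):

```json
{
  "displayName": "CheckoutThermostat",
  "template": "dtmi:modelDefinition:dtdlv2",
  "simulated": true,
  "enabled": true
}
```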
The response to this request looks like the following example:
Use the following request to retrieve details of a device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to retrieve credentials of a device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}/credentials?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/devices/{deviceId}/credentials?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
} ``` - ### Update a device ```http
-PATCH https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+PATCH https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` >[!NOTE]
The response to this request looks like the following example:
Use the following request to delete a device: ```http
-DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
+DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=2022-05-31
``` ### List devices
DELETE https://{subdomain}.{baseDomain}/api/devices/{deviceId}?api-version=1.0
Use the following request to retrieve a list of devices from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.0
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to create a new device group. ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true:
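
A sketch of the request body for that example (treat the exact filter string as an assumption to verify against the API reference):

```json
{
  "displayName": "Provisioned devices",
  "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true"
}
```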
The response to this request looks like the following example:
Use the following request to retrieve details of a device group from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` * deviceGroupId - Unique ID for the device group.
The response to this request looks like the following example:
### Update a device group ```http
-PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+PATCH https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` The sample request body looks like the following example, which updates the `displayName` of the device group:
The response to this request looks like the following example:
Use the following request to delete a device group: ```http
-DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=1.2-preview
+DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
``` ### List device groups
DELETE https://{subdomain}.{baseDomain}/api/deviceGroups/{deviceGroupId}?api-ver
Use the following request to retrieve a list of device groups from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/deviceGroups?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
} ``` - ## Next steps Now that you've learned how to manage devices with the REST API, a suggested next step is [How to control devices with the REST API](howto-control-devices-with-rest-api.md).
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/jobs) includes support for the new [organizations feature](howto-create-organizations.md).
- To learn how to create and manage jobs in the UI, see [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md). [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage organizations in your IoT Central application.
-> [!TIP]
-> The [organizations feature](howto-create-organizations.md) is currently available in [preview API](/rest/api/iotcentral/1.2-previewdataplane/users).
- Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md). For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
The IoT Central REST API lets you:
The REST API lets you create organizations in your IoT Central application. Use the following request to create an organization in your application: ```http
-PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
``` * organizationId - Unique ID of the organization
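
A minimal sketch of the request body (the names are placeholders; `parent` is assumed to be optional and, when supplied, the ID of an existing organization):

```json
{
  "displayName": "Seattle",
  "parent": "washington"
}
```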
The response to this request looks like the following example:
Use the following request to retrieve details of an individual organization from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update details of an organization in your application: ```http
-PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.2-preview
+PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
``` The following example shows a request body that updates an organization.
The response to this request looks like the following example:
Use the following request to retrieve a list of organizations from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=1.2-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=2022-05-31
``` The response to this request looks like the following example.
The response to this request looks like the following example.
Use the following request to delete an organization: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=1.2-preview
+DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
``` ## Next steps
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
Every IoT Central REST API call requires an authorization header. To learn more,
For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
-> [!TIP]
-> The [preview API](/rest/api/iotcentral/1.2-previewdataplane/users) includes support for the new [organizations feature](howto-create-organizations.md).
- [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] ## Manage roles
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of role IDs from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
``` The response to this request looks like the following example that includes the three built-in roles and a custom role:
The REST API lets you:
Use the following request to retrieve a list of users from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=2022-05-31
``` The response to this request looks like the following example. The role values identify the role ID the user is associated with:
The response to this request looks like the following example. The role values i
Use the following request to retrieve details of an individual user from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=1.0
+GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=2022-05-31
``` The response to this request looks like the following example. The role value identifies the role ID the user is associated with:
The response to this request looks like the following example. The role value id
Use the following request to create a user in your application. The ID and email must be unique in the application: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
``` In the following request body, the `role` value is for the operator role you retrieved previously:
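
A sketch of that body, reusing the built-in Operator role ID that appears earlier in this document (the email address is a placeholder):

```json
{
  "type": "email",
  "roles": [
    {
      "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
    }
  ],
  "email": "operator-user@contoso.com"
}
```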
You can also add a service principal user, which is useful if you need to use ser
Use the following request to change the role assigned to user. This example uses the ID of the builder role you retrieved previously: ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
``` Request body. The value is for the builder role you retrieved previously:
The response to this request looks like the following example:
Use the following request to delete a user: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
``` ## Next steps
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
The response to this request looks like the following example:
Use the following request to create a file upload blob storage account configuration in your IoT Central application: ```http
-PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+PUT https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` The request body has the following fields:
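
A sketch of that body, using field names consistent with the `file uploads` section of the configuration file shown earlier (the account name and connection string are placeholders):

```json
{
  "account": "yourfileuploadaccount",
  "connectionString": "DefaultEndpointsProtocol=https;AccountName=yourfileuploadaccount;AccountKey=*****;EndpointSuffix=core.windows.net",
  "container": "fileupload",
  "sasTtl": "PT1H"
}
```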
The response to this request looks like the following example:
"etag": "\"7502ac89-0000-0300-0000-627eaf100000\"" }- ``` ## Get the file upload storage account configuration Use the following request to retrieve details of a file upload blob storage account configuration in your IoT Central application: - ```http
-GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+GET https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update a file upload blob storage account configuration in your IoT Central application: ```http
-PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+PATCH https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` ```json
The response to this request looks like the following example:
Use the following request to delete a storage account configuration: ```http
-DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=1.2-preview
+DELETE https://{your-app-subdomain}.azureiotcentral.com/api/fileUploads?api-version=2022-05-31
``` ## Test file upload
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central application administration guide
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, and security.
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, security, and automated deployments.
Last updated 01/04/2022
IoT Central application administration includes the following tasks:
- Upgrade applications. - Export and share applications. - Monitor application health.
+- DevOps integration.
## Create applications
An administrator can:
- Create a copy of an application if you just need a duplicate copy of your application. For example, you may need a duplicate copy for testing. - Create an application template from an existing application if you plan to create multiple copies.
-To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template) .
+To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template).
+
+## Integrate with DevOps pipelines
+
+Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. You can use Azure DevOps pipelines to automate the build, test, and deployment of IoT Central application configurations.
+
+Just as IoT Central is a part of your larger IoT solution, make IoT Central a part of your CI/CD pipeline.
+
+To learn more, see [Integrate IoT Central into your Azure DevOps CI/CD pipeline](howto-integrate-with-devops.md).
## Monitor application health
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
DPS uses the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagn
| ApplicationId | GUID | Application ID used in bearer authorization. | | CallerIpAddress | String | A masked source IP address for the event. | | Category | String | Type of operation, either **ServiceOperations** or **DeviceOperations**. |
-| CorrelationId | GUID | Customer provided unique identifier for the event. |
+| CorrelationId | GUID | Unique identifier for the event. |
| DurationMs | String | How long it took to perform the event in milliseconds. | | Level | Int | The logging severity of the event. For example, Information or Error. | | OperationName | String | The type of action performed during the event. For example: Query, Get, Upsert, and so on. |
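
For example, a Kusto query over this table that lists recent device operations might look like the following (a sketch; the resource filters are assumptions to adjust for your workspace):

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DEVICES" and ResourceType == "PROVISIONINGSERVICES"
| where Category == "DeviceOperations"
| project TimeGenerated, OperationName, DurationMs, CallerIpAddress
```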
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Title: Update IoT Edge version on devices - Azure IoT Edge | Microsoft Docs
-description: How to update IoT Edge devices to run the latest versions of the security daemon and the IoT Edge runtime
+description: How to update IoT Edge devices to run the latest versions of the security subsystem and the IoT Edge runtime
keywords:
As the IoT Edge service releases new versions, you'll want to update your IoT Edge devices for the latest features and security improvements. This article provides information about how to update your IoT Edge devices when a new version is available.
-Two components of an IoT Edge device need to be updated if you want to move to a newer version. The first is the security daemon, which runs on the device and starts the runtime modules when the device starts. Currently, the security daemon can only be updated from the device itself. The second component is the runtime, made up of the IoT Edge hub and IoT Edge agent modules. Depending on how you structure your deployment, the runtime can be updated from the device or remotely.
+Two logical components of an IoT Edge device need to be updated if you want to move to a newer version. The first is the security subsystem. Although the architecture of the security subsystem [changed between version 1.1 and 1.2](iot-edge-security-manager.md), its overall responsibilities remained the same. It runs on the device, handles security-based tasks, and starts the modules when the device starts. Currently, the security subsystem can only be updated from the device itself. The second component is the runtime, made up of the IoT Edge hub and IoT Edge agent modules. Depending on how you structure your deployment, the runtime can be updated from the device or remotely.
To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases).
-## Update the security daemon
+## Update the security subsystem
-The IoT Edge security daemon is a native component that needs to be updated using the package manager on the IoT Edge device. View the [Update the security daemon](how-to-update-iot-edge.md#update-the-security-daemon) tutorial for a walk-through on Linux-based devices.
+The IoT Edge security subsystem includes a set of native components that need to be updated using the package manager on the IoT Edge device.
-Check the version of the security daemon running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
+Check the version of the security subsystem running on your device by using the command `iotedge version`. If you're using IoT Edge for Linux on Windows, you need to SSH into the Linux virtual machine to check the version.
# [Linux](#tab/linux) >[!IMPORTANT] >If you are updating a device from version 1.0 or 1.1 to version 1.2, there are differences in the installation and configuration processes that require extra steps. For more information, refer to the steps later in this article: [Special case: Update from 1.0 or 1.1 to 1.2](#special-case-update-from-10-or-11-to-12).
-On Linux x64 devices, use apt-get or your appropriate package manager to update the security daemon to the latest version.
+On Linux x64 devices, use apt-get or your appropriate package manager to update the runtime module to the latest version.
Update apt.
Check to see which versions of IoT Edge are available.
apt list -a iotedge ```
-If you want to update to the most recent version of the security daemon, use the following command which also updates **libiothsm-std** to the latest version:
+If you want to update to the most recent version of the runtime module, use the following command, which also updates **libiothsm-std** to the latest version:
```bash sudo apt-get install iotedge ```
-If you want to update to a specific version of the security daemon, specify the version from the apt list output. Whenever **iotedge** is updated, it automatically tries to update the **libiothsm-std** package to its latest version, which may cause a dependency conflict. If you aren't going to the most recent version, be sure to target both packages for the same version. For example, the following command installs a specific version of the 1.1 release:
+If you want to update to a specific version of the runtime module, specify the version from the apt list output. Whenever **iotedge** is updated, it automatically tries to update the **libiothsm-std** package to its latest version, which may cause a dependency conflict. If you aren't going to the most recent version, be sure to target both packages for the same version. For example, the following command installs a specific version of the 1.1 release:
```bash sudo apt-get install iotedge=1.1.1 libiothsm-std=1.1.1
Check to see which versions of IoT Edge are available.
apt list -a aziot-edge ```
-If you want to update to the most recent version of IoT Edge, use the following command which also updates the identity service to the latest version:
+If you want to update to the most recent version of IoT Edge, use the following command, which also updates the [identity service](https://azure.github.io/iot-identity-service/) to the latest version:
```bash sudo apt-get install aziot-edge defender-iot-micro-agent-edge ```
-It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
+It's recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
<!-- end 1.2 --> :::moniker-end
For information about IoT Edge for Linux on Windows updates, see [EFLOW Updates]
:::moniker range=">=iotedge-2020-11" >[!NOTE]
->Currently, there is not support for IoT Edge version 1.2 running on Windows devices.
+>Currently, there's no support for IoT Edge version 1.2 running on Windows devices.
> >To view the steps for updating IoT Edge for Linux on Windows, see [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true&tabs=windows).
For information about IoT Edge for Linux on Windows updates, see [EFLOW Updates]
With IoT Edge for Windows, IoT Edge runs directly on the Windows device.
-Use the `Update-IoTEdge` command to update the security daemon. The script automatically pulls the latest version of the security daemon.
+Use the `Update-IoTEdge` command to update the module runtime. The script automatically pulls the latest version of the module runtime.
```powershell . {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; Update-IoTEdge ```
-Running the Update-IoTEdge command removes and updates the security daemon from your device, along with the two runtime container images. The config.yaml file is kept on the device, as well as data from the Moby container engine. Keeping the configuration information means that you don't have to provide the connection string or Device Provisioning Service information for your device again during the update process.
+Running the `Update-IoTEdge` command removes and updates the runtime module from your device, along with the two runtime container images. The config.yaml file is kept on the device, as well as data from the Moby container engine. Keeping the configuration information means that you don't have to provide the connection string or Device Provisioning Service information for your device again during the update process.
-If you want to update to a specific version of the security daemon, find the version from 1.1 release channel you want to target from [IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). In that version, download the **Microsoft-Azure-IoTEdge.cab** file. Then, use the `-OfflineInstallationPath` parameter to point to the local file location. For example:
+If you want to update to a specific version of the security subsystem, find the version from 1.1 release channel you want to target from [IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). In that version, download the **Microsoft-Azure-IoTEdge.cab** file. Then, use the `-OfflineInstallationPath` parameter to point to the local file location. For example:
```powershell . {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; Update-IoTEdge -OfflineInstallationPath <absolute path to directory>
The IoT Edge agent and IoT Edge hub images are tagged with the IoT Edge version
If you use rolling tags in your deployment (for example, mcr.microsoft.com/azureiotedge-hub:**1.1**) then you need to force the container runtime on your device to pull the latest version of the image.
-Delete the local version of the image from your IoT Edge device. On Windows machines, uninstalling the security daemon also removes the runtime images, so you don't need to take this step again.
+Delete the local version of the image from your IoT Edge device. On Windows machines, uninstalling the security subsystem also removes the runtime images, so you don't need to take this step again.
```bash docker rmi mcr.microsoft.com/azureiotedge-hub:1.1
Some of the key differences between 1.2 and earlier versions include:
* The package name changed from **iotedge** to **aziot-edge**. * The **libiothsm-std** package is no longer used. If you used the standard package provided as part of the IoT Edge release, then your configurations can be transferred to the new version. If you used a different implementation of libiothsm-std, then any user-provided certificates like the device identity certificate, device CA, and trust bundle will need to be reconfigured.
-* A new identity service, **aziot-identity-service** was introduced as part of the 1.2 release. This service handles the identity provisioning and management for IoT Edge and for other device components that need to communicate with IoT Hub, like [Device Update for IoT Hub](../iot-hub-device-update/understand-device-update.md).
+* A new identity service, **[aziot-identity-service](https://azure.github.io/iot-identity-service/)** was introduced as part of the 1.2 release. This service handles the identity provisioning and management for IoT Edge and for other device components that need to communicate with IoT Hub, like [Device Update for IoT Hub](../iot-hub-device-update/understand-device-update.md).
* The default config file has a new name and location. Formerly `/etc/iotedge/config.yaml`, your device configuration information is now expected to be in `/etc/aziot/config.toml` by default. The `iotedge config import` command can be used to help migrate configuration information from the old location and syntax to the new one. * The import command cannot detect or modify access rules to a device's trusted platform module (TPM). If your device uses TPM attestation, you need to manually update the /etc/udev/rules.d/tpmaccess.rules file to give access to the aziottpm service. For more information, see [Give IoT Edge access to the TPM](how-to-auto-provision-simulated-device-linux.md?view=iotedge-2020-11&preserve-view=true#give-iot-edge-access-to-the-tpm). * The workload API in version 1.2 saves encrypted secrets in a new format. If you upgrade from an older version to version 1.2, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in version 1.2 are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary.
When you're ready, follow these steps to update IoT Edge on your devices:
```bash sudo apt-get install aziot-edge defender-iot-micro-agent-edge ```
-It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
+It's recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
1. Import your old config.yaml file into its new format, and apply the configuration info.
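
    On the device, that step typically uses the commands referenced earlier in this article (a sketch):

    ```bash
    sudo iotedge config import
    sudo iotedge config apply
    ```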
The IoT Edge agent and hub modules have RC versions that are tagged with the sam
As previews, release candidate versions aren't included as the latest version that the regular installers target. Instead, you need to manually target the assets for the RC version that you want to test. For the most part, installing or updating to an RC version is the same as targeting any other specific version of IoT Edge.
-Use the sections in this article to learn how to update an IoT Edge device to a specific version of the security daemon or runtime modules.
+Use the sections in this article to learn how to update an IoT Edge device to a specific version of the security subsystem or runtime modules.
If you're installing IoT Edge, rather than upgrading an existing installation, use the steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
iot-hub-device-update Device Update Apt Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-apt-manifest.md
If version is omitted, the latest available version of specified package will be
> APT package manager ignores versioning requirements given by a package when the dependent packages to install are being automatically resolved. Unless explicit versions of dependent packages are given they will use the latest, even though the package itself may specify a strict requirement (=) on a given version. This automatic resolution can lead to errors regarding an unmet dependency. [Learn More](https://unix.stackexchange.com/questions/350192/apt-get-not-properly-resolving-a-dependency-on-a-fixed-version-in-a-debian-ubunt)
-If you're updating a specific version of the Azure IoT Edge security daemon, then you should include the desired version of the `iotedge` package and its dependent `libiothsm-std` package in your APT manifest.
-[Learn More](../iot-edge/how-to-update-iot-edge.md#update-the-security-daemon)
+If you're updating a specific version of the Azure IoT Edge security subsystem, then you should include the desired version of the `aziot-edge` package and its dependent `aziot-identity-service` package in your APT manifest.
+[Learn More](../iot-edge/how-to-update-iot-edge.md#update-the-security-subsystem)
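+
+A sketch of the relevant part of an apt manifest (the version numbers shown are placeholders; pin both packages to matching releases):
+
+```json
+{
+    "name": "update-aziot-edge",
+    "version": "1.0",
+    "packages": [
+        {
+            "name": "aziot-edge",
+            "version": "1.2.9-1"
+        },
+        {
+            "name": "aziot-identity-service",
+            "version": "1.2.6-1"
+        }
+    ]
+}
+```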
> [!NOTE] > An apt manifest can be used to update Device Update agent and its dependencies. List the device update agent name and desired version in the apt manifest, like you would for any other package. This apt manifest can then be imported and deployed through the Device Update for IoT Hub pipeline.
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
az keyvault key rotation-policy update --vault-name <vault-name> --name <key-nam
Set rotation policy using Azure Powershell [Set-AzKeyVaultKeyRotationPolicy](/powershell/module/az.keyvault/set-azkeyvaultkeyrotationpolicy) cmdlet. ```powershell
-Get-AzKeyVaultKey -VaultName <vault-name> -Name <key-name>
-$action = [Microsoft.Azure.Commands.KeyVault.Models.PSKeyRotationLifetimeAction]::new()
-$action.Action = "Rotate"
-$action.TimeAfterCreate = New-TimeSpan -Days 540
-$expiresIn = New-TimeSpan -Days 720
-Set-AzKeyVaultKeyRotationPolicy -InputObject $key -KeyRotationLifetimeAction $action -ExpiresIn $expiresIn
+Set-AzKeyVaultKeyRotationPolicy -VaultName <vault-name> -KeyName <key-name> -ExpiresIn (New-TimeSpan -Days 720) -KeyRotationLifetimeAction @{Action="Rotate";TimeAfterCreate= (New-TimeSpan -Days 540)}
```- ## Rotation on demand Key rotation can be invoked manually.
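
For example, with Azure PowerShell (a sketch, assuming a current Az.KeyVault module):

```powershell
Invoke-AzKeyVaultKeyRotation -VaultName <vault-name> -Name <key-name>
```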
logic-apps Create Serverless Apps Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-serverless-apps-visual-studio.md
Title: Create an example serverless app with Visual Studio
-description: Create, deploy, and manage an example serverless app with an Azure quickstart template, Azure Logic Apps and Azure Functions in Visual Studio.
+description: Create, deploy, and manage an example serverless app with an Azure Quickstart Template, Azure Logic Apps and Azure Functions in Visual Studio.
ms.suite: integration
Last updated 07/15/2021
# Create an example serverless app with Azure Logic Apps and Azure Functions in Visual Studio + You can quickly create, build, and deploy cloud-based "serverless" apps by using the services and capabilities in Azure, such as Azure Logic Apps and Azure Functions. When you use Azure Logic Apps, you can quickly and easily build workflows using low-code or no-code approaches to simplify orchestrating combined tasks. You can integrate different services, cloud, on-premises, or hybrid, without coding those interactions, having to maintain glue code, or learn new APIs or specifications. When you use Azure Functions, you can speed up development by using an event-driven model. You can use triggers that respond to events by automatically running your own code. You can use bindings to seamlessly integrate other services.
-This article shows how to create an example serverless app that runs in multi-tenant Azure by using an Azure Quickstart template. The template creates an Azure resource group project that includes an Azure Resource Manager deployment template. This template defines a basic logic app resource where a predefined a workflow includes a call to an Azure function that you define. The workflow definition includes the following components:
+This article shows how to create an example serverless app that runs in multi-tenant Azure by using an Azure Quickstart Template. The template creates an Azure resource group project that includes an Azure Resource Manager deployment template. This template defines a basic logic app resource where a predefined workflow includes a call to an Azure function that you define. The workflow definition includes the following components:
* A Request trigger that receives HTTP requests. To start this trigger, you send a request to the trigger's URL. * An Azure Functions action that calls an Azure function that you can later define.
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
Title: Deploy single-tenant logic apps to private storage accounts
-description: How to deploy Standard logic app workflows to Azure storage accounts that use private endpoints and deny public access.
+ Title: Deploy Standard logic apps to private storage accounts
+description: Deploy Standard logic app workflows to Azure storage accounts that use private endpoints and deny public access.
ms.suite: integration Last updated 01/06/2022
-# As a developer, I want to deploy my single-tenant logic apps to Azure storage accounts using private endpoints
+# As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
# Deploy single-tenant Standard logic apps to private storage accounts using private endpoints + When you create a single-tenant Standard logic app resource, you're required to have a storage account for storing logic app artifacts. You can restrict access to this storage account so that only the resources inside a virtual network can connect to your logic app workflow. Azure Storage supports adding private endpoints to your storage account. This article describes the steps to follow for deploying such logic apps to protected private storage accounts. For more information, review [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md).
logic-apps Designer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/designer-overview.md
Title: About single-tenant workflow designer
+ Title: About Standard logic app workflow designer
description: Learn how the designer in single-tenant Azure Logic Apps helps you visually create workflows through the Azure portal. Discover the benefits and features in this latest version. ms.suite: integration
Last updated 06/30/2021
-# About the workflow designer in single-tenant Azure Logic Apps
+# About the Standard logic app workflow designer in single-tenant Azure Logic Apps
+ When you work with Azure Logic Apps in the Azure portal, you can edit your [*workflows*](logic-apps-overview.md#workflow) visually or programmatically. After you open a [*logic app* resource](logic-apps-overview.md#logic-app) in the portal, on the resource menu under **Developer**, you can select between [**Code** view](#code-view) and **Designer** view. When you want to visually develop, edit, and run your workflow, select the designer view. You can switch between the designer view and code view at any time.
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
Title: Edit runtime and environment settings in single-tenant Azure Logic Apps
-description: Change the runtime and environment settings for logic apps in single-tenant Azure Logic Apps.
+ Title: Edit runtime and environment settings for Standard logic apps
+description: Change the runtime and environment settings for Standard logic apps in single-tenant Azure Logic Apps.
ms.suite: integration