Updates from: 03/01/2023 02:15:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
You typically use only one identity provider in your applications, but you have
* [LinkedIn](identity-provider-linkedin.md) * [Microsoft Account](identity-provider-microsoft-account.md) * [Mobile ID](identity-provider-mobile-id.md)
-* [PingOne](identity-provider-ping-one.md) (PingIdentity)
+* [PingOne](identity-provider-ping-one.md) (Ping Identity)
* [QQ](identity-provider-qq.md) * [Salesforce](identity-provider-salesforce.md) * [Salesforce (SAML protocol)](identity-provider-salesforce-saml.md)
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
zone_pivot_groups: b2c-policy-type
## Create a PingOne application
-To enable sign-in for users with a PingOne (PingIdentity) account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in PingIdentity Administrator Console. For more information, see [Add or update an OIDC application](https://docs.pingidentity.com/bundle/pingoneforenterprise/page/agd1564020501024-1.html). If you don't already have a PingOne account, you can sign up at [`https://admin.pingone.com/web-portal/register`](https://admin.pingone.com/web-portal/register).
+To enable sign-in for users with a PingOne (Ping Identity) account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in the Ping Identity Administrator Console. For more information, see [Adding or updating an OIDC application](https://docs.pingidentity.com/access/sources/dita/topic?resourceid=p14e_add_update_oidc_application) in the Ping Identity documentation. If you don't already have a PingOne account, you can sign up at [`https://admin.pingone.com/web-portal/register`](https://admin.pingone.com/web-portal/register).
-1. Sign in to the PingIdentity Administrator Console with your PingOne account credentials.
+1. Sign in to the Ping Identity Administrator Console with your PingOne account credentials.
1. In the left menu of the page, select **Connections**, then next to **Applications**, select **+**. 1. On the **New Application** page, select **web app**, then under **OIDC**, select **Configure**. 1. Enter an **Application name**, and select **Next**.
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
|{Settings:DfpTenantId}|The ID of the Azure AD tenant (not B2C) where DFP is licensed and installed|`01234567-89ab-cdef-0123-456789abcdef` or `contoso.onmicrosoft.com` | |{Settings:DfpAppClientIdKeyContainer}|Name of the policy key in which you save the DFP client ID|`B2C_1A_DFPClientId`| |{Settings:DfpAppClientSecretKeyContainer}|Name of the policy key in which you save the DFP client secret |`B2C_1A_DFPClientSecret`|
-|{Settings:DfpEnvironment}| The ID of the DFP environment.|Environment ID is a global unique identifier of the DFP environment that you sends the data to. Your custom policy should invoke the API endpoint including the `x-ms-dfpenvid=<your-env-id>` in the query string parameter.|
+|{Settings:DfpEnvironment}| The ID of the DFP environment.|The environment ID is a globally unique identifier of the DFP environment that you send the data to. Your custom policy should call the API endpoint, including the query string parameter `x-ms-dfpenvid=<your-env-id>`.|
*You can set up Application Insights in an Azure AD tenant or subscription. This value is optional but [recommended to assist with debugging](./troubleshoot-with-application-insights.md).
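As a rough illustration of the `{Settings:DfpEnvironment}` row above, the environment ID travels on the DFP API call as the `x-ms-dfpenvid` query string parameter. The sketch below only shows the URL shape; the host, path, and IDs are placeholders, and in practice the call is made by the custom policy's REST technical profile rather than by application code.

```typescript
// Illustrative sketch only: shows where x-ms-dfpenvid goes on a DFP API call.
// The host, path, environment ID, and payload are placeholders.
const dfpApiBase = "https://<your-dfp-endpoint>";        // placeholder DFP API host
const dfpEnvironmentId = "<your-env-id>";                // the {Settings:DfpEnvironment} value

async function callDfpEndpoint(payload: unknown, accessToken: string): Promise<unknown> {
  // The environment ID is appended as a query string parameter so DFP routes
  // the request to the correct environment.
  const url = `${dfpApiBase}/<assessment-path>?x-ms-dfpenvid=${encodeURIComponent(dfpEnvironmentId)}`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,            // token acquired with the DFP client ID/secret
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  return response.json();
}
```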
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
# Tutorial: Configure Ping Identity with Azure Active Directory B2C for secure hybrid access
-In this tutorial, learn how to extend the capabilities of Azure Active Directory B2C (Azure AD B2C) with [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html#:~:text=%20Modern%20Access%20Managementfor%20the%20Digital%20Enterprise%20,consistent%20enforcement%20of%20security%20policies%20by...%20More) and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html). PingAccess provides access to applications and APIs, and a policy engine for authorized user access. PingFederate is an enterprise federation server for user authentication and single sign-on, an authority that permits customers, employees, and partners to access applications from devices. Use them together to enable secure hybrid access (SHA).
+In this tutorial, learn how to extend the capabilities of Azure Active Directory B2C (Azure AD B2C) with [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html) and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html). PingAccess provides access to applications and APIs, and a policy engine for authorized user access. PingFederate is an enterprise federation server for user authentication and single sign-on, an authority that permits customers, employees, and partners to access applications from devices. Use them together to enable secure hybrid access (SHA).
Many e-commerce sites and web applications exposed to the internet are deployed behind proxy systems, or a reverse-proxy system. These proxy systems pre-authenticate, enforce policy, and route traffic. Typical scenarios include protecting web applications from inbound web traffic and providing uniform session management across distributed server deployments.
If you want to modernize an identity platform in such configurations, there migh
- Drive the end-user experience consistency - Provide a single sign-in experience across applications
-In answer to these concerns, the approach in this tutorial is an Azure AD B2C, [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html#:~:text=%20Modern%20Access%20Managementfor%20the%20Digital%20Enterprise%20,consistent%20enforcement%20of%20security%20policies%20by...%20More), and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) integration.
+In answer to these concerns, the approach in this tutorial is an Azure AD B2C, [PingAccess](https://www.pingidentity.com/en/software/pingaccess.html), and [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) integration.
## Shared environment
Use the instructions in the following sections to configure PingAccess and PingF
To configure PingFederate as the token provider for PingAccess, ensure connectivity from PingFederate to PingAccess, and confirm connectivity from PingAccess to PingFederate.
-Go to pingidentity.com for, [Configure PingFederate as the token provider for PingAccess](https://docs.pingidentity.com/bundle/pingaccess-61/page/zgh1581446287067.html).
+For more information, see [Configure PingFederate as the token provider for PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_configure_pf_as_the_token_provider_for_pa) in the Ping Identity documentation.
### Configure a PingAccess application for header-based authentication
Use the following instructions to create a PingAccess application for the target
#### Create a virtual host >[!IMPORTANT]
->Create a virtual host for every application. For more information, see [What can I configure with PingAccess?]([https://docs.pingidentity.com/bundle/pingaccess-43/page/reference/pa_c_KeyConsiderations.html].
+>Create a virtual host for every application. For more information, see [What can I configure with PingAccess?](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_what_can_I_configure_with_pa) in the Ping Identity documentation.
To create a virtual host:
To create an application in PingAccess for each application in Azure that you wa
Configure the PingFederate authentication policy to federate to the multiple IdPs provided by the Azure AD B2C tenants.
-1. Create a contract to bridge the attributes between the IdPs and the SP. For more information, see [Federation hub and authentication policy contracts](https://docs.pingidentity.com/bundle/pingfederate-101/page/ope1564002971971.html). You likely need only one contract unless the SP requires a different set of attributes from each IdP.
+1. Create a contract to bridge the attributes between the IdPs and the SP. You should need only one contract unless the SP requires a different set of attributes from each IdP. For more information, see [Federation hub and authentication policy contracts](https://docs.pingidentity.com/access/sources/dita/topic?category=pingfederate&Releasestatus_ce=Current&resourceid=pf_fed_hub_auth_polic_contract) in the Ping Identity documentation.
2. For each IdP, create an IdP connection between the IdP and PingFederate, with the federation hub acting as the SP.
active-directory-b2c Quickstart Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-single-page-app.md
Previously updated : 01/13/2022 Last updated : 02/23/2023
In this quickstart, you use a single-page application to sign in using a social
1. Browse to the URL of the application. For example, `http://localhost:6420`.
- ![Single-page application sample app shown in browser](./media/quickstart-single-page-app/sample-app-spa.png)
## Sign in using your account 1. Select **Sign In** to start the user journey. 1. Azure AD B2C presents a sign-in page for a fictitious company called "Fabrikam" for the sample web application. To sign up using a social identity provider, select the button of the identity provider you want to use.
- ![Sign In or Sign Up page showing identity provider buttons](./media/quickstart-single-page-app/sign-in-or-sign-up-spa.png)
+ :::image type="content" source="./media/quickstart-single-page-app/sign-in-or-sign-up-spa.png" alt-text="Screenshot of Sign In or Sign Up page showing identity provider buttons" lightbox="./media/quickstart-single-page-app/sign-in-or-sign-up-spa.png":::
- You authenticate (sign in) using your social account credentials and authorize the application to read information from your social account. By granting access, the application can retrieve profile information from the social account such as your name and city.
+ You authenticate (sign in) using your social account credentials and authorize the application to read information from your social account. By granting access, the application can retrieve profile information from the social account such as your name and city.
1. Finish the sign-in process for the identity provider.
In this quickstart, you use a single-page application to sign in using a social
Select **Call API** to have your display name returned from the web API as a JSON object.
-![Sample application in browser showing the web API response](./media/quickstart-single-page-app/call-api-spa.png)
The sample single-page application includes an access token in the request to the protected web API resource.
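To make that last sentence concrete, here's a minimal sketch of how a single-page app can include an access token when calling a protected web API: the token goes in the `Authorization` header as a bearer token. The API URL is a placeholder, and the actual sample acquires its token with MSAL.js before making the call.

```typescript
// Minimal sketch: include the access token as a bearer token when calling the
// protected web API. The endpoint URL is a placeholder; the token would be
// acquired by the app's authentication library (MSAL.js in the sample).
async function callProtectedApi(accessToken: string): Promise<unknown> {
  const apiUrl = "https://<your-api-host>/hello";  // placeholder protected API endpoint
  const response = await fetch(apiUrl, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  // The sample returns the signed-in user's display name as a JSON object.
  return response.json();
}
```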
active-directory On Premises Ldap Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-configure.md
Previously updated : 02/08/2022 Last updated : 02/08/2023
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Previously updated : 02/08/2022 Last updated : 02/08/2023
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 02/27/2023 Last updated : 02/28/2023
There are several endpoints defined in the SCIM RFC. You can start with the `/Us
## Understand the Azure AD SCIM implementation
-To support a SCIM 2.0 user management API, this section describes how the Azure AD Provisioning Service is implemented and shows how to model your SCIM protocol request handling and responses.
+The Azure AD Provisioning Service is designed to support a SCIM 2.0 user management API.
> [!IMPORTANT] > The behavior of the Azure AD SCIM implementation was last updated on December 18, 2018. For information on what changed, see [SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md).
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
### General: * `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero elements.
-* Values sent should be stored in the same format as what they were sent in. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data shouldn't happen between data being sent by Azure AD and data being stored in the SCIM application. (for example. A phone number sent as 55555555555 shouldn't be saved/returned as +5 (555) 555-5555)
+* Values sent should be stored in the same format they were sent. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data shouldn't happen between data from Azure AD and data stored in the SCIM application. (For example, a phone number sent as 55555555555 shouldn't be saved/returned as +5 (555) 555-5555.)
* It isn't necessary to include the entire resource in the **PATCH** response. * Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Azure AD emits the values of `op` as **Add**, **Replace**, and **Remove**. * Microsoft Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
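As a hedged sketch of the PATCH guidance above: the provisioning service sends SCIM `PatchOp` messages whose `op` values are capitalized (**Add**, **Replace**, **Remove**), so the endpoint should match them case-insensitively. The endpoint URL, user ID, and attribute below are placeholders; only the message shape is the point.

```typescript
// Sketch of the kind of SCIM 2.0 PATCH request the provisioning service sends.
// Note the capitalized "Replace" op value: match op values case-insensitively.
const patchBody = {
  schemas: ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  Operations: [
    { op: "Replace", path: "displayName", value: "Ada Lovelace" },
  ],
};

async function patchUser(scimBaseUrl: string, userId: string, token: string): Promise<void> {
  await fetch(`${scimBaseUrl}/Users/${userId}`, {   // for example, https://<your-scim-host>/scim/Users/<id>
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/scim+json",
    },
    body: JSON.stringify(patchBody),
  });
}
```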
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
* If a value isn't present, don't send null values. * Property values should be camel cased (for example, readWrite). * Must return a list response.
-* The /schemas request will be made by the Azure AD Provisioning Service every time someone saves the provisioning configuration in the Azure portal or every time a user lands on the edit provisioning page in the Azure portal. Other attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to more target attributes being added. It will not result in attributes being removed.
+* The Azure AD Provisioning Service makes the /schemas request every time someone saves the provisioning configuration in the Azure portal or every time a user lands on the edit provisioning page in the Azure portal. Other attributes discovered are surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only adds target attributes; attributes aren't removed.
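A hedged sketch of the `/schemas` response shape implied by the bullets above: a `ListResponse` whose `Resources` describe each supported schema, with camel-cased attribute properties. The attribute list is trimmed and the values are placeholders; only the envelope and naming convention matter.

```typescript
// Hedged sketch of a /Schemas ListResponse. Attributes are trimmed placeholders;
// the point is the ListResponse envelope and camelCase property values.
const schemasResponse = {
  schemas: ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
  totalResults: 1,
  startIndex: 1,
  itemsPerPage: 1,
  Resources: [
    {
      id: "urn:ietf:params:scim:schemas:core:2.0:User",
      name: "User",
      attributes: [
        { name: "userName", type: "string", multiValued: false, required: true },
        // ...other attributes; extras discovered here are surfaced as target attributes
      ],
    },
  ],
};
```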
### User provisioning and deprovisioning
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
To publish your own on-premises application:
1. **Translate URL in Headers**: Choose **No**. > [!NOTE]
- > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener youΓÇÖve configured in PingAccess. Learn more about [listeners in PingAccess](https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=reference/ui/pa_c_Listeners.html).
+ > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
1. Select **Add**. The overview page for the new application appears.
When you will configure PingAccess in the following step, the Web Session you wi
Now that you've completed all the Azure Active Directory setup steps, you can move on to configuring PingAccess.
-The detailed steps for the PingAccess part of this scenario continue in the Ping Identity documentation. Follow the instructions in [Configure PingAccess for Azure AD to protect applications published using Microsoft Azure AD Application Proxy](https://support.pingidentity.com/s/document-item?bundleId=pingaccess-52&topicId=agents/azure/pa_c_PAAzureSolutionOverview.html) on the Ping Identity web site and download the [latest version of PingAccess](https://www.pingidentity.com/en/lp/azure-download.html?).
+The detailed steps for the PingAccess part of this scenario continue in the Ping Identity documentation. Follow the instructions in [Configuring PingAccess for Azure AD](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_configuring_apps_for_azure) on the Ping Identity web site and download the [latest version of PingAccess](https://www.pingidentity.com/en/lp/azure-download.html).
Those steps help you install PingAccess and set up a PingAccess account (if you don't already have one). Then, to create an Azure AD OpenID Connect (OIDC) connection, you set up a token provider with the **Directory (tenant) ID** value that you copied from the Azure AD portal. Next, to create a web session on PingAccess, you use the **Application (client) ID** and `PingAccess key` values. After that, you can set up identity mapping and create a virtual host, site, and application.
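As a hedged sketch of what the token provider step builds on: the **Directory (tenant) ID** identifies an Azure AD OIDC issuer whose discovery document PingAccess can consume. The tenant ID below is a placeholder; the discovery URL format is the standard Azure AD v2.0 endpoint.

```typescript
// Sketch: fetch the Azure AD OIDC discovery document for the tenant whose
// Directory (tenant) ID you copied. The tenant ID below is a placeholder.
const tenantId = "<directory-tenant-id>";
const metadataUrl =
  `https://login.microsoftonline.com/${tenantId}/v2.0/.well-known/openid-configuration`;

async function showOidcMetadata(): Promise<void> {
  const metadata = await (await fetch(metadataUrl)).json();
  // The issuer and endpoints below are what an OIDC token provider relies on.
  console.log(metadata.issuer, metadata.authorization_endpoint, metadata.token_endpoint);
}
```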
When you've completed all these steps, your application should be up and running
## Next steps -- [Configure PingAccess for Azure AD to protect applications published using Microsoft Azure AD Application Proxy](https://docs.pingidentity.com/bundle/pingaccess-60/page/jep1564006742933.html)
+- [Configuring PingAccess to use Azure AD as the token provider](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_configure_pa_to_use_azure_ad_as_the_token_provider)
- [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md) - [Troubleshoot Application Proxy problems and error messages](application-proxy-troubleshoot.md)
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
The following prerequisites are required to complete this tutorial:
- A test environment with Azure AD Connect sync version 1.4.32.0 or later - An OU or group that is in scope of sync and can be used for the pilot. We recommend starting with a small set of objects.-- A server running Windows Server 2012 R2 or later that will host the provisioning agent.
+- A server running Windows Server 2016 or later that will host the provisioning agent.
- Source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID* ## Update Azure AD Connect
Azure AD Connect sync synchronizes changes occurring in your on-premises directo
>If you are running your own custom scheduler for Azure AD Connect sync, then please disable the scheduler. ## Create custom user inbound rule
+In the Azure AD Connect Synchronization Rules editor, you need to create an inbound sync rule that filters out users in the OU you identified previously. The inbound sync rule is a join rule with a target attribute of cloudNoFlow. This rule tells Azure AD Connect not to synchronize attributes for these users. For more information, see the [Migrating to cloud sync](migrate-azure-ad-connect-to-cloud-sync.md) documentation before attempting to migrate your production environment.
1. Launch the synchronization editor from the application menu on the desktop, as shown below:
Azure AD Connect sync synchronizes changes occurring in your on-premises directo
The same steps need to be followed for all object types (user, group, and contact). Repeat the steps per configured AD Connector / per AD forest. ## Create custom user outbound rule
+You'll also need an outbound sync rule with a link type of JoinNoFlow and a scoping filter that has the cloudNoFlow attribute set to True. This rule tells Azure AD Connect not to synchronize attributes for these users. For more information, see the [Migrating to cloud sync](migrate-azure-ad-connect-to-cloud-sync.md) documentation before attempting to migrate your production environment.
1. Select **Outbound** from the drop-down list for Direction and select **Add rule**.
If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md
Use the following steps to configure provisioning:
-1. Sign-in to the Azure AD portal.
-2. Select **Azure Active Directory**
-3. Select **Azure AD Connect**
-4. Select **Manage cloud sync**
-
- ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-
-5. Select **New Configuration**
-
- ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-
-6. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
-
- ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/tutorial-single-forest/configure-2.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX cloud sync screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
+
+ 4. Select **New configuration**.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-1.png" alt-text="Screenshot of adding a configuration." lightbox="media/how-to-configure/new-ux-configure-1.png":::
+ 5. On the configuration screen, select your domain and whether to enable password hash sync. Click **Create**.
+
+ :::image type="content" source="media/how-to-configure/new-ux-configure-2.png" alt-text="Screenshot of a new configuration." lightbox="media/how-to-configure/new-ux-configure-2.png":::
-7. Under **Configure**, select **All users** to change the scope of the configuration rule.
+ 6. The **Get started** screen will open.
- ![Screenshot of Configure screen with "All users" highlighted next to "Scope users".](media/how-to-configure/scope-2.png)
-
-8. On the right, change the scope to include the specific OU you created "OU=CPUsers,DC=contoso,DC=com".
+ :::image type="content" source="media/how-to-configure/new-ux-configure-3.png" alt-text="Screenshot of the getting started screen." lightbox="media/how-to-configure/new-ux-configure-3.png":::
- ![Screenshot of the Scope users screen highlighting the scope changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
-
-9. Select **Done** and **Save**.
-10. The scope should now be set to one organizational unit.
 + 7. On the **Get started** screen, click **Add scoping filters** next to the **Add scoping filters** icon, or click **Scoping filters** on the left under **Manage**.
- ![Screenshot of Configure screen with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-5.png" alt-text="Screenshot of scoping filters." lightbox="media/how-to-configure/new-ux-configure-5.png":::
+
+ 8. Select the scoping filter. For this tutorial select:
+ - **Selected organizational units**: Scopes the configuration to apply to specific OUs.
+ 9. In the box, enter "OU=CPUsers,DC=contoso,DC=com".
+
+ :::image type="content" source="media/tutorial-migrate-aadc-aadccp/configure-1.png" alt-text="Screenshot of the scoping filter." lightbox="media/tutorial-migrate-aadc-aadccp/configure-1.png":::
+
+ 10. Click **Add**. Click **Save**.
-## Verify users are provisioned by cloud sync
-You'll now verify that the users that you had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. This process may take a few hours to complete. To verify users are provisioning by cloud sync, follow these steps:
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
-2. On the left, select **Azure Active Directory**
-3. Select on **Azure AD Connect**
-4. Select on **Manage cloud sync**
-5. Select on **Logs** button
-6. Search for a username to confirm that the user is provisioned by cloud sync
-Additionally, you can verify that the user and group exist in Azure AD.
+
## Start the scheduler
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
Policy 1: All users with the directory role of Global Administrator, accessing t
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**. 1. Under **Cloud apps or actions** > **Include**, select **Select apps**, and select **Microsoft Azure Management**.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and **Require device to be marked as compliant**, then select **Select**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and **Require device to be marked as compliant**, then select **Select**.
1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create and enable your policy.
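Where a policy like Policy 1 is managed as code instead of through the portal, the same configuration can be expressed against the Microsoft Graph Conditional Access API. This is a hedged sketch under that assumption, not the article's procedure; the role, account, and app IDs are placeholders for your tenant's values.

```typescript
// Hedged sketch: a policy equivalent to Policy 1 created through Microsoft Graph
// (POST /identity/conditionalAccess/policies). All IDs are placeholders.
const policy = {
  displayName: "Require MFA and compliant device for Global Administrators in Azure management",
  state: "enabledForReportingButNotEnforced",   // switch to "enabled" once validated
  conditions: {
    clientAppTypes: ["all"],
    users: {
      includeRoles: ["<global-administrator-role-template-id>"],
      excludeUsers: ["<break-glass-account-object-id>"],
    },
    applications: { includeApplications: ["<microsoft-azure-management-app-id>"] },
  },
  grantControls: { operator: "AND", builtInControls: ["mfa", "compliantDevice"] },
};

async function createPolicy(graphToken: string): Promise<void> {
  await fetch("https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies", {
    method: "POST",
    headers: { Authorization: `Bearer ${graphToken}`, "Content-Type": "application/json" },
    body: JSON.stringify(policy),
  });
}
```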
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
User actions are tasks that can be performed by a user. Currently, Conditional A
- **Register security information**: This user action allows Conditional Access policy to enforce when users who are enabled for combined registration attempt to register their security information. More information can be found in the article, [Combined security information registration](../authentication/concept-registration-mfa-sspr-combined.md). -- **Register or join devices**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granularity in configuring multi-factor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action:
- - `Require multi-factor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration.
+- **Register or join devices**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granularity in configuring multifactor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action:
+ - `Require multifactor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration.
- `Client apps`, `Filters for devices` and `Device state` conditions aren't available with this user action since they're dependent on Azure AD device registration to enforce Conditional Access policies.
- - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings).
+ - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multifactor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can be found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings).
## Authentication context
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
This setting has an impact on access attempts made from the following mobile app
- When creating a policy assigned to Exchange ActiveSync clients, **Exchange Online** should be the only cloud application assigned to the policy. - Organizations can narrow the scope of this policy to specific platforms using the **Device platforms** condition.
-If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multi-factor authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication doesnΓÇÖt support these controls.
+If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. If **Multifactor Authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication doesn't support these controls.
For more information, see the following articles:
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The control for blocking access considers any assignments and prevents access ba
Administrators can choose to enforce one or more controls when granting access. These controls include the following options: -- [Require multifactor authentication (Azure AD Multi-Factor Authentication)](../authentication/concept-mfa-howitworks.md)
+- [Require multifactor authentication (Azure AD Multifactor Authentication)](../authentication/concept-mfa-howitworks.md)
- [Require authentication strength (Preview)](#require-authentication-strength-preview) - [Require device to be marked as compliant (Microsoft Intune)](/intune/protect/device-compliance-get-started) - [Require hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md)
When administrators choose to combine these options, they can use the following
By default, Conditional Access requires all selected controls.
-### Require Multi-Factor Authentication
+### Require Multifactor Authentication
-Selecting this checkbox requires users to perform Azure Active Directory (Azure AD) Multi-factor Authentication. You can find more information about deploying Azure AD Multi-Factor Authentication in [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
+Selecting this checkbox requires users to perform Azure Active Directory (Azure AD) Multifactor Authentication. You can find more information about deploying Azure AD Multifactor Authentication in [Planning a cloud-based Azure AD Multifactor Authentication deployment](../authentication/howto-mfa-getstarted.md).
[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies.
You can use the Microsoft Defender for Endpoint app with the approved client app
Organizations can choose to use the device identity as part of their Conditional Access policy. Organizations can require that devices are hybrid Azure AD joined by using this checkbox. For more information about device identities, see [What is a device identity?](../devices/overview.md).
-When you use the [device-code OAuth flow](../develop/v2-oauth2-device-code.md), the required grant control for the managed device or a device state condition isn't supported. This is because the device that is performing authentication can't provide its device state to the device that is providing a code. Also, the device state in the token is locked to the device performing authentication. Use the **Require Multi-Factor Authentication** control instead.
+When you use the [device-code OAuth flow](../develop/v2-oauth2-device-code.md), the required grant control for the managed device or a device state condition isn't supported. This is because the device that is performing authentication can't provide its device state to the device that is providing a code. Also, the device state in the token is locked to the device performing authentication. Use the **Require Multifactor Authentication** control instead.
The **Require hybrid Azure AD joined device** control: - Only supports domain-joined Windows down-level (before Windows 10) and Windows current (Windows 10+) devices.
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
How does an organization create these policies? What is required? How are they a
![Conditional Access (Signals + Decisions + Enforcement = Policies)](./media/concept-conditional-access-policies/conditional-access-signal-decision-enforcement.png)
-Multiple Conditional Access policies may apply to an individual user at any time. In this case, all policies that apply must be satisfied. For example, if one policy requires multi-factor authentication (MFA) and another requires a compliant device, you must complete MFA, and use a compliant device. All assignments are logically **ANDed**. If you've more than one assignment configured, all assignments must be satisfied to trigger a policy.
+Multiple Conditional Access policies may apply to an individual user at any time. In this case, all policies that apply must be satisfied. For example, if one policy requires multifactor authentication (MFA) and another requires a compliant device, you must complete MFA and use a compliant device. All assignments are logically **ANDed**. If you have more than one assignment configured, all assignments must be satisfied to trigger a policy.
If a policy has "Require one of the selected controls" selected, the controls are prompted for in the order defined; as soon as the policy requirements are satisfied, access is granted.
All policies are enforced in two phases:
- Use the session details gathered in phase 1 to identify any requirements that haven't been met. - If there's a policy that is configured to block access, with the block grant control, enforcement will stop here and the user will be blocked. - The user will be prompted to complete more grant control requirements that weren't satisfied during phase 1 in the following order, until policy is satisfied:
- 1. [Multi-factor authenticationΓÇï](concept-conditional-access-grant.md#require-multi-factor-authentication)
 + 1. [Multifactor Authentication](concept-conditional-access-grant.md#require-multifactor-authentication)
2. [Device to be marked as compliant](./concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant) 3. [Hybrid Azure AD joined device](./concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device) 4. [Approved client app](./concept-conditional-access-grant.md#require-approved-client-app)
Block access does just that: it blocks access under the specified assignment
The grant control can trigger enforcement of one or more controls. -- Require multi-factor authentication
+- Require multifactor authentication
- Require device to be marked as compliant (Intune) - Require Hybrid Azure AD joined device - Require approved client app
The article [Common Conditional Access policies](concept-conditional-access-poli
[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
-[Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md)
+[Planning a cloud-based Azure AD Multifactor Authentication deployment](../authentication/howto-mfa-getstarted.md)
[Managing device compliance with Intune](/intune/device-compliance-get-started)
active-directory Concept Conditional Access Policy Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md
Organizations can select individual policy templates and:
- [Require approved client apps or app protection](howto-policy-approved-app-or-app-protection.md) - [Require compliant or hybrid Azure AD joined device or multifactor authentication for all users](howto-conditional-access-policy-compliant-device.md) - [Require compliant or Hybrid Azure AD joined device for administrators](howto-conditional-access-policy-compliant-device-admin.md)-- [Require multi-factor authentication for risky sign-in](howto-conditional-access-policy-risk.md) **Requires Azure AD Premium P2**
+- [Require multifactor authentication for risky sign-in](howto-conditional-access-policy-risk.md) **Requires Azure AD Premium P2**
- [Require multifactor authentication for guest access](howto-policy-guest-mfa.md) - [Require password change for high-risk users](howto-conditional-access-policy-risk-user.md) **Requires Azure AD Premium P2** - [Securing security info registration](howto-conditional-access-policy-registration.md)
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
Previously updated : 04/21/2022 Last updated : 02/27/2023
Within a Conditional Access policy, an administrator can make use of session controls to enable limited experiences within specific cloud applications.
-![Conditional Access policy with a grant control requiring multi-factor authentication](./media/concept-conditional-access-session/conditional-access-session.png)
+![Conditional Access policy with a grant control requiring multifactor authentication](./media/concept-conditional-access-session/conditional-access-session.png)
## Application enforced restrictions
Conditional Access App Control enables user app access and sessions to be monito
- Prevent data exfiltration: You can block the download, cut, copy, and print of sensitive documents on, for example, unmanaged devices. - Protect on download: Instead of blocking the download of sensitive documents, you can require documents to be labeled and protected with Azure Information Protection. This action ensures the document is protected and user access is restricted in a potentially risky session.-- Prevent upload of unlabeled files: Before a sensitive file is uploaded, distributed, and used by others, itΓÇÖs important to make sure that the file has the right label and protection. You can ensure that unlabeled files with sensitive content are blocked from being uploaded until the user classifies the content.
+- Prevent upload of unlabeled files: Before a sensitive file is uploaded, distributed, and used, it's important to make sure that the file has the right label and protection. You can ensure that unlabeled files with sensitive content are blocked from being uploaded until the user classifies the content.
- Monitor user sessions for compliance (Preview): Risky users are monitored when they sign into apps and their actions are logged from within the session. You can investigate and analyze user behavior to understand where, and under what conditions, session policies should be applied in the future. - Block access (Preview): You can granularly block access for specific apps and users depending on several risk factors. For example, you can block them if they're using client certificates as a form of device management. - Block custom activities: Some apps have unique scenarios that carry risk, for example, sending messages with sensitive content in apps like Microsoft Teams or Slack. In these kinds of scenarios, you can scan messages for sensitive content and block them in real time.
For more information, see the article [Configure authentication session manageme
## Disable resilience defaults (Preview)
-During an outage, Azure AD will extend access to existing sessions while enforcing Conditional Access policies. If a policy can't be evaluated, access is determined by resilience settings.
+During an outage, Azure AD extends access to existing sessions while enforcing Conditional Access policies.
If resilience defaults are disabled, access is denied once existing sessions expire. For more information, see the article [Conditional Access: Resilience defaults](resilience-defaults.md).
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
The initial implementation of continuous access evaluation focuses on Exchange,
To prepare your applications to use CAE, see [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md).
-Continuous access evaluation isn't currently available in Azure Government GCC High tenants.
+Continuous access evaluation is available in Azure Government tenants (GCC High and DOD) for Exchange Online.
### Key benefits
Continuous access evaluation is implemented by enabling services, like Exchange
- User Account is deleted or disabled - Password for a user is changed or reset-- Multi-factor authentication is enabled for the user
+- Multifactor Authentication is enabled for the user
- Administrator explicitly revokes all refresh tokens for a user - High user risk detected by Azure AD Identity Protection
Networks and network services used by clients connecting to identity and resourc
CAE only has insight into [IP-based named locations](../conditional-access/location-condition.md#ipv4-and-ipv6-address-ranges). CAE doesn't have insight into other location conditions like [MFA trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) or country-based locations. When a user comes from an MFA trusted IP, trusted location that includes MFA Trusted IPs, or country location, CAE won't be enforced after that user moves to a different location. In those cases, Azure AD will issue a one-hour access token without instant IP enforcement check. > [!IMPORTANT]
-> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country location conditions or the trusted ips feature that is available in Azure AD Multi-Factor Authentication's service settings page.
+> If you want your location policies to be enforced in real time by continuous access evaluation, use only the [IP based Conditional Access location condition](../conditional-access/location-condition.md) and configure all IP addresses, **including both IPv4 and IPv6**, that can be seen by your identity provider and resources provider. Do not use country location conditions or the trusted IPs feature that is available in Azure AD Multifactor Authentication's service settings page.
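A hedged sketch of what "configure all IP addresses, including both IPv4 and IPv6" can look like when named locations are managed through Microsoft Graph; the CIDR ranges below are documentation examples, not values from this article.

```typescript
// Hedged sketch: an IP-based named location that covers both IPv4 and IPv6
// egress ranges, created via Microsoft Graph. CIDR values are placeholders.
const namedLocation = {
  "@odata.type": "#microsoft.graph.ipNamedLocation",
  displayName: "Corporate egress ranges",
  isTrusted: false,
  ipRanges: [
    { "@odata.type": "#microsoft.graph.iPv4CidrRange", cidrAddress: "203.0.113.0/24" },
    { "@odata.type": "#microsoft.graph.iPv6CidrRange", cidrAddress: "2001:db8::/48" },
  ],
};

async function createNamedLocation(graphToken: string): Promise<void> {
  await fetch("https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations", {
    method: "POST",
    headers: { Authorization: `Bearer ${graphToken}`, "Content-Type": "application/json" },
    body: JSON.stringify(namedLocation),
  });
}
```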
### Named location limitations
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
Follow the instructions in the article, [Add or deactivate custom security attri
1. Set **Operator** to **Contains**. 1. Set **Value** to **requireMFA**. 1. Select **Done**.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create your policy.
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
Previously updated : 08/27/2020 Last updated : 02/27/2023 -+
The Conditional Access insights and reporting workbook enables you to understand
To enable the insights and reporting workbook, your tenant must have a Log Analytics workspace to retain sign-in logs data. Users must have Azure AD Premium P1 or P2 licenses to use Conditional Access.
-The following roles can access insights and reporting:
--- Conditional Access Administrator -- Security reader -- Security administrator -- Global Reader -- Global Administrator -
-Users also need one of the following Log Analytics workspace roles:
--- Contributor -- Owner
+Users must have at least the Security Reader role in Azure AD and the Contributor role on the Log Analytics workspace.
### Stream sign-in logs from Azure AD to Azure Monitor logs
-If you haven't integrated Azure AD logs with Azure Monitor logs, you'll need to take the following steps before the workbook will load:
+If you haven't integrated Azure AD logs with Azure Monitor logs, you need to take the following steps before the workbook loads:
1. [Create a Log Analytics workspace in Azure Monitor](../../azure-monitor/logs/quick-create-workspace.md). 1. [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
The insights and reporting dashboard lets you see the impact of one or more Cond
**Conditional Access policy**: Select one or more Conditional Access policies to view their combined impact. Policies are separated into two groups: Enabled and Report-only policies. By default, all Enabled policies are selected. These enabled policies are the policies currently enforced in your tenant.
-**Time range**: Select a time range from 4 hours to as far back as 90 days. If you select a time range further back than when you integrated the Azure AD logs with Azure Monitor, only sign-ins after the time of integration will appear.
+**Time range**: Select a time range from 4 hours to as far back as 90 days. If you select a time range further back than when you integrated the Azure AD logs with Azure Monitor, only sign-ins after the time of integration appear.
**User**: By default, the dashboard shows the impact of the selected policies for all users. To filter by an individual user, type the name of the user into the text field. To filter by all users, type "All users" into the text field or leave the parameter empty. **App**: By default, the dashboard shows the impact of the selected policies for all apps. To filter by an individual app, type the name of the app into the text field. To filter by all apps, type "All apps" into the text field or leave the parameter empty.
-**Data view**: Select whether you want the dashboard to show results in terms of the number of users or number of sign-ins. An individual user may have hundreds of sign-ins to many apps with many different outcomes during a given time range. If you select the data view to be users, a user could be included in both the Success and Failure counts (for example, if there are 10 users, 8 of them could have had a result of success in the past 30 days and 9 of them could have had a result of failure in the past 30 days).
+**Data view**: Select whether you want the dashboard to show results in terms of the number of users or number of sign-ins. An individual user may have hundreds of sign-ins to many apps with many different outcomes during a given time range. If you select the data view to be users, a user could be included in both the Success and Failure counts. For example, if there are 10 users, 8 of them could have had a result of success in the past 30 days and 9 of them could have had a result of failure in the past 30 days.
## Impact summary
Once the parameters have been set, the impact summary loads. The summary shows h
**Failure**: The number of users or sign-ins during the time period where the result of at least one of the selected policies was "Failure" or "Report-only: Failure".
-**User action required**: The number of users or sign-ins during the time period where the combined result of the selected policies was ΓÇ£Report-only: User action requiredΓÇ¥. User action is required when an interactive grant control, such as multifactor authentication is required by a report-only Conditional Access policy. Since interactive grant controls aren't enforced by report-only policies, success or failure can't be determined.
+**User action required**: The number of users or sign-ins during the time period where the combined result of the selected policies was "Report-only: User action required". User action is required when an interactive grant control, such as multifactor authentication, is required. Since interactive grant controls aren't enforced by report-only policies, success or failure can't be determined.
**Not applied**: The number of users or sign-ins during the time period where none of the selected policies applied.
View the breakdown of users or sign-ins for each of the conditions. You can filt
![Workbook sign-in details](./media/howto-conditional-access-insights-reporting/workbook-sign-in-details.png)
-You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query on the left displays the most frequent users. Selecting a user will filter the query to the right.
+You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query on the left displays the most frequent users. Selecting a user filters the query to the right.
> [!NOTE] > When downloading the Sign-ins logs, choose JSON format to include Conditional Access report-only result data.
For more information about how to stream Azure AD sign-in logs to a Log Analytic
### Why are the queries in the workbook failing?
-Customers have noticed that queries sometimes fail if the wrong or multiple workspaces are associated with the workbook. To fix this problem, click **Edit** at the top of the workbook and then the Settings gear. Select and then remove workspaces that aren't associated with the workbook. There should be only one workspace associated with each workbook.
+Customers have noticed that queries sometimes fail if the wrong or multiple workspaces are associated with the workbook. To fix this problem, select **Edit** at the top of the workbook and then the Settings gear. Select and then remove workspaces that aren't associated with the workbook. There should be only one workspace associated with each workbook.
### Why is the Conditional Access policies parameter empty?
-The list of policies is generated by looking at the policies evaluated for the most recent sign-in event. If there are no recent sign-ins in your tenant, you may need to wait a few minutes for the workbook to load the list of Conditional Access policies. This can happen immediately after configuring Log Analytics or may take longer if a tenant doesnΓÇÖt have recent sign-in activity.
+The list of policies is generated by looking at the policies evaluated for the most recent sign-in event. If there are no recent sign-ins in your tenant, you may need to wait a few minutes for the workbook to load the list of Conditional Access policies. Empty results can happen immediately after configuring Log Analytics or if a tenant doesn't have recent sign-in activity.
### Why is the workbook taking a long time to load?
Depending on the time range selected and the size of your tenant, the workbook m
### After loading for a few minutes, why is the workbook returning zero results?
-When the volume of sign-ins exceeds the query capacity of Log Analytics, the workbook will return zero results. Try shortening the time range to 4 hours to see if the workbook loads.
+When the volume of sign-ins exceeds the query capacity of Log Analytics, the workbook returns zero results. Try shortening the time range to 4 hours to see if the workbook loads.
### Can I save my parameter selections?
-You can save your parameter selections at the top of the workbook by going to **Azure Active Directory** > **Workbooks** > **Conditional Access Insights and reporting**. Here you'll find the workbook template, where you can edit the workbook and save a copy to your workspace, including the parameter selections, in **My reports** or **Shared reports**.
+You can save your parameter selections at the top of the workbook by going to **Azure Active Directory** > **Workbooks** > **Conditional Access Insights and reporting**. Here you find the workbook template, where you can edit the workbook and save a copy to your workspace, including the parameter selections, in **My reports** or **Shared reports**.
-### Can I edit and customize the workbook with additional queries?
+### Can I edit and customize the workbook with other queries?
-You can edit and customize the workbook by going to **Azure Active Directory** > **Workbooks** > **Conditional Access Insights and reporting**. Here you'll find the workbook template, where you can edit the workbook and save a copy to your workspace, including the parameter selections, in **My reports** or **Shared reports**. To start editing the queries, click **Edit** at the top of the workbook.
+You can edit and customize the workbook by going to **Azure Active Directory** > **Workbooks** > **Conditional Access Insights and reporting**. Here you find the workbook template, where you can edit the workbook and save a copy to your workspace, including the parameter selections, in **My reports** or **Shared reports**. To start editing the queries, select **Edit** at the top of the workbook.
## Next steps
active-directory Howto Conditional Access Policy Authentication Strength External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md
You can use one of the built-in strengths or create a [custom authentication str
In external user scenarios, the MFA authentication methods that a resource tenant can accept vary depending on whether the user is completing MFA in their home tenant or in the resource tenant. For details, see [Conditional Access authentication strength](https://aka.ms/b2b-auth-strengths). > [!NOTE]
-> Currently, you can only apply authentication strength policies to external users who authenticate with Azure AD. For email one-time passcode, SAML/WS-Fed, and Google federation users, use the [MFA grant control](concept-conditional-access-grant.md#require-multi-factor-authentication) to require MFA.
+> Currently, you can only apply authentication strength policies to external users who authenticate with Azure AD. For email one-time passcode, SAML/WS-Fed, and Google federation users, use the [MFA grant control](concept-conditional-access-grant.md#require-multifactor-authentication) to require MFA.
## Configure cross-tenant access settings to trust MFA
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Supported scenarios:
- Require user reauthentication during [Intune device enrollment](/mem/intune/fundamentals/deployment-guide-enrollment), regardless of their current MFA status.
- Require user reauthentication for risky users with the [require password change](concept-conditional-access-grant.md#require-password-change) grant control.
-- Require user reauthentication for risky sign-ins with the [require multifactor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication) grant control.
+- Require user reauthentication for risky sign-ins with the [require multifactor authentication](concept-conditional-access-grant.md#require-multifactor-authentication) grant control.
When administrators select **Every time**, it will require full reauthentication when the session is evaluated.
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Conditional Access policies are at their most basic an if-then statement combini
![Conceptual Conditional signal plus decision to get enforcement](./media/location-condition/conditional-access-signal-decision-enforcement.png)
+> [!IMPORTANT]
+> [IPv6 is coming to Azure Active Directory (Azure AD)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/ipv6-coming-to-azure-ad/ba-p/2967451). We will begin introducing IPv6 support into Azure AD services in a phased approach, starting April 3, 2023. Organizations that use named locations in Conditional Access or Identity Protection must [take action to avoid possible service impact](/troubleshoot/azure/active-directory/azure-ad-ipv6-support#what-does-my-organization-have-to-do).
+ Organizations can use this location for common tasks like: - Requiring multifactor authentication for users accessing a service when they're off the corporate network.
To define a named location by country, you need to provide:
![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
-If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries to block traffic from countries where they don't do business.
+If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries to block traffic from countries where they don't do business.
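For illustration, a named location defined by country can also be created programmatically through the Microsoft Graph `namedLocations` endpoint. The following is a minimal sketch, assuming you already have an access token with the `Policy.ReadWrite.ConditionalAccess` permission; the display name and country codes are examples, and token acquisition isn't shown.

```python
# Minimal sketch: create a country named location through Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def create_country_named_location(token: str) -> dict:
    body = {
        "@odata.type": "#microsoft.graph.countryNamedLocation",
        "displayName": "Blocked countries example",   # example name
        "countriesAndRegions": ["KP", "IR"],           # ISO 3166-1 alpha-2 codes
        "includeUnknownCountriesAndRegions": False,
    }
    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/namedLocations",
        headers={"Authorization": f"Bearer {token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```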
If you select **Determine location by GPS coordinates**, the user needs to have the Microsoft Authenticator app installed on their mobile device. Every hour, the system contacts the user's Microsoft Authenticator app to collect the GPS location of the user's mobile device.
This option applies to:
#### Multifactor authentication trusted IPs
-Using the trusted IPs section of multifactor authentication's service settings is no longer recommended. This control only accepts IPv4 addresses and should only be used for specific scenarios covered in the article [Configure Azure AD Multi-Factor Authentication settings](../authentication/howto-mfa-mfasettings.md#trusted-ips)
+Using the trusted IPs section of multifactor authentication's service settings is no longer recommended. This control only accepts IPv4 addresses and should only be used for specific scenarios covered in the article [Configure Azure AD Multifactor Authentication settings](../authentication/howto-mfa-mfasettings.md#trusted-ips)
If you have these trusted IPs configured, they show up as **MFA Trusted IPs** in the list of locations for the location condition.
With this option, you can select one or more named locations. For a policy with
## IPv6 traffic
-Conditional Access policies apply to all IPv4 **and** IPv6 traffic.
+Conditional Access policies apply to all IPv4 **and** [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) traffic (starting April 3, 2023).
### Identifying IPv6 traffic with Azure AD Sign-in activity reports
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md
Conditional Access brings signals together, to make decisions, and enforce organ
![Conceptual Conditional signal plus decision to get enforcement](./media/overview/conditional-access-signal-decision-enforcement.png)
-Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to do multi-factor authentication to access it.
+Conditional Access policies at their simplest are if-then statements: if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to do multifactor authentication to access it.
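To make that if-then shape concrete, the sketch below expresses a similar requirement as a Conditional Access policy in Microsoft Graph JSON: *if* members of a placeholder payroll group access a placeholder payroll application, *then* require multifactor authentication. The group and application IDs are placeholders, the policy starts in report-only mode, and token acquisition isn't shown.

```python
# Minimal sketch: a Conditional Access policy as Microsoft Graph JSON that
# requires MFA for one application. All IDs are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

policy = {
    "displayName": "Example: require MFA for the payroll app",
    "state": "enabledForReportingButNotEnforced",   # report-only while testing
    "conditions": {
        "users": {"includeGroups": ["<payroll-managers-group-id>"]},
        "applications": {"includeApplications": ["<payroll-app-id>"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

def create_policy(token: str) -> dict:
    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        json=policy,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```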
Administrators are faced with two primary goals:
Common signals that Conditional Access can take in to account when making a poli
- Application - Users attempting to access specific applications can trigger different Conditional Access policies. - Real-time and calculated risk detection
- - Signals integration with [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify risky sign-in behavior. Policies can then force users to change their password, do multi-factor authentication to reduce their risk level, or block access until an administrator takes manual action.
+ - Signals integration with [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify risky sign-in behavior. Policies can then force users to change their password, do multifactor authentication to reduce their risk level, or block access until an administrator takes manual action.
- [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps) - Enables user application access and sessions to be monitored and controlled in real time, increasing visibility and control over access to and activities done within your cloud environment.
Common signals that Conditional Access can take in to account when making a poli
- Most restrictive decision - Grant access - Least restrictive decision, can still require one or more of the following options:
- - Require multi-factor authentication
+ - Require multifactor authentication
- Require device to be marked as compliant - Require Hybrid Azure AD joined device - Require approved client app
Common signals that Conditional Access can take in to account when making a poli
Many organizations have [common access concerns that Conditional Access policies can help with](concept-conditional-access-policy-common.md) such as:
-- Requiring multi-factor authentication for users with administrative roles
-- Requiring multi-factor authentication for Azure management tasks
+- Requiring multifactor authentication for users with administrative roles
+- Requiring multifactor authentication for Azure management tasks
- Blocking sign-ins for users attempting to use legacy authentication protocols
-- Requiring trusted locations for Azure AD Multi-Factor Authentication registration
+- Requiring trusted locations for Azure AD Multifactor Authentication registration
- Blocking or granting access from specific locations - Blocking risky sign-in behaviors - Requiring organization-managed devices for specific applications
active-directory Service Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/service-dependencies.md
# What are service dependencies in Azure Active Directory Conditional Access?
-With Conditional Access policies, you can specify access requirements to websites and services. For example, your access requirements can include requiring multi-factor authentication (MFA) or [managed devices](require-managed-devices.md).
+With Conditional Access policies, you can specify access requirements to websites and services. For example, your access requirements can include requiring multifactor authentication (MFA) or [managed devices](require-managed-devices.md).
-When you access a site or service directly, the impact of a related policy is typically easy to assess. For example, if you have a policy that requires multi-factor authentication (MFA) for SharePoint Online configured, MFA is enforced for each sign-in to the SharePoint web portal. However, it isn't always straight-forward to assess the impact of a policy because there are cloud apps with dependencies to other cloud apps. For example, Microsoft Teams can provide access to resources in SharePoint Online. So, when you access Microsoft Teams in our current scenario, you're also subject to the SharePoint MFA policy.
+When you access a site or service directly, the impact of a related policy is typically easy to assess. For example, if you have a policy that requires multifactor authentication (MFA) for SharePoint Online configured, MFA is enforced for each sign-in to the SharePoint web portal. However, it isn't always straightforward to assess the impact of a policy because there are cloud apps with dependencies on other cloud apps. For example, Microsoft Teams can provide access to resources in SharePoint Online. So, when you access Microsoft Teams in our current scenario, you're also subject to the SharePoint MFA policy.
> [!TIP] > Using the [Office 365](concept-conditional-access-cloud-apps.md#office-365) app will target all Office apps to avoid issues with service dependencies in the Office stack.
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Azure AD terms of use policies have the following capabilities:
- Require employees or guests to accept your terms of use policy before getting access.
- Require employees or guests to accept your terms of use policy on every device before getting access.
- Require employees or guests to accept your terms of use policy on a recurring schedule.
-- Require employees or guests to accept your terms of use policy before registering security information in Azure AD Multi-Factor Authentication (MFA).
+- Require employees or guests to accept your terms of use policy before registering security information in Azure AD Multifactor Authentication (MFA).
- Require employees to accept your terms of use policy before registering security information in Azure AD self-service password reset (SSPR).
- Present a general terms of use policy for all users in your organization.
- Present specific terms of use policies based on user attributes (such as doctors versus nurses, or domestic versus international employees) by using [dynamic groups](../enterprise-users/groups-dynamic-membership.md).
active-directory Troubleshoot Conditional Access What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access-what-if.md
At any point, you can select **Reset** to clear any criteria input and return to
### Policies that will apply
-This list will show which Conditional Access policies would apply given the conditions. The list will include both the grant and session controls that apply including those from policies in report-only mode. Examples include requiring multi-factor authentication to access a specific application.
+This list will show which Conditional Access policies would apply given the conditions. The list will include both the grant and session controls that apply including those from policies in report-only mode. Examples include requiring multifactor authentication to access a specific application.
### Policies that will not apply
This test could be expanded to incorporate other data points to narrow the scope
* [What is Conditional Access report-only mode?](concept-conditional-access-report-only.md) * [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md) * [What is a device identity?](../devices/overview.md)
-* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md)
+* [How it works: Azure AD Multifactor Authentication](../authentication/concept-mfa-howitworks.md)
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Conditional Access policies have historically applied only to users when they ac
A [workload identity](../develop/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they:
-- Can't perform multi-factor authentication.
+- Can't perform multifactor authentication.
- Often have no formal lifecycle process. - Need to store their credentials or secrets somewhere.
active-directory Active Directory Jwt Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-jwt-claims-customization.md
# Customize claims issued in the JSON web token (JWT) for enterprise applications (Preview)
-The Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the OIDC protocol, the Microsoft identity platform sends a token to the application. And then, the application validates and uses the token to log the user in instead of prompting for a username and password.
+The Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery and custom applications. When a user authenticates to an application
+ through the Microsoft identity platform using the OIDC protocol, the Microsoft identity platform sends a token to the application. And then, the application validates and uses the token to log the user in instead of prompting for a username and password.
These JSON Web tokens (JWT) used by OIDC & OAuth applications (preview) contain pieces of information about the user known as *claims*. A *claim* is information that an identity provider states about a user inside the token they issue for that user.
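Because a JWT is just three base64url-encoded segments, you can peek at the claims during development without any special tooling. The following is a minimal sketch that decodes the payload only; it doesn't validate the signature and shouldn't be used for authorization decisions.

```python
# Minimal sketch: decode a JWT's payload to inspect its claims.
# This does NOT validate the signature; raw_jwt is a placeholder value.
import base64
import json

def decode_claims(raw_jwt: str) -> dict:
    _header, payload_b64, _signature = raw_jwt.split(".")
    # Base64url needs padding restored before decoding.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Example usage:
# claims = decode_claims(raw_jwt)
# print(claims.get("name"), claims.get("preferred_username"))
```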
In an [OIDC response](v2-protocols-oidc.md), *claims* data is typically containe
## View or edit claims
-Besides [optional claims](active-directory-optional-claims.md), you can view, create or edit the attributes and claims issued in the OIDC token to the application. To edit claims, open the application in Azure portal through the Enterprise Applications experience. Then select **Single sign-on** blade in the left-hand menu and open the **Attributes & Claims** section.
+You can view, create or edit the attributes and claims issued in the JWT token to the application. To edit claims, open the application in Azure portal through the Enterprise Applications experience. Then select **Single sign-on** blade in the left-hand menu and open the **Attributes & Claims** section.
:::image type="content" source="./media/active-directory-jwt-claims-customization/attributes-claims.png" alt-text="Screenshot of opening the Attributes & Claims section in the Azure portal.":::
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
Don't use auth context where the app itself is going to be a target of Condition
- [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a web app](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) - [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a web API](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md)
+- [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a React single-page application and an Express web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/3-call-api-acrs)
## Authentication context [ACRs] in Conditional Access expected behavior
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Refresh tokens can be revoked by the server because of a change in credentials,
| Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive | | User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive | | Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |
-| User revokes their refresh tokens [via PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
+| User revokes their refresh tokens [via PowerShell](/powershell/module/microsoft.graph.users.actions/invoke-mginvalidateuserrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked |
| Admin revokes all refresh tokens for a user [via PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked | | Single sign-out [on web](v2-protocols-oidc.md#single-sign-out) | Revoked | Stays alive | Revoked | Stays alive | Stays alive |
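The same kind of revocation can also be triggered through Microsoft Graph with the `revokeSignInSessions` action, which invalidates the refresh tokens issued to a user. The sketch below assumes an access token with an appropriate permission; the user ID and token acquisition are placeholders.

```python
# Minimal sketch: revoke a user's refresh tokens through Microsoft Graph's
# revokeSignInSessions action. user_id and token are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def revoke_sessions(token: str, user_id: str) -> bool:
    resp = requests.post(
        f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", False)   # Graph returns {"value": true}
```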
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
https://login.microsoftonline.com/common/oauth2/v2.0/token
Many modern apps have a single-page app front end written primarily in JavaScript, often with a framework like Angular, React, or Vue. The Microsoft identity platform supports these apps by using the [OpenID Connect](v2-protocols-oidc.md) protocol for authentication and one of two types of authorization grants defined by OAuth 2.0. The supported grant types are either the [OAuth 2.0 implicit grant flow](v2-oauth2-implicit-grant-flow.md) or the more recent [OAuth 2.0 authorization code + PKCE flow](v2-oauth2-auth-code-flow.md) (see below).
-The flow diagram below demonstrates the OAuth 2.0 authorization code grant (with details around PKCE omitted), where the app receives a code from the Microsoft identity platform `authorize` endpoint, and redeems it for an access token and a refresh token using cross-site web requests. The access token expires every 24 hours, and the app must request another code using the refresh token. In addition to the access token, an `id_token` that represents the signed-in user to the client application is typically also requested through the same flow and/or a separate OpenID Connect request (not shown here).
+The flow diagram below demonstrates the OAuth 2.0 authorization code grant (with details around PKCE omitted), where the app receives a code from the Microsoft identity platform `authorize` endpoint, and redeems it for an access token and a refresh token using cross-site web requests. For single-page apps (SPAs), the access token is valid for 1 hour, and once it expires, the app must request another code using the refresh token. In addition to the access token, an `id_token` that represents the signed-in user to the client application is typically also requested through the same flow and/or a separate OpenID Connect request (not shown here).
:::image type="content" source="media/v2-oauth-auth-code-spa/active-directory-oauth-code-spa.svg" alt-text="Diagram showing the OAuth 2 authorization code flow between a single-page app and the security token service endpoint." border="false":::
active-directory Azuread Join Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/azuread-join-sso.md
Title: How SSO to on-premises resources works on Azure AD joined devices | Microsoft Docs
-description: Learn how to extend the SSO experience by configuring hybrid Azure Active Directory joined devices.
+ Title: How SSO to on-premises resources works on Azure AD joined devices
+description: Extend the SSO experience by configuring hybrid Azure Active Directory joined devices.
Previously updated : 02/08/2022 Last updated : 02/27/2023
# How SSO to on-premises resources works on Azure AD joined devices
-It's probably not a surprise that an Azure Active Directory (Azure AD) joined device gives you a single sign-on (SSO) experience to your tenant's cloud apps. If your environment has on-premises Active Directory Domain Services (AD DS), you can also get SSO experience on Azure AD joined devices to resources and applications that rely on on-premises AD.
+Azure Active Directory (Azure AD) joined devices give users a single sign-on (SSO) experience to your tenant's cloud apps. If your environment has on-premises Active Directory Domain Services (AD DS), users can also SSO to resources and applications that rely on on-premises Active Directory Domain Services.
This article explains how this works.
This article explains how this works.
- An [Azure AD joined device](concept-azure-ad-join.md).
- On-premises SSO requires line-of-sight communication with your on-premises AD DS domain controllers. If Azure AD joined devices aren't connected to your organization's network, a VPN or other network infrastructure is required.
-- Azure AD Connect: To synchronize default user attributes like SAM Account Name, Domain Name, and UPN. For more information, see the article [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
+- Azure AD Connect or Azure AD Connect cloud sync: To synchronize default user attributes like SAM Account Name, Domain Name, and UPN. For more information, see the article [Attributes synchronized by Azure AD Connect](../hybrid/reference-connect-sync-attributes-synchronized.md#windows-10).
## How it works
With an Azure AD joined device, your users already have an SSO experience to the
Azure AD joined devices have no knowledge about your on-premises AD DS environment because they aren't joined to it. However, you can provide additional information about your on-premises AD to these devices with Azure AD Connect.
-If you have a hybrid environment, with both Azure AD and on-premises AD DS, it's likely that you already have Azure AD Connect or Azure AD Connect cloud sync deployed to synchronize your on-premises identity information to the cloud. As part of the synchronization process, on-premises user and domain information is synchronized to Azure AD. When a user signs in to an Azure AD joined device in a hybrid environment:
+Azure AD Connect or Azure AD Connect cloud sync synchronize your on-premises identity information to the cloud. As part of the synchronization process, on-premises user and domain information is synchronized to Azure AD. When a user signs in to an Azure AD joined device in a hybrid environment:
1. Azure AD sends the details of the user's on-premises domain back to the device, along with the [Primary Refresh Token](concept-primary-refresh-token.md) 1. The local security authority (LSA) service enables Kerberos and NTLM authentication on the device.
If you have a hybrid environment, with both Azure AD and on-premises AD DS, it's
> > For Windows Hello for Business Hybrid Certificate Trust, see [Using Certificates for AADJ On-premises Single-sign On](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-cert).
-During an access attempt to a resource requesting Kerberos or NTLM in the user's on-premises environment, the device:
+During an access attempt to an on-premises resource requesting Kerberos or NTLM, the device:
1. Sends the on-premises domain information and user credentials to the located DC to get the user authenticated.
-1. Receives a Kerberos [Ticket-Granting Ticket (TGT)](/windows/desktop/secauthn/ticket-granting-tickets) or NTLM token based on the protocol the on-premises resource or application supports. If the attempt to get the Kerberos TGT or NTLM token for the domain fails (related DCLocator timeout can cause a delay), Credential Manager entries are attempted, or the user may receive an authentication pop-up requesting credentials for the target resource.
+1. Receives a Kerberos [Ticket-Granting Ticket (TGT)](/windows/desktop/secauthn/ticket-granting-tickets) or NTLM token based on the protocol the on-premises resource or application supports. If the attempt to get the Kerberos TGT or NTLM token for the domain fails, Credential Manager entries are tried, or the user may receive an authentication pop-up requesting credentials for the target resource. This failure can be related to a delay caused by a DCLocator timeout.
All apps that are configured for **Windows-Integrated authentication** seamlessly get SSO when a user tries to access them.
active-directory Concept Azure Ad Join Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-join-hybrid.md
Hybrid Azure AD joined devices require network line of sight to your on-premises
Use Azure AD hybrid joined devices if:
-- You support down-level devices running 8.1.
+- You support down-level devices running Windows 8.1, Windows Server 2008/R2, 2012/R2, 2016.
- You want to continue to use [Group Policy](/mem/configmgr/comanage/faq#my-environment-has-too-many-group-policy-objects-and-legacy-authenticated-apps--do-i-have-to-use-hybrid-azure-ad-) to manage device configuration. - You want to continue to use existing imaging solutions to deploy and configure devices. - You have Win32 apps deployed to these devices that rely on Active Directory machine authentication.
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
Title: Primary Refresh Token (PRT) and Azure AD - Azure Active Directory
-description: What is the role of and how do we manage the Primary Refresh Token (PRT) in Azure Active Directory?
+ Title: Primary Refresh Token (PRT) and Azure Active Directory
+description: What is the role of and how do we manage the Primary Refresh Token (PRT) in Azure AD?
Previously updated : 02/15/2022 Last updated : 02/27/2023
# What is a Primary Refresh Token?
-A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10 or newer, Windows Server 2016 and later versions, iOS, and Android devices. It is a JSON Web Token (JWT) specially issued to Microsoft first party token brokers to enable single sign-on (SSO) across the applications used on those devices. In this article, we will provide details on how a PRT is issued, used, and protected on Windows 10 or newer devices. We recommend using the latest versions of Windows 10, Windows 11 and Windows Server 2019+ to get the best SSO experience.
+A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10 or newer, Windows Server 2016 and later versions, iOS, and Android devices. It's a JSON Web Token (JWT) specially issued to Microsoft first party token brokers to enable single sign-on (SSO) across the applications used on those devices. In this article, we provide details on how a PRT is issued, used, and protected on Windows 10 or newer devices. We recommend using the latest versions of Windows 10, Windows 11, and Windows Server 2019+ to get the best SSO experience.
This article assumes that you already understand the different device states available in Azure AD and how single sign-on works in Windows 10 or newer. For more information about devices in Azure AD, see the article [What is device management in Azure Active Directory?](overview.md)
The following Windows components play a key role in requesting and using a PRT:
* **Cloud Authentication Provider** (CloudAP): CloudAP is the modern authentication provider for Windows sign in that verifies users logging in to a Windows 10 or newer device. CloudAP provides a plugin framework that identity providers can build on to enable authentication to Windows using that identity provider's credentials.
* **Web Account Manager** (WAM): WAM is the default token broker on Windows 10 or newer devices. WAM also provides a plugin framework that identity providers can build on and enable SSO to their applications relying on that identity provider.
-* **Azure AD CloudAP plugin**: An Azure AD specific plugin built on the CloudAP framework, that verifies user credentials with Azure AD during Windows sign in.
-* **Azure AD WAM plugin**: An Azure AD specific plugin built on the WAM framework, that enables SSO to applications that rely on Azure AD for authentication.
+* **Azure AD CloudAP plugin**: An Azure AD specific plugin built on the CloudAP framework that verifies user credentials with Azure AD during Windows sign in.
+* **Azure AD WAM plugin**: An Azure AD specific plugin built on the WAM framework that enables SSO to applications that rely on Azure AD for authentication.
* **Dsreg**: An Azure AD specific component on Windows 10 or newer, that handles the device registration process for all device states.
-* **Trusted Platform Module** (TPM): A TPM is a hardware component built into a device, that provides hardware-based security functions for user and device secrets. More details can be found in the article [Trusted Platform Module Technology Overview](/windows/security/information-protection/tpm/trusted-platform-module-overview).
+* **Trusted Platform Module** (TPM): A TPM is a hardware component built into a device that provides hardware-based security functions for user and device secrets. More details can be found in the article [Trusted Platform Module Technology Overview](/windows/security/information-protection/tpm/trusted-platform-module-overview).
## What does the PRT contain?
-A PRT contains claims generally contained in any Azure AD refresh token. In addition, there are some device-specific claims included in the PRT. They are as follows:
+A PRT contains claims found in most Azure AD refresh tokens. In addition, there are some device-specific claims included in the PRT. They are as follows:
* **Device ID**: A PRT is issued to a user on a specific device. The device ID claim `deviceID` determines the device the PRT was issued to the user on. This claim is later issued to tokens obtained via the PRT. The device ID claim is used to determine authorization for Conditional Access based on device state or compliance.
* **Session key**: The session key is an encrypted symmetric key, generated by the Azure AD authentication service, issued as part of the PRT. The session key acts as the proof of possession when a PRT is used to obtain tokens for other applications.
### Can I see what's in a PRT?
-A PRT is an opaque blob sent from Azure AD whose contents are not known to any client components. You cannot see whatΓÇÖs inside a PRT.
+A PRT is an opaque blob sent from Azure AD whose contents aren't known to any client components. You can't see what's inside a PRT.
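Although the PRT itself can't be inspected, the access tokens obtained through it are ordinary JWTs, so you can check whether a device claim made it into them. The sketch below decodes the token payload without validating the signature; the claim name `deviceid` is an assumption to verify against your own tokens.

```python
# Minimal sketch: peek at an access token's payload (no signature validation)
# to see whether a device claim is present. The "deviceid" claim name is an
# assumption; confirm it against tokens issued in your tenant.
import base64
import json

def device_id_from_token(access_token: str):
    payload_b64 = access_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("deviceid")
```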
## How is a PRT issued?
The PRT is issued during user authentication on a Windows 10 or newer device in
* Adding an account via the **Allow my organization to manage my device** prompt after signing in to an app (for example, Outlook) * Adding an account from **Settings** > **Accounts** > **Access Work or School** > **Connect**
-In Azure AD registered device scenarios, the Azure AD WAM plugin is the primary authority for the PRT since Windows logon is not happening with this Azure AD account.
+In Azure AD registered device scenarios, the Azure AD WAM plugin is the primary authority for the PRT since Windows logon isn't happening with this Azure AD account.
> [!NOTE] > 3rd party identity providers need to support the WS-Trust protocol to enable PRT issuance on Windows 10 or newer devices. Without WS-Trust, PRT cannot be issued to users on Hybrid Azure AD joined or Azure AD joined devices. On ADFS only usernamemixed endpoints are required. Both adfs/services/trust/2005/windowstransport and adfs/services/trust/13/windowstransport should be enabled as intranet facing endpoints only and **must NOT be exposed** as extranet facing endpoints through the Web Application Proxy.
Once issued, a PRT is valid for 14 days and is continuously renewed as long as t
A PRT is used by two key components in Windows:
-* **Azure AD CloudAP plugin**: During Windows sign in, the Azure AD CloudAP plugin requests a PRT from Azure AD using the credentials provided by the user. It also caches the PRT to enable cached sign in when the user does not have access to an internet connection.
+* **Azure AD CloudAP plugin**: During Windows sign in, the Azure AD CloudAP plugin requests a PRT from Azure AD using the credentials provided by the user. It also caches the PRT to enable cached sign in when the user doesn't have access to an internet connection.
* **Azure AD WAM plugin**: When users try to access applications, the Azure AD WAM plugin uses the PRT to enable SSO on Windows 10 or newer. Azure AD WAM plugin uses the PRT to request refresh and access tokens for applications that rely on WAM for token requests. It also enables SSO on browsers by injecting the PRT into browser requests. Browser SSO in Windows 10 or newer is supported on Microsoft Edge (natively), Chrome (via the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji?hl=en) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb?hl=en) extensions) or Mozilla Firefox v91+ (Firefox [Windows SSO setting](https://support.mozilla.org/kb/windows-sso)) > [!NOTE] > In instances where a user has two accounts from the same Azure AD tenant signed in to a browser application, the device authentication provided by the PRT of the primary account is automatically applied to the second account as well. As a result, the second account also satisfies any device-based Conditional Access policy on the tenant.
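For apps that use MSAL, opting into the broker routes token requests through WAM and therefore benefits from the PRT-based SSO described here. The following is a sketch using MSAL for Python with the `msal[broker]` extra on Windows; the client ID, authority, and scope are placeholders, and the parameter names should be checked against the MSAL version you use.

```python
# Sketch: let a desktop app request tokens through the Windows broker (WAM),
# so silent requests can be satisfied from the PRT. Requires msal[broker] on
# Windows; client ID, authority, and scope are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    enable_broker_on_windows=True,   # route token requests through WAM
)

scopes = ["User.Read"]
accounts = app.get_accounts()

# Try silent acquisition first; with WAM this can succeed without a prompt.
result = app.acquire_token_silent(scopes, account=accounts[0]) if accounts else None
if not result:
    result = app.acquire_token_interactive(
        scopes,
        parent_window_handle=msal.PublicClientApplication.CONSOLE_WINDOW_HANDLE,
    )

print("access token acquired" if result and "access_token" in result else result)
```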
A PRT is used by two key components in Windows:
A PRT is renewed in two different methods:
-* **Azure AD CloudAP plugin every 4 hours**: The CloudAP plugin renews the PRT every 4 hours during Windows sign in. If the user does not have internet connection during that time, CloudAP plugin will renew the PRT after the device is connected to the internet.
+* **Azure AD CloudAP plugin every 4 hours**: The CloudAP plugin renews the PRT every 4 hours during Windows sign in. If the user doesn't have internet connection during that time, CloudAP plugin will renew the PRT after the device is connected to the internet.
* **Azure AD WAM plugin during app token requests**: The WAM plugin enables SSO on Windows 10 or newer devices by enabling silent token requests for applications. The WAM plugin can renew the PRT during these token requests in two different ways:
 * An app requests WAM for an access token silently but there's no refresh token available for that app. In this case, WAM uses the PRT to request a token for the app and gets back a new PRT in the response.
- * An app requests WAM for an access token but the PRT is invalid or Azure AD requires additional authorization (for example, Azure AD Multi-Factor Authentication). In this scenario, WAM initiates an interactive logon requiring the user to reauthenticate or provide additional verification and a new PRT is issued on successful authentication.
+ * An app requests WAM for an access token but the PRT is invalid or Azure AD requires extra authorization (for example, Azure AD Multifactor Authentication). In this scenario, WAM initiates an interactive logon requiring the user to reauthenticate or provide extra verification and a new PRT is issued on successful authentication.
In an ADFS environment, direct line of sight to the domain controller isn't required to renew the PRT. PRT renewal requires only /adfs/services/trust/2005/usernamemixed and /adfs/services/trust/13/usernamemixed endpoints enabled on proxy by using WS-Trust protocol.
Windows transport endpoints are required for password authentication only when a
### Key considerations
-* A PRT is only issued and renewed during native app authentication. A PRT is not renewed or issued during a browser session.
+* A PRT is only issued and renewed during native app authentication. A PRT isn't renewed or issued during a browser session.
* In Azure AD joined and hybrid Azure AD joined devices, the CloudAP plugin is the primary authority for a PRT. If a PRT is renewed during a WAM-based token request, the PRT is sent back to CloudAP plugin, which verifies the validity of the PRT with Azure AD before accepting it. ## How is the PRT protected? A PRT is protected by binding it to the device the user has signed in to. Azure AD and Windows 10 or newer enable PRT protection through the following methods:
-* **During first sign in**: During first sign in, a PRT is issued by signing requests using the device key cryptographically generated during device registration. On a device with a valid and functioning TPM, the device key is secured by the TPM preventing any malicious access. A PRT is not issued if the corresponding device key signature cannot be validated.
-* **During token requests and renewal**: When a PRT is issued, Azure AD also issues an encrypted session key to the device. It is encrypted with the public transport key (tkpub) generated and sent to Azure AD as part of device registration. This session key can only be decrypted by the private transport key (tkpriv) secured by the TPM. The session key is the Proof-of-Possession (POP) key for any requests sent to Azure AD. The session key is also protected by the TPM and no other OS component can access it. Token requests or PRT renewal requests are securely signed by this session key through the TPM and hence, cannot be tampered with. Azure AD will invalidate any requests from the device that are not signed by the corresponding session key.
+* **During first sign in**: During first sign in, a PRT is issued by signing requests using the device key cryptographically generated during device registration. On a device with a valid and functioning TPM, the device key is secured by the TPM preventing any malicious access. A PRT isn't issued if the corresponding device key signature can't be validated.
+* **During token requests and renewal**: When a PRT is issued, Azure AD also issues an encrypted session key to the device. It's encrypted with the public transport key (tkpub) generated and sent to Azure AD as part of device registration. This session key can only be decrypted by the private transport key (tkpriv) secured by the TPM. The session key is the Proof-of-Possession (POP) key for any requests sent to Azure AD. The session key is also protected by the TPM and no other OS component can access it. Token requests or PRT renewal requests are securely signed by this session key through the TPM and hence, can't be tampered with. Azure AD invalidates any requests from the device that aren't signed by the corresponding session key.
-By securing these keys with the TPM, we enhance the security for PRT from malicious actors trying to steal the keys or replay the PRT. So, using a TPM greatly enhances the security of Azure AD Joined, Hybrid Azure AD joined, and Azure AD registered devices against credential theft. For performance and reliability, TPM 2.0 is the recommended version for all Azure AD device registration scenarios on Windows 10 or newer. Starting with the Windows 10, 1903 update, Azure AD does not use TPM 1.2 for any of the above keys due to reliability issues.
+By securing these keys with the TPM, we enhance the security for PRT from malicious actors trying to steal the keys or replay the PRT. So, using a TPM greatly enhances the security of Azure AD Joined, Hybrid Azure AD joined, and Azure AD registered devices against credential theft. For performance and reliability, TPM 2.0 is the recommended version for all Azure AD device registration scenarios on Windows 10 or newer. Starting with the Windows 10, 1903 update, Azure AD doesn't use TPM 1.2 for any of the above keys due to reliability issues.
### How are app tokens and browser cookies protected?
By securing these keys with the TPM, we enhance the security for PRT from malic
**Browser cookies**: In Windows 10 or newer, Azure AD supports browser SSO in Internet Explorer and Microsoft Edge natively, in Google Chrome via the Windows 10 accounts extension and in Mozilla Firefox v91+ via a browser setting. The security is built not only to protect the cookies but also the endpoints to which the cookies are sent. Browser cookies are protected the same way a PRT is, by utilizing the session key to sign and protect the cookies.
-When a user initiates a browser interaction, the browser (or extension) invokes a COM native client host. The native client host ensures that the page is from one of the allowed domains. The browser could send other parameters to the native client host, including a nonce, however the native client host guarantees validation of the hostname. The native client host requests a PRT-cookie from CloudAP plugin, which creates and signs it with the TPM-protected session key. As the PRT-cookie is signed by the session key, it is very difficult to tamper with. This PRT-cookie is included in the request header for Azure AD to validate the device it is originating from. If using the Chrome browser, only the extension explicitly defined in the native client hostΓÇÖs manifest can invoke it preventing arbitrary extensions from making these requests. Once Azure AD validates the PRT cookie, it issues a session cookie to the browser. This session cookie also contains the same session key issued with a PRT. During subsequent requests, the session key is validated effectively binding the cookie to the device and preventing replays from elsewhere.
+When a user initiates a browser interaction, the browser (or extension) invokes a COM native client host. The native client host ensures that the page is from one of the allowed domains. The browser could send other parameters to the native client host, including a nonce; however, the native client host guarantees validation of the hostname. The native client host requests a PRT-cookie from CloudAP plugin, which creates and signs it with the TPM-protected session key. As the PRT-cookie is signed by the session key, it's difficult to tamper with. This PRT-cookie is included in the request header for Azure AD to validate the device it's originating from. If using the Chrome browser, only the extension explicitly defined in the native client host's manifest can invoke it, preventing arbitrary extensions from making these requests. Once Azure AD validates the PRT cookie, it issues a session cookie to the browser. This session cookie also contains the same session key issued with a PRT. During subsequent requests, the session key is validated, effectively binding the cookie to the device and preventing replays from elsewhere.
## When does a PRT get an MFA claim?
-A PRT can get a multi-factor authentication (MFA) claim in specific scenarios. When an MFA-based PRT is used to request tokens for applications, the MFA claim is transferred to those app tokens. This functionality provides a seamless experience to users by preventing MFA challenge for every app that requires it. A PRT can get an MFA claim in the following ways:
+A PRT can get a multifactor authentication (MFA) claim in specific scenarios. When an MFA-based PRT is used to request tokens for applications, the MFA claim is transferred to those app tokens. This functionality provides a seamless experience to users by preventing MFA challenge for every app that requires it. A PRT can get an MFA claim in the following ways:
* **Sign in with Windows Hello for Business**: Windows Hello for Business replaces passwords and uses cryptographic keys to provide strong two-factor authentication. Windows Hello for Business is specific to a user on a device, and itself requires MFA to provision. When a user logs in with Windows Hello for Business, the user's PRT gets an MFA claim. This scenario also applies to users logging in with smartcards if smartcard authentication produces an MFA claim from ADFS.
- * As Windows Hello for Business is considered multi-factor authentication, the MFA claim is updated when the PRT itself is refreshed, so the MFA duration will continually extend when users sign in with Windows Hello for Business.
+ * As Windows Hello for Business is considered multifactor authentication, the MFA claim is updated when the PRT itself is refreshed, so the MFA duration will continually extend when users sign in with Windows Hello for Business.
* **MFA during WAM interactive sign in**: During a token request through WAM, if a user is required to do MFA to access the app, the PRT that is renewed during this interaction is imprinted with an MFA claim.
- * In this case, the MFA claim is not updated continuously, so the MFA duration is based on the lifetime set on the directory.
- * When a previous existing PRT and RT are used for access to an app, the PRT and RT will be regarded as the first proof of authentication. A new AT will be required with a second proof and an imprinted MFA claim. This will also issue a new PRT and RT.
+ * In this case, the MFA claim isn't updated continuously, so the MFA duration is based on the lifetime set on the directory.
+ * When a previous existing PRT and RT are used for access to an app, the PRT and RT are regarded as the first proof of authentication. A new AT is required with a second proof and an imprinted MFA claim. This process also issues a new PRT and RT.
Windows 10 or newer maintains a partitioned list of PRTs for each credential. So, there's a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.
Windows 10 or newer maintain a partitioned list of PRTs for each credential. So,
A PRT is invalidated in the following scenarios:
-* **Invalid user**: If a user is deleted or disabled in Azure AD, their PRT is invalidated and cannot be used to obtain tokens for applications. If a deleted or disabled user already signed in to a device before, cached sign-in would log them in, until CloudAP is aware of their invalid state. Once CloudAP determines that the user is invalid, it blocks subsequent logons. An invalid user is automatically blocked from sign in to new devices that donΓÇÖt have their credentials cached.
-* **Invalid device**: If a device is deleted or disabled in Azure AD, the PRT obtained on that device is invalidated and cannot be used to obtain tokens for other applications. If a user is already signed in to an invalid device, they can continue to do so. But all tokens on the device are invalidated and the user does not have SSO to any resources from that device.
+* **Invalid user**: If a user is deleted or disabled in Azure AD, their PRT is invalidated and can't be used to obtain tokens for applications. If a deleted or disabled user already signed in to a device before, cached sign-in would log them in, until CloudAP is aware of their invalid state. Once CloudAP determines that the user is invalid, it blocks subsequent logons. An invalid user is automatically blocked from sign in to new devices that don't have their credentials cached.
+* **Invalid device**: If a device is deleted or disabled in Azure AD, the PRT obtained on that device is invalidated and can't be used to obtain tokens for other applications. If a user is already signed in to an invalid device, they can continue to do so. But all tokens on the device are invalidated and the user doesn't have SSO to any resources from that device.
* **Password change**: After a user changes their password, the PRT obtained with the previous password is invalidated by Azure AD. Password change results in the user getting a new PRT. This invalidation can happen in two different ways:
- * If user signs in to Windows with their new password, CloudAP discards the old PRT and requests Azure AD to issue a new PRT with their new password. If user does not have an internet connection, the new password cannot be validated, Windows may require the user to enter their old password.
+ * If the user signs in to Windows with their new password, CloudAP discards the old PRT and requests Azure AD to issue a new PRT with their new password. If the user doesn't have an internet connection, the new password can't be validated, and Windows may require the user to enter their old password.
* If a user has logged in with their old password or changed their password after signing into Windows, the old PRT is used for any WAM-based token requests. In this scenario, the user is prompted to reauthenticate during the WAM token request and a new PRT is issued.
-* **TPM issues**: Sometimes, a deviceΓÇÖs TPM can falter or fail, leading to inaccessibility of keys secured by the TPM. In this case, the device is incapable of getting a PRT or requesting tokens using an existing PRT as it cannot prove possession of the cryptographic keys. As a result, any existing PRT is invalidated by Azure AD. When Windows 10 detects a failure, it initiates a recovery flow to re-register the device with new cryptographic keys. With Hybrid Azure Ad join, just like the initial registration, the recovery happens silently without user input. For Azure AD joined or Azure AD registered devices, the recovery needs to be performed by a user who has administrator privileges on the device. In this scenario, the recovery flow is initiated by a Windows prompt that guides the user to successfully recover the device.
+* **TPM issues**: Sometimes, a device's TPM can falter or fail, leading to inaccessibility of keys secured by the TPM. In this case, the device is incapable of getting a PRT or requesting tokens using an existing PRT as it can't prove possession of the cryptographic keys. As a result, any existing PRT is invalidated by Azure AD. When Windows 10 detects a failure, it initiates a recovery flow to re-register the device with new cryptographic keys. With Hybrid Azure AD join, just like the initial registration, the recovery happens silently without user input. For Azure AD joined or Azure AD registered devices, the recovery needs to be performed by a user who has administrator privileges on the device. In this scenario, the recovery flow is initiated by a Windows prompt that guides the user to successfully recover the device.
## Detailed flows
The following diagrams illustrate the underlying details in issuing, renewing, a
| :: | | | A | User enters their password in the sign in UI. LogonUI passes the credentials in an auth buffer to LSA, which in turn passes it internally to CloudAP. CloudAP forwards this request to the CloudAP plugin. | | B | CloudAP plugin initiates a realm discovery request to identify the identity provider for the user. If the user's tenant has a federation provider setup, Azure AD returns the federation provider's Metadata Exchange (MEX) endpoint. If not, Azure AD returns that the user is managed, indicating that the user can authenticate with Azure AD. |
-| C | If the user is managed, CloudAP will get the nonce from Azure AD. If the user is federated, CloudAP plugin requests a SAML token from the federation provider with the userΓÇÖs credentials. Nonce is requested before the SAML token is sent to Azure AD. |
+| C | If the user is managed, CloudAP gets the nonce from Azure AD. If the user is federated, CloudAP plugin requests a SAML token from the federation provider with the user's credentials. Nonce is requested before the SAML token is sent to Azure AD. |
| D | CloudAP plugin constructs the authentication request with the user's credentials, nonce, and a broker scope, signs the request with the Device key (dkpriv) and sends it to Azure AD. In a federated environment, CloudAP plugin uses the SAML token returned by the federation provider instead of the user's credentials. | | E | Azure AD validates the user credentials, the nonce, and device signature, verifies that the device is valid in the tenant and issues the encrypted PRT. Along with the PRT, Azure AD also issues a symmetric key, called the Session key encrypted by Azure AD using the Transport key (tkpub). In addition, the Session key is also embedded in the PRT. This Session key acts as the Proof-of-possession (PoP) key for subsequent requests with the PRT. | | F | CloudAP plugin passes the encrypted PRT and Session key to CloudAP. CloudAP requests the TPM to decrypt the Session key using the Transport key (tkpriv) and re-encrypt it using the TPM's own key. CloudAP stores the encrypted Session key in its cache along with the PRT. |
The following diagrams illustrate the underlying details in issuing, renewing, a
| :: | | | A | An application (for example, Outlook, OneNote etc.) initiates a token request to WAM. WAM, in turn, asks the Azure AD WAM plugin to service the token request. | | B | If a Refresh token for the application is already available, Azure AD WAM plugin uses it to request an access token. To provide proof of device binding, WAM plugin signs the request with the Session key. Azure AD validates the Session key and issues an access token and a new refresh token for the app, encrypted by the Session key. WAM plugin requests CloudAP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache |
-| C | If a Refresh token for the application is not available, Azure AD WAM plugin uses the PRT to request an access token. To provide proof of possession, WAM plugin signs the request containing the PRT with the Session key. Azure AD validates the Session key signature by comparing it against the Session key embedded in the PRT, verifies that the device is valid and issues an access token and a refresh token for the application. in addition, Azure AD can issue a new PRT (based on refresh cycle), all of them encrypted by the Session key. |
-| D | WAM plugin requests CloudAP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache. WAM plugin will use the refresh token going forward for this application. WAM plugin also gives back the new PRT to CloudAP plugin, which validates the PRT with Azure AD before updating it in its own cache. CloudAP plugin will use the new PRT going forward. |
+| C | If a Refresh token for the application isn't available, Azure AD WAM plugin uses the PRT to request an access token. To provide proof of possession, WAM plugin signs the request containing the PRT with the Session key. Azure AD validates the Session key signature by comparing it against the Session key embedded in the PRT, verifies that the device is valid and issues an access token and a refresh token for the application. In addition, Azure AD can issue a new PRT (based on refresh cycle), all of them encrypted by the Session key. |
+| D | WAM plugin requests CloudAP plugin to decrypt the tokens, which, in turn, requests the TPM to decrypt using the Session key, resulting in WAM plugin getting both the tokens. Next, WAM plugin provides only the access token to the application, while it re-encrypts the refresh token with DPAPI and stores it in its own cache. WAM plugin uses the refresh token going forward for this application. WAM plugin also gives back the new PRT to CloudAP plugin, which validates the PRT with Azure AD before updating it in its own cache. CloudAP plugin uses the new PRT going forward. |
| E | Azure AD WAM plugin provides the newly issued access token to WAM, which, in turn, provides it back to the calling application. | ### Browser SSO using PRT
The following diagrams illustrate the underlying details in issuing, renewing, a
| A | User logs in to Windows with their credentials to get a PRT. Once the user opens the browser, the browser (or extension) loads the URLs from the registry. | | B | When a user opens an Azure AD login URL, the browser or extension validates the URL against the ones obtained from the registry. If they match, the browser invokes the native client host to get a token. | | C | The native client host validates that the URLs belong to the Microsoft identity providers (Microsoft account or Azure AD), extracts the nonce from the URL, and makes a call to the CloudAP plugin to get a PRT cookie. |
-| D | The CloudAP plugin will create the PRT cookie, sign in with the TPM-bound session key and send it back to the native client host. |
-| E | The native client host will return this PRT cookie to the browser, which will include it as part of the request header called x-ms-RefreshTokenCredential and request tokens from Azure AD. |
+| D | The CloudAP plugin creates the PRT cookie, signs it with the TPM-bound session key, and sends it back to the native client host. |
+| E | The native client host returns this PRT cookie to the browser, which includes it as part of the request header called x-ms-RefreshTokenCredential and requests tokens from Azure AD. |
| F | Azure AD validates the Session key signature on the PRT cookie, validates the nonce, verifies that the device is valid in the tenant, and issues an ID token for the web page and an encrypted session cookie for the browser. | > [!NOTE]
-> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts or Office Online extensions) or in private mode in Mozilla Firefox v91+
+> The Browser SSO flow described in the previous steps doesn't apply to sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts or Office Online extensions), or private mode in Mozilla Firefox v91+.
## Next steps
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
This option is a premium edition capability available through products like Azur
- **Users may register their devices with Azure AD**: You need to configure this setting to allow users to register Windows 10 or newer personal, iOS, Android, and macOS devices with Azure AD. If you select **None**, devices aren't allowed to register with Azure AD. Enrollment with Microsoft Intune or mobile device management for Microsoft 365 requires registration. If you've configured either of these services, **ALL** is selected, and **NONE** is unavailable. - **Require Multi-Factor Authentication to register or join devices with Azure AD**: - We recommend organizations use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication. You must configure this toggle to **No** if you use a Conditional Access policy to require multifactor authentication.
- - This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
+ - This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multifactor Authentication services, see [getting started with Azure AD Multifactor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
> [!NOTE] > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
active-directory Howto Hybrid Join Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-join-verify.md
Previously updated : 04/06/2022 Last updated : 02/27/2023
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Another MFA-related error message is the one described previously: "Your credent
![Screenshot of the message that says your credentials didn't work.](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-If you've configured a legacy per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting and you see the error above, you can resolve the problem by removing the per-user MFA setting through these commands:
+If you've configured a legacy per-user **Enabled/Enforced Azure AD Multifactor Authentication** setting and you see the error above, you can resolve the problem by removing the per-user MFA setting through these commands:
``` # Get StrongAuthenticationRequirements configured on a user
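(Get-MsolUser -UserPrincipalName user@contoso.com).StrongAuthenticationRequirements

# Clear the per-user MFA requirement (illustrative sketch: assumes the MSOnline PowerShell module,
# an existing Connect-MsolService session, and a placeholder user principal name)
Set-MsolUser -UserPrincipalName user@contoso.com -StrongAuthenticationRequirements @()

# Confirm the requirement is now cleared
(Get-MsolUser -UserPrincipalName user@contoso.com).StrongAuthenticationRequirements
```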
active-directory Hybrid Azuread Join Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-control.md
Previously updated : 04/06/2022 Last updated : 02/27/2023
# Hybrid Azure AD join targeted deployment
-You can validate your [planning and prerequisites](hybrid-azuread-join-plan.md) for hybrid Azure AD joining devices using a targeted deployment before enabling it across the entire organization. This article will explain how to accomplish a targeted deployment of hybrid Azure AD join.
+You can validate your [planning and prerequisites](hybrid-azuread-join-plan.md) for hybrid Azure AD joining devices using a targeted deployment before enabling it across the entire organization. This article explains how to accomplish a targeted deployment of hybrid Azure AD join.
## Targeted deployment of hybrid Azure AD join on Windows current devices
To control the device registration, you should deploy the Windows Installer pack
## Why a device might be in a pending state
-When you configure a **Hybrid Azure AD join** task in the Azure AD Connect Sync for your on-premises devices, the task will sync the device objects to Azure AD, and temporarily set the registered state of the devices to "pending" before the device completes the device registration. This is because the device must be added to the Azure AD directory before it can be registered. For more information about the device registration process, see [How it works: Device registration](device-registration-how-it-works.md#hybrid-azure-ad-joined-in-managed-environments).
+When you configure a **Hybrid Azure AD join** task in Azure AD Connect Sync for your on-premises devices, the task syncs device objects to Azure AD, and temporarily sets the registered state of the devices to "pending" before the device completes the device registration. This pending state is because the device must be added to the Azure AD directory before it can be registered. For more information about the device registration process, see [How it works: Device registration](device-registration-how-it-works.md#hybrid-azure-ad-joined-in-managed-environments).
## Post validation
-After you verify that everything works as expected, you can automatically register the rest of your Windows current and down-level devices with Azure AD by [configuring the SCP using Azure AD Connect](hybrid-azuread-join-managed-domains.md#configure-hybrid-azure-ad-join).
+After you verify that everything works as expected, you can automatically register the rest of your Windows current and down-level devices with Azure AD. Automate hybrid Azure AD join by [configuring the SCP using Azure AD Connect](hybrid-azuread-join-managed-domains.md#configure-hybrid-azure-ad-join).
## Next steps - [Plan your hybrid Azure Active Directory join implementation](hybrid-azuread-join-plan.md) - [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md) - [Configure hybrid Azure Active Directory join manually](hybrid-azuread-join-manual.md)-- [Use Conditional Access to require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md)
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
You can enable users to create and manage their own security groups or Microsoft
## Self-service group membership
-You can allow users to create security groups, which are used to manage access to shared resources. Security groups can be created by users in Azure portals, using Azure AD PowerShell, or from the [MyApps Groups Access panel](https://account.activedirectory.windowsazure.com/r#/groups). Only the group's owners can update membership, but you can provide group owners the ability to approve or deny membership requests from the MyApp Groups Access panel. Security groups created by self-service through the MyApps Groups Access panel are available to join for all users, whether owner-approved or auto-approved. In the MyApps Groups Access panel, you can change membership options when you create the group.
+You can allow users to create security groups, which are used to manage access to shared resources. Security groups can be created by users in Azure portals, using Azure AD PowerShell, or from the [MyApps Groups Access panel](https://account.activedirectory.windowsazure.com/r#/groups). Only the group's owners can update membership, but you can provide group owners the ability to approve or deny membership requests from the MyApps Groups Access panel. Security groups created by self-service through the MyApps Groups Access panel are available to join for all users, whether owner-approved or auto-approved. In the MyApps Groups Access panel, you can change membership options when you create the group.
-Microsoft 365 groups, which provide collaboration opportunities for your users, can be created in any of the Microsoft 365 applications, such as SharePoint, Microsoft Teams, and Planner. Microsoft 365 groups can also be created in Azure portals, using Azure AD PowerShell, or from the MyApp Groups Access panel. For more information on the difference between security groups and Microsoft 365 groups, see [Learn about groups](../fundamentals/concept-learn-about-groups.md#what-to-know-before-creating-a-group)
+Microsoft 365 groups, which provide collaboration opportunities for your users, can be created in any of the Microsoft 365 applications, such as SharePoint, Microsoft Teams, and Planner. Microsoft 365 groups can also be created in Azure portals, using Azure AD PowerShell, or from the MyApps Groups Access panel. For more information on the difference between security groups and Microsoft 365 groups, see [Learn about groups](../fundamentals/concept-learn-about-groups.md#what-to-know-before-creating-a-group)
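As a minimal sketch of the Azure AD PowerShell option mentioned above (the group name, mail nickname, and owner UPN are placeholders, not values from this article), creating an owner-managed security group might look like this:

```
# Sign in with an account that's allowed to create groups
Connect-AzureAD

# Create a security group (not mail-enabled); by default, only its owners can add members
$group = New-AzureADGroup -DisplayName "Contoso Marketing" -MailEnabled $false -SecurityEnabled $true -MailNickName "ContosoMarketing"

# Add an owner who can then approve or deny membership requests
$owner = Get-AzureADUser -ObjectId "owner@contoso.com"
Add-AzureADGroupOwner -ObjectId $group.ObjectId -RefObjectId $owner.ObjectId
```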
Groups created in | Security group default behavior | Microsoft 365 group default behavior | - | [Azure AD PowerShell](../enterprise-users/groups-settings-cmdlets.md) | Only owners can add members<br>Visible but not available to join in MyApp Groups Access panel | Open to join for all users
-[Azure portal](https://portal.azure.com) | Only owners can add members<br>Visible but not available to join in MyApp Groups Access panel<br>Owner is not assigned automatically at group creation | Open to join for all users
+[Azure portal](https://portal.azure.com) | Only owners can add members<br>Visible but not available to join in MyApps Groups Access panel<br>Owner is not assigned automatically at group creation | Open to join for all users
[MyApps Groups Access panel](https://account.activedirectory.windowsazure.com/r#/joinGroups) | Open to join for all users<br>Membership options can be changed when the group is created | Open to join for all users<br>Membership options can be changed when the group is created ## Self-service group management scenarios
Groups created in | Security group default behavior | Microsoft 365 group defaul
* **Delegated group management** An example is an administrator who is managing access to a Software as a Service (SaaS) application that the company is using. Managing these access rights is becoming cumbersome, so this administrator asks the business owner to create a new group. The administrator assigns access for the application to the new group, and adds to the group all people already accessing the application. The business owner then can add more users, and those users are automatically provisioned to the application. The business owner doesn't need to wait for the administrator to manage access for users. If the administrator grants the same permission to a manager in a different business group, that person can also manage access for their own group members. Neither the business owner nor the manager can view or manage each other's group memberships. The administrator can still see all users who have access to the application and block access rights if needed. * **Self-service group management**
- An example of this scenario is two users who both have SharePoint Online sites that they set up independently. They want to give each other's teams access to their sites. To accomplish this, they can create one group in Azure AD, and in SharePoint Online each of them selects that group to provide access to their sites. When someone wants access, they request it from the MyApp Groups Access Panel, and after approval they get access to both SharePoint Online sites automatically. Later, one of them decides that all people accessing the site should also get access to a particular SaaS application. The administrator of the SaaS application can add access rights for the application to the SharePoint Online site. From then on, any requests that get approved give access to the two SharePoint Online sites and also to this SaaS application.
+ An example of this scenario is two users who both have SharePoint Online sites that they set up independently. They want to give each other's teams access to their sites. To accomplish this, they can create one group in Azure AD, and in SharePoint Online each of them selects that group to provide access to their sites. When someone wants access, they request it from the MyApps Groups Access Panel, and after approval they get access to both SharePoint Online sites automatically. Later, one of them decides that all people accessing the site should also get access to a particular SaaS application. The administrator of the SaaS application can add access rights for the application to the SharePoint Online site. From then on, any requests that get approved give access to the two SharePoint Online sites and also to this SaaS application.
## Make a group available for user self-service
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
Title: Tutorial for bulk inviting B2B collaboration users - Azure AD
+ Title: Bulk invite guest users for B2B collaboration tutorial - Azure AD
description: In this tutorial, you learn how to send bulk invitations using a CSV file to external Azure AD B2B collaboration users. Previously updated : 10/24/2022 Last updated : 02/28/2023
# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user. -+ # Tutorial: Bulk invite Azure AD B2B collaboration users
-If you use [Azure Active Directory (Azure AD) B2B collaboration](what-is-b2b.md) to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use the Azure portal to send bulk invitations to external users. Specifically, you'll follow these steps:
+If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use the Azure portal to send bulk invitations to external users. Specifically, you'll follow these steps:
> [!div class="checklist"] >
For example: `Remove-MgUser -UserId "lstokes_fabrikam.com#EXT#@contoso.onmicroso
## Next steps
+- [Bulk invite guest users via PowerShell](bulk-invite-powershell.md)
- [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md) - [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)-- [Billing model for guest user collaboration usage](external-identities-pricing.md#about-monthly-active-users-mau-billing)+
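One building block for inviting guests programmatically is the invitation API; as a minimal, hedged sketch (assuming the Microsoft Graph PowerShell SDK and placeholder email and redirect values), a single guest invitation looks like this:

```
# Sign in with permission to invite guest users
Connect-MgGraph -Scopes "User.Invite.All"

# Invite one external user and send the invitation email
New-MgInvitation -InvitedUserEmailAddress "guest@fabrikam.com" -InviteRedirectUrl "https://myapps.microsoft.com" -SendInvitationMessage
```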
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
Previously updated : 02/23/2023 Last updated : 02/28/2023
This article is number 9 in a series of 10 articles. We recommend you review the
Sharing in Microsoft 365 is partially governed by the **External Identities, External collaboration settings** in Azure Active Directory (Azure AD). If external sharing is disabled or restricted in Azure AD, it overrides sharing settings configured in Microsoft 365. An exception is if Azure AD B2B integration isn't enabled. You can configure SharePoint and OneDrive to support ad-hoc sharing via one-time password (OTP). The following screenshot shows the External Identities, External collaboration settings dialog.
- ![Screenshot of options and entries under External Identities, External collaboration settings.](media/secure-external-access/9-external-collaboration-settings.png)
Learn more:
Guest users are invited to have access to resources.
3. Under **Categories**, select **Identity**. 4. From the list, select **External Identities**. 5. Select **External collaboration settings**.
-6. Find the **Guest user access** option.
-
-To prevent guest-user access to other guest-user details, and to prevent enumeration of group membership, select **Guest users have limited access to properties and memberships of directory objects**.
+6. Find the **Guest user access** options.
+7. To prevent guest-user access to other guest-user details, and to prevent enumeration of group membership, select **Guest users have limited access to properties and memberships of directory objects**.
### Guest invite settings
Guest invite settings determine who invites guests and how guests are invited. T
* Confirms access reviews occur * Removes users added to SharePoint
-1. Select **Email one-time passcodes for guests**.
+1. Select the banner for **Email one-time passcodes for guests**.
2. For **Enable guest self-service sign up via user flows**, select **Yes**. ### Collaboration restrictions For the Collaboration restrictions option, the organization's business requirements dictate the choice of invitation.
-* **Allow invitations to be sent to any domain** - any user can be invited
+* **Allow invitations to be sent to any domain (most inclusive)** - any user can be invited
* **Deny invitations to the specified domains** - any user outside those domains can be invited
-* **Allow invitations only to the specified domains** - any user outside those domains can't be invited
+* **Allow invitations only to the specified domains (most restrictive)** - any user outside those domains can't be invited
## External users and guest users in Teams
active-directory Active Directory Deployment Checklist P2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-checklist-p2.md
- Title: Azure AD deployment checklist
-description: Azure Active Directory feature deployment checklist
----- Previously updated : 01/23/2023--------
-# Azure Active Directory feature deployment guide
-
-It can seem scary to deploy Azure Active Directory (Azure AD) for your organization and keep it secure. This article identifies common tasks that customers find helpful to complete in phases, over the course of 30, 60, 90 days, or more, to enhance their security posture. Even organizations who have already deployed Azure AD can use this guide to ensure they're getting the most out of their investment.
-
-A well-planned and executed identity infrastructure paves the way for secure access to your productivity workloads and data by known users and devices only.
-
-Additionally customers can check their [identity secure score](identity-secure-score.md) to see how aligned they're to Microsoft best practices. Check your secure score before and after implementing these recommendations to see how well you're doing compared to others in your industry and to other organizations of your size.
-
-## Prerequisites
-
-Many of the recommendations in this guide can be implemented with Azure AD Free or no license at all. Where licenses are required we state which license is required at minimum to accomplish the task.
-
-Additional information about licensing can be found on the following pages:
-
-* [Azure AD licensing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)
-* [Microsoft 365 Enterprise](https://www.microsoft.com/licensing/product-licensing/microsoft-365-enterprise)
-* [Enterprise Mobility + Security](https://www.microsoft.com/licensing/product-licensing/enterprise-mobility-security)
-* [Azure AD External Identities pricing](../external-identities/external-identities-pricing.md)
-
-## Phase 1: Build a foundation of security
-
-In this phase, administrators enable baseline security features to create a more secure and easy to use foundation in Azure AD before we import or create normal user accounts. This foundational phase ensures you are in a more secure state from the start and that your end-users only have to be introduced to new concepts one time.
-
-| Task | Detail | Required license |
-| - | | - |
-| [Create more than one global administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent global administrator accounts for use in an emergency. These accounts aren't to be used daily and should have long and complex passwords. | Azure AD Free |
-| [Use non-global administrative roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be global administrators. | Azure AD Free |
-| [Enable Privileged Identity Management for tracking admin role use](../privileged-identity-management/pim-getting-started.md) | Enable Privileged Identity Management to start tracking administrative role usage. | Azure AD Premium P2 |
-| [Roll out self-service password reset](../authentication/howto-sspr-deployment.md) | Reduce helpdesk calls for password resets by allowing staff to reset their own passwords using policies you as an administrator control. | Azure AD Premium P1 |
-| [Create an organization specific custom banned password list](../authentication/tutorial-configure-custom-password-protection.md) | Prevent users from creating passwords that include common words or phrases from your organization or area. | Azure AD Premium P1 |
-| [Enable on-premises integration with Azure AD password protection](../authentication/concept-password-ban-bad-on-premises.md) | Extend the banned password list to your on-premises directory, to ensure passwords set on-premises are also in compliance with the global and tenant-specific banned password lists. | Azure AD Premium P1 |
-| [Enable Microsoft's password guidance](https://www.microsoft.com/research/publication/password-guidance/) | Stop requiring users to change their password on a set schedule, disable complexity requirements, and your users are more apt to remember their passwords and keep them something that is secure. | Azure AD Free |
-| [Disable periodic password resets for cloud-based user accounts](../authentication/concept-sspr-policy.md#set-a-password-to-never-expire) | Periodic password resets encourage your users to increment their existing passwords. Use the guidelines in Microsoft's password guidance doc and mirror your on-premises policy to cloud-only users. | Azure AD Free |
-| [Customize Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) | Stop lockouts from cloud-based users from being replicated to on-premises Active Directory users | Azure AD Premium P1 |
-| [Enable Extranet Smart Lockout for AD FS](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) | AD FS extranet lockout protects against brute force password-guessing attacks, while letting valid AD FS users continue to use their accounts. | |
-| [Block legacy authentication to Azure AD with Conditional Access](../conditional-access/block-legacy-authentication.md) | Block legacy authentication protocols like POP, SMTP, IMAP, and MAPI that can't enforce Multi-Factor Authentication, making them a preferred entry point for adversaries. | Azure AD Premium P1 |
-| [Deploy Azure AD Multi-Factor Authentication using Conditional Access policies](../authentication/howto-mfa-getstarted.md) | Require users to do two-step verification when accessing sensitive applications using Conditional Access policies. | Azure AD Premium P1 |
-| [Enable Azure Active Directory Identity Protection](../identity-protection/overview-identity-protection.md) | Enable tracking of risky sign-ins and compromised credentials for users in your organization. | Azure AD Premium P2 |
-| [Use risk detections to trigger multi-factor authentication and password changes](../authentication/tutorial-risk-based-sspr-mfa.md) | Enable automation that can trigger events such as multi-factor authentication, password reset, and blocking of sign-ins based on risk. | Azure AD Premium P2 |
-| [Enable combined registration for self-service password reset and Azure AD Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md) | Allow your users to register from one common experience for both Azure AD Multi-Factor Authentication and self-service password reset. | Azure AD Premium P1 |
-
-## Phase 2: Import users, enable synchronization, and manage devices
-
-Next, we add to the foundation laid in phase 1 by importing our users and enabling synchronization, planning for guest access, and preparing to support more functionality.
-
-| Task | Detail | Required license |
-| - | | - |
-| [Install Azure AD Connect](../hybrid/how-to-connect-install-select-installation.md) | Prepare to synchronize users from your existing on-premises directory to the cloud. | Azure AD Free |
-| [Implement Password Hash Sync](../hybrid/how-to-connect-password-hash-synchronization.md) | Synchronize password hashes to allow password changes to be replicated, bad password detection and remediation, and leaked credential reporting. | Azure AD Free |
-| [Implement Password Writeback](../authentication/tutorial-enable-sspr-writeback.md) | Allow password changes in the cloud to be written back to an on-premises Windows Server Active Directory environment. | Azure AD Premium P1 |
-| [Implement Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md#what-is-azure-ad-connect-health) | Enable monitoring of key health statistics for your Azure AD Connect servers, AD FS servers, and domain controllers. | Azure AD Premium P1 |
-| [Assign licenses to users by group membership in Azure Active Directory](../enterprise-users/licensing-groups-assign.md) | Save time and effort by creating licensing groups that enable or disable features by group instead of setting per user. | Azure AD Premium P1 |
-| [Create a plan for guest user access](../external-identities/what-is-b2b.md) | Collaborate with guest users by letting them sign in to your apps and services with their own work, school, or social identities. | [Azure AD External Identities pricing](../external-identities/external-identities-pricing.md) |
-| [Decide on device management strategy](../devices/overview.md) | Decide what your organization allows regarding devices. Registering vs joining, Bring Your Own Device vs company provided. | |
-| [Deploy Windows Hello for Business in your organization](/windows/security/identity-protection/hello-for-business/hello-manage-in-organization) | Prepare for passwordless authentication using Windows Hello | |
-| [Deploy passwordless authentication methods for your users](../authentication/concept-authentication-passwordless.md) | Provide your users with convenient passwordless authentication methods | Azure AD Premium P1 |
-| [Configure cross-tenant synchronization (preview)](../multi-tenant-organizations/cross-tenant-synchronization-configure.md) | For multi-tenant organization scenarios, enable users to collaborate across tenants. (Currently in preview.) | Azure AD Premium P1 |
-
-## Phase 3: Manage applications
-
-As we continue to build on the previous phases, we identify candidate applications for migration and integration with Azure AD and complete the setup of those applications.
-
-| Task | Detail | Required license |
-| - | | - |
-| Identify your applications | Identify applications in use in your organization: on-premises, SaaS applications in the cloud, and other line-of-business applications. Determine if these applications can and should be managed with Azure AD. | No license required |
-| [Integrate supported SaaS applications in the gallery](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. | Azure AD Free |
-| [Use Application Proxy to integrate on-premises applications](../app-proxy/application-proxy-add-on-premises-application.md) | Application Proxy enables users to access on-premises applications by signing in with their Azure AD account. | Azure AD Premium P1 |
-
-## Phase 4: Audit privileged identities, complete an access review, and manage user lifecycle
-
-Phase 4 sees administrators enforcing least privilege principles for administration, completing their first access reviews, and enabling automation of common user lifecycle tasks.
-
-| Task | Detail | Required license |
-| - | | - |
-| [Enforce the use of Privileged Identity Management](../privileged-identity-management/pim-security-wizard.md) | Remove administrative roles from normal day-to-day user accounts. Make administrative users eligible to use their role after succeeding a multi-factor authentication check, provide a business justification, or request approval from approvers. | Azure AD Premium P2 |
-| [Complete an access review for Azure AD directory roles in PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md) | Work with your security and leadership teams to create an access review policy to review administrative access based on your organization's policies. | Azure AD Premium P2 |
-| [Implement dynamic group membership policies](../enterprise-users/groups-dynamic-membership.md) | Use dynamic groups to automatically assign users to groups based on their attributes from HR (or your source of truth), such as department, title, region, and other attributes. | Azure AD Premium P1 |
-| [Implement group based application provisioning](../manage-apps/what-is-access-management.md) | Use group-based access management provisioning to automatically provision users for SaaS applications. | Azure AD Premium P1 |
-| [Automate user provisioning and deprovisioning](../app-provisioning/user-provisioning.md) | Remove manual steps from your employee account lifecycle to prevent unauthorized access. Synchronize identities from your source of truth (HR System) to Azure AD. | Azure AD Premium P1 |
-
-## Next steps
-
-[Azure AD licensing and pricing details](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)
-
-[Identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations)
-
-[Common recommended identity and device access policies](/microsoft-365/enterprise/identity-access-policies)
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
Title: Rapidly respond to secure identities with Azure Active Directory
-description: Rapidly respond to threats with Azure AD cloud-based identities
+ Title: Secure your organization's identities with Azure AD
+description: Improve your security posture and empower users with Microsoft Azure AD.
Previously updated : 04/27/2020 Last updated : 02/27/2023 -+ +
-# Rapidly respond to secure identities with Azure AD
+# Secure your organization's identities with Azure AD
-It can seem daunting trying to secure your workers in today's world, especially when you have to respond rapidly and provide access to many services quickly. This article is meant to provide a concise list of all the actions to take, helping you identify and prioritize which order to deploy the Azure AD features based on the license type you own. Azure AD offers many features and provides many layers of security for your Identities, navigating which feature is relevant can sometimes be overwhelming. Many organizations are already in the cloud or moving quickly to the cloud, this document is intended to allow you to deploy services quickly, with securing your identities as the primary consideration.
+It can seem daunting trying to secure your workers in today's world, especially when you have to respond rapidly and provide access to many services quickly. This article is meant to provide a concise list of all the actions to take, helping you identify and prioritize the order in which to deploy Azure Active Directory (Azure AD) features based on the license type you own. Azure AD offers many features and provides many layers of security for your identities; navigating which feature is relevant can sometimes be overwhelming. This document is intended to help organizations deploy services quickly, with secure identities as the primary consideration.
-Each table provides a consistent security recommendation, protecting both Administrator and User identities from the main security attacks (breach replay, phishing, and password spray) while minimizing the user impact and improving the user experience.
+Each table provides a consistent security recommendation, protecting identities from common security attacks while minimizing user friction.
-The guidance will also allow administrators to configure access to SaaS and on-premises applications in a secure and protected manner and is applicable to either cloud or hybrid (synced) identities and applies to users working remotely or in the office.
+The guidance helps you:
-This checklist will help you quickly deploy critical recommended actions to protect your organization immediately by explaining how to:
--- Strengthen your credentials.-- Reduce your attack surface area.-- Automate threat response.-- Utilize cloud intelligence.-- Enable end-user self-service.
+- Configure access to SaaS and on-premises applications in a secure and protected manner.
+- Protect both cloud and hybrid identities.
+- Support users working remotely or in the office.
## Prerequisites This guide assumes that your cloud-only or hybrid identities have been established in Azure AD already. For help with choosing your identity type, see [Choose the right authentication method for your Azure Active Directory hybrid identity solution](../hybrid/choose-ad-authn.md).
-## Summary
-
-There are many aspects to a secure identity infrastructure, but this checklist focuses on a safe and secure identity infrastructure enabling users to work remotely. Securing your identity is just part of your security story, protecting data, applications, and devices should also be considered.
-
-### Guidance for Azure AD Free, Office 365, or Microsoft 365 customers.
+## Guidance for Azure AD Free, Office 365, or Microsoft 365 customers.
-There are a number of recommendations that Azure AD Free, Office 365, or Microsoft 365 app customers should take to protect their user identities, the following table is intended to highlight the key actions for the following license subscriptions:
+There are many recommendations that Azure AD Free, Office 365, or Microsoft 365 app customers should follow to protect their user identities. The following table is intended to highlight key actions for the following license subscriptions:
- Office 365 (Office 365 E1, E3, E5, F1, A1, A3, A5) - Microsoft 365 (Business Basic, Apps for Business, Business Standard, Business Premium, A1)
There are a number of recommendations that Azure AD Free, Office 365, or Microso
| [Enable Security Defaults](concept-fundamentals-security-defaults.md) | Protect all user identities and applications by enabling MFA and blocking legacy authentication | | [Enable Password Hash Sync](../hybrid/how-to-connect-password-hash-synchronization.md) (if using hybrid identities) | Provide redundancy for authentication and improve security (including Smart Lockout, IP Lockout, and the ability to discover leaked credentials.) | | [Enable ADFS smart lock out](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) (If applicable) | Protects your users from experiencing extranet account lockout from malicious activity. |
-| [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout assists in locking out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. |
-| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users do not expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
+| [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout helps to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. |
+| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO) | | [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable) | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. |
-| [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) (applicable to cloud only accounts) | This ability reduces help desk calls and loss of productivity when a user cannot sign into their device or an application. |
-| [Use non-global administrative roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be global administrators. |
+| [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) (applicable to cloud only accounts) | This ability reduces help desk calls and loss of productivity when a user can't sign into their device or an application. |
+| [Use least privileged roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be Global Administrators. |
| [Enable Microsoft's password guidance](https://www.microsoft.com/research/publication/password-guidance/) | Stop requiring users to change their password on a set schedule, disable complexity requirements, and your users are more apt to remember their passwords and keep them something that is secure. | -
-### Guidance for Azure AD Premium Plan 1 customers.
+## Guidance for Azure AD Premium Plan 1 customers.
The following table is intended to highlight the key actions for the following license subscriptions:
The following table is intended to highlight the key actions for the following l
| Recommended action | Detail | | | |
+| [Create more than one Global Administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent Global Administrator accounts for use in an emergency. These accounts aren't to be used daily and should have long and complex passwords. |
| [Enable combined registration experience for Azure AD MFA and SSPR to simplify user registration experience](../authentication/howto-registration-mfa-sspr-combined.md) | Allow your users to register from one common experience for both Azure AD Multi-Factor Authentication and self-service password reset. | | [Configure MFA settings for your organization](../authentication/howto-mfa-getstarted.md) | Ensure accounts are protected from being compromised with multi-factor authentication |
-| [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) | This ability reduces help desk calls and loss of productivity when a user cannot sign into their device or an application |
+| [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) | This ability reduces help desk calls and loss of productivity when a user can't sign into their device or an application |
| [Implement Password Writeback](../authentication/tutorial-enable-sspr-writeback.md) (if using hybrid identities) | Allow password changes in the cloud to be written back to an on-premises Windows Server Active Directory environment. | | Create and enable Conditional Access policies | [MFA for admins to protect accounts that are assigned administrative rights.](../conditional-access/howto-conditional-access-policy-admin-mfa.md) <br><br> [Block legacy authentication protocols due to the increased risk associated with legacy authentication protocols.](../conditional-access/howto-conditional-access-policy-block-legacy.md) <br><br> [MFA for all users and applications to create a balanced MFA policy for your environment, securing your users and applications.](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) <br><br> [Require MFA for Azure Management to protect your privileged resources by requiring multi-factor authentication for any user accessing Azure resources.](../conditional-access/howto-conditional-access-policy-azure-management.md) | | [Enable Password Hash Sync](../hybrid/how-to-connect-password-hash-synchronization.md) (if using hybrid identities) | Provide redundancy for authentication and improve security (including Smart Lockout, IP Lockout, and the ability to discover leaked credentials.) | | [Enable ADFS smart lock out](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) (If applicable) | Protects your users from experiencing extranet account lockout from malicious activity. |
-| [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout assists in locking out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. |
-| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users do not expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
+| [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout helps to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. |
+| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
| [Enable remote access to on-premises legacy applications with Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md) | Enable Azure AD Application Proxy and integrate with legacy apps for users to securely access on-premises applications by signing in with their Azure AD account. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable). | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. | | [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). | | [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Conditional Access – Device based](../conditional-access/require-managed-devices.md) | Improve security and user experiences with device-based Conditional Access. This step ensures users can only access from devices that meet your standards for security and compliance. These devices are also known as managed devices. Managed devices can be Intune compliant or Hybrid Azure AD joined devices. | | [Enable Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) | Protect users from using weak and easy to guess passwords. |
-| [Designate more than one global administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent global administrator accounts for use if there is an emergency. These accounts are not be used daily and should have long and complex passwords. Break Glass Accounts ensure you can access the service in an emergency. |
-| [Use non-global administrative roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be global administrators. |
+| [Use least privileged roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be Global Administrators. |
| [Enable Microsoft's password guidance](https://www.microsoft.com/research/publication/password-guidance/) | Stop requiring users to change their password on a set schedule, disable complexity requirements, and your users are more apt to remember their passwords and keep them something that is secure. |
+| [Create an organization specific custom banned password list](../authentication/tutorial-configure-custom-password-protection.md) | Prevent users from creating passwords that include common words or phrases from your organization or area. |
+| [Deploy passwordless authentication methods for your users](../authentication/concept-authentication-passwordless.md) | Provide your users with convenient passwordless authentication methods |
| [Create a plan for guest user access](../external-identities/what-is-b2b.md) | Collaborate with guest users by letting them sign into your apps and services with their own work, school, or social identities. |
-### Guidance for Azure AD Premium Plan 2 customers.
+## Guidance for Azure AD Premium Plan 2 customers.
The following table is intended to highlight the key actions for the following license subscriptions:
The following table is intended to highlight the key actions for the following l
| Recommended action | Detail | | | |
+| [Create more than one Global Administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent Global Administrator accounts for use in an emergency. These accounts aren't to be used daily and should have long and complex passwords. |
| [Enable combined registration experience for Azure AD MFA and SSPR to simplify user registration experience](../authentication/howto-registration-mfa-sspr-combined.md) | Allow your users to register from one common experience for both Azure AD Multi-Factor Authentication and self-service password reset. | | [Configure MFA settings for your organization](../authentication/howto-mfa-getstarted.md) | Ensure accounts are protected from being compromised with multi-factor authentication |
-| [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) | This ability reduces help desk calls and loss of productivity when a user cannot sign into their device or an application |
+| [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) | This ability reduces help desk calls and loss of productivity when a user can't sign into their device or an application |
| [Implement Password Writeback](../authentication/tutorial-enable-sspr-writeback.md) (if using hybrid identities) | Allow password changes in the cloud to be written back to an on-premises Windows Server Active Directory environment. | | [Enable Identity Protection policies to enforce MFA registration](../identity-protection/howto-identity-protection-configure-mfa-policy.md) | Manage the roll-out of Azure AD Multi-Factor Authentication (MFA). |
-| [Enable Identity Protection user and sign-in risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) | Enable Identity Protection User and Sign-in policies. The recommended sign-in policy is to target medium risk sign-ins and require MFA. For User policies it should target high risk users requiring the password change action. |
+| [Enable Identity Protection user and sign-in risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) | Enable Identity Protection User and Sign-in policies. The recommended sign-in policy is to target medium risk sign-ins and require MFA. For User policies, you should target high risk users requiring the password change action. |
| Create and enable Conditional Access policies | [MFA for admins to protect accounts that are assigned administrative rights.](../conditional-access/howto-conditional-access-policy-admin-mfa.md) <br><br> [Block legacy authentication protocols due to the increased risk associated with legacy authentication protocols.](../conditional-access/howto-conditional-access-policy-block-legacy.md) <br><br> [Require MFA for Azure Management to protect your privileged resources by requiring multi-factor authentication for any user accessing Azure resources.](../conditional-access/howto-conditional-access-policy-azure-management.md) | | [Enable Password Hash Sync](../hybrid/how-to-connect-password-hash-synchronization.md) (if using hybrid identities) | Provide redundancy for authentication and improve security (including Smart Lockout, IP Lockout, and the ability to discover leaked credentials.) | | [Enable ADFS smart lock out](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) (If applicable) | Protects your users from experiencing extranet account lockout from malicious activity. |
-| [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout assists in locking out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. |
-| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users do not expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
+| [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout helps to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. |
+| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
| [Enable remote access to on-premises legacy applications with Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md) | Enable Azure AD Application Proxy and integrate with legacy apps for users to securely access on-premises applications by signing in with their Azure AD account. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable). | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. | | [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). | | [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Conditional Access – Device based](../conditional-access/require-managed-devices.md) | Improve security and user experiences with device-based Conditional Access. This step ensures users can only access from devices that meet your standards for security and compliance. These devices are also known as managed devices. Managed devices can be Intune compliant or Hybrid Azure AD joined devices. | | [Enable Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) | Protect users from using weak and easy to guess passwords. |
-| [Designate more than one global administrator](../roles/security-emergency-access.md) | Assign at least two cloud-only permanent global administrator accounts for use if there is an emergency. These accounts are not be used daily and should have long and complex passwords. Break Glass Accounts ensure you can access the service in an emergency. |
-| [Use non-global administrative roles where possible](../roles/permissions-reference.md) | Give your administrators only the access they need to only the areas they need access to. Not all administrators need to be global administrators. |
+| [Use least privileged roles where possible](../roles/permissions-reference.md) | Give your administrators access only to the areas they need. Not all administrators need to be Global Administrators. |
| [Enable Microsoft's password guidance](https://www.microsoft.com/research/publication/password-guidance/) | Stop requiring users to change their password on a set schedule and disable complexity requirements; your users are more apt to remember their passwords and to choose passwords that are secure. |
+| [Create an organization-specific custom banned password list](../authentication/tutorial-configure-custom-password-protection.md) | Prevent users from creating passwords that include common words or phrases from your organization or area. |
+| [Deploy passwordless authentication methods for your users](../authentication/concept-authentication-passwordless.md) | Provide your users with convenient passwordless authentication methods. |
| [Create a plan for guest user access](../external-identities/what-is-b2b.md) | Collaborate with guest users by letting them sign into your apps and services with their own work, school, or social identities. | | [Enable Privileged Identity Management](../privileged-identity-management/pim-configure.md) | Enables you to manage, control, and monitor access to important resources in your organization, ensuring admins have access only when needed and with approval |
+| [Complete an access review for Azure AD directory roles in PIM](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md) | Work with your security and leadership teams to create an access review policy to review administrative access based on your organization's policies. |
+ ## Next steps

- For detailed deployment guidance for individual features of Azure AD, review the [Azure AD project deployment plans](active-directory-deployment-plans.md).
- For an end-to-end Azure AD deployment checklist, see the article [Azure Active Directory feature deployment guide](active-directory-deployment-checklist-p2.md)
+- Organizations can use [identity secure score](identity-secure-score.md) to track their progress against other Microsoft recommendations.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
-## January 2023
+## February 2023
-### Public Preview - Cross-tenant synchronization
+### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Privileged Identity Management (PIM) role activation has been expanded to the Billing and AD extensions in the Azure portal. Shortcuts have been added to Subscriptions (billing) and Access Control (AD) to allow users to activate PIM roles directly from these blades. From the Subscriptions blade, select **View eligible subscriptions** in the horizontal command menu to check your eligible, active, and expired assignments. From there, you can activate an eligible assignment in the same pane. In Access control (IAM) for a resource, you can now select **View my access** to see your currently active and eligible role assignments and activate directly. By integrating PIM capabilities into different Azure portal blades, this new feature allows users to gain temporary access to view or edit subscriptions and resources more easily.
++
+For more information, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md).
++++
+### General Availability - Follow Azure AD best practices with recommendations
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+Azure AD recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure AD Overview.
+
+This release includes our first three recommendations:
+
+- Convert from per-user MFA to Conditional Access MFA
+- Migrate applications from AD FS to Azure AD
+- Minimize MFA prompts from known devices
++
+For more information, see:
+
+- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md)
+- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview)
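If you'd rather track these recommendations programmatically than in the portal, the following is a minimal Microsoft Graph PowerShell sketch, not part of this release note; the beta `directory/recommendations` endpoint and the `DirectoryRecommendations.Read.All` scope are assumptions based on the recommendations API linked above.

```powershell
# Minimal sketch (endpoint and scope are assumptions): list the recommendations
# Azure AD has flagged for this tenant, with their current status.
Connect-MgGraph -Scopes "DirectoryRecommendations.Read.All"

$response = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/directory/recommendations"
$response.value | ForEach-Object { "{0} [{1}]" -f $_.displayName, $_.status }
```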
+++
+### Public Preview - Azure AD PIM + Conditional Access integration
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Now you can require users who are eligible for a role to satisfy Conditional Access policy requirements for activation: use a specific authentication method enforced through Authentication Strengths, activate from an Intune-compliant device, comply with Terms of Use, use third-party MFA, and satisfy location requirements.
+
+For more information, see: [Configure Azure AD role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md).
++++
+### General Availability - More information on why a sign-in was flagged as "unfamiliar"
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+The unfamiliar sign-in properties risk detection now indicates which properties are unfamiliar, so customers can better investigate that risk.
+
+Identity Protection now surfaces the unfamiliar properties in the Azure portal UX and in the API as *Additional Info*, with a user-friendly description explaining that *the following properties are unfamiliar for this sign-in of the given user*.
+There's no additional work to enable this feature; the unfamiliar properties are shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
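If you want to review these detections programmatically, here's a minimal Microsoft Graph PowerShell sketch, not part of this release note; the `IdentityRiskEvent.Read.All` scope and the `unfamiliarFeatures` filter value are assumptions based on the public risk detections API.

```powershell
# Minimal sketch (scope and filter value are assumptions): list unfamiliar
# sign-in detections and the additional info explaining which properties were unfamiliar.
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

Get-MgRiskDetection -Filter "riskEventType eq 'unfamiliarFeatures'" |
    Select-Object UserDisplayName, DetectedDateTime, RiskState, AdditionalInfo
```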
++
+### General Availability - New Federated Apps available in Azure AD Application gallery - February 2023
+++
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In February 2023, we've added the following new applications in our App gallery with Federation support:
+
+[PROCAS](https://accounting.procas.com/), [Tanium Cloud SSO](../saas-apps/tanium-cloud-sso-tutorial.md), [LeanDNA](../saas-apps/leandna-tutorial.md), [CalendarAnything LWC](https://silverlinecrm.com/calendaranything/), [courses.work](../saas-apps/courseswork-tutorial.md), [Udemy Business SAML](../saas-apps/udemy-business-saml-tutorial.md), [Canva](../saas-apps/canva-tutorial.md), [Kno2fy](../saas-apps/kno2fy-tutorial.md), [IT-Conductor](../saas-apps/it-conductor-tutorial.md), [ナレッジワーク(Knowledge Work)](../saas-apps/knowledge-work-tutorial.md), [Valotalive Digital Signage Microsoft 365 integration](https://store.valotalive.com/#main), [Priority Matrix HIPAA](https://hipaa.prioritymatrix.com/), [Priority Matrix Government](https://hipaa.prioritymatrix.com/), [Beable](../saas-apps/beable-tutorial.md), [Grain](https://grain.com/app?dialog=integrations&integration=microsoft+teams), [DojoNavi](../saas-apps/dojonavi-tutorial.md), [Global Validity Access Manager](https://myaccessmanager.com/), [FieldEquip](https://app.fieldequip.com/), [Peoplevine](https://control.peoplevine.com/), [Respondent](../saas-apps/respondent-tutorial.md), [WebTMA](../saas-apps/webtma-tutorial.md), [ClearIP](https://clearip.com/login), [Pennylane](../saas-apps/pennylane-tutorial.md), [VsimpleSSO](https://app.vsimple.com/login), [Compliance Genie](../saas-apps/compliance-genie-tutorial.md), [Dataminr Corporate](https://dmcorp.okta.com/), [Talon](../saas-apps/talon-tutorial.md).
++
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, see the details here: https://aka.ms/AzureADAppRequest
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - February 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Atmos](../saas-apps/atmos-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+## January 2023
+
+### Public Preview - Cross-tenant synchronization
+
**Type:** New feature
**Service category:** Provisioning
**Product capability:** Collaboration
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
Workflows created within Lifecycle Workflows follow the same schedule that you d
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Identity Governance** on the search bar near the top of the page.
+1. Enter **Identity Governance** in the search bar near the top of the page, and then select it.
1. In the left menu, select **Lifecycle workflows (Preview)**.
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
Versioning with Lifecycle Workflows provides many benefits over the alternative
## Workflow properties and versions
-While updates to workflows can trigger the creation of a new version, this isn't always the case. There are parameters of workflows, known as basic properties, that can be updated without a new version of the workflow being created. The list of these parameters are as follows:
+While updates to workflows can trigger the creation of a new version, this isn't always the case. Workflows have parameters, known as basic properties, that can be changed without creating a new version of the workflow. These parameters are as follows:
- displayName - description - isEnabled - IsSchedulingEnabled
+- task name
+- task description
You'll find these corresponding parameters in the Azure portal under the **Properties** section of the workflow you're updating.
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
# Manage workflow properties (preview)
-Managing workflows can be accomplished in one of two ways.
+Managing workflows can be accomplished in one of two ways:
- Updating the basic properties of a workflow without creating a new version of it
- - Creating a new version of the updated workflow.
+ - Creating a new version of the updated workflow
You can update the following basic information without creating a new workflow. - display name - description
- - whether or not it is enabled.
- - Whether or not workflow schedule is enabled.
+ - whether or not it's enabled
+ - whether or not the workflow schedule is enabled
+ - task name
+ - task description
-If you change any other parameters, a new version is required to be created as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article.
+If you change any other parameters, a new version is required to be created as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article.
If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you will have to manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph).
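As a rough illustration of the Microsoft Graph path, the sketch below patches only basic properties, which shouldn't create a new version; the beta `identityGovernance/lifecycleWorkflows/workflows/{id}` endpoint and the `LifecycleWorkflows.ReadWrite.All` scope are assumptions based on the Lifecycle Workflows API, and the workflow ID is a placeholder.

```powershell
# Minimal sketch (endpoint, scope, and ID are assumptions): update only basic
# properties of a workflow so that no new workflow version is created.
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"

$workflowId = "<workflow-id>"
$body = @{
    displayName = "Onboard new hires (updated)"
    description = "Basic-property update only; no new version expected."
} | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/$workflowId" `
    -Body $body
```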
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
When you select the domain that you want to federate, Azure AD Connect provides
## Configuring federation with PingFederate You can configure PingFederate with Azure AD Connect in just a few clicks. The following prerequisites are required:-- PingFederate 8.4 or later. For more information, see [PingFederate integration with Azure Active Directory and Microsoft 365](https://docs.pingidentity.com/bundle/pingfederate-azuread-office365-integration/).
+- PingFederate 8.4 or later. For more information, see [PingFederate integration with Azure Active Directory and Microsoft 365](https://docs.pingidentity.com/access/sources/dita/topic?category=integrationdoc&resourceid=pingfederate_azuread_office365_integration) in the Ping Identity documentation.
- A TLS/SSL certificate for the federation service name that you intend to use (for example, sts.contoso.com). ### Verify the domain
active-directory Plan Connect User Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-user-signin.md
For more information, see [Configuring SSO with AD FS](how-to-connect-install-cu
### Federation with PingFederate With federated sign-in, your users can sign in to Azure AD-based services with their on-premises passwords. While they're on the corporate network, they don't even have to enter their passwords.
-For more information on configuring PingFederate for use with Azure Active Directory, see [PingFederate Integration with Azure Active Directory and Office 365](https://www.pingidentity.com/AzureADConnect)
+For more information on configuring PingFederate for use with Azure Active Directory, see [PingFederate integration with Azure Active Directory and Microsoft 365](https://docs.pingidentity.com/access/sources/dita/topic?category=integrationdoc&resourceid=pingfederate_azuread_office365_integration).
For information on setting up Azure AD Connect using PingFederate, see [Azure AD Connect custom installation](how-to-connect-install-custom.md#configuring-federation-with-pingfederate)
active-directory Concept Identity Protection User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-user-experience.md
All of the Identity Protection policies have an impact on the sign in experience
## Multi-factor authentication registration
-Enabling the Identity Protection policy requiring multi-factor authentication registration and targeting all of your users, will make sure that they can use Azure AD MFA to self-remediate in the future. Configuring this policy gives your users a 14-day period where they can choose to register and at the end are forced to register.
+Enabling the Identity Protection policy that requires Azure AD Multifactor Authentication registration, and targeting all of your users, makes sure that they can use Azure AD MFA to self-remediate in the future. Configuring this policy gives your users a 14-day period where they can choose to register; at the end of that period, they're forced to register.
### Registration interrupt
-1. At sign-in to any Azure AD-integrated application, the user gets a notification about the requirement to set up the account for multi-factor authentication. This policy is also triggered in the Windows 10 Out of Box Experience for new users with a new device.
+1. At sign-in to any Azure AD-integrated application, the user gets a notification about the requirement to set up the account for multifactor authentication. This policy is also triggered in the Windows 10 Out of Box Experience for new users with a new device.
![More information required](./media/concept-identity-protection-user-experience/identity-protection-experience-more-info-mfa.png)
When a user risk policy has been configured, users who meet the user risk level
## Risky sign-in administrator unblock
-Administrators can choose to block users upon sign-in depending on their risk level. To get unblocked, end users must contact their IT staff. Self-remediation by performing multi-factor authentication and self-service password reset isn't an option in this case.
+Administrators can choose to block users upon sign-in depending on their risk level. To get unblocked, end users must contact their IT staff. Self-remediation by performing multifactor authentication and self-service password reset isn't an option in this case.
![Blocked by user risk policy](./media/concept-identity-protection-user-experience/104.png)
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Azure AD Identity Protection has historically protected users in detecting, inve
A [workload identity](../develop/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they: -- CanΓÇÖt perform multi-factor authentication.
+- Can't perform multifactor authentication.
- Often have no formal lifecycle process. - Need to store their credentials or secrets somewhere.
Organizations can export data by configuring [diagnostic settings in Azure AD]
Using [Conditional Access for workload identities](../conditional-access/workload-identity.md), you can block access for specific accounts you choose when Identity Protection marks them "at risk." Policy can be applied to single-tenant service principals that have been registered in your tenant. Third-party SaaS, multi-tenanted apps, and managed identities are out of scope.
+For improved security and resilience of your workload identities, Continuous Access Evaluation (CAE) for workload identities is a powerful tool that offers instant enforcement of your Conditional Access policies and any detected risk signals. CAE-enabled third-party workload identities accessing CAE-capable first-party resources are equipped with 24-hour long-lived tokens (LLTs) that are subject to continuous security checks. Refer to the [CAE for workload identities documentation](../conditional-access/concept-continuous-access-evaluation-workload.md) for information on configuring workload identity clients for CAE and the up-to-date feature scope.
+ ## Investigate risky workload identities Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal.
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
If after investigation, an account is confirmed compromised:
1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised". 2. If a risk-based policy wasn't triggered, and the risk wasn't [self-remediated](#self-remediation-with-risk-based-policy), then do one or more of the following: 1. [Request a password reset](#manual-password-reset).
- 1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
+ 1. Block the user if you suspect the attacker can reset the password or do multifactor authentication for the user.
1. Revoke refresh tokens. 1. [Disable any devices](../devices/device-management-azure-portal.md) that are considered compromised. 1. If using [continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md), revoke all access tokens.
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
Completing the following procedure requires you to use a user account that has:
**To simulate a sign-in from an unfamiliar location, perform the following steps**:
-1. When signing in with your test account, fail the multi-factor authentication (MFA) challenge by not passing the MFA challenge.
+1. When signing in with your test account, fail the multifactor authentication (MFA) challenge by not passing the MFA challenge.
2. Using your new VPN, navigate to [https://myapps.microsoft.com](https://myapps.microsoft.com) and enter the credentials of your test account. The sign-in shows up on the Identity Protection dashboard within 10 - 15 minutes.
To test a sign-in risk policy, perform the following steps:
1. Optionally you can choose to exclude users from the policy. 1. **Conditions** - **Sign-in risk** Microsoft's recommendation is to set this option to **Medium and above**. 1. Under **Controls**
- 1. **Access** - Microsoft's recommendation is to **Allow access** and **Require multi-factor authentication**.
+ 1. **Access** - Microsoft's recommendation is to **Allow access** and **Require multifactor authentication**.
1. **Enforce Policy** - **On** 1. **Save** - This action will return you to the **Overview** page. 1. You can now test Sign-in Risk-based Conditional Access by signing in using a risky session (for example, by using the Tor browser).
active-directory Reference Identity Protection Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/reference-identity-protection-glossary.md
A risk detection triggered when current user credentials (user name and password
### Mitigation An action to limit or eliminate the ability of an attacker to exploit a compromised identity or device without restoring the identity or device to a safe state. A mitigation does not resolve previous risk detections associated with the identity or device.
-### Multi-factor authentication
+### Multifactor authentication
An authentication method that requires two or more authentication methods, which may include something the user has, such as a certificate; something the user knows, such as user names, passwords, or pass phrases; physical attributes, such as a thumbprint; and personal attributes, such as a personal signature. ### Offline detection
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
The minimum permissions needed to do basic sign-in are `openid`, `profile`, `ema
To configure permission classifications, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: An administrator, or owner of the service principal.
+- One of the following roles: A global administrator, or owner of the service principal.
## Manage permission classifications
You can use the latest [Azure AD PowerShell](/powershell/module/azuread/?preserv
Run the following command to connect to Azure AD PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article. ```powershell
-Connect-AzureAD -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All".
+Connect-AzureAD -Scopes "Policy.ReadWrite.PermissionGrant"
``` ### List the current permission classifications
You can use [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?
Run the following command to connect to Microsoft Graph PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article. ```powershell
-Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All".
+Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
``` ### List current permission classifications for an API
Remove-MgServicePrincipalDelegatedPermissionClassification -DelegatedPermissionC
To configure permissions classifications for an enterprise application, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
-You need to consent to the following permissions:
-
-`Application.ReadWrite.All`, `Directory.ReadWrite.All`, `DelegatedPermissionGrant.ReadWrite.All`.
+You need to consent to the `Policy.ReadWrite.PermissionGrant` permission.
Run the following queries on Microsoft Graph explorer to add a delegated permissions classification for an application.
active-directory Custom Security Attributes Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/custom-security-attributes-apps.md
Title: Assign, update, list, or remove custom security attributes for an application (Preview) - Azure Active Directory
+ Title: Manage custom security attributes for an application (Preview) - Azure Active Directory
description: Assign, update, list, or remove custom security attributes for an application that has been registered with your Azure Active Directory (Azure AD) tenant.
Previously updated : 02/20/2023 Last updated : 02/28/2023
+zone_pivot_groups: enterprise-apps-all
+
-# Assign, update, list, or remove custom security attributes for an application (Preview)
+# Manage custom security attributes for an application (Preview)
> [!IMPORTANT] > Custom security attributes are currently in PREVIEW.
To assign or remove custom security attributes for an application in your Azure
- Azure AD Premium P1 or P2 license - [Attribute Assignment Administrator](../roles/permissions-reference.md#attribute-assignment-administrator)-- [AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview) version 2.0.2.138 or later when using PowerShell
+- Make sure you have existing custom security attributes. To learn how to create a security attribute, see [Add or deactivate custom security attributes in Azure AD](../fundamentals/custom-security-attributes-add.md).
+ > [!IMPORTANT]
-> By default, [Global Administrator](../roles/permissions-reference.md#global-administrator) and other administrator roles do not have permissions to read, define, or assign custom security attributes.
+> By default, [Global Administrator](../roles/permissions-reference.md#global-administrator) and other administrator roles don't have permissions to read, define, or assign custom security attributes.
-## Assign custom security attributes to an application
+## Assign, update, list, or remove custom attributes for an application
-1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+Learn how to work with custom attributes for applications in Azure AD.
+### Assign custom security attributes to an application
-1. Make sure that you have existing custom security attributes. For more information, see [Add or deactivate custom security attributes in Azure AD](../fundamentals/custom-security-attributes-add.md).
-1. Select **Azure Active Directory** > **Enterprise applications**.
+
+Use the following steps to assign custom security attributes through the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory**, then select **Enterprise applications**.
1. Find and select the application you want to add a custom security attribute to.
To assign or remove custom security attributes for an application in your Azure
- For predefined custom security attribute values, select a value from the **Assigned values** list. - For multi-valued custom security attributes, select **Add values** to open the **Attribute values** pane and add your values. When finished adding values, select **Done**.
- ![Screenshot showing assigning a custom security attribute to an application.](./media/custom-security-attributes-apps/apps-attributes-assign.png)
+ ![Screenshot shows how to assign a custom security attribute to an application.](./media/custom-security-attributes-apps/apps-attributes-assign.png)
1. When finished, select **Save** to assign the custom security attributes to the application.
-## Update custom security attribute assignment values for an application
+### Update custom security attribute assignment values for an application
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
-1. Select **Azure Active Directory** > **Enterprise applications**.
+1. Select **Azure Active Directory**, then select **Enterprise applications**.
1. Find and select the application that has a custom security attribute assignment value you want to update.
To assign or remove custom security attributes for an application in your Azure
1. Find the custom security attribute assignment value you want to update.
- Once you have assigned a custom security attribute to an application, you can only change the value of the custom security attribute. You can't change other properties of the custom security attribute, such as attribute set or custom security attribute name.
+ Once you've assigned a custom security attribute to an application, you can only change the value of the custom security attribute. You can't change other properties of the custom security attribute, such as attribute set or custom security attribute name.
1. Depending on the properties of the selected custom security attribute, you can update a single value, select a value from a predefined list, or update multiple values. 1. When finished, select **Save**.
-## Filter applications based on custom security attributes
+### Filter applications based on custom security attributes
-You can filter the list of custom security attributes assigned to applications on the All applications page.
+You can filter the list of custom security attributes assigned to applications on the **All applications** page.
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
-1. Select **Azure Active Directory** > **Enterprise applications**.
+1. Select **Azure Active Directory**, then select **Enterprise applications**.
1. Select **Add filters** to open the Pick a field pane.
- If you don't see Add filters, click the banner to enable the Enterprise applications search preview.
+ If you don't see **Add filters**, select the banner to enable the Enterprise applications search preview.
1. For **Filters**, select **Custom security attribute**.
You can filter the list of custom security attributes assigned to applications o
1. To apply the filter, select **Apply**.
-## Remove custom security attribute assignments from applications
+### Remove custom security attribute assignments from applications
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
-1. Select **Azure Active Directory** > **Enterprise applications**.
+1. Select **Azure Active Directory**, then select **Enterprise applications**.
1. Find and select the application that has the custom security attribute assignments you want to remove.
-1. In the Manage section, select **Custom security attributes (preview)**.
+1. In the **Manage** section, select **Custom security attributes (preview)**.
1. Add check marks next to all the custom security attribute assignments you want to remove. 1. Select **Remove assignment**.
-## PowerShell
+
+### PowerShell
To manage custom security attribute assignments for applications in your Azure AD organization, you can use PowerShell. The following commands can be used to manage assignments.
-#### Assign a custom security attribute with a multi-string value to an application (service principal)
+### Assign a custom security attribute with a multi-string value to an application (service principal)
Use the [Set-AzureADMSServicePrincipal](/powershell/module/azuread/set-azureadmsserviceprincipal) command to assign a custom security attribute with a multi-string value to an application (service principal).
$attributes = @{
Set-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -CustomSecurityAttributes $attributes ```
-#### Update a custom security attribute with a multi-string value for an application (service principal)
+### Update a custom security attribute with a multi-string value for an application (service principal)
-Use the [Set-AzureADMSServicePrincipal](/powershell/module/azuread/set-azureadmsserviceprincipal) command to update a custom security attribute with a multi-string value for an application (service principal).
+Provide the new set of attribute values that you would like to reflect on the application. In this example, we're adding one more value for the Project attribute.
- Attribute set: `Engineering` - Attribute: `Project`
$attributesUpdate = @{
Set-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -CustomSecurityAttributes $attributesUpdate ```
-#### Get the custom security attribute assignments for an application (service principal)
+### Get the custom security attribute assignments for an application (service principal)
Use the [Get-AzureADMSServicePrincipal](/powershell/module/azuread/get-azureadmsserviceprincipal) command to get the custom security attribute assignments for an application (service principal).
Get-AzureADMSServicePrincipal -Select CustomSecurityAttributes
Get-AzureADMSServicePrincipal -Id 7d194b0c-bf17-40ff-9f7f-4b671de8dc20 -Select "CustomSecurityAttributes, Id" ```
-## Microsoft Graph API
++
+To manage custom security attribute assignments for applications in your Azure AD organization, you can use Microsoft Graph PowerShell. The following commands can be used to manage assignments.
+
+### Assign a custom security attribute with a multi-string value to an application (service principal)
+
+Use the [Update-MgServicePrincipal](/powershell/module/microsoft.graph.applications/update-mgserviceprincipal) command to assign a custom security attribute with a multi-string value to an application (service principal).
+
+Given the following values:
+
+- Attribute set: Engineering
+- Attribute: Project
+- Attribute data type: String
+- Attribute value: "Baker"
+
+```powershell
+# Retrieve the object ID of the target service principal
+
+$ServicePrincipal = (Get-MgServicePrincipal -Filter "displayName eq 'Microsoft Graph'").Id
+
+$params = @{
+    CustomSecurityAttributes = @{
+        Engineering = @{
+            "@odata.type" = "#Microsoft.DirectoryServices.CustomSecurityAttributeValue"
+            "Project@odata.type" = "#Collection(String)"
+            Project = @("Baker")
+        }
+    }
+}
+
+Update-MgServicePrincipal -ServicePrincipalId $ServicePrincipal -BodyParameter $params
+```
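To verify the assignment, here's a minimal sketch that reads the attributes back with Microsoft Graph PowerShell; it assumes `$ServicePrincipal` still holds the object ID retrieved above.

```powershell
# Minimal sketch: read the custom security attributes back to verify the assignment.
$sp = Get-MgServicePrincipal -ServicePrincipalId $ServicePrincipal -Property "id,displayName,customSecurityAttributes"
$sp.CustomSecurityAttributes | ConvertTo-Json -Depth 5
```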
+
+### Update a custom security attribute with a multi-string value for an application (service principal)
+
+Provide the new set of attribute values that you would like to reflect on the application. In this example, we're adding one more value for the Project attribute.
+
+```powershell
+$params = @{
+    CustomSecurityAttributes = @{
+        Engineering = @{
+            "@odata.type" = "#Microsoft.DirectoryServices.CustomSecurityAttributeValue"
+            "Project@odata.type" = "#Collection(String)"
+            Project = @(
+                "Baker"
+                "Cascade"
+            )
+        }
+    }
+}
+Update-MgServicePrincipal -ServicePrincipalId $ServicePrincipal -BodyParameter $params
+```
+
+### Filter applications based on custom security attributes
+
+This example filters a list of applications with a custom security attribute assignment that equals the specified value.
+
+```powershell
+Get-MgServicePrincipal -CountVariable CountVar -Property "id,displayName,customSecurityAttributes" -Filter "customSecurityAttributes/Engineering/Project eq 'Baker'" -ConsistencyLevel eventual
+```
+
+### Remove custom security attribute assignments from applications
+
+In this example, we remove a custom security attribute assignment that supports multiple values.
+
+```powershell
+$params = @{
+    CustomSecurityAttributes = @{
+        Engineering = @{
+            "@odata.type" = "#Microsoft.DirectoryServices.CustomSecurityAttributeValue"
+            Project = @()
+        }
+    }
+}
+Update-MgServicePrincipal -ServicePrincipalId $ServicePrincipal -BodyParameter $params
+```
++
-To manage custom security attribute assignments for applications in your Azure AD organization, you can use the Microsoft Graph API. The following API calls can be made to manage assignments.
+
+To manage custom security attribute assignments for applications in your Azure AD organization, you can use the Microsoft Graph API. Make the following API calls to manage assignments.
For other similar Microsoft Graph API examples for users, see [Assign, update, list, or remove custom security attributes for a user](../enterprise-users/users-custom-security-attributes.md#microsoft-graph-api) and [Examples: Assign, update, list, or remove custom security attribute assignments using the Microsoft Graph API](/graph/custom-security-attributes-examples).
-#### Assign a custom security attribute with a string value to an application (service principal)
+### Assign a custom security attribute with a multi-string value to an application (service principal)
-Use the [Update servicePrincipal](/graph/api/serviceprincipal-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a string value to a user.
+Use the [Update servicePrincipal](/graph/api/serviceprincipal-update?view=graph-rest-beta&preserve-view=true) API to assign a custom security attribute with a multi-string value to an application.
-- Attribute set: `Engineering`-- Attribute: `ProjectDate`
+Given the following values:
+
+- Attribute set: Engineering
+- Attribute: Project
- Attribute data type: String-- Attribute value: `"2022-10-01"`
+- Attribute value: "Baker"
+
+```http
+PATCH https://graph.microsoft.com/beta/servicePrincipals/{id}
+Content-type: application/json
+
+{
+ "customSecurityAttributes":
+ {
+ "Engineering":
+ {
+ "@odata.type":"#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
+ "Project@odata.type":"#Collection(String)",
+ "Project": "Baker"
+ }
+ }
+}
+```
+
+### Update a custom security attribute with a multi-string value for an application (service principal)
+
+Provide the new set of attribute values that you would like to reflect on the application. In this example, we're adding one more value for the Project attribute.
```http PATCH https://graph.microsoft.com/beta/servicePrincipals/{id}
+Content-type: application/json
+ { "customSecurityAttributes": { "Engineering": { "@odata.type":"#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
- "ProjectDate":"2022-10-01"
+ "Project@odata.type":"#Collection(String)",
+ "Project":["Baker","Cascade"]
} } } ```
-#### Get the custom security attribute assignments for an application (service principal)
+### Filter applications based on custom security attributes
-Use the [Get servicePrincipal](/graph/api/serviceprincipal-get?view=graph-rest-beta&preserve-view=true) API to get the custom security attribute assignments for an application (service principal).
+This example filters a list of applications with a custom security attribute assignment that equals the specified value.
```http
-GET https://graph.microsoft.com/beta/servicePrincipals/{id}?$select=customSecurityAttributes
+GET https://graph.microsoft.com/beta/servicePrincipals?$count=true&$select=id,displayName,customSecurityAttributes&$filter=customSecurityAttributes/Engineering/Project eq 'Baker'
+ConsistencyLevel: eventual
```
-If there are no custom security attributes assigned to the application or if the calling principal does not have access, the response will look like:
+### Remove custom security attribute assignments from an application
+
+In this example, we remove a custom security attribute assignment that supports multiple values.
```http
+PATCH https://graph.microsoft.com/beta/servicePrincipals/{id}
+Content-type: application/json
+ {
- "customSecurityAttributes": null
+ "customSecurityAttributes":
+ {
+ "Engineering":
+ {
+ "@odata.type":"#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
+ "Project":[]
+ }
+ }
} ``` + ## Next steps - [Add or deactivate custom security attributes in Azure AD](../fundamentals/custom-security-attributes-add.md)
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Previously updated : 01/26/2023 Last updated : 02/28/2023
App consent policies where the ID begins with "microsoft-" are built-in policies
:::zone pivot="ms-powershell"
-1. Connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true).
+2. Connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true).
```powershell Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant"
Once the app consent policy has been created, you can [allow user consent](confi
To manage app consent policies, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisite section.
+You need to consent to the `Policy.ReadWrite.PermissionGrant` permission.
## List existing app consent policies It's a good idea to start by getting familiar with the existing app consent policies in your organization: 1. List all app consent policies:
-```http
-GET /policies/permissionGrantPolicies?$select=id,displayName,description
-```
+ ```http
+ GET /policies/permissionGrantPolicies?$select=id,displayName,description
+ ```
1. View the "include" condition sets of a policy:
-```http
-GET /policies/permissionGrantPolicies/{ microsoft-application-admin }/includes
-```
+ ```http
+ GET /policies/permissionGrantPolicies/{ microsoft-application-admin }/includes
+ ```
1. View the "exclude" condition sets:
-```http
-GET /policies/permissionGrantPolicies/{ microsoft-application-admin }/excludes
-```
+ ```http
+ GET /policies/permissionGrantPolicies/{ microsoft-application-admin }/excludes
+ ```
## Create a custom app consent policy
Follow these steps to create a custom app consent policy:
1. Create a new empty app consent policy.
-```http
-POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies
-Content-Type: application/json
-
-{
- "id": "my-custom-policy",
- "displayName": "My first custom consent policy",
- "description": "This is a sample custom app consent policy"
-}
-```
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies
+ Content-Type: application/json
+
+ {
+ "id": "my-custom-policy",
+ "displayName": "My first custom consent policy",
+ "description": "This is a sample custom app consent policy"
+ }
+ ```
1. Add "include" condition sets. Include delegated permissions classified "low", for apps from verified publishers
-```http
-POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/{ my-custom-policy }/includes
-Content-Type: application/json
-
-{
- "permissionType": "delegated",
- ΓÇ£PermissionClassification: "low",
- "clientApplicationsFromVerifiedPublisherOnly": true
-}
-```
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/{ my-custom-policy }/includes
+ Content-Type: application/json
+
+ {
+ "permissionType": "delegated",
+ "permissionClassification": "low",
+ "clientApplicationsFromVerifiedPublisherOnly": true
+ }
+ ```
Repeat this step to add more "include" condition sets. 1. Optionally, add "exclude" condition sets. Exclude delegated permissions for the Azure Management API (appId 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b)
-```http
-POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-policy /excludes
-Content-Type: application/json
-
-{
- "permissionType": "delegated",
- "resourceApplication": "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b "
-}
-```
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-policy/excludes
+ Content-Type: application/json
+
+ {
+ "permissionType": "delegated",
+ "resourceApplication": "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b "
+ }
+ ```
Repeat this step to add more "exclude" condition sets.
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
using Azure.Storage.Blobs;
var clientID = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID"); var credentialOptions = new DefaultAzureCredentialOptions {
- ManagedIdentityClientId = clientID;
+ ManagedIdentityClientId = clientID
}; var credential = new DefaultAzureCredential(credentialOptions);
using Azure.Core;
var clientID = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID"); var credentialOptions = new DefaultAzureCredentialOptions {
- ManagedIdentityClientId = clientID;
+ ManagedIdentityClientId = clientID
}; var credential = new DefaultAzureCredential(credentialOptions);
using Microsoft.Data.SqlClient;
var clientID = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID"); var credentialOptions = new DefaultAzureCredentialOptions {
- ManagedIdentityClientId = clientID;
+ ManagedIdentityClientId = clientID
}; AccessToken accessToken = await new DefaultAzureCredential(credentialOptions).GetTokenAsync(
Tokens should be treated like credentials. Don't expose them to users or other s
* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) * [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
-* Use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets
+* Use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Learn more about managed identities for Azure resources:
Learn more about Azure Cosmos DB: -- [Azure Cosmos DB resource model](../../cosmos-db/account-databases-containers-items.md)
+- [Azure Cosmos DB resource model](../../cosmos-db/resource-model.md)
- [Tutorial: Build a .NET console app to manage data in an Azure Cosmos DB for NoSQL account](../../cosmos-db/sql/sql-api-get-started.md)
active-directory Cross Tenant Synchronization Configure Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md
Previously updated : 02/06/2023 Last updated : 02/27/2023
This article describes the key steps to configure cross-tenant synchronization u
## Prerequisites
-### Source tenant
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
- Azure AD Premium P1 or P2 license - [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings
This article describes the key steps to configure cross-tenant synchronization u
- [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration - [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions
-### Target tenant
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
- Azure AD Premium P1 or P2 license - [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings
These steps describe how to use Microsoft Graph Explorer (recommended), but you
1. Start another instance of [Microsoft Graph Explorer tool](https://aka.ms/ge).
-1. Sign in to the source tenant.
+1. Sign in to the target tenant.
1. Consent to the following required permissions:
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
By the end of this article, you'll be able to:
## Prerequisites
-### Source tenant
+![Icon for the source tenant.](./media/common/icon-tenant-source.png)<br/>**Source tenant**
- Azure AD Premium P1 or P2 license - [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings - [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role to configure cross-tenant synchronization - [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) or [Application Administrator](../roles/permissions-reference.md#application-administrator) role to assign users to a configuration and to delete a configuration
-### Target tenant
+![Icon for the target tenant.](./media/common/icon-tenant-target.png)<br/>**Target tenant**
- Azure AD Premium P1 or P2 license - [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
Follow the steps below to send logs from Azure Active Directory to Azure Monitor
The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview.
- * `NetworkAccessTrafficLogs`
- * `RiskyServicePrincipals`
* `AADServicePrincipalRiskEvents` * `EnrichedOffice365AuditLogs`
+ * `MicrosoftGraphActivityLogs`
+ * `NetworkAccessTrafficLogs`
+ * `RiskyServicePrincipals`
1. Select the **Destination details** for where you'd like to send the logs. Choose any or all of the following destinations. Additional fields appear, depending on your selection.
If you do not see logs appearing in the selected destination after 15 minutes, s
* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) * [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
-* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
+* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Title: Cost recommendations
description: Full list of available cost recommendations in Advisor. Previously updated : 02/04/2022 Last updated : 02/28/2023 # Cost recommendations
Azure Advisor helps you optimize and reduce your overall Azure spend by identify
### Use Standard Storage to store Managed Disks snapshots
-To save 60% of cost, we recommend storing your snapshots in Standard Storage, regardless of the storage type of the parent disk. This is the default option for Managed Disks snapshots. Migrate your snapshot from Premium to Standard Storage. Refer to Managed Disks pricing details.
+To save 60% of cost, we recommend storing your snapshots in Standard Storage, regardless of the storage type of the parent disk. It is the default option for Managed Disks snapshots. Migrate your snapshot from Premium to Standard Storage. Refer to Managed Disks pricing details.
Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Storage to store Managed Disks snapshots)](https://aka.ms/aa_manageddisksnapshot_learnmore). ### Right-size or shutdown underutilized virtual machines
-We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
+We've analyzed the usage patterns of your virtual machine over the past seven days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](https://aka.ms/aa_lowusagerec_learnmore). ### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.
-We have observed that you have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk. Note that if you decide to delete the disk, recovery is not possible. We recommend that you create a snapshot before deletion or ensure the data in the disk is no longer required.
+We've observed that you have disks which haven't been attached to a VM for more than 30 days. Please evaluate if you still need the disk. If you decide to delete the disk, recovery isn't possible. We recommend that you create a snapshot before deletion or ensure the data in the disk is no longer required.
-Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
+Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which haven't been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
## MariaDB
Learn more about [MariaDB server - OrcasMariaDbCpuRightSize (Right-size underuti
### Right-size underutilized MySQL servers
-Our internal telemetry shows that the MySQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+Our internal telemetry shows that the MySQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure, which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
Learn more about [MySQL server - OrcasMySQLCpuRightSize (Right-size underutilized MySQL servers)](https://aka.ms/mysqlpricing).
Learn more about [MySQL server - OrcasMySQLCpuRightSize (Right-size underutilize
### Right-size underutilized PostgreSQL servers
-Our internal telemetry shows that the PostgreSQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+Our internal telemetry shows that the PostgreSQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure, which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
Learn more about [PostgreSQL server - OrcasPostgreSqlCpuRightSize (Right-size underutilized PostgreSQL servers)](https://aka.ms/postgresqlpricing).
Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically
Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
+## Azure Monitor
+
+For Azure Monitor cost optimization suggestions, please see [Optimize costs in Azure Monitor](../azure-monitor/best-practices-cost.md).
## Next steps
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
In addition to the original in-tree driver features, Azure Files CSI driver supp
|disableDeleteRetentionPolicy | Specify whether to disable DeleteRetentionPolicy for storage account created by driver. | `true` or `false` | No | `false` | |allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` | |requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` |
+|networkEndpointType | Specify network endpoint type for the storage account created by driver. If `privateEndpoint` is specified, a private endpoint will be created for the storage account. For other cases, a service endpoint will be created by default. | "",`privateEndpoint`| No | "" |
|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. | |tags | [tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" | |matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` |
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
description: Learn about the various methods that you can use to allow the Azure
Previously updated : 01/31/2023 Last updated : 02/27/2023
Azure AD workload identity (preview) is supported on both Windows and Linux clus
az account set --subscription $subscriptionID az identity create --name $UAMI --resource-group $resourceGroupName export USER_ASSIGNED_CLIENT_ID="$(az identity show -g $resourceGroupName --name $UAMI --query 'clientId' -o tsv)"
- export IDENTITY_TENANT=$(az aks show --name $clusterName --resource-group $resourceGroupName --query aadProfile.tenantId -o tsv)
+ export IDENTITY_TENANT=$(az aks show --name $clusterName --resource-group $resourceGroupName --query identity.tenantId -o tsv)
``` 2. You need to set an access policy that grants the workload identity permission to access the Key Vault secrets, access keys, and certificates. The rights are assigned using the `az keyvault set-policy` command shown below.
Azure AD workload identity (preview) is supported on both Windows and Linux clus
EOF ```
+ > [!NOTE]
+ > If you use `objectAlias` instead of `objectName`, make sure to update the YAML script.
+ 6. Deploy a sample pod. Notice the service account reference in the pod definition: ```bash
- cat <<EOF | kubectl -n $serviceAccountNamespace -f -
+ cat <<EOF | kubectl apply -f -
# This is a sample pod definition for using SecretProviderClass and the user-assigned identity to access your key vault kind: Pod apiVersion: v1
Azure AD workload identity (preview) is supported on both Windows and Linux clus
tenantId: <tenant-id> # The tenant ID of the key vault ```
+ > [!NOTE]
+ > If you use `objectAlias` instead of `objectName`, make sure to update the YAML script.
+ 1. Apply the `SecretProviderClass` to your cluster: ```bash
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Global Azure cloud is supported with Arc support on the following regions:
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Install the latest version of the [Azure CLI][install-cli]. - If you don't have one already, you need to create an [AKS cluster][deploy-cluster] or connect an [Arc-enabled Kubernetes cluster][arc-k8s-cluster].
+- Make sure you have the [Azure Kubernetes Service RBAC Admin role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-rbac-admin).
### Set up the Azure CLI extension for cluster extensions
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
AKS uses several managed identities for built-in services and add-ons.
| Add-on | Ingress application gateway | Manages required network resources| Contributor role for node resource group | No | Add-on | omsagent | Used to send AKS metrics to Azure Monitor | Monitoring Metrics Publisher role | No | Add-on | Virtual-Node (ACIConnector) | Manages required network resources for Azure Container Instances (ACI) | Contributor role for node resource group | No
-| OSS project | aad-pod-identity | Enables applications to access cloud resources securely with Microsoft Azure Active Directory (Azure AD) | NA | Steps to grant permission at https://github.com/Azure/aad-pod-identity#role-assignment.
+| OSS project | aad-pod-identity | Enables applications to access cloud resources securely with Microsoft Azure Active Directory (Azure AD) | NA | Steps to grant permission at [Azure AD Pod Identity Role Assignment configuration](https://azure.github.io/aad-pod-identity/docs/getting-started/role-assignment/).
## Create an AKS cluster using a managed identity
az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --role
For a user-assigned kubelet identity that is outside the default worker node resource group, you need to assign the `Managed Identity Operator` role on the kubelet identity. ```azurecli-interactive
-az role assignment create --assignee <control-plane-identity-principal-id> --role "Managed Identity Operator" --scope "<kubelet-identity-resource-id>"
+az role assignment create --assignee <kubelet-identity-principal-id> --role "Managed Identity Operator" --scope "<kubelet-identity-resource-id>"
``` Example:
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
[policy-rules]: https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors [aks-github]: https://github.com/azure/aks/issues [tigera]: https://www.tigera.io/
-[calicoctl]: https://docs.projectcalico.org/reference/calicoctl/
+[calicoctl]: https://docs.tigera.io/calico/3.25/reference/calicoctl/
[calico-support]: https://www.tigera.io/tigera-products/calico/
-[calico-logs]: https://docs.projectcalico.org/maintenance/troubleshoot/component-logs
+[calico-logs]: https://docs.tigera.io/calico/3.25/operations/troubleshoot/component-logs
[calico-aks-cleanup]: https://github.com/Azure/aks-engine/blob/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml [aks-acn-github]: https://github.com/Azure/azure-container-networking/issues
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
description: Learn how to create an App Service environment with an internal loa
ms.assetid: 0f4c1fa4-e344-46e7-8d24-a25e247ae138 Previously updated : 03/29/2022 Last updated : 02/28/2023
To create an ILB ASE:
![ASE creation](media/creating_and_using_an_internal_load_balancer_with_app_service_environment/createilbase.png) > [!NOTE]
-> The App Service Environment name must be no more than 37 characters.
+> The App Service Environment name must be no more than 36 characters.
6. Select Networking
app-service Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/creation.md
Here's how:
1. Search Azure Marketplace for *App Service Environment v3*.
-2. From the **Basics** tab, for **Subscription**, select the subscription. For **Resource Group**, select or create the resource group, and enter the name of your App Service Environment. For **Virtual IP**, select **Internal** if you want your inbound address to be an address in your subnet. Select **External** if you want your inbound address to face the public internet. For **App Service Environment Name**, enter a name. The name you choose will also be used for the domain suffix. For example, if the name you choose is *contoso*, and you have an internal VIP, the domain suffix will be `contoso.appserviceenvironment.net`. If the name you choose is *contoso*, and you have an external VIP, the domain suffix will be `contoso.p.azurewebsites.net`.
+2. From the **Basics** tab, for **Subscription**, select the subscription. For **Resource Group**, select or create the resource group, and enter the name of your App Service Environment. For **Virtual IP**, select **Internal** if you want your inbound address to be an address in your subnet. Select **External** if you want your inbound address to face the public internet. For **App Service Environment Name**, enter a name. The name must be no more than 36 characters. The name you choose will also be used for the domain suffix. For example, if the name you choose is *contoso*, and you have an internal VIP, the domain suffix will be `contoso.appserviceenvironment.net`. If the name you choose is *contoso*, and you have an external VIP, the domain suffix will be `contoso.p.azurewebsites.net`.
![Screenshot that shows the App Service Environment basics tab.](./media/creation/creation-basics.png)
app-service How To Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md
The basic Resource Manager template that creates an App Service Environment look
In addition to the core properties, there are other configuration options that you can use to configure your App Service Environment.
-* *name*: Required. This parameter defines a unique App Service Environment name.
+* *name*: Required. This parameter defines a unique App Service Environment name. The name must be no more than 36 characters.
* *virtualNetwork -> id*: Required. Specifies the resource ID of the subnet. Subnet must be empty and delegated to Microsoft.Web/hostingEnvironments * *internalLoadBalancingMode*: Required. In most cases, set this property to "Web, Publishing", which means both HTTP/HTTPS traffic and FTP traffic is on an internal VIP (Internal Load Balancer). If this property is set to "None", all traffic remains on the public VIP (External Load Balancer). * *zoneRedundant*: Optional. Defines with true/false if the App Service Environment will be deployed into Availability Zones (AZ). For more information, see [zone redundancy](./zone-redundancy.md).
It can take more than one hour for the App Service Environment to be created.
> [App Service Environment v3 Networking](./networking.md) > [!div class="nextstepaction"]
-> [Certificates in App Service Environment v3](./overview-certificates.md)
+> [Certificates in App Service Environment v3](./overview-certificates.md)
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
The [Azure App Configuration](https://marketplace.visualstudio.com/items?itemNam
## Add role assignment
+Assign the proper App Configuration roles to the credentials used by the task so that the task can access the App Configuration store. If you prefer to script this step, see the Azure CLI sketch after the following steps.
+
+1. Go to your target App Configuration store.
+1. In the left menu, select **Access control (IAM)**.
+1. In the right pane, select **Add role assignments**.
+
+ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-button.png" alt-text="Screenshot shows the Add role assignments button.":::
+1. For **Role**, select **App Configuration Data Reader**. This role allows the task to read from the App Configuration store.
+1. Select the service principal associated with the service connection that you created in the previous section.
+
+ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-data-reader.png" alt-text="Screenshot shows the Add role assignment dialog.":::
+1. Select **Review + assign**.
+1. If the store contains Key Vault references, go to the relevant Key Vault and assign the **Key Vault Secrets User** role to the service principal from the previous step. From the Key Vault menu, select **Access policies** and ensure [Azure role-based access control](../key-vault/general/rbac-guide.md) is selected as the permission model.
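+
+If you prefer to script the role assignment instead of using the portal, the following Azure CLI commands are a minimal sketch of the same step; the service principal object ID, store name, and resource group are placeholders for your own values.
+
+```azurecli
+# Look up the App Configuration store's resource ID (placeholder names).
+storeId=$(az appconfig show --name <store-name> --resource-group <resource-group> --query id --output tsv)
+
+# Assign the App Configuration Data Reader role to the service connection's service principal.
+az role assignment create --assignee <service-principal-object-id> --role "App Configuration Data Reader" --scope $storeId
+```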
## Use in builds
azure-app-configuration Push Kv Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-devops-pipeline.md
The [Azure App Configuration Push](https://marketplace.visualstudio.com/items?it
## Add role assignment
+Assign the proper App Configuration roles to the credentials used by the task so that the task can access the App Configuration store.
+
+1. Go to your target App Configuration store.
+1. In the left menu, select **Access control (IAM)**.
+1. In the right pane, select **Add role assignments**.
+
+ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-button.png" alt-text="Screenshot shows the Add role assignments button.":::
+1. For **Role**, select **App Configuration Data Owner**. This role allows the task to read from and write to the App Configuration store.
+1. Select the service principal associated with the service connection that you created in the previous section.
+
+ :::image type="content" border="true" source="./media/azure-app-configuration-role-assignment/add-role-assignment-data-owner.png" alt-text="Screenshot shows the Add role assignment dialog.":::
+1. Select **Review + assign**.
## Use in builds
This section will cover how to use the Azure App Configuration Push task in an A
## Use in releases
-This section will cover how to use the Azure App Configuration Push task in an Azure DevOps release pipelines.
+This section will cover how to use the Azure App Configuration Push task in an Azure DevOps release pipeline.
1. Navigate to the release pipeline page by selecting **Pipelines** > **Releases**. Documentation for release pipelines can be found [here](/azure/devops/pipelines/release). 1. Choose an existing release pipeline. If you don't have one, select **+ New** to create a new one.
Depending on the file content profile you selected, please refer to examples in
**Why am I receiving a 409 error when attempting to push key-values to my configuration store?**
-A 409 Conflict error message will occur if the task tries to remove or overwrite a key-value that is locked in the App Configuration store.
+A 409 Conflict error message will occur if the task tries to remove or overwrite a key-value that is locked in the App Configuration store.
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
ms.devlang: csharp Previously updated : 09/28/2020 Last updated : 02/28/2023 #Customer intent: As a .NET Framework developer, I want to manage all my app settings in one place.
In this quickstart, a .NET Framework console app is used as an example, but the
|-|-| | *TestApp:Settings:Message* | *Data from Azure App Configuration* |
- Leave **Label** and **Content Type** empty.
+ Leave **Label** and **Content Type** empty. For more information about labels and content types, go to [Keys and values](concept-key-value.md#label-keys).
## Create a .NET Framework console app
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
Title: "Access Azure Arc-enabled Kubernetes clusters from anywhere using cluster connect"
+ Title: "Cluster connect access to Azure Arc-enabled Kubernetes clusters"
Last updated 07/22/2022 description: "Cluster connect allows developers to access their Azure Arc-enabled Kubernetes clusters from anywhere for interactive development and debugging."
-# Access Azure Arc-enabled Kubernetes clusters from anywhere using cluster connect
+# Cluster connect access to Azure Arc-enabled Kubernetes clusters
The Azure Arc-enabled Kubernetes *cluster connect* feature provides connectivity to the `apiserver` of the cluster without requiring any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner.
azure-arc Identity Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/identity-access-overview.md
+
+ Title: "Azure Arc-enabled Kubernetes identity and access overview"
Last updated : 02/28/2023+
+description: "Understand identity and access options for Arc-enabled Kubernetes clusters."
++
+# Azure Arc-enabled Kubernetes identity and access overview
+
+You can authenticate, authorize, and control access to your Azure Arc-enabled Kubernetes clusters. Kubernetes role-based access control (Kubernetes RBAC) lets you grant users, groups, and service accounts access to only the resources they need. You can further enhance the security and permissions structure by using Azure Active Directory and Azure role-based access control (RBAC).
+
+While Kubernetes RBAC works only on Kubernetes resources within your cluster, Azure RBAC works on resources across your Azure subscription.
+
+This topic provides an overview of these two RBAC systems and how you can use them with your Arc-enabled Kubernetes clusters.
+
+## Kubernetes RBAC
+
+[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) provides granular filtering of user actions. With Kubernetes RBAC, you assign users or groups permission to create and modify resources or view logs from running application workloads. You can create roles to define permissions, and then assign those roles to users with role bindings. Permissions can be scoped to a single namespace or to the entire cluster.
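+
+As a simple illustration, the following kubectl commands sketch how a namespace-scoped role and role binding might be created; the namespace, role name, and user are hypothetical values.
+
+```bash
+# Create a role that can read pods in the hypothetical "dev" namespace.
+kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace dev
+
+# Bind the role to a specific user so the permissions apply only in "dev".
+kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane@contoso.com --namespace dev
+```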
+
+The Azure Arc-enabled Kubernetes cluster connect feature uses Kubernetes RBAC to provide connectivity to the `apiserver` of the cluster. This connectivity doesn't require any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner. Using the cluster connect feature helps enable interactive debugging and troubleshooting scenarios. It can also be used to provide cluster access to Azure services for [custom locations](conceptual-custom-locations.md).
+
+For more information, see [Cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md) and [Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters](cluster-connect.md).
+
+## Azure RBAC
+
+[Azure role-based access control (RBAC)](/azure/role-based-access-control/overview) is an authorization system built on Azure Resource Manager and Azure Active Directory (Azure AD) that provides fine-grained access management of Azure resources.
+
+With Azure RBAC, role definitions outline the permissions to be applied. You assign these roles to users or groups via a role assignment for a particular scope. The scope can be across the entire subscription or limited to a resource group or to an individual resource such as a Kubernetes cluster.
+
+Using Azure RBAC with your Arc-enabled Kubernetes clusters gives you the benefits of Azure role assignments, such as activity logs showing all Azure RBAC changes to an Azure resource.
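+
+For example, a role assignment scoped to a single Arc-enabled Kubernetes cluster might look like the following Azure CLI sketch, assuming the built-in **Azure Arc Kubernetes Viewer** role and placeholder identifiers.
+
+```azurecli
+# Assign an Azure RBAC role on one Arc-enabled Kubernetes cluster (placeholder values).
+az role assignment create \
+    --assignee <azure-ad-user-or-group-object-id> \
+    --role "Azure Arc Kubernetes Viewer" \
+    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>
+```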
+
+For more information, see [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md) and [Use Azure RBAC for Azure Arc-enabled Kubernetes clusters](azure-rbac.md).
+
+## Next steps
+
+- Learn about [access and identity options for Azure Kubernetes Service (AKS) clusters](/azure/aks/concepts-identity).
+- Learn about [Cluster connect access to Azure Arc-enabled Kubernetes clusters](conceptual-cluster-connect.md).
+- Learn about [Azure RBAC on Azure Arc-enabled Kubernetes](conceptual-azure-rbac.md).
azure-arc Tutorial Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-workload-management.md
If you don't have an Azure subscription, create a [free account](https://azure.m
In order to successfully deploy the sample, you need: -- [Azure CLI](/cli/azure/install-azure-cli).
+- [Azure CLI](/cli/azure/install-azure-cli)
- [GitHub CLI](https://cli.github.com) - [Helm](https://helm.sh/docs/helm/helm_install/) - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
+- [jq](https://stedolan.github.io/jq/download/)
+- A GitHub token with the following scopes: `repo`, `workflow`, `write:packages`, `delete:packages`, `read:org`, `delete_repo`. One way to create such a token is sketched after this list.
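+
+One way to obtain a token with these scopes, assuming you already use the GitHub CLI, is sketched below. You can also create a personal access token manually in the GitHub UI.
+
+```bash
+# Sign in and request the additional scopes the sample needs (a sketch; adjust as needed).
+gh auth login --scopes "repo,workflow,write:packages,delete:packages,read:org,delete_repo"
+
+# Export the resulting token for the deployment scripts to use.
+export GITHUB_TOKEN=$(gh auth token)
+```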
## 1 - Deploy the sample
azure-cache-for-redis Cache Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-nodejs-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Node.js'
-description: In this quickstart, you'll learn how to use Azure Cache for Redis with Node.js and node_redis.
+description: In this quickstart, learn how to use Azure Cache for Redis with Node.js and node_redis.
ms.devlang: javascript Previously updated : 02/04/2022 Last updated : 02/16/2023 -+ #Customer intent: As a Node.js developer, new to Azure Cache for Redis, I want to create a new Node.js app that uses Azure Cache for Redis. # Quickstart: Use Azure Cache for Redis in Node.js In this quickstart, you incorporate Azure Cache for Redis into a Node.js app to have access to a secure, dedicated cache that is accessible from any application within Azure.
-## Skip to the code on GitHub
-
-If you want to skip straight to the code, see the [Node.js quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/nodejs) on GitHub.
- ## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)
For examples of using other Node.js clients, see the individual documentation fo
Add environment variables for your **HOST NAME** and **Primary** access key. Use these variables from your code instead of including the sensitive information directly in your code. ```powershell
-set REDISCACHEHOSTNAME=contosoCache.redis.cache.windows.net
-set REDISCACHEKEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+set AZURE_CACHE_FOR_REDIS_HOST_NAME=contosoCache
+set AZURE_CACHE_FOR_REDIS_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
``` ## Connect to the cache
-The latest builds of [node_redis](https://github.com/mranney/node_redis) provide support for connecting to Azure Cache for Redis using TLS. The following example shows how to connect to Azure Cache for Redis using the TLS endpoint of 6380.
-
-```js
-var redis = require("redis");
-
-// Add your cache name and access key.
-var client = redis.createClient(6380, process.env.REDISCACHEHOSTNAME,
- {auth_pass: process.env.REDISCACHEKEY, tls: {servername: process.env.REDISCACHEHOSTNAME}});
-```
-
-Don't create a new connection for each operation in your code. Instead, reuse connections as much as possible.
+The latest builds of [node_redis](https://github.com/mranney/node_redis) support several connection options. Don't create a new connection for each operation in your code. Instead, reuse connections as much as possible.
## Create a new Node.js app
-Create a new script file named *redistest.js*. Use the command `npm install redis bluebird` to install required packages.
-
-Add the following example JavaScript to the file. This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed. For more examples of using Redis with the [node_redis](https://github.com/mranney/node_redis) client, see [https://redis.js.org/](https://redis.js.org/).
-
-```js
-var redis = require("redis");
-
-async function testCache() {
-
- // Connect to the Azure Cache for Redis over the TLS port using the key.
- var cacheHostName = process.env.REDISCACHEHOSTNAME;
- var cachePassword = process.env.REDISCACHEKEY;
- var cacheConnection = redis.createClient({
- // rediss for TLS
- url: "rediss://" + cacheHostName + ":6380",
- password: cachePassword,
- });
- await cacheConnection.connect();
-
- // Perform cache operations using the cache connection object...
-
- // Simple PING command
- console.log("\nCache command: PING");
- console.log("Cache response : " + await cacheConnection.ping());
-
- // Simple get and put of integral data types into the cache
- console.log("\nCache command: GET Message");
- console.log("Cache response : " + await cacheConnection.get("Message"));
-
- console.log("\nCache command: SET Message");
- console.log("Cache response : " + await cacheConnection.set("Message",
- "Hello! The cache is working from Node.js!"));
-
- // Demonstrate "SET Message" executed as expected...
- console.log("\nCache command: GET Message");
- console.log("Cache response : " + await cacheConnection.get("Message"));
-
- // Get the client list, useful to see if connection list is growing...
- console.log("\nCache command: CLIENT LIST");
- console.log("Cache response : " + await cacheConnection.sendCommand(["CLIENT", "LIST"]));
-
- console.log("\nDone");
- process.exit();
-}
-
-testCache();
-```
-
-Run the script with Node.js.
-
-```powershell
-node redistest.js
-```
-
-In the example below, you can see the `Message` key previously had a cached value, which was set using the Redis Console in the Azure portal. The app updated that cached value. The app also executed the `PING` and `CLIENT LIST` commands.
-
-![Redis Cache app completed](./media/cache-nodejs-get-started/redis-cache-app-complete.png)
+1. Create a new script file named *redistest.js*.
+1. Use the following command to install the redis package.
+
+ ```bash
+ npm install redis
+ ```
+
+1. Add the following example JavaScript to the file.
++
+ ```javascript
+ const redis = require("redis");
+
+ // Environment variables for cache
+ const cacheHostName = process.env.AZURE_CACHE_FOR_REDIS_HOST_NAME;
+ const cachePassword = process.env.AZURE_CACHE_FOR_REDIS_ACCESS_KEY;
+
+ if(!cacheHostName) throw Error("AZURE_CACHE_FOR_REDIS_HOST_NAME is empty")
+ if(!cachePassword) throw Error("AZURE_CACHE_FOR_REDIS_ACCESS_KEY is empty")
+
+ async function testCache() {
+
+ // Connection configuration
+ const cacheConnection = redis.createClient({
+ // rediss for TLS
+ url: `rediss://${cacheHostName}:6380`,
+ password: cachePassword
+ });
+
+ // Connect to Redis
+ await cacheConnection.connect();
+
+ // PING command
+ console.log("\nCache command: PING");
+ console.log("Cache response : " + await cacheConnection.ping());
+
+ // GET
+ console.log("\nCache command: GET Message");
+ console.log("Cache response : " + await cacheConnection.get("Message"));
+
+ // SET
+ console.log("\nCache command: SET Message");
+ console.log("Cache response : " + await cacheConnection.set("Message",
+ "Hello! The cache is working from Node.js!"));
+
+ // GET again
+ console.log("\nCache command: GET Message");
+ console.log("Cache response : " + await cacheConnection.get("Message"));
+
+ // Client list, useful to see if connection list is growing...
+ console.log("\nCache command: CLIENT LIST");
+ console.log("Cache response : " + await cacheConnection.sendCommand(["CLIENT", "LIST"]));
+
+ // Disconnect
+ cacheConnection.disconnect()
+
+ return "Done"
+ }
+
+ testCache().then((result) => console.log(result)).catch(ex => console.log(ex));
+ ```
+
+ This code shows you how to connect to an Azure Cache for Redis instance using the cache host name and key environment variables. The code also stores and retrieves a string value in the cache. The `PING` and `CLIENT LIST` commands are also executed. For more examples of using Redis with the [node_redis](https://github.com/mranney/node_redis) client, see [https://redis.js.org/](https://redis.js.org/).
++
+1. Run the script with Node.js.
+
+ ```bash
+ node redistest.js
+ ```
+
+1. Examine the output; it should look similar to the following.
+
+ ```console
+ Cache command: PING
+ Cache response : PONG
+
+ Cache command: GET Message
+ Cache response : Hello! The cache is working from Node.js!
+
+ Cache command: SET Message
+ Cache response : OK
+
+ Cache command: GET Message
+ Cache response : Hello! The cache is working from Node.js!
+
+ Cache command: CLIENT LIST
+ Cache response : id=10017364 addr=76.22.73.183:59380 fd=221 name= age=1 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 argv-mem=10 obl=0 oll=0 omem=0 tot-mem=61466 ow=0 owmem=0 events=r cmd=client user=default numops=6
+
+ Done
+ ```
## Clean up resources
-If you continue to the next tutorial, can keep the resources created in this quickstart and reuse them.
-
-Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
+If you continue to the next tutorial, you can keep the resources created in this quickstart and reuse them. Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
> [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
+> Deleting a resource group is irreversible; the resource group and all the resources in it are permanently deleted. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually instead of deleting the resource group.
>
-Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
+
+1. In the **Filter by name** text box, enter the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
+
+ ![Delete Azure Resource group](./media/cache-nodejs-get-started/redis-cache-delete-resource-group.png)
-In the **Filter by name** text box, enter the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
+1. Confirm the deletion of the resource group. Enter the name of your resource group to confirm, and select **Delete**.
-![Delete Azure Resource group](./media/cache-nodejs-get-started/redis-cache-delete-resource-group.png)
+1. After a few moments, the resource group and all of its contained resources are deleted.
-You'll be asked to confirm the deletion of the resource group. Enter the name of your resource group to confirm, and select **Delete**.
+## Get the sample code
-After a few moments, the resource group and all of its contained resources are deleted.
+Get the [Node.js quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/nodejs) on GitHub.
## Next steps
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
| Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]
-> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to have a more idiomatic and intuitive. To learn more, see Azure Functions Python [developer guide](/azure/azure-functions/functions-reference-python.md?pivots=python-mode-decorators).
+> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive. To learn more, see the Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators).
> > In the following code snippets, Python (PM2) denotes programming model V2, the new experience.
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
In this article, you learn how to use the Visual Studio Code Azure Functions ext
:::image type="content" source="./media/quickstart-python-vscode/functions-vs-code-complete.png" alt-text="Screenshot of the running durable function in Azure."::: > [!NOTE]
-> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to have a more idiomatic and intuitive. To learn more, see Azure Functions Python [developer guide](/azure/azure-functions/functions-reference-python.md?pivots=python-mode-decorators).
+> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive. To learn more, see the Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators).
## Prerequisites
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
The following table shows approximately how many agents can communicate with a g
## Download the Log Analytics gateway
-Get the latest version of the Log Analytics gateway Setup file from either Microsoft Download Center ([Download Link](https://go.microsoft.com/fwlink/?linkid=837444)) or the Azure portal.
-
-To get the Log Analytics gateway from the Azure portal, follow these steps:
-
-1. Browse the list of services, and then select **Log Analytics**.
-1. Select a workspace.
-1. In your workspace pane, from the pane on the left, under **General**, select **Quick Start**.
-1. Under **Choose a data source to connect to the workspace**, select **Computers**.
-1. In the **Direct Agent** pane, select **Download Log Analytics gateway**.
-
- ![Screenshot of the steps to download the Log Analytics gateway](./media/gateway/download-gateway.png)
-
-or
-
-1. In your workspace pane, from the pane on the left, under **Settings**, select **Advanced settings**.
-1. Go to **Connected Sources** > **Windows Servers** and select **Download Log Analytics gateway**.
+Get the latest version of the Log Analytics gateway Setup file from Microsoft Download Center ([Download Link](https://go.microsoft.com/fwlink/?linkid=837444)).
## Install Log Analytics gateway using setup wizard
After the load balancer is created, a backend pool needs to be created, which di
To configure the Azure Monitor agent (installed on the gateway server) to use the gateway to upload data for Windows or Linux: 1. Follow the instructions to [configure proxy settings on the agent](./azure-monitor-agent-overview.md#proxy-configuration) and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-2. Add the **configuration endpoint URL** to fetch data collection rules to the allow list for the gateway
+2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
`Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com` `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com` (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-3. Add the **data ingestion endpoint URL** to the allow list for the gateway
+3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
`Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com` 3. Restart the **OMS Gateway** service to apply the changes `Stop-Service -Name <gateway-name>`
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section outlines supported scenarios.
### Supported platforms and frameworks
-Supported platforms and frameworks are listed here.
+Supported platforms and frameworks are listed below.
#### Azure service integration (portal enablement, Azure Resource Manager deployments) * [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
Leave product feedback for the engineering team on [UserVoice](https://feedback.
- [Create a resource](create-workspace-resource.md) - [Application Map](app-map.md)-- [Transaction search](diagnostic-search.md)
+- [Transaction search](diagnostic-search.md)
azure-monitor Container Insights Cost Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost-config.md
+
+ Title: Configure Container insights cost optimization data collection rules | Microsoft Docs
+description: This article describes how you can configure the Container insights agent to control data collection for metric counters
+ Last updated : 02/23/2023+++
+# Enable cost optimization settings (preview)
+
+Cost optimization settings let you customize and control the metrics data collected through the container insights agent. This preview supports data collection settings such as the data collection interval and the namespaces to exclude from collection, configured through [Azure Monitor Data Collection Rules (DCR)](../essentials/data-collection-rule-overview.md). These settings control the volume of ingestion and reduce the monitoring costs of container insights.
+
+>[!NOTE]
+>This feature is currently in public preview. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
++
+## Data collection parameters
+
+The container insights agent periodically checks for the data collection settings, validates them, and applies the applicable settings to the relevant container insights Log Analytics tables and custom metrics. The data collection settings take effect within the next configured data collection interval.
+
+The following table describes the supported data collection settings:
+
+| **Data collection setting** | **Allowed Values** | **Description** |
+| -- | | -- |
+| **interval** | \[1m, 30m] in 1m intervals | This value determines how often the agent collects data. The default value is 1m, where m denotes minutes. If the value is outside the allowed range, it defaults to _1m_ (60 seconds). |
+| **namespaceFilteringMode** | Include, Exclude, or Off | Choosing Include collects only data from the values in the namespaces field. Choosing Exclude collects data from all namespaces except for the values in the namespaces field. Off ignores any namespace selections and collects data from all namespaces. |
+| **namespaces** | An array of names, for example, \["kube-system", "default"] | An array of comma-separated Kubernetes namespaces for which inventory and perf data is included or excluded based on the _namespaceFilteringMode_. For example, **namespaces** = ["kube-system", "default"] with an _Include_ setting collects only these two namespaces. With an _Exclude_ setting, the agent collects data from all other namespaces except for _kube-system_ and _default_. With an _Off_ setting, the agent collects data from all namespaces including _kube-system_ and _default_. Invalid and unrecognized namespaces are ignored. |
+
+## Log Analytics data collection
+
+The following table lists the container insights Log Analytics tables to which the data collection settings apply.
+
+>[!NOTE]
+>This feature configures settings for all container insights tables (excluding ContainerLog). To configure settings for the ContainerLog table, update the ConfigMap listed in the documentation for [agent data collection settings](../containers/container-insights-agent-config.md).
+
+| ContainerInsights Table Name | Is Data collection setting: interval applicable? | Is Data collection setting: namespaces applicable? | Remarks |
+| | | | |
+| ContainerInventory | Yes | Yes | |
+| ContainerNodeInventory | Yes | No | Data collection setting for namespaces is not applicable since Kubernetes Node is not a namespace scoped resource |
+| KubeNodeInventory | Yes | No | Data collection setting for namespaces is not applicable since Kubernetes Node is not a namespace scoped resource |
+| KubePodInventory | Yes | Yes ||
+| KubePVInventory | Yes | Yes | |
+| KubeServices | Yes | Yes | |
+| KubeEvents | No | Yes | Data collection setting for interval is not applicable for the Kubernetes Events |
+| Perf | Yes | Yes\* | \*Data collection setting for namespaces is not applicable for the Kubernetes Node related metrics since the Kubernetes Node is not a namespace scoped object. |
+| InsightsMetrics| Yes\*\* | Yes\*\* | \*\*Data collection settings are only applicable for the metrics collecting the following namespaces: container.azm.ms/kubestate, container.azm.ms/pv and container.azm.ms/gpu |
+
+## Custom Metrics
+
+| Metric namespace | Is Data collection setting: interval applicable? | Is Data collection setting: namespaces applicable? | Remarks |
+| | | | |
+| Insights.container/nodes| Yes | No | Node is not a namespace scoped resource |
+|Insights.container/pods | Yes | Yes| |
+| Insights.container/containers | Yes | Yes | |
+| Insights.container/persistentvolumes | Yes | Yes | |
+
+## Impact on existing alerts and visualizations
+If you're currently using the above tables for charts or alerts, modifying your data collection settings might degrade those experiences. If you're excluding namespaces or reducing data collection frequency, review your existing alerts, dashboards, and workbooks that use this data.
+
+To scan for alerts that may be referencing these tables, run the following Azure Resource Graph query:
+
+```Kusto
+resources
+| where type in~ ('microsoft.insights/scheduledqueryrules') and ['kind'] !in~ ('LogToMetric')
+| extend severity = strcat("Sev", properties["severity"])
+| extend enabled = tobool(properties["enabled"])
+| where enabled in~ ('true')
+| where tolower(properties["targetResourceTypes"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["targetResourceType"]) matches regex 'microsoft.operationalinsights/workspaces($|/.*)?' or tolower(properties["scopes"]) matches regex 'providers/microsoft.operationalinsights/workspaces($|/.*)?'
+| where properties contains "Perf" or properties contains "InsightsMetrics" or properties contains "ContainerInventory" or properties contains "ContainerNodeInventory" or properties contains "KubeNodeInventory" or properties contains "KubePodInventory" or properties contains "KubePVInventory" or properties contains "KubeServices" or properties contains "KubeEvents"
+| project id,name,type,properties,enabled,severity,subscriptionId
+| order by tolower(name) asc
+```
+
+Reference the [Limitations](./container-insights-cost-config.md#limitations) section for information on migrating your Recommended alerts.
+
+## Prerequisites
+
+- The AKS cluster must be using either a system-assigned or user-assigned managed identity.
+ - If the AKS cluster is using a service principal, you must upgrade to [Managed Identity](../../aks/use-managed-identity.md#update-an-aks-cluster-to-use-a-managed-identity).
+
+- Azure CLI: The minimum version required is 2.45.0. Run `az --version` to find your version, and run `az upgrade` to upgrade it. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. A sketch for installing the required CLI extensions follows this list.
+ - For AKS clusters, aks-preview version 0.5.125 or higher
+ - For Arc enabled Kubernetes and AKS hybrid, k8s-extension version 1.3.7 or higher
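+
+The following commands are one way to check your CLI version and install or update the extensions mentioned above; treat them as a sketch and adjust for your environment.
+
+```azcli
+# Check the Azure CLI version and currently installed extensions.
+az version
+az extension list --output table
+
+# Install or update the extensions referenced above.
+az extension add --name aks-preview --upgrade
+az extension add --name k8s-extension --upgrade
+```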
+
+## Cost presets and collection settings
+Cost presets and collection settings are available for selection in the Azure portal to allow easy configuration. By default, container insights ships with the Standard preset; however, you can choose one of the following presets to modify your collection settings.
+
+| Cost preset | Collection frequency | Namespace filters | Syslog collection |
+| | | | |
+| Standard | 1 m | None | Not enabled |
+| Cost-optimized | 5 m | Excludes kube-system, gatekeeper-system, azure-arc | Not enabled |
+| Syslog | 1 m | None | Enabled by default |
+
+## Configuring AKS data collection settings using Azure CLI
+
+Using the CLI to enable monitoring for your AKS cluster requires passing in the configuration as a JSON file.
+
+The default schema for the config file follows this format:
+
+```json
+{
+ "interval": "string",
+ "namespaceFilteringMode": "string",
+ "namespaces": ["string"]
+}
+```
+
+* `interval`: The frequency of data collection. The value must be a number between 1 and 30, followed by `m` to denote minutes.
+* `namespaceFilteringMode`: The filtering mode for the namespaces. The value must be Include, Exclude, or Off.
+* `namespaces`: An array of Kubernetes namespaces, as strings, to include or exclude.
+
+Example input:
+
+```json
+{
+ "interval": "1m",
+ "namespaceFilteringMode": "Include",
+ "namespaces": ["kube-system"]
+}
+```
+Create a file and provide values for _interval_, _namespaceFilteringMode_, and _namespaces_. The following CLI instructions use the file name *dataCollectionSettings.json*.
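+
+For example, one way to create that file from a shell is the following sketch, which reuses the example values shown above.
+
+```bash
+# Write the data collection settings file referenced by the CLI commands below.
+cat > dataCollectionSettings.json <<'EOF'
+{
+  "interval": "1m",
+  "namespaceFilteringMode": "Include",
+  "namespaces": ["kube-system"]
+}
+EOF
+```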
+
+## Onboarding to a new AKS cluster
+
+Use the following command to enable monitoring of your AKS cluster:
+
+```azcli
+az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --data-collection-settings dataCollectionSettings.json --generate-ssh-keys
+```
+
+## Onboarding to an existing AKS Cluster
+
+## [Azure CLI](#tab/create-CLI)
+
+### Onboard to a cluster without the monitoring addon
+
+```azcli
+az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <clusterResourceGroup> -n <clusterName> --data-collection-settings dataCollectionSettings.json
+```
+
+### Onboard to a cluster with an existing monitoring addon
+
+```azcli
+# obtain the configured log analytics workspace resource id
+az aks show -g <clusterResourceGroup> -n <clusterName> | grep -i "logAnalyticsWorkspaceResourceID"
+
+# disable monitoring
+az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName>
+
+# enable monitoring with data collection settings
+az aks enable-addons -a monitoring --enable-msi-auth-for-monitoring -g <clusterResourceGroup> -n <clusterName> --workspace-resource-id <logAnalyticsWorkspaceResourceId> --data-collection-settings dataCollectionSettings.json
+```
+
+## [Azure portal](#tab/create-portal)
+1. In the Azure portal, select the AKS cluster that you want to monitor.
+2. From the resource pane on the left, select the **Insights** item under the **Monitoring** section.
+3. If you haven't previously configured Container insights, select the **Configure Azure Monitor** button. For clusters already onboarded to Insights, select the **Monitoring Settings** button in the toolbar.
+4. If you're configuring Container insights for the first time or haven't migrated to using [managed identity authentication (preview)](../containers/container-insights-onboard.md#authentication), select the **Use managed identity (preview)** checkbox.
+5. Using the dropdown, choose one of the **Cost presets**. For more configuration, you can select **Edit profile settings**.
+6. Select the blue **Configure** button to finish.
++
+## [ARM](#tab/create-arm)
+
+1. Download the Azure Resource Manager Template and Parameter files
+
+```bash
+curl -L https://aka.ms/aks-enable-monitoring-costopt-onboarding-template-file -o existingClusterOnboarding.json
+```
+
+```bash
+curl -L https://aka.ms/aks-enable-monitoring-costopt-onboarding-template-parameter-file -o existingClusterParam.json
+```
+
+2. Edit the values in the parameter file: existingClusterParam.json
+
+- For _aksResourceId_ and _aksResourceLocation_, use the values on the **AKS Overview** page for the AKS cluster.
+- For _workspaceResourceId_, use the resource ID of your Log Analytics workspace.
+- For _workspaceLocation_, use the Location of your Log Analytics workspace
+- For _resourceTagValues_, use the existing tag values specified for the AKS cluster
+- For _dataCollectionInterval_, specify the data collection interval. Allowed values are 1m, 2m, … 30m, where the m suffix indicates minutes.
+- For _namespaceFilteringModeForDataCollection_, specify whether the namespace array is to be included or excluded for collection. If set to Off, the agent ignores the namespaces field.
+- For _namespacesForDataCollection_, specify an array of the namespaces to exclude or include for the data collection. For example, to exclude the "kube-system" and "default" namespaces, specify the value as ["kube-system", "default"] with an Exclude value for namespaceFilteringMode.
+
+3. Deploy the ARM template
+
+```azcli
+az login
+
+az account set --subscription "Cluster Subscription Name"
+
+az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
+```
+++
+## Onboarding to an existing AKS hybrid Cluster
+
+## [Azure CLI](#tab/create-CLI)
+
+```azcli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true dataCollectionSettings='{\"interval\": \"1m\",\"namespaceFilteringMode\": \"Include\", \"namespaces\": [ \"kube-system\"]}'
+```
+
+The collection settings can be modified through the `dataCollectionSettings` field.
+
+* `interval`: The frequency of data collection. The value must be a number between 1 and 30, followed by `m` to denote minutes.
+* `namespaceFilteringMode`: The filtering mode for the namespaces. The value must be Include, Exclude, or Off.
+* `namespaces`: An array of Kubernetes namespaces, as strings, to include or exclude.
+
+## [Azure portal](#tab/create-portal)
+1. In the Azure portal, select the AKS hybrid cluster that you want to monitor.
+2. From the resource pane on the left, select the **Insights** item under the **Monitoring** section.
+3. If you haven't previously configured Container insights, select the **Configure Azure Monitor** button. For clusters already onboarded to Insights, select the **Monitoring Settings** button in the toolbar.
+4. Using the dropdown, choose one of the **Cost presets**. For more configuration, you can select **Edit advanced collection settings**.
+5. Select the blue **Configure** button to finish.
++
+## [ARM](#tab/create-arm)
++
+1. Download the Azure Resource Manager Template and Parameter files
+
+```bash
+curl -L https://aka.ms/existingClusterOnboarding.json -o existingClusterOnboarding.json
+```
+
+```bash
+curl -L https://aka.ms/existingClusterParam.json -o existingClusterParam.json
+```
+
+2. Edit the values in the parameter file: existingClusterParam.json
+
+- For _clusterResourceId_ and _clusterResourceLocation_, use the values on the **Overview** page for the AKS hybrid cluster.
+- For _workspaceResourceId_, use the resource ID of your Log Analytics workspace.
+- For _workspaceLocation_, use the Location of your Log Analytics workspace
+- For _resourceTagValues_, use the existing tag values specified for the AKS hybrid cluster
+- For _dataCollectionInterval_, specify the data collection interval. Allowed values are 1m, 2m, … 30m, where the m suffix indicates minutes.
+- For _namespaceFilteringModeForDataCollection_, specify whether the namespace array is to be included or excluded for collection. If set to Off, the agent ignores the namespaces field.
+- For _namespacesForDataCollection_, specify an array of the namespaces to exclude or include for the data collection. For example, to exclude the "kube-system" and "default" namespaces, specify the value as ["kube-system", "default"] with an Exclude value for namespaceFilteringMode.
+
+3. Deploy the ARM template
+
+```azcli
+az login
+
+az account set --subscription "Cluster Subscription Name"
+
+az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
+```
+++
+## Onboarding to an existing Azure Arc K8s Cluster
+
+## [Azure CLI](#tab/create-CLI)
+
+```azcli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true dataCollectionSettings='{\"interval\": \"1m\",\"namespaceFilteringMode\": \"Include\", \"namespaces\": [ \"kube-system\"]}'
+```
+
+The collection settings can be modified through the `dataCollectionSettings` field.
+
+* `interval`: The frequency of data collection. The value must be a number between 1 and 30, followed by `m` to denote minutes.
+* `namespaceFilteringMode`: The filtering mode for the namespaces. The value must be Include, Exclude, or Off.
+* `namespaces`: An array of Kubernetes namespaces, as strings, to include or exclude.
+
+## [Azure portal](#tab/create-portal)
+1. In the Azure portal, select the Arc cluster that you want to monitor.
+2. From the resource pane on the left, select the **Insights** item under the **Monitoring** section.
+3. If you haven't previously configured Container insights, select the **Configure Azure Monitor** button. For clusters already onboarded to Insights, select the **Monitoring Settings** button in the toolbar.
+4. If you're configuring Container insights for the first time, select the **Use managed identity (preview)** checkbox.
+5. Using the dropdown, choose one of the **Cost presets**. For more configuration, you can select **Edit advanced collection settings**.
+6. Select the blue **Configure** button to finish.
++
+## [ARM](#tab/create-arm)
+
+1. Download the Azure Resource Manager Template and Parameter files
+
+```bash
+curl -L https://aka.ms/arc-k8s-enable-monitoring-costopt-onboarding-template-file -o existingClusterOnboarding.json
+```
+
+```bash
+curl -L https://aka.ms/arc-k8s-enable-monitoring-costopt-onboarding-template-parameter-file -o existingClusterParam.json
+```
+
+2. Edit the values in the parameter file: existingClusterParam.json
+
+- For _clusterResourceId_ and _clusterRegion_, use the values on the **Overview** page for the Arc enabled Kubernetes cluster.
+- For _workspaceResourceId_, use the resource ID of your Log Analytics workspace.
+- For _workspaceLocation_, use the Location of your Log Analytics workspace
+- For _resourceTagValues_, use the existing tag values specified for the Arc cluster
+- For _dataCollectionInterval_, specify the data collection interval. Allowed values are 1m, 2m, … 30m, where the m suffix indicates minutes.
+- For _namespaceFilteringModeForDataCollection_, specify whether the namespace array is to be included or excluded for collection. If set to Off, the agent ignores the namespaces field.
+- For _namespacesForDataCollection_, specify an array of the namespaces to exclude or include for the data collection. For example, to exclude the "kube-system" and "default" namespaces, specify the value as ["kube-system", "default"] with an Exclude value for namespaceFilteringMode.
+
+3. Deploy the ARM template
+
+```azcli
+az login
+
+az account set --subscription "Cluster's Subscription Name"
+
+az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.json --parameters @./existingClusterParam.json
+```
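+
+After the deployment completes, you can optionally confirm what was applied by inspecting the Azure Monitor extension instance on the Arc cluster. This is a sketch, not part of the documented procedure; it assumes the extension name `azuremonitor-containers` used earlier.
+
+```azcli
+# Show the Azure Monitor extension instance, including its configuration settings
+az k8s-extension show --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters
+```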
++
+## Data collection settings updates
+
+To update your data collection settings, modify the values in the parameter files and redeploy the Azure Resource Manager templates to your corresponding AKS or Azure Arc Kubernetes cluster, or select your new options through the Monitoring Settings in the portal.
+
+## Troubleshooting
+
+- Only clusters using [managed identity authentication (preview)](../containers/container-insights-onboard.md#authentication) are able to use this feature.
+- If you exclude all namespaces, missing data in your Container insights charts is expected behavior.
+
+## Limitations
+
+- Recommended alerts don't work as intended if the data collection interval is configured to be greater than 1 minute. To continue using Recommended alerts, migrate to the [Prometheus metrics addon](../essentials/prometheus-metrics-overview.md).
+- There may be gaps in the trend line charts of the Deployments workbook if the configured data collection interval is greater than the time granularity of the selected time range.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Title: Monitoring cost for Container insights | Microsoft Docs description: This article describes the monitoring cost for metrics and inventory data collected by Container insights to help customers manage their usage and associated costs. - Previously updated : 08/29/2022 Last updated : 01/24/2023 # Understand monitoring costs for Container insights
The Azure Monitor pricing model is primarily based on the amount of data ingeste
The following types of data collected from a Kubernetes cluster with Container insights influence cost and can be customized based on your usage: -- Stdout and stderr container logs from every monitored container in every Kubernetes namespace in the cluster
+- Perf, Inventory, InsightsMetrics, and KubeEvents can be controlled through [cost optimization settings](../containers/container-insights-cost-config.md)
+- Stdout and stderr container logs from every monitored container in every Kubernetes namespace in the cluster via the [agent ConfigMap](../containers/container-insights-agent-config.md)
- Container environment variables from every monitored container in the cluster - Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics
This workbook helps you visualize the source of your data without having to buil
To learn about managing rights and permissions to the workbook, review [Access control](../visualize/workbooks-overview.md#access-control).
+### Determining the root cause of your data ingestion
+
+Container Insights data primarily consists of metric counters (Perf, Inventory, InsightsMetrics, and custom metrics) and logs (ContainerLog). Based on your cluster usage and size, you may have different requirements and monitoring needs.
+
+By navigating to the By Table section of the Data Usage workbook, you can see the breakdown of table sizes for Container Insights.
+
+[![Screenshot that shows the By Table breakdown in Data Usage workbook.](media/container-insights-cost/data-usage-workbook-by-table.png)](media/container-insights-cost/data-usage-workbook-by-table.png#lightbox)
+
+If the majority of your data comes from one of the following tables:
+- Perf
+- InsightsMetrics
+- ContainerInventory
+- ContainerNodeInventory
+- KubeNodeInventory
+- KubePodInventory
+- KubePVInventory
+- KubeServices
+- KubeEvents
+
+You can adjust your ingestion by using the [cost optimization settings](../containers/container-insights-cost-config.md) and/or by migrating to the [Prometheus metrics addon](../essentials/prometheus-metrics-overview.md).
+
+Otherwise, the majority of your data belongs to the ContainerLog table, and you can follow the steps below to reduce your ContainerLog costs.
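+
+If you prefer the command line over the workbook, a similar per-table breakdown can be pulled with a Log Analytics query. This is a minimal sketch; it assumes the `log-analytics` CLI extension is installed and that you know your workspace GUID.
+
+```azcli
+# Approximate billable volume per table over the last 30 days (Quantity is reported in MB)
+az monitor log-analytics query --workspace <workspace-guid> --analytics-query "Usage | where TimeGenerated > ago(30d) | where IsBillable == true | summarize BillableGB = round(sum(Quantity) / 1024, 2) by DataType | sort by BillableGB desc"
+```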
+
+### Reducing your ContainerLog costs
+ After you finish your analysis to determine which sources are generating the data that's exceeding your requirements, you can reconfigure data collection. For more information on configuring collection of stdout, stderr, and environmental variables, see [Configure agent data collection settings](container-insights-agent-config.md). The following examples show what changes you can apply to your cluster by modifying the ConfigMap file to help control cost.
The following examples show what changes you can apply to your cluster by modify
After you apply one or more of these changes to your ConfigMaps, apply it to your cluster with the command `kubectl apply -f <config_map_yaml_file.yaml>`. For example, run the command `kubectl apply -f container-azm-ms-agentconfig.yaml` to open the file in your default editor to modify and then save it.
+### Configure Basic Logs
+
+You can save on data ingestion costs by configuring ContainerLog in your Log Analytics workspace, which you primarily use for debugging, troubleshooting, and auditing, as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
+
+You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
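+
+For reference, switching the ContainerLogV2 table to the Basic table plan can also be done from the CLI. A minimal sketch, assuming your own workspace name and resource group:
+
+```azcli
+# Move the ContainerLogV2 table to the Basic table plan
+az monitor log-analytics workspace table update --resource-group <resource-group> --workspace-name <workspace-name> --name ContainerLogV2 --plan Basic
+```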
+ ### Prometheus metrics scraping If you use [Prometheus metric scraping](container-insights-prometheus.md), make sure that you limit the number of metrics you collect from your cluster:
If you use [Prometheus metric scraping](container-insights-prometheus.md), make
- Container insights supports exclusion and inclusion lists by metric name. For example, if you're scraping **kubedns** metrics in your cluster, hundreds of them might get scraped by default. But you're most likely only interested in a subset of the metrics. Confirm that you specified a list of metrics to scrape, or exclude others except for a few to save on data ingestion volume. It's easy to enable scraping and not use many of those metrics, which will only add charges to your Log Analytics bill. - When you scrape through pod annotations, ensure you filter by namespace so that you exclude scraping of pod metrics from namespaces that you don't use. An example is the `dev-test` namespace.
-### Configure Basic Logs
-
-You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs in Azure Monitor](../logs/basic-logs-configure.md). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. ContainerLogV2 includes verbose text-based log records.
-
-You must be on the ContainerLogV2 schema to configure Basic Logs. For more information, see [Enable the ContainerLogV2 schema (preview)](container-insights-logging-v2.md).
- ## Data collected from Kubernetes clusters ### Metric data
azure-monitor Metric Chart Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md
Your storage account resource is experiencing an excess volume of failed transac
In the metric picker, select your storage account and the **Transactions** metric. Switch chart type to **Bar chart**. Click **Apply splitting** and select dimension **API name**. Then click on the **Add filter** and pick the **API name** dimension once again. In the filter dialog, select the APIs that you want to plot on the chart.
+## Total requests of Cosmos DB by database names and collection names
+
+You want to identify which collection in which database of your Cosmos DB instance receives the most requests, so that you can adjust your Cosmos DB costs.
+
+![Segmented line chart of Total Requests](./media/metrics-charts/multiple-split-example.png)
+
+### How to configure this chart?
+
+In the metric picker, select your Cosmos DB resource and the **Total Requests** metric. Click **Apply splitting** and select dimensions **DatabaseName** and **CollectionName**.
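+
+The same split can be approximated from the CLI if you want the raw numbers instead of a chart. A sketch, assuming the `TotalRequests` metric name and that your CLI version supports the `--dimension` parameter of `az monitor metrics list`:
+
+```azcli
+# Total requests over the default time range, segmented by database and collection name
+az monitor metrics list --resource <cosmos-db-resource-id> --metric "TotalRequests" --interval PT1H --dimension DatabaseName CollectionName
+```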
+ ## Next steps * Learn about Azure Monitor [Workbooks](../visualize/workbooks-overview.md)
-* Learn more about [Metric Explorer](metrics-charts.md)
+* Learn more about [Metric Explorer](metrics-charts.md)
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
You can split a metric by dimension to visualize how different segments of the m
### Apply splitting 1. Above the chart, select **Apply splitting**.-
- > [!NOTE]
- > Charts that have multiple metrics can't use the splitting functionality. Also, although a chart can have multiple filters, it can have only one splitting dimension.
-
-1. Choose a dimension on which to segment your chart:
+
+1. Choose one or more dimensions on which to segment your chart:
![Screenshot that shows the selected dimension on which to segment the chart.](./media/metrics-charts/031.png)
You can split a metric by dimension to visualize how different segments of the m
![Screenshot that shows sort order on split values.](./media/metrics-charts/segment-dimension-sort.png)
-1. Click away from the grouping selector to close it.
+1. If you want to segment by multiple dimensions, select multiple dimensions from the values dropdown. The legend shows a comma-separated list of dimension values for each segment.
+
+ ![Screenshot that shows multiple segments selected, and the corresponding chart below.](./media/metrics-charts/segment-dimension-multiple.png)
+
+1. Click away from the grouping selector to close it.
> [!NOTE] > To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension.
azure-monitor Create Custom Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/create-custom-table.md
To create a custom table, you need:
``` For information about the `TimeGenerated` format, see [supported datetime formats](/azure/data-explorer/kusto/query/scalar-data-types/datetime#supported-formats).+ ## Create a custom table Azure tables have predefined schemas. To store log data in a different schema, use data collection rules to define how to collect, transform, and send the data to a custom table in your Log Analytics workspace.
To delete a table using PowerShell:
## Add or delete a custom column You can modify the schema of custom tables and add custom columns to, or delete columns from, a standard table. +
+> [!NOTE]
+> Column names must start with a letter and can consist of up to 45 alphanumeric characters and the characters `_` and `-`. The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`.
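+
+As a quick illustration of these naming rules, the sketch below adds a column to a hypothetical custom table with the CLI. The table and column names are placeholders, and it assumes the `--columns` parameter accepts `name=type` pairs.
+
+```azcli
+# Hypothetical example: add a string column named AdditionalContext to the custom table MyTable_CL
+# (Depending on CLI behavior, you may need to pass the full set of custom columns when updating.)
+az monitor log-analytics workspace table update --resource-group <resource-group> --workspace-name <workspace-name> --name MyTable_CL --columns AdditionalContext=string
+```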
+ # [Portal](#tab/azure-portal-3) To add a custom column to a table in your Log Analytics workspace, or delete a column:
azure-monitor Kql Machine Learning Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/kql-machine-learning-azure-monitor.md
Last updated 07/01/2022-
-# Customer intent: As a data analyst, I want to use the native machine learning capabilities of Azure Monitor Logs to gain insights from my log data without having to export data outside of Azure Monitor.
-
+# Customer intent: As a data analyst, I want to use the native machine learning capabilities of Azure Monitor Logs to gain insights from my log data without having to export data outside of Azure Monitor.
-# Tutorial: Detect and analyze anomalies using KQL machine learning capabilities in Azure Monitor
+# Tutorial: Detect and analyze anomalies using KQL machine learning capabilities in Azure Monitor
The Kusto Query Language (KQL) includes machine learning operators, functions and plugins for time series analysis, anomaly detection, forecasting, and root cause analysis. Use these KQL capabilities to perform advanced data analysis in Azure Monitor without the overhead of exporting data to external machine learning tools.
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
Last updated 06/27/2022
# Logs Ingestion API in Azure Monitor
-The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client. By using this API, you can send data from almost any source to [supported built-in tables](#supported-tables) or to custom tables that you create. You can even extend the schema of built-in tables with custom columns.
+The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client. By using this API, you can send data from almost any source to [supported Azure tables](#supported-tables) or to [custom tables that you create](../logs/create-custom-table.md#create-a-custom-table). You can even [extend the schema of Azure tables with custom columns](../logs/create-custom-table.md#add-or-delete-a-custom-column).
> [!NOTE] > The Logs Ingestion API was previously referred to as the custom logs API.
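
As a rough illustration of the call shape, the following sketch posts a single record to a hypothetical custom table through a data collection rule by using `az rest`. The placeholders, the stream name `Custom-MyTable_CL`, and the API version shown are assumptions rather than values from this article, and the caller needs the Monitoring Metrics Publisher role on the DCR.

```azcli
# Hypothetical example: send one record to the stream Custom-MyTable_CL defined in a DCR
az rest --method post \
  --resource "https://monitor.azure.com" \
  --headers "Content-Type=application/json" \
  --url "https://<data-collection-endpoint-uri>/dataCollectionRules/<dcr-immutable-id>/streams/Custom-MyTable_CL?api-version=2021-11-01-preview" \
  --body '[{"TimeGenerated": "2023-03-01T00:00:00Z", "RawData": "sample record"}]'
```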
You can modify the target table and workspace by modifying the DCR without any c
### Custom tables
-The Logs Ingestion API can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix.
+The Logs Ingestion API can send data to any custom table that you create and to certain Azure tables in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix.
-### Built-in tables
+### Azure tables
-The Logs Ingestion API can send data to the following built-in tables. Other tables may be added to this list as support for them is implemented. Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix.
+The Logs Ingestion API can send data to the following Azure tables. Other tables may be added to this list as support for them is implemented.
- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) - [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)
The Logs Ingestion API can send data to the following built-in tables. Other tab
- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent) > [!NOTE]
-> Column names can consist of alphanumeric characters and the characters `_` and `-`, and they must start with a letter.
+> Column names must start with a letter and can consist of up to 45 alphanumeric characters and the characters `_` and `-`. The following are reserved column names: `Type`, `TenantId`, `resource`, `resourceid`, `resourcename`, `resourcetype`, `subscriptionid`, `tenanted`. Custom columns you add to an Azure table must have the suffix `_CF`.
## Authentication
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
For Profiler to work properly, make sure:
:::image type="content" source="./media/profiler-troubleshooting/profiler-web-job-log.png" alt-text="Screenshot of the Continuous WebJob Details pane.":::
-If Profiler still isn't working for you, you can download the log and [send it to our team](mailto:serviceprofilerhelp@microsoft.com).
+If Profiler still isn't working for you, you can download the log and [submit an Azure support ticket](https://azure.microsoft.com/support/).
#### Check the Diagnostic Services site extension's status page
azure-netapp-files Configure Unix Permissions Change Ownership Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-unix-permissions-change-ownership-mode.md
na Previously updated : 04/13/2022 Last updated : 02/28/2023 # Configure Unix permissions and change ownership mode for NFS and dual-protocol volumes
The change ownership mode (**`Chown Mode`**) functionality enables you to set th
## Steps
-1. The Unix permissions and change ownership mode features are currently in preview. Before using these features for the first time, you need to register the features:
-
- 1. Register the **Unix permissions** feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFUnixPermissions
- ```
-
- 2. Register the **change ownership mode** feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFChownMode
- ```
-
- 3. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFUnixPermissions
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFChownMode
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
-
-2. You can specify the **Unix permissions** and change ownership mode (**`Chown Mode`**) settings under the **Protocol** tab when you [create an NFS volume](azure-netapp-files-create-volumes.md) or [create a dual-protocol volume](create-volumes-dual-protocol.md).
+1. You can specify the **Unix permissions** and change ownership mode (**`Chown Mode`**) settings under the **Protocol** tab when you [create an NFS volume](azure-netapp-files-create-volumes.md) or [create a dual-protocol volume](create-volumes-dual-protocol.md).
The following example shows the Create a Volume screen for an NFS volume. ![Screenshots that shows the Create a Volume screen for NFS.](../media/azure-netapp-files/unix-permissions-create-nfs-volume.png)
-3. For existing NFS or dual-protocol volumes, you can set or modify **Unix permissions** and **change ownership mode** as follows:
+2. For existing NFS or dual-protocol volumes, you can set or modify **Unix permissions** and **change ownership mode** as follows:
1. To modify Unix permissions, right-click the **volume**, and select **Edit**. In the Edit window that appears, specify a value for **Unix Permissions**. ![Screenshots that shows the Edit screen for Unix permissions.](../media/azure-netapp-files/unix-permissions-edit.png)
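
If you manage volumes with the Azure CLI instead of the portal, a comparable change can be sketched as follows. This assumes that your installed CLI version exposes the `--unix-permissions` parameter on `az netappfiles volume update`; the resource names are placeholders.

```azcli
# Set Unix permissions (octal) on an existing NFS volume
az netappfiles volume update --resource-group <resource-group> --account-name <netapp-account> --pool-name <capacity-pool> --name <volume-name> --unix-permissions 0755
```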
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 02/27/2023 Last updated : 02/28/2023 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## February 2023
+* The [Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md) features are now generally available (GA).
+
+ You no longer need to register the features before using them.
+
+* The `Vaults` API is deprecated starting with Azure NetApp Files REST API version 2022-09-01.
+
+ Enabling backup of volumes doesn't require the `Vaults` API. REST API users can use `PUT` and `PATCH` [Volumes API](/rest/api/netapp/volumes) to enable backup for a volume.
+
* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) (Preview) Azure NetApp Files volumes provide flexible, large and scalable storage shares for applications and users. Storage capacity and consumption by users is only limited by the size of the volume. In some scenarios, you may want to limit this storage consumption of users and groups within the volume. With Azure NetApp Files volume user and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all users) or individual group quotas.
azure-percept Audio Button Led Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/audio-button-led-behavior.md
- Title: Azure Percept Audio button and LED states
-description: Learn more about the button and LED states of Azure Percept Audio
---- Previously updated : 02/07/2023---
-# Azure Percept Audio button and LED states
--
-See the following guidance for information on the button and LED states of the Azure Percept Audio device.
-
-## Button behavior
-
-Use the buttons to control the behavior of the device.
-
-|Button State|Behavior|
-||-|
-|Mute|Press to mute/unmute the mic array. The button event is release-triggered when pressed.|
-|PTT/PTS|Press PTT to bypass the keyword spotting state and activate the command listening state. Press again to stop the agent's active dialogue and revert to the keyword spotting state. The button event is release-triggered when pressed. PTS only works when the button is pressed while the agent is speaking, not when the agent is listening or thinking.|
-
-## LED states
-
-Use the LED indicators to understand which state your device is in.
-
-|LED|LED State|Ear SoM Status|
-|||-|
-|L02|1x white, static on|Power on |
-|L02|1x white, 0.5 Hz flashing|Authentication in progress |
-|L01 & L02 & L03|3x blue, static on|Waiting for keyword|
-|L01 & L02 & L03|LED array flashing, 20 fps |Listening or speaking|
-|L01 & L02 & L03|LED array racing, 20 fps|Thinking|
-|L01 & L02 & L03|3x red, static on |Mute|
-
-## Understanding Ear SoM LED indicators
-You can use LED indicators to understand which state your device is in. It takes around 4-5 minutes for the device to power on and the module to fully initialize. As it goes through initialization steps, you'll see:
-
-1. Center white LED on (static): the device is powered on.
-1. Center white LED on (blinking): authentication is in progress.
-1. Center white LED on (static): the device is authenticated but the keyword isn't configured.
-1. All three LEDs will change to blue once a demo was deployed and the device is ready to use.
--
-## Troubleshooting LED issues
-- **If the center LED is solid white**, try [using a template to create a voice assistant](./tutorial-no-code-speech.md).-- **If the center LED is always blinking**, it indicates an authentication issue. Try these troubleshooting steps:
- 1. Make sure that your USB-A and micro USB connections are secured
- 1. Check to see if the [speech module is running](./troubleshoot-audio-accessory-speech-module.md#checking-runtime-status-of-the-speech-module)
- 1. Restart the device
- 1. [Collect logs](./troubleshoot-audio-accessory-speech-module.md#collecting-speech-module-logs) and attach them to a support request
- 1. Check to see if your dev kit is running the latest software and apply an update if available.
-
-## Next steps
-
-For troubleshooting tips for your Azure Percept Audio device, see this [guide](./troubleshoot-audio-accessory-speech-module.md).
azure-percept Azure Percept Audio Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-audio-datasheet.md
- Title: Azure Percept Audio datasheet
-description: Check out the Azure Percept Audio datasheet for detailed device specifications
---- Previously updated : 02/07/2023---
-# Azure Percept Audio datasheet
---
-|Product Specification |Value |
-|--|--|
-|Performance |180 Degrees Far-field at 4 m, 63 dB |
-|Target Industries |Hospitality <br> Healthcare <br> Smart Buildings <br> Automotive <br> Retail <br> Manufacturing |
-|Hero Scenarios |In-room Virtual Concierge <br> Vehicle Voice Assistant and Command/Control <br> Point of Sale Services and Quality Control <br> Warehouse Task Tracking|
-|Included in Box |1x Azure Percept Audio SoM <br> 1x Developer (Interposer) Board <br> 1x FPC Cable <br> 1x USB 2.0 Type A to Micro USB Cable <br> 1x Mechanical Plate|
-|External Dimensions |90 mm x170mm x 25 mm |
-|Product Weight |0.42 Kg |
-|Management Control Plane |Azure Device Update (ADU) |
-|Supported Software and Services |Customizable Keywords and Commands <br> Azure Speech SDK <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) |
-|Audio Codec |XMOS XUF208 Codec |
-|Sensors, Visual Indicators, and Components |4x MEM Sensing Microsystems Microphones (MSM261D3526Z1CM) <br> 2x Buttons <br> USB Hub <br> DAC <br> 3x LEDs <br> LED Driver |
-|Security Crypto-Controller |ST-Microelectronics STM32L462CE |
-|Ports |1x USB 2.0 Type Micro B <br> 3.5 mm Audio Out |
-|Certification |CE <br> ACMA <br> FCC <br> IC <br> VCCI <br> NRTL <br> CB |
-|Operating Temperature |0 degrees to 35 degrees C |
-|Non-Operating Temperature |-40 degrees to 85 degrees C |
-|Relative Humidity |10% to 95% |
azure-percept Azure Percept Devkit Container Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-container-release-notes.md
- Title: Azure Percept DK Container release notes
-description: Information of changes and fixes for Azure Percept DK Container releases.
---- Previously updated : 02/07/2023---
-# Azure Percept DK Container release notes
--
-This page provides information of changes and fixes for Azure Percept DK Container releases.
-
-To download the container updates, go to [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/main/overview), select Devices from the left navigation pane, choose the specific device, and then select Vision and Speech tabs to initiate container downloads.
-
-## January (2201) Release
--- Security update for HostIPModule and ImageCapturingModule.-
- > [!NOTE]
- > HostIPModule and ImageCapturingModule are for vision AI-related experiences and will be deployed after the devkit connects to your instance of Azure Percept Studio. If these modules have been deployed to your devkit, complete the following steps to update them. If these two modules have not been deployed to your devkit, you can ignore the following steps and the latest modules will be deployed through Azure Percept Studio when needed.
- > 1. Find your devkit in your instance of Azure Percept Studio. In the summary of the device, select **Open device in IoT Hub**.
- > 1. In the IoT Hub UI of your devkit:
- > 1. Check the **modules** list to make sure HostIPModule and ImageCapturingModule have already been deployed.
- > 1. Select **Set modules**.
- > 1. In the deployment list, select the trashcan icon to remove HostIPModule and ImageCapturingModule.
- > 1. Select **review + create**, and then select **create** to finish the deployment.
- > 1. In the IoT Hub UI of your devkit, wait a while, and then refresh until the two modules are no longer displayed on the **modules** list.
- > 1. In your instance of Azure Percept Studio, start and view your device stream. Starting the device stream triggers a new download of HostIPModule and ImageCapturingModule that includes the latest versions.
- >
-
-## December (2112) Release
--- Removed lines in the image frames using automatic image capture in Azure Percept Studio. This issue was introduced in the 2108 module release. -- Security fixes for docker services running as root in azureeyemodule (mcr.microsoft.com/azureedgedevices/azureeyemodule:2112-1), azureearspeechclientmodule, and webstreammodule. -
-## August (2108) Release
--- Azureyemodule (mcr.microsoft.com/azureedgedevices/azureeyemodule:2108-1)
- - Updated to Intel latest (May) drop for MyriadX Camera firmware update.
- - Enabled UVC (USB Video Class) camera as input source. Refer to the [Advanced Development github](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule#using-uvcusb-video-class-camera-as-input-source) on how to use UVC camera as input source.
- - Fixed module crash when using H.264 raw RTSP stream.
-
-## June (2106) Release
--- Azureyemodule
- - This release adds support for time-aligning the inferences of slow neural networks with the video streams. It will add latency into the video stream equal to approximately the latency of the neural network but will result in the inferences (bounding boxes for example) being drawn over the video in the appropriate locations.
- - To enable this feature, add `TimeAlignRTSP: true` to your module twin in the IoT Hub Azure portal.
-- Azureearspeechclientmodule
- - Integrated the [Speech 1.16 SDK](../cognitive-services/speech-service/devices-sdk-release-notes.md) in the Speech module, which includes fixes for speech token support and integrates EAR SOM as default supported device in low-level lib.
- - Fixed a PnP detecting issue while security MCU removed but codec not.
- - Addressed Speech service timeouts with PTT/PTS button functions.
- - Security fixes
- - Out of Bounds Read receiving TLS data in speech module.
- - MCU and Codec USB ports are re-exposed while doing the DFU.
- - Exceptions handling when parsing JSON.
- - Enabling all hardened compiler security flags.
- - Out of Bounds Read receiving TLS data in speech module.
- - Versioning SSL and libcurl dependencies and prevent vulnerable version.
- - Enforcement of HTTPS and Pin a TLS CA on curl connections.
-
-## April (2104) Release
--- Azureyemodule
- - Fixed IoT Hub message format to be UTF-8 encoded JSON (previously it was 64-bit encoded format).
- - Fixed bug with Custom Vision classifier (previously, the Custom Vision classifier models were not working properly - they were interpreting the wrong dimension of the output tensor as the class confidences, which led to always predicting a single class, regardless of confidence).
- - Updated H.264 to use TCP instead of UDP, for Azure video analyzer integration.
-
-## Next steps
--- [How to determine your update strategy](./how-to-determine-your-update-strategy.md)
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md
- Title: Azure Percept DK software release notes
-description: Information about changes made to the Azure Percept DK software.
---- Previously updated : 02/07/2023----
-# Azure Percept DK software release notes
--
-This page provides information of changes and fixes for each Azure Percept DK OS and firmware release.
-
-To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
-
-## June (2206) Release
--- Operating System
- - Latest security updates on OpenSSL, cifs-utils, zlib, cpio, Nginx, and Lua packages.
-
-## May (2205) Release
--- Operating System
- - Latest security updates on BIND, Node.js, Cyrus SASL, libxml2, and OpenSSL packages.
-
-## March (2203) Release
--- Operating System
- - Latest security fixes.
-
-## February (2202) Release
--- Operating System
- - Latest security updates on vim and expat packages.
-
-## January (2201) Release
--- Setup Experience
- - Fixed the compatibility issue with Windows 11 PC during OOBE setup.
-- Operating System
- - Latest security updates on vim package.
-
-## November (2111) Release
--- Operating System
- - Latest security fixes.
- - Disabled automatic package update.
- - Setup user permission for Azure Percept container to access USB device node.
-
-## September (2109) Release
--- Wi-Fi:
- - Use a fixed channel instead of automatic-channel selecting to avoid hostapd.service to constantly retry and restart.
-- Setup experience:
- - OOBE server system errors are localized.
- - Enable IPv6 multiple routing tables.
-- Operating System
- - Latest security fixes.
- - Nginx service run as a non-root user.
--
-## July (2107) Release
-
-> [!IMPORTANT]
-> Due to a code signing change OTA (Over-The-Air) package for this release is only compatible with Azure Percept DK running the 2106 release. For Azure Percept DK users who are currently running older SW release version, Microsoft recommends to perform an update over USB cable or perform an OTA update first to release 2106 before updating to 2107.
--- Wi-Fi:
- - Security hardening to ensure the Wi-Fi access point is shut down after setup completes.
- - Fixed an issue where pushing the **Setup** button on the dev kit could cause the dev kit's Wi-Fi access point to be out of sync with the setup experience web service.
- - Enhanced the Wi-Fi access point iptables rules to be more resilient and removed unnecessary rules.
- - Fixed an issue where multiple connected Wi-Fi networks wouldn't be properly prioritized.
-- Setup experience:
- - Added localization for supported regions and updated the text for better readability.
- - Fixed an issue where the setup experience would sometimes get stuck on a loading page.
-- General networking:
- - Resolved issues with IPv6 not obtaining a valid DHCP lease.
-- Operating system:
- - Security fixes.
-
-## June (2106) Release
--- Updated image verification mechanism for OTA agent.-- UI improvements and bug fixes to the setup experience.-
-## May (2105) Release
--- Security updates to CBL-Mariner OS.-
-## April (2104) Release
--- Fixed log rotation issue that may cause Azure Percept DK storage to get full.-- Enabled TPM based provisioning to Azure in the setup experience.-- Added an automatic timeout to the setup experience and Wi-Fi access point. After 30 minutes or after the setup experience completion.-- Wi-Fi access point SSID changed from "**scz-[xxx]**" to "**apd-[xxx]**".-
-## Next steps
--- [How to determine your update strategy](./how-to-determine-your-update-strategy.md)
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-dk-datasheet.md
- Title: Azure Percept DK datasheet
-description: Check out the Azure Percept DK datasheet for detailed device specifications
---- Previously updated : 02/07/2023---
-# Azure Percept DK datasheet
---
-|Product Specification |Value |
-|--|--|
-|Industrial Design |Integrated 80/20 1010 Series mounts |
-|Performance |0.7 TOPS (Azure Percept Vision)|
-|Target Industries |Retail <br> Manufacturing <br> Hospitality <br> Smart Buildings <br> Auto |
-|Hero Scenarios |Out-of-box to first AI frames in under 10 minutes <br> Out-of-box to prototype in under 30 minutes <br> Capture and label initial training data <br> Basic AI model customization <br> Advanced AI model customization |
-|Included in Box |1x Azure Percept DK Carrier Board <br> 1x [Azure Percept Vision](./azure-percept-vision-datasheet.md) <br> 1x RGB Sensor (Camera) <br> 1x USB 3.0 Type C Cable <br> 1x DC Power Cable <br> 1x AC/DC Converter <br> 2x Wi-Fi Antennas |
-|OS  |[CBL-Mariner](https://github.com/microsoft/CBL-Mariner) |
-|Management Control Plane |Azure Device Update (ADU) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) |
-|Supported Software and Services |Azure Device Update <br> [Azure IoT](https://azure.microsoft.com/overview/iot/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) and [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1) <br> [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) <br> [Azure Mariner OS with Connectivity](https://github.com/microsoft/CBL-Mariner) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [TensorFlow](https://www.tensorflow.org/) <br> [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) <br> IoT Plug and Play <br> [Azure Device Provisioning Service (DPS)](../iot-dps/index.yml) <br> [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) <br> [Power BI](https://powerbi.microsoft.com/) |
-|General Processor |NXP iMX8m (Azure Percept DK Carrier Board) |
-|AI Acceleration |1x Intel Movidius Myriad X Integrated ISP (Azure Percept Vision) |
-|Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50 cm - infinity<br>FoV: 120-degrees diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
-|Security |TPM 2.0 Nuvoton NCPT750 |
-|Connectivity |Wi-Fi and Bluetooth via Realtek RTL882CE single-chip controller |
-|Storage  |16 GB |
-|Memory  |4 GB |
-|Ports |1x Ethernet <br> 2x USB-A 3.0 <br> 1x USB-C |
-|Operating Temperature |0 degrees to 35 degrees C |
-|Non-Operating Temperature |-40 degrees to 85 degrees C |
-|Relative Humidity |10% to 95% |
-|Certification  |CE <br> ACMA <br> FCC <br> IC <br> NCC <br> VCCI + MIC <br> NRTL <br> CB |
-|Power Supply |19 VDC at 3.42A (65 W) |
azure-percept Azure Percept Vision Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-vision-datasheet.md
- Title: Azure Percept Vision datasheet
-description: Check out the Azure Percept Vision datasheet for detailed device specifications
---- Previously updated : 02/07/2023---
-# Azure Percept Vision datasheet
---
-Specifications listed below are for the Azure Percept Vision device, included in the [Azure Percept DK](./azure-percept-dk-datasheet.md).
-
-|Product Specification |Value |
-|--||
-|Target Industries |Manufacturing <br> Smart Buildings <br> Auto <br> Retail |
-|Hero Scenarios |Shopper analytics <br> On-shelf availability <br> Shrink reduction <br> Workplace monitoring|
-|Dimensions |42 mm x 42 mm x 40 mm (Azure Percept Vision SoM assembly with housing) <br> 42 mm x 42 mm x 6 mm (Vision SoM chip)|
-|Management Control Plane |Azure Device Update (ADU) |
-|Supported Software and Services |[Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [OpenVINO](https://docs.openvinotoolkit.org/latest/index.html) <br> Azure Device Update |
-|AI Acceleration |Intel Movidius Myriad X (MA2085) Vision Processing Unit (VPU) with Intel Camera ISP integrated, 0.7 TOPS |
-|Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50 cm - infinity<br>FoV: 120-degrees diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
-|Camera Support |RGB |
-|Security Crypto-Controller |ST-Micro STM32L462CE |
-|Versioning / ID Component |64 kb EEPROM |
-|Memory  |LPDDR4 2GB |
-|Power   |3.5 W |
-|Ports |1x USB 3.0 Type C <br> 2x MIPI 4 Lane (up to 1.5 Gbps per lane) |
-|Control Interfaces |2x I2C <br> 2x SPI <br> 6x PWM (GPIOs: 2x clock, 2x frame sync, 2x unused) <br> 2x spare GPIO |
-|Certification |CE <br> ACMA <br> FCC <br> IC <br> VCCI <br> NRTL <br> CB |
-|Operating Temperature    |0 degrees to 27 degrees C (Azure Percept Vision SoM assembly with housing) <br> -10 degrees to 70 degrees C (Vision SoM chip) |
-|Touch Temperature |<= 48 degrees C |
-|Relative Humidity   |8% to 90% |
azure-percept Azureeyemodule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azureeyemodule-overview.md
- Title: Azure Percept Vision AI module
-description: An overview of the azureeyemodule, which is the module responsible for running the AI vision workload on the Azure Percept DK.
---- Previously updated : 02/07/2023----
-# Azure Percept Vision AI module
--
-Azureeyemodule is the name of the edge module responsible for running the AI vision workload on the Azure Percept DK. It's part of the Azure IoT suite of edge modules and is deployed to the Azure Percept DK during the [setup experience](./quickstart-percept-dk-set-up.md). This article provides an overview of the module and its architecture.
-
-## Architecture
--
-The Azure Percept Workload on the Azure Percept DK is a C++ application that runs inside the azureeyemodule docker container. It uses OpenCV GAPI for image processing and model execution. Azureeyemodule runs on the Mariner operating system as part of the Azure IoT suite of modules that run on the Azure Percept DK.
-
-The Azure Percept Workload is meant to take in images and output images and messages. The output images may be marked up with drawings such as bounding boxes, segmentation masks, joints, labels, and so on. The output messages are a JSON stream of inference results that can be ingested and used by downstream tasks.
-The results are served up as an RTSP stream that is available on port 8554 of the device. The results are also shipped over to another module running on the device, which serves the RTSP stream wrapped in an HTTP server, running on port 3000. Either way, they'll be viewable only on the local network.
-
-> [!CAUTION]
-> There is *no* encryption or authentication with respect to the RTSP feeds. Anyone on the local network can view exactly what the Azure Percept Vision is seeing by typing in the correct address into a web browser or RTSP media player.
-
-The Azure Percept Workload enables several features that end users can take advantage of:
-- A no-code solution for common computer vision use cases, such as object classification and common object detection.-- An advanced solution, where a developer can bring their own (potentially cascaded) trained model to the device and run it, possibly passing results to another IoT module of their own creation running on the device.-- A retraining loop for grabbing images from the device periodically, retraining the model in the cloud, and then pushing the newly trained model back down to the device. Using the device's ability to update and swap models on the fly.-
-## AI workload details
-The Workload application is open-sourced in the Azure Percept Advanced Development [GitHub repository](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app) and is made up of many small C++ modules, with some of the more important being:
-- [main.cpp](https://github.com/microsoft/azure-percept-advanced-development/blob/main/azureeyemodule/app/main.cpp): Sets up everything and then runs the main loop.-- [iot](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/iot): This folder contains modules that handle incoming and outgoing messages from the Azure IoT Edge Hub, and the twin update method.-- [model](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/model): This folder contains modules for a class hierarchy of computer vision models.-- [kernels](https://github.com/microsoft/azure-percept-advanced-development/tree/main/azureeyemodule/app/kernels): This folder contains modules for G-API kernels, ops, and C++ wrapper functions.-
-Developers can build custom modules or customize the current azureeyemodule using this workload application.
-
-## Next steps
--- Now that you know more about the azureeyemodule and Azure Percept Workload, try using your own model or pipeline by following one of [these tutorials](https://github.com/microsoft/azure-percept-advanced-development/blob/main/tutorials/README.md)-- Or, try **transfer learning** using one of our ready-made [machine learning notebooks](https://github.com/microsoft/azure-percept-advanced-development/tree/main/machine-learning-notebooks)-
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/concept-security-configuration.md
- Title: Azure Percept security recommendations
-description: Learn more about Azure Percept firewall configuration and security recommendations
---- Previously updated : 02/07/2023---
-# Azure Percept security recommendations
--
-Review the guidelines below for information on configuring firewalls and general security best practices with Azure Percept.
-
-## Configuring firewalls for Azure Percept DK
-
-If your networking setup requires that you explicitly permit connections made from Azure Percept DK devices, review the following list of components.
-
-This checklist is a starting point for firewall rules:
-
-|URL (* = wildcard)|Outbound TCP Ports|Usage|
-|-|||
-|*.auth.azureperceptdk.azure.net|443|Azure DK SOM Authentication and Authorization|
-|*.auth.projectsantacruz.azure.net|443|Azure DK SOM Authentication and Authorization|
-
-Additionally, review the list of [connections used by Azure IoT Edge](../iot-edge/production-checklist.md#allow-connections-from-iot-edge-devices).
-
-## Additional recommendations for deployment to production
-
-Azure Percept DK offers a great variety of security capabilities out of the box. In addition to those powerful security features included in the current release, Microsoft also suggests the following guidelines when considering production deployments:
--- Strong physical protection of the device itself-- Ensure data-at-rest encryption is enabled-- Continuously monitor the device posture and quickly respond to alerts-- Limit the number of administrators who have access to the device-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about Azure Percept security](./overview-percept-security.md)
azure-percept Connect Over Cellular Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-gateway.md
- Title: Connect Azure Percept DK over 5G and LTE networks by using a gateway
-description: This article explains how to connect Azure Percept DK over 5G and LTE networks by using a cellular gateway.
---- Previously updated : 02/07/2023---
-# Connect Azure Percept DK over 5G and LTE networks by using a gateway
--
-A simple way to connect Azure Percept to the internet is to use a gateway that connects to the internet over 5G or LTE and provides Ethernet ports. In this case, Azure Percept isn't even aware that it's connected over 5G or LTE. It "knows" only that its Ethernet port has connectivity and it's routing all traffic through that port.
--
-## Overview of 5G and LTE gateway topology
-
-The following diagram shows how a 5G or LTE gateway can be easily paired with Azure Percept DK (development kit).
--
-## If you're connecting to a 5G or LTE gateway
-
-If you're connecting the Azure Percept DK to a 5G or LTE gateway, consider the following important points:
-- Set up the gateway first, and then validate that it's receiving a connection via the SIM. Following this order makes it easier to troubleshoot any issues you find when you connect Azure Percept DK.-- Make sure that both ends of the Ethernet cable are firmly connected to the gateway and Azure Percept DK.-- Follow the [default instructions](./how-to-connect-over-ethernet.md) for connecting Azure Percept DK over Ethernet.-- If your 5G or LTE plan has a quota, we recommend that you optimize for the amount of data that your Azure Percept DK models send to the cloud.-- Make sure that you have a [properly configured firewall](./concept-security-configuration.md) that blocks externally originated inbound traffic.-
-## If you're connecting to the dev kit via SSH protocol
-
-If you're using the Secure Shell (SSH) network protocol to connect with the dev kit via a 5G or LTE Ethernet gateway, use one of the following options:
-- **Use the dev kit's Wi-Fi access point**: If you have Wi-Fi disabled, you can re-enable it by rebooting your dev kit. From there, you can connect to the dev kit's Wi-Fi access point and follow the instructions in [Connect to Azure Percept DK over SSH](./how-to-ssh-into-percept-dk.md).-- **Use an Ethernet connection to a local area network (LAN)**: With this option, you unplug your dev kit from the 5G or LTE gateway and plug it into a LAN router. For more information, see [Connect to Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md). -- **Use the gateway's remote access features**: Many 5G and LTE gateways include remote access managers that can be used to connect to devices on the network via SSH. Check with the 5G or LTE gateway manufacturer to see whether it has this feature. For an example of a remote access manager, see [Cradlepoint 5G and LTE gateways](https://customer.cradlepoint.com/s/article/NCM-Remote-Connect-LAN-Manager).-- **Use the dev kit's serial port**: Azure Percept DK includes a serial connection port that can be used to connect directly to the device. For more information, see [Connect to Azure Percept DK over serial cable](./how-to-connect-to-percept-dk-over-serial.md).-
-## Next steps
-Depending on the cellular device you have access to, you can connect in one of two ways:
-
-* [Connect by using a USB modem](./connect-over-cellular-usb.md)
-* [Connect by using 5G or LTE](./connect-over-cellular.md)
azure-percept Connect Over Cellular Usb Multitech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-multitech.md
- Title: Connect Azure Percept DK over LTE by using a MultiTech MultiConnect USB modem
-description: This article explains how to connect Azure Percept DK over 5G or LTE networks by using a MultiTech MultiConnect USB modem.
---- Previously updated : 02/07/2023----
-# Connect Azure Percept DK over LTE by using a MultiTech MultiConnect USB modem
--
-This article discusses how to connect your Azure Percept DK by using a MultiTech MultiConnect (MTCM-LNA3-B03) USB modem.
-
-> [!Note]
-> The MultiTech MultiConnect USB modem comes in a variety of models. In this article, we used model LNA3, which works with Verizon and Vodafone SIM cards, among others. At this time, we're unable to connect to an AT&T network, but we're investigating the issue and will update this article if and when we find the root cause. For more information about the MultiTech MultiConnect USB modem, visit the [MultiTech](https://www.multitech.com/brands/multiconnect-microcell) site.
-
-## Prepare to connect Azure Percept DK
-To learn how to prepare Azure Percept DK, go to [Connect Azure Percept DK over 5G or LTE networks by using a USB modem](./connect-over-cellular-usb.md). Be sure to note the comments about the USB cables that should be used.
-
-### Prepare the modem
-Before you begin, your modem must be in Mobile Broadband Interface Model (MBIM) mode. To learn how to prepare the modem, see the [Telit wireless solutions Attention (AT) command reference guide](
-https://www.multitech.com/documents/publications/reference-guides/Telit_LE910-V2_Modules_AT_Commands_Reference_Guide_r5.pdf).
-
-In this article, to enable the MBIM interface, we use AT command `AT#USBCFG=<mode>` to configure the correct USB mode.
-
-The AT command reference guide lists all possible modes but, for this article, we're interested in mode `3`. The default mode is `0`.
-
-The easiest way to configure the mode is to connect the MultiTech modem to a PC and use terminal software such as TeraTerm or PuTTY. You can use Windows Device Manager to see which USB port is assigned for the modem. If there are several ports, you might need to test to see which one is responding to AT commands. The COM port settings should be:
-* **Baud rate**: 9600 (or 115200)
-* **Stop bits**: 1
-* **Parity**: None
-* **Byte size**: 8
-* **Flow control**: No control flow
-
-Here are the AT commands:
-
-To check to see which USB mode MultiTech device is currently running, use:
-
-```
-AT#USBCFG?
-```
-
-To change to mode 3, use:
-
-```
-AT#USBCFG=3
-```
-
-If you check again by using the first AT command, you should get:
-`#USBCFG: 3`
-
-After you've set the correct USB mode, you should issue a reset by using:
-
-```
-AT#REBOOT
-```
-
-At this point, the modem should disconnect and later reconnect to the USB port by using the previously set mode.
-
-## Use the modem to connect
-
-Make sure that you've completed the Azure Percept DK preparations outlined in the [Connect by using a USB modem](./connect-over-cellular-usb.md) article.
-
-1. Plug a SIM card into the MultiTech modem.
-
-1. Plug the MultiTech modem into the Azure Percept DK USB A port.
-
-1. Power up Azure Percept DK.
-
-1. Connect to Azure Percept DK by using the Secure Shell (SSH) network protocol.
-
-1. Ensure that ModemManager is running by writing the following command to your SSH prompt:
-
- ```
- systemctl status ModemManager
- ```
- If you're successful, you'll get a result that's similar to the following:
-
- *ModemManager.service - Modem Manager*
- *Loaded: loaded (/lib/systemd/system/ModemManager.service; enabled; vendor preset: enabled)*
- *Active: active (running) since Mon 2021-08-09 20:52:03 UTC; 23 s ago*
-
-1. List the active modems.
-
- To check to ensure that ModemManager can recognize your modem, run:
-
- ```
- mmcli --list-modems
- ```
-
- You should get a result that's similar to the following:
-
- ```
- /org/freedesktop/ModemManager1/Modem/0 [Telit] FIH7160
- ```
-
-1. Get the modem details.
-
- The modem ID here is `0`, but your result might differ. Modem ID (`--modem 0`) is used in the ModemManager commands like this:
-
- ```
- mmcli --modem 0
- ```
-
- By default, the modem is disabled (`Status -> state: disabled`).
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Modem/0
- | device id: f89a480d73f1a9cfef28102a0b44be2a47329c8b
- --
- Hardware | manufacturer: Telit
- | model: FIH7160
- | firmware revision: 20.00.525
- | h/w revision: XMM7160_V1.1_HWID437_MBIM_NAND
- | supported: gsm-umts, lte
- | current: gsm-umts, lte
- | equipment id: xxxx
- --
- System | device: /sys/devices/platform/soc@0/38200000.usb/xhci-hcd.1.auto/usb3/3-1/3-1.1
- | drivers: cdc_acm, cdc_mbim
- | plugin: telit
- | primary port: cdc-wdm0
- | ports: cdc-wdm0 (mbim), ttyACM1 (at), ttyACM2 (ignored),
- | ttyACM3 (ignored), ttyACM4 (at), ttyACM5 (ignored), ttyACM6 (ignored),
- | wwan0 (net)
- --
- Status | unlock retries: sim-pin2 (3)
- | state: disabled
- | power state: on
- | signal quality: 0% (cached)
- --
- Modes | supported: allowed: 3g; preferred: none
- | allowed: 4g; preferred: none
- | allowed: 3g, 4g; preferred: none
- | current: allowed: 3g, 4g; preferred: none
- --
- Bands | supported: utran-5, utran-2, eutran-2, eutran-4, eutran-5, eutran-12,
- | eutran-13, eutran-17
- | current: utran-2, eutran-2
- --
- IP | supported: ipv4, ipv6, ipv4v6
- --
- 3GPP | imei: xxxxxxxxxxxxxxx
- | enabled locks: fixed-dialing
- --
- 3GPP EPS | ue mode of operation: csps-2
- --
- SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
- ```
-
-1. Enable the modem.
-
- Before you establish a connection, turn on the modem's radio or radios by running the following code:
-
- ```
- mmcli --modem 0 --enable
- ```
-
- You should get a response like "successfully enabled the modem."
-
- After some time, the modem should be registered to a cell tower, and you should see a modem status of `Status -> state: registered` after you run the following code:
-
- ```
- mmcli --modem 0
- ```
-
-1. Connect by using the access point name (APN) information.
-
- Your cell phone provider provides an APN, such as the following APN for Verizon:
-
- ```
- mmcli --modem 0 --simple-connect="apn=vzwinternet"
- ```
-
- You should get a response like "successfully enabled the modem."
-
-1. Get the modem status.
-
- You should now see a status of `Status -> state: connected` and a new `Bearer` category at the end of the status message.
-
- ```
- mmcli --modem 0
- ```
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Modem/0
- | device id: f89a480d73f1a9cfef28102a0b44be2a47329c8b
- --
- Hardware | manufacturer: Telit
- | model: FIH7160
- | firmware revision: 20.00.525
- | h/w revision: XMM7160_V1.1_HWID437_MBIM_NAND
- | supported: gsm-umts, lte
- | current: gsm-umts, lte
- | equipment id: xxxx
- --
- System | device: /sys/devices/platform/soc@0/38200000.usb/xhci-hcd.1.auto/usb3/3-1/3-1.1
- | drivers: cdc_acm, cdc_mbim
- | plugin: telit
- | primary port: cdc-wdm0
- | ports: cdc-wdm0 (mbim), ttyACM1 (at), ttyACM2 (ignored),
- | ttyACM3 (ignored), ttyACM4 (at), ttyACM5 (ignored), ttyACM6 (ignored),
- | wwan0 (net)
- --
- Numbers | own: +1xxxxxxxx
- --
- Status | unlock retries: sim-pin2 (3)
- | state: connected
- | power state: on
- | access tech: lte
- | signal quality: 16% (recent)
- --
- Modes | supported: allowed: 3g; preferred: none
- | allowed: 4g; preferred: none
- | allowed: 3g, 4g; preferred: none
- | current: allowed: 3g, 4g; preferred: none
- --
- Bands | supported: utran-5, utran-2, eutran-2, eutran-4, eutran-5, eutran-12,
- | eutran-13, eutran-17
- | current: utran-2, eutran-2
- --
- IP | supported: ipv4, ipv6, ipv4v6
- --
- 3GPP | imei: xxxxxxxxxxxxxxx
- | enabled locks: fixed-dialing
- | operator id: 311480
- | operator name: Verizon
- | registration: home
- --
- 3GPP EPS | ue mode of operation: csps-2
- --
- SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
- --
- Bearer | paths: /org/freedesktop/ModemManager1/Bearer/0
- ```
-
-1. Get the bearer details.
-
- You need bearer details to connect the operating system to the packet data connection that the modem has now established with the cellular network. At this point, the modem has an IP connection, but the operating system is not yet configured to use it.
-
- ```
- mmcli --bearer 0
- ```
-
- The bearer details are listed in the following code:
-
- ```
-
- General | path: /org/freedesktop/ModemManager1/Bearer/0
- | type: default
-
- Status | connected: yes
- | suspended: no
- | interface: wwan0
- | ip timeout: 20
-
- Properties | apn: vzwinternet
- | roaming: allowed
-
- IPv4 configuration | method: static
- | address: 100.112.107.46
- | prefix: 24
- | gateway: 100.112.107.1
- | dns: 198.224.166.135, 198.224.167.135
-
- Statistics | duration: 119
- | attempts: 1
- | total-duration: 119
- ```
-
-1. Bring up the network interface.
-
- ```
- sudo ip link set dev wwan0 up
- ```
-
-1. Configure the network interface.
-
- Using the information provided by the bearer, assign the IP address to the interface. Replace the example address (100.112.107.46/24) with the address that your bearer reports:
-
- ```
- sudo ip address add 100.112.107.46/24 dev wwan0
- ```
-
-1. Check the IP information.
-
- The IP configuration for this interface should match the ModemManager bearer details. Run:
-
- ```
- sudo ip address show dev wwan0
- ```
-
- Your bearer IP is listed as shown here:
-
- ```
- 6: wwan0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1428 qdisc pfifo_fast state UNKNOWN group default qlen 1000
- link/ether 1e:fb:08:e9:2a:25 brd ff:ff:ff:ff:ff:ff
- inet 100.112.107.46/24 scope global wwan0
- valid_lft forever preferred_lft forever
- inet6 fe80::1cfb:8ff:fee9:2a25/64 scope link
- valid_lft forever preferred_lft forever
- ```
-
-1. Set the default route.
-
- Again using the information provided by the bearer, set the modem's gateway as the default destination for network packets. Replace the example gateway (100.112.107.1) with the one your bearer reports, and run:
-
- ```
- sudo ip route add default via 100.112.107.1 dev wwan0
- ```
-
- Azure Percept DK is now connected with the USB modem!
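-
- As an optional sanity check, you can confirm that the default route was added. This assumes the same `wwan0` interface name and the example gateway shown above; your values might differ:
-
- ```
- ip route show dev wwan0
- ```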
-
-1. Test the connectivity.
-
- In this article, you're executing a `ping` request through the `wwan0` interface. But you can also use Azure Percept Studio and check to see whether telemetry messages are arriving. Make sure that you're not using an Ethernet cable and that Wi-Fi isn't enabled so that you're using LTE. Run:
-
- ```
- ping -I wwan0 8.8.8.8
- ```
-
- You should get a result that's similar to the following:
-
- ```
- PING 8.8.8.8 (8.8.8.8) from 162.177.2.0 wwan0: 56(84) bytes of data.
- 64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=111 ms
- 64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=92.0 ms
- 64 bytes from 8.8.8.8: icmp_seq=3 ttl=114 time=88.8 ms
- ^C
- 8.8.8.8 ping statistics
- 3 packets transmitted, 3 received, 0% packet loss, time 4ms
- rtt min/avg/max/mdev = 88.779/97.254/110.964/9.787 ms
- ```
--
-## Debugging
-
-For general information about debugging, see [Connect by using a USB modem](./connect-over-cellular-usb.md).
-
-## Next steps
-
-Depending on the cellular device you have access to, you can connect in one of two ways:
-
-* [Connect by using a USB modem](./connect-over-cellular-usb.md)
-* [Connect by using 5G or LTE](./connect-over-cellular.md)
azure-percept Connect Over Cellular Usb Quectel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-quectel.md
- Title: Connect Azure Percept DK over 5G or LTE by using a Quectel RM500 5G modem
-description: This article explains how to connect Azure Percept DK over 5G or LTE networks by using a Quectel 5G modem.
---- Previously updated : 02/07/2023----
-# Connect Azure Percept DK over 5G or LTE by using a Quectel RM500-GL 5G modem
--
-This article discusses how to connect Azure Percept DK over 5G or LTE by using a Quectel RM500-GL 5G modem.
-
-For more information about this 5G modem dev kit, contact your Quectel local sales team:
-
-* For North American customers: northamerica-sales@quectel.com
-* For global customers: sales@quectel.com
-
-> [!Note]
-> **About USB cables for 5G modems**:
-> 5G modems require more power than LTE modems, and the wrong USB cable can be a bottleneck to realizing the best possible 5G data rates. To supply sufficient, consistent power to a 5G modem, make sure that the USB cable meets the following standards:
-> **Power:**
-> - Max amperage should be equal to or greater than 3 amperes.
-> - The cable length should be less than 1 meter.
-> - When you use a 5G modem, only one USB A port on Azure Percept DK should be active.
->
-> **Throughput:**
-> - USB 3.1 Gen2
-> - USB-IF certified
-
-## Prepare to connect Azure Percept DK
-To learn how to prepare Azure Percept DK, go to [Connect Azure Percept DK over 5G or LTE networks by using a USB modem](./connect-over-cellular-usb.md). Be sure to note the comments about the USB cables that should be used.
-
-### Prepare the modem
-Before you begin, your modem must be in Mobile Broadband Interface Model (MBIM) mode. An undocumented but standard Quectel Attention (AT) command can be used for this: `AT+QCFG="usbnet"`.
-
-The `usbnet` property can be set to four different values, from `0` to `3`:
-- `0` for **NDIS / PPP / QMI mode** (supported by `qmi_wwan` driver, enabled with `CONFIG_USB_NET_QMI_WWAN=y|m`)
-- `1` for **CDC Ethernet mode** (supported in Linux when `CONFIG_USB_NET_CDCETHER=y|m`)
-- `2` for **MBIM mode** (supported in Linux when `CONFIG_USB_NET_CDC_MBIM=y|m`)
-- `3` for **RNDIS mode**
-
-The easiest way to configure the mode is to connect the Quectel 5G modem to a PC and use terminal software, such as TeraTerm, or Quectel's own PC software, such as QCOM. You can use Windows Device Manager to see which USB port is assigned for the modem. The COM port settings should be:
-* **Baud rate**: 115200
-* **Stop bits**: 1
-* **Parity**: None
-* **Byte size**: 8
-* **Flow control**: No control flow
-
-Here are the AT commands:
-
-To check to see which USB mode Quectel device is currently running, use:
-
-```
-AT+QCFG="usbnet"
-```
-
-To change to mode 2, use:
-
-```
-AT+QCFG="usbnet",2
-```
-
-If you check again by using the first AT command, you should get:
-
-```
-+QCFG: "usbnet",2
-```
-
-After you've set the correct USB mode, issue a hardware reset by using:
-
-```
-AT+CFUN=1,1
-```
-
-At this point, the modem should disconnect and later reconnect to the USB port.
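-
-Once the modem reappears, you can optionally confirm that the kernel exposed an MBIM control port before moving on. This assumes the default `cdc-wdm` device naming shown in the modem details later in this article:
-
-```
-ls -l /dev/cdc-wdm*
-```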
--
-## Use the modem to connect
-
-1. Put a SIM card in the Quectel modem.
-
-1. Plug the Quectel modem into the Azure Percept DK USB port. Be sure to use a proper USB cable.
-
-1. Power up Azure Percept DK.
-
-1. Ensure that ModemManager is running.
-
- ```
- systemctl status ModemManager
- ```
-
- If you're successful, you'll get a result that's similar to the following:
-
- ```
- * ModemManager.service - Modem Manager
- Loaded: loaded (/lib/systemd/system/ModemManager.service; enabled; vendor preset: enabled)
- Active: active (running) since Mon 2021-08-09 20:52:03 UTC; 23s ago
- [...]
- ```
-
- If you're unsuccessful, make sure that you've flashed the correct image to Azure Percept DK (5G enabled).
-
-1. List the active modems.
-
- When you list the modems, you'll see that the Quectel modem has been recognized and is now handled by ModemManager.
-
- ```
- mmcli --list-modems
- ```
-
- You should get a result that's similar to the following:
-
- ```
- /org/freedesktop/ModemManager1/Modem/0 [Quectel] RM500Q-GL
- ```
-
- The modem ID here is `0`, which is used in the following commands to address it (that is, `--modem 0`).
-
-1. Get the modem details.
-
- By default, the modem is disabled (`Status -> state: disabled`). To view the status, run:
-
- ```
- mmcli --modem 0
- ```
-
- You should get a result that's similar to the following:
-
- ```
- General | path: /org/freedesktop/ModemManager1/Modem/0
- | device id: 8e3fb84e3755524d25dfa6f3f1943dc568958a2b
- --
- Hardware | manufacturer: Quectel
- | model: RM500Q-GL
- | firmware revision: RM500QGLABR11A04M4G
- | carrier config: CDMAless-Verizon
- | carrier config revision: 0A010126
- | h/w revision: RM500Q-GL
- | supported: gsm-umts, lte, 5gnr
- | current: gsm-umts, lte, 5gnr
- | equipment id: xxxx
- --
- System | device: /sys/devices/platform/soc@0/38200000.usb/xhci-hcd.1.auto/usb4/4-1/4-1.1
- | drivers: option, cdc_mbim
- | plugin: quectel
- | primary port: cdc-wdm0
- | ports: cdc-wdm0 (mbim), ttyUSB0 (qcdm), ttyUSB1 (gps),
- | ttyUSB2 (at), ttyUSB3 (at), wwan0 (net)
- --
- Numbers | own: +1xxxx
- --
- Status | unlock retries: sim-pin2 (3)
- | state: disabled
- | power state: on
- | signal quality: 0% (cached)
- --
- Modes | supported: allowed: 3g; preferred: none
- | allowed: 4g; preferred: none
- | allowed: 3g, 4g; preferred: 4g
- | allowed: 3g, 4g; preferred: 3g
- | allowed: 5g; preferred: none
- | allowed: 3g, 5g; preferred: 5g
- | allowed: 3g, 5g; preferred: 3g
- | allowed: 4g, 5g; preferred: 5g
- | allowed: 4g, 5g; preferred: 4g
- | allowed: 3g, 4g, 5g; preferred: 5g
- | allowed: 3g, 4g, 5g; preferred: 4g
- | allowed: 3g, 4g, 5g; preferred: 3g
- | current: allowed: 3g, 4g, 5g; preferred: 5g
- --
- Bands | supported: utran-1, utran-3, utran-4, utran-6, utran-5, utran-8,
- | utran-2, eutran-1, eutran-2, eutran-3, eutran-4, eutran-5, eutran-7,
- | eutran-8, eutran-12, eutran-13, eutran-14, eutran-17, eutran-18,
- | eutran-19, eutran-20, eutran-25, eutran-26, eutran-28, eutran-29,
- | eutran-30, eutran-32, eutran-34, eutran-38, eutran-39, eutran-40,
- | eutran-41, eutran-42, eutran-43, eutran-46, eutran-48, eutran-66,
- | eutran-71, utran-19
- | current: utran-1, utran-3, utran-4, utran-6, utran-5, utran-8,
- | utran-2, eutran-1, eutran-2, eutran-3, eutran-4, eutran-5, eutran-7,
- | eutran-8, eutran-12, eutran-13, eutran-14, eutran-17, eutran-18,
- | eutran-19, eutran-20, eutran-25, eutran-26, eutran-28, eutran-29,
- | eutran-30, eutran-32, eutran-34, eutran-38, eutran-39, eutran-40,
- | eutran-41, eutran-42, eutran-43, eutran-46, eutran-48, eutran-66,
- | eutran-71, utran-19
- --
- IP | supported: ipv4, ipv6, ipv4v6
- --
- 3GPP | imei: xxxxxxxxxxxxxxx
- | enabled locks: fixed-dialing
- --
- 3GPP EPS | ue mode of operation: csps-1
- | initial bearer apn: ims
- | initial bearer ip type: ipv4v6
- --
- SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
- ```
-
-1. Enable the modem.
-
- Prior to establishing a connection, turn on the modem's radio or radios by running:
-
- ```
- mmcli --modem 0 --enable
- ```
-
- You should get a response that's similar to the following:
-
- ```
- successfully enabled the modem
- ```
-
- After some time, the modem should be registered to a cell tower, and you should see a modem status of `Status -> state: registered` after you run the following code:
-
- ```
- mmcli --modem 0
- ```
-
-1. Connect by using the access point name (APN) information.
-
- Usually, modems provide the APN to use (see `3GPP EPS -> initial bearer APN` information), so you can use it to establish a connection. If the modem doesn't provide an APN, consult with your cell phone provider for the APN to use.
-
- Here is the ModemManager command for connecting by using, for example, the Verizon APN `APN=vzwinternet`.
-
- ```
- mmcli --modem 0 --simple-connect="apn=vzwinternet"
- ```
-
- Again, you should get a response that's similar to the following:
-
- ```
- successfully connected the modem
- ```
-
-1. Get the modem status.
-
- You should now see a status of `Status -> state: connected` and a new `Bearer` category at the end of the status message.
-
- ```
- mmcli -m 0
- ```
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Modem/0
- | device id: 8e3fb84e3755524d25dfa6f3f1943dc568958a2b
- --
- Hardware | manufacturer: Quectel
- | model: RM500Q-GL
- | firmware revision: RM500QGLABR11A04M4G
- | carrier config: CDMAless-Verizon
- | carrier config revision: 0A010126
- | h/w revision: RM500Q-GL
- | supported: gsm-umts, lte, 5gnr
- | current: gsm-umts, lte, 5gnr
- | equipment id: xxx
- --
- System | device: /sys/devices/platform/soc@0/38200000.usb/xhci-hcd.1.auto/usb4/4-1/4-1.1
- | drivers: option, cdc_mbim
- | plugin: quectel
- | primary port: cdc-wdm0
- | ports: cdc-wdm0 (mbim), ttyUSB0 (qcdm), ttyUSB1 (gps),
- | ttyUSB2 (at), ttyUSB3 (at), wwan0 (net)
- --
- Numbers | own: +1xxxx
- --
- Status | unlock retries: sim-pin2 (3)
- | state: connected
- | power state: on
- | access tech: lte
- | signal quality: 12% (recent)
- --
- Modes | supported: allowed: 3g; preferred: none
- | allowed: 4g; preferred: none
- | allowed: 3g, 4g; preferred: 4g
- | allowed: 3g, 4g; preferred: 3g
- | allowed: 5g; preferred: none
- | allowed: 3g, 5g; preferred: 5g
- | allowed: 3g, 5g; preferred: 3g
- | allowed: 4g, 5g; preferred: 5g
- | allowed: 4g, 5g; preferred: 4g
- | allowed: 3g, 4g, 5g; preferred: 5g
- | allowed: 3g, 4g, 5g; preferred: 4g
- | allowed: 3g, 4g, 5g; preferred: 3g
- | current: allowed: 3g, 4g, 5g; preferred: 5g
- --
- Bands | supported: utran-1, utran-3, utran-4, utran-6, utran-5, utran-8,
- | utran-2, eutran-1, eutran-2, eutran-3, eutran-4, eutran-5, eutran-7,
- | eutran-8, eutran-12, eutran-13, eutran-14, eutran-17, eutran-18,
- | eutran-19, eutran-20, eutran-25, eutran-26, eutran-28, eutran-29,
- | eutran-30, eutran-32, eutran-34, eutran-38, eutran-39, eutran-40,
- | eutran-41, eutran-42, eutran-43, eutran-46, eutran-48, eutran-66,
- | eutran-71, utran-19
- | current: utran-1, utran-3, utran-4, utran-6, utran-5, utran-8,
- | utran-2, eutran-1, eutran-2, eutran-3, eutran-4, eutran-5, eutran-7,
- | eutran-8, eutran-12, eutran-13, eutran-14, eutran-17, eutran-18,
- | eutran-19, eutran-20, eutran-25, eutran-26, eutran-28, eutran-29,
- | eutran-30, eutran-32, eutran-34, eutran-38, eutran-39, eutran-40,
- | eutran-41, eutran-42, eutran-43, eutran-46, eutran-48, eutran-66,
- | eutran-71, utran-19
- --
- IP | supported: ipv4, ipv6, ipv4v6
- --
- 3GPP | imei: xxxxxxxxxxxxxxx
- | enabled locks: fixed-dialing
- | operator id: 311480
- | operator name: Verizon
- | registration: home
- | pco: 0: (partial) '27058000FF0100'
-
- --
- 3GPP EPS | ue mode of operation: csps-1
- | initial bearer path: /org/freedesktop/ModemManager1/Bearer/0
- | initial bearer apn: ims
- | initial bearer ip type: ipv4v6
- --
- SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
- --
- Bearer | paths: /org/freedesktop/ModemManager1/Bearer/1
-
- ```
-
-1. Get the bearer details.
-
- The bearer resulting from the preceding step, `--simple-connect`, is at path `/org/freedesktop/ModemManager1/Bearer/1`.
-
- This is the bearer that we're querying for modem information about the active connection. The initial bearer isn't attached to an active connection and, therefore, holds no IP information.
-
- ```
- mmcli --bearer 1
- ```
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Bearer/1
- | type: default
- --
- Status | connected: yes
- | suspended: no
- | interface: wwan0
- | ip timeout: 20
- --
- Properties | apn: fast.t-mobile.com
- | roaming: allowed
- --
- IPv4 configuration | method: static
- | address: 25.21.113.165
- | prefix: 30
- | gateway: 25.21.113.166
- | dns: 10.177.0.34, 10.177.0.210
- | mtu: 1500
- --
- Statistics | attempts: 1
- ```
-
- Here are descriptions of some key details:
- - `Status -> interface: wwan0`: Lists which Linux network interface matches this modem.
- - `IPv4 configuration`: Provides the IP configuration for the preceding interface to set for it to be usable.
-
-1. Check the status of the modem network interface.
-
- By default, the network interface displays `DOWN`.
-
- ```
- ip link show dev wwan0
- ```
-
- You should get a result that's similar to the following:
-
- ```
- 4: wwan0: <BROADCAST,MULTICAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
- link/ether ce:92:c2:b8:1e:f2 brd ff:ff:ff:ff:ff:ff
- ```
-
-1. Bring up the interface.
-
- ```
- sudo ip link set dev wwan0 up
- ```
-
-1. Check the IP information.
-
- By default, the interface displays `UP,LOWER_UP`, with no IP information.
-
- ```
- sudo ip address show dev wwan0
- ```
-
- ```
- 4: wwan0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
- link/ether ce:92:c2:b8:1e:f2 brd ff:ff:ff:ff:ff:ff
- inet6 fe80::cc92:c2ff:feb8:1ef2/64 scope link
- valid_lft forever preferred_lft forever
- ```
-
-1. Issue a DHCP request.
-
- This step is specific to, but not limited to, the Quectel module. The IP information is usually set on the interface manually or through a network manager daemon that supports ModemManager (for example, NetworkManager), but here you can simply run the DHCP client (`dhclient`) on the Quectel modem's interface:
-
- ```
- sudo dhclient wwan0
- ```
-
-1. Check the IP information.
-
- The IP configuration for this interface should match the ModemManager bearer details.
-
- ```
- sudo ip address show dev wwan0
- ```
-
- You should get a result that's similar to the following:
-
- ```
- 4: wwan0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
- link/ether ce:92:c2:b8:1e:f2 brd ff:ff:ff:ff:ff:ff
- inet 25.21.113.165/30 brd 25.21.113.167 scope global wwan0
- valid_lft forever preferred_lft forever
- inet6 fe80::cc92:c2ff:feb8:1ef2/64 scope link
- valid_lft forever preferred_lft forever
- ```
-
-1. Check the interface routes.
-
- Notice that the DHCP client also set a default route for packets to go through the `wwan0` interface.
-
- ```
- ip route show dev wwan0
- ```
-
- You should get a result that's similar to the following:
-
- ```
- default via 25.21.113.166
- 25.21.113.164/30 proto kernel scope link src 25.21.113.165
- ```
-
- You've now established a connection to Azure Percept DK by using the Quectel modem!
--
-1. Test connectivity.
-
- Execute a `ping` request through the `wwan0` interface.
-
- ```
- ping -I wwan0 8.8.8.8
- ```
-
- You should get a result that's similar to the following:
-
- ```
- PING 8.8.8.8 (8.8.8.8) from 25.21.113.165 wwan0: 56(84) bytes of data.
- 64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=137 ms
- 64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=114 ms
- ^C
- 8.8.8.8 ping statistics
- 2 packets transmitted, 2 received, 0% packet loss, time 2ms
- rtt min/avg/max/mdev = 113.899/125.530/137.162/11.636 ms
- ```
-
-## Debugging
-
-For general information about debugging, see [Connect by using a USB modem](./connect-over-cellular-usb.md).
-
-## Next steps
-
-Depending on the cellular device you have access to, you can connect in one of two ways:
-
-* [Connect by using a USB modem](./connect-over-cellular-usb.md)
-* [Connect by using 5G or LTE](./connect-over-cellular.md)
azure-percept Connect Over Cellular Usb Vodafone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb-vodafone.md
- Title: Connect Azure Percept DK over 5G and LTE by using a Vodafone USB modem
-description: This article explains how to connect Azure Percept DK over 5G and LTE networks by using a Vodafone USB modem.
---- Previously updated : 02/07/2023----
-# Connect Azure Percept DK over 5G and LTE by using a Vodafone USB Connect 4G v2 modem
--
-This article discusses how to connect Azure Percept DK by using a Vodafone USB Connect 4G v2 modem.
-
-For more information about this modem, go to the [Vodafone Integrated Terminals](https://www.vodafone.com/business/iot/iot-devices/integrated-terminals) page.
-
-## Use the modem to connect
-
-Before you begin, make sure that you've prepared Azure Percept DK for [connecting by using a USB modem](./connect-over-cellular-usb.md). No preparation for the USB modem itself is required.
-
-1. Plug a SIM card into the Vodafone modem.
-
-1. Plug the Vodafone modem into the Azure Percept USB A port.
-
-1. Power up Azure Percept DK.
-
-1. Connect to Azure Percept DK by using the Secure Shell (SSH) network protocol.
-
-1. Ensure that ModemManager is running by writing the following command to your SSH prompt:
-
- ```
- systemctl status ModemManager
- ```
-
- If you're successful, you'll get a result that's similar to the following:
-
- ```
- ModemManager.service - Modem Manager
- Loaded: loaded (/lib/systemd/system/ModemManager.service; enabled; vendor preset: enabled)
- Active: active (running) since Mon 2021-08-09 20:52:03 UTC; 23 s ago
- ```
-
-1. List the active modems.
-
- To check to ensure that ModemManager can recognize your modem, run:
-
- ```
- mmcli --list-modems
- ```
-
- You should get a result that's similar to the following. Here the modem ID is `0`, but your result might differ.
-
- ```
- /org/freedesktop/ModemManager1/Modem/0 [Alcatel] Mobilebroadband
- ```
-
-1. Get the modem details.
-
- To get the modem status details, run the following command (where modem ID is `0`).
-
- ```
- mmcli --modem 0
- ```
-
- By default, the modem is disabled (`Status -> state: disabled`).
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Modem/0
- | device id: 20a6021958444bcb6f6589b47fd264932c340e69
- --
- Hardware | manufacturer: Alcatel
- | model: Mobilebroadband
- | firmware revision: MPSS.JO.2.0.2.c1.7-00004-9607_
- | carrier config: default
- | h/w revision: 0
- | supported: gsm-umts, lte
- | current: gsm-umts, lte
- | equipment id: xxx
- --
- System | device: /sys/devices/platform/soc@0/38200000.usb/xhci-hcd.1.auto/usb3/3-1/3-1.2
- | drivers: option, cdc_mbim
- | plugin: generic
- | primary port: cdc-wdm0
- | ports: cdc-wdm0 (mbim), ttyUSB0 (at), ttyUSB1 (qcdm),
- | ttyUSB2 (at), wwan0 (net)
- --
- Status | unlock retries: sim-pin2 (3)
- | state: disabled
- | power state: on
- | signal quality: 0% (cached)
- --
- Modes | supported: allowed: 2g; preferred: none
- | allowed: 3g; preferred: none
- | allowed: 4g; preferred: none
- | allowed: 2g, 3g; preferred: 3g
- | allowed: 2g, 3g; preferred: 2g
- | allowed: 2g, 4g; preferred: 4g
- | allowed: 2g, 4g; preferred: 2g
- | allowed: 3g, 4g; preferred: 4g
- | allowed: 3g, 4g; preferred: 3g
- | allowed: 2g, 3g, 4g; preferred: 4g
- | allowed: 2g, 3g, 4g; preferred: 3g
- | allowed: 2g, 3g, 4g; preferred: 2g
- | current: allowed: 2g, 3g, 4g; preferred: 2g
- --
- Bands | supported: egsm, dcs, pcs, g850, utran-4, utran-5, utran-2, eutran-2,
- | eutran-4, eutran-5, eutran-7, eutran-12, eutran-13, eutran-71
- | current: egsm, dcs, pcs, g850, utran-4, utran-5, utran-2, eutran-2,
- | eutran-4, eutran-5, eutran-7, eutran-12, eutran-13, eutran-71
- --
- IP | supported: ipv4, ipv6, ipv4v6
- --
- 3GPP | imei: xxxxxxxxxxxxxxx
- | enabled locks: fixed-dialing
- --
- 3GPP EPS | ue mode of operation: csps-2
- --
- SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
- ```
-
- We recommend that you start with the default setting:
-
- `Modes: current: allowed: 2g, 3g, 4g; preferred: 2g`.
-
- If you're not already using this setting, run:
-
- `mmcli --modem 0 --set-allowed-modes='2g|3g|4g' --set-preferred-mode='2g'`.
-
-1. Enable the modem.
-
- Before you establish a connection, turn on the modem's radio or radios by running the following code:
-
- ```
- mmcli --modem 0 --enable
- ```
-
- You should get a response like "successfully enabled the modem."
-
- After some time, the modem should be registered to a cell tower, and you should see a modem status of `Status -> state: registered` after you run the following code:
-
- ```
- mmcli --modem 0
- ```
-
-1. Connect by using the access point name (APN) information.
-
- Your mobile operator provides an APN. For example, the following command uses the Vodafone APN `internet4gd.gdsp`:
-
- ```
- mmcli --modem 0 --simple-connect="apn=internet4gd.gdsp"
- ```
-
- You should get a response like "successfully connected the modem."
-
-1. Get the modem status.
-
- You should now see a status of `Status -> state: connected` and a new `Bearer` category at the end of the status message.
-
- ```
- mmcli --modem 0
- ```
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Modem/0
- | device id: 20a6021958444bcb6f6589b47fd264932c340e69
- --
- Hardware | manufacturer: Alcatel
- | model: Mobilebroadband
- | firmware revision: MPSS.JO.2.0.2.c1.7-00004-9607_
- | carrier config: default
- | h/w revision: 0
- | supported: gsm-umts, lte
- | current: gsm-umts, lte
- | equipment id: xxx
- --
- System | device: /sys/devices/platform/soc@0/38200000.usb/xhci-hcd.1.auto/usb3/3-1/3-1.2
- | drivers: option, cdc_mbim
- | plugin: generic
- | primary port: cdc-wdm0
- | ports: cdc-wdm0 (mbim), ttyUSB0 (at), ttyUSB1 (qcdm),
- | ttyUSB2 (at), wwan0 (net)
- --
- Numbers | own: xxx
- --
- Status | unlock retries: sim-pin2 (10)
- | state: connected
- | power state: on
- | access tech: lte
- | signal quality: 19% (recent)
- --
- Modes | supported: allowed: 2g; preferred: none
- | allowed: 3g; preferred: none
- | allowed: 4g; preferred: none
- | allowed: 2g, 3g; preferred: 3g
- | allowed: 2g, 3g; preferred: 2g
- | allowed: 2g, 4g; preferred: 4g
- | allowed: 2g, 4g; preferred: 2g
- | allowed: 3g, 4g; preferred: 4g
- | allowed: 3g, 4g; preferred: 3g
- | allowed: 2g, 3g, 4g; preferred: 4g
- | allowed: 2g, 3g, 4g; preferred: 3g
- | allowed: 2g, 3g, 4g; preferred: 2g
- | current: allowed: 2g, 3g, 4g; preferred: 2g
- --
- Bands | supported: egsm, dcs, pcs, g850, utran-4, utran-5, utran-2, eutran-2,
- | eutran-4, eutran-5, eutran-7, eutran-12, eutran-13, eutran-71
- | current: egsm, dcs, pcs, g850, utran-4, utran-5, utran-2, eutran-2,
- | eutran-4, eutran-5, eutran-7, eutran-12, eutran-13, eutran-71
- --
- IP | supported: ipv4, ipv6, ipv4v6
- --
- 3GPP | imei: xxxxxxxxxxxxxxx
- | enabled locks: fixed-dialing
- | operator id: 302220
- | operator name: TELUS
- | registration: roaming
- --
- 3GPP EPS | ue mode of operation: csps-2
- --
- SIM | primary sim path: /org/freedesktop/ModemManager1/SIM/0
- --
- Bearer | paths: /org/freedesktop/ModemManager1/Bearer/0
- ```
-
-1. Get the bearer details.
-
- You need the bearer details to connect the operating system to the packet data connection that the modem has now established with the cellular network. At this point, the modem has an IP connection, but the operating system is not yet configured to use it.
-
- ```
- mmcli --bearer 0
- ```
-
- The bearer details are listed in the following code:
-
- ```
- --
- General | path: /org/freedesktop/ModemManager1/Bearer/0
- | type: default
- --
- Status | connected: yes
- | suspended: no
- | interface: wwan0
- | ip timeout: 20
- --
- Properties | apn: internet4gd.gdsp
- | roaming: allowed
- --
- IPv4 configuration | method: static
- | address: 162.177.2.0
- | prefix: 22
- | gateway: 162.177.2.1
- | dns: 10.177.0.34, 10.177.0.210
- | mtu: 1500
- --
- Statistics | attempts: 1
- ```
-
-1. Bring up the network interface.
-
- ```
- sudo ip link set dev wwan0 up
- ```
-
-1. Configure the network interface.
-
- Using the information provided by the bearer, assign the IP address to the interface. Replace the example address (162.177.2.0/22) with the address that your bearer reports:
-
- ```
- sudo ip address add 162.177.2.0/22 dev wwan0
- ```
-
-1. Check the IP information.
-
- The IP configuration for this interface should match the ModemManager bearer details. Run:
-
- ```
- sudo ip address show dev wwan0
- ```
-
- Your bearer IP is listed as shown here:
-
- ```
- wwan0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
- link/ether c2:12:44:c4:27:3c brd ff:ff:ff:ff:ff:ff
- inet 162.177.2.0/22 scope global wwan0
- valid_lft forever preferred_lft forever
- inet6 fe80::c012:44ff:fec4:273c/64 scope link
- valid_lft forever preferred_lft forever
- ```
-
-1. Set the default route.
-
- Again using the information provided by the bearer, set the modem's gateway as the default destination for network packets. Replace the example gateway (162.177.2.1) with the one your bearer reports, and run:
-
- ```
- sudo ip route add default via 162.177.2.1 dev wwan0
- ```
-
- Azure Percept DK is now enabled to connect to Azure by using the LTE modem.
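-
- The bearer details also list DNS servers. The ping test below uses an IP address, so DNS isn't required, but if you need name resolution over this interface you can add a bearer DNS server manually. This is a minimal sketch using the first DNS address shown above; if your image manages `/etc/resolv.conf` through a network daemon, configure DNS there instead:
-
- ```
- echo "nameserver 10.177.0.34" | sudo tee -a /etc/resolv.conf
- ```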
--
-1. Test the connectivity.
-
- In this article, you're executing a `ping` request through the `wwan0` interface. But you can also use Azure Percept Studio and check to see whether telemetry messages are arriving. Make sure that you're not using an Ethernet cable and that Wi-Fi isn't enabled so that you're using LTE. Run:
-
- ```
- ping -I wwan0 8.8.8.8
- ```
-
- You should get a result that's similar to the following:
-
- ```
- PING 8.8.8.8 (8.8.8.8) from 162.177.2.0 wwan0: 56(84) bytes of data.
- 64 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=111 ms
- 64 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=92.0 ms
- 64 bytes from 8.8.8.8: icmp_seq=3 ttl=114 time=88.8 ms
- ^C
- 8.8.8.8 ping statistics
- 3 packets transmitted, 3 received, 0% packet loss, time 4ms
- rtt min/avg/max/mdev = 88.779/97.254/110.964/9.787 ms
- ```
-
-## Debugging
-
-For general information about debugging, see [Connect by using a USB modem](./connect-over-cellular-usb.md).
-
-### Vodafone modem rules to mitigate enumeration issues
-
-To prevent the modem from enumerating in a non-supported mode, we suggest that you apply the following userspace/dev (udev) rules to have ModemManager ignore unwanted interfaces.
-
-Create a */usr/lib/udev/rules.d/77-mm-vodafone-port-types.rules* file with the following content:
-
-```
-ACTION!="add|change|move|bind", GOTO="mm_vodafone_port_types_end"
-SUBSYSTEMS=="usb", ATTRS{idVendor}=="1bbb", GOTO="mm_vodafone_generic_vendorcheck"
-GOTO="mm_vodafone_port_types_end"
-
-LABEL="mm_vodafone_generic_vendorcheck"
-SUBSYSTEMS=="usb", ATTRS{bInterfaceNumber}=="?*", ENV{.MM_USBIFNUM}="$attr{bInterfaceNumber}"
-
-# Interface 1 is QDCM (ignored) and interfaces 3 and 4 are MBIM Control and Data.
-ATTRS{idVendor}=="1bbb", ATTRS{idProduct}=="00b6", ENV{.MM_USBIFNUM}=="00", ENV{ID_MM_PORT_TYPE_AT_PRIMARY}="1"
-ATTRS{idVendor}=="1bbb", ATTRS{idProduct}=="00b6", ENV{.MM_USBIFNUM}=="01", ENV{ID_MM_PORT_IGNORE}="1"
-ATTRS{idVendor}=="1bbb", ATTRS{idProduct}=="00b6", ENV{.MM_USBIFNUM}=="02", ENV{ID_MM_PORT_AT_SECONDARY}="1"
-
-GOTO="mm_vodafone_port_types_end"
-
-LABEL="mm_vodafone_port_types_end"
-```
-
-After the rules are installed, reload the udev rules and restart ModemManager:
-
-```
-udevadm control -R
-systemctl restart ModemManager
-```
-
-## Next steps
-
-Depending on the cellular device you have access to, you can connect in one of two ways:
-
-* [Connect by using a USB modem](./connect-over-cellular-usb.md)
-* [Connect by using 5G or LTE](./connect-over-cellular.md)
azure-percept Connect Over Cellular Usb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular-usb.md
- Title: Connect Azure Percept DK over 5G and LTE networks by using a USB modem
-description: This article explains how to connect Azure Percept DK over 5G and LTE networks by using a USB modem.
---- Previously updated : 02/07/2023----
-# Connect Azure Percept DK over 5G and LTE networks by using a USB modem
--
-This article discusses how to connect Azure Percept DK to 5G or LTE networks by using a USB modem.
-
-> [!NOTE]
-> The information in this article applies only to special Azure Percept DK software that you can download according to the instructions in the next section. A special Azure Percept DK image includes ModemManager open-source software, which supports a wide variety of USB modems. The image doesn't support over-the-air (OTA) updates to the operating system or other software. With ModemManager open-source software, you can use a simple, cost-efficient LTE USB modem or more sophisticated 5G modems to connect Azure Percept DK to the internet and Azure.
->
-> The instructions in this article are intended to be used with USB modems that support a Mobile Broadband Interface Model (MBIM) interface. Before you obtain a USB modem, make sure that it supports the MBIM interface. Also make sure that it's listed in the ModemManager list of supported modems. ModemManager software can be used with other interfaces, but in this article we focus on the MBIM interface. For more information, go to the [freedesktop.org ModemManager](https://www.freedesktop.org/wiki/Software/ModemManager/) page.
---
-## Set up Azure Percept DK to use a USB modem
-
-1. [Download the Azure Percept 5G software image](https://aka.ms/azpercept5gimage) that supports ModemManager.
-
- These three files are needed to update your Azure Percept DK software to support USB modems.
-
-1. [Update the Azure Percept DK software](./how-to-update-via-usb.md) with the special 5G/LTE-enabled Azure Percept DK software that you downloaded in the preceding step.
-
- > [!IMPORTANT]
- > Follow the instructions in [Update Azure Percept DK over a USB-C connection](./how-to-update-via-usb.md), but be sure to use *only* the files that you downloaded in the preceding step, and not the files that are mentioned in the article.
-
-1. Follow the normal process to [set up the Azure Percept DK device](./quickstart-percept-dk-set-up.md), if it's unfamiliar to you.
-
- The setup experience is not different on this ModemManager-enabled version of Azure Percept DK.
-
-1. [Connect to Azure Percept DK by using the Secure Shell (SSH) network protocol](./how-to-ssh-into-percept-dk.md).
-
-## Connect to a modem
-
-The next three sections have instructions for connecting to various USB modems.
-
-### Vodafone USB Connect 4G v2 modem
-
-This Vodafone USB modem is a simple LTE CAT-4 USB dongle that has no special features. The instructions for this modem can be used for other similar, simple, cost-efficient USB modems.
--
-For instructions for connecting your Azure Percept DK by using a simple USB modem such as the Vodafone USB Connect 4G v2, see [Connect by using the Vodafone Connect 4G v2 USB modem](./connect-over-cellular-usb-vodafone.md).
-
-### MultiTech Multiconnect USB modem
-
-This MultiTech USB modem offers several USB modes of operation. For this type of modem, you first have to enable the proper USB mode before you enable the MBIM interface that ModemManager supports.
--
-To connect Azure Percept DK by using a simple USB modem such as the MultiTech USB modem (MTCM-LNA3-B03), follow the instructions in [Connect by using the MultiTech USB modem](./connect-over-cellular-usb-multitech.md).
-
-### Quectel 5G developer kit
-
-The third modem is the Quectel 5G DK. It also offers several modes, and you have to enable the proper MBIM mode first.
--
-For instructions for connecting your Azure Percept DK by using a 5G USB modem such as Quectel RM500Q-GL, see [Connect by using Quectel 5G Developer Kit](./connect-over-cellular-usb-quectel.md).
-
-## Help your 5G or LTE connection recover from reboot
-You can configure the USB modem to connect to the network, but if you reboot your device, you have to reconnect manually. We're currently working on a solution to improve this experience. For more information, contact [our support team](mailto:azpercept5G@microsoft.com) with a short note referencing this issue.
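-
-Until then, a minimal reconnect sketch looks like the following. It only reruns the manual steps from the modem-specific articles; the modem index, APN, and IP details are placeholders that you must replace with your own values:
-
-```
-#!/bin/sh
-# Hypothetical reconnect helper; not part of the Azure Percept image.
-# Rerun the manual connection steps after a reboot.
-mmcli --modem 0 --enable
-mmcli --modem 0 --simple-connect="apn=<your-apn>"
-mmcli --modem 0    # read the new bearer path and details
-# Then reapply the bearer's IP configuration, for example:
-# sudo ip link set dev wwan0 up
-# sudo ip address add <bearer-address>/<prefix> dev wwan0
-# sudo ip route add default via <bearer-gateway> dev wwan0
-```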
-
-## Debugging information
-Make sure that your SIM card works on the specific hardware that you intend to use. Several carriers limit data-only IoT SIM cards to a single device, so confirm that your device's International Mobile Equipment Identity (IMEI) or serial number is listed on the carrier's allowed device list for the SIM card.
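-
-For example, a quick way to read the IMEI (assuming modem index `0`, as in the modem-specific articles) is to filter the modem status output:
-
-```
-mmcli --modem 0 | grep -i imei
-```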
-
-### ModemManager debug mode
-
-You can enable the ModemManager debug mode by editing the */lib/systemd/system/ModemManager.service* file at the `ExecStart=/usr/sbin/ModemManager [...]` line by appending `--debug`, as shown in the following example:
-
-```
-[...]
-ExecStart=/usr/sbin/ModemManager [...] --debug
-[...]
-```
-
-For your changes to take effect, reload the services and restart ModemManager, as shown here:
-
-```
-systemctl daemon-reload
-systemctl restart ModemManager
-```
-
-By running the following commands, you can view the logs and clean the log files:
-
-```
-journalctl -u ModemManager.service
-journalctl --rotate
-journalctl --vacuum-time=1s
-
-```
-
-### Enhance reliability and stability
-
-To prevent ModemManager from interacting with non-modem serial interfaces, you can restrict the interfaces to be probed (to determine which are modems) by changing the [filter policies](https://www.freedesktop.org/software/ModemManager/api/latest/ch03s02.html).
-
-We recommend that you use the `STRICT` mode.
-
-To do so, edit the */lib/systemd/system/ModemManager.service* file at the `ExecStart=/usr/sbin/ModemManager [...]` line by appending `--filter-policy=STRICT`, as shown in the following example:
-
-```
-[...]
-ExecStart=/usr/sbin/ModemManager --filter-policy=STRICT
-[...]
-```
-For your changes to take effect, reload the services and restart ModemManager, as shown here:
-
-```
-systemctl daemon-reload
-systemctl restart ModemManager
-```
-
-## Next steps
-
-* [Connect by using 5G or LTE](./connect-over-cellular.md)
-* [Connect by using a cellular gateway](./connect-over-cellular-gateway.md)
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/connect-over-cellular.md
- Title: Connect Azure Percept over 5G or LTE networks
-description: This article explains how to connect the Azure Percept DK over 5G or LTE networks.
---- Previously updated : 02/07/2023----
-# Connect Azure Percept over 5G or LTE networks
--
-The benefits of connecting Edge AI devices over 5G/LTE networks are many. Scenarios where Edge AI is most effective are in places where Wi-Fi and LAN connectivity are limited, such as smart cities, autonomous vehicles, and agriculture. Additionally, 5G/LTE networks provide better security than Wi-Fi. Lastly, using IoT devices that run AI at the Edge provides a way to optimize the bandwidth on 5G/LTE networks. Only the necessary information is sent to the cloud while most of the data is processed on the device. Today, Azure Percept DK even supports direct connection to 5G/LTE networks using a simple USB modem. The following sections describe the different options.
-
-## Options for connecting Azure Percept DK over 5G or LTE networks
-With additional hardware, you can connect the Azure Percept DK over 5G/LTE. Three options are supported today; each option includes a link to more details:
-- **USB 5G/LTE modem device** - We have released a new software image that includes the open-source ModemManager software, which adds USB modem support to our Linux operating system. This lets you connect your Azure Percept over LTE or 5G networks by using various, often inexpensive, USB modems. For more information, see [Connecting using USB modem](./connect-over-cellular-usb.md).
-- **5G/LTE Ethernet gateway device** - Azure Percept is connected to the 5G/LTE gateway over Ethernet. For more information, see [Connecting using cellular gateway](./connect-over-cellular-gateway.md).
-- **5G/LTE Wi-Fi hotspot device** - Azure Percept is connected to the Wi-Fi network that the hotspot provides. In this case, the dev kit connects to the network like any other Wi-Fi network. For instructions, follow the [Azure Percept DK Setup Guide](./quickstart-percept-dk-set-up.md) and select the 5G/LTE Wi-Fi network broadcast from the hotspot.
--
-## Considerations when selecting a 5G or LTE device in general
-5G/LTE devices, whether they're USB modems or Ethernet gateways, support different technologies that affect the maximum download and upload data rates. The advertised data rates provide guidance for decision making but are rarely reached in the real world. Here is some guidance for selecting the right device for your needs.
-
-- **LTE CAT-1** provides up to 10 Mbps down and 5 Mbps up. It's enough for default Azure Percept DK features such as object detection and creating a voice assistant. However, it may not be enough for solutions that require streaming video up to the cloud.
-- **LTE CAT-3 and 4** provide up to 100 Mbps down and 50 Mbps up, which is enough for streaming video to the cloud. However, it isn't enough to stream full HD-quality video.
-- **LTE CAT-5 and higher** provide data rates high enough for streaming HD video from a single device. If you need to connect multiple devices to a single gateway, consider 5G.
-- **5G** gateways best position your scenarios for the future. They have the data rates and bandwidth to support high data throughput for multiple devices at a time. Additionally, they provide lower latency for data transfer.
--
-## Next steps
-Depending on what cellular device you might have access to, follow these links to connect your Azure Percept dev kit:
-
-[Connect using cellular gateway](./connect-over-cellular-gateway.md).
-
-[Connect using USB modem](./connect-over-cellular-usb.md).
azure-percept Create And Deploy Manually Azure Precept Devkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-and-deploy-manually-azure-precept-devkit.md
- Title: How to do a manual default container deployment to Azure Percept DK
-description: This article shows how to manually create and deploy a default container deployment to the Azure Percept DK
---- Previously updated : 02/07/2023-----
-# How to do a manual default container deployment to Azure Percept DK
--
-The following guide helps customers manually deploy a factory-fresh IoT Edge deployment to existing Azure Percept devices. We've also included the steps to manually create your Azure Percept IoT Edge device instance.
-
-## Prerequisites
-
-- Highly recommended: Update your Azure Percept DK to the [latest version](./software-releases-usb-cable-updates.md)
-- Create an Azure account with an IoT Hub
-- Install [VSCode](https://code.visualstudio.com/Download)
-- Install the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) Extension for VSCode
-- Find the software image version running on your Azure Percept Devkit (see below)
-
-## Identify your Azure Percept DK software version
-
-Using SSH (Secure Shell), run the command line below. Write down the output for later reference.
-
-`cat /etc/adu-version`
-
-Example output: 2021.111.124.109
-
-## Create an Azure IoT Edge device for the Azure Percept DK
-If you already have an IoT Edge device instance created in your subscription for the devkit, you can skip to the [Manually deploy the deployment.json to the Azure Percept DK](#manually-deploy-the-deploymentjson-to-the-azure-percept-dk) section.
-1. Go to [Azure portal](https://portal.azure.com) and select the **IoT Hub** where you'll create the device
-2. Navigate to **IoT Edge** and select **Add an IoT Edge device**
-3. On the **Create a Device** screen, name your device in the **Device ID** section and leave all other fields as default, then select **Save**
-
-1. Select your newly created device
-
-2. Copy the **Primary Connection String**. You'll use this copied text in the Azure Percept onboarding/setup web pages
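-
- If you prefer the command line, the following is a sketch of equivalent Azure CLI steps. It assumes the `azure-iot` CLI extension is installed, and the hub and device names are placeholders:
-
- ```
- az extension add --name azure-iot
- az iot hub device-identity create --hub-name <your-iot-hub> --device-id <your-device-id> --edge-enabled
- az iot hub device-identity connection-string show --hub-name <your-iot-hub> --device-id <your-device-id>
- ```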
--
-## Connect to and set up the Azure Percept DK
-<!-- Introduction paragraph -->
-1. Set up your devkit using the main instructions and **STOP** at the **Select your preferred configuration** page
-1. Select **Connect to an existing device**
-1. Paste the **Primary Connection String** that you copied from the earlier steps
-2. Select Finish
-3. The **Device set up complete!** page should now display
- **If this page doesn't disappear after 10 seconds, don't worry. Just go ahead with the next steps.**
-4. You'll be disconnected from the devkit's Wi-Fi hotspot. Reconnect your computer to your main Wi-Fi network (if needed)
--
-## Manually deploy the deployment.json to the Azure Percept DK
-
-The deployment.json files are a representation of all default modules necessary to begin using the Azure Percept DK.
-1. Download the appropriate deployment.json from [GitHub](https://github.com/microsoft/azure-percept-advanced-development/tree/main/default-configuration) for your reported software version. Refer to the [Identify your Azure Percept DK software version](#identify-your-azure-percept-dk-software-version) section above.
- 1. For 2021.111.124.xxx and later, use [default-deployment-2112.json](https://github.com/microsoft/azure-percept-advanced-development/blob/main/default-configuration/default-deployment-2112.json)
- 2. For 2021.109.129.xxx and lower, use [default-deployment-2108.json](https://github.com/microsoft/azure-percept-advanced-development/blob/main/default-configuration/default-deployment-2108.json)
-2. Launch VSCode and Sign into Azure. Be sure you've installed the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) Extension.
-
- ![Sign into Azure in VScode.](./media/manually-deploy-azure-precept-devkit-images/azure-sign-in.png)
-
-3. Connect to your subscription and select your IoT Hub
-4. Locate your IoT Edge device, right-click it, and choose **Create deployment for a Single Device**.
-
- ![find edge device.](./media/manually-deploy-azure-precept-devkit-images/iot-edge-device.png) ![create deployment for edge device](./media/manually-deploy-azure-precept-devkit-images/create-deployment.png)
-
-5. Navigate to the "Deployment.json" you saved from step 1 and select it. Then select OK.
-6. Deployment will take 1-5 minutes to fully complete
- 1. If you are interested in watching the Azure IoT Edge log while the deployment is going on, you can SSH into your Azure Percept DK and watch the Azure IoT Edge logs by issuing the command below.
- `sudo journalctl -u iotedge -f`
-7. Your Azure Percept DK is now ready to use!
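-
- Optionally, to confirm on the device itself that the modules are running, connect over SSH and list the IoT Edge modules:
-
- ```
- sudo iotedge list
- ```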
--
-## Next steps
-Navigate to the [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview) for more AI models.
-
azure-percept Create People Counting Solution With Azure Percept Devkit Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-people-counting-solution-with-azure-percept-devkit-vision.md
- Title: Create a people counting solution with Azure Percept Vision
-description: This guide will focus on detecting and counting people using the Azure Percept DK hardware, Azure IoT Hub, Azure Stream Analytics, and Power BI dashboard.
---- Previously updated : 02/07/2023-----
-# Create a people counting solution with Azure Percept Vision
--
-This guide will focus on detecting and counting people using the Azure Percept DK hardware, Azure IoT Hub, Azure Stream Analytics, and Power BI dashboard.
-
-The tutorial is intended to show detailed steps on how users can create, configure, and implement the basic components of this solution. Users can easily expand the tutorial and create additional ways to visualize people counting data.
-
-Top customer scenarios:
-- People counting intelligence: aggregation of people counting over a given day, week, or duration.
-- Occupancy: determine when a space is free and available for use. Quantify how long the space is idle and unused.
-- Understanding peak occupancy levels and when they occur.
-- Detecting people counting after hours: count of people in the space during non-business hours.
-
-In this tutorial, you learn how to:
-
-- Set up your Azure Percept DK and Vision AI model
-- Create a Container Registry resource
-- Build and push your edge solution to Container Registry
-- Deploy edge solution to device
-- Add a consumer group to your IoT Hub
-- Create a Stream Analytics Job
-- Create and publish a Power BI report to visualize data
-
-## Solution architecture
-[ ![Solution Architecture](./media/create-people-counting-solution-with-azure-percept-vision-images/solution-architecture-mini.png) ](./media/create-people-counting-solution-with-azure-percept-vision-images/solution-architecture.png#lightbox)
-- Input: Video stream from Azure Percept DK
-
-- Output: Count of people in Power BI dashboard
-
-[ ![Power BI](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-mini.png) ](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi.png#lightbox)
-
-- Azure Subscription: ([Free trial account](https://azure.microsoft.com/free/))
-- Power BI subscription: ([Try Power BI for free](https://go.microsoft.com/fwlink/?LinkId=874445&clcid=0x409&cmpid=pbi-gett-hero-try-powerbifree))
-- Power BI workspace: ([Create the new workspaces in Power BI](https://github.com/MicrosoftDocs/powerbi-docs/blob/main/powerbi-docs/collaborate-share/service-create-the-new-workspaces.md))
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-- Download and install [VS Code]()
-- Download and install [Git]()
-- Install the IoT Hub Extension in VS Code
-- Install the Azure IoT Tools Extension in VS Code
-- Download and install [Docker Desktop]() (Will require a PC reboot)
-- (Only for Windows Users) Install WSL2 by running the following commands in Windows PowerShell or Terminal (on macOS) (Will require a PC restart)
-
- `wsl --install`
-
- `wsl --set-default-version 2`
--
-## Step 0: Set up your Azure Percept DK and Vision AI model
-Setting up the Azure Percept DK is the first step in the tutorial. Below are the steps to follow and links to further guidance.
-
-1. Follow [Quickstart: unbox and assemble your Azure Percept DK components](./quickstart-percept-dk-unboxing.md) and the next steps.
-2. Connect the camera module to the Azure Percept DK via the USB-C cable.
-3. Open Command Prompt (for Windows) or Terminal (on macOS) and execute the command-
-
- `git clone https://github.com/microsoft/Azure-Percept-Reference-Solutions.git`
-
- Within the cloned repository, go to the `people-counting-with-azure-percept-vision` directory.
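-
- For example, assuming you cloned the repository into your current directory:
-
- ```
- cd Azure-Percept-Reference-Solutions/people-counting-with-azure-percept-vision
- ```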
--
-## Step 1: Create a Container Registry resource
-Azure Container Registry is a managed, private Docker registry service based on the open-source Docker Registry. Container registries are used to manage and store your private Docker container images and related artifacts.
-
-1. Sign in to the Azure portal at https://portal.azure.com/
-2. To create a Container Registry, go to [Create container registry - Microsoft Azure](https://portal.azure.com/#create/Microsoft.ContainerRegistry)
-
- a. Select your Azure Subscription in the `Subscription` drop-down box
-
- b. Select your preferred resource group from the `Resource group` drop-down menu. We recommend using the resource group that contains the IoT Hub connected to the Azure Percept DK.
-
- c. Provide a unique `Registry Name`
-
- d. Under `Location`, select the region in which to deploy the resource (we suggest `West US`)
-
- e. `Availability Zones` - disabled
-
- f. For `SKU`, select `Standard`
-
- g. Keep all other tabs at their defaults and click `Review + create` at the bottom of the screen. Once validation passes, click `Create` to create your Container Registry.
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/container-registry.png" alt-text="Container Registry Creation.":::
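-
- If you prefer the command line, a rough Azure CLI equivalent of the registry creation above is shown here; the resource group and registry names are placeholders:
-
- ```
- az acr create --resource-group <your-resource-group> --name <yourregistryname> --sku Standard --admin-enabled true
- az acr credential show --name <yourregistryname>
- ```
-
- The second command returns the admin username and passwords that the next steps refer to.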
-3. After the resource is deployed, go to your container registry resource. On the left panel, select `Access Keys` under `Settings`, and enable the `Admin user`
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/access-keys.png" alt-text="Container Registry access key setting.":::
-4. Make a note of the `Login Server`, `Username`, and `password`
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/access-keys-1.png" alt-text="Container Registry login.":::
-5. Go to the `people-counting-with-azure-percept-vision` directory in the cloned repository and rename `envtemplate` to `.env`. Open the file and fill in the following details (a completed example is shown after the screenshot)-
-
- a. CONTAINER_REGISTRY_USERNAME= your container registry Username
-
- b. CONTAINER_REGISTRY_PASSWORD= your container registry Password
-
- c. CONTAINER_REGISTRY_LOGINSERVER= your container registry Login Server
-
- ![Environment](./media/create-people-counting-solution-with-azure-percept-vision-images/env.png)
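-
- As a sketch, a completed `.env` file looks like the following. The values are placeholders; use the `Login Server`, `Username`, and `password` you noted from your own registry:
-
- ```
- CONTAINER_REGISTRY_USERNAME=<your-registry-username>
- CONTAINER_REGISTRY_PASSWORD=<your-registry-password>
- CONTAINER_REGISTRY_LOGINSERVER=<your-registry-name>.azurecr.io
- ```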
--
-## Step 2: Build and push your edge solution to Container Registry
-This section guides users on modifying the cloned people counting repo with their individual deployment information, building the model image, and pushing model image to container registry.
-
-1. Open VS Code. At the bottom of the screen, ensure that `arm64v8` is selected as the `Default Platform for IoT Edge Solution` (if not, click it and select arm64v8 from the list)
-
- ![select arm64v8](./media/create-people-counting-solution-with-azure-percept-vision-images/vscode-arm64v8.png)
-
-2. Within the `people-counting-with-azure-percept-vision` directory, go to the `modules/CountModule/` directory and open `module.json`. Fill in your `Container registry address` (same as the `Login Server` saved earlier) followed by a `repository name` **(Note: make sure your repository name is all lowercase)**
-
- `"repository": "<Your container registry login server/repository name>"`
-
- will change as follows, for example-
-
- `"repository": "visiontrainingacr.azurecr.io/countmodule"`
-
- ![example of count module](./media/create-people-counting-solution-with-azure-percept-vision-images/count-module.png)
-
-3. Now you will build the module image and push it to your container registry. Open Visual Studio Code integrated terminal by selecting `View > Terminal `
-
-4. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry by using the following command in the terminal. Note that this command gives a warning that using --password or -p via the CLI is insecure. If you want a more secure login for your future solution development, use `--password-stdin` instead by following [this instruction](https://docs.docker.com/engine/reference/commandline/login/).
-
- `docker login -u <ACR username> -p <ACR password> <ACR login server>`
-
-5. Visual Studio Code now has access to your container registry. In the next steps you will turn the solution code into a container image. In Visual Studio Code explorer, right click the `deployment.template.json` file and select `Build and Push IoT Edge Solution`
-
- ![Build and Push IoT Edge Solution](./media/create-people-counting-solution-with-azure-percept-vision-images/build-and-push.png)
-
- The build and push command starts three operations. First, it creates a new folder in the solution called `config` that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate docker file for your target architecture. Then, it runs `docker push` to push the image repository to your container registry. This process may take several minutes the first time but is faster the next time that you run the commands.
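-
- Conceptually, this is similar to running `docker build` and `docker push` yourself. The following is only an illustrative sketch; the actual Dockerfile path, image name, and tag come from your `module.json` and the target architecture:
-
- ```
- docker build -f modules/CountModule/Dockerfile.arm64v8 -t <your-registry>.azurecr.io/countmodule:0.0.1-arm64v8 modules/CountModule
- docker push <your-registry>.azurecr.io/countmodule:0.0.1-arm64v8
- ```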
-
-6. Open the `deployment.arm64v8.json` file in the newly created config folder. The filename reflects the target architecture, so it will be different if you choose a different architecture.
-
-7. Notice that the two parameters that had placeholders now are filled in with their proper values. The `registryCredentials` section has your registry username and password pulled from the .env file. The `CountModule` has the full image repository with the `name`, `version`, and `architecture` tag from the `module.json` file.
-
-8. To further verify what the build and push command did, go to the Azure portal, and navigate to your container registry. In your container registry, select `Repositories` then `countmodule`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/azure-container-registry.png" alt-text="Select repositories.":::
--
-## Step 3: Deploy edge solution to device
-Step 3 guides users on creating and deploying a manifest to the Azure Percept DK. This deployment creates a new edge module 'CountModule' and overwrites any previous deployments of 'CountModule'.
-
-1. In the Visual Studio Code explorer, under the `Azure IoT Hub` section, expand `Devices` to see your list of IoT devices
-
-2. Right-click the IoT Edge device that you want to deploy to, then select `Create Deployment for Single Device`
-
- ![Create Deployment for Single Device](./media/create-people-counting-solution-with-azure-percept-vision-images/deployment.png)
-
-3. In the file explorer, navigate into the `config` folder then select the `deployment.arm64v8.json` file and click `Select Edge Deployment Manifest`.
-
- **Do not use the deployment.template.json file, which does not have the container registry credentials or module image values in it.**
-
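-   If you prefer the command line, the same manifest can also be deployed with the Azure CLI; this sketch assumes the `azure-iot` CLI extension is installed, and the hub and device names are placeholders.
-
-   ```bash
-   # Apply the generated deployment manifest to a single IoT Edge device
-   az iot edge set-modules \
-     --hub-name <your-iot-hub-name> \
-     --device-id <your-percept-device-id> \
-     --content ./config/deployment.arm64v8.json
-   ```
-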
-4. Under your device, expand `Modules` to see a list of deployed and running modules. Click the refresh button. You should see the `CountModule` running on your device.
-
- ![view the count module](./media/create-people-counting-solution-with-azure-percept-vision-images/module-run.png)
-
-5. Go to [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/devices) and on the left panel, select Devices, then select your Azure Percept device
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/devices.png" alt-text="Select devices.":::
-
-6. Ensure that your device is `Connected`. Click on `Vision`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/vision.png" alt-text="check for device connected.":::
-
-7. Click `View your device stream `
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/device-stream.png" alt-text="View your device stream.":::
-
-8. The previous step will deploy modules to your device. In the `Notifications` tab, click `View Stream`. This will open a new tab in your browser; verify that you see the video stream. If you point the camera module at a person, you will see the person detection with a bounding box
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream.png" alt-text="Verify the video stream.":::
-
-9. After verifying the video stream and bounding boxes, please close the web stream browser tab.
-
-10. To ensure the Count Module is set up correctly, go to your IoT Hub in the Azure portal. On the left panel, under `Device management`, select `IoT Edge`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/iot-edge.png" alt-text="Select IoT edge.":::
--
-11. From the IoT device list click on your Azure Percept DK device
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/device.png" alt-text="Azure Percept DK device.":::
-
-12. Scroll down to check if all deployed modules are in `running` status
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/running.png" alt-text="Check the running status.":::
-
-13. Click `Troubleshoot`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/troubleshoot.png" alt-text="Choose troubleshoot.":::
-
-14. From the drop-down list select `CountModule`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/dropdown.png" alt-text="View count module.":::
-
-15. Ensure you see `People_Count` logs as follows-
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/logs.png" alt-text="Check the box.":::
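-
-You can also check the module from the dev kit itself over an SSH session; a minimal sketch:
-
-```bash
-# On the dev kit (over SSH): confirm the module is running, then tail its recent logs
-sudo iotedge list
-sudo iotedge logs CountModule --tail 50
-```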
--
-## Step 4: Add a consumer group to your IoT Hub
-Consumer Groups provide independent views into the event stream that enable apps and Azure services to independently consume data. This consumer group will be used by the Stream Analytics Job we will create in Step 5.
-
-1. In the [Azure portal](https://portal.azure.com), go to your IoT hub which is connected to your Azure Percept DK.
-
-2. On the left pane, select `Hub settings > Built-in endpoints`. Enter a name for your new consumer group in the text box under `Consumer Groups`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/consumer-group.png" alt-text="New consumer group.":::
-
-3. Click anywhere outside the text box to save the consumer group
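-
-If you prefer the Azure CLI, the same consumer group can be created with a command like the following; the hub and consumer group names are placeholders.
-
-```bash
-# Create a consumer group on the IoT hub's built-in Event Hub-compatible endpoint
-az iot hub consumer-group create \
-  --hub-name <your-iot-hub-name> \
-  --name <your-consumer-group-name>
-```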
-
-## Step 5: Create a Stream Analytics job
-Step 5 guides users through creating, configuring, and running a Stream Analytics job. Stream Analytics is a hot path to stream data from our Azure IoT Hub to a Power BI workspace in real time. We will create a query so only People Counting telemetry will be streamed. Once the People Counting data is in our Power BI workspace, it will be easy to render with a Power BI report.
-
-1. Go to New [Stream Analytics job - Microsoft Azure](https://portal.azure.com/#create/Microsoft.StreamAnalyticsJob)
-
-2. Enter the following information for the job -
-
- - `Job name` - The name of the job. The name must be globally unique.
-
- - `Resource group` - Use the same resource group that your IoT hub uses.
-
- - `Location` - Use the same location as your resource group.
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-job.png" alt-text="New Stream Analytics job.":::
-
-3. Click `Create`
-
-### Add an input to the Stream Analytics job
-1. Open the previously created Stream Analytics job. Under `Job topology`, select `Inputs`
-
-2. In the `Inputs` pane, select `Add stream input`, then select `IoT Hub` from the drop-down list.
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-input.png" alt-text="Add an input.":::
-
-3. On the new input pane, enter the following information -
-
- - `Input alias` - Enter a unique alias for the input
-
- - `Select IoT Hub from your subscription` - Select this radio button
- - `Subscription` - Select the Azure subscription you are using for this lab
- - `IoT Hub` - Select the IoT Hub you are using for this lab
- - `Consumer group` - Select the consumer group you created previously
- - `Shared access policy name` - Select the name of the shared access policy you want the Stream Analytics job to use for your IoT hub. For this lab, you can select service
- - `Shared access policy key` - This field is auto filled based on your selection for the shared access policy name
- - `Endpoint` - Select Messaging
-
- Leave all other fields as default-
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-input-fields.png" alt-text="Example input fields.":::
-
-4. Click `Save`
-
-### Add an output to the Stream Analytics job
-
-1. Create a Group Workspace. Take the following steps to create one -
-
- a. In a new web browser tab open [Power BI](https://msit.powerbi.com/home)
-
- b. On the left panel click on `Workspaces > Create a workspace`
-
- c. Give your workspace a name and description (optional) and click `Save `
-
- d. Go back to the Azure portal and go to the Stream Analytics job
-
-2. Under `Job topology`, select `Outputs`
-3. In the `Outputs` pane, select `Add`, and then select `Power BI` from the drop-down list
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-output.png" alt-text="Add an output.":::
-
-4. Enter the following information-
-
- - `Output alias` - A unique alias for the output
-
- - `Select Group workspace from your subscriptions` - Select this radio button
- - `Group workspace` - Select your target group workspace
- - `Dataset name` - Enter a dataset name
- - `Table name` - Enter a table name
- - `Authentication mode` - User token
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-output-fields.png" alt-text="Power BI new output fields.":::
-
-5. On the `Power BI - New output` pane, select `Authorize` and follow the prompts to sign into your Power BI account
-
-6. Click `Save `
-
-### Configure the query of the Stream Analytics job
-1. Under `Job topology`, select `Query `
-
-2. Replace `[YourInputAlias]` with the input alias of the job
-
-3. Replace `[YourOutputAlias]` with the output alias of the job
-
-4. Add the following `WHERE` clause as the last line of the query. This line ensures that only messages with a `People_Count` property will be forwarded to Power BI.
-
- `WHERE People_Count IS NOT NULL `
-
-5. The query will look as follows -
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/query.png" alt-text="Displays the query.":::
-
-6. Click `Save Query`
-
- **Note- The `People_Count` property is sent from the `countmodule` to the IoT hub and is forwarded to the Stream Analytics job.**
-
-### Run the Stream Analytics job
-1. In the Stream Analytics job, select `Overview`, then select `Start > Now > Start`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-start.png" alt-text="Start the stream analytics job.":::
-
-2. Once the job successfully starts, the job status changes from `Stopped` to `Running`
-
- :::image type="content" source="./media/create-people-counting-solution-with-azure-percept-vision-images/stream-analytics-running.png" alt-text="See running state.":::
--
-## Step 6: Create and publish a Power BI report to visualize data
-This step will guide users on how to create a Power BI report from the People Counting telemetry data. The tutorial walks through the initial steps to visualize people counting data. Users who are interested in learning more ways to transform, aggregate, and visualize their data can explore the [Power BI product page](https://powerbi.microsoft.com/) for ideas and templates.
-
-1. Login to [Power BI](https://msit.powerbi.com/home) and select your Workspace (this is the same Group Workspace you used while creating the Stream Analytics job output)
-
- ![select your Workspace](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-1.png)
-
-2. Verify that you see your dataset
-
- ![Verify Power BI Dataset.](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-data-set.png)
-
-3. On the left scroll panel select `+ Create` and then click `Pick a published dataset`
-
- ![publish dataset](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-create.png)
-
-4. Select your dataset and click `Create `
-
-5. On the right, expand the `Fields` dropdown and select `EventEnqueuedUtcTime` and `ΣPeople_Count`
-
-6. Under `Visualizations` select `Line and clustered column chart`
-
- ![select correct column chart](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-fields.png)
-
-7. This will generate a graph as follows-
-
- ![graph is generated](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-graph.png)
-
-8. Click `Refresh` periodically to update the graph
-
- ![update the graph](./media/create-people-counting-solution-with-azure-percept-vision-images/power-bi-graph-refresh.png)
-
-## Step 7: Clean up resources
-
-If you're not going to continue to use this application, delete
-Azure resources with the following steps:
-
-1. Log in to the [Azure portal](https://portal.azure.com) and go to the `Resource Group` you have been using for this tutorial. Select the `Stream Analytics Job` resource you created, stop the job from running, and then delete it.
-
-2. Log in to [Power BI](https://msit.powerbi.com/home), select your Workspace (this is the same Group Workspace you used while creating the Stream Analytics job output), and delete the workspace.
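-
-The Stream Analytics cleanup can also be scripted with the Azure CLI; in this sketch the resource group and job names are placeholders.
-
-```bash
-# Delete the Stream Analytics job created for this tutorial
-az resource delete \
-  --resource-group <your-resource-group> \
-  --name <your-stream-analytics-job-name> \
-  --resource-type "Microsoft.StreamAnalytics/streamingjobs"
-```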
-
-## Next steps
-
-Check out the other tutorial under the Advanced prototyping with Azure Percept section for your Azure Percept DK.
-
azure-percept Delete Voice Assistant Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/delete-voice-assistant-application.md
- Title: Delete your Azure Percept Audio voice assistant application
-description: This article shows you how to delete a previously created voice assistant application.
---- Previously updated : 02/07/2023----
-# Delete your Azure Percept Audio voice assistant application
--
-These instructions will show you how to delete a voice assistant application from your Azure Percept Audio device.
-
-## Prerequisites
-
-- [A previously created voice assistant application](./tutorial-no-code-speech.md)
-- Your Azure Percept DK is powered on and the Azure Percept Audio accessory is connected via a USB cable.
-
-## Remove all voice assistant resources from the Azure portal
-
-Once you're done working with your voice assistant application, follow these steps to clean up the speech resources you deployed when creating the application.
-
-1. From the [Azure portal](https://portal.azure.com), select **Resource groups** from the left menu panel or type it into the search bar.
-
- :::image type="content" source="./media/tutorial-no-code-speech/azure-portal.png" alt-text="Screenshot of Azure portal homepage showing left menu panel and Resource Groups.":::
-
-1. Select your resource group.
-
-1. Select all six resources that contain your application prefix and select the **Delete** icon on the top menu panel.
-
- :::image type="content" source="./media/tutorial-no-code-speech/select-resources.png" alt-text="Screenshot of speech resources selected for deletion.":::
-
-1. To confirm deletion, type **yes** in the confirmation box, verify you've selected the correct resources, and select **Delete**.
-
- :::image type="content" source="./media/tutorial-no-code-speech/delete-confirmation.png" alt-text="Screenshot of delete confirmation window.":::
-
-> [!WARNING]
-> This will remove any custom keywords created with the speech resources you are deleting, and the voice assistant demo will no longer function.
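-
-If you'd like to double-check which resources carry your application prefix before deleting them, a sketch with the Azure CLI (the resource group and prefix are placeholders):
-
-```bash
-# List resources in the group whose names start with the application prefix
-az resource list \
-  --resource-group <your-resource-group> \
-  --query "[?starts_with(name, '<your-app-prefix>')].{name:name, type:type}" \
-  --output table
-```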
--
-## Next steps
-Now that you've removed your voice assistant application, try creating other applications on your Azure Percept DK by following these tutorials.
-- [Create a no-code vision solution](./tutorial-nocode-vision.md)
-- [Create a no-code voice assistant application](./tutorial-no-code-speech.md)
-
-
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/dev-tools-installer.md
- Title: Install Azure Percept development tools
-description: Learn more about using the Dev Tools Pack Installer to accelerate advanced development with Azure Percept
---- Previously updated : 02/07/2023----
-# Install Azure Percept development tools
--
-The Dev Tools Pack Installer is a one-stop solution that installs and configures all the tools required to develop an advanced intelligent edge solution.
-
-## Mandatory tools
-
-* [Visual Studio Code](https://code.visualstudio.com/)
-* [Python 3.6 or later](https://www.python.org/)
-* [Docker 20.10](https://www.docker.com/)
-* [PIP3 21.1](https://pip.pypa.io/en/stable/user_guide/)
-* [TensorFlow 2.0](https://www.tensorflow.org/)
-* [Azure Machine Learning SDK 1.2](/python/api/overview/azure/ml/)
-
-## Optional tools
-
-* [NVIDIA DeepStream SDK 5](https://developer.nvidia.com/deepstream-sdk) (toolkit for developing solutions for NVIDIA Accelerators)
-* [Intel OpenVINO Toolkit 2021.3](https://docs.openvinotoolkit.org/) (toolkit for developing solutions for Intel Accelerators)
-* [Lobe.ai 0.9](https://lobe.ai/)
-* [Streamlit 0.8](https://www.streamlit.io/)
-* [Pytorch 1.4.0 (Windows) or 1.2.0 (Linux)](https://pytorch.org/)
-* [Miniconda 4.5](https://docs.conda.io/en/latest/miniconda.html)
-* [Chainer 7.7](https://chainer.org/)
-* [Caffe 1.0](https://caffe.berkeleyvision.org/)
-* [CUDA Toolkit 11.2](https://developer.nvidia.com/cuda-toolkit)
-* [Microsoft Cognitive Toolkit 2.5.1](https://www.microsoft.com/research/product/cognitive-toolkit/?lang=fr_ca)
-
-## Known issues
-
-- Optional Caffe, NVIDIA DeepStream SDK, and Intel OpenVINO Toolkit installations might fail if Docker isn't running properly. To install these optional tools, ensure that Docker is installed and running before you attempt the installations through the Dev Tools Pack Installer.
-
-- The optional CUDA Toolkit installed on the Mac version is 10.0.130. CUDA Toolkit 11 no longer supports development or running applications on macOS.
-
-## Docker minimum requirements
-
-### Windows
-
-- Windows 10 64-bit: Pro, Enterprise, or Education (build 16299 or later).
-
-- Hyper-V and Containers Windows features must be enabled. The following hardware prerequisites are required to successfully run Hyper-V on Windows 10:
-
- - 64-bit processor with [Second Level Address Translation (SLAT)](https://en.wikipedia.org/wiki/Second_Level_Address_Translation)
- - 4 GB system RAM
- - BIOS-level hardware virtualization support must be enabled in the BIOS settings. For more information, see Virtualization.
-
-> [!NOTE]
-> Docker supports Docker Desktop on Windows based on Microsoft's support lifecycle for Windows 10 operating system. For more information, see the [Windows lifecycle fact sheet](https://support.microsoft.com/help/13853/windows-lifecycle-fact-sheet).
-
-Learn more about [installing Docker Desktop on Windows](https://docs.docker.com/docker-for-windows/install/#install-docker-desktop-on-windows).
-
-### Mac
-
-- Mac must be a 2010 or a newer model with the following attributes:
- - Intel processor
- - Intel's hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check to see if your machine has this support by running the following command in a terminal: ```sysctl kern.hv_support```. If your Mac supports the Hypervisor framework, the command prints ```kern.hv_support: 1```.
-
-- macOS version 10.14 or newer (Mojave, Catalina, or Big Sur). We recommend upgrading to the latest version of macOS. If you experience any issues after upgrading your macOS to version 10.15, you must install the latest version of Docker Desktop to be compatible with this version of macOS.
-
-- At least 4 GB of RAM.
-
-- Do NOT install VirtualBox prior to version 4.3.30; it is not compatible with Docker Desktop.
-
-- The installer is not supported on Apple M1.
-
-Learn more about [installing Docker Desktop on Mac](https://docs.docker.com/docker-for-mac/install/#system-requirements).
-
-## Launch the installer
-
-Download the Dev Tools Pack Installer for [Windows](https://go.microsoft.com/fwlink/?linkid=2132187), [Linux](https://go.microsoft.com/fwlink/?linkid=2132186), or [Mac](https://go.microsoft.com/fwlink/?linkid=2132296). Launch the installer according to your platform, as described below.
-
-### Windows
-
-1. Click on **Dev-Tools-Pack-Installer** to open the installation wizard.
-
-### Mac
-
-1. After downloading, move the **Dev-Tools-Pack-Installer.app** file to the **Applications** folder.
-
-1. Click on **Dev-Tools-Pack-Installer.app** to open the installation wizard.
-
-1. If you receive an "unidentified developer" security dialog:
-
- 1. Go to **System Preferences** -> **Security & Privacy** -> **General** and click **Open Anyway** next to **Dev-Tools-Pack-Installer.app**.
- 1. Click the electron icon.
- 1. Click **Open** in the security dialog.
-
-### Linux
-
-1. When prompted by the browser, click **Save** to complete the installer download.
-
-1. Add execution permissions to the **.appimage** file:
-
- 1. Open a Linux terminal.
-
- 1. Enter the following in the terminal to go to the **Downloads** folder:
-
- ```bash
- cd ~/Downloads/
- ```
-
- 1. Make the AppImage executable:
-
- ```bash
- chmod +x Dev-Tools-Pack-Installer.AppImage
- ```
-
- 1. Run the installer:
-
- ```bash
- ./Dev-Tools-Pack-Installer.AppImage
- ```
-
-1. Alternatively, add execution permissions to the **.appimage** file using the file manager:
-
- 1. Right click on the .appimage file and select **Properties**.
- 1. Open the **Permissions** tab.
- 1. Check the box next to **Allow executing file as a program**.
- 1. Close **Properties** and open the **.appimage** file.
-
-## Run the installer
-
-1. On the **Install Dev Tools Pack Installer** page, click **View license** to view the license agreements of each software package included in the installer. If you accept the terms in the license agreements, check the box and click **Next**.
-
- :::image type="content" source="./media/dev-tools-installer/dev-tools-license-agreements.png" alt-text="License agreement screen in the installer.":::
-
-1. Click on **Privacy Statement** to review the Microsoft Privacy Statement. If you agree to the privacy statement terms and would like to send diagnostic data to Microsoft, select **Yes** and click **Next**. Otherwise, select **No** and click **Next**.
-
- :::image type="content" source="./media/dev-tools-installer/dev-tools-privacy-statement.png" alt-text="Privacy statement agreement screen in the installer.":::
-
-1. On the **Configure Components** page, select the optional tools you would like to install (the mandatory tools will install by default).
-
- 1. If you are working with the Azure Percept Audio SoM, which is part of the Azure Percept DK, make sure to install the Intel OpenVino Toolkit and Miniconda3.
-
- 1. Click **Install** to proceed with the installation.
-
- :::image type="content" source="./media/dev-tools-installer/dev-tools-configure-components.png" alt-text="Installer screen showing available software packages.":::
-
-1. After successful installation of all selected components, the wizard proceeds to the **Completing the Setup Wizard** page. Click **Finish** to exit the installer.
-
- :::image type="content" source="./media/dev-tools-installer/dev-tools-finish.png" alt-text="Installer completion screen.":::
-
-## Docker status check
-
-If the installer notifies you to verify Docker Desktop is in a good running state, see the following steps:
-
-### Windows
-
-1. Expand system tray hidden icons.
-
- :::image type="content" source="./media/dev-tools-installer/system-tray.png" alt-text="System Tray.":::
-
-1. Verify the Docker Desktop icon shows **Docker Desktop is Running**.
-
- :::image type="content" source="./media/dev-tools-installer/docker-status-running.png" alt-text="Docker Status.":::
-
-1. If you do not see the above icon listed in the system tray, launch Docker Desktop from the start menu.
-
-1. If Docker prompts you to reboot, it's fine to close the installer and relaunch after a reboot has completed and Docker is in a running state. Any successfully installed third-party applications should be detected and will not be automatically reinstalled.
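-
-You can also confirm the Docker engine is responsive from a terminal; if the daemon isn't running, the command below returns an error instead of a version number.
-
-```bash
-# Prints the Docker server version only when the daemon is up and reachable
-docker info --format '{{.ServerVersion}}'
-```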
-
-## Next steps
-
-Check out the [Azure Percept advanced development repository](https://github.com/microsoft/azure-percept-advanced-development) to get started with advanced development for Azure Percept DK.
azure-percept How To Capture Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-capture-images.md
- Title: Capture images in Azure Percept Studio
-description: How to capture images with your Azure Percept DK in Azure Percept Studio
---- Previously updated : 02/07/2023----
-# Capture images in Azure Percept Studio
--
-Follow this guide to capture images using Azure Percept DK for an existing vision project. If you haven't created a vision project yet, see the [no-code vision tutorial](./tutorial-nocode-vision.md).
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-- [No-code vision project](./tutorial-nocode-vision.md)
-
-## Capture images
-
-1. Power on your devkit.
-
-1. Navigate to [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
-
-1. On the left side of the overview page, select **Devices**.
-
- :::image type="content" source="./media/how-to-capture-images/overview-devices-inline.png" alt-text="Azure Percept Studio overview screen." lightbox="./media/how-to-capture-images/overview-devices.png":::
-
-1. Select your devkit from the list.
-
- :::image type="content" source="./media/how-to-capture-images/select-device.png" alt-text="Percept devices list.":::
-
-1. On your device page, select **Capture images for a project**.
-
- :::image type="content" source="./media/how-to-capture-images/capture-images.png" alt-text="Percept devices page with available actions listed.":::
-
-1. In the **Image capture** window, follow these steps:
-
- 1. In the **Project** dropdown menu, select the vision project you would like to collect images for.
-
- 1. Select **View device stream** to ensure the camera of the Vision SoM is placed correctly.
-
- 1. Select **Take photo** to capture an image.
-
-    1. Alternatively, check the box next to **Automatic image capture** to set up a timer for image capture:
-
- 1. Select your preferred imaging rate under **Capture rate**.
- 1. Select the total number of images you would like to collect under **Target**.
-
- :::image type="content" source="./media/how-to-capture-images/take-photo.png" alt-text="Image capture screen.":::
-
-All images will be accessible in [Custom Vision](https://www.customvision.ai/).
-
-## Next steps
-
-[Test and retrain your no-code vision model](../cognitive-services/custom-vision-service/test-your-model.md).
azure-percept How To Configure Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-configure-voice-assistant.md
- Title: Configure your Azure Percept voice assistant application
-description: Configure your voice assistant application using Azure IoT Hub
---- Previously updated : 02/07/2023----
-# Configure your Azure Percept voice assistant application
--
-This article describes how to configure your voice assistant application using IoT Hub. For a step-by-step tutorial for the process of creating a voice assistant, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
-
-## Update your voice assistant configuration
-
-1. Open the [Azure portal](https://portal.azure.com) and type **IoT Hub** into the search bar. Select the icon to open the IoT Hub page.
-
-1. On the IoT Hub page, select the IoT Hub to which your device was provisioned.
-
-1. Select **IoT Edge** under **Automatic Device Management** in the left navigation menu to view all devices connected to your IoT Hub.
-
-1. Select the device to which your voice assistant application was deployed.
-
-1. Select **Set Modules**.
-
- :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/set-modules.png" alt-text="Screenshot of device page with Set Modules highlighted.":::
-
-1. Verify that the following entry is present under the **Container Registry Credentials** section. Add credentials if necessary.
-
- |Name|Address|Username|Password|
- |-|-|--|--|
- |azureedgedevices|azureedgedevices.azurecr.io|devkitprivatepreviewpull|
-
-1. In the **IoT Edge Modules** section, select **azureearspeechclientmodule**.
-
- :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/modules.png" alt-text="Screenshot showing list of all IoT Edge modules on the device.":::
-
-1. Select the **Module Settings** tab. Verify the following configuration:
-
-    |Image URI|Restart Policy|Desired Status|
-    |--|--|--|
-    |mcr.microsoft.com/azureedgedevices/azureearspeechclientmodule: preload-devkit|always|running|
-
- If your settings don't match, edit them and select **Update**.
-
-1. Select the **Environment Variables** tab. Verify that there are no environment variables defined.
-
-1. Select the **Module Twin Settings** tab. Update the **speechConfigs** section as follows:
-
- ```
- "speechConfigs": {
- "appId": "<Application id for custom command project>",
- "key": "<Speech Resource key for custom command project>",
- "region": "<Region for the speech service>",
- "keywordModelUrl": "https://aedsamples.blob.core.windows.net/speech/keyword-tables/computer.table",
- "keyword": "computer"
- }
- ```
-
- > [!NOTE]
- > The keyword used above is a default publicly available keyword. If you wish to use your own, you can add your own custom keyword by uploading a created table file to blob storage. Blob storage needs to be configured with either anonymous container access or anonymous blob access.
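-
-    As a sketch of that upload with the Azure CLI, where the storage account, container, and file names are placeholders:
-
-    ```bash
-    # Upload the custom keyword table file to a blob container
-    az storage blob upload \
-      --account-name <your-storage-account> \
-      --container-name keyword-tables \
-      --name custom-keyword.table \
-      --file ./custom-keyword.table
-
-    # Allow anonymous read access to blobs in the container, as the note above requires
-    az storage container set-permission \
-      --account-name <your-storage-account> \
-      --name keyword-tables \
-      --public-access blob
-    ```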
-
-## How to find out appId, key and region
-
-To locate your **appID**, **key**, and **region**, go to [Speech Studio](https://speech.microsoft.com/):
-
-1. Sign in and select the appropriate speech resource.
-1. On the **Speech Studio** home page, select **Custom Commands** under **Voice Assistants**.
-1. Select your target project.
-
- :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/project.png" alt-text="Screenshot of project page in Speech Studio.":::
-
-1. Select **Settings** on the left-hand menu panel.
-1. The **appID** and **key** will be located under the **General** settings tab.
-
- :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/general-settings.png" alt-text="Screenshot of speech project general settings.":::
-
-1. To find your **region**, open the **LUIS resources** tab within the settings. The **Authoring resource** selection will contain region information.
-
- :::image type="content" source="./media/manage-voice-assistant-using-iot-hub/luis-resources.png" alt-text="Screenshot of speech project LUIS resources.":::
-
-1. After entering your **speechConfigs** information, select **Update**.
-
-1. Select the **Routes** tab at the top of the **Set modules** page. Ensure you have a route with the following value:
-
- ```
- FROM /messages/modules/azureearspeechclientmodule/outputs/* INTO $upstream
- ```
-
- Add the route if it doesn't exist.
-
-1. Select **Review + Create**.
-
-1. Select **Create**.
--
-## Next steps
-
-After updating your voice assistant configuration, return to the demo in Azure Percept Studio to interact with the application.
azure-percept How To Connect Over Ethernet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-connect-over-ethernet.md
- Title: Connect to Azure Percept DK over Ethernet
-description: This guide shows users how to connect to the Azure Percept DK setup experience when connected over an Ethernet connection.
---- Previously updated : 02/07/2023----
-# Connect to Azure Percept DK over Ethernet
--
-In this how-to guide you'll learn how to launch the Azure Percept DK setup experience over an Ethernet connection. It's a companion to the [Quick Start: Set up your Azure Percept DK and deploy your first AI model](./quickstart-percept-dk-set-up.md) guide. See each option outlined below and choose which one is most appropriate for your environment.
-
-## Prerequisites
-- An Azure Percept DK
-- A Windows, Linux, or OS X based host computer with Wi-Fi or ethernet capability and a web browser
-- Network cable
-
-## Identify your dev kit's IP address
-
-The key to running the Azure Percept DK setup experience over an Ethernet connection is finding your dev kit's IP address. This article covers three options:
-1. From your network router
-1. Via SSH
-1. Via the Nmap tool
-
-### From your network router
-The fastest way to identify your dev kit's IP address is to look it up on your network router.
-1. Plug the Ethernet cable into the dev kit and the other end into the router.
-1. Power on your Azure Percept DK.
-1. Look for a sticker on the network router specifying access instructions
-
- **Here are examples of router stickers**
-
- :::image type="content" source="media/how-to-connect-over-ethernet/router-sticker-01.png" alt-text="example sticker from a network router":::
-
- :::image type="content" source="media/how-to-connect-over-ethernet/router-sticker-02.png" alt-text="another example sticker from a network router":::
-
-1. On your computer that is connected to Ethernet or Wi-Fi, open a web browser.
-1. Type the browser address for the router as found on the sticker.
-1. When prompted, enter the name and password for the router as found on the sticker.
-1. Once in the router interface, select My Devices (or something similar, depending on your router).
-1. Find the Azure Percept dev kit in the list of devices
-1. Copy the IP address of the Azure Percept dev kit
-
-### Via SSH
-It's possible to find your dev kit's IP address by connecting to the dev kit over SSH.
-
-> [!NOTE]
-> Using the SSH method of identifying your dev kit's IP address requires that you are able to connect to your dev kit's Wi-Fi access point. If this is not possible for you, please use one of the other methods.
-
-1. Plug the ethernet cable into the dev kit and the other end into the router
-1. Power on your Azure Percept dev kit
-1. Connect to your dev kit over SSH. See [Connect to your Azure Percept DK over SSH](./how-to-ssh-into-percept-dk.md) for detailed instruction on how to connect to your dev kit over SSH.
-1. To list the Ethernet local network IP address, type the following command in your SSH terminal window:
-
- ```bash
- ip a | grep eth1
- ```
-
- :::image type="content" source="media/how-to-connect-over-ethernet/ssh-local-network-address.png" alt-text="example of identifying local network IP in SSH terminal":::
--
-1. The dev kit's IP address is displayed after 'inet'. Copy the IP address.
-
-### Using the Nmap tool
-You can also use free tools found on the Web to identify your dev kit's IP address. In these instructions, we cover a tool called Nmap.
-1. Plug the ethernet cable into the dev kit and the other end into the router.
-1. Power on your Azure Percept dev kit.
-1. On your host computer, download and install the [Free Nmap Security Scanner](https://nmap.org/download.html) that is needed for your platform (Windows/Mac/Linux).
-1. Obtain your computer's "Default Gateway" - [How to Find Your Default Gateway](https://www.noip.com/support/knowledgebase/finding-your-default-gateway/)
-1. Open the Nmap application
-1. Enter your Default Gateway into the *Target* box and append **/24** to the end. Change *Profile* to **Quick scan** and select the **Scan** button.
-
- :::image type="content" source="media/how-to-connect-over-ethernet/nmap-tool.png" alt-text="example of the Nmap tool input":::
-
-1. In the results, find the Azure Percept dev kit in the list of devices - similar to **apd-xxxxxxxx**
-1. Copy the IP address of the Azure Percept dev kit
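-
-If you'd rather run nmap from a terminal, the Zenmap **Quick scan** profile corresponds roughly to the commands below; the `192.168.1.0/24` subnet is only an example, so substitute the subnet of your own default gateway.
-
-```bash
-# Find your default gateway (on many Linux distributions), then quick-scan its /24 subnet
-ip route | grep default
-nmap -T4 -F 192.168.1.0/24
-```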
-
-## Launch the Azure Percept DK setup experience
-1. Plug the ethernet cable into the dev kit and the other end into the router.
-1. Power on your Azure Percept dev kit.
-1. Open a web browser and paste the dev kit's IP address. The setup experience should launch in the browser.
-
-## Next steps
-- [Complete the set up experience](./quickstart-percept-dk-set-up.md)
azure-percept How To Connect To Percept Dk Over Serial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-connect-to-percept-dk-over-serial.md
- Title: Connect to Azure Percept DK over serial
-description: How to set up a serial connection to your Azure Percept DK with a USB to TTL serial cable
---- Previously updated : 02/07/2023----
-# Connect to Azure Percept DK over serial
--
-Follow the steps below to set up a serial connection to your Azure Percept DK through [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
-
-> [!WARNING]
-> **If you have a private preview dev kit** we do **NOT** recommend attempting to connect your dev kit over serial except in extreme failure cases (e.g. you bricked your device). Connecting over serial requires that the private preview dev kit be disassembled to access the GPIO pins. Taking apart the carrier board enclosure is very difficult and could break the Wi-Fi antenna cables.
-
-## Prerequisites
-- [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)
-- Host PC
-- Azure Percept DK
-- [USB to TTL serial cable](https://www.adafruit.com/product/954)
-
- :::image type="content" source="./media/how-to-connect-to-percept-dk-over-serial/usb-serial-cable.png" alt-text="USB to TTL serial cable.":::
-
-## Start the serial connection
-
-1. Connect the [USB to TTL serial cable](https://www.adafruit.com/product/954) to the three GPIO pins on the motherboard as shown below.
-
- :::image type="content" source="./media/how-to-connect-to-percept-dk-over-serial/apdk-serial-pins.jpg" alt-text="Carrier board serial pin connections.":::
-
-1. Power on your dev kit and connect the USB side of the serial cable to your PC.
-
-1. In Windows, go to **Start** -> **Windows Update settings** -> **View optional updates** -> **Driver updates**. Look for a Serial to USB update in the list, check the box next to it, and select **Download and Install**.
-
-1. Next, open the Windows Device Manager (**Start** -> **Device Manager**). Go to **Ports** and select **USB to UART** to open **Properties**. Note which COM port your device is connected to.
-
-1. Select the **Port Settings** tab. Make sure **Bits per second** is set to 115200.
-
-1. Open PuTTY. Enter the following and select **Open** to connect to your devkit via serial:
-
- 1. Serial line: COM[port #]
- 1. Speed: 115200
- 1. Connection Type: Serial
-
- :::image type="content" source="./media/how-to-connect-to-percept-dk-over-serial/putty-serial-session.png" alt-text="PuTTY session window with serial parameters selected.":::
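-
-On Linux or macOS, a serial terminal program such as `screen` can be used instead of PuTTY; this is a minimal sketch, and the device path varies by adapter (check `dmesg` or the `/dev` directory after plugging in the cable).
-
-```bash
-# Open a 115200-baud serial session to the dev kit
-sudo screen /dev/ttyUSB0 115200
-```
-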
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-deploy-model.md
- Title: Deploy a vision AI model to Azure Percept DK
-description: Learn how to deploy a vision AI model to your Azure Percept DK from Azure Percept Studio
---- Previously updated : 02/07/2023----
-# Deploy a vision AI model to Azure Percept DK
--
-Follow this guide to deploy a vision AI model to your Azure Percept DK from within Azure Percept Studio.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-
-## Model deployment
-
-1. Power on your devkit.
-
-1. Navigate to [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
-
-1. On the left side of the overview page, click **Devices**.
-
- :::image type="content" source="./media/how-to-deploy-model/overview-devices-inline.png" alt-text="Azure Percept Studio overview screen." lightbox="./media/how-to-deploy-model/overview-devices.png":::
-
-1. Select your devkit from the list.
-
- :::image type="content" source="./media/how-to-deploy-model/select-device.png" alt-text="Percept devices list.":::
-
-1. On the next page, click **Deploy a sample model** if you would like to deploy one of the pre-trained sample vision models. If you would like to deploy an existing [custom no-code vision solution](./tutorial-nocode-vision.md), click **Deploy a Custom Vision project**. If you do not see your Custom Vision projects, set the project's domain to "General (Compact)" on the [Custom Vision portal](https://www.customvision.ai/) and train the model again. Other domains are not currently supported.
-
- :::image type="content" source="./media/how-to-deploy-model/deploy-model.png" alt-text="Model choices for deployment.":::
-
-1. If you opted to deploy a no-code vision solution, select your project and your preferred model iteration, and click **Deploy**.
-
-1. If you opted to deploy a sample model, select the model and click **Deploy to device**.
-
-1. When your model deployment is successful, you will receive a status message in the upper right corner of your screen. To view your model inferencing in action, click the **View stream** link in the status message to see the RTSP video stream from the Vision SoM of your devkit.
-
-## Next steps
-
-Learn how to view your [Azure Percept DK telemetry](how-to-view-telemetry.md).
azure-percept How To Determine Your Update Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-determine-your-update-strategy.md
- Title: Determine your update strategy for Azure Percept DK
-description: Pros and cons of Azure Percept DK OTA or USB cable updates. Recommendation for choosing the best update approach for different users.
---- Previously updated : 02/07/2023----
-# Determine your update strategy for Azure Percept DK
---
->[!CAUTION]
->**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
-
-To keep your Azure Percept DK software up to date, Microsoft offers two update methods for the dev kit: **Update over USB cable** or **Over-the-air (OTA) update**.
-
-Update over USB cable does a clean install on the dev kit. Existing configurations and all the user data in each partition will be wiped out after the new image is deployed. To do that, connect the dev kit to a host system with a USB-C cable. The host system can be a Windows/Linux machine. You can also use this update method as a factory reset by redeploying the exact same version to the dev kit. Refer to [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md) for details about the USB cable update.
-
-The OTA update is built on top of the [Device Update for IoT Hub](../iot-hub-device-update/device-update-resources.md) Azure service. Connect the dev kit to Azure IoT Hub to do this type of update. Configurations and user data will be preserved after the OTA update. Refer to [Update your Azure Percept DK over-the-air (OTA)](./how-to-update-over-the-air.md) for detail about doing the OTA update.
-
-Check the pros and cons for both USB cable update and OTA update, then follow the Microsoft recommendations for different scenarios.
-
-## USB cable update
-
-- Pros
- - You don't need to connect the dev kit to internet/Azure.
- - Latest image is always applicable no matter what version of software and firmware are currently loaded on the dev kit.
-- Cons
- - Reimages the device and will remove configurations and user data.
- - Need to rerun OOBE and download any non-preloaded container.
- - Cannot be performed remotely.
-
-## OTA update
-
-- Pros
- - Preserves user data, configurations, and downloaded containers. Dev kit will keep working as it was after the OTA.
- - Update can be performed remotely.
- - Several similar devices can be updated at the same time. Updates can also be scheduled to happen, for example during night-time.
-- Cons
- - There may be hard-stop version(s) that cannot be skipped. Refer to [Hard-Stop Version of OTA](./software-releases-over-the-air-updates.md#hard-stop-version-of-ota).
- - The device needs to connect to an IoT Hub that has the "Device Update for IoT Hub" feature properly configured.
- - It doesn't work well for downgrades.
-
-> [!IMPORTANT]
-> Device Update for IoT Hub does not block deployment of an image with a version that is older than the currently running OS. However, doing so on the dev kit will result in loss of data and functionality.
-
-## Microsoft recommendations
-
-|Type|Scenario|Update Method|
-|:-:|--|:-:|
-|Production|Keep dev kit up to date for latest fix and security patch while it's already running your solution or deployed to the field.|OTA|
-|Production/Develop|Unboxing a new dev kit and update it to the latest software.|USB|
-|Production/Develop|Want to update to the latest software version while have already skipped several monthly releases.|USB|
-|Production/Develop|Factory reset a dev kit.|USB|
-|Develop|During solution development, want to keep the dev kit OS and F/W up to date.|USB/OTA|
-|Develop|Jump to any specific (older) version for issue investigation/debugging.|USB|
-
-## Next steps
-
-After deciding the update method of choice, visit the following pages to get ready for the update:
-
-USB cable update
-- [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md)
-- [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md)
-
-OTA
-- [Update your Azure Percept DK over-the-air (OTA)](./how-to-update-over-the-air.md)
-- [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md)
azure-percept How To Get Hardware Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-get-hardware-support.md
- Title: Get Azure Percept hardware support from ASUS
-description: This guide shows you how to contact ASUS for technical support for the Azure Percept DK hardware.
---- Previously updated : 02/07/2023----
-# Get Azure Percept hardware support from ASUS
--
-As the OEM for the Azure Percept DK, ASUS provides technical support to all customers who purchased a device and business support for customers interested in purchasing devices. This article shows you how to contact ASUS to get support.
--
-## Prerequisites
-- For the best support experience, be ready to provide the device serial number found on the back of the developer board.
-
-## Get technical support for hardware issues
-If you experience issues with the hardware, which can include missing and broken components, you must contact ASUS directly to get support.
-1. Go to the official ASUS [technical support website](https://www.asus.com/us/support/contact/troubleshooting).
-1. Enter your device serial number or if you've already registered your device you can select **Choose your registered product**.
-1. If you don't have the serial number, you can search for the product.
- 1. Under **Select a Product**, select **Show All Products**.
- 1. Select **AIOT & Industrial Solutions**.
- 1. For **Product Series**, select **Intelligent Edge Computer**.
- 1. For **Product Model**, select **Azure Percept DK** or **Azure Percept Audio**.
- 1. Select **Continue**.
-1. You'll be presented with a list of articles for common support issues. Select the article that best represents the issue you're experiencing.
-1. If none of the articles cover your issue, select the **See support** button for options on receiving direct support.
-
-## Get support for business and sales questions
-If you would like to contact ASUS about purchasing dev kits, you can submit an inquiry and they'll connect you with the right people.
-1. Go to the [inquiry form](https://iot.asus.com/inquiry/).
-1. Fill out the needed fields and **Submit**.
-1. An ASUS representative will follow up.
-
-## Next steps
-If you think you need more support, you can also try these options from Microsoft.
-- [Microsoft Q&A](/answers/products/)
-- [Azure Support](https://azure.microsoft.com/support/plans/)
azure-percept How To Manage Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-manage-voice-assistant.md
- Title: Manage your Azure Percept voice assistant application
-description: Configure a voice assistant application within Azure Percept Studio
---- Previously updated : 02/07/2023----
-# Manage your Azure Percept voice assistant application
--
-This article describes how to configure the keyword and commands of your voice assistant application within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). For guidance on configuring your keyword within IoT Hub instead of the portal, see this [how-to article](./how-to-configure-voice-assistant.md).
-
-If you have not yet created a voice assistant application, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
-
-## Keyword configuration
-
-A keyword is a word or short phrase used to activate a voice assistant. For example, "Hey Cortana" is the keyword for the Cortana assistant. Voice activation allows your users to start interacting with your product hands-free by speaking the keyword. As your product continuously listens for the keyword, all audio is processed locally on the device until a detection occurs to ensure user data stays as private as possible.
-
-### Configuration within the voice assistant demo window
-
-1. Select **change** next to **Custom Keyword** on the demo page.
-
- :::image type="content" source="./media/manage-voice-assistant/hospitality-demo.png" alt-text="Screenshot of hospitality demo window.":::
-
- If you do not have the demo page open, navigate to the device page (see below) and select **Test your voice assistant** under **Actions** to access the demo.
-
-1. Select one of the available keywords and select **Save** to apply changes.
-
-1. The three LED lights on the Azure Percept Audio device will change to bright blue (no flashing) when configuration is complete and your voice assistant is ready to use.
-
-### Configuration within the device page
-
-1. On the overview page of the [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), select **Devices** on the left menu pane.
-
- :::image type="content" source="./media/manage-voice-assistant/portal-overview-devices.png" alt-text="Screenshot of Azure Percept Studio overview page with Devices highlighted.":::
-
-1. Select the device to which your voice assistant application was deployed.
-
-1. Open the **Speech** tab.
-
- :::image type="content" source="./media/manage-voice-assistant/device-page.png" alt-text="Screenshot of the edge device page with the Speech tab highlighted.":::
-
-1. Select **Change** next to **Keyword**.
-
- :::image type="content" source="./media/manage-voice-assistant/change-keyword-device.png" alt-text="Screenshot of the available speech solution actions.":::
-
-1. Select one of the available keywords and select **Save** to apply changes.
-
-1. The three LED lights on the Azure Percept Audio device will change to bright blue (no flashing) when configuration is complete and your voice assistant is ready to use.
-
-## Create a custom keyword
-
-With [Speech Studio](https://speech.microsoft.com/), you can create a custom keyword for your voice assistant. It takes up to 30 minutes to train a basic custom keyword model.
-
-Follow the [Speech Studio documentation](../cognitive-services/speech-service/custom-keyword-basics.md) for guidance on creating a custom keyword. Once configured, your new keyword will be available in the Project Santa Cruz portal for use with your voice assistant application.
-
-## Commands configuration
-
-Custom commands make it easy to build rich voice commanding apps optimized for voice-first interaction experiences. Custom commands are best suited for task completion or command-and-control scenarios.
-
-### Configuration within the voice assistant demo window
-
-1. Select **Change** next to **Custom Command** on the demo page. If you do not have the demo page open, navigate to the device page (see below) and select **Test your voice assistant** under **Actions** to access the demo.
-
-1. Select one of the available custom commands and select **Save** to apply changes.
-
-### Configuration within the device page
-
-1. On the overview page of the [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), select **Devices** on the left menu pane.
-
-1. Select the device to which your voice assistant application was deployed.
-
-1. Open the **Speech** tab.
-
-1. Select **Change** next to **Command**.
-
-1. Select one of the available custom commands and select **Save** to apply changes.
-
-## Create custom commands
-
-With [Speech Studio](https://speech.microsoft.com/), you can create custom commands for your voice assistant to execute.
-
-Follow the [Speech Studio documentation](../cognitive-services/speech-service/quickstart-custom-commands-application.md) for guidance on creating custom commands. Once configured, your new commands will be available in Azure Percept Studio for use with your voice assistant application.
-
-## Next steps
-
-After building a voice assistant application, try developing a [no-code vision solution](./tutorial-nocode-vision.md) with your Azure Percept DK.
azure-percept How To Set Up Advanced Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-advanced-network-settings.md
- Title: Set up advanced network settings on the Azure Percept DK
-description: This article walks user through the Advanced Network Settings during the Azure Percept DK setup experience
---- Previously updated : 02/07/2023----
-# Set up advanced network settings on the Azure Percept DK
--
-The Azure Percept DK allows you to control various networking components on the dev kit. This is done via the Advanced Networking Settings in the setup experience. To access these settings, you must [start the setup experience](./quickstart-percept-dk-set-up.md) and select **Access advanced network settings** on the **Network connection** page.
--
-## Select the security setting
-IPv4 and IPv6 are both supported on the Azure Percept DK for local connectivity.
-
-> [!NOTE]
-> Azure IoT Hub [does not support IPv6](../iot-hub/iot-hub-understand-ip-address.md#support-for-ipv6). IPv4 must be used to communicate with IoT Hub.
-1. Select the IPv4 radio button and then select an item under Network Settings to change its IPv4 settings
-1. Select the IPv6 radio button and then select an item under Network Settings to change its IPv6 settings
-1. The **Network setting** options may change depending on your selection
--
-## Define a Static IP Address
-
-1. From the **Advanced network settings** page, select **Define a static IP address** from the list
-1. Select your **Network interface** from the drop-down menu
-1. Uncheck **Dynamic IP address**
-1. Enter your static IP address
-1. Enter your subnet IP address (also known as your subnet mask)
-1. Enter your gateway IP address (also known as your default gateway)
-1. If applicable, enter your DNS address
-1. Select **Save**
-1. Select **Back** to return to the main **Advanced networking settings** page
-
-## Define DNS server for Docker
-These settings allow you to modify or add new Docker DNS IP addresses.
-
-> [!NOTE]
-> The Docker service is configured to only accept IPv4 DNS entries. Entries added from the IPv6 screens will be ignored.
-
-1. From the **Advanced network settings** page, select **Define DNS server for Docker** from the list
-1. Enter your Docker IPv4 DNS address
-1. Select **Save**
-1. Select **Back** to return to the main **Advanced networking settings** page
-
-## Define Bridge Internet Protocol for Docker
-The Bridge Internet Protocol screens allow you to change the IPv4 address space for Docker containers.
-
-If your device's IP address shares the same route as the Azure Percept Devkit's Docker service (172.17.x.x), then you'll need to change Docker's Bridge to something else to allow communications between Docker containers and Azure IoT Hub.
-
-1. From the **Advanced network settings** page, select **Define Bridge Internet Protocol for Docker** from the list
-1. Type in the Docker Bridge Internet Protocol IPv4 address (BIP)
-1. Select **Save**
-1. Select **Back** to return to the main **Advanced networking settings** page
-
-## Define an internet proxy server
-This option allows you to define a proxy server.
-
-1. From the **Advanced network settings** page, select **Define an internet proxy server** from the list
-1. Check the **Use a proxy server** box to enable this option.
-1. Enter the **HTTP address** of your proxy server (if applicable)
-1. Enter the **HTTPS address** of your proxy server (if applicable)
-1. Enter the **FTP address** of your proxy server (if applicable)
-1. In the **No proxy addresses** box, enter any IP addresses that the proxy server shouldn't be used for
-1. Select **Save**
-1. Select **Back** to return to the main **Advanced networking settings** page
-
-## Setup Zero Touch Provisioning
-
-> [!IMPORTANT]
-> The **Setup Zero Touch Provisioning** settings are not currently functional
-
-This option allows you to turn your Azure Percept DK into a [Wi-Fi Easy Connect<sup>TM</sup> Bulk Configurator](https://techcommunity.microsoft.com/t5/internet-of-things/simplify-wi-fi-iot-device-onboarding-with-zero-touch/ba-p/2161129#:~:text=A%20Wi-Fi%20Easy%20Connect%E2%84%A2%20Configurator%2C%20paired%20with%20the,device%20to%20any%20WPA2-Personal%20or%20WPA3-Personal%20wireless%20LAN.) for onboarding multiple devices at once to your Wi-Fi infrastructure.
-
-## Define access point passphrase
-This option allows you to update the Azure Percept DK Wi-Fi access point passphrase.
-
-> [!CAUTION]
-> You will be immediately disconnected from the Wi-Fi access point after saving your new passphrase. Please reconnect using the new passphrase to regain access.
-
-Passphrase requirements:
-- Must be between 12 and 123 characters long
-- Must contain at least one lower case, one upper case, one number, and one special character.
-
-1. From the **Advanced network settings** page, select **Define access point passphrase** from the list
-1. Enter a new passphrase
-1. Select **Save**
-1. Select **Back** to return to the main **Advanced networking settings** page
-
-## Next steps
-After you have finished making changes in **Advanced network settings**, select the **Back** button to [continue through the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
azure-percept How To Set Up Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-over-the-air-updates.md
- Title: Set up Azure IoT Hub to deploy over-the-air updates
-description: Learn how to configure Azure IoT Hub to deploy updates over-the-air to Azure Percept DK
- Previously updated : 02/07/2023
-# Set up Azure IoT Hub to deploy over-the-air updates
-
->[!CAUTION]
->**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
-
-Keep your Azure Percept DK secure and up to date using over-the-air updates. In a few simple steps, you will be able to set up your Azure environment with Device Update for IoT Hub and deploy the latest updates to your Azure Percept DK.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your dev kit to a Wi-Fi network, created an IoT Hub, and connected your dev kit to the IoT Hub
-
-## Create a Device Update Account
-
-1. Go to the [Azure portal](https://portal.azure.com) and sign in with the Azure account you are using with Azure Percept.
-
-1. In the search bar at the top of the page, enter **Device Update for IoT Hubs**.
-
-1. Select **Device Update for IoT Hubs** when it appears in the search bar.
-
-1. Select the **+Add** button in the upper-left portion of the page.
-
-1. Select the **Azure Subscription** and **Resource Group** associated with your Azure Percept device and its IoT Hub.
-
-1. Specify a **Name** and **Location** for your Device Update Account.
-
-1. Check the box that says **Assign Device Update Administrator role.**
-
-1. Review the details and select **Review + Create**.
-
-1. Select the **Create** button.
-
-1. Once deployment is complete, click **Go to resource**.
-
-## Create a Device Update Instance
-
-1. In your Device Update for IoT Hub resource, click **Instances** under **Instance Management**.
-
-1. Click **+ Create**, specify an instance name, and select the IoT Hub associated with your Azure Percept device. This may take a few minutes to complete.
-
-1. Click **Create**. (An equivalent Azure CLI sketch is shown after these steps.)
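-
-If you prefer the command line, the account and instance described above can also be created with the Azure CLI. This is a hedged sketch that assumes the `azure-iot` CLI extension (which provides the `az iot du` command group) is installed; parameter names can vary between extension versions, so check `az iot du --help` before running it.
-
-```bash
-# Sketch only: create a Device Update account and an instance linked to your IoT Hub.
-az iot du account create --account <du-account-name> --resource-group <resource-group> --location <region>
-az iot du instance create --account <du-account-name> --instance <du-instance-name> \
-    --iothub-ids $(az iot hub show --name <iothub-name> --query id --output tsv)
-```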
-
-## Configure IoT Hub
-
-1. In the Instance Management **Instances** page, wait for your Device Update Instance to move to a **Succeeded** state. Click the **Refresh** icon to update the state.
-
-1. Select the Instance that has been created for you and click **Configure IoT Hub**. In the left pane, select **I agree to make these changes** and click **Update**.
-
-1. Wait for the process to complete successfully.
-
-## Configure access control roles
-
-The final step will enable you to grant permissions to users to publish and deploy updates.
-
-1. In your Device Update for IoT Hub resource, select **Access control (IAM)**.
-
-1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
- | Setting | Value |
- | | |
- | Role | Device Update Administrator |
- | Assign access to | User, group, or service principal |
- | Members | &lt;Your account or the account deploying updates&gt; |
-
- ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
-
-> [!TIP]
-> If you would like to give more people in your organization access, you can repeat this step and make each of these users a **Device Update Administrator**.
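-
-The same assignment can also be made from the Azure CLI. A minimal sketch; the scope shown is an assumption about your Device Update account's resource ID format, so substitute the account's full resource ID.
-
-```bash
-# Sketch only: grant a user the Device Update Administrator role on the account.
-az role assignment create \
-    --assignee "user@contoso.com" \
-    --role "Device Update Administrator" \
-    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DeviceUpdate/accounts/<du-account-name>"
-```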
-
-## Next steps
-
-You are now ready to [update your Azure Percept dev kit over-the-air](./how-to-update-over-the-air.md) using Device Update for IoT Hub.
azure-percept How To Ssh Into Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-ssh-into-percept-dk.md
- Title: Connect to Azure Percept DK over SSH
-description: Learn how to SSH into your Azure Percept DK with PuTTY
- Previously updated : 02/07/2023
-# Connect to Azure Percept DK over SSH
-
-Follow the steps below to set up an SSH connection to your Azure Percept DK through OpenSSH or [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
-
-## Prerequisites
-- A Windows, Linux, or OS X based host computer with Wi-Fi capability
-- An SSH client (see the next section for installation guidance)
-- An Azure Percept DK (dev kit)
-- An SSH account, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
-
-## Install your preferred SSH client
-
-If your host computer runs Linux or OS X, SSH services are included in those operating systems and can be run without a separate client application. Check your operating system product documentation for more information on how to run SSH services.
-
-If your host computer runs Windows, you may have two SSH client options to choose from: OpenSSH and PuTTY.
-
-### OpenSSH
-
-Windows 10 includes a built-in SSH client called OpenSSH that can be run with a simple command in a command prompt. We recommend using OpenSSH with Azure Percept if it's available to you. To check if your Windows computer has OpenSSH installed, follow these steps:
-
-1. Go to **Start** -> **Settings**.
-
-1. Select **Apps**.
-
-1. Under **Apps & features**, select **Optional features**.
-
-1. Type **OpenSSH Client** into the **Installed features** search bar. If OpenSSH appears, the client is already installed, and you may move on to the next section. If you do not see OpenSSH, select **Add a feature**.
-
- :::image type="content" source="./media/how-to-ssh-into-percept-dk/open-ssh-install.png" alt-text="Screenshot of settings showing OpenSSH installation status.":::
-
-1. Select **OpenSSH Client** and select **Install**. You may now move on to the next section. If OpenSSH is not available to install on your computer, follow the steps below to install PuTTY, a third-party SSH client.
-
-### PuTTY
-
-If your Windows computer does not include OpenSSH, we recommend using [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html). To download and install PuTTY, complete the following steps:
-
-1. Go to the [PuTTY download page](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
-
-1. Under **Package files**, select the 32-bit or 64-bit .msi file to download the installer. If you are unsure of which version to choose, check out the [FAQs](https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-32bit-64bit).
-
-1. Select the installer to start the installation process. Follow the prompts as required.
-
-1. Congratulations! You have successfully installed the PuTTY SSH client.
-
-## Initiate the SSH connection
- >[!NOTE]
- > You may receive a warning message from your SSH client that says your cached/stored key does not match. This can happen after flashing your device or when the IP/hostname has changed and is now provisioned to a new target. Follow your SSH client's instructions for how to remedy this.
-1. Power on your Azure Percept DK.
-
-1. If your dev kit is already connected to a network over Ethernet or Wi-Fi, skip to the next step. Otherwise, connect your host computer directly to the dev kit's Wi-Fi access point. Like connecting to any other Wi-Fi network, open the network and internet settings on your computer, select the following network, and enter the network password when prompted:
-
-    - **Network name**: depending on your dev kit's operating system version, the name of the Wi-Fi access point is either **scz-xxxx** or **apd-xxxx** (where "xxxx" is the last four digits of the dev kit's MAC address)
- - **Password**: can be found on the Welcome Card that came with the dev kit
-
- > [!WARNING]
- > While connected to the Azure Percept DK Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted.
-
-1. Complete the SSH connection process according to your SSH client.
-
-### Using OpenSSH
-
-1. Open a command prompt (**Start** -> **Command Prompt**).
-
-1. Enter the following into the command prompt:
-
- ```console
- ssh [your ssh user name]@[IP address]
- ```
-
- If your computer is connected to the dev kit's Wi-Fi access point, the IP address will be 10.1.1.1. If your dev kit is connected over Ethernet, use the local IP address of the device, which you can get from the Ethernet router or hub. If your dev kit is connected over Wi-Fi, you must use the IP address that was assigned to your dev kit during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
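-
-    For example, if the SSH account you created during setup were named `percept-admin` (a hypothetical name; use your own) and you're connected to the dev kit's Wi-Fi access point, the command would be:
-
-    ```console
-    ssh percept-admin@10.1.1.1
-    ```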
-
- > [!TIP]
- > If your dev kit is connected to a Wi-Fi network but you do not know its IP address, go to Azure Percept Studio and [open your device's video stream](./how-to-view-video-stream.md). The address bar in the video stream browser tab will show your device's IP address.
-
-1. Enter your SSH password when prompted.
-
- :::image type="content" source="./media/how-to-ssh-into-percept-dk/open-ssh-prompt.png" alt-text="Screenshot of Open SSH command prompt login.":::
-
-1. If this is the first time connecting to your dev kit through OpenSSH, you may also be prompted to accept the host's key. Enter **yes** to accept the key.
-
-1. Congratulations! You have successfully connected to your dev kit over SSH.
-
-### Using PuTTY
-
-1. Open PuTTY. Enter the following into the **PuTTY Configuration** window and select **Open** to SSH into your dev kit:
-
- 1. Host Name: [IP address]
- 1. Port: 22
- 1. Connection Type: SSH
-
- The **Host Name** is your dev kit's IP address. If your computer is connected to the dev kit's Wi-Fi access point, the IP address will be 10.1.1.1. If your dev kit is connected over Ethernet, use the local IP address of the device, which you can get from the Ethernet router or hub. If your dev kit is connected over Wi-Fi, you must use the IP address that was assigned to your dev kit during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md).
-
- > [!TIP]
- > If your dev kit is connected to a Wi-Fi network but you do not know its IP address, go to Azure Percept Studio and [open your device's video stream](./how-to-view-video-stream.md). The address bar in the video stream browser tab will show your device's IP address.
-
- :::image type="content" source="./media/how-to-ssh-into-percept-dk/ssh-putty.png" alt-text="Screenshot of PuTTY Configuration window.":::
-
-1. A PuTTY terminal will open. When prompted, enter your SSH username and password into the terminal.
-
-1. Congratulations! You have successfully connected to your dev kit over SSH.
-
-## Next steps
-
-After connecting to your Azure Percept DK through SSH, you may perform a variety of tasks, including [device troubleshooting](./troubleshoot-dev-kit.md) and [USB updates](./how-to-update-via-usb.md).
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-troubleshoot-setup.md
- Title: Troubleshoot the Azure Percept DK setup experience
-description: Get troubleshooting tips for some of the more common issues found during the setup experience
- Previously updated : 02/07/2023
-# Troubleshoot the Azure Percept DK setup experience
-
-Refer to the table below for workarounds to common issues found during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md). If your issue still persists, contact Azure customer support.
-
-|Issue|Reason|Workaround|
-|:--|:--|:--|
-|SSH username or password is lost or unknown. | You have lost or can't remember your SSH username or password. | You can create a new SSH user by relaunching the Setup Experience as outlined in this [launch the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md#launch-the-azure-percept-dk-setup-experience) step. Skip through the steps you've previously completed until you get to the SSH User creation page and create a new SSH login.|
-|The Azure Percept DK Wi-Fi access point passphrase/password doesn't work.| We have heard reports that some welcome cards may have an incorrect passphrase/password printed.|To retrieve the Wi-Fi SoftAP password of your Percept dev kit, you must connect an Ethernet cable. Once the cable is attached and the device is powered on, find the IP address that was assigned to your dev kit. In home environments, you may be able to log in to your home router to get this info; look for an ASUS device named "apdk-xxxxxxx". The article [Connect to Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md) can guide you if you're not able to get the IP from the router. Once you have the Ethernet IP, open a web browser and manually enter that address (for example, http://192.168.0.222) to go to the onboarding experience. <ul><li>Don't go through the full setup just yet.</li><li>Set up Wi-Fi, create your SSH user, and pause there (you can leave that window open and complete setup after you get the SoftAP password).</li><li>Open PuTTY or another SSH client and connect to the dev kit using the username/password you just created.</li><li>Run `sudo tpm2_handle2psk 0x81000009`. The output from this command is your SoftAP password; write it down on the welcome card.</li></ul>
-|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If you don't sign in with the correct account, it may result in an experience that is inconsistent with the documentation.|The result of a browser setting to "remember" an account you have previously used.|From the Azure page, select your account name in the upper right corner and select **sign out**. You can then sign in with the correct account.|
-|The Azure Percept DK Wi-Fi access point (apd-xxxx) doesn't appear in the list of available Wi-Fi networks.|It's usually a temporary issue that resolves within 15 minutes.|Wait for the network to appear. If it doesn't appear after more than 15 minutes, reboot the device.|
-|The connection to the Azure Percept DK Wi-Fi access point frequently disconnects.|It's usually because of a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they're running on the host computer.|
-|The host computer shows a security warning about the connection to the Azure Percept DK access point.|It's a known issue that will be fixed in a later update.|It's safe to continue through the setup experience.|
-|The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) appears in the network list but fails to connect.|It could be because of a temporary corruption of the dev kit's Wi-Fi access point.|Reboot the dev kit and try again.|
-|Unable to connect to a Wi-Fi network during the setup experience.|The Wi-Fi network must currently have internet connectivity to communicate with Azure. EAP[PEAP/MSCHAP], captive portals, and enterprise EAP-TLS connectivity are currently not supported.|Ensure your Wi-Fi network type is supported and has internet connectivity.|
-|**Device Code Errors** <br><br> If you received the following errors on the device code page: <br><br>**In the setup experience UI** - Unable to get device code. Make sure the device is connected to internet; <br><br> **In the browser's Web Developer Mode** - Failed to load resource: the server responded with a status of 503 (Service Unavailable) <br><br>or <br><br>Certificate not yet valid. | There's an issue with your Wi-Fi network that's blocking the device from completing DNS queries or contacting a NTP time server. | Try plugging in an Ethernet cable to the devkit or connecting to a different Wi-Fi network then try again. <br><br> Less common causes could be that your host computer's date/time are incorrect. |
-|**Issues when using the Device Code**<br><br> After using the Device Code and signing into Azure, you're presented with an Azure error message about policy permissions or compliance issues. You'll be unable to continue the setup experience.<br><br> Here are some of the errors you may see:<br><br>**BlockedByConditionalAccessOnSecurityPolicy** The tenant admin has configured a security policy that blocks this request. Check the security policies defined at the tenant level to determine if your request meets the policy. <br><br>**DevicePolicyError** The user tried to sign into a device from a platform that's currently not supported through Conditional Access policy.<br><br>**DeviceNotCompliant** - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune<br><br>**BlockedByConditionalAccess** Access has been blocked by Conditional Access policies. The access policy doesn't allow token issuance.<br><br>**You cannot access this right now** - Your sign-in was successful but does not meet the criteria to access this resource |Some Azure tenants may block the usage of "Device Codes" for manipulating Azure resources as a security precaution. It's usually the result of your organization's Conditional Access IT policies. As a result, the Azure Percept Setup experience can't create any Azure resources for you. <br><br>Your Conditional Access policy requires you to be connected to your corporate network or VPN to proceed. |Work with your organization to understand their conditional access IT policies. |
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-over-the-air.md
- Title: Update Azure Percept DK over-the-air
-description: Learn how to receive over-the-air (OTA) updates to your Azure Percept DK
- Previously updated : 02/07/2023
-# Update Azure Percept DK over-the-air
-
->[!CAUTION]
->**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
-
-Follow this guide to learn how to update the OS and firmware of the carrier board of your Azure Percept DK over-the-air (OTA) with Device Update for IoT Hub.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your dev kit to a Wi-Fi network, created an IoT Hub, and connected your dev kit to the IoT Hub
-- [Device Update for IoT Hub has been successfully configured](./how-to-set-up-over-the-air-updates.md)
-- Make sure you are using the **old version** of Device Update for IoT Hub. To do that, navigate to **Device management** > **Updates** in your IoT Hub, and select the **switch to the older version** link in the banner.
-
- :::image type="content" source="media/how-to-update-over-the-air/switch-banner.png" alt-text="Screenshot of banner." lightbox="media/how-to-update-over-the-air/switch-banner.png":::
- > [!CAUTION]
- > The devkit is currently incompatible with the latest changes in the Device Update for IoT Hub service. Therefore, it is important to switch to the **older version** of Device Update for IoT Hub as instructed above before moving forward.
-
-## Import your update file and manifest file
-
-> [!NOTE]
-> If you have already imported the update, you can skip directly to **Create a device update group**.
-
-1. Determine which [manifest and update package](./software-releases-over-the-air-updates.md) is appropriate for your dev kit.
-
-1. Navigate to the Azure IoT Hub that you are using for your Azure Percept device. On the left-hand menu panel, select **Device Updates** under **Automatic Device Management**.
-
-1. You will see several tabs across the top of the screen. Select the **Updates** tab.
-
-1. Select **+ Import New Update** below the **Ready to Deploy** header.
-
-1. Select the boxes under **Select Import Manifest File** and **Select Update Files** to choose your manifest file (.json) and update file (.swu).
-
-1. Select the folder icon or text box under **Select a storage container** and select the appropriate storage account. If you've already created a storage container, you may reuse it. Otherwise, select **+ Container** to create a new storage container for OTA updates. Select the container you wish to use and click **Select**.
-
-1. Select **Submit** to start the import process. Due to the image size, the submission process may take up to 5 minutes.
-
- > [!NOTE]
- > You may be asked to add a Cross Origin Request (CORS) rule to access the selected storage container. Select **Add rule and retry** to proceed.
-
-1. When the import process begins, you will be redirected to the **Import History** tab of the **Device Updates** page. Click **Refresh** to monitor progress while the import process is completed. Depending on the size of the update, this may take a few minutes or longer (during peak times, the import service may take up to 1 hour).
-
-1. When the **Status** column indicates that the import has succeeded, select the **Ready to Deploy** tab and click **Refresh**. You should now see your imported update in the list.
-
-## Create a device update group
-
-Device Update for IoT Hub allows you to target an update to specific groups of Azure Percept DKs. To create a group, you must add a tag to your target set of devices in Azure IoT Hub.
-
-> [!NOTE]
-> If you have already created a group, you can skip to the next section.
-
-Group Tag Requirements:
-- You can add any value to your tag except for "Uncategorized", which is a reserved value.
-- Tag value cannot exceed 255 characters.
-- Tag value can only contain these special characters: ".", "-", "_", "~".
-- Tag and group names are case-sensitive.
-- A device can only have one tag. Any subsequent tag added to the device will override the previous tag.
-- A device can only belong to one group.
-
-1. Add a Tag to your device(s):
- 1. From **IoT Edge** on the left navigation pane, find your Azure Percept DK and navigate to its **Device Twin**.
- 1. Add a new **Device Update for IoT Hub** tag value as shown below (```<CustomTagValue>``` refers to your tag value/name, for example, AzurePerceptGroup1). Learn more about device twin [JSON document tags](../iot-hub/iot-hub-devguide-device-twins.md#device-twins).
-
- ```json
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- },
- ```
-
-1. Click **Save** and resolve any formatting issues. (An Azure CLI alternative for adding the tag is sketched after these steps.)
-
-1. Create a group by selecting an existing Azure IoT Hub tag:
-
- 1. Navigate back to your Azure IoT Hub page.
- 1. Select **Device Updates** under **Automatic Device Management** on the left-hand menu panel.
- 1. Select the **Groups** tab. This page will display the number of ungrouped devices connected to Device Update.
- 1. Select **+ Add** to create a new group.
- 1. Select an IoT Hub tag from the list and click **Submit**.
- 1. Once the group is created, the update compliance chart and groups list will update. The chart shows the number of devices in various states of compliance: **On latest update**, **New updates available**, **Updates in progress**, and **Not yet grouped**.
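-
-If you manage devices from the command line, the `ADUGroup` tag from step 1 can also be applied with the Azure CLI. This is a hedged sketch assuming the `azure-iot` CLI extension; availability of the `--tags` parameter depends on your extension version, so check `az iot hub device-twin update --help` first.
-
-```bash
-# Sketch only: tag a device so it lands in the corresponding device update group.
-az iot hub device-twin update \
-    --hub-name <iothub-name> \
-    --device-id <device-id> \
-    --tags '{"ADUGroup": "<CustomTagValue>"}'
-```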
-
-## Deploy an update
-
-1. You should see your newly created group with a new update listed under **Available updates** (you may need to refresh once). Select the update.
-
-1. Confirm that the correct device group is selected as the target device group. Select a **Start date** and **Start time** for your deployment, then click **Create deployment**.
-
- > [!CAUTION]
- > Setting the start time in the past will trigger the deployment immediately.
-
-1. Check the compliance chart. You should see the update is now in progress.
-
-1. After your update has completed, your compliance chart will reflect your new update status.
-
-1. Select the **Deployments** tab at the top of the **Device updates** page.
-
-1. Select your deployment to view the deployment details. You may need to click **Refresh** until the **Status** changes to **Succeeded**.
-
-## Next steps
-
-Your dev kit is now successfully updated. You may continue development and operation with your dev kit.
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-via-usb.md
- Title: Update Azure Percept DK over a USB-C connection
-description: Learn how to update the Azure Percept DK over a USB-C cable connection
- Previously updated : 02/07/2023
-# Update Azure Percept DK over a USB-C connection
-
-This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection. Here's an overview of what you will be doing during this procedure.
-
-1. Download the update package to a host computer
-1. Run the command that transfers the update package to the dev kit
-1. Set the dev kit into USB mode using SSH or DIP switches
-1. Connect the dev kit to the host computer via a USB-C cable
-1. Wait for the update to complete
-
-> [!WARNING]
-> Updating your dev kit over USB will delete all existing data on the device, including AI models and containers.
->
-> Follow all instructions in order. Skipping steps could put your dev kit in an unusable state.
-
-## Prerequisites
-- An Azure Percept DK
-- A Windows or Linux-based host computer with Wi-Fi capability and an available USB-C or USB-A port
-- A USB-C to USB-A cable (optional, sold separately)
-- An SSH login account, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
-- A hex wrench, shipped with the dev kit, to remove the screws on the back of the dev kit (if using the DIP switch method)
-
-> [!NOTE]
-> **Mac users** - Updating the Azure Percept DK over a USB connection will not work using a Mac as the host computer.
-
-## Download software tools and update files
-
-1. [NXP UUU tool](https://github.com/NXPmicro/mfgtools/releases). Download the **Latest Release** uuu.exe file (for Windows) or the uuu file (for Linux) under the **Assets** tab. UUU is a tool created by NXP used to update NXP dev boards.
-
-1. [Download the update files](./software-releases-usb-cable-updates.md). They're all contained in a zip file that you'll extract in the next section.
-
-1. Ensure all three build artifacts are present:
- - Azure-Percept-DK-*&lt;version number&gt;*.raw
- - fast-hab-fw.raw
- - emmc_full.txt
-
-## Set up your environment
-
-1. Create a folder/directory on the host computer in a location that is easy to access via command line.
-
-1. Copy the UUU tool (**uuu.exe** or **uuu**) to the new folder.
-
-1. Extract the previously downloaded update files to the new folder that contains the UUU tool.
-
-## Run the command that transfers the update package to the dev kit
-
-1. Open a Windows command prompt (Start > cmd) or a Linux terminal and **navigate to the folder where the update files and UUU tool are stored**.
-
-1. Enter the following command in the command prompt or terminal.
-
- - Windows:
-
- ```console
- uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw
- ```
-
- - Linux:
-
- ```bash
- sudo ./uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw
- ```
-
-1. The command prompt window will display a message that says **Waiting for Known USB Device to Appear...** The UUU tool is now waiting for the dev kit to be detected by the host computer. **Proceed to the next steps and put the dev kit into USB mode.**
-
-## Set the dev kit into USB mode
-There are two ways to set the dev kit into "USB mode," via SSH or by changing the DIP switches on the dev kit. Choose the method that works best for your situation.
-
-### Using SSH
-SSH is the safest and preferred method for setting the dev kit into USB mode. However, it requires that you can connect to the dev kit's Wi-Fi access point. If you're unable to connect to the access point, you'll need to use the DIP switch method.
-
-1. Connect the supplied USB-C cable to the dev kit's USB-C port and to the host computer's USB-C port. If your computer only has a USB-A port, connect a USB-C to USB-A cable (sold separately) to the dev kit and host computer.
-
-1. Connect to your dev kit via SSH. If you need help to SSH, [follow these instructions](./how-to-ssh-into-percept-dk.md).
-
-1. In the SSH terminal, enter the following commands:
-
- 1. Set the device to USB update mode:
-
- ```bash
- sudo flagutil -wBfRequestUsbFlash -v1
- ```
-
- 1. Reboot the device. The update installation will begin.
-
- ```bash
- sudo reboot -f
- ```
-
-### Using the DIP switch method
-Use the DIP switch method when you can't SSH into the device.
-
-1. Unplug the dev board if it's plugged into the power cable.
-1. Remove the four screws on the back of the dev board using the hex wrench that was shipped with the dev kit.
-
- :::image type="content" source="media/how-to-usb-update/dip-switch-01.jpg" alt-text="remove the four screws on the back of the dev board":::
-
-1. Gently slide the dev board in the direction of the LEDs. The heat sink will stay attached to the top of the dev board. Only slide the dev board 2 - 3 centimeters to avoid disconnecting any cables.
-
- :::image type="content" source="media/how-to-usb-update/dip-switch-02.jpg" alt-text="slide the board over a few centimeters":::
-
-1. The DIP switches can be found on the corner of the board. There are four switches that each have two positions, up (1) or down (0). The default positions of the switches are up-down-down-up (1001). Using a paperclip or other fine-pointed instrument, change the positions of the switches to down-up-down-up (0101).
-
- :::image type="content" source="media/how-to-usb-update/dip-switch-03.jpg" alt-text="find the switches on the lower corner of the board":::
-
-1. The dev kit is now in USB mode and you can continue with the next steps. **Once the update is completed, change the DIP switches back to the default position of up-down-down-up (1001).** Then slide the dev board back into position and reapply the four screws on the back.
-
-## Connect the dev kit to the host computer via a USB-C cable
-This procedure uses the dev kit's single USB-C port for updating. If your computer has a USB-C port, you can use the USB-C to USB-C cable that came with the dev kit. If your computer only has a USB-A port, you'll need to use a USB-C to USB-A cable (sold separately).
-
-1. Connect the dev kit to the host computer using the appropriate USB-C cable.
-1. The host computer should now detect the dev kit as a USB device. If you successfully ran the command that transfers the update package to the dev kit and your command prompt says **Waiting for Known USB Device to Appear...**, then the update should automatically start in about 10 seconds.
-
-## Wait for the update to complete
-
-1. Navigate back to the other command prompt or terminal. When the update is finished, you'll see a message with ```Success 1 Failure 0```:
-
- > [!NOTE]
- > After updating, your device will be reset to factory settings and you will lose your Wi-Fi connection and SSH login.
-
-1. Once the update is complete, power off the dev kit. Unplug the USB cable from the PC.
-1. If you used the DIP switch method to put the dev kit into USB mode, be sure to put the DIP switches back to the default positions. Then slide the dev board back into position and reapply the four screws on the back.
-
-## Next steps
-
-Work through the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md) to reconfigure your device.
azure-percept How To View Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-view-telemetry.md
- Title: View your Azure Percept DK's model inference telemetry
-description: Learn how to view your Azure Percept DK's vision model inference telemetry in Azure IoT Explorer
- Previously updated : 02/07/2023
-# View your Azure Percept DK's model inference telemetry
-
-Follow this guide to view your Azure Percept DK's vision model inference telemetry in [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases).
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-- [Vision AI model has been deployed to your Azure Percept DK](./how-to-deploy-model.md)
-
-## View telemetry
-
-1. Power on your devkit.
-
-1. Download and install [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases). If you are a Windows user, select the .msi file.
-
- :::image type="content" source="./media/how-to-view-telemetry/azure-iot-explorer-download.png" alt-text="Download screen for Azure IoT Explorer.":::
-
-1. Connect your IoT Hub to Azure IoT Explorer:
-
- 1. Go to the [Azure portal](https://portal.azure.com).
-
- 1. Select **All resources**.
-
- :::image type="content" source="./media/how-to-view-telemetry/azure-portal.png" alt-text="Azure portal homepage.":::
-
- 1. Select the IoT Hub that your Azure Percept DK is connected to.
-
- :::image type="content" source="./media/how-to-view-telemetry/iot-hub.png" alt-text="IoT Hub list in Azure portal.":::
-
- 1. On the left side of your IoT Hub page, select **Shared access policies**.
-
- :::image type="content" source="./media/how-to-view-telemetry/shared-access-policies.png" alt-text="IoT Hub page showing shared access policies.":::
-
- 1. Click on **iothubowner**.
-
- :::image type="content" source="./media/how-to-view-telemetry/iothubowner.png" alt-text="Shared access policies screen with iothubowner highlighted.":::
-
-    1. Click the blue copy icon next to **Connection string—primary key**.
-
- :::image type="content" source="./media/how-to-view-telemetry/connection-string.png" alt-text="iothubowner window with connection string copy button highlighted.":::
-
- 1. Open Azure IoT Explorer and click **+ Add connection**.
-
- 1. Paste the connection string into the **Connection string** box on the **Add connection string** window and click **Save**.
-
- :::image type="content" source="./media/how-to-view-telemetry/add-connection-string.png" alt-text="Azure Iot Explorer window with box for pasting connection string into.":::
-
- 1. Point the Vision SoM at an object for model inferencing.
-
- 1. Select **Telemetry**.
-
- 1. Click **Start** to view telemetry events from the device.
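-
-As an alternative to Azure IoT Explorer, you can also watch the same device-to-cloud telemetry from the Azure CLI. A minimal sketch, assuming the `azure-iot` CLI extension is installed:
-
-```bash
-# Sketch only: stream telemetry events sent by the dev kit to your IoT Hub.
-az iot hub monitor-events --hub-name <iothub-name> --device-id <device-id>
-```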
-
-## Next steps
-Learn how to view your [Azure Percept DK video stream](./how-to-view-video-stream.md).
azure-percept How To View Video Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-view-video-stream.md
- Title: View your Azure Percept DK RTSP video stream
-description: Learn how to view the RTSP video stream from Azure Percept DK
- Previously updated : 02/07/2023
-# View your Azure Percept DK RTSP video stream
-
-Follow this guide to view the RTSP video stream from the Azure Percept DK within Azure Percept Studio. Inferencing from vision AI models deployed to your device will be viewable in the web stream.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-
-## View the RTSP video stream
-
-1. Power on your devkit.
-
-1. Navigate to [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
-
-1. On the left side of the overview page, click **Devices**.
-
- :::image type="content" source="./media/how-to-view-video-stream/overview-devices-inline.png" alt-text="Azure Percept Studio overview screen." lightbox="./media/how-to-view-video-stream/overview-devices.png":::
-
-1. Select your devkit from the list.
-
- :::image type="content" source="./media/how-to-view-video-stream/select-device.png" alt-text="Screenshot of available devices in Azure Percept Studio.":::
-
-1. Click **View your device stream**.
-
- :::image type="content" source="./media/how-to-view-video-stream/view-device-stream.png" alt-text="Screenshot of the device page showing available vision project actions.":::
-
- This opens a separate tab showing the live web stream from your Azure Percept DK.
-
- :::image type="content" source="./media/how-to-view-video-stream/webstream.png" alt-text="Screenshot of the device web stream.":::
-
-## Next steps
-
-Learn how to view your [Azure Percept DK telemetry](./how-to-view-telemetry.md).
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/known-issues.md
- Title: Azure Percept known issues
-description: Learn more about Azure Percept known issues and their workarounds
- Previously updated : 02/07/2023
-# Azure Percept known issues
-
-Here are issues with the Azure Percept DK, Azure Percept Audio, or Azure Percept Studio that the product teams are aware of. Workarounds and troubleshooting steps are provided where possible. If you're blocked by any of these issues, you can post it as a question on [Microsoft Q&A](/answers/topics/azure-percept.html) or submit a customer support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-|Area|Symptoms|Description of Issue|Workaround|
-|---|---|---|---|
-| Azure Percept DK | Unable to deploy the sample and demo models in Azure Percept Studio | Sometimes the azureeyemodule or azureearspeechmodule modules stop running. edgeAgent logs show "too many levels of symbolic links" error. | Reset your device by [updating it over USB](./how-to-update-via-usb.md) |
-| Localization | Non-English speaking users may see parts of the Azure Percept DK setup experience display English text. | The Azure Percept DK setup experience isn't fully localized. | Fix is scheduled for July 2021 |
-| Azure Percept DK | When going through the setup experience on a Mac, the setup experience may abruptly close after connecting to Wi-Fi. | When going through the setup experience on a Mac, it initially opens in a window rather than a web browser. The window isn't persisted once the connection switches from the device's access point to Wi-Fi. | Open a web browser and go to https://10.1.1.1, which will allow you to complete the setup experience. |
-| Azure Percept DK | The dev kit is running a custom model and after rebooting the dev kit it runs the default sample model. | The module twin container for the custom model doesn't persist across device reboots. After the reboot, the module twin for the custom module must be rebuilt which can take 5 minutes or longer. The dev kit will run the default model until that process is completed. | After a reboot, you must wait until the custom module twin is recreated. |
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-8020-integration.md
- Title: Azure Percept DK 80/20 integration
-description: Learn more about how Azure Percept DK integrates with the 80/20 railing system.
- Previously updated : 02/07/2023
-# Azure Percept DK 80/20 integration
-
-The Azure Percept DK and Audio Accessory were designed to integrate with the [80/20 T-slot aluminum building system](https://8020.net/).
-
-## 80/20 features
-
-The Azure Percept DK carrier board, Azure Percept Vision device, and Azure Percept Audio accessory are manufactured with integrated 80/20 1010 extrusion connections, which allow for endless mounting configurations with 80/20 rails. This integration enables customers and solution builders to more easily extend their proof of concepts to production environments.
-
-Check out this video for more information on how to use Azure Percept DK with 80/20:
-
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/Dg6mtD9psLU]
-
-To accelerate your prototype creation, we have also designed a few examples of 80/20 mounting assemblies.
-We have included the technical drawings of these options below so they can be easily ordered and built by
-your local 80/20 distributor: https://8020.net/distributorlookup/
-
-| Design Name | Overall Design | CAD Design 1 | CAD Design 2 |
-|--|--|--|--|
-| Wall Mounts| ![Wall Mount Image](./media/overview-8020-integration-images/wall-mount.png) | [ ![Horizontal Wall Mount Image](./media/overview-8020-integration-images/azure-percept-8020-horizontal-wall-mount-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-horizontal-wall-mount.png#lightbox) | [ ![Vertical Wall Mount Image](./media/overview-8020-integration-images/azure-percept-8020-vertical-wall-mount-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-vertical-wall-mount.png#lightbox)|
-| Ceiling Mounts| ![Ceiling Mount Image](./media/overview-8020-integration-images/ceiling-mount.png) | [ ![Ceiling Mount Small Image](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-small-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-small.png#lightbox) | [ ![Ceiling Mount Large Image](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-large-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-ceiling-mount-large.png#lightbox) |
-| Arm Mounts | ![Arm Mount Image](./media/overview-8020-integration-images/arm-mount.png) | [ ![Clamp Bracket Image](./media/overview-8020-integration-images/azure-percept-8020-clamp-bracket-mini.png) ](./media/overview-8020-integration-images/azure-percept-8020-clamp-bracket.png#lightbox) | |
-
azure-percept Overview Advanced Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-advanced-code.md
- Title: Advanced development with Azure Percept
-description: Learn more about advanced development tools on Azure Percept
- Previously updated : 02/07/2023
-# Advanced development with Azure Percept
-
-With Azure Percept, software developers and data scientists can use advanced code workflows for AI lifecycle management. Through a growing open source library, they can use samples to get started with their AI development journey and build production-ready solutions.
-
-## Get started with advanced development
-
-See the [Azure Percept DK advanced development GitHub](https://github.com/microsoft/azure-percept-advanced-development) for
-up-to-date guidance, tutorials, and examples for things like:
-- Deploying a custom AI model to your Azure Percept DK
-- Updating a supported model with transfer learning
-- And more
-
-## Next steps
-
-Learn more about the available [Azure Percept AI models](./overview-ai-models.md). If none of these models suit your needs, use the advanced code journey to bring your own model or computer vision pipeline to the Percept DK. If you have a contribution that you think would help others, feel free to open a pull request too.
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-ai-models.md
- Title: Azure Percept sample AI models
-description: Learn more about the AI models available for prototyping and deployment
- Previously updated : 02/07/2023
-# Azure Percept sample AI models
-
-Azure Percept enables you to develop and deploy AI models directly to your [Azure Percept DK](./overview-azure-percept-dk.md) from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). Model deployment utilizes [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/#iotedge-overview).
-
-## Sample AI models
-
-Azure Percept Studio contains sample models for the following applications:
-- people detection
-- vehicle detection
-- general object detection
-- products-on-shelf detection
-
-With pre-trained models, no coding or training data collection is required. Simply [deploy your desired model](./how-to-deploy-model.md) to your Azure Percept DK from the portal and open your devkit's [video stream](./how-to-view-video-stream.md) to see the model inferencing in action. [Model inferencing telemetry](./how-to-view-telemetry.md) can also be accessed through the [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases) tool.
-
-## Reference solutions
-
-A people counting reference solution is also available. This reference solution is an open-source AI application providing edge-based people counting with user-defined zone entry/exit events. Video and AI output from the on-premises edge device is egressed to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/), with the user interface running as an Azure website. AI inferencing is provided by an open-source AI model for people detection.
-
-## Custom no-code solutions
-
-Through Azure Percept Studio, you can develop custom [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) solutions, no coding required.
-
-For custom vision solutions, both object detection and classification AI models are available. Simply upload and tag your training images, which can be taken directly with the Azure Percept Vision SoM of the Azure Percept DK if desired. Model training and evaluation are easily performed in [Custom Vision](https://www.customvision.ai/), which is part of [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/#overview).
-
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/9LvafyazlJM]
-
-For custom speech solutions, voice assistant templates are currently available for the following applications:
-- Hospitality: hotel room equipped with voice-controlled smart devices.
-- Healthcare: care facility equipped with voice-controlled smart devices.
-- Inventory: inventory hub equipped with voice-controlled smart devices.
-- Automotive: automotive hub equipped with voice-controlled smart devices.
-
-Pre-built voice assistant keywords and commands are available directly through the portal. Custom keywords and commands may be created and trained in [Speech Studio](https://speech.microsoft.com/), which is also part of Azure Cognitive Services.
-
-## Advanced development
-
-Please see the [Azure Percept DK advanced development GitHub](https://github.com/microsoft/azure-percept-advanced-development) for
-up-to-date guidance, tutorials, and examples for things like:
-- Deploying a custom AI model to your Azure Percept DK
-- Updating a supported model with transfer learning
-- And more
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-audio.md
- Title: Azure Percept Audio device overview
-description: Learn more about Azure Percept Audio
- Previously updated : 02/07/2023
-# Azure Percept Audio device overview
-
-Azure Percept Audio is an accessory device that adds speech AI capabilities to [Azure Percept DK](./overview-azure-percept-dk.md). It contains a preconfigured audio processor and a four-microphone linear array, enabling you to use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. It is integrated out-of-the-box with Azure Percept DK, [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), and other Azure edge management services.
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/Qj8NGn-7s5A]
-
-## Azure Percept Audio components
-
-Azure Percept Audio contains the following major components:
-- Production-ready Azure Percept Audio device (SoM) with a four-microphone linear array and audio processing via XMOS Codec
-- Developer (interposer) board: 2x buttons, 3x LEDs, Micro USB, and 3.5 mm audio jack
-- Required cables: FPC cable, USB Micro Type-B to USB-A
-- Welcome card
-- Mechanical mounting plate with integrated 80/20 1010 series mount
-
-## Compute capabilities
-
-Azure Percept Audio passes audio input through the speech stack that runs on the CPU of the Azure Percept DK carrier board in a hybrid edge-cloud manner. Therefore, Azure Percept Audio requires a carrier board with an OS that supports the speech stack in order to perform.
-
-The audio processing is done as follows:
-
-- Azure Percept Audio: captures and converts the audio and sends it to the DK and audio jack.
-
-- Azure Percept DK: the speech stack performs beam forming and echo cancellation and processes the incoming audio to optimize for speech. After processing, it performs keyword spotting.
-
-- Cloud: processes natural language commands and phrases, keyword verification, and retraining.
-
-- Offline: if the device is offline, it will detect the keyword and capture internet connection status telemetry. An increased false accept rate for keyword spotting may be observed as keyword verification in the cloud cannot be performed.
-
-## Getting started
-- [Assemble your Azure Percept DK](./quickstart-percept-dk-unboxing.md)
-- [Connect your Azure Percept Audio device to your devkit](./quickstart-percept-audio-setup.md)
-- [Complete the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
-
-## Build a no-code prototype
-
-Build a [no-code speech solution](./tutorial-no-code-speech.md) in [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) using Azure Percept voice assistant templates for hospitality, healthcare, inventory, and automotive scenarios.
-
-### Manage your no-code speech solution
-- [Configure your voice assistant in Azure Percept Studio](./how-to-manage-voice-assistant.md)
-- [Configure your voice assistant in IoT Hub](./how-to-configure-voice-assistant.md)
-- [Azure Percept Audio troubleshooting](./troubleshoot-audio-accessory-speech-module.md)
-
-## Additional technical information
-- [Azure Percept Audio datasheet](./azure-percept-audio-datasheet.md)
-- [Button and LED behavior](./audio-button-led-behavior.md)
-
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-dk.md
- Title: Azure Percept DK and Vision device overview
-description: Learn more about the Azure Percept DK and Azure Percept Vision
- Previously updated : 02/07/2023
-# Azure Percept DK and Vision device overview
-
-Azure Percept DK is an edge AI development kit designed for developing vision and audio AI solutions with [Azure Percept Studio](./overview-azure-percept-studio.md).
-
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/Qj8NGn-7s5A]
-
-## Key features
-- Run AI at the edge. With built-in hardware acceleration, the dev kit can run AI models without a connection to the cloud.
-
-- Hardware root of trust security built in. Learn more about [Azure Percept security](./overview-percept-security.md).
-
-- Seamless integration with [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and other Azure services, such as Azure IoT Hub, Azure Cognitive Services, and [Live Video Analytics](../azure-video-analyzer/video-analyzer-docs/overview.md).
-
-- Compatible with [Azure Percept Audio](./overview-azure-percept-audio.md), an optional accessory for building AI audio solutions.
-
-- Support for third-party AI tools, such as ONNX and TensorFlow.
-
-- Integration with the 80/20 railing system, which allows for endless device mounting configurations. Learn more about [80/20 integration](./overview-8020-integration.md).
-
-## Hardware components
--- Azure Percept DK carrier board:
- - NXP iMX8m processor
- - Trusted Platform Module (TPM) version 2.0
- - Wi-Fi and Bluetooth connectivity
- - For more information, see the [Azure Percept DK datasheet](./azure-percept-dk-datasheet.md)
--- Azure Percept Vision system-on-module (SoM):
- - Intel Movidius Myriad X (MA2085) vision processing unit (VPU)
- - RGB camera sensor
- - For more information, see the [Azure Percept Vision datasheet](./azure-percept-vision-datasheet.md)
-
-## Getting started with Azure Percept DK
--- Set up your dev kit:
- - [Unbox and assemble the Azure Percept DK](./quickstart-percept-dk-unboxing.md)
- - [Complete the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
--- Start building vision and audio solutions:
- - [Create a no-code vision solution in Azure Percept Studio](./tutorial-nocode-vision.md)
- - [Create a no-code speech solution in Azure Percept Studio](./tutorial-no-code-speech.md) (Azure Percept Audio accessory required)
-
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-studio.md
- Title: Azure Percept Studio overview
-description: Learn more about Azure Percept Studio
- Previously updated : 02/07/2023
-# Azure Percept Studio overview
-
-[Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) is the single launch point for creating edge AI models and solutions. Azure Percept Studio allows you to discover and complete guided workflows that make it easy to integrate edge AI-capable hardware and powerful Azure AI and IoT cloud services.
-
-In the Studio, you can see your edge AI-capable devices as end points for collecting initial and ongoing training data as well as deployment targets for model iterations. Having access to devices and training data allows for rapid prototyping and iterative edge AI model development for both [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) scenarios.
-
-The workflows in Azure Percept Studio integrate many Azure AI and IoT services, like Azure IoT Hub, Custom Vision, Speech Studio, and Azure ML, so you can use these services to create an end-to-end solution without significant pre-existing knowledge. If you are already familiar with these Azure services, you can also connect to and modify existing Azure service resources outside of Azure Percept Studio.
-
-Regardless of whether you are a beginner or an advanced AI model and solution developer, working on a prototype, or moving to a production solution, Azure Percept Studio offers access to workflows you can use to reduce friction around building edge AI solutions.
-
-## Video walkthrough
-
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/rZsUuCytZWY]
-
-## Next steps
-- Check out [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819)
-- Learn more about [Azure Percept AI models and solutions](./overview-ai-models.md)
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept.md
- Title: Azure Percept overview
-description: Learn more about the Azure Percept platform
- Previously updated : 02/07/2023
-# Azure Percept overview
-
-Azure Percept is a family of hardware, software, and services designed to accelerate business transformation using IoT and AI at the edge. Azure Percept covers the full stack from silicon to services to solve the integration challenges of edge AI at scale.
-
-The integration challenges one faces when attempting to deploy edge AI solutions at scale can be summed up into three major points of friction:
-- Identifying and selecting the right silicon to power the solutions.
-- Ensuring the collective security of the hardware, software, models, and data.
-- The ability to build and manage solutions that seamlessly work at scale.
-
-## Components of Azure Percept
-
-The main components of Azure Percept are:
-- [Azure Percept DK.](./overview-azure-percept-dk.md)
-
- - A development kit that is flexible enough to support a wide variety of prototyping scenarios for device builders, solution builders, and customers.
-- Services and workflows that accelerate edge AI model and solution development.
- - Development workflows and pre-built models accessible from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
- - Model development services.
- - Device management services for scaling.
- - End-to-end security.
-- AI hardware reference design and certification programs.
- - Provides the ecosystem of hardware developers with patterns and best practices for developing edge AI hardware that can be integrated easily with Azure AI and IoT services.
-
-## Next steps
-
-Learn more about [Azure Percept DK](./overview-azure-percept-dk.md) and [Azure Percept Studio](./overview-azure-percept-studio.md).
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-percept-security.md
- Title: Azure Percept security
-description: Learn more about Azure Percept security
- Previously updated: 02/07/2023
-# Azure Percept security
--
-Azure Percept devices are designed with a hardware root of trust. This built-in security helps protect inference data and privacy-sensitive sensors like cameras and microphones and enables device authentication and authorization for Azure Percept Studio services.
-
-> [!NOTE]
-> The Azure Percept DK is licensed for use in development and test environments only.
-
-## Devices
-
-### Azure Percept DK
-
-Azure Percept DK includes a Trusted Platform Module (TPM) version 2.0, which can be utilized to connect the device to Azure Device Provisioning Services (DPS) with additional security. TPM is an industry-wide, ISO standard from the Trusted Computing Group. Check out the [Trusted Computing Group website](https://trustedcomputinggroup.org/resource/tpm-library-specification/) for more information about the complete TPM 2.0 spec or the ISO/IEC 11889 spec. For more information on how DPS can provision devices in a secure manner, see [Azure IoT Hub Device Provisioning Service - TPM Attestation](../iot-dps/concepts-tpm-attestation.md).
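As a rough illustration of the DPS flow referenced above, the following sketch shows how a TPM-based individual enrollment might be created with the Azure CLI. This is only a sketch: it assumes the `azure-iot` CLI extension is installed, and the resource group, DPS instance name, enrollment ID, and endorsement key are placeholders you would replace with your own values.

```console
# Hypothetical values: replace the resource group, DPS name, enrollment ID,
# and endorsement key with your own before running.
az extension add --name azure-iot
az iot dps enrollment create \
  --resource-group my-rg \
  --dps-name my-dps-instance \
  --enrollment-id my-percept-dk \
  --attestation-type tpm \
  --endorsement-key <device-tpm-endorsement-key>
```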
-
-### Azure Percept system-on-modules (SoMs)
-
-The Azure Percept Vision system-on-module (SoM) and the Azure Percept Audio SoM both include a microcontroller unit (MCU) for protecting access to the embedded AI sensors. At every boot, the MCU firmware authenticates and authorizes the AI accelerator with Azure Percept Studio services using the Device Identifier Composition Engine (DICE) architecture. DICE works by breaking up boot into layers and creating Unique Device Secrets (UDS) for each layer and configuration. If different code or configuration is booted at any point in the chain, the secrets will be different. You can read more about DICE at the [DICE workgroup spec](https://trustedcomputinggroup.org/work-groups/dice-architectures/). For configuring access to Azure Percept Studio and required services see the article on [configuring firewalls for Azure Percept DK](concept-security-configuration.md).
-
-Azure Percept devices use the hardware root of trust to secure firmware. The boot ROM ensures integrity of firmware between ROM and operating system (OS) loader, which in turn ensures integrity of the other software components, creating a chain of trust.
-
-## Services
-
-### IoT Edge
-
-Azure Percept DK connects to Azure Percept Studio and other Azure services over the Transport Layer Security (TLS) protocol for additional security. Azure Percept DK is an Azure IoT Edge-enabled device. The IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK uses Docker containers to isolate IoT Edge workloads from the host operating system and edge-enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](../iot-edge/iot-edge-security-manager.md).
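If you want to see the IoT Edge runtime and its module containers in action on the dev kit, the commands below (run over SSH) are a quick sketch; `iotedge check` and `iotedge list` are the same commands used in the troubleshooting guidance later in these articles.

```console
# Verify the IoT Edge runtime configuration and connectivity
sudo iotedge check
# List the IoT Edge module containers currently running on the dev kit
sudo iotedge list
```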
-
-### Device Update for IoT Hub
-
-Device Update for IoT Hub enables more secure, scalable, and reliable over-the-air updating that brings renewable security to Azure Percept devices. It provides rich management controls and update compliance through insights. Azure Percept DK includes a pre-integrated device update solution providing resilient update (A/B) from firmware to OS layers.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about firewall configurations and security recommendations](concept-security-configuration.md)
-
azure-percept Overview Update Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-update-experience.md
- Title: Azure Percept DK update experience
-description: Learn more about how to keep the Azure Percept DK up-to-date
- Previously updated: 02/07/2023
-# Azure Percept DK update experience
--
-With Azure Percept DK, you may update your dev kit OS and firmware over-the-air (OTA) or via USB. OTA updating is an easy way to keep devices up to date through the [Device Update for IoT Hub](../iot-hub-device-update/index.yml) service. USB updates are available for users who are unable to use OTA updates or when a factory reset of the device is needed. Check out the following how-to guides to get started with Azure Percept DK device updates:
-- [Set up Azure IoT Hub to deploy over-the-air (OTA) updates to your Azure Percept DK](./how-to-set-up-over-the-air-updates.md)
-- [Update your Azure Percept DK over-the-air (OTA)](./how-to-update-over-the-air.md)
-- [Update your Azure Percept DK over USB](./how-to-update-via-usb.md)
azure-percept Quickstart Percept Audio Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-audio-setup.md
- Title: Set up the Azure Percept Audio device
-description: Learn how to connect your Azure Percept Audio device to your Azure Percept DK
- Previously updated: 02/07/2023
-# Set up the Azure Percept Audio device
--
-Azure Percept Audio works out of the box with Azure Percept DK. No unique setup is required.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- Azure Percept Audio
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-- Speaker or headphones that can connect to a 3.5-mm audio jack (optional)
-## Connecting your devices
-
-1. Connect the Azure Percept Audio device to the Azure Percept DK carrier board with the included Micro USB to USB Type-A cable. Connect the Micro USB end of the cable to the Audio interposer (developer) board and the Type-A end to the Percept DK carrier board.
-
-1. (Optional) connect your speaker or headphones to your Azure Percept Audio device via the audio jack, labeled "Line Out." This will allow you to hear audio responses.
-
-1. Power on the dev kit by connecting it to the power adaptor. LED L02 will change to blinking white, which indicates that the device was powered on and is authenticating.
-
-1. Wait for the authentication process to complete, which takes up to 5 minutes.
-
-1. You're ready to begin prototyping when you see one of the following LED states:
-
- - LED L02 will change to solid white, indicating that the authentication is complete and the devkit is configured without a keyword.
- - All three LEDs turn blue, indicating that the authentication is complete and the devkit is configured with a keyword.
-
-## Next steps
-
-Create a [no-code speech solution](./tutorial-no-code-speech.md) in [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-dk-set-up.md
- Title: Set up the Azure Percept DK device
-description: Set up your Azure Percept DK and connect it to Azure IoT Hub
- Previously updated: 02/07/2023
-# Set up the Azure Percept DK device
--
-Complete the Azure Percept DK setup experience to configure your dev kit. After verifying that your Azure account is compatible with Azure Percept, you will:
-- Launch the Azure Percept DK setup experience
-- Connect your dev kit to a Wi-Fi network
-- Set up an SSH login for remote access to your dev kit
-- Create a new device in Azure IoT Hub
-If you experience any issues during this process, refer to the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for possible solutions.
-
-> [!NOTE]
-> The setup experience web service automatically shuts down after 30 minutes of non-use. If you are unable to connect to the dev kit or do not see its Wi-Fi access point, restart the device.
-
-## Prerequisites
-- An Azure Percept DK (dev kit).
-- A Windows, Linux, or OS X based host computer with Wi-Fi capability and a web browser.
-- An active Azure subscription. [Create an account for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-- Users must have the **owner** or **contributor** role within the subscription. Follow the steps below to check your Azure account role. For more information on Azure role definitions, check out the [Azure role-based access control documentation](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
- > [!CAUTION]
- > Close all browser windows and log into your subscription via the [Azure portal](https://portal.azure.com/) before starting the setup experience. See the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for additional information on how to ensure you are signed in with the correct account.
-
-### Check your Azure account role
-
-To verify if your Azure account is an "owner" or "contributor" within the subscription, follow these steps:
-
-1. Go to the [Azure portal](https://portal.azure.com/) and log in with the same Azure account you intend to use with Azure Percept.
-
-1. Select the **Subscriptions** icon (it looks like a yellow key).
-
-1. Select your subscription from the list. If you do not see your subscription, make sure you are signed in with the correct Azure account. If you wish to create a new subscription, follow [these steps](../cost-management-billing/manage/create-subscription.md).
-
-1. From the Subscription menu, select **Access control (IAM)**.
-1. Select **View my access**.
-1. Check the role:
- - If your role is listed as **Reader** or if you get a message that says you do not have permission to see roles, you will need to follow the necessary process in your organization to elevate your account role.
- - If your role is listed as **owner** or **contributor**, your account will work with Azure Percept, and you may proceed with the setup experience.
-
-## Launch the Azure Percept DK Setup Experience
-
-1. Connect your host computer to the dev kit's Wi-Fi access point. Select the following network, and enter the Wi-Fi password when prompted:
-
- - **Network name**: **scz-xxxx** or **apd-xxxx** (where **xxxx** is the last four digits of the dev kit's MAC address)
- - **Password**: found on the welcome card that came with the dev kit
-
- > [!WARNING]
- > While connected to the Azure Percept DK's Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted.
-
-1. Once connected to the dev kit's Wi-Fi access point, the host computer should automatically launch the setup experience in a new browser window with **your.new.device/** in the address bar. If the tab does not open automatically, launch the setup experience by going to [http://10.1.1.1](http://10.1.1.1) in a web browser. Make sure your browser is signed in with the same Azure account credentials you intend to use with Azure Percept.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-welcome.png" alt-text="Welcome page.":::
-
- > [!NOTE]
- > **Mac users** - When going through the setup experience on a Mac, it initially opens a captive portal window that is unable to complete the Setup Experience. Close this window and open a web browser to https://10.1.1.1, which will allow you to complete the setup experience.
-
-## Connect your dev kit to a Wi-Fi network
-
-1. Select **Next** on the **Welcome** screen.
-
-1. On the **Network connection** page, select **Connect to a new WiFi network**.
-
- If you have already connected your dev kit to your Wi-Fi network, select **Skip**.
-
-1. Select your Wi-Fi network from the list of available networks and select **connect**. Enter your network password when prompted.
-
- > [!NOTE]
- > It is recommended that you set this network as a "Preferred Network" (Mac) or check the box to "connect automatically" (Windows). This will allow the host computer to reconnect to the dev kit's Wi-Fi access point if the connection is interrupted during this process.
-
-1. Once your dev kit has successfully connected, the page will show the IPv4 address assigned to your dev kit. **Write down the IPv4 address displayed on the page.** You will need the IP address when connecting to your dev kit over SSH for troubleshooting and device updates.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-success-wi-fi.png" alt-text="Copy IP address.":::
-
- > [!NOTE]
- > The IP address may change with each device boot.
-
-1. Read through the License Agreement (you must scroll to the bottom of the agreement), select **I have read and agree to the License Agreement**, and select **Next**.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-eula.png" alt-text="Accept EULA.":::
-
-## Set up an SSH login for remote access to your dev kit
-
-1. Create an SSH account name and public key/password, and select **Next**.
-
- If you have already created an SSH account, you can skip this step.
-
- **Write down your login information for later use**.
-
- > [!NOTE]
- > SSH (Secure Shell) is a network protocol that enables you to connect to the dev kit remotely via a host computer.
-
-## Create a new device in Azure IoT Hub
-
-1. Select **Setup as a new device** to create a new device within your Azure account.
-
- A Device Code will now be obtained from Azure.
-
-1. Select **Copy**.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-copy-code.png" alt-text="Copy device code.":::
-
- > [!NOTE]
- > If you receive an error when using your Device Code in the next steps or if the Device Code won't display, please see our [troubleshooting steps](./how-to-troubleshoot-setup.md) for more information.
-
-1. Select **Login to Azure**.
-
-1. A new browser tab will open with a window that says **Enter code**. Paste the code into the window and select **Next**. Do NOT close the **Welcome** tab with the setup experience.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-enter-code.png" alt-text="Enter device code.":::
-
-1. Sign into Azure Percept using the Azure account credentials you will use with your dev kit. Leave the browser tab open when complete.
-
- > [!CAUTION]
- > Your browser may auto cache other credentials. Double check that you are signing in with the correct account.
-
- After successfully signing into Azure Percept on the device, select **Allow**.
-
- Return to the **Welcome** tab to continue the setup experience.
-
-1. When the **Assign your device to your Azure IoT Hub** page appears on the **Welcome** tab, take one of the following actions:
-
- - Jump ahead to **Select your Azure IoT Hub** if your IoT Hub is listed on this page.
- - If you do not have an IoT Hub or would like to create a new one, select **Create a new Azure IoT Hub**.
-
- > [!IMPORTANT]
- > If you have an IoT Hub, but it is not appearing in the list, you may have signed into Azure Percept with the wrong credentials. See the [setup troubleshooting guide](./how-to-troubleshoot-setup.md) for help.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-iot-hub-select.png" alt-text="Select an IoT Hub.":::
-
-1. To create a new IoT Hub (a CLI sketch for creating a hub ahead of time appears after these steps):
-
- - Select the Azure subscription you will use with Azure Percept.
- - Select an existing Resource Group. If one does not exist, select **Create new** and follow the prompts.
- - Select the Azure region closest to your physical location.
- - Give your new IoT Hub a name.
- - Select the **S1 (standard) pricing tier**.
-
- > [!NOTE]
- > It may take a few minutes for your IoT Hub deployment to complete. If you need a higher [message throughput](../iot-hub/iot-hub-scaling.md#tier-editions-and-units) for your edge AI applications, you may [upgrade your IoT Hub to a higher standard tier](../iot-hub/iot-hub-upgrade.md) in the Azure Portal at any time. B and F tiers do NOT support Azure Percept.
-
-1. When the deployment is complete, select **Register**.
-
-1. Select your Azure IoT Hub
-
-1. Enter a device name for your dev kit and select **Next**.
- > [!NOTE]
- > You **cannot** reuse an existing IoT Edge device name when going through the **Create New Device** flow. If you wish to reuse the same name and deploy the default Percept modules, you must first delete the existing cloud-side device instance from the Azure IoT Hub before proceeding.
-
-1. The device modules will now be deployed to your device; this can take a few minutes.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-finalize.png" alt-text="Finalizing setup.":::
-
-1. **Device setup complete!** Your dev kit has successfully linked to your IoT Hub and deployed all modules.
-
- > [!NOTE]
- > After completion, the dev kit's Wi-Fi access point will automatically disconnect and the setup experience web service will be terminated, resulting in two notifications.
-
- > [!NOTE]
- > The IoT Edge containers that get configured as part of this set up process use certificates that will expire after 90 days. The certificates can be automatically regenerated by restarting IoT Edge. Refer to [Manage certificates on an IoT Edge device](../iot-edge/how-to-manage-device-certificates.md) for more details.
-
-1. Connect your host computer to the Wi-Fi network your dev kit is connected to.
-
-1. Select **Continue to the Azure portal**.
-
- :::image type="content" source="./media/quickstart-percept-dk-setup/main-Azure-portal-continue.png" alt-text="Go to Azure Percept Studio.":::
-
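If you prefer to create the IoT Hub ahead of time so that it appears in the **Select your Azure IoT Hub** list, a minimal Azure CLI sketch follows. The resource group name, hub name, and region are placeholders; the S1 tier matches the guidance above (B and F tiers are not supported by Azure Percept).

```console
# Placeholder names and region: replace with your own values.
az group create --name my-percept-rg --location westus
# Create a standard-tier (S1) IoT Hub; B and F tiers do not support Azure Percept.
az iot hub create --resource-group my-percept-rg --name my-percept-hub --sku S1
```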
-### Video walk-through
-See the below video for a visual walk-through of the steps described above.
-> [!VIDEO https://www.youtube.com/embed/-dmcE2aQkDE]
-
-## Next steps
-
-Now that your dev kit is set up, it's time to see vision AI in action.
-- [View your dev kit video stream](./how-to-view-video-stream.md)
-- [Deploy a vision AI model to your dev kit](./how-to-deploy-model.md)
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/quickstart-percept-dk-unboxing.md
- Title: Unbox and assemble the Azure Percept DK device
-description: Learn how to unbox, connect, and power on your Azure Percept DK
- Previously updated: 02/07/2023
-# Unbox and assemble the Azure Percept DK device
--
-Once you have received your Azure Percept DK, reference this guide for information on connecting the components and powering on the device.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- P7 screwdriver (optional, used for securing the power cable connector to the carrier board)
-## Unbox and assemble your device
-
-1. Unbox the Azure Percept DK components.
-
- The devkit contains a carrier board, Azure Percept Vision SoM, accessories box containing antennas and cables, and a welcome card with a hex key.
-
-1. Connect the devkit components.
-
- > [!NOTE]
- > The power adapter port is located on the right side of the carrier board. The remaining ports (2x USB-A, 1x USB-C, and 1x Ethernet) and the power button are located on the left side of the carrier board.
-
- 1. Hand screw both Wi-Fi antennas into the carrier board.
-
- 1. Connect the Vision SoM to the carrier board's USB-C port with the USB-C cable.
-
- 1. Connect the power cable to the power adapter.
-
- 1. Remove any remaining plastic packaging from the devices.
-
- 1. Connect the power adapter/cable to the carrier board and a wall outlet. To fully secure the power cable connector to the carrier board, use a P7 screwdriver (not included in the devkit) to tighten the connector screws.
-
- 1. After plugging the power cable into a wall outlet, the device will automatically power on. The power button on the left side of the carrier board will be illuminated. Please allow some time for the device to boot up.
-
- > [!NOTE]
- > The power button is for powering off or restarting the device while connected to a power outlet. In the event of a power outage, the device will automatically restart.
-
-For a visual demonstration of the devkit assembly, please see 0:00 through 0:50 of the following video:
-
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/-dmcE2aQkDE]
-
-## Next steps
-
-Now that your devkit is connected and powered on, please see the [Azure Percept DK setup experience walkthrough](./quickstart-percept-dk-set-up.md) to complete device setup. The setup experience allows you to connect your devkit to a Wi-Fi network, set up an SSH login, create an IoT Hub, and provision your devkit to your Azure account. Once you have completed device setup, you will be ready to start prototyping.
azure-percept Retirement Of Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md
- Title: Retirement of Azure Percept DK
-description: Information about the retirement of the Azure Percept DK.
- Previously updated: 02/22/2023
-# Retirement of Azure Percept DK
-
-**Update February 22, 2023**: A firmware update for the Percept DK Vision and Audio accessory components (also known as Vision and Audio SOM) is now available [here](https://aka.ms/audio_vision_som_update), and will enable the accessory components to continue functioning beyond the retirement date.
-
-The Azure Percept preview including the Percept DK, Audio Accessory, and associated supporting Azure services will be retired March 30th, 2023.
-
-## How does this change affect me?
-- After March 30, 2023, the Azure Percept DK and Audio Accessory will no longer be supported by any Azure services, including Azure Percept Studio, OS updates, container updates, web stream viewing, and Custom Vision integration.
-- Microsoft will no longer provide customer success support for the Azure Percept DK and Audio Accessory, or for any associated supporting services for the Percept DK.
-- Existing Custom Vision and Custom Speech projects created using Percept Studio for the Percept DK will not be deleted, and billing, if applicable, will continue for any backend services used after the retirement date. You can no longer modify or use your projects with Percept Studio.
-
-## Recommended action
-
-You should plan to close the resources and projects associated with Azure Percept Studio and the DK to avoid unanticipated billing, because these backend resources and projects will continue to bill after retirement.
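If the supporting resources were created in a dedicated resource group, one way to close them out is to delete that resource group with the Azure CLI. This is only a sketch: the group name is a placeholder, and you should confirm the group contains nothing else you still need before deleting it.

```console
# List your resource groups to find the one holding the Percept-related resources.
az group list --query "[].name" --output table
# Delete the group ONLY after confirming it contains nothing you still need.
az group delete --name my-percept-rg
```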
-
-## Help and support
-
-If you have questions regarding Azure Percept DK, please refer to the below **FAQ**.
--
-| Question | Answer |
-|-|-|
-| Why is this change being made? | The Azure Percept DK, Percept Audio Accessory, and Azure Percept Studio were in preview stages. This is like a public beta. Previews give customers the opportunity to try the latest and greatest software and hardware. Due to the preview nature of the software and hardware, retirements may occur. |
-| What is changing? | Azure Percept DK and Audio Accessory will no longer be supported by any Azure services including Azure Percept Studio and Updates. |
-| When is this change occurring? | On March 30, 2023. Until this date your DK and Studio will function as-is and updates and customer support will be offered. After this date, all updates and customer support will stop. |
-| Will my projects be deleted? | Your projects remain in the underlying Azure Services they were created in (example: Custom Vision, Speech Studio, etc.). They won't be deleted due to this retirement. You can no longer modify or use your project with Percept Studio. |
-| Do I need to do anything before March 30, 2023? | Yes, you will need to close the resources and projects associated with Azure Percept Studio and the DK to avoid future billing, as these backend resources and projects will continue to bill after retirement. For the SoMs to continue functioning, you will need to apply the firmware update, now available [here](https://aka.ms/audio_vision_som_update), that enables the Vision SoM and Audio SoM to retain their functionality. |
-
azure-percept Return To Voice Assistant Application Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/return-to-voice-assistant-application-window.md
- Title: Find your voice assistant application in Azure Percept Studio
-description: This article shows you how to return to a previously created voice assistant application window.
- Previously updated: 02/07/2023
-# Find your voice assistant application in Azure Percept Studio
--
-This how-to guide shows you how to return to a previously created voice assistant application.
-
-## Prerequisites
-- [Create a voice assistant demo application](./tutorial-no-code-speech.md)
-- Your Azure Percept DK is powered on and the Azure Percept Audio accessory is connected via a USB cable.
-## Open your voice assistant application
-1. Go to [Azure Percept Studio](https://portal.azure.com/#blade/AzureEdgeDevices/Main/overview)
-1. Select **Devices** from the left menu pane.
- :::image type="content" source="media/return-to-voice-assistant-demo-window/select-device.png" alt-text="select device from the left menu pane":::
-1. Select the device to which your voice assistant application was deployed.
-1. Select the **Speech** tab.
- :::image type="content" source="media/return-to-voice-assistant-demo-window/speech-tab.png" alt-text="select the speech tab":::
-1. Under **Actions**, select **Test your voice assistant**
- :::image type="content" source="media/return-to-voice-assistant-demo-window/actions-test-va.png" alt-text="select test your voice assistant under the Actions section":::
-
-## Next steps
-Now that your voice assistant application is open, try making some [more configurations](./how-to-manage-voice-assistant.md).
-
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-over-the-air-updates.md
- Title: Software releases for Azure Percept DK OTA updates
-description: Information and download links for the Azure Percept DK over-the-air update packages
- Previously updated: 02/08/2023
-# Software releases for OTA updates
---
->[!CAUTION]
->**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
-
-The OTA update is built for users who tend to always keep the dev kit up to date. That's why only the hard-stop versions and the latest version are provided here. To change your dev kit to a specific (older) version, use the USB cable update. Refer to [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md). Also use the USB update if you want to jump ahead to a much more advanced version.
-
->[!CAUTION]
->The dev kit doesn't support software version downgrades via OTA. The Device Update for IoT Hub framework will NOT block deploying an image with a version older than the current one. However, doing so will result in loss of data and functionality on the dev kit.
-
->[!IMPORTANT]
->Be sure to check the following document before you decide to go with either OTA or USB cable update.
->
->[How to determine your update strategy](./how-to-determine-your-update-strategy.md)
-
-## Hard-stop version of OTA
-
-Microsoft services each dev kit release with OTA packages. However, when there are breaking changes to the dev kit OS/firmware or to the OTA platform, updating directly from an old version to a much later version may be problematic. Generally, when a breaking change happens, Microsoft makes sure that the OTA update process transitions the old system seamlessly to **the very first version that introduces this breaking change**. This specific version becomes a hard-stop version for OTA. Take a known hard-stop version, the **June release** (2106), as an example: OTA will work if a user updates the dev kit from 2104 to 2106, then from 2106 to 2107. However, it will NOT work if a user tries to skip the hard-stop (2106) and update the dev kit from 2104 directly to 2107.
--
-## Recommendations for applying the OTA update
-
-**Scenario 1:** Frequently (monthly) update the dev kit to make sure it's always up to date.
-
-- There should be no problem if you always use OTA to update the dev kit from the last release to the newly released version.
-
-**Scenario 2:** Update the dev kit when a few versions might have been skipped.
-
-1. Identify the current software version of the dev kit.
-1. Review the OTA package release list to look for any hard-stop version between the current version and target version.
- - If there is, you need to sequentially deploy the hard-stop version(s) until you can deploy the latest update package.
- - If there isn't, then you can directly deploy the latest OTA package to the dev kit.
-
-## Identify the current software version of the dev kit
-
-**Option 1:**
-
-1. Sign in to the [Azure Percept Studio](./overview-azure-percept-studio.md).
-1. In **Devices**, choose your dev kit device.
-1. In the **General** tab, look for the **Model** and **SW Version** information.
-
-**Option 2:**
-
-1. View the **IoT Edge** devices of your **IoT Hub** service in the Azure portal.
-1. Choose your dev kit device from the device list.
-1. Select **Device twin**.
-1. Scroll through the device twin properties and locate **"model"** and **"swVersion"** under **"deviceInformation"** and make a note of their values.
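As an alternative to browsing the device twin in the portal, the Azure CLI can pull the same reported properties. The sketch below assumes the `azure-iot` CLI extension is installed; the hub and device names are placeholders.

```console
# Placeholder hub and device names; requires the azure-iot CLI extension.
az iot hub device-twin show --hub-name my-hub --device-id my-percept-dk \
  --query "properties.reported.deviceInformation"
```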
-
-## Identify the OTA package(s) to be deployed
-
->[!IMPORTANT]
->If the current version of your dev kit isn't included in any of the releases below, it's NOT supported for OTA update. Please do a USB cable update to get to the latest version.
-
->[!CAUTION]
->Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).
-
-**Latest release:**
-
-|Release|Applicable Version(s)|Download Links|Note|
-|-|-|-|-|
-|June Service Release (2206)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109, <br>2022.101.112.106, <br>2022.102.109.102, <br>2022.103.110.103|[2022.106.120.102 OTA update package](<https://download.microsoft.com/download/b/7/1/b71877b8-4882-4447-b3f3-8359ee8341e2/2022.106.120.102 OTA update package.zip>)|Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).|
-
-**Hard-stop releases:**
-
-|Release|Applicable Version(s)|Download Links|Note|
-|-|-|-|-|
-|June Service Release (2106)|2021.102.108.112, 2021.104.110.103, 2021.105.111.122 |[2021.106.111.115 OTA manifest (for PE-101)](https://download.microsoft.com/download/d/f/0/df0f17dc-d2fb-42ff-aaa5-98edf4d6d1e8/aduimportmanifest_PE-101_2021.106.111.115_v3.json)<br>[2021.106.111.115 OTA manifest (for APDK-101)](https://download.microsoft.com/download/d/f/0/df0f17dc-d2fb-42ff-aaa5-98edf4d6d1e8/aduimportmanifest_Azure-Percept-DK_2021.106.111.115_v3.json) <br>[2021.106.111.115 OTA update package](https://download.microsoft.com/download/d/f/0/df0f17dc-d2fb-42ff-aaa5-98edf4d6d1e8/Microsoft-Azure-Percept-DK-2021.106.111.115.swu) |Be sure to use the correct manifest based on "model name" (PE-101/APDK-101)|
-
-## Next steps
-
-[Update your Azure Percept DK over-the-air (OTA)](./how-to-update-over-the-air.md)
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
- Title: Software releases for Azure Percept DK USB cable updates
-description: Information and download links for the USB cable update package of Azure Percept DK
- Previously updated: 02/08/2023
-# Software releases for USB cable updates
--
-This page provides information and download links for all the dev kit OS/firmware image releases. For details of the changes and fixes in each version, refer to the release notes:
-
-- [Azure Percept DK software release notes](./azure-percept-devkit-software-release-notes.md).
->[!IMPORTANT]
->Be sure to check the following document before you decide to go with either OTA or USB cable update.
->
->[How to determine your update strategy](./how-to-determine-your-update-strategy.md)
-
-## Latest releases
--- **Latest service release**
-June Service Release (2206): [Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip](https://download.microsoft.com/download/4/7/a/47af6fc2-d9a0-4e66-822b-ad36700fefff/Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip)
-- **Latest major update or known stable version**
-Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
-
-## Full list of releases
-
-|Release|Download Links|Note|
-|-|-|:-:|
-|June Service Release (2206)|[Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip](https://download.microsoft.com/download/4/7/a/47af6fc2-d9a0-4e66-822b-ad36700fefff/Azure-Percept-DK-1.0.20220620.1126-public_preview_1.0.zip)||
-|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)||
-|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip)||
-|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip)||
-|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip)||
-|November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip)||
-|September Service Release (2109)|[Azure-Percept-DK-1.0.20210929.1747-public_preview_1.0.zip](https://go.microsoft.com/fwlink/?linkid=2174462)||
-|July Service Release (2107)|[Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip](https://download.microsoft.com/download/f/a/9/fa95d9d9-a739-493c-8fad-bccf839072c9/Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip)||
-|June Service Release (2106)|[Azure-Percept-DK-1.0.20210611.0952-public_preview_1.0.zip](https://download.microsoft.com/download/1/5/8/1588f7e3-f8ae-4c06-baa2-c559364daae5/Azure-Percept-DK-1.0.20210611.0952-public_preview_1.0.zip)||
-|May Service Release (2105)|[Azure-Percept-DK-1.0.20210511.1825.zip](https://download.microsoft.com/download/e/0/1/e01b6f7e-04f7-45ee-8933-8514c2fdbe6a/Azure-Percept-DK-1.0.20210511.1825.zip)||
-|Feature Update (2104) |[Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)||
-
-## Next steps
--- [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md)-
azure-percept Speech Module Interface Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/speech-module-interface-workflow.md
- Title: Azure Percept speech module interface workflow
-description: Describes the workflow and available methods for the Azure Percept speech module
- Previously updated: 02/07/2023
-# Azure Percept speech module interface workflow
--
-This article describes how the Azure Percept speech module interacts with IoT Hub. It does so via Module Twin and Module methods. Furthermore, it lists the direct method calls used to invoke the speech module.
-
-## Speech module interaction with IoT hub via Module Twin and Module method
-- IoT Hub uses the Module Twin to deploy speech module settings, and the settings are saved in the properties. The speech module can report device information and telemetry to IoT Hub via Module Twin reported properties.
-- IoT Hub can send control requests to the speech module via the Module method.
-- IoT Hub can get the speech module status via the Module method.
-For more details, please refer to [Understand and use module twins in IoT Hub](../iot-hub/iot-hub-devguide-module-twins.md).
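For example, the speech module's reported properties can be inspected from its module twin with the Azure CLI. The sketch below assumes the `azure-iot` CLI extension and uses placeholder hub and device names; `azureearspeechclientmodule` is the speech module name used elsewhere in these articles.

```console
# Placeholder hub and device names; requires the azure-iot CLI extension.
az iot hub module-twin show --hub-name my-hub --device-id my-percept-dk \
  --module-id azureearspeechclientmodule --query "properties.reported"
```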
--
-## Speech module states
-- **IoTInitialized**: Indicates the IoT module is initialized and the network between the speech module and the edge Hub module is connected.
-- **Authenticating**: Azure Audio device authentication is in progress.
-- **Authenticated**: Azure Audio device authentication is finished. If it failed, IoT Hub will get an error message.
-- **MicDiscovering**: Enumeration of the microphone array via the ALSA interface has started.
-- **MicDiscovered**: Enumeration of the microphone array is finished. If it failed, IoT Hub will get an error message.
-- **SpeechConfigured**: CC configuration is finished. If it failed, IoT Hub will get an error message.
-- **SpeechStarted**: Indicates the bot is configured and running.
-- **SpeechStopped**: Indicates the bot is stopped.
-- **DeviceRemoved**: Indicates the Azure Audio device is removed.
-## Speech bot states
-Querying speech bot states is only supported under the **SpeechStarted** speech module state.
-- **Ready**: KWS is ready and waiting for voice activation.
-- **Listening**: The bot is listening to the voice input.
-- **Thinking**: The bot is waiting for a response.
-- **Speaking**: The bot has received a response and is speaking it.
-## Interaction between IoT Hub and the speech module
-This section describes how IoT Hub interacts with the speech module. As the diagram shows, there are three types of messages.
-- Deployment with needed properties and update with reported properties
-- Module method invoke
-- Update telemetry
-IoT Hub invokes the module method with two parameters:
-- The module method name (case sensitive)
-- The method payload
-The speech module responds with:
-- A status code
- - **0** = idle
- - **102** = processing
- - **200** = success
- - **202** = pending
- - **500** = failure
- - **501** = not present
-- A status payload
-Here's an example using the module method GetModuleState:
-1. Invoke the method with these parameters:
- - String: "GetModuleState"
- - Unspecified
-1. Response:
- - Status code: 200
- - Payload: "DeviceRemoved"
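One way to issue that same call yourself is through the Azure CLI's direct-method support, sketched below. It assumes the `azure-iot` CLI extension is installed; the hub and device names are placeholders, and `azureearspeechclientmodule` is the speech module name used elsewhere in these articles.

```console
# Placeholder hub and device names; requires the azure-iot CLI extension.
# Method names are case sensitive; GetModuleState takes no payload.
az iot hub invoke-module-method --hub-name my-hub --device-id my-percept-dk \
  --module-id azureearspeechclientmodule --method-name GetModuleState
```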
-
-## Next steps
-Try to apply these concepts when [configuring a voice assistant application using Azure IoT Hub](./how-to-configure-voice-assistant.md).
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
- Title: Troubleshoot Azure Percept Audio and speech module
-description: Get troubleshooting tips for Azure Percept Audio and azureearspeechclientmodule
- Previously updated: 02/07/2023
-# Troubleshoot Azure Percept Audio and speech module
--
-Use the guidelines below to troubleshoot voice assistant application issues.
-
-## Checking runtime status of the speech module
-
-Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **[your IoT hub]** -> **IoT Edge** -> **[your device ID]**. Select the **Modules** tab to see the runtime status of all installed modules.
--
-If the runtime status of **azureearspeechclientmodule** isn't listed as **running**, select **Set modules** -> **azureearspeechclientmodule**. On the **Module Settings** page, set **Desired Status** to **running** and select **Update**.
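You can also check the module's status from the dev kit itself. Assuming you can [SSH into the dev kit](./how-to-ssh-into-percept-dk.md), a quick sketch:

```console
# Lists all IoT Edge modules and their runtime status; look for azureearspeechclientmodule.
sudo iotedge list
```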
-
-## Voice assistant application doesn't load
-Try [deploying one of the voice assistant templates](./tutorial-no-code-speech.md). Deploying a template ensures that all the supporting resources needed for voice assistant applications get created.
-
-## Voice assistant template doesn't get created
-Failure when creating a voice assistant template is usually caused by an issue with one of the supporting resources.
-1. [Delete all previously created voice assistant resources](./delete-voice-assistant-application.md).
-1. Deploy a new [voice assistant template](./tutorial-no-code-speech.md).
-
-## Voice assistant was created but doesn't respond to commands
-Follow the instructions on the [LED behavior and troubleshooting guide](audio-button-led-behavior.md) to troubleshoot this issue.
-
-## Voice assistant doesn't respond to custom keywords created in Speech Studio
-This may occur if the speech module is out of date. Follow these steps to update the speech module to the latest version:
-
-1. Select **Devices** in the left-hand menu panel of the Azure Percept Studio homepage.
-1. Find and select your device.
-
- :::image type="content" source="./media/tutorial-no-code-speech/devices.png" alt-text="Screenshot of device list in Azure Percept Studio.":::
-1. In the device window, select the **Speech** tab.
-1. Check the speech module version. If an update is available, you'll see an **Update** button next to the version number.
-1. Select **Update** to deploy the speech module update. The update process generally takes 2-3 minutes to complete.
-
-## Collecting speech module logs
-To run these commands, [SSH into the dev kit](./how-to-ssh-into-percept-dk.md) and enter the commands into the SSH client prompt.
-
-Collect speech module logs:
-
-```console
-sudo iotedge logs azureearspeechclientmodule
-```
-
-To redirect output to a .txt file for further analysis, use the following syntax:
-
-```console
-sudo [command] > [file name].txt
-```
-
-Change the permissions of the .txt file so it can be copied:
-
-```console
-sudo chmod 666 [file name].txt
-```
-
-After redirecting output to a .txt file, copy the file to your host PC via SCP:
-
-```console
-scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path]
-```
-
-[local host file path] refers to the location on your host PC that you would like to copy the .txt file to. [remote username] is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
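Putting the steps together, a worked example might look like the following. The file name, username, IP address, and paths are all hypothetical placeholders.

```console
# On the dev kit (over SSH): capture the log and make it copyable.
sudo iotedge logs azureearspeechclientmodule > speechlog.txt
sudo chmod 666 speechlog.txt
# On the host PC: copy the file down (placeholder username, IP, and paths).
scp myuser@192.168.1.50:/home/myuser/speechlog.txt ~/Downloads/speechlog.txt
```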
-
-## Known issues
-- If using a free trial, the speech model may exceed the free trial price plan. In this case, the model will stop working without an error message.
-- If more than 5 IoT Edge devices are connected, the report (the text sent via telemetry to IoT Hub and Speech Studio) may be blocked.
-- If the device is in a different region than the resources, the report message may be delayed.
-## Useful links
-- [Azure Percept Audio setup](./quickstart-percept-audio-setup.md)
-- [Azure Percept Audio button and LED behavior](./audio-button-led-behavior.md)
-- [Create a voice assistant with Azure Percept DK and Azure Percept Audio](./tutorial-no-code-speech.md)
-- [Azure Percept DK general troubleshooting guide](./troubleshoot-dev-kit.md)
-- [How to return to a previously created voice assistant application](return-to-voice-assistant-application-window.md)
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/troubleshoot-dev-kit.md
- Title: Troubleshoot the Azure Percept DK device
-description: Get troubleshooting tips for some of the more common issues with Azure Percept DK and IoT Edge
- Previously updated: 02/07/2023
-# Troubleshoot the Azure Percept DK device
---
-The purpose of this troubleshooting article is to help Azure Percept DK users to quickly resolve common issues with their dev kits. It also provides guidance on collecting logs for when extra support is needed.
-
-## Log collection
-In this section, you'll get guidance on which logs to collect and how to collect them.
-
-### How to collect logs
-1. Connect to your dev kit [over SSH](./how-to-ssh-into-percept-dk.md).
-1. Run the needed commands in the SSH terminal window. See the next section for the list of log collection commands.
-1. To redirect any output to a .txt file for further analysis, use the following syntax:
- ```console
- sudo [command] > [file name].txt
- ```
-1. Change the permissions of the .txt file so it can be copied:
- ```console
- sudo chmod 666 [file name].txt
- ```
-1. Copy the file to your host PC via SCP:
- ```console
- scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path]
- ```
-
- ```[local host file path]``` refers to the location on your host PC that you would like to copy the .txt file to. ```[remote username]``` is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
-
-### Log types and commands
-
-|Log purpose |When to collect it |Command |
-|--|-|-|
-|*Support bundle* - provides a set of logs needed for most customer support requests.|Collect whenever requesting support.|```sudo iotedge support-bundle --since 1h``` <br><br>*"--since 1h" can be changed to any time span, for example, "6h" (6 hours), "6d" (6 days) or "6m" (6 minutes)*|
-|*OOBE logs* - records details about the setup experience.|Collect when you find issues during the setup experience.|```sudo journalctl -u oobe -b```|
-|*edgeAgent logs* - records the version numbers of all modules running on your device.|Collect when one or more modules aren't working.|```sudo iotedge logs edgeAgent```|
-|*Module container logs* - records details about specific IoT Edge module containers|Collect when you find issues with a module|```sudo iotedge logs [container name]```|
-|*Network logs* - a set of logs covering Wi-Fi services and the network stack.|Collect when you find Wi-Fi or network issues.|```sudo journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u systemd-networkd > network_log.txt```<br><br>```cat /etc/os-release && cat /etc/os-subrelease && cat /etc/adu-version && rpm -q ztpd > system_ver.txt```<br><br>Run both commands. Each command collects multiple logs and puts them into a single output.|
-
-> [!WARNING]
-> Output from the `support-bundle` command can contain host, device and module names, information logged by your modules etc. Please be aware of this if sharing the output in a public forum.
-
-## Troubleshooting commands
-Here's a set of commands that can be used for troubleshooting issues you may find with the dev kit. To run these commands, you must first connect to your dev kit [over SSH](./how-to-ssh-into-percept-dk.md).
-
-For more information on the Azure IoT Edge commands, see the [Azure IoT Edge device troubleshooting documentation](../iot-edge/troubleshoot.md).
-
-|Function |When to use |Command |
-|-|-|-|
-|Checks the software version on the dev kit.|Use anytime you need confirm which software version is on your dev kit.|```cat /etc/os-release && cat /etc/os-subrelease && cat /etc/adu-version```|
-|Checks the temperature of the dev kit|Use in cases where you think the dev kit might be overheating.|```cat /sys/class/thermal/thermal_zone0/temp```|
-|Checks the dev kit's telemetry ID|Use in cases where you need to know the dev kit's unique telemetry identifier.|```sudo azure-device-health-id```|
-|Checks the status of IoT Edge|Use whenever there are issues with IoT Edge modules connecting to the cloud.|```sudo iotedge check```|
-|Restarts the Azure IoT Edge security daemon|Use when IoT Edge is unresponsive or not working correctly.|```sudo systemctl restart iotedge``` |
-|Lists the deployed Azure IoT Edge modules|Use when you need to see all of the modules deployed on the dev kit|```sudo iotedge list``` |
-|Displays the available/total space in the specified file system(s)|Use if you need to know the available storage on the dev kit.|```df [option] [file]```|
-|Displays the dev kit's IP and interface information|Use when you need to know the dev kit's IP address.|`ip route get 1.1.1.1` |
-|Display dev kit's IP address only|Use when you only want the dev kit's IP address and not the other interface information.|<code>ip route get 1.1.1.1 &#124; awk '{print $7}'</code> <br> `ifconfig [interface]` |
-
-## USB update errors
-
-|Error: |Solution: |
-|-|--|
-|LIBUSB_ERROR_XXX during USB flash via UUU |This error is the result of a USB connection failure during UUU updating. If the USB cable isn't properly connected to the USB ports on the PC or the Percept DK carrier board, an error of this form will occur. Try unplugging and reconnecting both ends of the USB cable and jiggling the cable to ensure a secure connection.|
-
-## Clearing hard drive space on the Azure Percept DK
-Two components take up hard drive space on the Azure Percept DK: the Docker container logs and the Docker containers themselves. To ensure the container logs don't take up all of the hard drive space, the Azure Percept DK has built-in log rotation, which rotates out old logs as new logs get generated.
-
-For situations when the number of docker containers cause hard drive space issues you can delete unused containers by following these steps:
-1. [SSH into the dev kit](./how-to-ssh-into-percept-dk.md)
-1. Run this command:
- `docker system prune`
-
-This will remove all unused containers, networks, images and optionally, volumes. [Go to this page](https://docs.docker.com/engine/reference/commandline/system_prune/) for more details.
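If you want to see how much space images, containers, and volumes are using before (or after) pruning, `docker system df` gives a quick summary:

```console
# Summarize Docker disk usage on the dev kit.
docker system df
```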
-
-## Azure Percept DK carrier board LED states
-
-There are three small LEDs on top of the carrier board housing. A cloud icon is printed next to LED 1, a Wi-Fi icon is printed next to LED 2, and an exclamation mark icon is printed next to LED 3. See the table below for information on each LED state.
-
-|LED |State |Description |
-|-|--|-|
-|LED 1 (IoT Hub) |On (solid) |Device is connected to an IoT Hub. |
-|LED 2 (Wi-Fi) |Slow blink |Device is ready to be configured by Wi-Fi Easy Connect and is announcing its presence to a configurator. |
-|LED 2 (Wi-Fi) |Fast blink |Authentication was successful, device association in progress. |
-|LED 2 (Wi-Fi) |On (solid) |Authentication and association were successful; device is connected to a Wi-Fi network. |
-|LED 3 |NA |LED not in use. |
azure-percept Tutorial No Code Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/tutorial-no-code-speech.md
- Title: Create a no-code voice assistant in Azure Percept Studio
-description: Learn how to create and deploy a no-code speech solution to your Azure Percept DK
- Previously updated: 02/07/2023
-# Create a no-code voice assistant in Azure Percept Studio
---
-In this tutorial, you will create a voice assistant from a template to use with your Azure Percept DK and Azure Percept Audio. The voice assistant demo runs within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and contains a selection of voice-controlled virtual objects. To control an object, say your keyword, which is a word or short phrase that wakes your device, followed by a command. Each template responds to a set of specific commands.
-
-This guide will walk you through the process of setting up your devices, creating a voice assistant and the necessary [Speech Services](../cognitive-services/speech-service/overview.md) resources, testing your voice assistant, configuring your keyword, and creating custom keywords.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- Azure Percept Audio
-- Speaker or headphones that can connect to 3.5mm audio jack (optional)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-- [Azure Percept Audio setup](./quickstart-percept-audio-setup.md)
-## Create a voice assistant using an available template
-
-1. Navigate to [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
-
-1. Open the **Demos & tutorials** tab.
-
- :::image type="content" source="./media/tutorial-no-code-speech/portal-overview.png" alt-text="Screenshot of Azure portal homepage.":::
-
-1. Click **Try out voice assistant templates** under **Speech tutorials and demos**. This will open a window on the right side of your screen.
-
-1. Do the following in the window:
-
- 1. In the **IoT Hub** dropdown menu, select the IoT hub to which your devkit is connected.
-
- 1. In the **Device** dropdown menu, select your devkit.
-
- 1. Select one of the available voice assistant templates.
-
- 1. Click the **I agree to terms & conditions for this project** checkbox.
-
- 1. Click **Create**.
-
- :::image type="content" source="./media/tutorial-no-code-speech/template-creation.png" alt-text="Screenshot of voice assistant template creation.":::
-
-1. After clicking **Create**, the portal opens another window to create your speech theme resource. Do the following in the window:
-
- 1. Select your Azure subscription in the **Subscription** box.
-
- 1. Select your preferred resource group from the **Resource group** dropdown menu. If you would like to create a new resource group to use with your voice assistant, click **Create** under the dropdown menu and follow the prompts.
-
- 1. For **Application prefix**, enter a name. This will be the prefix for your project and your custom command name.
-
- 1. Under **Region**, select the region to deploy resources to.
-
- 1. Under **LUIS prediction pricing tier**, select **Standard** (the free tier does not support speech requests).
-
- 1. Click the **Create** button. Resources for the voice assistant application will be deployed to your subscription.
-
- > [!WARNING]
- > Do **NOT** close the window until the portal has finished deploying the resource. Closing the window prematurely can result in unexpected behavior of the voice assistant. Once your resource has been deployed, the demo will be displayed.
-
- :::image type="content" source="./media/tutorial-no-code-speech/resource-group.png" alt-text="Screenshot of subscription and resource group selection window.":::
-
-## Test out your voice assistant
-
-To interact with your voice assistant, say the keyword followed by a command. When the Ear SoM recognizes your keyword, the device emits a chime (which you can hear if a speaker or headphones are connected), and the LEDs will blink blue. The LEDs will switch to racing blue while your command is processed. The voice assistant's response to your command will be printed in text in the demo window and emitted audibly through your speaker/headphones. The default keyword (listed next to **Custom Keyword**) is set to "Computer," and each template has a set of compatible commands that allow you to interact with virtual objects in the demo window. For example, if you are using the hospitality or healthcare demo, say "Computer, turn on TV" to turn on the virtual TV.
--
-### Hospitality and healthcare demo commands
-
-Both the healthcare and hospitality demos have virtual TVs, lights, blinds, and thermostats you can interact with. The following commands (and additional variations) are supported:
-
-* "Turn on/off the lights."
-* "Turn on/off the TV."
-* "Turn on/off the AC."
-* "Open/close the blinds."
-* "Set temperature to X degrees." (X is the desired temperature, e.g. 75.)
--
-### Automotive demo commands
-
-The automotive demo has a virtual seat warmer, defroster, and thermostat you can interact with. The following commands (and additional variations) are supported:
-
-* "Turn on/off the defroster."
-* "Turn on/off the seat warmer."
-* "Set temperature to X degrees." (X is the desired temperature, e.g. 75.)
-* "Increase/decrease the temperature by Y degrees."
---
-### Inventory demo commands
-
-The inventory demo has a selection of virtual blue, yellow, and green boxes to interact with along with a virtual inventory app. The following commands (and additional variations) are supported:
-
-* "Add/remove X boxes." (X is the number of boxes, e.g. 4.)
-* "Order/ship X boxes."
-* "How many boxes are in stock?"
-* "Count Y boxes." (Y is the color of the boxes, e.g. yellow.)
-* "Ship everything in stock."
---
-## Configure your keyword
-
-You can customize the keyword for your voice assistant application.
-
-1. Click **change** next to **Custom Keyword** in the demo window.
-
-1. Select one of the available keywords. You will be able to choose from a selection of sample keywords and any custom keywords you have created.
-
-1. Click **Save**.
-
-### Create a custom keyword
-
-You can create your own keyword for your voice application. Training for your custom keyword may complete in just a few minutes.
-
-1. Click **+ Create Custom Keyword** near the top of the demo window.
-
-1. Enter your desired keyword, which can be a single word or a short phrase.
-
-1. Select your **Speech resource** (this is listed next to **Custom Command** in the demo window and contains your application prefix).
-
-1. Click **Save**.
-
-## Create a custom command
-
-The portal also provides functionality for creating custom commands with existing speech resources. "Custom command" refers to the voice assistant application itself, not a specific command within the existing application. By creating a custom command, you are creating a new speech project, which you must further develop in [Speech Studio](https://speech.microsoft.com/).
-
-To create a new custom command from within the demo window, click **+ Create Custom Command** at the top of the page and do the following:
-
-1. Enter a name for your custom command.
-
-1. Enter a description of your project (optional).
-
-1. Select your preferred language.
-
-1. Select your speech resource.
-
-1. Select your LUIS resource.
-
-1. Select your LUIS authoring resource or create a new one.
-
-1. Click **Create**.
--
-Once you create a custom command, you must go to [Speech Studio](https://speech.microsoft.com/) for further development. If you open Speech Studio and do not see your custom command listed, follow these steps:
-
-1. On the left-hand menu panel in Azure Percept Studio, click on **Speech** under **AI Projects**.
-
-1. Select the **Commands** tab.
-
- :::image type="content" source="./media/tutorial-no-code-speech/ai-projects.png" alt-text="Screenshot of list of custom commands available to edit.":::
-
-1. Select the custom command you wish to develop. This opens the project in Speech Studio.
-
- :::image type="content" source="./media/tutorial-no-code-speech/speech-studio.png" alt-text="Screenshot of speech studio home screen.":::
-
-For more information on developing custom commands, please see the [Speech Service documentation](../cognitive-services/speech-service/custom-commands.md).
-
-## Troubleshooting
-
-### Voice assistant was created but does not respond to commands
-
-Check the LED lights on the Interposer Board:
-
-* Three solid blue lights indicate that the voice assistant is ready and waiting for the keyword.
-* If the center LED (L02) is white, the devkit completed initialization and needs to be configured with a keyword.
-* If the center LED (L02) is flashing white, the Audio SoM has not completed initialization yet. Initialization may take a few minutes to complete.
-
-For more information about the LED indicators, please see the [LED article](./audio-button-led-behavior.md).
-
-### Voice assistant does not respond to a custom keyword created in Speech Studio
-
-This may occur if the speech module is out of date. Follow these steps to update the speech module to the latest version:
-
-1. Click on **Devices** in the left-hand menu panel of the Azure Percept Studio homepage.
-
-1. Find and select your device.
-
- :::image type="content" source="./media/tutorial-no-code-speech/devices.png" alt-text="Screenshot of device list in Azure Percept Studio.":::
-
-1. In the device window, select the **Speech** tab.
-
-1. Check the speech module version. If an update is available, you will see an **Update** button next to the version number.
-
-1. Click **Update** to deploy the speech module update. The update process generally takes 2-3 minutes to complete.
-
-## Clean up resources
-
-Once you are done working with your voice assistant application, follow these steps to clean up the speech resources you deployed during this tutorial:
-
-1. From the [Azure portal](https://portal.azure.com), select **Resource groups** from the left menu panel or type it into the search bar.
-
- :::image type="content" source="./media/tutorial-no-code-speech/azure-portal.png" alt-text="Screenshot of Azure portal homepage showing left menu panel and Resource Groups.":::
-
-1. Select your resource group.
-
-1. Select all six resources that contain your application prefix and click the **Delete** icon on the top menu panel.
-
- :::image type="content" source="./media/tutorial-no-code-speech/select-resources.png" alt-text="Screenshot of speech resources selected for deletion.":::
-
-1. To confirm deletion, type **yes** in the confirmation box, verify you have selected the correct resources, and click **Delete**.
-
- :::image type="content" source="./media/tutorial-no-code-speech/delete-confirmation.png" alt-text="Screenshot of delete confirmation window.":::
-
-> [!WARNING]
-> This will remove any custom keywords created with the speech resources you are deleting, and the voice assistant demo will no longer function.
-
-## Next Steps
-
-Now that you have created a no-code speech solution, try creating a [no-code vision solution](./tutorial-nocode-vision.md) for your Azure Percept DK.
azure-percept Tutorial Nocode Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/tutorial-nocode-vision.md
- Title: Create a no-code vision solution in Azure Percept Studio
-description: Learn how to create a no-code vision solution in Azure Percept Studio and deploy it to your Azure Percept DK
---- Previously updated : 02/07/2023----
-# Create a no-code vision solution in Azure Percept Studio
---
-Azure Percept Studio enables you to build and deploy custom computer vision solutions, no coding required. In this article, you will:
-- Create a vision project in [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819)
-- Collect training images with your devkit
-- Label your training images in [Custom Vision](https://www.customvision.ai/)
-- Train your custom object detection or classification model
-- Deploy your model to your devkit
-- Improve your model by setting up retraining
-
-This tutorial is suitable for developers with little to no AI experience and those just getting started with Azure Percept.
-
-## Prerequisites
-- Azure Percept DK (devkit)
-- [Azure subscription](https://azure.microsoft.com/free/)
-- Azure Percept DK setup experience: you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub
-
-## Create a vision prototype
-
-1. Start your browser and go to [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
-
-1. On the overview page, click the **Demos & tutorials** tab.
- :::image type="content" source="./media/tutorial-nocode-vision/percept-studio-overview-inline.png" alt-text="Azure Percept Studio overview screen." lightbox="./media/tutorial-nocode-vision/percept-studio-overview.png":::
-
-1. Under **Vision tutorials and demos**, click **Create a vision prototype**.
-
- :::image type="content" source="./media/tutorial-nocode-vision/vision-tutorials-and-demos-inline.png" alt-text="Azure Percept Studio demos and tutorials screen." lightbox="./media/tutorial-nocode-vision/vision-tutorials-and-demos.png":::
-
-1. On the **New Azure Percept Custom Vision prototype** page, do the following:
-
- 1. In the **Project name** box, enter a name for your vision prototype.
-
- 1. Enter a description of the vision prototype in the **Project description** box.
-
- 1. Select **Azure Percept DK** under the **Device type** drop-down menu.
-
- 1. Select a resource under the **Resource** drop-down menu or click **Create a new resource**. If you elect to create a new resource, do the following in the **Create** window:
- 1. Enter a name for your new resource.
- 1. Select your Azure subscription.
- 1. Select a resource group or create a new one.
- 1. Select your preferred region.
- 1. Select your pricing tier (we recommend S0).
- 1. Click **Create** at the bottom of the window.
-
- :::image type="content" source="./media/tutorial-nocode-vision/create-resource.png" alt-text="Create resource window.":::
-
- 1. For **Project type**, choose whether your vision project will perform object detection or image classification. For more information on the project types, click **Help me choose**.
-
- 1. For **Optimization**, select whether you want to optimize your project for accuracy, low network latency, or a balance of both.
-
- 1. Click the **Create** button.
-
- :::image type="content" source="./media/tutorial-nocode-vision/create-prototype.png" alt-text="Create custom vision prototype screen.":::
-
-## Connect a device to your project and capture images
-
-After creating a vision solution, you must add your devkit and its corresponding IoT Hub to it.
-
-1. Power on your devkit.
-
-1. In the **IoT Hub** dropdown menu, select the IoT hub that your devkit was connected to during the OOBE.
-
-1. In the **Devices** dropdown menu, select your devkit.
-
-Next, you must either upload images or capture images for training your AI model. We recommend uploading at least 30 images per tag type. For example, if you want to build a dog and cat detector, you must upload at least 30 images of dogs and 30 images of cats. To capture images with the vision SoM of your devkit, do the following:
-
-1. In the **Image capture** window, select **View device stream** to view the vision SoM video stream.
-
-1. Check the video stream to ensure that your vision SoM camera is correctly aligned to take the training pictures. Make adjustments as necessary.
-
-1. In the **Image capture** window, click **Take photo**.
-
- :::image type="content" source="./media/tutorial-nocode-vision/image-capture.png" alt-text="Image capture screen.":::
-
-1. Alternatively, set up an automated image capture to collect a large quantity of images at a time by checking the **Automatic image capture** box. Select your preferred imaging rate under **Capture rate** and the total number of images you would like to collect under **Target**. Click **Set automatic capture** to begin the automatic image capture process.
-
- :::image type="content" source="./media/tutorial-nocode-vision/image-capture-auto.png" alt-text="Automatic image capture dropdown menu.":::
-
-When you have enough photos, click **Next: Tag images and model training** at the bottom of the screen. All images will be saved in [Custom Vision](https://www.customvision.ai/).
-
-> [!NOTE]
-> If you elect to upload training images directly to Custom Vision, please note that image file size cannot exceed 6MB.
-
-## Tag images and train your model
-
-Before training your model, add labels to your images.
-
-1. On the **Tag images and model training** page, click **Open project in Custom Vision**.
-
-1. On the left-hand side of the **Custom Vision** page, click **Untagged** under **Tags** to view the images you just collected in the previous step. Select one or more of your untagged images.
-
-1. In the **Image Detail** window, click on the image to begin tagging. If you selected object detection as your project type, you must also draw a [bounding box](../cognitive-services/custom-vision-service/get-started-build-detector.md#upload-and-tag-images) around specific objects you would like to tag. Adjust the bounding box as needed. Type your object tag and click **+** to apply the tag. For example, if you were creating a vision solution that would notify you when a store shelf needs restocking, add the tag "Empty Shelf" to images of empty shelves, and add the tag "Full Shelf" to images of fully-stocked shelves. Repeat for all untagged images.
-
- :::image type="content" source="./media/tutorial-nocode-vision/image-tagging.png" alt-text="Image tagging screen in Custom Vision.":::
-
-1. After tagging your images, click the **X** icon in the upper right corner of the window. Click **Tagged** under **Tags** to view all of your newly tagged images.
-
-1. After your images are labeled, you are ready to train your AI model. To do so, click **Train** near the top of the page. You must have at least 15 images per tag type to train your model (we recommend using at least 30). Training typically takes about 30 minutes, but it may take longer if your image set is extremely large.
-
- :::image type="content" source="./media/tutorial-nocode-vision/train-model.png" alt-text="Training image selection with train button highlighted.":::
-
-1. When the training has completed, your screen will show your model performance. For more information about evaluating these results, please see the [model evaluation documentation](../cognitive-services/custom-vision-service/get-started-build-detector.md#evaluate-the-detector). After training, you may also wish to [test your model](../cognitive-services/custom-vision-service/test-your-model.md) on additional images and retrain as necessary. Each time you train your model, it will be saved as a new iteration. Reference the [Custom Vision documentation](../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md) for additional information on how to improve model performance.
-
- :::image type="content" source="./media/tutorial-nocode-vision/iteration.png" alt-text="Model training results.":::
-
- > [!NOTE]
- > If you elect to test your model on additional images in Custom Vision, please note that test image file size cannot exceed 4MB.
-
-Once you are satisfied with the performance of your model, close Custom Vision by closing the browser tab.
-
-## Deploy your AI model
-
-1. Go back to your Azure Percept Studio tab and click **Next: Evaluate and deploy** at the bottom of your screen.
-
-1. The **Evaluate and deploy** window will show the performance of your selected model iteration. Select the iteration you would like to deploy to your devkit under the **Model iteration** drop-down menu and click **Deploy model** at the bottom of the screen.
-
- :::image type="content" source="./media/tutorial-nocode-vision/deploy-model-inline.png" alt-text="Model deployment screen." lightbox="./media/tutorial-nocode-vision/deploy-model.png":::
-
-1. After deploying your model, view your device's video stream to see your model inferencing in action.
-
- :::image type="content" source="./media/tutorial-nocode-vision/view-device-stream.png" alt-text="Device stream showing headphone detector in action.":::
-
-After closing this window, you may go back and edit your vision project anytime by clicking **Vision** under **AI Projects** on the Azure Percept Studio homepage and selecting the name of your vision project.
--
-## Improve your model by setting up retraining
-
-After you have trained your model and deployed it to the device, you can improve model performance by setting up retraining parameters to capture more training data. This feature is used to improve a trained model's performance by giving you the ability to capture images based on a probability range. For example, you can set your device to only capture training images when the probability is low. Here is some [additional guidance](../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md) on adding more images and balancing training data.
-
-1. To set up retraining, go back to your **Project**, then to **Project Summary**
-1. In the **Image capture** tab, select **Automatic image capture** and **Set up retraining**.
-1. Set up the automated image capture to collect a large quantity of images at a time by checking the **Automatic image capture** box.
-1. Select your preferred imaging rate under **Capture rate** and the total number of images you would like to collect under **Target**.
-1. In the **set up retraining** section, select the iteration that you would like to capture more training data for, then select the probability range. Only images that meet the probability rate will be uploaded to your project.
-
- :::image type="content" source="./media/tutorial-nocode-vision/vision-image-capture.png" alt-text="image capture.":::
-
-## Clean up resources
-
-If you created a new Azure resource for this tutorial and you no longer wish to develop or use your vision solution, perform the following steps to delete your resource:
-
-1. Go to the [Azure portal](https://portal.azure.com/).
-1. Click on **All resources**.
-1. Click the checkbox next to the resource created during this tutorial. The resource type will be listed as **Cognitive Services**.
-1. Click the **Delete** icon near the top of the screen.
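
If you prefer to script this cleanup, the same deletion can be done with a single Azure CLI call; the resource and group names below are placeholders rather than values from this tutorial:

```bash
# Deletes the Custom Vision (Cognitive Services) resource created for this tutorial.
az cognitiveservices account delete --name myVisionResource --resource-group myResourceGroup
```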
-
-## Video walkthrough
-
-For a visual walkthrough of the steps described above, please see the following video:
-
-</br>
-
-> [!VIDEO https://www.youtube.com/embed/9LvafyazlJM]
-
-</br>
-
-## Next Steps
-
-Next, check out the vision how-to articles for information on additional vision solution features in Azure Percept Studio.
-
azure-percept Vision Solution Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/vision-solution-troubleshooting.md
- Title: Troubleshoot Azure Percept Vision and vision modules
-description: Get troubleshooting tips for some of the more common issues found in the vision AI prototyping experiences.
---- Previously updated : 02/07/2023----
-# Troubleshoot Azure Percept Vision and vision modules
---
-This article provides information on troubleshooting no-code vision solutions in Azure Percept Studio.
-
-## Delete a vision project
-
-1. Go to the [Custom Vision projects](https://www.customvision.ai/projects) page.
-
-1. Hover over the project you want to delete, and select the trash can icon to delete the project.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/vision-delete-project.png" alt-text="Screenshot that shows the Projects page in Custom Vision with the delete icon highlighted.":::
-
-## Check which modules are on a device
-
-1. Go to the [Azure portal](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_Iothub=aduprod&microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_ADUHidden#home).
-
-1. Select the **Iot Hub** icon.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/vision-iot-hub-2-inline.png" alt-text="Screenshot that shows the Azure portal home page with the Iot Hub icon highlighted." lightbox= "./media/vision-solution-troubleshooting/vision-iot-hub-2.png":::
-
-1. Select the IoT hub that your target device is connected to.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/vision-iot-hub.png" alt-text="Screenshot that shows a list of IoT hubs.":::
-
-1. Select **IoT Edge**, and select your device under the **Device ID** tab.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/vision-iot-edge.png" alt-text="Screenshot that shows the IoT Edge home page.":::
-
-1. Your device modules appear in a list on the **Modules** tab.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/vision-device-modules-inline.png" alt-text="Screenshot that shows the IoT Edge page for the selected device showing the Modules tab contents." lightbox= "./media/vision-solution-troubleshooting/vision-device-modules.png":::
-
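
If the Azure IoT extension for the Azure CLI is installed, the same module list is also available from the command line; the hub and device names below are placeholders:

```bash
# Lists the IoT Edge modules registered on a device.
az iot hub module-identity list --hub-name myHub --device-id myDevice --output table
```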
-## Delete a device
-
-1. Go to the [Azure portal](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_Iothub=aduprod&microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_ADUHidden#home).
-
-1. Select the **Iot Hub** icon.
-
-1. Select the IoT hub that your target device is connected to.
-
-1. Select **IoT Edge**, and select the checkbox next to your target device ID. Select **Delete** to delete your device.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/vision-delete-device.png" alt-text="Screenshot that shows the Delete button highlighted on the IoT Edge home page.":::
-
-## Check the runtime status of azureeyemodule
-
-If there's a problem with **WebStreamModule**, ensure that **azureeyemodule**, which handles the vision model inferencing, is running. To check the runtime status:
-
-1. Go to the [Azure portal](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_Iothub=aduprod&microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_ADUHidden#home), and go to **All resources** > *\<your IoT hub>* > **IoT Edge** > *\<your device ID>*.
-1. Select the **Modules** tab to see the runtime status of all installed modules.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/over-the-air-iot-edge-device-page-inline.png" alt-text="Screenshot that shows the device module runtime status screen." lightbox= "./media/vision-solution-troubleshooting/over-the-air-iot-edge-device-page.png":::
-
-1. If the runtime status of **azureeyemodule** isn't listed as **running**, select **Set modules** > **azureeyemodule**.
-1. On the **Module Settings** page, set **Desired Status** to **running**, and select **Update**.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/firmware-desired-status-stopped.png" alt-text="Screenshot that shows the Module Settings configuration screen.":::
-
-## Change how often messages are sent from the azureeyemodule
-
-Your subscription tier may cap the number of messages that can be sent from your device to IoT Hub. For instance, the Free Tier will limit the number of messages to 8,000 per day. Once that limit is reached, your azureeyemodule will stop functioning and you may receive this error:
-
-|Error message|
-||
-|*Total number of messages on IotHub 'xxxxxxxxx' exceeded the allocated quota. Max allowed message count: '8000', current message count: 'xxxx'. Send and Receive operations are blocked for this hub until the next UTC day. Consider increasing the units for this hub to increase the quota.*|
-
-Using the azureeyemodule module twin, it's possible to change the interval rate for how often messages are sent. The interval value is the time between messages, in milliseconds, so the larger the number, the more time there is between each message. For example, an interval rate of 12,000 means one message is sent every 12 seconds. For a model that runs the entire day, that works out to 7,200 messages per day (86,400,000 ms per day divided by 12,000 ms per message), which is under the Free Tier limit. The value that you choose depends on how responsive you need your vision model to be.
-
-> [!NOTE]
-> Changing the message interval rate does not impact the size of each message. The message size depends on a few different factors such as the model type and the number of objects being detected in each message. As such, it is difficult to determine message size.
-
-Follow these steps to update the message interval:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_Iothub=aduprod#home), and open **All resources**.
-
-1. On the **All resources** page, select the name of the IoT hub that was provisioned to your development kit during setup.
-
-1. On the left side of the **IoT Hub** page, under **Automatic Device Management**, select **IoT Edge**. On the IoT Edge devices page, find the device ID of your development kit. Select the device ID of your development kit to open its IoT Edge device page.
-
-1. On the **Modules** tab, select **azureeyemodule**.
-
-1. On the **azureeyemodule** page, open **Module Identity Twin**.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/module-page-inline.png" alt-text="Screenshot of a module page." lightbox= "./media/vision-solution-troubleshooting/module-page.png":::
-
-1. Scroll down to **properties**.
-1. Find **TelemetryInterval** and replace it with **TelemetryIntervalNeuralNetworkMs**.
-
- :::image type="content" source="./media/vision-solution-troubleshooting/module-identity-twin-inline-02.png" alt-text="Screenshot of Module Identity Twin properties." lightbox= "./media/vision-solution-troubleshooting/module-identity-twin.png":::
-
-1. Update the **TelemetryIntervalNeuralNetworkMs** value to the value you need.
-
-1. Select the **Save** icon.
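
After the change, the desired properties of the azureeyemodule twin contain the new setting. A minimal sketch (the value 12000 is only an example, one message every 12 seconds):

```json
{
  "properties": {
    "desired": {
      "TelemetryIntervalNeuralNetworkMs": 12000
    }
  }
}
```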
-
-## View device RTSP video stream
-
-View your device's RTSP video stream in [Azure Percept Studio](./how-to-view-video-stream.md) or [VLC media player](https://www.videolan.org/vlc/index.html).
-
-To open the RTSP stream in VLC media player, go to **Media** > **Open network stream** > **rtsp://[device IP address]:8554/result**.
-
-If your RTSP stream is partially blocked by a gray box, you may be trying to view it over a poor network connection. Check that your connection has sufficient bandwidth for video streams.
-
-## Next steps
-
-For more information on troubleshooting your Azure Percept DK instance, see the [General troubleshooting guide](./troubleshoot-dev-kit.md).
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
You need a [Bicep file](./quickstart-create-bicep-use-visual-studio-code.md) tha
You can use Azure Resource Group Deployment task or Azure CLI task to deploy a Bicep file.
-### Use Azure Resource Group Deployment task
+### Use Azure Resource Manager Template Deployment task
-Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Group Deployment task](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment):
+Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3).
```yml trigger:
steps:
deploymentName: 'DeployPipelineTemplate' ```
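For orientation, a trimmed sketch of such a pipeline step follows; the service connection, subscription ID, resource group, and file path are placeholders rather than values from this article:

```yml
steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'  # placeholder service connection
    subscriptionId: '00000000-0000-0000-0000-000000000000'   # placeholder subscription ID
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'exampleRG'
    location: 'eastus'
    templateLocation: 'Linked artifact'
    csmFile: 'main.bicep'          # recent task versions accept a Bicep file here
    deploymentMode: 'Incremental'
    deploymentName: 'DeployPipelineTemplate'
```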
-For the descriptions of the task inputs, see [Azure Resource Group Deployment task](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment).
+For the descriptions of the task inputs, see [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3).
Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status. ### Use Azure CLI task
-Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli):
+Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2):
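For orientation, a trimmed sketch of such a step follows; the service connection, resource group, and file name are placeholders:

```yml
steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'  # placeholder service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az group create --name exampleRG --location eastus
      az deployment group create --resource-group exampleRG --template-file main.bicep
```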
```yml trigger:
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 02/28/2022 Last updated : 02/28/2023
The resources providers that are marked with **- registered** are registered by
## Registration
-The resources providers above that are marked with **- registered** are registered by default for your subscription. To use the other resource providers, you must [register them](resource-providers-and-types.md). However, many resource providers are registered for you when you take certain actions. For example, if you create a resource through the portal, the portal automatically registers any unregistered resource providers that are needed. When deploy resources through an [Azure Resource Manager template](../templates/overview.md), any required resource providers are also registered.
+The resource providers in the preceding section that are marked with **- registered** are registered by default for your subscription. To use the other resource providers, you must [register them](resource-providers-and-types.md). However, many resource providers are registered for you when you take certain actions. For example, if you create a resource through the portal, the portal automatically registers any unregistered resource providers that are needed. When you deploy resources through an [Azure Resource Manager template](../templates/overview.md), any required resource providers are also registered.
> [!IMPORTANT] > Only register a resource provider when you're ready to use it. The registration step enables you to maintain least privileges within your subscription. A malicious user can't use resource providers that aren't registered.
+>
+> When you register resource providers that aren't needed, you may see apps in your Azure Active Directory tenant that you don't recognize. Microsoft adds the app for a resource provider when you register it. These applications are typically added by Windows Azure Service Management API. To avoid having unnecessary apps in your tenant, only register resource providers that are needed.
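
For reference, registering a provider (and checking its state) is a one-line operation in the Azure CLI; the namespace below is only an example:

```bash
az provider register --namespace Microsoft.Cdn
az provider show --namespace Microsoft.Cdn --query "registrationState" --output tsv
```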
## Find resource provider
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Applying locks can lead to unexpected results. Some operations, which don't seem
- The Storage Account API exposes [data plane](control-plane-and-data-plane.md#data-plane) and [control plane](control-plane-and-data-plane.md#control-plane) operations. If a request uses **data plane** operations, the lock on the storage account doesn't protect blob, queue, table, or file data within that storage account. If the request uses **control plane** operations, however, the lock protects those resources. For example, if a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), which is a control plane operation, the deletion fails. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use a control plane operation.
+
+- A read-only lock or cannot-delete lock on a **network security group (NSG)** prevents the creation of a traffic flow log for the NSG.
- A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access.
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
Learn how to use Azure CLI with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using Azure CLI](manage-resources-cli.md).
-Other articles about managing resource groups:
--- [Manage Azure resource groups by using the Azure portal](manage-resources-portal.md)-- [Manage Azure resource groups by using Azure PowerShell](manage-resources-powershell.md)- ## What is a resource group A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using the Azure portal](manage-resources-portal.md).
-Other articles about managing resource groups:
--- [Manage Azure resource groups by using Azure CLI](manage-resources-cli.md)-- [Manage Azure resource groups by using Azure PowerShell](manage-resources-powershell.md)- [!INCLUDE [Handle personal data](../../../includes/gdpr-intro-sentence.md)] ## What is a resource group
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
Learn how to use Azure PowerShell with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using Azure PowerShell](manage-resources-powershell.md).
-Other articles about managing resource groups:
--- [Manage Azure resource groups by using the Azure portal](manage-resources-portal.md)-- [Manage Azure resource groups by using Azure CLI](manage-resources-cli.md)- ## What is a resource group A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
For more information about deploying a Bicep file, see [Deploy resources with Bi
## Lock resource groups
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources..
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources.
To prevent a resource group and its resources from being deleted, use [New-AzResourceLock](/powershell/module/az.resources/new-azresourcelock).
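
As a quick illustration, a CanNotDelete lock on a resource group looks like the following; the lock and group names are placeholders:

```powershell
# Prevent deletion of everything in the resource group.
New-AzResourceLock -LockName "LockGroup" -LockLevel CanNotDelete -ResourceGroupName "exampleGroup"

# Review the locks that apply to the resource group.
Get-AzResourceLock -ResourceGroupName "exampleGroup"
```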
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 10/05/2022 Last updated : 02/28/2023 # What is Azure Resource Manager?
There are some important factors to consider when defining your resource group:
You may be wondering, "Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?" The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.+
+ To ensure state consistency for the resource group, all [control plane operations](./control-plane-and-data-plane.md) are routed through the resource group's location. When selecting a resource group location, we recommend that you select a location close to where your control operations originate. Typically, this location is the one closest to your current location. This routing requirement only applies to control plane operations for the resource group. It doesn't affect requests that are sent to your applications.
If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them. This condition doesn't apply to global resources like Azure Content Delivery Network, Azure DNS, Azure DNS Private Zones, Azure Traffic Manager, and Azure Front Door.
azure-signalr Signalr Tutorial Authenticate Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md
Title: "Tutorial: Authentication with Azure Functions - Azure SignalR" description: In this tutorial, you learn how to authenticate Azure SignalR Service clients for Azure Functions binding-+ Previously updated : 08/17/2022- Last updated : 02/16/2023+ ms.devlang: javascript
A step by step tutorial to build a chat room with authentication and private mes
* [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages * [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients
-* [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Host the static website for the chat client UI
+* [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions
### Prerequisites
-The following software is required to build this tutorial.
-
-* [Git](https://git-scm.com/downloads)
-* [Node.js](https://nodejs.org/en/download/) (Version 10.x)
-* [.NET SDK](https://www.microsoft.com/net/download) (Version 2.x, required for Functions extensions)
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools) (Version 2)
-* [Visual Studio Code](https://code.visualstudio.com/) (VS Code) with the following extensions
- * [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) - work with Azure Functions in VS Code
- * [Live Server](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer) - serve web pages locally for testing
-
-[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-
-## Sign into the Azure portal
-
-Go to the [Azure portal](https://portal.azure.com/) and sign in with your credentials.
+* An Azure account with an active subscription.
+ * If you don't have one, you can [create one for free](https://azure.microsoft.com/free/).
+* [Node.js](https://nodejs.org/en/download/) (Version 18.x)
+* [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4)
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-## Create an Azure SignalR Service instance
-
-You will build and test the Azure Functions app locally. The app will access a SignalR Service instance in Azure that needs to be created ahead of time.
+## Create essential resources on Azure
+### Create an Azure SignalR service resource
-1. Click on the **Create a resource** (**+**) button for creating a new Azure resource.
+Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal.
-1. Search for **SignalR Service** and select it. Click **Create**.
+1. Select the **Create a resource** (**+**) button to create a new Azure resource.
- ![New SignalR Service](media/signalr-tutorial-authenticate-azure-functions/signalr-quickstart-new.png)
+1. Search for **SignalR Service** and select it.
+1. Select **Create**.
1. Enter the following information. | Name | Value | |||
- | Resource name | A unique name for the SignalR Service instance |
- | Resource group | Create a new resource group with a unique name |
- | Location | Select a location close to you |
- | Pricing Tier | Free |
+ | **Resource group** | Create a new resource group with a unique name |
+ | **Resource name** | A unique name for the SignalR Service instance |
+ | **Region** | Select a region close to you |
+ | **Pricing Tier** | Free |
+ | **Service mode** | Serverless |
-1. Click **Create**.
+1. Select **Review + Create**.
+1. Select **Create**.
-1. After the instance is deployed, open it in the portal and locate its Settings page. Change the Service Mode setting to *Serverless*.
- ![SignalR Service Mode](media/signalr-concept-azure-functions/signalr-service-mode.png)
-
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-## Initialize the function app
+### Create an Azure Function App and an Azure Storage account
-### Create a new Azure Functions project
+1. From the home page in the Azure portal, select **Create a resource** (**+**).
-1. In a new VS Code window, use `File > Open Folder` in the menu to create and open an empty folder in an appropriate location. This will be the main project folder for the application that you will build.
+1. Search for **Function App** and select it.
+1. Select **Create**.
-1. Using the Azure Functions extension in VS Code, initialize a Function app in the main project folder.
- 1. Open the Command Palette in VS Code by selecting **View > Command Palette** from the menu (shortcut `Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
- 1. Search for the **Azure Functions: Create New Project** command and select it.
- 1. The main project folder should appear. Select it (or use "Browse" to locate it).
- 1. In the prompt to choose a language, select **JavaScript**.
-
- ![Create a function app](media/signalr-tutorial-authenticate-azure-functions/signalr-create-vscode-app.png)
+1. Enter the following information.
-### Install function app extensions
+ | Name | Value |
+ |||
+ | **Resource group** | Use the same resource group with your SignalR Service instance |
+ | **Function App name** | A unique name for the Function app instance |
+ | **Runtime stack** | Node.js |
+ | **Region** | Select a region close to you |
-This tutorial uses Azure Functions bindings to interact with Azure SignalR Service. Like most other bindings, the SignalR Service bindings are available as an extension that needs to be installed using the Azure Functions Core Tools CLI before they can be used.
+1. By default, a new Azure Storage account will also be created in the same resource group as your function app. If you want to use another storage account with the function app, switch to the **Hosting** tab to choose an account.
-1. Open a terminal in VS Code by selecting **View > Terminal** from the menu (Ctrl-\`).
+1. Select **Review + Create**, then select **Create**.
-1. Ensure the main project folder is the current directory.
+## Create an Azure Functions project locally
+### Initialize a function app
-1. Install the SignalR Service function app extension.
+1. From a command line, create a root folder for your project and change to the folder.
- ```bash
- func extensions install -p Microsoft.Azure.WebJobs.Extensions.SignalRService -v 1.0.0
- ```
+1. Execute the following command in your terminal to create a new JavaScript Functions project.
+```
+func init --worker-runtime node --language javascript --name my-app
+```
+By default, the generated project includes a *host.json* file containing the extension bundles which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles).
### Configure application settings
-When running and debugging the Azure Functions runtime locally, application settings are read from **local.settings.json**. Update this file with the connection string of the SignalR Service instance that you created earlier.
+When running and debugging the Azure Functions runtime locally, application settings are read by the function app from *local.settings.json*. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier.
-1. In VS Code, select **local.settings.json** in the Explorer pane to open it.
-
-1. Replace the file's contents with the following.
+1. Replace the content of *local.settings.json* with the following code:
```json { "IsEncrypted": false, "Values": {
- "AzureSignalRConnectionString": "<signalr-connection-string>",
- "WEBSITE_NODE_DEFAULT_VERSION": "10.14.1",
- "FUNCTIONS_WORKER_RUNTIME": "node"
- },
- "Host": {
- "LocalHttpPort": 7071,
- "CORS": "http://127.0.0.1:5500",
- "CORSCredentials": true
+ "FUNCTIONS_WORKER_RUNTIME": "node",
+ "AzureWebJobsStorage": "<your-storage-account-connection-string>",
+ "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>"
} } ```
- * Enter the Azure SignalR Service connection string into a setting named `AzureSignalRConnectionString`. Obtain the value from the **Keys** page in the Azure SignalR Service resource in the Azure portal; either the primary or secondary connection string can be used.
- * The `WEBSITE_NODE_DEFAULT_VERSION` setting is not used locally, but is required when deployed to Azure.
- * The `Host` section configures the port and CORS settings for the local Functions host (this setting has no effect when running in Azure).
+ * Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting.
- > [!NOTE]
- > Live Server is typically configured to serve content from `http://127.0.0.1:5500`. If you find that it is using a different URL or you are using a different HTTP server, change the `CORS` setting to reflect the correct origin.
+ Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string.
- ![Get SignalR Service key](media/signalr-tutorial-authenticate-azure-functions/signalr-get-key.png)
+ * Enter the storage account connection string into the `AzureWebJobsStorage` setting.
+
+ Navigate to your storage account in the Azure portal. In the **Security + networking** section, locate the **Access keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string.
-1. Save the file.
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-## Create a function to authenticate users to SignalR Service
+### Create a function to authenticate users to SignalR Service
-When the chat app first opens in the browser, it requires valid connection credentials to connect to Azure SignalR Service. You'll create an HTTP triggered function named *negotiate* in your function app to return this connection information.
+When the chat app first opens in the browser, it requires valid connection credentials to connect to Azure SignalR Service. You'll create an HTTP triggered function named `negotiate` in your function app to return this connection information.
> [!NOTE]
-> This function must be named *negotiate* as the SignalR client requires an endpoint that ends in `/negotiate`.
-
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
+> This function must be named `negotiate` as the SignalR client requires an endpoint that ends in `/negotiate`.
-1. Search for and select the **Azure Functions: Create Function** command.
-
-1. When prompted, provide the following information.
-
- | Name | Value |
- |||
- | Function app folder | Select the main project folder |
- | Template | HTTP Trigger |
- | Name | negotiate |
- | Authorization level | Anonymous |
+1. From the root project folder, create the `negotiate` function from a built-in template with the following command.
+ ```bash
+ func new --template "SignalR negotiate HTTP trigger" --name negotiate
+ ```
- A folder named **negotiate** is created that contains the new function.
+1. Open *negotiate/function.json* to view the function binding configuration.
-1. Open **negotiate/function.json** to configure bindings for the function. Modify the content of the file to the following. This adds an input binding that generates valid credentials for a client to connect to an Azure SignalR Service hub named `chat`.
+ The function contains an HTTP trigger binding to receive requests from SignalR clients and a SignalR input binding to generate valid credentials for a client to connect to an Azure SignalR Service hub named `default`.
```json {
When the chat app first opens in the browser, it requires valid connection crede
"authLevel": "anonymous", "type": "httpTrigger", "direction": "in",
- "name": "req"
+ "methods": ["post"],
+ "name": "req",
+ "route": "negotiate"
}, { "type": "http",
When the chat app first opens in the browser, it requires valid connection crede
{ "type": "signalRConnectionInfo", "name": "connectionInfo",
- "userId": "",
- "hubName": "chat",
+ "hubName": "default",
+ "connectionStringSetting": "AzureSignalRConnectionString",
"direction": "in" } ] } ```
- The `userId` property in the `signalRConnectionInfo` binding is used to create an authenticated SignalR Service connection. Leave the property blank for local development. You will use it when the function app is deployed to Azure.
+ There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure.
+
+1. Close the *negotiate/function.json* file.
-1. Open **negotiate/index.js** to view the body of the function. Modify the content of the file to the following.
+++
+1. Open *negotiate/index.js* to view the body of the function.
```javascript module.exports = async function (context, req, connectionInfo) {
When the chat app first opens in the browser, it requires valid connection crede
}; ```
- This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client will use this information to connect to the SignalR Service instance.
+ This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance.
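
   The body amounts to just a few lines, roughly like the following sketch (not necessarily the template's exact output):

   ```javascript
   module.exports = async function (context, req, connectionInfo) {
     // Return the connection info (service URL and access token) produced by the
     // SignalR input binding so the client can connect to the service.
     context.res = { body: connectionInfo };
   };
   ```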
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-## Create a function to send chat messages
-
-The web app also requires an HTTP API to send chat messages. You will create an HTTP triggered function named *SendMessage* that sends messages to all connected clients using SignalR Service.
+### Create a function to send chat messages
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
-
-1. Search for and select the **Azure Functions: Create Function** command.
-
-1. When prompted, provide the following information.
-
- | Name | Value |
- |||
- | Function app folder | select the main project folder |
- | Template | HTTP Trigger |
- | Name | SendMessage |
- | Authorization level | Anonymous |
+The web app also requires an HTTP API to send chat messages. You'll create an HTTP triggered function named `sendMessage` that sends messages to all connected clients using SignalR Service.
- A folder named **SendMessage** is created that contains the new function.
+1. From the root project folder, create an HTTP trigger function named `sendMessage` from the template with the command:
+ ```bash
+ func new --name sendMessage --template "Http trigger"
+ ```
-1. Open **SendMessage/function.json** to configure bindings for the function. Modify the content of the file to the following.
+1. To configure bindings for the function, replace the content of *sendMessage/function.json* with the following code:
```json { "disabled": false,
The web app also requires an HTTP API to send chat messages. You will create an
"direction": "in", "name": "req", "route": "messages",
- "methods": [
- "post"
- ]
+ "methods": ["post"]
}, { "type": "http",
The web app also requires an HTTP API to send chat messages. You will create an
{ "type": "signalR", "name": "$return",
- "hubName": "chat",
+ "hubName": "default",
"direction": "out" } ] } ```
- This makes two changes to the original function:
- * Changes the route to `messages` and restricts the HTTP trigger to the **POST** HTTP method.
- * Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `chat`.
-
-1. Save the file.
+ Two changes are made to the original file:
+ * Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method.
+ * Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`.
-1. Open **SendMessage/index.js** to view the body of the function. Modify the content of the file to the following.
+1. Replace the content of *sendMessage/index.js* with the following code:
```javascript module.exports = async function (context, req) {
The web app also requires an HTTP API to send chat messages. You will create an
This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client.
- The function can read the sender's identity and can accept a *recipient* value in the message body to allow for a message to be sent privately to a single user. These functionalities will be used later in the tutorial.
-
-1. Save the file.
-
-[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-
-## Create and run the chat client web user interface
-
-The chat application's UI is a simple single page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). It will be hosted separately from the function app. Locally, you will run the web interface using the Live Server VS Code extension.
-
-1. In VS Code, create a new folder named **content** at the root of the main project folder.
-
-1. In the **content** folder, create a new file named **index.html**.
-
-1. Copy and paste the content of **[index.html](https://github.com/Azure-Samples/signalr-service-quickstart-serverless-chat/blob/2720a9a565e925db09ef972505e1c5a7a3765be4/docs/demo/chat-with-auth/index.html)**.
+ The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial.
1. Save the file.
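
   For orientation, the body of such a function can look roughly like the sketch below. The `x-ms-client-principal-name` header is the one App Service Authentication adds for signed-in users; treat the snippet as illustrative rather than the article's verbatim code.

   ```javascript
   module.exports = async function (context, req) {
     const message = req.body;
     // Sender identity, if App Service Authentication is enabled (empty when running locally).
     message.sender = req.headers['x-ms-client-principal-name'] || '';

     // If the caller named a recipient, target only that user; otherwise broadcast.
     let recipientUserId = '';
     if (message.recipient) {
       recipientUserId = message.recipient;
       message.isPrivate = true;
     }

     // Returning this object routes it through the SignalR output binding and
     // invokes "newMessage" on the targeted client(s).
     return {
       userId: recipientUserId,
       target: 'newMessage',
       arguments: [message]
     };
   };
   ```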
-1. Press **F5** to run the function app locally and attach a debugger.
-
-1. With **https://docsupdatetracker.net/index.html** open, start Live Server by opening the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`) and selecting **Live Server: Open with Live Server**. Live Server will open the application in a browser.
-
-1. The application opens. Enter a message in the chat box and press enter. Refresh the application to see new messages. Because no authentication was configured, all messages will be sent as "anonymous".
- [Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-## Deploy to Azure and enable authentication
-
-You have been running the function app and chat application locally. You will now deploy them to Azure and enable authentication and private messaging in the application.
-
-### Log into Azure with VS Code
+### Host the chat client web user interface
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
+The chat application's UI is a simple single-page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). For simplicity, the function app hosts the web page. In a production environment, you can use [Static Web Apps](https://azure.microsoft.com/products/app-service/static) to host the web page.
-1. Search for and select the **Azure: Sign in** command.
+1. Create a new folder named *content* in the root directory of your function project.
+1. In the *content* folder, create a new file named *index.html*.
-1. Follow the instructions to complete the sign in process in your browser.
+1. Copy and paste the content of [index.html](https://github.com/aspnet/AzureSignalR-samples/blob/da0aca70f490f3d8f4c220d0c88466b6048ebf65/samples/ServerlessChatWithAuth/content/index.html) to your file. Save the file.
-### Create a Storage account
+1. From the root project folder, create an HTTP trigger function named `index` from the template with the command:
+ ```bash
+ func new --name index --template "Http trigger"
+ ```
-An Azure Storage account is required by a function app running in Azure. You will also host the web page for the chat UI using the static websites feature of Azure Storage.
+1. Modify the content of `index/index.js` to the following:
+ ```js
+ const fs = require('fs');
-1. In the Azure portal, click on the **Create a resource** (**+**) button for creating a new Azure resource.
+ module.exports = async function (context, req) {
+ const fileContent = fs.readFileSync('content/index.html', 'utf8');
-1. Select the **Storage** category, then select **Storage account**.
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: fileContent,
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ };
+ }
+ ```
+ The function reads the static web page and returns it to the user.
-1. Enter the following information.
+1. Open *index/function.json*, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this:
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
- | Name | Value |
- |||
- | Subscription | Select the subscription containing the SignalR Service instance |
- | Resource group | Select the same resource group |
- | Resource name | A unique name for the Storage account |
- | Location | Select the same location as your other resources |
- | Performance | Standard |
- | Account kind | StorageV2 (general purpose V2) |
- | Replication | Locally-redundant storage (LRS) |
- | Access Tier | Hot |
+1. Now you can test your app locally. Start the function app with the command:
+ ```bash
+ func start
+ ```
-1. Click **Review + create**, then **Create**.
+1. Open **http://localhost:7071/api/index** in your web browser. You should be able to see a web page as follows:
-### Configure static websites
+ :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface.":::
-1. After the Storage account is created, open it in the Azure portal.
+1. Enter a message in the chat box and press enter.
-1. Select **Static website**.
+ The message is displayed on the web page. Because the user name of the SignalR client isn't set, we send all messages as "anonymous".
-1. Select **Enabled** to enable the static website feature.
-1. In **Index document name**, enter *index.html*.
+[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
-1. Click **Save**.
+## Deploy to Azure and enable authentication
-1. A **Primary endpoint** appears. Note this value. It will be required to configure the function app.
+You have been running the function app and chat application locally. You'll now deploy them to Azure and enable authentication and private messaging in the application.
### Configure function app for authentication
-So far, the chat app works anonymously. In Azure, you will use [App Service Authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user can be passed to the *SignalRConnectionInfo* binding to generate connection information that is authenticated as the user.
-
-When a sending message, the app can decide whether to send it to all connected clients, or only to the clients that have been authenticated to a given user.
+So far, the chat app works anonymously. In Azure, you'll use [App Service Authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user is passed to the `SignalRConnectionInfo` binding to generate connection information authenticated as the user.
-1. In VS Code, open **negotiate/function.json**.
+1. Open *negotiate/function.json*.
-1. Insert a [binding expression](../azure-functions/functions-triggers-bindings.md) into the *userId* property of the *SignalRConnectionInfo* binding: `{headers.x-ms-client-principal-name}`. This sets the value to the username of the authenticated user. The attribute should now look like this.
+1. Insert a `userId` property to the `SignalRConnectionInfo` binding with value `{headers.x-ms-client-principal-name}`. This value is a [binding expression](../azure-functions/functions-triggers-bindings.md) that sets the user name of the SignalR client to the name of the authenticated user. The binding should now look like this.
```json { "type": "signalRConnectionInfo", "name": "connectionInfo", "userId": "{headers.x-ms-client-principal-name}",
- "hubName": "chat",
+ "hubName": "default",
"direction": "in" } ```
When a sending message, the app can decide whether to send it to all connected c
### Deploy function app to Azure
+Deploy the function app to Azure with the following command:
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`) and select **Azure Functions: Deploy to Function App**.
-
-1. When prompted, provide the following information.
-
- | Name | Value |
- |||
- | Folder to deploy | Select the main project folder |
- | Subscription | Select your subscription |
- | Function app | Select **Create New Function App** |
- | Function app name | Enter a unique name |
- | Resource group | Select the same resource group as the SignalR Service instance |
- | Storage account | Select the storage account you created earlier |
-
- A new function app is created in Azure and the deployment begins. Wait for the deployment to complete.
-
-### Upload function app local settings
-
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
-
-1. Search for and select the **Azure Functions: Upload local settings** command.
-
-1. When prompted, provide the following information.
-
- | Name | Value |
- |||
- | Local settings file | local.settings.json |
- | Subscription | Select your subscription |
- | Function app | Select the previously deployed function app |
+```bash
+func azure functionapp publish <your-function-app-name> --publish-local-settings
+```
-Local settings are uploaded to the function app in Azure. If prompted to overwrite existing settings, select **Yes to all**.
+The `--publish-local-settings` option publishes your local settings from the *local.settings.json* file to Azure, so you don't need to configure them in Azure again.
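If you prefer to configure the settings in Azure directly instead of publishing *local.settings.json*, you can set them with a command like the following. The names are placeholders; `AzureSignalRConnectionString` is the default setting name used by the SignalR Service bindings.

```bash
# Set the SignalR Service connection string as an app setting on the deployed function app.
az functionapp config appsettings set \
  --name <your-function-app-name> \
  --resource-group <your-resource-group> \
  --settings "AzureSignalRConnectionString=<your-signalr-connection-string>"
```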
### Enable App Service Authentication
-App Service Authentication supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google.
+Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial.
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
+1. Go to the resource page of your function app on Azure portal.
+1. Select **Settings** -> **Authentication**.
+1. Select **Add identity provider**.
+ :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page.":::
-1. Search for and select the **Azure Functions: Open in portal** command.
+1. Select **Microsoft** from the **Identity provider** list.
+ :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page.":::
-1. Select the subscription and function app name to open the function app in the Azure portal.
-
-1. In the function app that was opened in the portal, locate the **Platform features** tab, select **Authentication/Authorization**.
-
-1. Turn **On** App Service Authentication.
-
-1. In **Action to take when request is not authenticated**, select "Log in with {authentication provider you selected earlier}".
-
-1. In **Allowed External Redirect URLs**, enter the URL of your storage account primary web endpoint that you previously noted.
-
-1. Follow the documentation for the login provider of your choice to complete the configuration.
+ Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles:
- [Azure Active Directory](../app-service/configure-authentication-provider-aad.md) - [Facebook](../app-service/configure-authentication-provider-facebook.md)
App Service Authentication supports authentication with Azure Active Directory,
- [Microsoft account](../app-service/configure-authentication-provider-microsoft.md) - [Google](../app-service/configure-authentication-provider-google.md)
-### Update the web app
-
-1. In the Azure portal, navigate to the function app's overview page.
-
-1. Copy the function app's URL.
-
- ![Get URL](media/signalr-tutorial-authenticate-azure-functions/signalr-get-url.png)
-
-1. In VS Code, open **index.html** and replace the value of `apiBaseUrl` with the function app's URL.
-
-1. The application can be configured with authentication using Azure Active Directory, Facebook, Twitter, Microsoft account, or Google. Select the authentication provider that you will use by setting the value of `authProvider`.
-
-1. Save the file.
-
-### Deploy the web application to blob storage
-
-The web application will be hosted using Azure Blob Storage's static websites feature.
-
-1. Open the VS Code command palette (`Ctrl-Shift-P`, macOS: `Cmd-Shift-P`).
-
-1. Search for and select the **Azure Storage: Deploy to Static Website** command.
-
-1. Enter the following values:
-
- | Name | Value |
- |||
- | Subscription | Select your subscription |
- | Storage account | Select the storage account you created earlier |
- | Folder to deploy | Select **Browse** and select the *content* folder |
-
-The files in the *content* folder should now be deployed to the static website.
-
-### Enable function app cross origin resource sharing (CORS)
-
-Although there is a CORS setting in **local.settings.json**, it is not propagated to the function app in Azure. You need to set it separately.
-
-1. Open the function app in the Azure portal.
-
-1. Under the **Platform features** tab, select **CORS**.
-
- ![Find CORS](media/signalr-tutorial-authenticate-azure-functions/signalr-find-cors.png)
-
-1. In the *Allowed origins* section, add an entry with the static website *primary endpoint* as the value (remove the trailing */*).
-
-1. In order for the SignalR JavaScript SDK call your function app from a browser, support for credentials in CORS must be enabled. Select the "Enable Access-Control-Allow-Credentials" checkbox.
-
- ![Enable Access-Control-Allow-Credentials](media/signalr-tutorial-authenticate-azure-functions/signalr-cors-credentials.png)
-
-1. Click **Save** to persist the CORS settings.
+1. Select **Add** to complete the settings. An app registration will be created, which associates your identity provider with your function app.
### Try the application
-1. In a browser, navigate to the storage account's primary web endpoint.
+1. Open **https://\<YOUR-FUNCTION-APP-NAME\>.azurewebsites.net/api/index**
1. Select **Login** to authenticate with your chosen authentication provider.
Although there is a CORS setting in **local.settings.json**, it is not propagate
1. Send private messages by clicking on a username in the chat history. Only the selected recipient will receive these messages.
-Congratulations! You have deployed a real-time, serverless chat app!
-![Demo](media/signalr-tutorial-authenticate-azure-functions/signalr-serverless-chat.gif)
+Congratulations! You've deployed a real-time, serverless chat app!
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
Congratulations! You have deployed a real-time, serverless chat app!
To clean up the resources created in this tutorial, delete the resource group using the Azure portal.
+>[!CAUTION]
+> Deleting the resource group deletes all resources contained within it. If the resource group contains resources outside the scope of this tutorial, they will also be deleted.
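If you prefer the Azure CLI to the portal, a command like the following deletes the resource group and everything in it (the group name is a placeholder):

```bash
# Delete the resource group and all resources it contains.
az group delete --name <your-resource-group> --yes --no-wait
```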
+ [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Next steps
To clean up the resources created in this tutorial, delete the resource group us
In this tutorial, you learned how to use Azure Functions with Azure SignalR Service. Read more about building real-time serverless applications with SignalR Service bindings for Azure Functions. > [!div class="nextstepaction"]
-> [Build Real-time Apps with Azure Functions](signalr-concept-azure-functions.md)
+> [Real-time apps with Azure SignalR Service and Azure Functions](signalr-concept-azure-functions.md)
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/27/2023 Last updated : 02/28/2023
Before you begin the prerequisites, review the [Performance best practices](#per
## Supported regions
-Azure VMware Solution is currently supported in the following regions:
+Azure NetApp Files datastores for Azure VMware Solution are currently supported in the following regions:
* Australia East * Australia Southeast
Azure VMware Solution is currently supported in the following regions:
* South Central US * Southeast Asia * Sweden Central
-* Sweden North
+* Switzerland North
* Switzerland West * UK South * UK West
Azure VMware Solution is currently supported in the following regions:
* West US * West US 2 + ## Performance best practices There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes.
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
For full interconnectivity to your private cloud, you need to enable ExpressRout
> [!IMPORTANT] > Customers should not advertise bogon routes over ExpressRoute from on-premises or their Azure VNET. Examples of bogon routes include 0.0.0.0/5 or 192.0.0.0/3. +
+## Route advertisement guidelines to Azure VMware Solution
+ You need to follow these guidelines while advertising routes from your on-premises and Azure VNET to Azure VMware Solution over ExpressRoute:
+
+| **Supported** |**Not supported**|
+| | |
+| Default route – 0.0.0.0/0*| Bogon routes. For example: ``0.0.0.0/1, 128.0.0.0/1, 0.0.0.0/5``, or ``192.0.0.0/3``.|
+|RFC-1918 address blocks. For example, ``10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16`` or their subnets (``10.1.0.0/16, 172.24.0.0/16, 192.168.1.0/24``).| Special address block reserved by IANA. For example, ``RFC 6598 - 100.64.0.0/10`` and its subnets. |
+|Customer owned public-IP CIDR block or its subnets.||
+
+> [!NOTE]
+> The customer-advertised default route to Azure VMware Solution can't be used to route back the traffic when the customer accesses Azure VMware Solution management appliances (vCenter, NSX-T Manager, HCX Manager). The customer needs to advertise a more specific route to Azure VMware Solution for that traffic to be routed back.
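One way to verify which routes are actually exchanged over the circuit is to list its route table with the Azure CLI. This is an illustrative command only; the circuit name, resource group, and peering path are placeholders for your own values.

```azurecli-interactive
az network express-route list-route-tables \
    --resource-group <your-resource-group> \
    --name <your-expressroute-circuit> \
    --peering-name AzurePrivatePeering \
    --path primary
```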
++ ## Limitations [!INCLUDE [azure-vmware-solutions-limits](includes/azure-vmware-solutions-limits.md)]
backup Backup Azure Restore Key Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-key-secret.md
Title: Restore Key Vault key & secret for encrypted VM description: Learn how to restore Key Vault key and secret in Azure Backup using PowerShell- Previously updated : 08/28/2017 + Last updated : 02/28/2023
$secretdata = $encryptionObject.OsDiskKeyAndSecretDetails.SecretData
$Secret = ConvertTo-SecureString -String $secretdata -AsPlainText -Force $secretname = 'B3284AAA-DAAA-4AAA-B393-60CAA848AAAA' $Tags = @{'DiskEncryptionKeyEncryptionAlgorithm' = 'RSA-OAEP';'DiskEncryptionKeyFileName' = 'B3284AAA-DAAA-4AAA-B393-60CAA848AAAA.BEK';'DiskEncryptionKeyEncryptionKeyURL' = $encryptionObject.OsDiskKeyAndSecretDetails.KeyUrl;'MachineName' = 'vm-name'}
-Set-AzureKeyVaultSecret -VaultName '<target_key_vault_name>' -Name $secretname -SecretValue $Secret -ContentType 'Wrapped BEK' -Tags $Tags
+Set-AzKeyVaultSecret -VaultName '<target_key_vault_name>' -Name $secretname -SecretValue $Secret -ContentType 'Wrapped BEK' -Tags $Tags
``` **Use these cmdlets if your Linux VM is encrypted using BEK and KEK.**
$secretdata = $encryptionObject.OsDiskKeyAndSecretDetails.SecretData
$Secret = ConvertTo-SecureString -String $secretdata -AsPlainText -Force $secretname = 'B3284AAA-DAAA-4AAA-B393-60CAA848AAAA' $Tags = @{'DiskEncryptionKeyEncryptionAlgorithm' = 'RSA-OAEP';'DiskEncryptionKeyFileName' = 'LinuxPassPhraseFileName';'DiskEncryptionKeyEncryptionKeyURL' = <Key_url_of_newly_restored_key>;'MachineName' = 'vm-name'}
-Set-AzureKeyVaultSecret -VaultName '<target_key_vault_name>' -Name $secretname -SecretValue $Secret -ContentType 'Wrapped BEK' -Tags $Tags
+Set-AzKeyVaultSecret -VaultName '<target_key_vault_name>' -Name $secretname -SecretValue $Secret -ContentType 'Wrapped BEK' -Tags $Tags
``` Use the JSON file generated above to get secret name and value and feed it to set secret cmdlet to put the secret (BEK) back in the key vault. Use these cmdlets if your **VM is encrypted using BEK** only.
Use the JSON file generated above to get secret name and value and feed it to se
```powershell $secretDestination = 'C:\secret.blob' [io.file]::WriteAllBytes($secretDestination, [System.Convert]::FromBase64String($encryptionObject.OsDiskKeyAndSecretDetails.KeyVaultSecretBackupData))
-Restore-AzureKeyVaultSecret -VaultName '<target_key_vault_name>' -InputFile $secretDestination -Verbose
+Restore-AzKeyVaultSecret -VaultName '<target_key_vault_name>' -InputFile $secretDestination -Verbose
``` > [!NOTE]
$secretname = 'B3284AAA-DAAA-4AAA-B393-60CAA848AAAA'
$secretdata = $rp1.KeyAndSecretDetails.SecretData $Secret = ConvertTo-SecureString -String $secretdata -AsPlainText -Force $Tags = @{'DiskEncryptionKeyEncryptionAlgorithm' = 'RSA-OAEP';'DiskEncryptionKeyFileName' = 'B3284AAA-DAAA-4AAA-B393-60CAA848AAAA.BEK';'DiskEncryptionKeyEncryptionKeyURL' = 'https://mykeyvault.vault.azure.net:443/keys/KeyName/84daaac999949999030bf99aaa5a9f9';'MachineName' = 'vm-name'}
-Set-AzureKeyVaultSecret -VaultName '<target_key_vault_name>' -Name $secretname -SecretValue $secret -Tags $Tags -SecretValue $Secret -ContentType 'Wrapped BEK'
+Set-AzKeyVaultSecret -VaultName '<target_key_vault_name>' -Name $secretname -SecretValue $Secret -ContentType 'Wrapped BEK' -Tags $Tags
``` > [!NOTE]
bastion Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/work-remotely-support.md
Title: 'Working remotely using Bastion: Azure Bastion'
-description: Learn how to use Azure Bastion to enable working remotely due to the COVID-19 pandemic.
+ Title: Enable remote work by using Azure Bastion
+description: Learn how to use Azure Bastion to enable remote access to virtual machines.
-# Working remotely using Azure Bastion
+# Enable remote work by using Azure Bastion
-Azure Bastion plays a pivotal role in supporting remote work scenarios by allowing users with internet connectivity to access Azure virtual machines. In particular, it enables IT administrators to manage their applications running on Azure at any time and from anywhere around the globe.
-
->[!NOTE]
->This article describes how you can leverage Azure Bastion, Azure, Microsoft network, and the Azure partner ecosystem to work remotely and mitigate network issues that you are facing because of COVID-19 crisis.
->
+Azure Bastion supports remote work scenarios by allowing users with internet connectivity to access Azure virtual machines. In particular, it enables IT administrators to manage their applications running on Azure at any time and from anywhere around the globe.
## Securely access virtual machines
-Specifically, Azure Bastion provides secure and seamless RDP/SSH connectivity to virtual machines within the Azure virtual network, directly in the Azure portal, without the use of a public IP address. For more information about the Azure Bastion architecture and key features, check out [What is Azure Bastion](bastion-overview.md).
+Azure Bastion provides RDP/SSH connectivity to virtual machines within an Azure virtual network, directly in the Azure portal, without the use of a public IP address. For more information about the Azure Bastion architecture and key features, check out [What is Azure Bastion?](bastion-overview.md).
-Azure Bastion is deployed per virtual network, meaning companies can configure and manage one Azure Bastion to quickly support remote user access to virtual machines within an Azure virtual network. For guidance on how to create and manage Azure Bastion, refer to [Create a bastion host](./tutorial-create-host-portal.md).
+Azure Bastion is deployed per virtual network. Companies can configure and manage one Azure Bastion to quickly support remote user access to virtual machines within an Azure virtual network. For guidance on how to create and manage Azure Bastion, see [Create an Azure Bastion host](./tutorial-create-host-portal.md).
## Next steps
-* Configure Azure Bastion using the [Azure portal](./tutorial-create-host-portal.md), [PowerShell](bastion-create-host-powershell.md), or Azure CLI.
+* Configure Azure Bastion by using the [Azure portal](tutorial-create-host-portal.md), [PowerShell](bastion-create-host-powershell.md), or the [Azure CLI](create-host-cli.md).
-* Read the [Bastion FAQ](bastion-faq.md) for additional information.
+* Read the [Azure Bastion FAQ](bastion-faq.md).
batch Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/account-move.md
Title: Move an Azure Batch account to another region description: Learn how to move an Azure Batch account to a different region using an Azure Resource Manager template in the Azure portal. Previously updated : 12/20/2021 Last updated : 02/27/2023
For more information on Resource Manager and templates, see [Quickstart: Create
## Prerequisites - Make sure that the services and features that your Batch account uses are supported in the new target region.-- It's recommended to move the storage account associated with your Batch account to the new target region. Follow the steps in [Move an Azure Storage account to another region](../storage/common/storage-account-move.md). If you prefer, you can leave the storage account in the original region. Typically, performance is better when your storage account is in the same region as your Batch account. This article assumes you've already migrated your storage account.
+- It's recommended to move any Azure resources associated with your Batch account to the new target region. For example, follow the steps in [Move an Azure Storage account to another region](../storage/common/storage-account-move.md) to move an associated autostorage account. If you prefer, you can leave resources in the original region; however, performance is typically better when your Batch account is in the same region as the other Azure resources used by your workload. This article assumes you've already migrated your storage account or any other regional Azure resources to be aligned with your Batch account.
## Prepare the template
-To get started, you'll need to export and then modify an ARM template.
+To get started, you need to export and then modify an ARM template.
### Export a template
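If you'd rather script the export than use the portal, the Azure CLI can emit the same ARM template. The resource group name is a placeholder for your own value.

```azurecli-interactive
# Export the source resource group's template to a local file for editing.
az group export --name <your-source-resource-group> > template.json
```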
Load and modify the template so you can create a new Batch account in the target
``` 1. Finally, edit the **location** property to use your target region. This example sets the target region to `centralus`.
-
+ ```json { "resources": [
Load and modify the template so you can create a new Batch account in the target
"type": "Microsoft.Batch/batchAccounts", "apiVersion": "2021-01-01", "name": "[parameters('batchAccounts_mysourceaccount_name')]",
- "location": "centralus",
+ "location": "centralus",
``` To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces. For example, **Central US** = **centralus**.
Deploy the template to create a new Batch account in the target region.
### Configure the new Batch account
-Some features won't export to a template, so you'll have to recreate them in the new Batch account. These features include:
+Some features don't export to a template, so you have to recreate them in the new Batch account. These features include:
-- Jobs
+- Jobs (and tasks)
- Job schedules - Certificates - Application packages Be sure to configure features in the new account as needed. You can look at how you've configured these features in your source Batch account for reference.
+> [!IMPORTANT]
+> New Batch accounts are entirely separate from any prior existing Batch accounts, even within the same region. These newly
+> created Batch accounts will have [default service and core quotas](batch-quota-limit.md) associated with them. For User
+> Subscription pool allocation mode Batch accounts, core quotas from the subscription will apply. You will need to ensure
+> that these new Batch accounts have sufficient quota before migrating your workload.
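As an example, you can check the quotas that the new account received before migrating. The account, resource group, and region names are placeholders for your own values.

```azurecli-interactive
# Region-level Batch account quota for the subscription.
az batch location quotas show --location <target-region>

# Core, pool, and job quotas assigned to the new Batch account.
az batch account show \
    --name <new-batch-account-name> \
    --resource-group <resource-group-name> \
    --query "{dedicatedCoreQuota:dedicatedCoreQuota, lowPriorityCoreQuota:lowPriorityCoreQuota, poolQuota:poolQuota, activeJobAndJobScheduleQuota:activeJobAndJobScheduleQuota}"
```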
+ ## Discard or clean up Confirm that your new Batch account is successfully working in the new region. Also make sure to restore the necessary features. Then, you can delete the source Batch account.
Confirm that your new Batch account is successfully working in the new region. A
## Next steps - Learn more about [moving resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).-- Learn how to [move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md).
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
Title: Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity
-description: Learn how to encrypt Batch data using customer-managed keys.
+description: Learn how to encrypt Batch data using customer-managed keys.
Previously updated : 02/11/2021 Last updated : 02/27/2023 ms.devlang: csharp # Configure customer-managed keys for your Azure Batch account with Azure Key Vault and Managed Identity
-By default Azure Batch uses platform-managed keys to encrypt all the customer data stored in the Azure Batch Service, like certificates, job/task metadata. Optionally, you can use your own keys, i.e., customer-managed keys, to encrypt data stored in Azure Batch.
+By default Azure Batch uses platform-managed keys to encrypt all the customer data stored in the Azure Batch Service, like certificates, job/task metadata. Optionally, you can use your own keys, that is, customer-managed keys, to encrypt data stored in Azure Batch.
The keys you provide must be generated in [Azure Key Vault](../key-vault/general/basic-concepts.md), and they must be accessed with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). There are two types of managed identities: [*system-assigned* and *user-assigned*](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
-You can either create your Batch account with system-assigned managed identity, or create a separate user-assigned managed identity that will have access to the customer-managed keys. Review the [comparison table](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) to understand the differences and consider which option works best for your solution. For example, if you want to use the same managed identity to access multiple Azure resources, a user-assigned managed identity will be needed. If not, a system-assigned managed identity associated with your Batch account may be sufficient. Using a user-assigned managed identity also gives you the option to enforce customer-managed keys at Batch account creation, as shown [in the example below](#create-a-batch-account-with-user-assigned-managed-identity-and-customer-managed-keys).
+You can either create your Batch account with system-assigned managed identity, or create a separate user-assigned managed identity
+that has access to the customer-managed keys. Review the
+[comparison table](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) to understand the
+differences and consider which option works best for your solution. For example, if you want to use the same managed identity to
+access multiple Azure resources, a user-assigned managed identity is needed. If not, a system-assigned managed identity associated
+with your Batch account may be sufficient. Using a user-assigned managed identity also gives you the option to enforce
+customer-managed keys at Batch account creation, as shown next.
## Create a Batch account with system-assigned managed identity
After the account is created, you can find a unique GUID in the **Identity princ
![Screenshot showing a unique GUID in the Identity principal Id field.](./media/batch-customer-managed-key/linked-batch-principal.png)
-You will need this value in order to grant this Batch account access to the Key Vault.
+You need this value in order to grant this Batch account access to the Key Vault.
### Azure CLI When you create a new Batch account, specify `SystemAssigned` for the `--identity` parameter.
-```azurecli
+```azurecli-interactive
resourceGroupName='myResourceGroup' accountName='mybatchaccount'
az batch account create \
--identity 'SystemAssigned' ```
-After the account is created, you can verify that system-assigned managed identity has been enabled on this account. Be sure to note the `PrincipalId`, as this value will be needed to grant this Batch account access to the Key Vault.
+After the account is created, you can verify that system-assigned managed identity has been enabled on this account. Be sure to note the `PrincipalId`, as this value is needed to grant this Batch account access to the Key Vault.
-```azurecli
+```azurecli-interactive
az batch account show \ --name $accountName \ --resource-group $resourceGroupName \
az batch account show \
## Create a user-assigned managed identity
-If you prefer, you can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) which can be used to access your customer-managed keys.
+If you prefer, you can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) that can be used to access your customer-managed keys.
-You will need the **Client ID** value of this identity in order for it to access the Key Vault.
+You need the **Client ID** value of this identity in order for it to access the Key Vault.
## Configure your Azure Key Vault instance
-The Azure Key Vault in which your keys will be generated must be created in the same tenant as your Batch account. It does not need to be in the same resource group or even in the same subscription.
+The Azure Key Vault in which your keys are generated must be created in the same tenant as your Batch account. It doesn't need to be in the same resource group or even in the same subscription.
### Create an Azure Key Vault
In the Azure portal, go to the Key Vault instance in the **key** section, select
![Create a key](./media/batch-customer-managed-key/create-key.png)
-After the key is created, click on the newly created key and the current version, copy the **Key Identifier** under **properties** section. Be sure sure that under **Permitted Operations**, **Wrap Key** and **Unwrap Key** are both checked.
+After the key is created, click on the newly created key and the current version, copy the **Key Identifier** under **properties** section. Be sure that under **Permitted Operations**, **Wrap Key** and **Unwrap Key** are both checked.
## Enable customer-managed keys on a Batch account
-Once you have followed the steps above, you can enable customer-managed keys on your Batch account.
+Now that the prerequisites are in place, you can enable customer-managed keys on your Batch account.
### Azure portal
In the [Azure portal](https://portal.azure.com/), go to the Batch account page.
After the Batch account is created with system-assigned managed identity and the access to Key Vault is granted, update the Batch account with the `{Key Identifier}` URL under `keyVaultProperties` parameter. Also set `--encryption-key-source` as `Microsoft.KeyVault`.
-```azurecli
+```azurecli-interactive
az batch account set \ --name $accountName \ --resource-group $resourceGroupName \
az batch account set \
## Create a Batch account with user-assigned managed identity and customer-managed keys
-Using the Batch management .NET client, you can create a Batch account that will have a user-assigned managed identity and customer-managed keys.
+As an example using the Batch management .NET client, you can create a Batch account that has a user-assigned managed identity
+and customer-managed keys.
```c# EncryptionProperties encryptionProperties = new EncryptionProperties()
BatchAccountIdentity identity = new BatchAccountIdentity()
var parameters = new BatchAccountCreateParameters(TestConfiguration.ManagementRegion, encryption:encryptionProperties, identity: identity); var account = await batchManagementClient.Account.CreateAsync("MyResourceGroup",
- "mynewaccount", parameters);
+ "mynewaccount", parameters);
``` ## Update the customer-managed key version
When you create a new version of a key, update the Batch account to use the new
You can also use Azure CLI to update the version.
-```azurecli
+```azurecli-interactive
az batch account set \ --name $accountName \ --resource-group $resourceGroupName \ --encryption-key-identifier {YourKeyIdentifierWithNewVersion} ```
+> [!TIP]
+> You can have your keys automatically rotate by creating a key rotation policy within Key Vault. When specifying a Key Identifier
+> for the Batch account, use the versionless key identifier to enable autorotation with a valid rotation policy. For more information,
+> see [how to configure key rotation](../key-vault/keys/how-to-configure-key-rotation.md) in Key Vault.
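For example, a versionless key identifier omits the trailing version segment. The following sketch points the account at one; the vault and key names are placeholders.

```azurecli-interactive
# With a rotation policy in Key Vault, a versionless identifier lets Batch pick up new key versions automatically.
az batch account set \
    --name $accountName \
    --resource-group $resourceGroupName \
    --encryption-key-identifier https://<your-key-vault-name>.vault.azure.net/keys/<your-key-name>
```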
+ ## Use a different key for Batch encryption To change the key used for Batch encryption, follow these steps:
To change the key used for Batch encryption, follow these steps:
You can also use Azure CLI to use a different key.
-```azurecli
+```azurecli-interactive
az batch account set \ --name $accountName \ --resource-group $resourceGroupName \
az batch account set \
- **Can I select RSA key sizes larger than 2048 bits?** Yes, RSA key sizes of `3072` and `4096` bits are also supported. - **What operations are available after a customer-managed key is revoked?** The only operation allowed is account deletion if Batch loses access to the customer-managed key. - **How should I restore access to my Batch account if I accidentally delete the Key Vault key?** Since purge protection and soft delete are enabled, you could restore the existing keys. For more information, see [Recover an Azure Key Vault](../key-vault/general/key-vault-recovery.md).-- **Can I disable customer-managed keys?** You can set the encryption type of the Batch Account back to "Microsoft managed key" at any time. After this, you are free to delete or change the key.-- **How can I rotate my keys?** Customer-managed keys are not automatically rotated. To rotate the key, update the Key Identifier that the account is associated with.
+- **Can I disable customer-managed keys?** You can set the encryption type of the Batch Account back to "Microsoft managed key" at any time. You're free to delete or change the key afterwards.
+- **How can I rotate my keys?** Customer-managed keys aren't automatically rotated unless the [key is versionless with an appropriate key rotation policy set within Key Vault](../key-vault/keys/how-to-configure-key-rotation.md). To manually rotate the key, update the Key Identifier that the account is associated with.
- **After I restore access how long will it take for the Batch account to work again?** It can take up to 10 minutes for the account to be accessible again once access is restored.-- **While the Batch Account is unavailable what happens to my resources?** Any pools that are running when Batch access to customer-managed keys is lost will continue to run. However, the nodes will transition into an unavailable state, and tasks will stop running (and be requeued). Once access is restored, nodes will become available again and tasks will be restarted.-- **Does this encryption mechanism apply to VM disks in a Batch pool?** No. For Cloud Services Configuration pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/)), no encryption is applied for the OS and temporary disk. For Virtual Machine Configuration pools, the OS and any specified data disks will be encrypted with a Microsoft platform managed key by default. Currently, you cannot specify your own key for these disks. To encrypt the temporary disk of VMs for a Batch pool with a Microsoft platform managed key, you must enable the [diskEncryptionConfiguration](/rest/api/batchservice/pool/add#diskencryptionconfiguration) property in your [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) Pool. For highly sensitive environments, we recommend enabling temporary disk encryption and avoiding storing sensitive data on OS and data disks. For more information, see [Create a pool with disk encryption enabled](./disk-encryption.md)
+- **While the Batch Account is unavailable what happens to my resources?** Any pools that are running when Batch access to the customer-managed key is lost will continue to run. However, the nodes in these pools will transition into an unavailable state, and tasks will stop running (and be requeued). Once access is restored, nodes become available again, and tasks are restarted.
+- **Does this encryption mechanism apply to VM disks in a Batch pool?** No. For Cloud Services Configuration pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/)), no encryption is applied for the OS and temporary disk. For Virtual Machine Configuration pools, the OS and any specified data disks are encrypted with a Microsoft platform managed key by default. Currently, you can't specify your own key for these disks. To encrypt the temporary disk of VMs for a Batch pool with a Microsoft platform managed key, you must enable the [diskEncryptionConfiguration](/rest/api/batchservice/pool/add#diskencryptionconfiguration) property in your [Virtual Machine Configuration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) Pool. For highly sensitive environments, we recommend enabling temporary disk encryption and avoiding storing sensitive data on OS and data disks. For more information, see [Create a pool with disk encryption enabled](./disk-encryption.md)
- **Is the system-assigned managed identity on the Batch account available on the compute nodes?** No. The system-assigned managed identity is currently used only for accessing the Azure Key Vault for the customer-managed key. To use a user-assigned managed identity on compute nodes, see [Configure managed identities in Batch pools](managed-identity-pools.md). ## Next steps
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
Title: Tutorial - Add Azure CDN to an Azure App Service web app | Microsoft Docs description: In this tutorial, Azure Content Delivery Network (CDN) is added to an Azure App Service web app to cache and deliver your static files from servers close to your customers around the world. --+ ms.assetid: na Previously updated : 05/14/2018 Last updated : 02/27/2023 - + # Tutorial: Add Azure CDN to an Azure App Service web app
-This tutorial shows how to add [Azure Content Delivery Network (CDN)](cdn-overview.md) to a [web app in Azure App Service](../app-service/overview.md). Web apps is a service for hosting web applications, REST APIs, and mobile back ends.
+This tutorial shows how to add [Azure Content Delivery Network (CDN)](cdn-overview.md) to a [web app in Azure App Service](../app-service/overview.md). Web apps are services for hosting web applications, REST APIs, and mobile back ends.
-Here's the home page of the sample static HTML site that you'll work with:
+Here's the home page of the sample static HTML site that you work with:
![Sample app home page](media/cdn-add-to-web-app/sample-app-home-page.png)
-What you'll learn:
+What you learn:
> [!div class="checklist"] > * Create a CDN endpoint.
To complete this tutorial:
## Create the web app
-To create the web app that you'll work with, follow the [static HTML quickstart](../app-service/quickstart-html.md) through the **Browse to the app** step.
+To create the web app that you work with, follow the [static HTML quickstart](../app-service/quickstart-html.md) through the **Browse to the app** step.
-## Log in to the Azure portal
+## Sign in to the Azure portal
Open a browser and navigate to the [Azure portal](https://portal.azure.com). ### Dynamic site acceleration optimization If you want to optimize your CDN endpoint for dynamic site acceleration (DSA), you should use the [CDN portal](cdn-create-new-endpoint.md) to create your profile and endpoint. With [DSA optimization](cdn-dynamic-site-acceleration.md), the performance of web pages with dynamic content is measurably improved. For instructions about how to optimize a CDN endpoint for DSA from the CDN portal, see [CDN endpoint configuration to accelerate delivery of dynamic files](cdn-dynamic-site-acceleration.md#cdn-endpoint-configuration-to-accelerate-delivery-of-dynamic-files).
-Otherwise, if you don't want to optimize your new endpoint, you can use the web app portal to create it by following the steps in the next section. Note that for **Azure CDN from Verizon** profiles, you cannot change the optimization of a CDN endpoint after it has been created.
+Otherwise, if you don't want to optimize your new endpoint, you can use the web app portal to create it by following the steps in the next section. For **Azure CDN from Verizon** profiles, you can't change the optimization of a CDN endpoint after it has been created.
## Create a CDN profile and endpoint In the left navigation, select **App Services**, and then select the app that you created in the [static HTML quickstart](../app-service/quickstart-html.md).
-![Select App Service app in the portal](media/cdn-add-to-web-app/portal-select-app-services.png)
-In the **App Service** page, in the **Settings** section, select **Networking > Configure Azure CDN for your app**.
+In the **App Service** page, in the **Settings** section, select **Networking > Azure CDN**.
-![Select CDN in the portal](media/cdn-add-to-web-app/portal-select-cdn.png)
In the **Azure Content Delivery Network** page, provide the **New endpoint** settings as specified in the table.
-![Create profile and endpoint in the portal](media/cdn-add-to-web-app/portal-new-endpoint.png)
| Setting | Suggested value | Description | | - | | -- |
Select **Create** to create a CDN profile.
Azure creates the profile and endpoint. The new endpoint appears in the **Endpoints** list, and when it's provisioned, the status is **Running**.
-![New endpoint in list](media/cdn-add-to-web-app/portal-new-endpoint-in-list.png)
### Test the CDN endpoint
http://<appname>.azurewebsites.net/index.html
![V2 in title in web app](media/cdn-add-to-web-app/v2-in-web-app-title.png)
-If you browse to the CDN endpoint URL for the home page, you won't see the change because the cached version in the CDN hasn't expired yet.
+If you browse to the CDN endpoint URL for the home page, you don't see the changes because the cached version in the CDN hasn't expired yet.
``` http://<endpointname>.azureedge.net/https://docsupdatetracker.net/index.html
To trigger the CDN to update its cached version, purge the CDN.
In the portal left navigation, select **Resource groups**, and then select the resource group that you created for your web app (myResourceGroup).
-![Select resource group](media/cdn-add-to-web-app/portal-select-group.png)
In the list of resources, select your CDN endpoint.
-![Select endpoint](media/cdn-add-to-web-app/portal-select-endpoint.png)
At the top of the **Endpoint** page, select **Purge**.
-![Select Purge](media/cdn-add-to-web-app/portal-select-purge.png)
Enter the content paths you want to purge. You can pass a complete file path to purge an individual file, or a path segment to purge and refresh all content in a folder. Because you changed *https://docsupdatetracker.net/index.html*, ensure that is in one of the paths. At the bottom of the page, select **Purge**.
-![Purge page](media/cdn-add-to-web-app/app-service-web-purge-cdn.png)
### Verify that the CDN is updated Wait until the purge request finishes processing, which is typically a couple of minutes. To see the current status, select the bell icon at the top of the page.
-![Purge notification](media/cdn-add-to-web-app/portal-purge-notification.png)
-When you browse to the CDN endpoint URL for *https://docsupdatetracker.net/index.html*, you'll see the *V2* that you added to the title on the home page, which indicates that the CDN cache has been refreshed.
+When you browse to the CDN endpoint URL for *https://docsupdatetracker.net/index.html*, you see the *V2* that you added to the title on the home page, which indicates that the CDN cache has been refreshed.
``` http://<endpointname>.azureedge.net/https://docsupdatetracker.net/index.html
Azure CDN offers the following caching behavior options:
* Bypass caching for query strings * Cache every unique URL
-The first option is the default, which means there is only one cached version of an asset regardless of the query string in the URL.
+The first option is the default, which means there's only one cached version of an asset regardless of the query string in the URL.
In this section of the tutorial, you change the caching behavior to cache every unique URL.
Select **Cache every unique URL** from the **Query string caching behavior** dro
Select **Save**.
-![Select query string caching behavior](media/cdn-add-to-web-app/portal-select-caching-behavior.png)
### Verify that unique URLs are cached separately
cdn Cdn Advanced Http Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-advanced-http-reports.md
Title: Analyze usage statistics with Azure CDN advanced HTTP reports | Microsoft Docs description: Learn how to create advanced HTTP reports in Microsoft Azure CDN. These reports provide detailed information on CDN activity. ---++ ms.assetid: ef90adc1-580e-4955-8ff1-bde3f3cafc5d na Previously updated : 01/23/2017- Last updated : 02/27/2023+ + # Analyze usage statistics with Azure CDN advanced HTTP reports ## Overview+ This document explains advanced HTTP reporting in Microsoft Azure CDN. These reports provide detailed information on CDN activity. [!INCLUDE [cdn-premium-feature](../../includes/cdn-premium-feature.md)] ## Accessing advanced HTTP reports
-1. From the CDN profile blade, click the **Manage** button.
+1. From the CDN profile page, select the **Manage** button.
![CDN profile blade manage button](./media/cdn-advanced-http-reports/cdn-manage-btn.png) The CDN management portal opens.
-2. Hover over the **Analytics** tab, then hover over the **Advanced HTTP Reports** flyout. Click on **HTTP Large Platform**.
+2. Hover over the **Analytics** tab, then hover over the **Advanced HTTP Reports** flyout. Select on **HTTP Large Platform**.
![CDN management portal - Advanced Reports menu](./media/cdn-advanced-http-reports/cdn-advanced-reports.png)
This document explains advanced HTTP reporting in Microsoft Azure CDN. These rep
## Geography Reports (Map-Based) There are five reports that take advantage of a map to indicate the regions from which your content is being requested. These reports are World Map, United States Map, Canada Map, Europe Map, and Asia Pacific Map.
-Each map-based report ranks geographic entities (i.e., countries/regions, a map is provided to help you visualize the locations from which your content is being requested. It is able to do so by color-coding each region according to the amount of demand experienced in that region. Lighter shaded regions indicate lower demand for your content, while darker regions indicate higher levels of demand for your content.
+Each map-based report ranks geographic entities (for example, countries/regions). A map is provided to help you visualize the locations from which your content is being requested. It does so by color-coding each region according to the amount of demand experienced in that region. Lighter shaded regions indicate lower demand for your content, while darker regions indicate higher levels of demand for your content.
-Detailed traffic and bandwidth information for each region is provided directly below the map. This allows you to view the total number of hits, the percentage of hits, the total amount of data transferred (in gigabytes), and the percentage of data transferred for each region. View a description for each of these metrics. Finally, when you hover over a region (i.e., country/region, state, or province), the name and the percentage of hits that occurred in the region will be displayed as a tooltip.
+Detailed traffic and bandwidth information for each region is provided directly below the map. This information allows you to view the total number of hits, the percentage of hits, the total amount of data transferred (in gigabytes), and the percentage of data transferred for each region. View a description for each of these metrics. Finally, when you hover over a region (for example, country/region, state, or province), the name and the percentage of hits that occurred in the region gets displayed as a tooltip.
-A brief description is provided below for each type of map-based geography report.
+A brief description is provided for each type of map-based geography report.
| Report Name | Description | | | |
A brief description is provided below for each type of map-based geography repor
| Asia Pacific Map |This report allows you to view the demand for your CDN content in Asia. Each country/region is color-coded on this map to indicate the percentage of hits that originated from that region. | ## Geography Reports (Bar Charts)
-There are two additional reports that provide statistical information according to geography, which are Top Cities and Top Countries. These reports rank cities and countries/regions, respectively, according to the number of hits that originated from those countries/regions. Upon generating this type of report, a bar chart will indicate the top 10 cities or countries/regions that requested content over a specific platform. This bar chart allows you to quickly assess the regions that generate the highest number of requests for your content.
-The left-hand side of the graph (y-axis) indicates how many hits occurred in the specified region. Directly below the graph (x-axis), you will find a label for each of the top 10 regions.
+There are two more reports that provide statistical information according to geography: Top Cities and Top Countries. These reports rank cities and countries/regions, respectively, according to the number of hits that originated from those countries/regions. A bar chart indicates the top 10 cities or countries/regions that requested content over a specific platform. This bar chart allows you to quickly assess the regions that generate the highest number of requests for your content.
+
+The left-hand side of the graph (y-axis) indicates how many hits occurred in the specified region. Directly below the graph (x-axis), you find a label for each of the top 10 regions.
### Using the bar charts
-* If you hover over a bar, the name and the total number of hits that occurred in the region will be displayed as a tooltip.
+
+* If you hover over a bar, the name and the total number of hits that occurred in the region gets displayed as a tooltip.
* The tooltip for the Top Cities report identifies a city by its name, state/province, and country/region abbreviation.
-* If the city or region (i.e., state/province) from which a request originated could not be determined, then it will indicate that they are unknown. If the country/region is unknown, then two question marks (i.e., ??) will be displayed.
-* A report may include metrics for "Europe" or the "Asia/Pacific Region." Those items are not meant to provide statistical information on all IP addresses in those regions. Rather, they only apply to requests that originate from IP addresses that are spread out over Europe or Asia/Pacific instead of to a specific city or country/region.
+* If the city or region (for example, state/province) from which a request originated couldn't be determined, then it indicates that they're unknown. If the country/region is unknown, then two question marks (for example, ??) gets displayed.
+* A report may include metrics for "Europe" or the "Asia/Pacific Region." Those items aren't meant to provide statistical information on all IP addresses in those regions. Rather, they only apply to requests that originate from IP addresses that are spread out over Europe or Asia/Pacific instead of to a specific city or country/region.
-The data that was used to generate the bar chart can be viewed below it. There you will find the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred for the top 250 regions. View a description for each of these metrics.
+The data that was used to generate the bar chart can be viewed below it. There you find the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred for the top 250 regions. View a description for each of these metrics.
-A brief description is provided for both types of reports below.
+A brief description is provided for both types of reports in the following table.
| Report Name | Description | | | |
A brief description is provided for both types of reports below.
## Daily Summary The Daily Summary report allows you to view the total number of hits and data transferred over a particular platform on a daily basis. This information can be used to quickly discern CDN activity patterns. For example, this report can help you detect which days experienced higher or lower than expected traffic.
-Upon generating this type of report, a bar chart will provide a visual indication as to the amount of platform-specific demand experienced on a daily basis over the time period covered by the report. It will do so by displaying a bar for each day in the report. For example, selecting the time period called "Last Week" will generate a bar chart with seven bars. Each bar will indicate the total number of hits experienced on that day.
+Once you generate this report, a bar chart provides a visual indication as to the amount of platform-specific demand experienced on a daily basis over the time period covered by the report. It does so by displaying a bar for each day in the report. For example, selecting the time period called "Last Week" generates a bar chart with seven bars. Each bar indicates the total number of hits experienced on that day.
-The left-hand side of the graph (y-axis) indicates how many hits occurred on the specified date. Directly below the graph (x-axis), you will find a label that indicates the date (Format: YYYY-MM-DD) for each day included in the report.
+The left-hand side of the graph (y-axis) indicates how many hits occurred on the specified date. Directly below the graph (x-axis), you find a label that indicates the date (Format: YYYY-MM-DD) for each day included in the report.
> [!TIP] > If you hover over a bar, the total number of hits that occurred on that date will be displayed as a tooltip. > >
-The data that was used to generate the bar chart can be viewed below it. There you will find the total number of hits and the amount of data transferred (in gigabytes) for each day covered by the report.
+The data used to generate the bar chart can be viewed below it. There you can find the total number of hits and the amount of data transferred (in gigabytes) for each day covered by the report.
## By Hour+ The By Hour report allows you to view the total number of hits and data transferred over a particular platform on an hourly basis. This information can be used to quickly discern CDN activity patterns. For example, this report can help you detect the time periods during the day that experience higher or lower than expected traffic.
-Upon generating this type of report, a bar chart will provide a visual indication as to the amount of platform-specific demand experienced on an hourly basis over the time period covered by the report. It will do so by displaying a bar for each hour covered by the report. For example, selecting a 24 hour time period will generate a bar chart with twenty four bars. Each bar will indicate the total number of hits experienced during that hour.
+When you generate this report, a bar chart provides a visual indication of the platform-specific demand experienced on an hourly basis over the time period covered by the report. It does so by displaying a bar for each hour covered by the report. For example, selecting a 24-hour time period generates a bar chart with 24 bars. Each bar indicates the total number of hits experienced during that hour.
-The left-hand side of the graph (y-axis) indicates how many hits occurred on the specified hour. Directly below the graph (x-axis), you will find a label that indicates the date/time (Format: YYYY-MM-DD hh:mm) for each hour included in the report. Time is reported using 24 hour format and it is specified using the UTC/GMT time zone.
+The left-hand side of the graph (y-axis) indicates how many hits occurred during the specified hour. Directly below the graph (x-axis), you can find a label that indicates the date/time (Format: YYYY-MM-DD hh:mm) for each hour included in the report. Time is reported in 24-hour format and is specified in the UTC/GMT time zone.
> [!TIP] > If you hover over a bar, the total number of hits that occurred during that hour will be displayed as a tooltip. > >
-The data that was used to generate the bar chart can be viewed below it. There you will find the total number of hits and the amount of data transferred (in gigabytes) for each hour covered by the report.
+The data used to generate the bar chart can be viewed below it. There you can find the total number of hits and the amount of data transferred (in gigabytes) for each hour covered by the report.
## By File
-The By File report allows you to view the amount of demand and the traffic incurred over a particular platform for the most requested assets. Upon generating this type of report, a bar chart will be generated on the top 10 most requested assets over the specified time period.
+
+The By File report allows you to view the amount of demand and the traffic incurred over a particular platform for the most requested assets. When you generate this report, a bar chart displays the top 10 most requested assets over the specified time period.
> [!NOTE] > For the purposes of this report, edge CNAME URLs are converted to their equivalent CDN URLs. This allows an accurate tally for the total number of hits associated with an asset regardless of the CDN or edge CNAME URL used to request it. > >
-The left-hand side of the graph (y-axis) indicates the number of requests for each asset over the specified time period. Directly below the graph (x-axis), you will find a label that indicates the file name for each of the top 10 requested assets.
+The left-hand side of the graph (y-axis) indicates the number of requests for each asset over the specified time period. Directly below the graph (x-axis), you can find a label that indicates the file name for each of the top 10 requested assets.
-The data that was used to generate the bar chart can be viewed below it. There you will find the following information for each of the top 250 requested assets: relative path, the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred.
+The data used to generate the bar chart can be viewed below it. There you can find the following information for each of the top 250 requested assets: relative path, the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred.
## By File Detail
-The By File Detail report allows you to view the amount of demand and the traffic incurred over a particular platform for a specific asset. At the very top of this report is the File Details For option. This option provides a list of your most requested assets on the selected platform. In order to generate a By File Detail report, you will need to select the desired asset from the File Details For option. After which, a bar chart will indicate the amount of daily demand that it generated over the specified time period.
+The By File Detail report allows you to view the amount of demand and the traffic incurred over a particular platform for a specific asset. At the top of this report, you can find the File Details For option. This option provides a list of your most requested assets on the selected platform. To generate a By File Detail report, select the desired asset from the File Details For option. A bar chart then indicates the amount of daily demand that the asset generated over the specified time period.
-The left-hand side of the graph (y-axis) indicates the total number of requests that an asset experienced on a particular day. Directly below the graph (x-axis), you will find a label that indicates the date (Format: YYYY-MM-DD) for which CDN demand for the asset was reported.
+The left-hand side of the graph (y-axis) indicates the total number of requests that an asset experienced on a particular day. Directly below the graph (x-axis), you can find a label that indicates the date (Format: YYYY-MM-DD) for which CDN demand for the asset was reported.
-The data that was used to generate the bar chart can be viewed below it. There you will find the total number of hits and the amount of data transferred (in gigabytes) for each day covered by the report.
+The data used to generate the bar chart can be viewed below it. There you can find the total number of hits and the amount of data transferred (in gigabytes) for each day covered by the report.
## By File Type
-The By File Type report allows you to view the amount of demand and the traffic incurred by file type. Upon generating this type of report, a donut chart will indicate the percentage of hits generated by the top 10 file types.
+
+The By File Type report allows you to view the amount of demand and the traffic incurred by file type. When you generate this report, a donut chart indicates the percentage of hits generated by the top 10 file types.
> [!TIP] > If you hover over a slice in the donut chart, the Internet media type of that file type will be displayed as a tooltip. > >
-The data that was used to generate the donut chart can be viewed below it. There you will find the file name extension/Internet media type, the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred for each of the top 250 file types.
+The data used to generate the donut chart can be viewed below it. There you can find the file name extension/Internet media type, the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred for each of the top 250 file types.
## By Directory
-The By Directory report allows you to view the amount of demand and the traffic incurred over a particular platform for content from a specific directory. Upon generating this type of report, a bar chart will indicate the total number of hits generated by content in the top 10 directories.
+The By Directory report allows you to view the amount of demand and the traffic incurred over a particular platform for content from a specific directory. When you generate this report, a bar chart indicates the total number of hits generated by content in the top 10 directories.
### Using the bar chart * Hover over a bar to view the relative path to the corresponding directory.
-* Content stored in a subfolder of a directory does not count when calculating demand by directory. This calculation relies solely on the number of requests generated for content stored in the actual directory.
+* Content stored in a subfolder of a directory doesn't count when calculating demand by directory. This calculation relies solely on the number of requests generated for content stored in the actual directory.
* For the purposes of this report, edge CNAME URLs are converted to their equivalent CDN URLs. This allows an accurate tally for all statistics associated with an asset regardless of the CDN or edge CNAME URL used to request it. The left-hand side of the graph (y-axis) indicates the total number of requests for the content stored in your top 10 directories. Each bar on the chart represents a directory. Use the color-coding scheme to match up a bar to a directory listed in the Top 250 Full Directories section.
-The data that was used to generate the bar chart can be viewed below it. There you will find the following information for each of the top 250 directories: relative path, the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred.
+The data used to generate the bar chart can be viewed below it. There you can find the following information for each of the top 250 directories: relative path, the total number of hits, the percentage of hits, the amount of data transferred (in gigabytes), and the percentage of data transferred.
## By Browser
-The By Browser report allows you to view which browsers were used to request content. Upon generating this type of report, a pie chart will indicate the percentage of requests handled by the top 10 browsers.
+
+The By Browser report allows you to view which browsers were used to request content. When you generate this report, a pie chart indicates the percentage of requests handled by the top 10 browsers.
### Using the pie chart * Hover over a slice in the pie chart to view a browser's name and version. * For the purposes of this report, each unique browser/version combination is considered a different browser. * The slice called "Other" indicates the percentage of requests handled by all other browsers and versions.
-The data that was used to generate the pie chart can be viewed below it. There you will find the browser type/version number, the total number of hits and the percentage of hits for each of the top 250 browsers.
+The data used to generate the pie chart can be viewed below it. There you can find the browser type/version number, the total number of hits, and the percentage of hits for each of the top 250 browsers.
## By Referrer
-The By Referrer report allows you to view the top referrers to content on the selected platform. A referrer indicates the hostname from which a request was generated. Upon generating this type of report, a bar chart will indicate the amount of demand (i.e., hits) generated by the top 10 referrers.
+The By Referrer report allows you to view the top referrers to content on the selected platform. A referrer indicates the hostname from which a request was generated. When you generate this report, a bar chart indicates the amount of demand (that is, hits) generated by the top 10 referrers.
The left-hand side of the graph (y-axis) indicates the total number of requests that an asset experienced for each referrer. Each bar on the chart represents a referrer. Use the color-coding scheme to match up a bar to a referrer listed in the Top 250 Referrer section.
-The data that was used to generate the bar chart can be viewed below it. There you will find the URL, the total number of hits, and the percentage of hits generated from each of the top 250 referrers.
+The data used to generate the bar chart can be viewed below it. There you can find the URL, the total number of hits, and the percentage of hits generated from each of the top 250 referrers.
## By Download
-The By Download report allows you to analyze download patterns for your most requested content. The top of the report contains a bar chart that compares attempted downloads with completed downloads for the top 10 requested assets. Each bar is color-coded according to whether it is an attempted download (blue) or a completed download (green).
+The By Download report allows you to analyze download patterns for your most requested content. The top of the report contains a bar chart that compares attempted downloads with completed downloads for the top 10 requested assets. Each bar is color-coded according to whether it's an attempted download (blue) or a completed download (green).
> [!NOTE] > For the purposes of this report, edge CNAME URLs are converted to their equivalent CDN URLs. This allows an accurate tally for all statistics associated with an asset regardless of the CDN or edge CNAME URL used to request it. > >
-The left-hand side of the graph (y-axis) indicates the file name for each of the top 10 requested assets. Directly below the graph (x-axis), you will find labels that indicate the total number of attempted/completed downloads.
+The left-hand side of the graph (y-axis) indicates the file name for each of the top 10 requested assets. Directly below the graph (x-axis), you can find labels that indicate the total number of attempted/completed downloads.
-Directly below the bar chart, the following information will be listed for the top 250 requested assets: relative path (including file name), the number of times that it was downloaded to completion, the number of times that it was requested, and the percentage of requests that resulted in a complete download.
+Directly below the bar chart, the following information is listed for the top 250 requested assets: relative path (including file name), the number of times that the asset was downloaded to completion, the number of times that it was requested, and the percentage of requests that resulted in a complete download.
> [!TIP]
-> Our CDN is not informed by an HTTP client (i.e. browser) when an asset has been completely downloaded. As a result, we have to calculate whether an asset has been completely downloaded according to status codes and byte-range requests. The first thing we look for when making this calculation is whether the request results in a 200 OK status code. If so, then we look at byte-range requests to ensure that they cover the entire asset. Finally, we compare the amount of data transferred to the size of the requested asset. If the data transferred is equal to or greater than the file size and the byte-range requests are appropriate for that asset, then the hit will be counted as a complete download.
+> Our CDN isn't informed by an HTTP client (that is, a browser) when an asset has been completely downloaded. As a result, we have to calculate whether an asset has been completely downloaded according to status codes and byte-range requests. The first thing we look for when making this calculation is whether a request results in a 200 OK status code. If so, then we look at byte-range requests to ensure that they cover the entire asset. Finally, we compare the amount of data transferred to the size of the requested asset. If the data transferred is equal to or greater than the file size and the byte-range requests are appropriate for that asset, then the hit is counted as a complete download.
> > Due to the interpretive nature of this report, you should keep in mind the following points that may alter the consistency and accuracy of this report. >
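The following snippet is a minimal, hypothetical C# sketch of the complete-download calculation described in the preceding tip. The record shape and property names (`StatusCode`, `BytesTransferred`, `AssetSizeInBytes`, `ByteRangesCoverAsset`) are illustrative assumptions and aren't part of any CDN SDK or report export.

```csharp
// Hypothetical log-entry shape, used only to illustrate the heuristic.
public record DownloadLogEntry(
    int StatusCode,
    long BytesTransferred,
    long AssetSizeInBytes,
    bool ByteRangesCoverAsset);

public static class CompleteDownloadHeuristic
{
    public static bool IsCompleteDownload(DownloadLogEntry entry)
    {
        // 1. The request must have resulted in a 200 OK status code.
        if (entry.StatusCode != 200)
        {
            return false;
        }

        // 2. Any byte-range requests must collectively cover the entire asset.
        if (!entry.ByteRangesCoverAsset)
        {
            return false;
        }

        // 3. The data transferred must be equal to or greater than the asset's size.
        return entry.BytesTransferred >= entry.AssetSizeInBytes;
    }
}
```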
The By 404 Errors report allows you to identify the type of content that generat
> >
-The left-hand side of the graph (y-axis) indicates the file name for each of the top 10 requested assets that resulted in a 404 Not Found status code. Directly below the graph (x-axis), you will find labels that indicate the total number of requests and the number of requests that resulted in a 404 Not Found status code.
+The left-hand side of the graph (y-axis) indicates the file name for each of the top 10 requested assets that resulted in a 404 Not Found status code. Directly below the graph (x-axis), you can find labels that indicate the total number of requests and the number of requests that resulted in a 404 Not Found status code.
-Directly below the bar chart, the following information will be listed for the top 250 requested assets: relative path (including file name), the number of requests that resulted in a 404 Not Found status code, the total number of times that the asset was requested, and the percentage of requests that resulted in a 404 Not Found status code.
+Directly below the bar chart, the following information is listed for the top 250 requested assets: relative path (including file name), the number of requests that resulted in a 404 Not Found status code, the total number of times that the asset was requested, and the percentage of requests that resulted in a 404 Not Found status code.
## See also * [Azure CDN Overview](cdn-overview.md)
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
Title: Get started with the Azure CDN Library for .NET | Microsoft Docs
description: Learn how to write .NET applications to manage Azure CDN using Visual Studio. documentationcenter: .net---++ ms.assetid: 63cf4101-92e7-49dd-a155-a90e54a792ca na Previously updated : 01/23/2017- Last updated : 02/27/2023+ + # Get started with the Azure CDN Library for .NET > [!div class="op_single_selector"] > * [Node.js](cdn-app-dev-node.md)
> >
-You can use the [Azure CDN Library for .NET](/dotnet/api/overview/azure/cdn) to automate creation and management of CDN profiles and endpoints. This tutorial walks through the creation of a simple .NET console application that demonstrates several of the available operations. This tutorial is not intended to describe all aspects of the Azure CDN Library for .NET in detail.
+You can use the [Azure CDN Library for .NET](/dotnet/api/overview/azure/cdn) to automate creation and management of CDN profiles and endpoints. This tutorial walks through the creation of a simple .NET console application that demonstrates several of the available operations. This tutorial isn't intended to describe all aspects of the Azure CDN Library for .NET in detail.
You need Visual Studio 2015 to complete this tutorial. [Visual Studio Community 2015](https://www.visualstudio.com/products/visual-studio-community-vs.aspx) is freely available for download.
You need Visual Studio 2015 to complete this tutorial. [Visual Studio Community
[!INCLUDE [cdn-app-dev-prep](../../includes/cdn-app-dev-prep.md)]
-## Create your project and add Nuget packages
+## Create your project and add NuGet packages
Now that we've created a resource group for our CDN profiles and given our Azure AD application permission to manage CDN profiles and endpoints within that group, we can start creating our application. > [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade. For more information, see the [migration guide](../active-directory/develop/msal-migration.md).
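As a hedged illustration only (it isn't part of this tutorial's sample code), the MSAL equivalent of acquiring an Azure Resource Manager token for a service principal looks roughly like the following. The `tenantId`, `clientId`, and `clientSecret` parameters are assumptions that stand in for your own tenant and application registration values, and the sketch requires the `Microsoft.Identity.Client` NuGet package.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class MsalTokenHelper
{
    // Hypothetical MSAL-based replacement for the deprecated ADAL token call.
    public static async Task<string> GetArmAccessTokenAsync(
        string tenantId, string clientId, string clientSecret)
    {
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create(clientId)
            .WithClientSecret(clientSecret)
            .WithAuthority(new Uri($"https://login.microsoftonline.com/{tenantId}"))
            .Build();

        // Request a token for Azure Resource Manager, which the CDN management client calls.
        AuthenticationResult result = await app
            .AcquireTokenForClient(new[] { "https://management.azure.com/.default" })
            .ExecuteAsync();

        return result.AccessToken;
    }
}
```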
-From within Visual Studio 2015, click **File**, **New**, **Project...** to open the new project dialog. Expand **Visual C#**, then select **Windows** in the pane on the left. Click **Console Application** in the center pane. Name your project, then click **OK**.
+From within Visual Studio 2015, select **File**, **New**, **Project...** to open the new project dialog. Expand **Visual C#**, then select **Windows** in the pane on the left. Select **Console Application** in the center pane. Name your project, then select **OK**.
![New Project](./media/cdn-app-dev-net/cdn-new-project.png)
-Our project is going to use some Azure libraries contained in Nuget packages. Let's add those to the project.
+Our project is going to use some Azure libraries contained in NuGet packages. Let's add those libraries to the project.
-1. Click the **Tools** menu, **Nuget Package Manager**, then **Package Manager Console**.
+1. Select the **Tools** menu, **NuGet Package Manager**, then **Package Manager Console**.
![Manage Nuget Packages](./media/cdn-app-dev-net/cdn-manage-nuget.png) 2. In the Package Manager Console, execute the following command to install the **Active Directory Authentication Library (ADAL)**:
Our project is going to use some Azure libraries contained in Nuget packages. L
## Directives, constants, main method, and helper methods Let's get the basic structure of our program written.
-1. Back in the Program.cs tab, replace the `using` directives at the top with the following:
+1. Back in the Program.cs tab, replace the `using` directives at the top with the following directives:
```csharp using System;
Let's get the basic structure of our program written.
using Microsoft.IdentityModel.Clients.ActiveDirectory; using Microsoft.Rest; ```
-2. We need to define some constants our methods will use. In the `Program` class, but before the `Main` method, add the following. Be sure to replace the placeholders, including the **&lt;angle brackets&gt;**, with your own values as needed.
+2. We need to define some constants our methods use. In the `Program` class, but before the `Main` method, add the following code. Be sure to replace the placeholders, including the **&lt;angle brackets&gt;**, with your own values as needed.
```csharp //Tenant app constants
Let's get the basic structure of our program written.
private const string resourceGroupName = "CdnConsoleTutorial"; private const string resourceLocation = "<YOUR PREFERRED AZURE LOCATION, SUCH AS Central US>"; ```
-3. Also at the class level, define these two variables. We'll use these later to determine if our profile and endpoint already exist.
+3. Also at the class level, define these two variables. We use these variables later to determine if our profile and endpoint already exist.
```csharp static bool profileAlreadyExists = false;
Let's get the basic structure of our program written.
Now that the basic structure of our program is written, we should create the methods called by the `Main` method. ## Authentication
-Before we can use the Azure CDN Management Library, we need to authenticate our service principal and obtain an authentication token. This method uses ADAL to retrieve the token.
+Before we can use the Azure CDN Management Library, we need to authenticate our service principal and obtain an authentication token. This method uses the Active Directory Authentication Library (ADAL) to retrieve the token.
```csharp private static AuthenticationResult GetAccessToken()
private static AuthenticationResult GetAccessToken()
} ```
-If you are using individual user authentication, the `GetAccessToken` method will look slightly different.
+If you're using individual user authentication, the `GetAccessToken` method looks slightly different.
> [!IMPORTANT] > Only use this code sample if you are choosing to have individual user authentication instead of a service principal.
private static AuthenticationResult GetAccessToken()
Be sure to replace `<redirect URI>` with the redirect URI you entered when you registered the application in Azure AD. ## List CDN profiles and endpoints
-Now we're ready to perform CDN operations. The first thing our method does is list all the profiles and endpoints in our resource group, and if it finds a match for the profile and endpoint names specified in our constants, makes a note of that for later so we don't try to create duplicates.
+Now we're ready to perform CDN operations. The first thing our method does is list all the profiles and endpoints in our resource group. If it finds a match for the profile and endpoint names specified in our constants, it makes a note of that so we don't try to create duplicates later.
```csharp private static void ListProfilesAndEndpoints(CdnManagementClient cdn)
private static void ListProfilesAndEndpoints(CdnManagementClient cdn)
``` ## Create CDN profiles and endpoints
-Next, we'll create a profile.
+Next, we create a profile.
```csharp private static void CreateCdnProfile(CdnManagementClient cdn)
private static void CreateCdnProfile(CdnManagementClient cdn)
} ```
-Once the profile is created, we'll create an endpoint.
+Once the profile is created, we create an endpoint.
```csharp private static void CreateCdnEndpoint(CdnManagementClient cdn)
private static void PromptPurgeCdnEndpoint(CdnManagementClient cdn)
``` > [!NOTE]
-> In the example above, the string `/*` denotes that I want to purge everything in the root of the endpoint path. This is equivalent to checking **Purge All** in the Azure portal's "purge" dialog. In the `CreateCdnProfile` method, I created our profile as an **Azure CDN from Verizon** profile using the code `Sku = new Sku(SkuName.StandardVerizon)`, so this will be successful. However, **Azure CDN from Akamai** profiles do not support **Purge All**, so if I was using an Akamai profile for this tutorial, I would need to include specific paths to purge.
+> In the previous example, the string `/*` denotes that I want to purge everything in the root of the endpoint path. This is equivalent to checking **Purge All** in the Azure portal's "purge" dialog. In the `CreateCdnProfile` method, I created our profile as an **Azure CDN from Verizon** profile using the code `Sku = new Sku(SkuName.StandardVerizon)`, so this purge succeeds. However, **Azure CDN from Akamai** profiles don't support **Purge All**, so if I were using an Akamai profile for this tutorial, I would need to include specific paths to purge.
> > ## Delete CDN profiles and endpoints
-The last methods will delete our endpoint and profile.
+The last methods delete our endpoint and profile.
```csharp private static void PromptDeleteCdnEndpoint(CdnManagementClient cdn)
We can then confirm the prompts to run the rest of the program.
## Next Steps To see the completed project from this walkthrough, [download the sample](https://code.msdn.microsoft.com/Azure-CDN-Management-1f2fba2c).
-To find additional documentation on the Azure CDN Management Library for .NET, view the [reference on MSDN](/dotnet/api/overview/azure/cdn).
+To find more documentation on the Azure CDN Management Library for .NET, view the [reference on MSDN](/dotnet/api/overview/azure/cdn).
Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).
cdn Cdn Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-azure-diagnostic-logs.md
description: Learn how to use Azure diagnostic logs to save core analytics, whic
na Previously updated : 07/15/2020 Last updated : 02/27/2023 - # Diagnostic logs - Azure Content Delivery Network With Azure diagnostic logs, you can view core analytics and save them into one or more destinations including:
Not all metrics are available from all providers, although such differences are
| RequestCountCacheHit | Count of all requests that resulted in a Cache hit. The asset was served directly from the POP to the client. | Yes | Yes | No | | RequestCountCacheMiss | Count of all requests that resulted in a Cache miss. A Cache miss means the asset wasn't found on the POP closest to the client, and was retrieved from the origin. | Yes | Yes | No | | RequestCountCacheNoCache | Count of all requests to an asset that are prevented from being cached because of a user configuration on the edge. | Yes | Yes | No |
-| RequestCountCacheUncacheable | Count of all requests to assets that are prevented from being cached by the asset's Cache-Control and Expires headers. This count indicates that it shouldn't be cached on a POP or by the HTTP client. | Yes | Yes | No |
-| RequestCountCacheOthers | Count of all requests with cache status not covered by above. | No | Yes | No |
+| RequestCountCacheUncacheable | Count of all requests to assets that are prevented from being cached by the asset's Cache-Control and Expires headers. This count indicates that the asset shouldn't be cached on a POP or by the HTTP client. | Yes | Yes | No |
+| RequestCountCacheOthers | Count of all requests with a cache status not covered by the preceding metrics. | No | Yes | No |
| EgressTotal | Outbound data transfer in GB | Yes |Yes |Yes | | EgressHttpStatus2xx | Outbound data transfer* for responses with 2xx HTTP status codes in GB. | Yes | Yes | No | | EgressHttpStatus3xx | Outbound data transfer for responses with 3xx HTTP status codes in GB. | Yes | Yes | No |
Not all metrics are available from all providers, although such differences are
| EgressCacheHit | Outbound data transfer for responses that were delivered directly from the CDN cache on the CDN POPs/Edges. | Yes | Yes | No | | EgressCacheMiss. | Outbound data transfer for responses that weren't found on the nearest POP server, and retrieved from the origin server. | Yes | Yes | No | | EgressCacheNoCache | Outbound data transfer for assets that are prevented from being cached because of a user configuration on the edge. | Yes | Yes | No |
-| EgressCacheUncacheable | Outbound data transfer for assets that are prevented from being cached by the asset's Cache-Control and, or Expires headers. Indicates that it shouldn't be cached on a POP or by the HTTP client. | Yes | Yes | No |
+| EgressCacheUncacheable | Outbound data transfer for assets that are prevented from being cached by the asset's Cache-Control and/or Expires headers. Indicates that the asset shouldn't be cached on a POP or by the HTTP client. | Yes | Yes | No |
| EgressCacheOthers | Outbound data transfers for other cache scenarios. | No | Yes | No | *Outbound data transfer refers to traffic delivered from CDN POP servers to the client.
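For example, here's a minimal C# sketch that derives a cache-hit ratio from a few of the request-count metrics listed above. The numbers are made up, and summing only these four cache-status metrics is a simplification; adjust the calculation to the metrics your provider actually emits.

```csharp
using System;

// Hypothetical metric values exported from your diagnostic logs for one reporting interval.
long requestCountCacheHit = 94_000;
long requestCountCacheMiss = 5_200;
long requestCountCacheNoCache = 500;
long requestCountCacheUncacheable = 300;

long totalRequests = requestCountCacheHit + requestCountCacheMiss
                   + requestCountCacheNoCache + requestCountCacheUncacheable;

// Share of requests served directly from the POP cache.
double cacheHitRatio = (double)requestCountCacheHit / totalRequests;

Console.WriteLine($"Cache-hit ratio: {cacheHitRatio:P1}");
```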
Example properties:
```
-## Additional resources
+## More resources
* [Azure Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) * [Core analytics via Azure CDN supplemental portal](./cdn-analyze-usage-patterns.md)
cdn Cdn Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-billing.md
Title: Understanding Azure CDN billing | Microsoft Docs description: Learn about the billing structure for content hosted by Azure Content Delivery Network, including billing regions, delivery charges, and to manage costs. --+ na Previously updated : 09/13/2019 Last updated : 02/27/2023
This FAQ describes the billing structure for content hosted by Azure Content Delivery Network (CDN). ## What is a billing region?+ A billing region is a geographic area used to determine what rate is charged for delivery of objects from Azure CDN. The current billing zones and their regions are as follows: - Zone 1: North America, Europe, Middle East, and Africa
For information about point-of-presence (POP) regions, see [Azure CDN POP locati
For information about Azure CDN pricing, see [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/cdn/). ## How are delivery charges calculated by region?
-The Azure CDN billing region is based on the location of the source server delivering the content to the end user. The destination (physical location) of the client is not considered the billing region.
+The Azure CDN billing region is based on the location of the source server delivering the content to the end user. The destination (physical location) of the client isn't considered the billing region.
-For example, if a user located in Mexico issues a request and this request is serviced by a server located in a United States POP due to peering or traffic conditions, the billing region will be the United States.
+For example, if a user located in Mexico issues a request and this request is serviced by a server located in a United States POP due to peering or traffic conditions, the billing region is the United States.
## What is a billable Azure CDN transaction?
-Any HTTP(S) request that terminates at the CDN is a billable event, which includes all response types: success, failure, or other. However, different responses may generate different traffic amounts. For example, *304 Not Modified* and other header-only responses generate little traffic because they are a small header response; similarly, error responses (for example, *404 Not Found*) are billable but incur a small cost because of the tiny response payload.
+Any HTTP(S) request that terminates at the CDN is a billable event, which includes all response types: success, failure, or other. However, different responses may generate different traffic amounts. For example, *304 Not Modified* and other header-only responses generate little traffic because the response consists of only a small header. Similarly, error responses (for example, *404 Not Found*) are billable but incur a small cost because of the tiny response payload.
## What other Azure costs are associated with Azure CDN use? Using Azure CDN also incurs some usage charges on the services used as the origin for your objects. These costs are typically a small fraction of the overall CDN usage cost.
-If you are using Azure Blob storage as the origin for your content, you also incur the following storage charges for cache fills:
+If you're using Azure Blob storage as the origin for your content, you also incur the following storage charges for cache fills:
- Actual GB used: The actual storage of your source objects.
If you are using Azure Blob storage as the origin for your content, you also inc
- Transfers in GB: The amount of data transferred to fill the CDN caches. > [!NOTE]
-> Starting October 2019, If you are using Azure CDN from Microsoft, the cost of data transfer from Origins hosted in Azure to CDN PoPs is free of charge. Azure CDN from Verizon and Azure CDN from Akamai are subject to the rates described below.
+> Starting October 2019, if you're using Azure CDN from Microsoft, data transfer from origins hosted in Azure to CDN POPs is free of charge. Azure CDN from Verizon and Azure CDN from Akamai are subject to the rates described below.
For more information about Azure Storage billing, see [Plan and manage costs for Azure Storage](../storage/common/storage-plan-manage-costs.md).
-If you are using *hosted service delivery*, you will incur charges as follows:
+If you're using *hosted service delivery*, you incur charges as follows:
- Azure compute time: The compute instances that act as the origin.
If your client uses byte-range requests (regardless of origin service), the foll
- When a request arrives for only part of an object (by specifying a byte-range header), the CDN may fetch the entire object into its cache. As a result, even though the billable transaction from the CDN is for a partial response, the billable transaction from the origin may involve the full size of the object. ## How much transfer activity occurs to support the cache?
-Each time a CDN POP needs to fill its cache, it makes a request to the origin for the object being cached. As a result, the origin incurs a billable transaction on every cache miss. The number of cache misses depends on a number of factors:
+Each time a CDN POP needs to fill its cache, it makes a request to the origin for the object being cached. As a result, the origin incurs a billable transaction on every cache miss. The number of cache misses depends on many factors:
-- How cacheable the content is: If the content has high TTL (time-to-live)/expiration values and is accessed frequently so it stays popular in cache, then the vast majority of the load is handled by the CDN. A typical good cache-hit ratio is well over 90%, meaning that less than 10% of client requests have to return to origin, either for a cache miss or object refresh.
+- How cacheable the content is: If the content has high TTL (time-to-live)/expiration values and is accessed frequently enough to stay popular in the cache, most of the load is handled by the CDN. A typical good cache-hit ratio is well over 90%, meaning that less than 10% of client requests have to return to the origin, either for a cache miss or an object refresh.
- How many nodes need to load the object: Each time a node loads an object from the origin, it incurs a billable transaction. As a result, more global content (accessed from more nodes) results in more billable transactions. - TTL influence: A higher TTL for an object means it needs to be fetched from the origin less frequently. It also means clients, such as browsers, can cache the object longer, which can reduce the transactions to the CDN. ## Which origin services are eligible for free data transfer with Azure CDN from Microsoft?
-If you use one of the following Azure services as your CDN origin, you will not be charged from Data transfer from the Origin to the CDN PoPs.
+If you use one of the following Azure services as your CDN origin, you aren't charged for data transfer from the origin to the CDN POPs.
- Azure Storage - Azure Media Services
cdn Cdn China Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-china-delivery.md
description: Learn about using Azure Content Delivery Network (CDN) to deliver c
documentationcenter: '' --+ na Previously updated : 05/16/2018 Last updated : 02/27/2023
Azure Content Delivery Network (CDN) global can serve content to China users with point-of-presence (POP) locations near China or any POP that provides the best performance to requests originating from China. However, if China is a significant market for your customers and they need fast performance, consider using Azure CDN China instead.
-Azure CDN China differs from Azure CDN global in that it delivers content from POPs inside of China by partnering with a number of local providers. Due to Chinese compliance and regulation, you must register a separate subscription to use Azure CDN China and your websites need to have an ICP license. The portal and API experience to enable and manage content delivery is identical between Azure CDN global and Azure CDN China.
+Azure CDN China differs from Azure CDN global in that it delivers content from POPs inside China by partnering with several local providers. Due to Chinese compliance and regulation, you must register a separate subscription to use Azure CDN China, and your websites need to have an ICP license. The portal and API experience to enable and manage content delivery is identical between Azure CDN global and Azure CDN China.
## Comparison of Azure CDN global and Azure CDN China
cdn Cdn Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-cors.md
Title: Using Azure CDN with CORS | Microsoft Docs description: Learn how to use the Azure Content Delivery Network (CDN) to with Cross-Origin Resource Sharing (CORS). ---++ ms.assetid: 86740a96-4269-4060-aba3-a69f00e6f14e na Previously updated : 01/23/2017 Last updated : 02/27/2023 # Using Azure CDN with CORS+ ## What is CORS?
-CORS (Cross Origin Resource Sharing) is an HTTP feature that enables a web application running under one domain to access resources in another domain. In order to reduce the possibility of cross-site scripting attacks, all modern web browsers implement a security restriction known as [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy). This prevents a web page from calling APIs in a different domain. CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin.
+
+CORS (Cross-Origin Resource Sharing) is an HTTP feature that enables a web application running under one domain to access resources in another domain. To reduce the possibility of cross-site scripting attacks, all modern web browsers implement a security restriction known as [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy). This restriction prevents a web page from calling APIs in a different domain. CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin.
## How it works+ There are two types of CORS requests, *simple requests* and *complex requests.* ### For simple requests:
-1. The browser sends the CORS request with an additional **Origin** HTTP request header. The value of this header is the origin that served the parent page, which is defined as the combination of *protocol,* *domain,* and *port.* When a page from https\://www.contoso.com attempts to access a user's data in the fabrikam.com origin, the following request header would be sent to fabrikam.com:
+1. The browser sends the CORS request with an extra **Origin** HTTP request header. The value of the request header is the origin that served the parent page, which is defined as the combination of *protocol,* *domain,* and *port.* When a page from https\://www.contoso.com attempts to access a user's data in the fabrikam.com origin, the following request header would be sent to fabrikam.com:
`Origin: https://www.contoso.com`
-2. The server may respond with any of the following:
+2. The server may respond in any of the following ways:
* An **Access-Control-Allow-Origin** header in its response indicating which origin site is allowed. For example: `Access-Control-Allow-Origin: https://www.contoso.com`
- * An HTTP error code such as 403 if the server does not allow the cross-origin request after checking the Origin header
+ * An HTTP error code such as 403 if the server doesn't allow the cross-origin request after checking the Origin header
* An **Access-Control-Allow-Origin** header with a wildcard that allows all origins:
A complex request is a CORS request where the browser is required to send a *pre
> ## Wildcard or single origin scenarios
-CORS on Azure CDN will work automatically with no additional configuration when the **Access-Control-Allow-Origin** header is set to wildcard (*) or a single origin. The CDN will cache the first response and subsequent requests will use the same header.
-If requests have already been made to the CDN prior to CORS being set on your origin, you will need to purge content on your endpoint content to reload the content with the **Access-Control-Allow-Origin** header.
+CORS on Azure CDN works automatically with no extra configuration when the **Access-Control-Allow-Origin** header is set to wildcard (*) or a single origin. The CDN caches the first response, and subsequent requests use the same header.
+
+If requests have already been made to the CDN prior to CORS being set on your origin, you need to purge content on your endpoint content to reload the content with the **Access-Control-Allow-Origin** header.
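For example, here's a minimal, hypothetical ASP.NET Core origin configuration that returns a single allowed origin in the **Access-Control-Allow-Origin** header. It's one common way to emit the header from your origin server; it isn't a required part of the CDN configuration.

```csharp
// Minimal ASP.NET Core origin app that emits Access-Control-Allow-Origin for a single origin.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    // Single-origin policy: the CDN caches this header and serves it to all clients.
    options.AddDefaultPolicy(policy => policy.WithOrigins("https://www.contoso.com"));
});

var app = builder.Build();

app.UseCors();

app.MapGet("/data", () => "Hello from the origin");

app.Run();
```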
## Multiple origin scenarios
-If you need to allow a specific list of origins to be allowed for CORS, things get a little more complicated. The problem occurs when the CDN caches the **Access-Control-Allow-Origin** header for the first CORS origin. When a different CORS origin makes a subsequent request, the CDN will serve the cached **Access-Control-Allow-Origin** header, which won't match. There are several ways to correct this.
+If you need to allow a specific list of origins to be allowed for CORS, things get a little more complicated. The problem occurs when the CDN caches the **Access-Control-Allow-Origin** header for the first CORS origin. When a different CORS origin makes a subsequent request, the CDN serves the cached **Access-Control-Allow-Origin** header, which doesn't match. There are several ways to correct this problem.
### Azure CDN standard profiles
-On Azure CDN Standard from Microsoft, you can create a rule in the [Standard rules engine](cdn-standard-rules-engine-reference.md) to check the **Origin** header on the request. If it's a valid origin, your rule will set the **Access-Control-Allow-Origin** header with the desired value. In this case, the **Access-Control-Allow-Origin** header from the file's origin server is ignored and the CDN's rules engine completely manages the allowed CORS origins.
+On Azure CDN Standard from Microsoft, you can create a rule in the [Standard rules engine](cdn-standard-rules-engine-reference.md) to check the **Origin** header on the request. If it's a valid origin, your rule sets the **Access-Control-Allow-Origin** header with the desired value. In this case, the **Access-Control-Allow-Origin** header from the file's origin server is ignored and the CDN's rules engine completely manages the allowed CORS origins.
![Rules example with standard rules engine](./media/cdn-cors/cdn-standard-cors.png)
On Azure CDN Standard from Microsoft, you can create a rule in the [Standard rul
> You can add additional actions to your rule to modify additional response headers, such as **Access-Control-Allow-Methods**. >
-On **Azure CDN Standard from Akamai**, the only mechanism to allow for multiple origins without the use of the wildcard origin is to use [query string caching](cdn-query-string.md). Enable the query string setting for the CDN endpoint and then use a unique query string for requests from each allowed domain. Doing so will result in the CDN caching a separate object for each unique query string. This approach is not ideal, however, as it will result in multiple copies of the same file cached on the CDN.
+On **Azure CDN Standard from Akamai**, the only mechanism to allow for multiple origins without the use of the wildcard origin is to use [query string caching](cdn-query-string.md). Enable the query string setting for the CDN endpoint and then use a unique query string for requests from each allowed domain. Doing so results in the CDN caching a separate object for each unique query string. This approach isn't ideal, however, as it results in multiple copies of the same file cached on the CDN.
### Azure CDN Premium from Verizon
-Using the Verizon Premium rules engine, You'll need to [create a rule](./cdn-verizon-premium-rules-engine.md) to check the **Origin** header on the request. If it's a valid origin, your rule will set the **Access-Control-Allow-Origin** header with the origin provided in the request. If the origin specified in the **Origin** header is not allowed, your rule should omit the **Access-Control-Allow-Origin** header, which will cause the browser to reject the request.
-There are two ways to do this with the Premium rules engine. In both cases, the **Access-Control-Allow-Origin** header from the file's origin server is ignored and the CDN's rules engine completely manages the allowed CORS origins.
+Using the Verizon Premium rules engine, you need to [create a rule](./cdn-verizon-premium-rules-engine.md) to check the **Origin** header on the request. If it's a valid origin, your rule sets the **Access-Control-Allow-Origin** header with the origin provided in the request. If the origin specified in the **Origin** header isn't allowed, your rule should omit the **Access-Control-Allow-Origin** header, which causes the browser to reject the request.
+
+There are two ways to resolve this problem with the Premium rules engine. In both cases, the **Access-Control-Allow-Origin** header from the file's origin server is ignored and the CDN's rules engine completely manages the allowed CORS origins.
#### One regular expression with all valid origins
-In this case, you'll create a regular expression that includes all of the origins you want to allow:
+
+In this case, you create a regular expression that includes all of the origins you want to allow:
```http https?:\/\/(www\.contoso\.com|contoso\.com|www\.microsoft\.com|microsoft.com\.com)$
https?:\/\/(www\.contoso\.com|contoso\.com|www\.microsoft\.com|microsoft.com\.co
> >
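As an illustration of the logic such a rule implements (written here as C# application code rather than rules-engine syntax), the origin check looks roughly like the following. The pattern assumes the intended origins are contoso.com and microsoft.com, with or without the `www` prefix.

```csharp
using System.Text.RegularExpressions;

public static class CorsOriginCheck
{
    // Illustrative only: mirrors the regular expression used by the rules-engine rule.
    private static readonly Regex AllowedOrigins = new Regex(
        @"^https?:\/\/(www\.contoso\.com|contoso\.com|www\.microsoft\.com|microsoft\.com)$",
        RegexOptions.IgnoreCase);

    // Returns the value to place in Access-Control-Allow-Origin, or null to omit the header
    // so that the browser rejects the cross-origin request.
    public static string GetAllowOriginHeader(string requestOrigin) =>
        AllowedOrigins.IsMatch(requestOrigin) ? requestOrigin : null;
}
```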
-If the regular expression matches, your rule will replace the **Access-Control-Allow-Origin**