Updates from: 08/09/2022 01:12:08
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Extensions App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/extensions-app.md
If you accidentally deleted the `b2c-extensions-app`, you have 30 days to recove
> [!NOTE]
> An application can only be restored if it has been deleted within the last 30 days. If it has been more than 30 days, data will be permanently lost. For more assistance, file a support ticket.
+<!--Hide portal steps until SP bug is fixed
### Recover the extensions app using the Azure portal

1. Sign in to your Azure AD B2C tenant.
If you accidentally deleted the `b2c-extensions-app`, you have 30 days to recove
1. Select **Restore app registration**. You should now be able to [see the restored app](#verifying-that-the-extensions-app-is-present) in the Azure portal.
+-->
### Recover the extensions app using Microsoft Graph
-To restore the app using Microsoft Graph, you must restore both the application and the service principal.
+To restore the app using Microsoft Graph, you must restore both the application object and the service principal. For more information, see the [Restore deleted item](/graph/api/directory-deleteditems-restore) API.
-To restore the application:
+To restore the application object:
1. Browse to [https://developer.microsoft.com/en-us/graph/graph-explorer](https://developer.microsoft.com/en-us/graph/graph-explorer).
1. Log in to the site as a global administrator for the Azure AD B2C directory that you want to restore the deleted app for. This global administrator must have an email address similar to the following: `username@{yourTenant}.onmicrosoft.com`.
1. Issue an HTTP GET against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/microsoft.graph.application`. This operation will list all of the applications that have been deleted within the past 30 days. You can also use the URL `https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.application?$filter=displayName eq 'b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.'` to filter by the app's **displayName** property.
1. Find the application in the list where the name begins with `b2c-extensions-app` and copy its `id` property value.
1. Issue an HTTP POST against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/{id}/restore`. Replace the `{id}` portion of the URL with the `id` from the previous step.
-To restore the service principal:
+To restore the service principal object:
1. Issue an HTTP GET against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/microsoft.graph.servicePrincipal`. This operation will list all of the service principals that have been deleted within the past 30 days. You can also use the URL `https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal?$filter=displayName eq 'b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.'` to filter by the app's **displayName** property.
1. Find the service principal in the list where the name begins with `b2c-extensions-app` and copy its `id` property value.
1. Issue an HTTP POST against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/{id}/restore`. Replace the `{id}` portion of the URL with the `id` from the previous step.
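If you'd rather script the recovery than click through Graph Explorer, the same two calls can be made with the Microsoft Graph PowerShell SDK. The following is a rough sketch, not the article's own procedure; the module and the permission scopes shown are assumptions.

```powershell
# Sketch: list the deleted b2c-extensions-app objects and restore them.
# Assumes the Microsoft.Graph PowerShell module is installed and that the
# signed-in admin has sufficient permissions (scopes below are an assumption).
Connect-MgGraph -Scopes "Application.ReadWrite.All", "Directory.AccessAsUser.All"

foreach ($type in "microsoft.graph.application", "microsoft.graph.servicePrincipal") {
    $deleted = Invoke-MgGraphRequest -Method GET `
        -Uri "https://graph.microsoft.com/v1.0/directory/deleteditems/$type"

    # Find the deleted object whose display name starts with b2c-extensions-app.
    $target = $deleted.value |
        Where-Object { $_.displayName -like 'b2c-extensions-app*' } |
        Select-Object -First 1

    if ($target) {
        # Restore the object by its id.
        Invoke-MgGraphRequest -Method POST `
            -Uri "https://graph.microsoft.com/v1.0/directory/deleteditems/$($target.id)/restore"
    }
}
```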
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md
Previously updated : 09/20/2021 Last updated : 08/04/2022
After you set up the Application Insights, and configure the custom policy, you
To get the Application Insights ID and key:

1. In the Azure portal, open the Application Insights resource for your application.
-1. Select **Settings**, then select **API Access**.
+1. Select **Configure**, then select **API Access**.
1. Copy the **Application ID**.
1. Select **Create API Key**.
1. Check the **Read telemetry** box.
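With the **Application ID** and API key in hand, you can query the resource's telemetry through the Application Insights REST API. The snippet below is an illustrative sketch rather than part of the original article; the ID, key, and KQL query are placeholders.

```powershell
# Sketch: query the resource's traces with the Application ID and API key
# copied from the API Access blade. The ID, key, and query are placeholders.
$appId  = "<application-id>"
$apiKey = "<api-key>"
$query  = "traces | take 10"   # example KQL query

$uri = "https://api.applicationinsights.io/v1/apps/$appId/query?query=" +
       [System.Uri]::EscapeDataString($query)

Invoke-RestMethod -Uri $uri -Headers @{ "x-api-key" = $apiKey }
```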
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
description: Learn how to use additional context in MFA notifications
Previously updated : 06/23/2022 Last updated : 08/08/2022
# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
-This topic covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator app push notifications.
+This article covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator app push notifications.
## Prerequisites
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
| Property | Type | Description |
|----------|------|-------------|
-| id | String | The authentication method policy identifier. |
+| ID | String | The authentication method policy identifier. |
| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |

**RELATIONSHIPS**
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
| Property | Type | Description |
|----------|------|-------------|
| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| id | String | Object ID of an Azure AD user or group. |
+| ID | String | Object ID of an Azure AD user or group. |
| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>You can only set one group or user for additional context. |
| displayAppInformationRequiredState | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
Change the **displayAppInformationRequiredState** from **default** to **enabled**.
-The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you do not want to allow passwordless, use **push**.
+The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **displayAppInformationRequiredState**.
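As a rough sketch of that GET-then-PATCH flow (this isn't the article's own example; the request body simply mirrors the includeTarget properties in the tables above, so treat its exact shape as an assumption):

```powershell
# Sketch: read the Microsoft Authenticator policy, then PATCH the includeTarget
# with additional context (app information) enabled. Names follow the tables above.
$uri = "https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator"

# GET first so the PATCH carries every includeTarget field you want to keep.
Invoke-MgGraphRequest -Method GET -Uri $uri

$body = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    includeTargets = @(
        @{
            targetType                         = "group"
            id                                 = "all_users"
            authenticationMode                 = "any"
            displayAppInformationRequiredState = "enabled"
        }
    )
}
Invoke-MgGraphRequest -Method PATCH -Uri $uri -Body $body
```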
To turn off additional context, you'll need to PATCH remove **displayAppInformat
To enable additional context in the Azure AD portal, complete the following steps:
-1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
-1. Select the target users, click the three dots on the right, and click **Configure**.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
+1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
+1. From the list of available authentication methods, select **Microsoft Authenticator**.
+
+ ![Screenshot that shows how to select the Microsoft Authenticator policy.](./media/how-to-mfa-additional-context/select-microsoft-authenticator-policy.png)
+
+1. Select the target users, select the three dots on the right, and choose **Configure**.
- ![Screenshot of how to configure number match.](media/howto-authentication-passwordless-phone/configure.png)
+ ![Screenshot of configuring Microsoft authenticator additional context.](./media/how-to-mfa-additional-context/configure-microsoft-authenticator.png)
-1. Select the **Authentication mode**, and then for **Show additional context in notifications (Preview)**, click **Enable**, and then click **Done**.
+1. Select the **Authentication mode**, and then for **Show additional context in notifications (Preview)**, select **Enable**, and then select **Done**.
![Screenshot of enabling additional context.](media/howto-authentication-passwordless-phone/enable-additional-context.png)

## Known issues
-Additional context is not supported for Network Policy Server (NPS).
+Additional context isn't supported for Network Policy Server (NPS).
## Next steps
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 06/23/2022 Last updated : 08/08/2022
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
+This article covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
>[!NOTE]
>Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
Number matching is available for the following scenarios. When enabled, all scen
### Multifactor authentication
-When a user responds to an MFA push notification using the Authenticator app, they will be presented with a number. They need to type that number into the app to complete the approval.
+When a user responds to an MFA push notification using the Authenticator app, they'll be presented with a number. They need to type that number into the app to complete the approval.
![Screenshot of user entering a number match.](media/howto-authentication-passwordless-phone/phone-sign-in-microsoft-authenticator-app.png)
Make sure you run the latest version of the [NPS extension](https://www.microsof
Because the NPS extension can't show a number, a user who is enabled for number matching will still be prompted to **Approve**/**Deny**. However, you can create a registry key that overrides push notifications to ask a user to enter a One-Time Passcode (OTP). The user must have an OTP authentication method registered to see this behavior. Common OTP authentication methods include the OTP available in the Authenticator app, other software tokens, and so on.
-If the user doesn't have an OTP method registered, they will continue to get the **Approve**/**Deny** experience. A user with number matching disabled will always see the **Approve**/**Deny** experience.
+If the user doesn't have an OTP method registered, they'll continue to get the **Approve**/**Deny** experience. A user with number matching disabled will always see the **Approve**/**Deny** experience.
To create the registry key that overrides push notifications:
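As a hedged sketch of that change (the registry path and value name below are assumptions, so confirm them against the steps in the full article before applying anything on an NPS server):

```powershell
# Sketch: create the override value on the NPS extension server so users are
# prompted for an OTP instead of Approve/Deny. The path and value name are
# assumptions; confirm them against the full article before applying.
$path = 'HKLM:\SOFTWARE\Microsoft\AzureMfa'
New-ItemProperty -Path $path -Name 'OVERRIDE_NUMBER_MATCHING_WITH_OTP' `
    -Value 'TRUE' -PropertyType String -Force
```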
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
| Property | Type | Description |
|----------|------|-------------|
-| id | String | The authentication method policy identifier. |
+| ID | String | The authentication method policy identifier. |
| state | authenticationMethodState | Possible values are: **enabled**<br>**disabled** |

**RELATIONSHIPS**
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
| Property | Type | Description |
|----------|------|-------------|
| authenticationMode | String | Possible values are:<br>**any**: Both passwordless phone sign-in and traditional second factor notifications are allowed.<br>**deviceBasedPush**: Only passwordless phone sign-in notifications are allowed.<br>**push**: Only traditional second factor push notifications are allowed. |
-| id | String | Object ID of an Azure AD user or group. |
-| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>Please note: You will be able to only set one group or user for number matching. |
+| ID | String | Object ID of an Azure AD user or group. |
+| targetType | authenticationMethodTargetType | Possible values are: **user**, **group**.<br>Note: You can only set one group or user for number matching. |
| numberMatchingRequiredState | advancedConfigState | Possible values are:<br>**enabled** explicitly enables the feature for the selected group.<br>**disabled** explicitly disables the feature for the selected group.<br>**default** allows Azure AD to manage whether the feature is enabled or not for the selected group. |

>[!NOTE]
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
#### Example of how to enable number matching for all users
-You will need to change the **numberMatchingRequiredState** from **default** to **enabled**.
+You'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you do not want to allow passwordless, use **push**.
+Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
>[!NOTE]
>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
You might need to patch the entire includeTarget to prevent overwriting any prev
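A minimal sketch of that PATCH, using the Microsoft Graph PowerShell SDK rather than a raw HTTP client, might look like the following. It isn't the article's own example; the body mirrors the property tables above, so treat its exact shape as an assumption.

```powershell
# Sketch: enable number matching for all users by patching the entire
# includeTarget. Property names mirror the tables above (an assumption).
$uri = "https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator"

$body = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    includeTargets = @(
        @{
            targetType                  = "group"
            id                          = "all_users"
            authenticationMode          = "any"
            numberMatchingRequiredState = "enabled"
        }
    )
}
Invoke-MgGraphRequest -Method PATCH -Uri $uri -Body $body
```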
-To confirm this has applied, please run the GET request below using the endpoint below.
+To confirm this update has been applied, run the GET request against the endpoint below.
GET - https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator

#### Example of how to enable number matching for a single group
-We will need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
-You will need to change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
+We'll need to change the **numberMatchingRequiredState** value from **default** to **enabled.**
+You'll need to change the **id** from **all_users** to the ObjectID of the group from the Azure AD portal.
You need to PATCH the entire includeTarget to prevent overwriting any previous configuration. We recommend that you do a GET first, update only the relevant fields, and then PATCH. The example below only shows the update to the **numberMatchingRequiredState**.
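For the group-scoped case, the sketch is the same as the all-users one above, except that the target `id` carries the group's object ID (a placeholder below); this is illustrative only, not the article's own example.

```powershell
# Sketch: scope number matching to one group. Replace the placeholder with the
# group's object ID from the Azure AD portal.
$body = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    includeTargets = @(
        @{
            targetType                  = "group"
            id                          = "<group-object-id>"
            authenticationMode          = "any"
            numberMatchingRequiredState = "enabled"
        }
    )
}
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator" `
    -Body $body
```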
See the end user experience of an Authenticator MFA push notification with numbe
### Turn off number matching
-To turn number matching off, you will need to PATCH remove **numberMatchingRequiredState** from **enabled** to **disabled**/**default**.
+To turn off number matching, you'll need to PATCH the **numberMatchingRequiredState** from **enabled** to **disabled**/**default**.
To turn number matching off, you will need to PATCH remove **numberMatchingRequi
## Enable number matching in the portal
-To enable number matching in the Azure AD portal, complete the following steps:
+To enable number matching in the Azure portal, complete the following steps:
-1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
-1. Select the target users, click the three dots on the right, and click **Configure**.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
+1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
+1. From the list of available authentication methods, select **Microsoft Authenticator**.
+
+ ![Screenshot that shows how to select the Microsoft Authenticator policy.](./media/how-to-mfa-number-match/select-microsoft-authenticator-policy.png)
+
+1. Select the target users, select the three dots on the right, and choose **Configure**.
- ![Screenshot of configuring number match.](media/howto-authentication-passwordless-phone/configure.png)
+ ![Screenshot of configuring number match.](./media/how-to-mfa-number-match/configure-microsoft-authenticator.png)
-1. Select the **Authentication mode**, and then for **Require number matching (Preview)**, click **Enable**, and then click **Done**.
+1. Select the **Authentication mode**, and then for **Require number matching (Preview)**, select **Enable**, and then select **Done**.
- ![Screenshot of enabling number match.](media/howto-authentication-passwordless-phone/enable-number-matching.png)
+ ![Screenshot of enabling number match configuration.](media/howto-authentication-passwordless-phone/enable-number-matching.png)
>[!NOTE]
>[Least privileged role in Azure Active Directory - Multifactor authentication](../roles/delegate-by-task.md#multi-factor-authentication)
-Number matching is not supported for Apple Watch notifications. Apple Watch need to use their phone to approve notifications when number matching is enabled.
+Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
## Next steps
-[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-sms-signin.md
Title: SMS-based user sign in for Azure Active Directory
-description: Learn how to configure and enable users to sign in to Azure Active Directory using SMS
+ Title: SMS-based user sign-in for Azure Active Directory
+description: Learn how to configure and enable users to sign in to Azure Active Directory using SMS
Previously updated : 02/10/2022 Last updated : 08/08/2022
# Configure and enable users for SMS-based authentication using Azure Active Directory
-To simplify and secure sign in to applications and services, Azure Active Directory (Azure AD) provides multiple authentication options. SMS-based authentication lets users sign in without providing, or even knowing, their user name and password. After their account is created by an identity administrator, they can enter their phone number at the sign-in prompt. They receive an authentication code via text message that they can provide to complete the sign in. This authentication method simplifies access to applications and services, especially for Frontline workers.
+To simplify and secure sign-in to applications and services, Azure Active Directory (Azure AD) provides multiple authentication options. SMS-based authentication lets users sign in without providing, or even knowing, their user name and password. After their account is created by an identity administrator, they can enter their phone number at the sign-in prompt. They receive an authentication code via text message that they can provide to complete the sign-in. This authentication method simplifies access to applications and services, especially for frontline workers.
This article shows you how to enable SMS-based authentication for select users or groups in Azure AD. For a list of apps that support using SMS-based sign-in, see [App support for SMS-based authentication](how-to-authentication-sms-supported-apps.md).
Here are some known issues:
* SMS-based authentication isn't recommended for B2B accounts.
* Federated users won't authenticate in the home tenant. They only authenticate in the cloud.
* If a user's default sign-in method is a text or call to your phone number, then the SMS code or voice call is sent automatically during multifactor authentication. As of June 2021, some apps will ask users to choose **Text** or **Call** first. This option prevents sending too many security codes for different apps. If the default sign-in method is the Microsoft Authenticator app ([which we highly recommend](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752)), then the app notification is sent automatically.
-* SMS-based authentication has reached general availability, and we are working to remove the **(Preview)** label in the Azure portal.
+* SMS-based authentication has reached general availability, and we're working to remove the **(Preview)** label in the Azure portal.
## Enable the SMS-based authentication method
There are three main steps to enable and use SMS-based authentication in your or
First, let's enable SMS-based authentication for your Azure AD tenant.
-1. Sign in to the [Azure portal][azure-portal] as a *global administrator*.
-1. Search for and select **Azure Active Directory**.
-1. From the navigation menu on the left-hand side of the Azure Active Directory window, select **Security > Authentication methods > Authentication method policy**.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
+1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
+1. From the list of available authentication methods, select **Text message**.
- [![Browse to and select the Authentication method policy window in the Azure portal.](media/howto-authentication-sms-signin/authentication-method-policy-cropped.png)](media/howto-authentication-sms-signin/authentication-method-policy.png#lightbox)
+ ![Screenshot that shows how to select the text message authentication method.](./media/howto-authentication-sms-signin/select-text-message-policy.png)
-1. From the list of available authentication methods, select **Text message**.
-1. Set **Enable** to *Yes*.
+
+1. Set **Enable** to *Yes*. Then select the **Target users**.
![Enable text authentication in the authentication method policy window](./media/howto-authentication-sms-signin/enable-text-authentication-method.png)
With SMS-based authentication enabled in your Azure AD tenant, now select some u
1. In the text message authentication policy window, set **Target** to *Select users*.
1. Choose to **Add users or groups**, then select a test user or group, such as *Contoso User* or *Contoso SMS Users*.
- [![Choose users or groups to enable for SMS-based authentication in the Azure portal.](media/howto-authentication-sms-signin/add-users-or-groups-cropped.png)](media/howto-authentication-sms-signin/add-users-or-groups.png#lightbox)
1. When you've selected your users or groups, choose **Select**, then **Save** the updated authentication method policy.

Each user that's enabled in the text message authentication method policy must be licensed, even if they don't use it. Make sure you have the appropriate licenses for the users you enable in the authentication method policy, especially when you enable the feature for large groups of users.

## Set a phone number for user accounts
-Users are now enabled for SMS-based authentication, but their phone number must be associated with the user profile in Azure AD before they can sign in. The user can [set this phone number themselves](https://support.microsoft.com/account-billing/set-up-sms-sign-in-as-a-phone-verification-method-0aa5b3b3-a716-4ff2-b0d6-31d2bcfbac42) in *My Account*, or you can assign the phone number using the Azure portal. Phone numbers can be set by *global admins*, *authentication admins*, or *privileged authentication admins*.
+Users are now enabled for SMS-based authentication, but their phone number must be associated with the user profile in Azure AD before they can sign in. The user can [set this phone number themselves](https://support.microsoft.com/account-billing/set-up-sms-sign-in-as-a-phone-verification-method-0aa5b3b3-a716-4ff2-b0d6-31d2bcfbac42) in *My Account*, or you can assign the phone number using the Azure portal. Phone numbers can be set by *global admins*, *authentication admins*, or *privileged authentication admins*.
When a phone number is set for SMS sign-in, it's then also available for use with [Azure AD Multi-Factor Authentication][tutorial-azure-mfa] and [self-service password reset][tutorial-sspr].
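If you'd rather script this than use the portal, one option (a sketch, and not something this article documents) is the phone authentication method cmdlet in the Microsoft Graph PowerShell SDK; the UPN, phone number, and scope are placeholders or assumptions.

```powershell
# Sketch: add a mobile phone authentication method for a user so the number
# can be used for SMS-based sign-in. The UPN and number are placeholders.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

New-MgUserAuthenticationPhoneMethod -UserId "chris@contoso.com" `
    -PhoneNumber "+1 4255550123" -PhoneType "mobile"
```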
To test the user account that's now enabled for SMS-based sign-in, complete the
## Troubleshoot SMS-based sign-in
-The following scenarios and troubleshooting steps can used if you have problems with enabling and using SMS-based sign in.
+The following scenarios and troubleshooting steps can be used if you have problems with enabling and using SMS-based sign-in.
For a list of apps that support using SMS-based sign-in, see [App support for SMS-based authentication](how-to-authentication-sms-supported-apps.md).

### Phone number already set for a user account
-If a user has already registered for Azure AD Multi-Factor Authentication and / or self-service password reset (SSPR), they already have a phone number associated with their account. This phone number is not automatically available for use with SMS-based sign-in.
+If a user has already registered for Azure AD Multi-Factor Authentication and / or self-service password reset (SSPR), they already have a phone number associated with their account. This phone number isn't automatically available for use with SMS-based sign-in.
A user who already has a phone number set for their account sees an *Enable for SMS sign-in* button on their **My Profile** page. Select this button, and the account is enabled for use with SMS-based sign-in and the previous Azure AD Multi-Factor Authentication or SSPR registration.
If you receive an error when you try to set a phone number for a user account in
## Next steps

- For a list of apps that support using SMS-based sign-in, see [App support for SMS-based authentication](how-to-authentication-sms-supported-apps.md).
-- For additional ways to sign in to Azure AD without a password, such as the Microsoft Authenticator App or FIDO2 security keys, see [Passwordless authentication options for Azure AD][concepts-passwordless].
+- For more ways to sign in to Azure AD without a password, such as the Microsoft Authenticator App or FIDO2 security keys, see [Passwordless authentication options for Azure AD][concepts-passwordless].
- You can also use the Microsoft Graph REST API to [enable][rest-enable] or [disable][rest-disable] SMS-based sign-in.
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Previously updated : 06/17/2022 Last updated : 08/08/2022
Although you can create a Temporary Access Pass for any user, only those include
Global administrator and Authentication Policy administrator role holders can update the Temporary Access Pass authentication method policy. To configure the Temporary Access Pass authentication method policy:
-1. Sign in to the Azure portal as a Global admin or Authentication Policy admin and click **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
-![Screenshot of how to manage Temporary Access Pass within the authentication method policy experience.](./media/how-to-authentication-temporary-access-pass/policy.png)
-1. Set Enable to **Yes** to enable the policy, select which users have the policy applied.
-![Screenshot of how to enable the Temporary Access Pass authentication method policy.](./media/how-to-authentication-temporary-access-pass/policy-scope.png)
-1. (Optional) Click **Configure** and modify the default Temporary Access Pass settings, such as setting maximum lifetime, or length.
-![Screenshot of how to customize the settings for Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/policy-settings.png)
-1. Click **Save** to apply the policy.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Search for and select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
+1. Under the **Manage** menu header, select **Authentication methods** > **Policies**.
+1. From the list of available authentication methods, select **Temporary Access Pass**.
+
+ ![Screenshot of how to manage Temporary Access Pass within the authentication method policy experience.](./media/how-to-authentication-temporary-access-pass/select-temporary-access-pass-policy.png)
+
+1. Set **Enable** to **Yes** to enable the policy, then select the **Target** users.
+
+ ![Screenshot of how to enable the Temporary Access Pass authentication method policy.](./media/how-to-authentication-temporary-access-pass/enable-temporary-access-pass.png)
+
+1. (Optional) Select **Configure** and modify the default Temporary Access Pass settings, such as setting the maximum lifetime or length.
+![Screenshot of how to customize the settings for Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/configure-temporary-access-pass.png)
+1. Select **Save** to apply the policy.
To configure the Temporary Access Pass authentication method policy:
| Setting | Default values | Allowed values | Comments |
|---------|----------------|----------------|----------|
- | Minimum lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. |
- | Maximum lifetime | 8 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
- | Default lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Default values can be override by the individual passes, within the minimum and maximum lifetime configured by the policy. |
+ | Minimum lifetime | 1 hour | 10 – 43,200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. |
+ | Maximum lifetime | 8 hours | 10 – 43,200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
+ | Default lifetime | 1 hour | 10 – 43,200 Minutes (30 days) | Default values can be overridden by individual passes, within the minimum and maximum lifetime configured by the policy. |
| One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used either once or more than once during its validity (maximum lifetime). By enforcing one-time use in the Temporary Access Pass policy, all passes created in the tenant will be created as one-time use. |
| Length | 8 | 8-48 characters | Defines the length of the passcode. |
These roles can perform the following actions related to a Temporary Access Pass
- Global Reader can view the Temporary Access Pass details on the user (without reading the code itself).

1. Sign in to the Azure portal as either a Global administrator, Privileged Authentication administrator, or Authentication administrator.
-1. Click **Azure Active Directory**, browse to Users, select a user, such as *Chris Green*, then choose **Authentication methods**.
+1. Select **Azure Active Directory**, browse to Users, select a user, such as *Chris Green*, then choose **Authentication methods**.
1. If needed, select the option to **Try the new user authentication methods experience**.
1. Select the option to **Add authentication methods**.
-1. Below **Choose method**, click **Temporary Access Pass**.
-1. Define a custom activation time or duration and click **Add**.
+1. Below **Choose method**, select **Temporary Access Pass**.
+1. Define a custom activation time or duration and select **Add**.
![Screenshot of how to create a Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/create.png)
-1. Once added, the details of the Temporary Access Pass are shown. Make a note of the actual Temporary Access Pass value. You provide this value to the user. You can't view this value after you click **Ok**.
+1. Once added, the details of the Temporary Access Pass are shown. Make a note of the actual Temporary Access Pass value. You provide this value to the user. You can't view this value after you select **Ok**.
![Screenshot of Temporary Access Pass details.](./media/how-to-authentication-temporary-access-pass/details.png)
c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True
-For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta).
+For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true).
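As a quick, hedged illustration of those cmdlets (the UPN, lifetime, and permission scope below are placeholders or assumptions; check the linked reference for the exact parameters):

```powershell
# Sketch: create a one-hour, one-time-use Temporary Access Pass for a user,
# then list that user's passes. The UPN and values are placeholders.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

New-MgUserAuthenticationTemporaryAccessPassMethod -UserId "tapuser@contoso.com" `
    -LifetimeInMinutes 60 -IsUsableOnce:$true

Get-MgUserAuthenticationTemporaryAccessPassMethod -UserId "tapuser@contoso.com"
```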
## Use a Temporary Access Pass
-The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in or device setup, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
+The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in or device setup, without the need to complete extra security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
1. Open a web browser to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). 1. Enter the UPN of the account you created the Temporary Access Pass for, such as *tapuser@contoso.com*.
-1. If the user is included in the Temporary Access Pass policy, they will see a screen to enter their Temporary Access Pass.
+1. If the user is included in the Temporary Access Pass policy, they'll see a screen to enter their Temporary Access Pass.
1. Enter the Temporary Access Pass that was displayed in the Azure portal. ![Screenshot of how to enter a Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/enter.png)
Users can also continue to sign-in by using their password; a TAP doesn't repl
### User management of Temporary Access Pass
-Users managing their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) will see an entry for the Temporary Access Pass. If a user does not have any other registered methods they will be presented a banner at the top of the screen requesting them to add a new sign-in method. Users can additionally view the TAP expiration time, and delete the TAP if no longer needed.
+Users managing their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) will see an entry for the Temporary Access Pass. If a user doesn't have any other registered methods, they'll see a banner at the top of the screen asking them to add a new sign-in method. Users can also view the TAP expiration time, and delete the TAP if no longer needed.
![Screenshot of how users can manage a Temporary Access Pass in My Security Info.](./media/how-to-authentication-temporary-access-pass/tap-my-security-info.png)

### Windows device setup

Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello for Business. Temporary Access Pass usage for setting up Windows Hello for Business varies based on the device's joined state:

- During Azure AD Join setup, users can authenticate with a TAP (no password required) and set up Windows Hello for Business.
-- On already Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to setup Windows Hello for Business.
-- On Hybrid Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to setup Windows Hello for Business.
+- On already Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+- On Hybrid Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
![Screenshot of how to enter Temporary Access Pass when setting up Windows 10.](./media/how-to-authentication-temporary-access-pass/windows-10-tap.png)
You can also use PowerShell:
Remove-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com -TemporaryAccessPassAuthenticationMethodId c5dbd20a-8b8f-4791-a23f-488fcbde3b38
-For more information, see [Remove-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/remove-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta).
+For more information, see [Remove-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/remove-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true).
## Replace a Temporary Access Pass
For more information about NIST standards for onboarding and recovery, see [NIST
Keep these limitations in mind:

-- When using a one-time Temporary Access Pass to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time Temporary Access Pass. This limitation does not apply to a Temporary Access Pass that can be used more than once.
-- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass.
-Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration.
-- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.
+- When using a one-time Temporary Access Pass to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time Temporary Access Pass. This limitation doesn't apply to a Temporary Access Pass that can be used more than once.
+- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they've signed in with a Temporary Access Pass.
+Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience doesn't currently support FIDO2 and Phone Sign-in registration.
+- A Temporary Access Pass can't be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.
- After a Temporary Access Pass is added to an account or expires, it can take a few minutes for the changes to replicate. Users may still see a prompt for Temporary Access Pass during this time.

## Troubleshooting

-- If a Temporary Access Pass is not offered to a user during sign-in, check the following:
+- If a Temporary Access Pass isn't offered to a user during sign-in, check the following:
- The user is in scope for the Temporary Access Pass authentication method policy.
- - The user has a valid Temporary Access Pass, and if it is one-time use, it wasnΓÇÖt used yet.
+ - The user has a valid Temporary Access Pass, and if it's one-time use, it wasnΓÇÖt used yet.
- If **Temporary Access Pass sign in was blocked due to User Credential Policy** appears during sign-in with a Temporary Access Pass, check the following:
  - The user has a multi-use Temporary Access Pass while the authentication method policy requires a one-time Temporary Access Pass.
  - A one-time Temporary Access Pass was already used.
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
In some cases, an **All cloud apps** policy could inadvertently block user acces
- Calls to Azure AD Graph and MS Graph, to access user profile, group membership and relationship information that is commonly used by applications excluded from policy. The excluded scopes are listed below. Consent is still required for apps to use these permissions.
  - For native clients:
- - Azure AD Graph: User.read
+ - Azure AD Graph: email, offline_access, openid, profile, User.read
  - MS Graph: User.read, People.read, and UserProfile.read
  - For confidential / authenticated clients:
- - Azure AD Graph: User.read, User.read.all, and User.readbasic.all
+ - Azure AD Graph: email, offline_access, openid, profile, User.read, User.read.all, and User.readbasic.all
  - MS Graph: User.read, User.read.all, People.read, People.read.all, GroupMember.Read.All, Member.Read.Hidden, and UserProfile.read

## User actions
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Title: Grant controls in Conditional Access policy - Azure Active Directory
-description: What are grant controls in an Azure AD Conditional Access policy
+description: Grant controls in an Azure Active Directory Conditional Access policy.
Last updated 08/05/2022-
# Conditional Access: Grant
-Within a Conditional Access policy, an administrator can make use of access controls to either grant or block access to resources.
+Within a Conditional Access policy, an administrator can use access controls to grant or block access to resources.
## Block access
-Block takes into account any assignments and prevents access based on the Conditional Access policy configuration.
+The control for blocking access considers any assignments and prevents access based on the Conditional Access policy configuration.
-Block is a powerful control that should be wielded with appropriate knowledge. Policies with block statements can have unintended side effects. Proper testing and validation are vital before enabling at scale. Administrators should utilize tools such as [Conditional Access report-only mode](concept-conditional-access-report-only.md) and [the What If tool in Conditional Access](what-if-tool.md) when making changes.
+**Block access** is a powerful control that you should apply with appropriate knowledge. Policies with block statements can have unintended side effects. Proper testing and validation are vital before you enable the control at scale. Administrators should use tools such as [Conditional Access report-only mode](concept-conditional-access-report-only.md) and [the What If tool in Conditional Access](what-if-tool.md) when making changes.
## Grant access
Administrators can choose to enforce one or more controls when granting access.
- [Require app protection policy](app-protection-based-conditional-access.md) - [Require password change](#require-password-change)
-When administrators choose to combine these options, they can choose the following methods:
+When administrators choose to combine these options, they can use the following methods:
-- Require all the selected controls (control **AND** control)
-- Require one of the selected controls (control **OR** control)
+- Require all the selected controls (control *and* control)
+- Require one of the selected controls (control *or* control)
-By default Conditional Access requires all selected controls.
+By default, Conditional Access requires all selected controls.
-### Require multifactor authentication
+### Require Multi-Factor Authentication
-Selecting this checkbox will require users to perform Azure AD Multifactor Authentication. More information about deploying Azure AD Multifactor Authentication can be found in the article [Planning a cloud-based Azure AD Multifactor Authentication deployment](../authentication/howto-mfa-getstarted.md).
+Selecting this checkbox requires users to perform Azure Active Directory (Azure AD) Multi-Factor Authentication. You can find more information about deploying Azure AD Multi-Factor Authentication in [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
-[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies.
+[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies.
### Require device to be marked as compliant
-Organizations who have deployed Microsoft Intune can use the information returned from their devices to identify devices that meet specific compliance requirements. Policy compliance information is sent from Intune to Azure AD so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see the article [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started).
-
-A device can be marked as compliant by Intune (for any device OS) or by third-party MDM system for Windows 10 devices. A list of supported third-party MDM systems can be found in the article [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
+Organizations that have deployed Intune can use the information returned from their devices to identify devices that meet specific policy compliance requirements. Intune sends compliance information to Azure AD so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see [Set rules on devices to allow access to resources in your organization by using Intune](/intune/protect/device-compliance-get-started).
-Devices must be registered in Azure AD before they can be marked as compliant. More information about device registration can be found in the article, [What is a device identity](../devices/overview.md).
+A device can be marked as compliant by Intune for any device operating system or by a third-party mobile device management system for Windows 10 devices. You can find a list of supported third-party mobile device management systems in [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
-**Remarks**
+Devices must be registered in Azure AD before they can be marked as compliant. You can find more information about device registration in [What is a device identity?](../devices/overview.md).
-- The **Require device to be marked as compliant** requirement:
+The **Require device to be marked as compliant** control:
- Only supports Windows 10+, iOS, Android, and macOS devices registered with Azure AD and enrolled with Intune.
- - For devices enrolled with third-party MDM systems, see [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
- - Conditional Access can't consider Microsoft Edge in InPrivate mode as a compliant device.
+ - Considers Microsoft Edge in InPrivate mode a non-compliant device.
> [!NOTE]
-> On Windows 7, iOS, Android, macOS, and some third-party web browsers Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser the user is prompted to select the certificate. The end user must select this certificate before they can continue to use the browser.
+> On Windows 7, iOS, Android, macOS, and some third-party web browsers, Azure AD identifies the device by using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser, the user is prompted to select the certificate. The user must select this certificate before they can continue to use the browser.
-You can use the Microsoft Defender for Endpoint app along with the Approved Client app policy in Intune to set device compliance policy Conditional Access policies. There's no exclusion required for the Microsoft Defender for Endpoint app while setting up Conditional Access. Although Microsoft Defender for Endpoint on Android & iOS (App ID - dd47d17a-3194-4d86-bfd5-c6ae6f5651e3) isn't an approved app, it has permission to report device security posture. This permission enables the flow of compliance information to Conditional Access.
+You can use the Microsoft Defender for Endpoint app with the approved client app policy in Intune to set the device compliance policy to Conditional Access policies. There's no exclusion required for the Microsoft Defender for Endpoint app while you're setting up Conditional Access. Although Microsoft Defender for Endpoint on Android and iOS (app ID dd47d17a-3194-4d86-bfd5-c6ae6f5651e3) isn't an approved app, it has permission to report device security posture. This permission enables the flow of compliance information to Conditional Access.
### Require hybrid Azure AD joined device
-Organizations can choose to use the device identity as part of their Conditional Access policy. Organizations can require that devices are hybrid Azure AD joined using this checkbox. For more information about device identities, see the article [What is a device identity?](../devices/overview.md).
-
-When using the [device-code OAuth flow](../develop/v2-oauth2-device-code.md), the require managed device grant control or a device state condition aren't supported. This is because the device performing authentication can't provide its device state to the device providing a code and the device state in the token is locked to the device performing authentication. Use the require multi-factor authentication grant control instead.
+Organizations can choose to use the device identity as part of their Conditional Access policy. Organizations can require that devices are hybrid Azure AD joined by using this checkbox. For more information about device identities, see [What is a device identity?](../devices/overview.md).
-**Remarks**
+When you use the [device-code OAuth flow](../develop/v2-oauth2-device-code.md), the required grant control for the managed device or a device state condition isn't supported. This is because the device that is performing authentication can't provide its device state to the device that is providing a code. Also, the device state in the token is locked to the device performing authentication. Use the **Require Multi-Factor Authentication** control instead.
-- The **Require hybrid Azure AD joined device** requirement:
- - Only supports domain joined Windows down-level (pre Windows 10) and Windows current (Windows 10+) devices.
- - Conditional Access can't consider Microsoft Edge in InPrivate mode as a hybrid Azure AD joined device.
+The **Require hybrid Azure AD joined device** control:
+ - Only supports domain-joined Windows down-level (before Windows 10) and Windows current (Windows 10+) devices.
+ - Doesn't consider Microsoft Edge in InPrivate mode as a hybrid Azure AD-joined device.
### Require approved client app
-Organizations can require that an access attempt to the selected cloud apps needs to be made from an approved client app. These approved client apps support [Intune app protection policies](/intune/app-protection-policy) independent of any mobile-device management (MDM) solution.
+Organizations can require that an approved client app is used to access selected cloud apps. These approved client apps support [Intune app protection policies](/intune/app-protection-policy) independent of any mobile device management solution.
-In order to apply this grant control, Conditional Access requires that the device is registered in Azure Active Directory, which requires the use of a broker app. The broker app can be the Microsoft Authenticator for iOS, or either the Microsoft Authenticator or Microsoft Company portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user gets redirected to the appropriate app store to install the required broker app.
+To apply this grant control, the device must be registered in Azure AD, which requires using a broker app. The broker app can be Microsoft Authenticator for iOS, or either Microsoft Authenticator or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the appropriate app store to install the required broker app.
-The following client apps have been confirmed to support this setting:
+The following client apps support this setting:
- Microsoft Azure Information Protection - Microsoft Bookings
The following client apps have been confirmed to support this setting:
- Microsoft 365 Admin

**Remarks**
+ - The approved client apps support the Intune mobile application management feature.
+ - The **Require approved client app** requirement:
+ - Only supports the iOS and Android for device platform condition.
+ - Requires a broker app to register the device. The broker app can be Microsoft Authenticator for iOS, or either Microsoft Authenticator or Microsoft Company Portal for Android devices.
+- Conditional Access can't consider Microsoft Edge in InPrivate mode an approved client app.
+- Conditional Access policies that require Microsoft Power BI as an approved client app don't support using Azure AD Application Proxy to connect the Power BI mobile app to the on-premises Power BI Report Server.
-- The approved client apps support the Intune mobile application management feature.
-- The **Require approved client app** requirement:
- - Only supports the iOS and Android for device platform condition.
- - A broker app is required to register the device. The broker app can be the Microsoft Authenticator for iOS, or either the Microsoft Authenticator or Microsoft Company portal for Android devices.
-- Conditional Access can't consider Microsoft Edge in InPrivate mode an approved client app.
-- Using Azure AD Application Proxy to enable the Power BI mobile app to connect to on premises Power BI Report Server isn't supported with Conditional Access policies that require the Microsoft Power BI app as an approved client app.
-See the article, [How to: Require approved client apps for cloud app access with Conditional Access](app-based-conditional-access.md) for configuration examples.
+See [Require approved client apps for cloud app access with Conditional Access](app-based-conditional-access.md) for configuration examples.
### Require app protection policy
-In your Conditional Access policy, you can require an [Intune app protection policy](/intune/app-protection-policy) be present on the client app before access is available to the selected cloud apps.
+In your Conditional Access policy, you can require that an [Intune app protection policy](/intune/app-protection-policy) is present on the client app before access is available to the selected cloud apps.
-In order to apply this grant control, Conditional Access requires that the device is registered in Azure Active Directory, which requires the use of a broker app. The broker app can be either the Microsoft Authenticator for iOS, or the Microsoft Company portal for Android devices. If a broker app isnΓÇÖt installed on the device when the user attempts to authenticate, the user gets redirected to the app store to install the broker app.
+To apply this grant control, Conditional Access requires that the device is registered in Azure AD, which requires using a broker app. The broker app can be either Microsoft Authenticator for iOS or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the app store to install the broker app.
-Applications are required to have the **Intune SDK** with **Policy Assurance** implemented and meet certain other requirements to support this setting. Developers implementing applications with the Intune SDK can find more information in the SDK documentation on these requirements.
+Applications must have the Intune SDK with policy assurance implemented and must meet certain other requirements to support this setting. Developers who are implementing applications with the Intune SDK can find more information on these requirements in the SDK documentation.
-The following client apps have been confirmed to support this setting:
+The following client apps support this setting:
- Microsoft Cortana
- Microsoft Edge
The following client apps have been confirmed to support this setting:
- Microsoft To Do
- Microsoft Word
- MultiLine for Intune
-- Nine Mail - Email & Calendar
+- Nine Mail - Email and Calendar
> [!NOTE]
-> Microsoft Kaizala, Microsoft Skype for Business and Microsoft Visio do not support the **Require app protection policy** grant. If you require these apps to work, please use the **Require approved apps** grant exclusively. The use of the `or` clause between the two grants will not work for these three applications.
+> Kaizala, Skype for Business, and Visio don't support the **Require app protection policy** grant. If you require these apps to work, use the **Require approved apps** grant exclusively. Using the "or" clause between the two grants will not work for these three applications.
-**Remarks**
+Apps for the app protection policy support the Intune mobile application management feature with policy protection.
+
+The **Require app protection policy** control:
-- Apps for app protection policy support the Intune mobile application management feature with policy protection.
-- The **Require app protection policy** requirements:
- - Only supports the iOS and Android for device platform condition.
- - A broker app is required to register the device. On iOS, the broker app is Microsoft Authenticator and on Android, it's Intune Company Portal app.
+- Only supports iOS and Android for device platform condition.
+- Requires a broker app to register the device. On iOS, the broker app is Microsoft Authenticator. On Android, the broker app is Intune Company Portal.
-See the article, [How to: Require app protection policy and an approved client app for cloud app access with Conditional Access](app-protection-based-conditional-access.md) for configuration examples.
+See [Require app protection policy and an approved client app for cloud app access with Conditional Access](app-protection-based-conditional-access.md) for configuration examples.
-### Require password change
+### Require password change
-When user risk is detected, using the user risk policy conditions, administrators can choose to have the user securely change the password using Azure AD self-service password reset. If user risk is detected, users can perform a self-service password reset to self-remediate, this process will close the user risk event to prevent unnecessary noise for administrators.
+When user risk is detected, administrators can employ the user risk policy conditions to have the user securely change a password by using Azure AD self-service password reset. Users can perform a self-service password reset to self-remediate. This process will close the user risk event to prevent unnecessary alerts for administrators.
-When a user is prompted to change their password, they'll first be required to complete multi-factor authentication. You'll want to make sure all of your users have registered for multi-factor authentication, so they're prepared in case risk is detected for their account.
+When a user is prompted to change a password, they'll first be required to complete multifactor authentication. Make sure all users have registered for multifactor authentication, so they're prepared in case risk is detected for their account.
> [!WARNING]
-> Users must have previously registered for self-service password reset before triggering the user risk policy.
+> Users must have previously registered for self-service password reset before triggering the user risk policy.
-Restrictions when you configure a policy using the password change control.
+The following restrictions apply when you configure a policy by using the password change control:
-1. The policy must be assigned to 'all cloud apps'. This requirement prevents an attacker from using a different app to change the user's password and reset account risk, by signing into a different app.
-1. Require password change can't be used with other controls, like requiring a compliant device.
-1. The password change control can only be used with the user and group assignment condition, cloud app assignment condition (which must be set to all), and user risk conditions.
+- The policy must be assigned to "all cloud apps." This requirement prevents an attacker from using a different app to change the user's password and resetting their account risk by signing in to a different app.
+- **Require password change** can't be used with other controls, such as requiring a compliant device.
+- The password change control can only be used with the user and group assignment condition, cloud app assignment condition (which must be set to "all"), and user risk conditions.
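
If you manage Conditional Access through the Microsoft Graph API instead of the portal, a policy that respects these restrictions might look like the following sketch. This is a minimal, illustrative example rather than a documented procedure: the display name and report-only state are placeholders, and you should confirm the property values against the current conditionalAccessPolicy schema before using it.

```powershell
# Minimal sketch (assumed values): report-only policy that pairs "Require password change"
# with MFA for high-risk users across all cloud apps.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Example - require password change for high user risk"   # hypothetical name
    state       = "enabledForReportingButNotEnforced"                      # report-only while testing
    conditions  = @{
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("All") }               # must target all cloud apps
        userRiskLevels = @("high")
    }
    grantControls = @{
        operator        = "AND"
        builtInControls = @("mfa", "passwordChange")                       # password change requires MFA
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body ($policy | ConvertTo-Json -Depth 10) -ContentType "application/json"
```

Leaving the policy in report-only mode first lets you confirm the scope before switching `state` to `enabled`.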
### Terms of use
-If your organization has created terms of use, other options may be visible under grant controls. These options allow administrators to require acknowledgment of terms of use as a condition of accessing the resources protected by the policy. More information about terms of use can be found in the article, [Azure Active Directory terms of use](terms-of-use.md).
+If your organization has created terms of use, other options might be visible under grant controls. These options allow administrators to require acknowledgment of terms of use as a condition of accessing the resources that the policy protects. You can find more information about terms of use in [Azure Active Directory terms of use](terms-of-use.md).
### Custom controls (preview)
-Custom controls is a preview capability of the Azure Active Directory. When using custom controls, your users are redirected to a compatible service to satisfy authentication requirements outside of Azure Active Directory. For more information, check out the [Custom controls](controls.md) article.
+Custom controls is a preview capability of Azure AD. When you use custom controls, your users are redirected to a compatible service to satisfy authentication requirements that are separate from Azure AD. For more information, check out the [Custom controls](controls.md) article.
## Next steps
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
All policies are enforced in two phases:
- Use the session details gathered in phase 1 to identify any requirements that haven't been met.
- If there's a policy that is configured to block access, with the block grant control, enforcement will stop here and the user will be blocked.
- The user will be prompted to complete more grant control requirements that weren't satisfied during phase 1 in the following order, until policy is satisfied:
- - [Multi-factor authentication](concept-conditional-access-grant.md#require-multifactor-authentication)
+ - [Multi-factor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication)
 - [Device to be marked as compliant](./concept-conditional-access-grant.md#require-device-to-be-marked-as-compliant)
 - [Hybrid Azure AD joined device](./concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device)
 - [Approved client app](./concept-conditional-access-grant.md#require-approved-client-app)
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
The public preview supports the following scenarios:
- Require user reauthentication during [Intune device enrollment](/mem/intune/fundamentals/deployment-guide-enrollment), regardless of their current MFA status.
- Require user reauthentication for risky users with the [require password change](concept-conditional-access-grant.md#require-password-change) grant control.
-- Require user reauthentication for risky sign-ins with the [require multifactor authentication](concept-conditional-access-grant.md#require-multifactor-authentication) grant control.
+- Require user reauthentication for risky sign-ins with the [require multifactor authentication](concept-conditional-access-grant.md#require-multi-factor-authentication) grant control.
When administrators select **Every time**, it will require full reauthentication when the session is evaluated.
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-gcp.md
Previously updated : 07/18/2022 Last updated : 08/07/2022 #Customer intent: As an application developer, I want to create a trust relationship with a Google Cloud identity so my service in Google Cloud can access Azure AD protected resources without managing secrets.
-# Access Azure AD protected resources from an app in Google Cloud (preview)
+# Access Azure AD protected resources from an app in Google Cloud
Software workloads running in Google Cloud need an Azure Active Directory (Azure AD) application to authenticate and access Azure AD protected resources. A common practice is to configure that application with credentials (a secret or certificate). The credentials are used by a Google Cloud workload to request an access token from Microsoft identity platform. These credentials pose a security risk and have to be stored securely and rotated regularly. You also run the risk of service downtime if the credentials expire.
active-directory Concept Azure Ad Join Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-join-hybrid.md
Hybrid Azure AD joined devices require network line of sight to your on-premises
| **Primary audience** | Suitable for hybrid organizations with existing on-premises AD infrastructure |
| | Applicable to all users in an organization |
| **Device ownership** | Organization |
-| **Operating Systems** | Windows 11, Windows 10 or 8.1 |
+| **Operating Systems** | Windows 11, Windows 10 or 8.1 except Home editions |
| | Windows Server 2008/R2, 2012/R2, 2016, 2019 and 2022 |
| **Provisioning** | Windows 11, Windows 10, Windows Server 2016/2019/2022 |
| | Domain join by IT and autojoin via Azure AD Connect or ADFS config |
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
# What is a Primary Refresh Token?
-A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10 or newer, Windows Server 2016 and later versions, iOS, and Android devices. It is a JSON Web Token (JWT) specially issued to Microsoft first party token brokers to enable single sign-on (SSO) across the applications used on those devices. In this article, we will provide details on how a PRT is issued, used, and protected on Windows 10 or newer devices.
+A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10 or newer, Windows Server 2016 and later versions, iOS, and Android devices. It is a JSON Web Token (JWT) specially issued to Microsoft first party token brokers to enable single sign-on (SSO) across the applications used on those devices. In this article, we will provide details on how a PRT is issued, used, and protected on Windows 10 or newer devices. We recommend using the latest versions of Windows 10, Windows 11 and Windows Server 2019+ to get the best SSO experience.
This article assumes that you already understand the different device states available in Azure AD and how single sign-on works in Windows 10 or newer. For more information about devices in Azure AD, see the article [What is device management in Azure Active Directory?](overview.md)
This article assumes that you already understand the different device states ava
The following Windows components play a key role in requesting and using a PRT:

* **Cloud Authentication Provider** (CloudAP): CloudAP is the modern authentication provider for Windows sign-in that verifies users logging in to a Windows 10 or newer device. CloudAP provides a plugin framework that identity providers can build on to enable authentication to Windows using that identity provider's credentials.
-* **Web Account Manager** (WAM): WAM is the default token broker on Windows 10 or newer devices. WAM also provides a plugin framework that identity providers can build on and enable SSO to their applications relying on that identity provider. (Not included in Windows Server 2016 LTSC builds)
+* **Web Account Manager** (WAM): WAM is the default token broker on Windows 10 or newer devices. WAM also provides a plugin framework that identity providers can build on and enable SSO to their applications relying on that identity provider.
* **Azure AD CloudAP plugin**: An Azure AD specific plugin built on the CloudAP framework, that verifies user credentials with Azure AD during Windows sign in.
* **Azure AD WAM plugin**: An Azure AD specific plugin built on the WAM framework, that enables SSO to applications that rely on Azure AD for authentication.
* **Dsreg**: An Azure AD specific component on Windows 10 or newer, that handles the device registration process for all device states.
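
A quick way to see these components at work is the built-in `dsregcmd` tool, which reports the device registration state and whether a PRT is present. This is an illustrative check rather than part of the PRT flow itself, and the exact output fields can vary by Windows build:

```powershell
# Run on the Windows device. The output includes the device state (AzureAdJoined, DomainJoined)
# and, in the SSO section, whether a PRT is present (for example, "AzureAdPrt : YES").
dsregcmd /status

# Optional: show only the PRT-related lines (field names may vary by Windows build).
dsregcmd /status | Select-String -Pattern "AzureAdPrt"
```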
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
Title: View, add, and remove assignments for an access package in Azure AD entit
description: Learn how to view, add, and remove assignments for an access package in Azure Active Directory entitlement management. documentationCenter: ''-+ editor:
To use Azure AD entitlement management and assign users to access packages, you
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. Click **Assignments** to see a list of active assignments.
+1. Select **Assignments** to see a list of active assignments.
![List of assignments to an access package](./media/entitlement-management-access-package-assignments/assignments-list.png)
-1. Click a specific assignment to see additional details.
+1. Select a specific assignment to see more details.
-1. To see a list of assignments that did not have all resource roles properly provisioned, click the filter status and select **Delivering**.
+1. To see a list of assignments that didn't have all resource roles properly provisioned, select the filter status and select **Delivering**.
You can see additional details on delivery errors by locating the user's corresponding request on the **Requests** page.
-1. To see expired assignments, click the filter status and select **Expired**.
+1. To see expired assignments, select the filter status and select **Expired**.
-1. To download a CSV file of the filtered list, click **Download**.
+1. To download a CSV file of the filtered list, select **Download**.
## View assignments programmatically

### View assignments with Microsoft Graph
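
A hedged sketch of listing assignments with the Microsoft Graph PowerShell module is shown below. The beta endpoint, the filter, and the access package ID are illustrative and should be checked against the current Graph reference before use:

```powershell
# Minimal sketch: list delivered assignments for one access package (placeholder ID).
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

$accessPackageId = "00000000-0000-0000-0000-000000000000"   # placeholder access package ID
$uri = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignments" +
       "?`$filter=accessPackage/id eq '$accessPackageId' and assignmentState eq 'Delivered'" +
       "&`$expand=target"

$response = Invoke-MgGraphRequest -Method GET -Uri $uri
$response.value | ForEach-Object { $_.target.displayName }   # one line per assigned user
```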
In some cases, you might want to directly assign specific users to an access pac
**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. In the left menu, click **Assignments**.
+1. In the left menu, select **Assignments**.
-1. Click **New assignment** to open Add user to access package.
+1. Select **New assignment** to open Add user to access package.
![Assignments - Add user to access package](./media/entitlement-management-access-package-assignments/assignments-add-user.png)
-1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can click **Create new policy** to add a new policy.
+1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can select **Create new policy** to add a new policy.
1. Once you select a policy, you'll be able to Add users to select the users you want to assign this access package to, under the chosen policy.

   > [!NOTE]
   > If you select a policy with questions, you can only assign one user at a time.
-1. Set the date and time you want the selected users' assignment to start and end. If an end date is not provided, the policy's lifecycle settings will be used.
+1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
1. Optionally provide a justification for your direct assignment for record keeping.
-1. If the selected policy includes additional requestor information, click **View questions** to answer them on behalf of the users, then click **Save**.
+1. If the selected policy includes additional requestor information, select **View questions** to answer them on behalf of the users, then select **Save**.
![Assignments - click view questions](./media/entitlement-management-access-package-assignments/assignments-view-questions.png) ![Assignments - questions pane](./media/entitlement-management-access-package-assignments/assignments-questions-pane.png)
-1. Click **Add** to directly assign the selected users to the access package.
+1. Select **Add** to directly assign the selected users to the access package.
- After a few moments, click **Refresh** to see the users in the Assignments list.
+ After a few moments, select **Refresh** to see the users in the Assignments list.
> [!NOTE]
> When assigning users to an access package, administrators will need to verify that the users are eligible for that access package based on the existing policy requirements. Otherwise, the users won't successfully be assigned to the access package. If the access package contains a policy that requires user requests to be approved, users can't be directly assigned to the package without necessary approval(s) from the designated approver(s).
Azure AD Entitlement Management also allows you to directly assign external user
1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package in which you want to add a user.
+1. In the left menu, select **Access packages** and then open the access package in which you want to add a user.
-1. In the left menu, click **Assignments**.
+1. In the left menu, select **Assignments**.
1. Select **New assignment** to open **Add user to access package**.
Azure AD Entitlement Management also allows you to directly assign external user
> - Similarly, if you set your policy to include **All configured connected organizations**, the user's email address must be from one of your configured connected organizations. Otherwise, the user won't be added to the access package.
> - If you wish to add any user to the access package, you'll need to ensure that you select **All users (All connected organizations + any external user)** when configuring your policy.
-1. Set the date and time you want the selected users' assignment to start and end. If an end date is not provided, the policy's lifecycle settings will be used.
-1. Click **Add** to directly assign the selected users to the access package.
-1. After a few moments, click **Refresh** to see the users in the Assignments list.
+1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
+1. Select **Add** to directly assign the selected users to the access package.
+1. After a few moments, select **Refresh** to see the users in the Assignments list.
## Directly assigning users programmatically

### Assign a user to an access package with Microsoft Graph
You can also assign multiple users that are in your directory to an access packa
* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet,
* the object IDs of the target users, either as an array of strings, or as a list of user members returned from the `Get-MgGroupMember` cmdlet.
-For example, if you want to ensure all the users who are currently members of a group also have assignments to an access package, you can use this cmdlet to create requests for those users who don't currently have assignments. Note that this cmdlet will only create assignments; it does not remove assignments for users who are no longer members of a group.
+For example, if you want to ensure all the users who are currently members of a group also have assignments to an access package, you can use this cmdlet to create requests for those users who don't currently have assignments. Note that this cmdlet will only create assignments; it doesn't remove assignments for users who are no longer members of a group.
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"
You can remove an assignment that a user or an administrator had previously requ
**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. In the left menu, click **Assignments**.
+1. In the left menu, select **Assignments**.
-1. Click the check box next to the user whose assignment you want to remove from the access package.
+1. Select the check box next to the user whose assignment you want to remove from the access package.
-1. Click the **Remove** button near the top of the left pane.
+1. Select the **Remove** button near the top of the left pane.
![Assignments - Remove user from access package](./media/entitlement-management-access-package-assignments/remove-assignment-select-remove-assignment.png)
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
Title: Create a new access package in entitlement management - Azure AD
description: Learn how to create a new access package of resources you want to share in Azure Active Directory entitlement management. documentationCenter: ''-+ editor:
An access package enables you to do a one-time setup of resources and policies t
## Overview
-All access packages must be put in a container called a catalog. A catalog defines what resources you can add to your access package. If you don't specify a catalog, your access package will be put into the General catalog. Currently, you can't move an existing access package to a different catalog.
+All access packages must be put in a container called a catalog. A catalog defines what resources you can add to your access package. If you don't specify a catalog, your access package will be put into the general catalog. Currently, you can't move an existing access package to a different catalog.
An access package can be used to assign access to roles of multiple resources that are in the catalog. If you're an administrator or catalog owner, you can add resources to the catalog while creating an access package.
-If you are an access package manager, you cannot add resources you own to a catalog. You are restricted to using the resources available in the catalog. If you need to add resources to a catalog, you can ask the catalog owner.
+If you're an access package manager, you can't add resources you own to a catalog. You're restricted to using the resources available in the catalog. If you need to add resources to a catalog, you can ask the catalog owner.
All access packages must have at least one policy for users to be assigned to the access package. Policies specify who can request the access package and also approval and lifecycle settings. When you create a new access package, you can create an initial policy for users in your directory, for users not in your directory, for administrator direct assignments only, or you can choose to create the policy later.
Here are the high-level steps to create a new access package.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Azure Active Directory** and then click **Identity Governance**.
+1. Select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages**.
+1. In the left menu, select **Access packages**.
-1. Click **New access package**.
+1. Select **New access package**.
![Entitlement management in the Azure portal](./media/entitlement-management-shared/access-packages-list.png)
On the **Basics** tab, you give the access package a name and specify which cata
1. In the **Catalog** drop-down list, select the catalog you want to create the access package in. For example, you might have a catalog owner that manages all the marketing resources that can be requested. In this case, you could select the marketing catalog.
- You will only see catalogs you have permission to create access packages in. To create an access package in an existing catalog, you must be a Global administrator, Identity Governance administrator or User administrator, or you must be a catalog owner or access package manager in that catalog.
+ You'll only see catalogs you have permission to create access packages in. To create an access package in an existing catalog, you must be either a Global administrator, Identity Governance administrator or User administrator, or you must be a catalog owner or access package manager in that catalog.
![Access package - Basics](./media/entitlement-management-access-package-create/basics.png)
- If you are a Global administrator, an Identity Governance administrator, a User administrator, or catalog creator and you would like to create your access package in a new catalog that's not listed, click **Create new catalog**. Enter the Catalog name and description and then click **Create**.
+ If you're a Global administrator, an Identity Governance administrator, a User administrator, or catalog creator and you would like to create your access package in a new catalog that's not listed, select **Create new catalog**. Enter the Catalog name and description and then select **Create**.
- The access package you are creating and any resources included in it will be added to the new catalog. You can also add additional catalog owners later and add attributes to the resources you put in the catalog. Read [Add resource attributes in the catalog](entitlement-management-catalog-create.md#add-resource-attributes-in-the-catalog) to learn more about how to edit the attributes list for a specific catalog resource and the prerequisite roles.
+ The access package you're creating, and any resources included in it, will be added to the new catalog. You can also add additional catalog owners later, and add attributes to the resources you put in the catalog. Read [Add resource attributes in the catalog](entitlement-management-catalog-create.md#add-resource-attributes-in-the-catalog) to learn more about how to edit the attributes list for a specific catalog resource and the prerequisite roles.
-1. Click **Next**.
+1. Select **Next**.
## Resource roles
On the **Resource roles** tab, you select the resources to include in the access
If you're not sure which resource roles to include, you can skip adding resource roles while creating the access package, and then [add resource roles](entitlement-management-access-package-resources.md) after you've created the access package.
-1. Click the resource type you want to add (**Groups and Teams**, **Applications**, or **SharePoint sites**).
+1. Select the resource type you want to add (**Groups and Teams**, **Applications**, or **SharePoint sites**).
1. In the Select pane that appears, select one or more resources from the list. ![Access package - Resource roles](./media/entitlement-management-access-package-create/resource-roles.png)
- If you are creating the access package in the General catalog or a new catalog, you will be able to pick any resource from the directory that you own. You must be at least a Global administrator, a User administrator, or Catalog creator.
+ If you're creating the access package in the General catalog or a new catalog, you'll be able to pick any resource from the directory that you own. You must be at least a Global administrator, a User administrator, or Catalog creator.
- If you are creating the access package in an existing catalog, you can select any resource that is already in the catalog without owning it.
+ If you're creating the access package in an existing catalog, you can select any resource that is already in the catalog without owning it.
- If you are a Global administrator, a User administrator, or catalog owner, you have the additional option of selecting resources you own that are not yet in the catalog. If you select resources not currently in the selected catalog, these resources will also be added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, check the **See all** check box at the top of the Select pane. If you only want to select resources that are currently in the selected catalog, leave the check box **See all** unchecked (default state).
+ If you're a Global administrator, a User administrator, or catalog owner, you have the additional option of selecting resources you own that aren't yet in the catalog. If you select resources not currently in the selected catalog, these resources will also be added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, check the **See all** check box at the top of the Select pane. If you only want to select resources that are currently in the selected catalog, leave the check box **See all** unchecked (default state).
1. Once you've selected the resources, in the **Role** list, select the role you want users to be assigned for the resource. For more information on selecting the appropriate roles for a resource, read [add resource roles](entitlement-management-access-package-resources.md#add-resource-roles). ![Access package - Resource role selection](./media/entitlement-management-access-package-create/resource-roles-role.png)
-1. Click **Next**.
+1. Select **Next**.
>[!NOTE]
>You can add dynamic groups to a catalog and to an access package. However, you will be able to select only the Owner role when managing a dynamic group resource in an access package.
On the **Review + create** tab, you can review your settings and check for any v
![Access package - Enable policy setting](./media/entitlement-management-access-package-create/review-create.png)
-1. Click **Create** to create the access package.
+1. Select **Create** to create the access package.
The new access package appears in the list of access packages.
On the **Review + create** tab, you can review your settings and check for any v
You can also create an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to
-1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog.
-1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role, when subsequently creating an accessPackageResourceRoleScope.
+1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that aren't yet in the catalog.
+1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role, when later creating an accessPackageResourceRoleScope.
1. [Create an accessPackage](/graph/tutorial-access-package-api).
1. [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-accesspackageassignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for each policy needed in the access package.
1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package.
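
As a hedged illustration of the "Create an accessPackage" step, the following sketch creates an access package in an existing catalog through the beta endpoint. The catalog ID, display name, and description are placeholders, and the request body should be checked against the current Graph reference before use:

```powershell
# Minimal sketch (beta endpoint, placeholder IDs): create an access package in an existing catalog.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$body = @{
    catalogId   = "11111111-1111-1111-1111-111111111111"   # placeholder catalog ID
    displayName = "Example access package"                 # hypothetical name
    description = "Created through Microsoft Graph"
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages" `
    -Body ($body | ConvertTo-Json) -ContentType "application/json"
```

The response contains the new access package's `id`, which the later policy and resource role scope calls would reference.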
active-directory Entitlement Management Access Package Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md
Title: Hide or delete access package in entitlement management - Azure AD
description: Learn how to hide or delete an access package in Azure Active Directory entitlement management. documentationCenter: ''-+ editor:
# Hide or delete an access package in Azure AD entitlement management
-Access packages are discoverable by default. This means that if a policy allows a user to request the access package, they will automatically see the access package listed in their My Access portal. However, you can change the **Hidden** setting so that the access package is not listed in user's My Access portal.
+When you create access packages, they're discoverable by default. This means that if a policy allows a user to request the access package, they'll automatically see the access package listed in their My Access portal. However, you can change the **Hidden** setting so that the access package isn't listed in the user's My Access portal.
This article describes how to hide or delete an access package.
Follow these steps to change the **Hidden** setting for an access package.
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. On the Overview page, click **Edit**.
+1. On the Overview page, select **Edit**.
1. Set the **Hidden** setting. If set to **No**, the access package will be listed in the user's My Access portal.
- If set to **Yes**, the access package will not be listed in the user's My Access portal. The only way a user can view the access package is if they have the direct **My Access portal link** to the access package. For more information, see [Share link to request an access package](entitlement-management-access-package-settings.md).
+ If set to **Yes**, the access package won't be listed in the user's My Access portal. The only way a user can view the access package is if they have the direct **My Access portal link** to the access package. For more information, see [Share link to request an access package](entitlement-management-access-package-settings.md).
## Delete an access package
An access package can only be deleted if it has no active user assignments. Foll
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
-1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. In the left menu, click **Assignments** and remove access for all users.
+1. In the left menu, select **Assignments** and remove access for all users.
-1. In the left menu, click **Overview** and then click **Delete**.
+1. In the left menu, select **Overview** and then select **Delete**.
-1. In the delete message that appears, click **Yes**.
+1. In the delete message that appears, select **Yes**.
## Next steps
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
Title: Tutorial - Manage access to resources in Azure AD entitlement management
description: Step-by-step tutorial for how to create your first access package using the Azure portal in Azure Active Directory entitlement management. documentationCenter: ''-+ editor: markwahl-msft
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
Title: Configure separation of duties for an access package in Azure AD entitlem
description: Learn how to configure separation of duties enforcement for requests for an access package in Azure Active Directory entitlement management. documentationCenter: ''-+ editor:
# Configure separation of duties checks for an access package in Azure AD entitlement management
-In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that will need access through an access package. For example, employees might only need manager approval to get access to certain apps, but guests coming in from other organizations may require both a sponsor and a resource team departmental manager to approve. In a policy for users already in the directory, you can specify a particular group of users for who can request access. However, you may have a requirement to avoid a user obtaining excessive access. To meet this requirement, you will want to further restrict who can request access, based on the access the requestor already has.
+In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that will need access through an access package. For example, employees might only need manager approval to get access to certain apps, but guests coming in from other organizations may require both a sponsor and a resource team departmental manager to approve. In a policy for users already in the directory, you can specify a particular group of users who can request access. However, you may have a requirement to avoid a user obtaining excessive access. To meet this requirement, you'll want to further restrict who can request access, based on the access the requestor already has.
-With the separation of duties settings on an access package, you can configure that a user who is a member of a group or who already has an assignment to one access package cannot request an additional access package.
+With the separation of duties settings on an access package, you can configure that a user who is a member of a group or who already has an assignment to one access package can't request an additional access package.
![myaccess experience for attempting to request incompatible access](./media/entitlement-management-access-package-incompatible/request-prevented.png)
Similarly, you may have an application with two roles - **Western Sales** and **
- the **Western Territory** access package has the **Eastern Territory** package as incompatible, and
- the **Eastern Territory** access package has the **Western Territory** package as incompatible.
-If youΓÇÖve been using Microsoft Identity Manager or other on-premises identity management systems for automating access for on-premises apps, then you can integrate these systems with Azure AD entitlement management as well. If you will be controlling access to Azure AD-integrated apps through entitlement management, and want to prevent users from having incompatible access, you can configure that an access package is incompatible with a group. That could be a group, which your on-premises identity management system sends into Azure AD through Azure AD Connect. This check ensures a user will be unable to request an access package, if that access package would give access that's incompatible with access the user has in on-premises apps.
+If you've been using Microsoft Identity Manager or other on-premises identity management systems for automating access for on-premises apps, then you can integrate these systems with Azure AD entitlement management as well. If you'll be controlling access to Azure AD-integrated apps through entitlement management, and want to prevent users from having incompatible access, you can configure that an access package is incompatible with a group. That could be a group, which your on-premises identity management system sends into Azure AD through Azure AD Connect. This check ensures a user will be unable to request an access package, if that access package would give access that's incompatible with access the user has in on-premises apps.
## Prerequisites
Follow these steps to change the list of incompatible groups or other access pac
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Azure Active Directory**, and then click **Identity Governance**.
+1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package which users will request.
+1. In the left menu, select **Access packages** and then open the access package which users will request.
-1. In the left menu, click **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
-1. If you wish to prevent users who have another access package assignment already from requesting this access package, click on **Add access package** and select the access package that the user would already be assigned.
+1. If you wish to prevent users who already have another access package assignment from requesting this access package, select **Add access package** and select the access package that the user would already be assigned.
![configuration of incompatible access packages](./media/entitlement-management-access-package-incompatible/select-incompatible-ap.png)
-1. If you wish to prevent users who have an existing group membership from requesting this access package, then click on **Add group** and select the group that the user would already be in.
+1. If you wish to prevent users who have an existing group membership from requesting this access package, then select **Add group** and select the group that the user would already be in.
### Configure incompatible access packages programmatically
You can also configure the groups and other access packages that are incompatibl
**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
-Follow these steps to view the list of other access packages that have indicated that they are incompatible with an existing access package:
+Follow these steps to view the list of other access packages that have indicated that they're incompatible with an existing access package:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Azure Active Directory**, and then click **Identity Governance**.
+1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. In the left menu, click **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
-1. Click on **Incompatible With**.
+1. Select **Incompatible With**.
## Identifying users who already have incompatible access to another access package
-If you are configuring incompatible access settings on an access package that already has users assigned to it, then any of those users who also have an assignment to the incompatible access package or groups will not be able to re-request access.
+If you're configuring incompatible access settings on an access package that already has users assigned to it, then any of those users who also have an assignment to the incompatible access package or groups won't be able to re-request access.
**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
Follow these steps to view the list of users who have assignments to two access
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Click **Azure Active Directory**, and then click **Identity Governance**.
+1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package where you will be configuring incompatible assignments.
+1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments.
-1. In the left menu, click **Assignments**.
+1. In the left menu, select **Assignments**.
1. In the **Status** field, ensure that **Delivered** status is selected.
-1. Click the **Download** button and save the resulting CSV file as the first file with a list of assignments.
+1. Select the **Download** button and save the resulting CSV file as the first file with a list of assignments.
-1. In the navigation bar, click **Identity Governance**.
+1. In the navigation bar, select **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package which you plan to indicate as incompatible.
+1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible.
-1. In the left menu, click **Assignments**.
+1. In the left menu, select **Assignments**.
1. In the **Status** field, ensure that the **Delivered** status is selected.
-1. Click the **Download** button and save the resulting CSV file as the second file with a list of assignments.
+1. Select the **Download** button and save the resulting CSV file as the second file with a list of assignments.
1. Use a spreadsheet program such as Excel to open the two files.
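
If you'd rather compare the two downloads without a spreadsheet, a minimal PowerShell sketch such as the following can list the users that appear in both files. The file paths and the `UserPrincipalName` column name are assumptions; adjust them to match the headers in your exported CSV files:

```powershell
# Minimal sketch: list users that appear in both downloaded assignment files.
# File paths and the UserPrincipalName column name are assumptions; match them to your CSV headers.
$first  = Import-Csv -Path .\first-access-package-assignments.csv
$second = Import-Csv -Path .\second-access-package-assignments.csv

# Rows present in both files indicate users assigned to both access packages.
Compare-Object $first.UserPrincipalName $second.UserPrincipalName -IncludeEqual -ExcludeDifferent |
    Select-Object -ExpandProperty InputObject
```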
foreach ($w in $apa_w) { if ($null -ne $w.Target -and $null -ne $w.Target.Id -an
## Configuring multiple access packages for override scenarios
-If an access package has been configured as incompatible, then a user who has an assignment to that incompatible access package cannot request the access package, nor can an administrator make a new assignment that would be incompatible.
+If an access package has been configured as incompatible, then a user who has an assignment to that incompatible access package can't request the access package, nor can an administrator make a new assignment that would be incompatible.
-For example, if the **Production environment** access package has marked the **Development environment** package as incompatible, and a user has an assignment to the **Development environment** access package, then the access package manager for **Production environment** cannot create an assignment for that user to the **Production environment**. In order to proceed with that assignment, the user's existing assignment to the **Development environment** access package must first be removed.
+For example, if the **Production environment** access package has marked the **Development environment** package as incompatible, and a user has an assignment to the **Development environment** access package, then the access package manager for **Production environment** can't create an assignment for that user to the **Production environment**. In order to proceed with that assignment, the user's existing assignment to the **Development environment** access package must first be removed.
-If there is an exceptional situation where separation of duties rules might need to be overridden, then configuring an additional access package to capture the users who have overlapping access rights will make it clear to the approvers, reviewers, and auditors the exceptional nature of those assignments.
+If there's an exceptional situation where separation of duties rules might need to be overridden, then configuring an additional access package to capture the users who have overlapping access rights will make it clear to the approvers, reviewers, and auditors the exceptional nature of those assignments.
For example, if there was a scenario that some users would need to have access to both production and deployment environments at the same time, you could create a new access package **Production and development environments**. That access package could have as its resource roles some of the resource roles of the **Production environment** access package and some of the resource roles of the **Development environment** access package.
Depending on your governance processes, that combined access package could have
- a **direct assignments policy**, so that only an access package manager would be interacting with the access package, or - a **users can request access policy**, so that a user can request, with potentially an additional approval stage
-This policy could have as its lifecycle settings a much shorter expiration number of days than a policy on other access packages, or require more frequent access reviews, with regular oversight so that users do not retain access longer than necessary.
+This policy could have as its lifecycle settings a much shorter expiration number of days than a policy on other access packages, or require more frequent access reviews, with regular oversight so that users don't retain access longer than necessary.
## Monitor and report on access assignments
You can use Azure Monitor workbooks to get insights on how users have been recei
![View access package events](./media/entitlement-management-logs-and-reporting/view-events-access-package.png)
-1. To see if there have been changes to application role assignments for an application that were not created due to access package assignments, then you can select the workbook named *Application role assignment activity*. If you select to omit entitlement activity, then only changes to application roles that were not made by entitlement management are shown. For example, you would see a row if a global administrator had directly assigned a user to an application role.
+1. To see if there have been changes to application role assignments for an application that weren't created due to access package assignments, then you can select the workbook named *Application role assignment activity*. If you select to omit entitlement activity, then only changes to application roles that weren't made by entitlement management are shown. For example, you would see a row if a global administrator had directly assigned a user to an application role.
![View app role assignments](./media/entitlement-management-access-package-incompatible/workbook-ara.png)
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-staging-server.md
The export.csv file contains all changes that are about to be exported. Each row
4. You now have a file named **processedusers1.csv** that can be examined in Microsoft Excel. Note that the file provides a mapping from the DN attribute to common identifiers (for example, displayName and userPrincipalName). It currently does not include the actual attribute changes that are about to be exported.

#### Switch active server
-1. On the currently active server, either turn off the server (DirSync/FIM/Azure AD Sync) so it is not exporting to Azure AD or set it in staging mode (Azure AD Connect).
-2. Run the installation wizard on the server in **staging mode** and disable **staging mode**.
- ![ReadyToConfigure](./media/how-to-connect-sync-staging-server/additionaltasks.png)
+Azure AD Connect can be set up in an Active-Passive high-availability configuration, where one server actively pushes changes for the synced AD objects to Azure AD and the passive server stages those changes so that it can take over if needed.
+
+> [!NOTE]
+> Azure AD Connect can't be set up in an Active-Active configuration; it must be Active-Passive. Ensure that only one Azure AD Connect server is actively syncing changes at any time.
+
+For more information on setting up an Azure AD Connect sync server in staging mode, see [staging mode](how-to-connect-sync-staging-server.md).
+
+You might need to fail over your sync servers for several reasons, such as upgrading the version of Azure AD Connect or receiving an alert that the Sync Service health service isn't receiving up-to-date information. In these cases, you can fail over the sync servers by following the steps below.
+
+#### Prerequisites
+
+- One currently active Azure AD Connect Sync Server
+- One staging Azure AD Connect Sync Server
+
+#### Change the currently active Sync Server to Staging Mode
+
+Ensure that only one Sync Server is syncing changes at any given time throughout this process. If the currently active Sync Server is reachable, perform the steps below to move it to Staging Mode. If it isn't reachable, make sure the server or VM can't unexpectedly come back online, either by shutting it down or by isolating it from outbound connections, and then proceed to the steps for changing the Staging Sync Server to Active Mode.
+
+1. On the currently active Azure AD Connect server, open the Azure AD Connect console and click **Configure staging mode**, then **Next**:
+[Insert Image: "active_server_menu.png"]
+
+2. You'll need to sign in to Azure AD with Global Administrator or Hybrid Identity Administrator credentials:
+[Insert Image: "active_server_sign_in.png"]
+
+3. Tick the box for Staging Mode and click Next:
+[Insert Image: "active_server_staging_mode.png"]
+
+4. The Azure AD Connect server will check for installed components and then prompt you to choose whether to start the sync process:
+[Insert Image: "active_server_config.png"]
+Because the server will be in Staging Mode, it won't write changes to Azure AD, but it will retain any changes to the AD in its connector space, ready to write them later.
+It's recommended to leave the sync process on for the server in Staging Mode, so that if it becomes active it can quickly take over without having to run a large sync to catch up to the current state of the AD/Azure AD sync.
+
+5. After selecting whether to start or stop the sync process and clicking **Configure**, the Azure AD Connect server will configure itself into Staging Mode.
+When this completes, a screen confirms that Staging Mode is enabled.
+Click **Exit** to finish.
+
+6. You can confirm that the server is in Staging Mode by opening the Synchronization Service console.
+From here, there should be no more Export jobs after the change, and the Full and Delta Import jobs will be suffixed with "(Stage Only)", as shown below:
+[Insert Image "active_server_sync_server_mgmr.png"]
+
+#### Change the Staging Sync Server to Active Mode
+
+At this point, all of your Azure AD Connect Sync Servers should be in Staging Mode and not exporting changes.
+You can now move the Staging Sync Server to Active Mode so that it actively syncs changes.
+
+1. Now move to the Azure AD Connect server that was originally in Staging Mode and open the Azure AD Connect console.
+Click on "Configure staging mode" and click Next:
+[Insert Image: "staging_server_menu.png"]
+Note the message at the bottom of the Console that indicates this server is in Staging Mode.
+
+2. Sign in to Azure AD, then go to the Staging Mode screen.
+Untick the box for Staging Mode and click **Next**:
+[Insert Image: "staging_server_staging_mode.png"]
+As per the warning on this page, it is important to ensure no other Azure AD Connect server is actively syncing.
+There should only be one active Azure AD Connect sync server at any time.
+
+3. When you are prompted to start the sync process, tick this box and click Configure:
+[Insert Image: "staging_server_config.png"]
+
+4. Once the process is finished, you should see the confirmation screen below, where you can click **Exit** to finish:
+[Insert Image: "staging_server_confirmation.png"]
+
+5. You can confirm that the server is now active by opening the Synchronization Service console and checking whether Export jobs are running:
+[Insert Image: "staging_server_sync_server_mgmr.png"]
## Disaster recovery Part of the implementation design is to plan for what to do in case there is a disaster where you lose the sync server. There are different models to use and which one to use depends on several factors including:
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
\* A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has 0 Global Administrators.
+> [!NOTE]
+> The ability to reset a password includes the ability to update the following sensitive attributes required for [self-service password reset](../authentication/concept-sspr-howitworks.md):
+> - businessPhones
+> - mobilePhone
+> - otherMails
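+
+For illustration, an administrator with these permissions could update one of these attributes through Microsoft Graph. The following is a hedged sketch; the user ID, access token, and phone number are placeholders:
+
+```bash
+# Update the mobilePhone attribute for a user (requires a role that can update sensitive attributes)
+curl -X PATCH "https://graph.microsoft.com/v1.0/users/<user-id>" \
+  -H "Authorization: Bearer <access-token>" \
+  -H "Content-Type: application/json" \
+  -d '{"mobilePhone": "+1 425 555 0109"}'
+```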
+ ## Who can update sensitive attributes Some administrators can update the following sensitive attributes for some users. All users can read these sensitive attributes.
active-directory Ideagen Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Ideagen Cloud for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Ideagen Cloud.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 9d86a706-03d3-4a7e-b76b-2197d6641af4
+++
+ms.devlang: na
+ Last updated : 08/08/2022+++
+# Tutorial: Configure Ideagen Cloud for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Ideagen Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Ideagen Cloud](https://www.ideagen.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Ideagen Cloud.
+> * Remove users in Ideagen Cloud when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Ideagen Cloud.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* The Tenant URL and Secret Token.
+* Global Administrative rights for the Active Directory.
+* Access rights to set up Enterprise applications.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Ideagen Cloud](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Ideagen Cloud to support provisioning with Azure AD
+1. Log in to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click the **Administration** icon to show the left-hand menu.
+
+ ![Screenshot of administration menu.](media\ideagen-cloud-provisioning-tutorial\admin.png)
+
+2. Navigate to the **Authentication** page under the **Manage tenant** submenu.
+
+ ![Screenshot of authentication page.](media\ideagen-cloud-provisioning-tutorial\authentication.png)
+
+3. Scroll down on the Authentication page to the **Client Token** section and click **Regenerate**.
+
+ ![Screenshot of token generation.](media\ideagen-cloud-provisioning-tutorial\generate-token.png)
+
+4. **Copy** and save the bearer token. This value will be entered in the **Secret Token** field in the Provisioning tab of your Ideagen Cloud application in the Azure portal.
+
+ ![Screenshot of copying token.](media\ideagen-cloud-provisioning-tutorial\copy-token.png)
+
+## Step 3. Add Ideagen Cloud from the Azure AD application gallery
+
+Add Ideagen Cloud from the Azure AD application gallery to start managing provisioning to Ideagen Cloud. If you have previously set up Ideagen Cloud for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to Ideagen Cloud
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Ideagen Cloud based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Ideagen Cloud in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Ideagen Cloud**.
+
+ ![Screenshot of the Ideagen Cloud link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab,](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Ideagen Cloud Tenant URL and corresponding Secret Token. Click **Test Connection** to ensure Azure AD can connect to Ideagen Cloud. If the connection fails, ensure your Ideagen Cloud account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Ideagen Cloud**.
+
+1. Review the user attributes that are synchronized from Azure AD to Ideagen Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Ideagen Cloud for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Ideagen Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Ideagen Cloud|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |displayName|String||&check;
+ |title|String|
+ |emails[type eq "work"].value|String||&check;
+ |preferredLanguage|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |externalId|String||&check;
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Ideagen Cloud, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Ideagen Cloud by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Smarteru Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smarteru-tutorial.md
# Tutorial: Azure Active Directory integration with SmarterU > [!NOTE]
-> The process for integrating SmarterU with Azure Active Directory is also documented and maintained in the [SmarterU help system](https://help.smarteru.com/ID2053086).
+> The process for integrating SmarterU with Azure Active Directory is also documented and maintained in the [SmarterU help system](https://support.smarteru.com/docs/sso-azure-active-directory).
In this tutorial, you'll learn how to integrate SmarterU with Azure Active Directory (Azure AD). When you integrate SmarterU with Azure AD, you can:
active-directory Speexx Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/speexx-tutorial.md
Previously updated : 03/28/2022 Last updated : 08/08/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, perform the following steps:-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://portal.speexx.com/auth/saml/<customername>`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://portal.speexx.com/auth/saml/<customername>/adfs/postResponse`
-
- c. In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://portal.speexx.com/auth/saml/<customername>`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Speexx Client support team](mailto:support@speexx.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ 1. In the **Identifier** text box, type a URL using the following pattern: `https://portal.speexx.com/auth/saml/<customername>`
+ 1. In the **Reply URL** text box, type a URL using the following pattern: `https://portal.speexx.com/auth/saml/<customername>/adfs/postResponse`
+ 1. In the **Sign-on URL** text box, type a URL using the following pattern: `https://portal.speexx.com/auth/saml/<customername>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Speexx Client support team](mailto:support@speexx.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Speexx you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Speexx you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Az
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage (preview) in an Azure Kubernetes Service (AKS) cluster. Previously updated : 07/21/2022 Last updated : 08/08/2022
Mounting Azure Blob storage as a file system into a container or pod, enables yo
* Images, documents, and streaming video or audio * Disaster recovery data
-The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver (preview), the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver (preview) is enabled on AKS, there are two built-in storage classes: *blob-fuse* and *blob-nfs*.
+The data on the object storage can be accessed by applications using BlobFuse or Network File System (NFS) 3.0 protocol. Before the introduction of the Azure Blob storage CSI driver (preview), the only option was to manually install an unsupported driver to access Blob storage from your application running on AKS. When the Azure Blob storage CSI driver (preview) is enabled on AKS, there are two built-in storage classes: *azureblob-fuse-premium* and *azureblob-nfs-premium*.
To create an AKS cluster with CSI drivers support, see [CSI drivers on AKS][csi-drivers-aks]. To learn more about the differences in access between each of the Azure storage types using the NFS protocol, see [Compare access to Azure Files, Blob Storage, and Azure NetApp Files with NFS][compare-access-with-nfs].
Azure Blob storage CSI driver (preview) supports the following features:
### Uninstall open-source driver
-Perform the following steps if you previously installed the [CSI Blob Storage open-source driver][csi-blob-storage-open-source-driver] to access Azure Blob storage from your cluster.
-
-1. Copy the following Shell script and create a file named `uninstall-driver.sh`:
-
- ```bash
- # Copyright 2020 The Kubernetes Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- set -euo pipefail
-
- ver="master"
- if [[ "$#" -gt 0 ]]; then
- ver="$1"
- fi
-
- repo="https://raw.githubusercontent.com/kubernetes-sigs/blob-csi-driver/$ver/deploy"
- if [[ "$#" -gt 1 ]]; then
- if [[ "$2" == *"local"* ]]; then
- echo "use local deploy"
- repo="./deploy"
- fi
- fi
-
- if [ $ver != "master" ]; then
- repo="$repo/$ver"
- fi
-
- echo "Uninstalling Azure Blob Storage CSI driver, version: $ver ..."
- kubectl delete -f $repo/csi-blob-controller.yaml --ignore-not-found
- kubectl delete -f $repo/csi-blob-node.yaml --ignore-not-found
- kubectl delete -f $repo/csi-blob-driver.yaml --ignore-not-found
- kubectl delete -f $repo/rbac-csi-blob-controller.yaml --ignore-not-found
- kubectl delete -f $repo/rbac-csi-blob-node.yaml --ignore-not-found
- echo 'Uninstalled Azure Blob Storage CSI driver successfully.'
- ```
-
-2. Run the script using the following command:
-
- ```bash
- ./uninstall-driver.sh
- ```
+Perform the steps in this [link][csi-blob-storage-open-source-driver-uninstall-steps] if you previously installed the [CSI Blob Storage open-source driver][csi-blob-storage-open-source-driver] to access Azure Blob storage from your cluster.
## Install the Azure CLI aks-preview extension
You're prompted to confirm there isn't an open-source Blob CSI driver installed.
"blobCsiDriver": { "enabled": true },
- "diskCsiDriver": {
- "enabled": true,
- "version": "v1"
- },
``` ## Disable CSI driver on an existing AKS cluster
When you use storage CSI drivers on AKS, there are two additional built-in Stora
The reclaim policy on both storage classes ensures that the underlying Azure Blob storage is deleted when the respective PV is deleted. The storage classes also configure the container to be expandable by default, as the `set allowVolumeExpansion` parameter is set to **true**.
-Use the [kubectl get sc][kubectl-get] command to see the storage classes. The following example shows the `blob-fuse` and `blob-nfs` storage classes available within an AKS cluster:
+Use the [kubectl get sc][kubectl-get] command to see the storage classes. The following example shows the `azureblob-fuse-premium` and `azureblob-nfs-premium` storage classes available within an AKS cluster:
```bash NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
To have a storage volume persist for your workload, you can use a StatefulSet. T
- metadata: name: persistent-storage annotations:
- volume.beta.kubernetes.io/storage-class: blob-nfs
+ volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
spec: accessModes: ["ReadWriteMany"] resources:
To have a storage volume persist for your workload, you can use a StatefulSet. T
- metadata: name: persistent-storage annotations:
- volume.beta.kubernetes.io/storage-class: blob-fuse
+ volume.beta.kubernetes.io/storage-class: azureblob-fuse-premium
spec: accessModes: ["ReadWriteMany"] resources:
To have a storage volume persist for your workload, you can use a StatefulSet. T
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/ [csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md [csi-blob-storage-open-source-driver]: https://github.com/kubernetes-sigs/blob-csi-driver
+[csi-blob-storage-open-source-driver-uninstall-steps]: https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-csi-driver-master.md#clean-up-blob-csi-driver
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli
To have a storage volume persist for your workload, you can use a StatefulSet. T
[use-tags]: use-tags.md [az-tags]: ../azure-resource-manager/management/tag-resources.md [azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md
-[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
+[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS)
description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims Previously updated : 03/30/2022 Last updated : 03/08/2022
Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks types includ
> [!TIP] >For most production and development workloads, use Premium SSD.
-Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
+Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single node. For storage volumes that can be accessed by pods on multiple nodes simultaneously, use Azure Files.
### Azure Files Use *Azure Files* to mount an SMB 3.1.1 share or NFS 4.1 share backed by an Azure storage accounts to pods. Files let you share data across multiple nodes and pods and can use:
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
There are two options for adding the NVIDIA device plugin:
### Update your cluster to use the AKS GPU image (preview)
-AKS provides is providing a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
+AKS provides a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
Register the `GPUDedicatedVHDPreview` feature:
az aks nodepool add \
--max-count 3 ```
-The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the nodes in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, specifies a specialized AKS GPU image nodes on your new node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
+The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the nodes in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, specifies the specialized AKS GPU image for the nodes in your new node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
> [!NOTE] > A taint and VM size can only be set for node pools during node pool creation, but the autoscaler settings can be updated at any time.
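
For example, a hedged sketch of updating the autoscaler settings on the node pool created above (the resource group, cluster, and node pool names follow the earlier example; the new counts are illustrative):

```azurecli-interactive
# Adjust the cluster autoscaler range on the existing GPU node pool
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name gpunp \
    --update-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```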
For more information about running machine learning (ML) workloads on Kubernetes
For information on using Azure Kubernetes Service with Azure Machine Learning, see the following articles:
-* [Deploy a model to Azure Kubernetes Service][azureml-aks].
-* [Deploy a deep learning model for inference with GPU][azureml-gpu].
+* [Configure a Kubernetes cluster for ML model training or deployment][azureml-aks].
+* [Deploy a model with an online endpoint][azureml-deploy].
* [High-performance serving with Triton Inference Server][azureml-triton]. <!-- LINKS - external -->
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[aks-spark]: spark-job.md [gpu-skus]: ../virtual-machines/sizes-gpu.md [install-azure-cli]: /cli/azure/install-azure-cli
-[azureml-aks]: ../machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
-[azureml-gpu]: ../machine-learning/how-to-deploy-inferencing-gpus.md
+[azureml-aks]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
+[azureml-deploy]: ../machine-learning/how-to-deploy-managed-online-endpoints.md
[azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md [aks-container-insights]: monitor-aks.md#container-insights
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
alertmanager-config 1 20s
NOTES: To verify that openfaas has started, run:
-```console
kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas" ```
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
aks Use Psa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-psa.md
+
+ Title: Use Pod Security Admission in Azure Kubernetes Service (AKS)
+description: Learn how to enable and use Pod Security Admission with Azure Kubernetes Service (AKS).
++ Last updated : 08/08/2022+++
+# Use Pod Security Admission in Azure Kubernetes Service (AKS)
+
+Pod Security Admission enforces Pod Security Standards policies on pods running in a namespace. Pod Security Admission is enabled by default in AKS and is controlled by adding labels to a namespace. For more information about Pod Security Admission, see [Enforce Pod Security Standards with Namespace Labels][kubernetes-psa]. For more information about the Pod Security Standards used by Pod Security Admission, see [Pod Security Standards][kubernetes-pss].
+
+## Before you begin
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+- An existing AKS cluster running Kubernetes version 1.23 or higher.
+
+## Enable Pod Security Admission for a namespace in your cluster
+
+To enable PSA for a namespace in your cluster, set the `pod-security.kubernetes.io/enforce` label with the policy value you want to enforce. For example:
+
+```azurecli-interactive
+kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
+```
+
+The above command enforces the `restricted` policy for the *NAMESPACE* namespace.
+
+You can also enable Pod Security Admission for all your namespaces. For example:
+
+```azurecli-interactive
+kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
+```
+
+The above example will generate a user-facing warning if any pods that don't meet the `baseline` policy are deployed to any namespace.
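+
+To confirm which Pod Security Admission labels are applied to your namespaces, you can list the namespaces with their labels, for example:
+
+```azurecli-interactive
+# Show all namespaces and the pod-security.kubernetes.io/* labels applied to them
+kubectl get namespaces --show-labels
+```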
+
+## Example of enforcing a Pod Security Admission policy with a deployment
+
+Create two namespaces, one with the `restricted` policy and one with the `baseline` policy.
+
+```azurecli-interactive
+kubectl create namespace test-restricted
+kubectl create namespace test-privileged
+kubectl label --overwrite ns test-restricted pod-security.kubernetes.io/enforce=restricted pod-security.kubernetes.io/warn=restricted
+kubectl label --overwrite ns test-privileged pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/warn=privileged
+```
+
+Both the `test-restricted` and `test-privileged` namespaces will block pods that don't meet the configured policy from running, and will generate a user-facing warning for them.
+
+Attempt to deploy pods to the `test-restricted` namespace.
+
+```azurecli-interactive
+kubectl apply --namespace test-restricted -f https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml
+```
+
+Notice you get a warning that the pods violate the configured policy.
+
+```output
+...
+Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "azure-vote-back" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "azure-vote-back" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "azure-vote-back" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "azure-vote-back" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
+deployment.apps/azure-vote-back created
+service/azure-vote-back created
+Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "azure-vote-front" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "azure-vote-front" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "azure-vote-front" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "azure-vote-front" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
+deployment.apps/azure-vote-front created
+service/azure-vote-front created
+```
+
+Confirm there are no pods running in the `test-restricted` namespace.
+
+```azurecli-interactive
+kubectl get pods --namespace test-restricted
+```
+
+```output
+$ kubectl get pods --namespace test-restricted
+No resources found in test-restricted namespace.
+```
+
+Attempt to deploy pods to the `test-privileged` namespace.
+
+```azurecli-interactive
+kubectl apply --namespace test-privileged -f https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml
+```
+
+Notice there are no warnings about pods not meeting the configured policy.
+
+Confirm you have pods running in the `test-privileged` namespace.
+
+```azurecli-interactive
+kubectl get pods --namespace test-privileged
+```
+
+```output
+$ kubectl get pods --namespace test-privileged
+NAME READY STATUS RESTARTS AGE
+azure-vote-back-6fcdc5cbd5-svbdf 1/1 Running 0 2m29s
+azure-vote-front-5f4b8d498-tqzwv 1/1 Running 0 2m28s
+```
+
+Delete both the `test-restricted` and `test-privileged` namespaces.
+
+```azurecli-interactive
+kubectl delete namespace test-restricted test-privileged
+```
+
+## Next steps
+
+In this article, you learned how to enable Pod Security Admission in an AKS cluster. For more information about Pod Security Admission, see [Enforce Pod Security Standards with Namespace Labels][kubernetes-psa]. For more information about the Pod Security Standards used by Pod Security Admission, see [Pod Security Standards][kubernetes-pss].
+
+<!-- LINKS - external -->
+[kubernetes-psa]: https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/
+[kubernetes-pss]: https://kubernetes.io/docs/concepts/security/pod-security-standards/
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
securityContext:
> [!WARNING] > Running the self-hosted gateway with read-only filesystem (`readOnlyRootFilesystem: true`) is not supported.
+## Assessing impact with Azure Advisor
+
+To make the migration easier, we've introduced new Azure Advisor recommendations:
+
+- **Use self-hosted gateway v2** recommendation - Identifies Azure API Management instances where the usage of self-hosted gateway v0.x or v1.x was identified.
+- **Use Configuration API v2 for self-hosted gateways** recommendation - Identifies Azure API Management instances where the usage of Configuration API v1 for self-hosted gateway was identified.
+
+We highly recommend that customers use the ["All Recommendations" overview in Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/~/All) to determine whether a migration is required. Use the filtering options to see whether one of the above recommendations is present.
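+
+If you prefer the Azure CLI, you can also list Advisor recommendations and look for the self-hosted gateway items. This is a hedged sketch; the property path and filter text are assumptions, and the exact recommendation wording may differ:
+
+```azurecli
+# List all Advisor recommendations for the current subscription
+az advisor recommendation list --output table
+
+# Optionally narrow the output to recommendations that mention the self-hosted gateway
+az advisor recommendation list --query "[?contains(to_string(shortDescription.problem), 'self-hosted gateway')]" --output json
+```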
+ ## Known limitations Here's a list of known limitations for the self-hosted gateway v2:
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
As of v2.1.1 and above, you can manage the ciphers that are being used through t
## Next steps
+- Learn more about the various gateways in our [API gateway overview](api-management-gateways-overview.md)
- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management) - Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md) - [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
When no longer needed, you can delete the resource group, App service, and all r
1. From the *resource group* page, select **Delete resource group**. Confirm the name of the resource group to finish deleting the resources. :::image type="content" source="./media/quickstart-wordpress/delete-resource-group.png" alt-text="Delete resource group.":::
-## MySQL password
+## Change MySQL password
-The [Application Settings](reference-app-settings.md#wordpress) for MySQL database credentials are used by WordPress to connect to the MySQL database. To change the MySQL database password, see [update admin password](/azure/mysql/single-server/how-to-create-manage-server-portal#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
+The WordPress configuration is modified to use [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. To change the MySQL database password, see [update admin password](/azure/mysql/single-server/how-to-create-manage-server-portal#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
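+
+For example, after changing the database password, you could update the corresponding app setting from the Azure CLI. This is a hedged sketch: the resource group and app names are placeholders, and it assumes the relevant setting is named `DATABASE_PASSWORD`, so list the existing `DATABASE_`-prefixed settings first to confirm the exact name:
+
+```azurecli
+# Confirm the names of the DATABASE_ settings used by the WordPress app
+az webapp config appsettings list --resource-group <resource-group> --name <app-name> --output table
+
+# Update the password setting so WordPress can reconnect to the MySQL database
+az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings DATABASE_PASSWORD="<new-password>"
+```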
-## WordPress admin password
+## Change WordPress admin password
The [Application Settings](reference-app-settings.md#wordpress) for WordPress admin credentials are only for deployment purposes. Modifying these values has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password). The [Application Settings for WordPress admin credentials](reference-app-settings.md#wordpress) begin with the **`WORDPRESS_ADMIN_`** prefix. For more information on updating the WordPress admin password, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-arc Least Privilege https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/least-privilege.md
Permissions required to perform this action:
- Create - All the permissions being granted to the service account (see the arcdata-deployer.yaml below for details)
-Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
```console kubectl apply --namespace arc -f arcdata-deployer.yaml
You have several additional options for creating the Azure Arc data controller:
- [Create a data controller in indirect connectivity mode with CLI](create-data-controller-indirect-cli.md) - [Create a data controller in indirect connectivity mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md) - [Create a data controller in indirect connectivity mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)-- [Create a data controller in indirect connectivity mode with Kubernetes tools such as `kubectl` or `oc`](create-data-controller-using-kubernetes-native-tools.md)
+- [Create a data controller in indirect connectivity mode with Kubernetes tools such as `kubectl` or `oc`](create-data-controller-using-kubernetes-native-tools.md)
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
|:|:|:| | Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent by using Azure extension framework. | | On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using Azure extension framework, provided for on-premises by first installing [Azure Arc agent](../../azure-arc/servers/deployment-options.md). |
- | Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
- | Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |
+ | Windows 10, 11 desktops, workstations | [Client installer (Public preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
+ | Windows 10, 11 laptops | [Client installer (Public preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |
1. Define a data collection rule and associate the resource to the rule.
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Data source | Destinations | Description | |:|:|:|
- | Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
+ | Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
| Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system | | Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system | | Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine |
Azure Monitor Agent currently supports these Azure Monitor features:
| Azure Monitor feature | Current support | Other extensions installed | More information | | : | : | : | : |
-| Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (preview)](data-collection-text-log.md) |
+| Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (Public preview)](data-collection-text-log.md) |
| Windows client installer | Public preview | None | [Set up Azure Monitor Agent on Windows client devices](azure-monitor-agent-windows-client.md) | | [VM insights](../vm/vminsights-overview.md) | Preview | Dependency Agent extension, if youΓÇÖre using the Map Services feature | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
Azure Monitor Agent currently supports these Azure
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Sign-up link](https://aka.ms/AMAgent) | | [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Preview</li><li>Linux Syslog CEF: Preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Windows DNS logs](https://aka.ms/AMAgent)</li><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/AMAgent)</li><li>No sign-up needed for Windows Forwarding Event (WEF) and Windows Security Events</li></ul> | | [Change Tracking](../../automation/change-tracking/overview.md) (part of Defender) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/AMAgent) |
-| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (preview) documentation](/azure/update-center/) |
+| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](/azure/update-center/) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | ## Supported regions
If the machine connects through a proxy server to communicate over the internet,
The Azure Monitor Agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extensions settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported. > [!IMPORTANT]
-> Proxy configuration is not supported for [Azure Monitor Metrics (preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
+> Proxy configuration is not supported for [Azure Monitor Metrics (Public preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
1. Use this flowchart to determine the values of the *`Settings` and `ProtectedSettings` parameters first.
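For example, once you've determined those values, applying them to an Azure VM might look like the following sketch. The proxy address is a placeholder, and the JSON shape shown here is an assumption to be checked against the flowchart and the settings guidance that follows:

```azurecli
# Apply proxy settings to the Azure Monitor Agent extension on a Windows VM (anonymous proxy authentication)
az vm extension set \
  --resource-group <resource-group> \
  --vm-name <vm-name> \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor \
  --settings '{"proxy":{"mode":"application","address":"http://<proxy-address>:<port>","auth":"false"}}'
```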
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Azure | X | X | X | | | Other cloud (Azure Arc) | X | X | | | | On-premises (Azure Arc) | X | X | |
-| | Windows Client OS | X (Preview) | | |
+| | Windows Client OS | X (Public preview) | | |
| **Data collected** | | | | | | | Event Logs | X | X | X | | | Performance | X | X | X |
-| | File based logs | X (Preview) | X | X |
-| | IIS logs | X (Preview) | X | X |
+| | File based logs | X (Public preview) | X | X |
+| | IIS logs | X (Public preview) | X | X |
| | ETW events | | | X | | | .NET app logs | | | X | | | Crash dumps | | | X |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Event Hub | | | X | | **Services and features supported** | | | | | | | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | |
-| | VM Insights | | X (Preview) | |
+| | VM Insights | | X (Public preview) | |
| | Azure Automation | | X | | | | Microsoft Defender for Cloud | | X | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data collected** | | | | | | | | Syslog | X | X | X | | | | Performance | X | X | X | X |
-| | File based logs | X (Preview) | | | |
+| | File based logs | X (Public preview) | | | |
| **Data sent to** | | | | | | | | Azure Monitor Logs | X | X | | | | | Azure Monitor Metrics<sup>1</sup> | X | | | X |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | Event Hub | | | X | | | **Services and features supported** | | | | | | | | Microsoft Sentinel | X ([View scope](#supported-services-and-features)) | X | | |
-| | VM Insights | X (Preview) | X | | |
-| | Container Insights | X (Preview) | X | | |
+| | VM Insights | X (Public preview) | X | | |
+| | Container Insights | X (Public preview) | X | | |
| | Azure Automation | | X | | | | | Microsoft Defender for Cloud | | X | | |
The following tables list the operating systems that Azure Monitor Agent and the
| Azure Stack HCI | | X | | <sup>1</sup> Running the OS on server hardware, for example, machines that are always connected, always turned on, and not running other workloads (PC, office, browser)<br>
-<sup>2</sup> Using the Azure Monitor agent [client installer (preview)](./azure-monitor-agent-windows-client.md)
+<sup>2</sup> Using the Azure Monitor agent [client installer (Public preview)](./azure-monitor-agent-windows-client.md)
#### Linux
The following tables list the operating systems that Azure Monitor Agent and the
<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br> <sup>2</sup> Known issue collecting Syslog events in versions prior to 1.9.0.<br>
-<sup>3</sup> Not all kernel versions are supported. Check the supported kernel versions in the following table.
-
-> [!NOTE]
-> For Dependency Agent Linux support, see [Dependency Agent documentation](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support).
+<sup>3</sup> Not all kernel versions are supported. For more information, see [Dependency Agent Linux support](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support).
## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
-# Migration tools for Log Analytics Agent to Azure Monitor Agent
+# Tools for migrating from Log Analytics Agent to Azure Monitor Agent
Azure Monitor Agent (AMA) replaces the Log Analytics agent (MMA); its benefits include enhanced security, cost-effectiveness, performance, manageability, and reliability. This article explains how to use the AMA Migration Helper and DCR Config Generator tools to help automate and track the migration from Log Analytics Agent to Azure Monitor Agent.
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
The following sections describe common errors that appear in the connector statu
**Cause**: Such an error appears in either of these situations: * A newly created ITSM Connector instance has yet to finish the initial sync.
-* The connector was not defined correctly.
+* The connector wasn't defined correctly.
**Resolution**:
The following sections describe common errors that appear in the connector statu
## IP restrictions
-**Error**: "Failed to add ITSM Connection named "XXX" due to Bad Request. Error: Bad request. Invalid parameters provided for connection. Http Exception: Status Code Forbidden."
+**Error**:
+* "Failed to add ITSM Connection named "XXX" due to Bad Request. Error: Bad request. Invalid parameters provided for connection. Http Exception: Status Code Forbidden."
+* "Failed to update ITSM Connection credentials"
-**Cause**: The IP address of ITSM application is not allow ITSM connections from partners ITSM tools.
+**Cause**: The IP addresses used by the ITSM connection aren't allowed by the partner ITSM tool.
-**Resolution**: In order to list the ITSM IP addresses in order to allow ITSM connections from partners ITSM tools, we recommend the to list the whole public IP range of Azure region where their LogAnalytics workspace belongs. [details here](https://www.microsoft.com/download/details.aspx?id=56519) For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only.
+**Resolution**: To allow ITSM connections from partner ITSM tools, allowlist the ITSM IP addresses in your ITSM tool. We recommend allowlisting the whole public IP range of the Azure region where your Log Analytics workspace is located ([details here](https://www.microsoft.com/download/details.aspx?id=56519)). For the EUS/WEU/EUS2/WUS2/US South Central regions, you can allowlist the ActionGroup network tag only.
## Authentication **Error**: "User Not Authenticated"
-**Cause**: There can be one of 2 options either the token need to be refreshed or there is missing integration user rights.
+**Cause**: There are two possible causes: either the token needs to be refreshed, or the integration user rights are missing.
**Resolution**: If the integration worked for you in the past, the refresh token might have expired. Sync ITSMC to generate a new refresh token, as explained in [How to manually fix sync problems](./itsmc-resync-servicenow.md). If the integration never worked, the integration user rights might be missing. Check them [here](./itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role).
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
Each workspace has a default retention policy that's applied to all tables. You
:::image type="content" source="media/data-retention-configure/retention-archive.png" alt-text="Overview of data retention and archive periods":::
-During the interactive retention period, data is available for monitoring, troubleshooting and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs. You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md).
+During the interactive retention period, data is available for monitoring, troubleshooting and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs.
+Archived data stays in the same table, alongside the data that's available for interactive queries.
+When you set a total retention period that's longer than the interactive retention period, Log Analytics automatically archives the relevant data immediately at the end of the retention period.
+
+If you change the archive settings on a table with existing data, the relevant data in the table is also affected immediately. For example, if you have an existing table with 30 days of interactive retention and no archive period and you change the retention policy to eight days of interactive retention and one year total retention, Log Analytics immediately archives any data that's older than eight days.
+
+You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md).
+
> [!NOTE] > The archive feature is currently in public preview and can only be set at the table level, not at the workspace level.
The request body includes the values in the following table.
**Example**
-This example sets the table's interactive retention to the workspace default of 30 days, and the total retention to two years. This means the archive duration is 23 months.
+This example sets the table's interactive retention to the workspace default of 30 days, and the total retention to two years, which means that the archive duration is 23 months.
**Request**
Status code: 200
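
For illustration only, the following minimal Python sketch shows one way to send such an update to the Tables API from a script. The URL path, `api-version`, and property names are assumptions based on the example described above; verify them against the Tables - Update reference before using them.

```python
# Illustrative sketch only: update a table's retention settings through the
# Log Analytics Tables API. URL path, api-version, and property names are
# assumptions; verify them against the Tables - Update reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
workspace_name = "<workspace-name>"     # placeholder
table_name = "AzureMetrics"

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{workspace_name}"
    f"/tables/{table_name}?api-version=2021-12-01-preview"
)

# 30 days of interactive retention and 730 days of total retention,
# which leaves roughly 23 months in the archive tier.
body = {"properties": {"retentionInDays": 30, "totalRetentionInDays": 730}}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)
```
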
To set the retention and archive duration for a table, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and pass the `--retention-time` and `--total-retention-time` parameters.
-This example sets table's interactive retention to 30 days, and the total retention to two years. This means the archive duration is 23 months:
+This example sets table's interactive retention to 30 days, and the total retention to two years, which means that the archive duration is 23 months:
```azurecli az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name AzureMetrics --retention-time 30 --total-retention-time 730
Tables related to Application Insights resources also keep data for 90 days at n
## Pricing model
-You'll be charged for each day you retain data. The cost of retaining data for part of a day is the same as for a full day.
+The charge for maintaining archived logs is calculated based on the volume of data you archive, in GB, and the number of days for which you archive the data.
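
As a rough illustration of that calculation, consider the following sketch. The per-GB rate is a placeholder, not an actual Azure Monitor price; see the pricing page linked below for real rates.

```python
# Rough illustration of how the archived-logs charge scales with volume and duration.
# rate_per_gb_per_day is a placeholder value, not an actual Azure Monitor price.
archived_gb = 500             # volume of archived data, in GB
archive_days = 365            # number of days the data stays archived
rate_per_gb_per_day = 0.0007  # placeholder rate; see the pricing page for real rates

estimated_charge = archived_gb * archive_days * rate_per_gb_per_day
print(f"Estimated archive charge: ${estimated_charge:,.2f}")
```
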
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-## Classic Application Insights resources
-Data workspace-based Application Insights resources is stored in a Log Analytics workspace, so it's included in the data retention and archive settings for the workspace. Classic Application Insights resources though, have separate retention settings.
+## Set data retention for Classic Application Insights resources
+Workspace-based Application Insights resources store data in a Log Analytics workspace, so that data is included in the data retention and archive settings for the workspace. However, classic Application Insights resources have separate retention settings.
-The default retention for Application Insights resources is 90 days. Different retention periods can be selected for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days.
+The default retention for Application Insights resources is 90 days. You can select different retention periods for each Application Insights resource. The full set of available retention periods is 30, 60, 90, 120, 180, 270, 365, 550 or 730 days.
To change the retention, from your Application Insights resource, go to the **Usage and Estimated Costs** page and select the **Data Retention** option:
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
InsightsMetrics
| where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "TransfersPerSecond" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m) ), Computer, _ResourceId, Disk
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
``` **Logical disk data rate**
InsightsMetrics
| where Origin == "vm.azm.ms" | where Namespace == "LogicalDisk" and Name == "BytesPerSecond" | extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m) , Computer, _ResourceId, Disk
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
``` ## Network alerts
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 04/04/2022 Last updated : 08/08/2022+ # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## July, 2022
+### General
+
+| Article | Description |
+|:|:|
+|[Sources of data in Azure Monitor](data-sources.md)|Updated with Azure Monitor agent and Logs ingestion API.|
+
+### Agents
+
+| Article | Description |
+|:|:|
+|[Azure Monitor Agent overview](agents/agents-overview.md)| Restructure of the Agents section. A single Azure Monitor Agent is replacing all of Azure Monitor's legacy monitoring agents.
+|[Enable network isolation for the Azure Monitor agent](agents/azure-monitor-agent-data-collection-endpoint.md)|Rewritten to better describe configuration of network isolation.
+
+### Alerts
+
+| Article | Description |
+|:|:|
+|[Azure Monitor Alerts Overview](alerts/alerts-overview.md)|Updated the logic for the time to resolve behavior in stateful log alerts.
+
+### Application Insights
+
+| Article | Description |
+|:|:|
+|[Azure Monitor Application Insights Java](app/java-in-process-agent.md)|OpenTelemetry-based auto-instrumentation for Java applications has an updated Supported Custom Telemetry table.
+|[Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)|Clarification has been added that valueCount and itemCount have a minimum value of 1.
+|[Telemetry sampling in Azure Application Insights](app/sampling.md)|Sampling documentation has been updated to warn of the potential impact on alerting accuracy.
+|[Azure Monitor Application Insights Java (redirect to OpenTelemetry)](app/java-in-process-agent-redirect.md)|Java Auto-Instrumentation now redirects to OpenTelemetry documentation.
+|[Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md)|Updated .NET Core FAQ
+|[Create a new Azure Monitor Application Insights workspace-based resource](app/create-workspace-resource.md)|We've linked out to Microsoft.Insights components for more information on Properties.
+|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|SDK support guidance has been updated and clarified.
+|[Azure Monitor Application Insights Java](app/java-in-process-agent.md)|Example code has been updated.
+|[IP addresses used by Azure Monitor](app/ip-addresses.md)|The IP/FQDN table has been updated.
+|[Continuous export of telemetry from Application Insights](app/export-telemetry.md)|The continuous export notice has been updated and clarified.
+|[Set up availability alerts with Application Insights](app/availability-alerts.md)|Custom Alert Rule and Alert Frequency sections have been added.
+
+### Autoscale
+
+| Article | Description |
+|:|:|
+| [How-to guide for setting up autoscale for a web app with a custom metric](autoscale/autoscale-custom-metric.md) |General rewrite to improve clarity.|
+|[Overview of autoscale in Microsoft Azure](autoscale/autoscale-overview.md)|General rewrite to improve clarity.|
+
+### Containers
+
+| Article | Description |
+|:|:|
+|[Overview of Container insights](containers/container-insights-overview.md)|Added information about deprecation of Docker support.|
+|[Enable Container insights](containers/container-insights-onboard.md)|All Container insights content updated for new support of managed identity authentication using Azure Monitor agent.|
+
+### Essentials
+
+| Article | Description |
+|:|:|
+|[Tutorial - Editing Data Collection Rules](essentials/data-collection-rule-edit.md)|New article.|
+|[Data Collection Rules in Azure Monitor](essentials/data-collection-rule-overview.md)|General rewrite to improve clarity.|
+|[Data collection transformations](essentials/data-collection-transformations.md)|General rewrite to improve clarity.|
+|[Data collection in Azure Monitor](essentials/data-collection.md)|New article.|
+|[How to Migrate from Diagnostic Settings Storage Retention to Azure Storage Lifecycle Policy](essentials/migrate-to-azure-storage-lifecycle-policy.md)|New article.|
+
+### Logs
+
+| Article | Description |
+|:|:|
+|[Logs ingestion API in Azure Monitor (Preview)](logs/logs-ingestion-api-overview.md)|Custom logs API renamed to Logs ingestion API.
+|[Tutorial - Send data to Azure Monitor Logs using REST API (Resource Manager templates)](logs/tutorial-logs-ingestion-api.md)|Custom logs API renamed to Logs ingestion API.
+|[Tutorial - Send data to Azure Monitor Logs using REST API (Azure portal)](logs/tutorial-logs-ingestion-portal.md)|Custom logs API renamed to Logs ingestion API.
+
+### Virtual Machines
+
+| Article | Description |
+|:|:|
+|[What is VM insights?](vm/vminsights-overview.md)|All VM insights content updated for new support of Azure Monitor agent.
+ ## June, 2022
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"use-protectedsettings-for-commandtoexecute-secrets": { "level": "warning" },
+ "secure-secrets-in-params": {
+ "level": "warning"
+ },
"use-stable-resource-identifiers": { "level": "warning" },
azure-resource-manager Linter Rule Secure Secrets In Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-secure-secrets-in-parameters.md
+
+ Title: Linter rule - secure secrets in parameters
+description: Linter rule - secure secrets in parameters
+ Last updated : 08/01/2022++
+# Linter rule - secure secrets in parameters
+
+This rule finds parameters whose names look like secrets but that don't have the [secure decorator](./parameters.md#decorators). For example, the rule is triggered when a parameter name contains any of the following keywords:
+
+- password
+- pwd
+- secret
+- accountkey
+- acctkey
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`secure-secrets-in-params`
+
+## Solution
+
+Use the [secure decorator](./parameters.md#decorators) for the parameters that contain secrets. The secure decorator marks the parameter as secure. The value for a secure parameter isn't saved to the deployment history and isn't logged.
+
+The following example fails this test because the parameter name may contain secrets.
+
+```bicep
+param mypassword string
+```
+
+You can fix it by adding the secure decorator:
+
+```bicep
+@secure()
+param mypassword string
+```
+
+## Silencing false positives
+
+Sometimes this rule alerts on parameters that don't actually contain secrets. In these cases, you can disable the warning for this line by adding `#disable-next-line secure-secrets-in-params` before the line with the warning. For example:
+
+```bicep
+#disable-next-line secure-secrets-in-params // Doesn't contain a secret
+param mypassword string
+```
+
+It's good practice to add a comment explaining why the rule doesn't apply to this line.
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [prefer-interpolation](./linter-rule-prefer-interpolation.md) - [prefer-unquoted-property-names](./linter-rule-prefer-unquoted-property-names.md) - [secure-parameter-default](./linter-rule-secure-parameter-default.md)
+- [secure-secrets-in-params](./linter-rule-secure-secrets-in-parameters.md)
- [simplify-interpolation](./linter-rule-simplify-interpolation.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md) - [use-stable-resource-identifiers](./linter-rule-use-stable-resource-identifier.md)
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Title: Protect your Azure resources with a lock description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Previously updated : 05/13/2022 Last updated : 08/08/2022
Applying locks can lead to unexpected results. Some operations, which don't seem
- A cannot-delete lock on the **resource group** created by **Azure Backup Service** causes backups to fail. The service supports a maximum of 18 restore points. When locked, the backup service can't clean up restore points. For more information, see [Frequently asked questions-Back up Azure VMs](../../backup/backup-azure-vm-backup-faq.yml). -- A cannot-delete lock on a **resource group** prevents **Azure Machine Learning** from autoscaling [Azure Machine Learning compute clusters](../../machine-learning/concept-compute-target.md#azure-machine-learning-compute-managed) to remove unused nodes.
+- A cannot-delete lock on a **resource group** that contains **Azure Machine Learning** workspaces prevents autoscaling of [Azure Machine Learning compute clusters](../../machine-learning/concept-compute-target.md#azure-machine-learning-compute-managed) from working correctly. With the lock, autoscaling can't remove unused nodes. Your solution consumes more resources than are required for the workload.
- A read-only lock on a **Log Analytics workspace** prevents **User and Entity Behavior Analytics (UEBA)** from being enabled.
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
The resource group stores metadata about the resources. When you specify a locat
## Create resource groups
-to create a resource group, use [az group create](/cli/azure/group#az-group-create).
+To create a resource group, use [az group create](/cli/azure/group#az-group-create).
```azurecli-interactive az group create --name demoResourceGroup --location westus
To get the locks for a resource group, use [az lock list](/cli/azure/lock#az-loc
az lock list --resource-group exampleGroup ```
-To delete a lock, use [az lock delete](/cli/azure/lock#az-lock-delete)
+To delete a lock, use [az lock delete](/cli/azure/lock#az-lock-delete).
```azurecli-interactive az lock delete --name exampleLock --resource-group exampleGroup
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
-description: Describes common deployment errors for Azure resources that are deployed with Azure Resource Manager templates (ARM templates) or Bicep files.
+description: Describes common Azure deployment errors for resources that are deployed with Bicep files or Azure Resource Manager templates (ARM templates).
tags: top-support-issue Previously updated : 05/16/2022 Last updated : 08/05/2022 # Troubleshoot common Azure deployment errors
-This article describes common Azure deployment errors, and provides information about solutions. Azure resources can be deployed with Azure Resource Manager templates (ARM templates) or Bicep files. If you can't find the error code for your deployment error, see [Find error code](find-error-code.md).
+This article describes common Azure deployment errors, and provides information about solutions. Azure resources can be deployed with Bicep files or Azure Resource Manager templates (ARM templates). If you can't find the error code for your deployment error, see [Find error code](find-error-code.md).
If your error code isn't listed, submit a GitHub issue. On the right side of the page, select **Feedback**. At the bottom of the page, under **Feedback** select **This page**. Provide your documentation feedback but **don't include confidential information** because GitHub issues are public.
If your error code isn't listed, submit a GitHub issue. On the right side of the
| Error code | Mitigation | More information | | - | - | - |
-| AccountNameInvalid | Follow naming restrictions for storage accounts. | [Resolve storage account name](error-storage-account-name.md) |
+| AccountNameInvalid | Follow naming guidelines for storage accounts. | [Resolve errors for storage account names](error-storage-account-name.md) |
| AccountPropertyCannotBeSet | Check available storage account properties. | [storageAccounts](/azure/templates/microsoft.storage/storageaccounts) | | AllocationFailed | The cluster or region doesn't have resources available or can't support the requested VM size. Retry the request at a later time, or request a different VM size. | [Provisioning and allocation issues for Linux](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-linux) <br><br> [Provisioning and allocation issues for Windows](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-windows) <br><br> [Troubleshoot allocation failures](/troubleshoot/azure/virtual-machines/allocation-failure)| | AnotherOperationInProgress | Wait for concurrent operation to complete. | |
If your error code isn't listed, submit a GitHub issue. On the right side of the
| DeploymentJobSizeExceeded | Simplify your template to reduce size. | [Resolve template size errors](error-job-size-exceeded.md) | | DnsRecordInUse | The DNS record name must be unique. Enter a different name. | | | ImageNotFound | Check VM image settings. | |
-| InaccessibleImage | Azure Container Instance deployment fails. You might need to include the image's tag with the syntax `registry/image:tag` to deploy the container. For a private registry, verify your credentials are correct. | [Find error code](find-error-code.md) |
+| InaccessibleImage | Azure Container Instance deployment fails. You might need to include the image's tag with the syntax `registry/image:tag` to deploy the container. For a private registry, verify your credentials are correct. | [Find error code](find-error-code.md) |
| InternalServerError | Caused by a temporary problem. Retry the deployment. | | | InUseSubnetCannotBeDeleted | This error can occur when you try to update a resource, if the request process deletes and creates the resource. Make sure to specify all unchanged values. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) | | InvalidAuthenticationTokenTenant | Get access token for the appropriate tenant. You can only get the token from the tenant that your account belongs to. | |
If your error code isn't listed, submit a GitHub issue. On the right side of the
| InvalidParameter | One of the values you provided for a resource doesn't match the expected value. This error can result from many different conditions. For example, a password may be insufficient, or a blob name may be incorrect. The error message should indicate which value needs to be corrected. | [ARM template parameters](../templates/parameters.md) <br><br> [Bicep parameters](../bicep/parameters.md) | | InvalidRequestContent | The deployment values either include values that aren't recognized, or required values are missing. Confirm the values for your resource type. | [Template reference](/azure/templates/) | | InvalidRequestFormat | Enable debug logging when running the deployment, and verify the contents of the request. | [Debug logging](enable-debug-logging.md) |
-| InvalidResourceLocation | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) |
+| InvalidResourceLocation | Provide a unique name for the storage account. | [Resolve errors for storage account names](error-storage-account-name.md) |
| InvalidResourceNamespace | Check the resource namespace you specified in the **type** property. | [Template reference](/azure/templates/) | | InvalidResourceReference | The resource either doesn't yet exist or is incorrectly referenced. Check whether you need to add a dependency. Verify that your use of the **reference** function includes the required parameters for your scenario. | [Resolve dependencies](error-not-found.md) | | InvalidResourceType | Check the resource type you specified in the **type** property. | [Template reference](/azure/templates/) |
If your error code isn't listed, submit a GitHub issue. On the right side of the
| ResourceNotFound | Your deployment references a resource that can't be resolved. Verify that your use of the **reference** function includes the parameters required for your scenario. | [Resolve references](error-not-found.md) | | ResourceQuotaExceeded | The deployment is trying to create resources that exceed the quota for the subscription, resource group, or region. If possible, revise your infrastructure to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) | | SkuNotAvailable | Select SKU (such as VM size) that is available for the location you've selected. | [Resolve SKU](error-sku-not-available.md) |
-| StorageAccountAlreadyExists <br> StorageAccountAlreadyTaken | Provide a unique name for the storage account. | [Resolve storage account name](error-storage-account-name.md) |
+| StorageAccountAlreadyTaken <br> StorageAccountAlreadyExists | Provide a unique name for the storage account. | [Resolve errors for storage account names](error-storage-account-name.md) |
+| StorageAccountInAnotherResourceGroup | Provide a unique name for the storage account. | [Resolve errors for storage account names](error-storage-account-name.md) |
| StorageAccountNotFound | Check the subscription, resource group, and name of the storage account that you're trying to use. | | | SubnetsNotInSameVnet | A virtual machine can only have one virtual network. When deploying several NICs, make sure they belong to the same virtual network. | [Windows VM multiple NICs](../../virtual-machines/windows/multiple-nics.md) <br><br> [Linux VM multiple NICs](../../virtual-machines/linux/multiple-nics.md) | | SubnetIsFull | There aren't enough available addresses in the subnet to deploy resources. You can release addresses from the subnet, use a different subnet, or create a new subnet. | [Manage subnets](../../virtual-network/virtual-network-manage-subnet.md) and [Virtual network FAQ](../../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) <br><br> [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) |
azure-resource-manager Error Storage Account Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-storage-account-name.md
Title: Storage account name errors
-description: Describes errors that can occur when specifying a storage account name in an Azure Resource Manager template (ARM template) or Bicep file.
+ Title: Resolve errors for storage account names
+description: Describes how to resolve errors for Azure storage account names that can occur during deployment with a Bicep file or Azure Resource Manager template (ARM template).
Previously updated : 11/12/2021 Last updated : 08/05/2022 # Resolve errors for storage account names
-This article describes naming errors that can occur when deploying a storage account with an Azure Resource Manager template (ARM template) or Bicep file.
+This article describes how to resolve errors for Azure storage account names that can occur during deployment with a Bicep file or Azure Resource Manager template (ARM template). Common causes for an error are a storage account name with invalid characters or a storage account that uses the same name as an existing storage account. Storage account names must be globally unique across Azure.
## Symptom
-If your storage account name includes prohibited characters, like an uppercase letter or exclamation point, you receive an error:
+An invalid storage account name causes an error code during deployment. The following are some examples of errors for storage account names.
+
+### Account name invalid
+
+If your storage account name includes prohibited characters, like an uppercase letter or a special character such as an exclamation point, you receive the following error:
```Output Code=AccountNameInvalid
Message=S!torageckrexph7isnoc is not a valid storage account name. Storage accou
between 3 and 24 characters in length and use numbers and lower-case letters only. ```
-For storage accounts, you must provide a resource name that's unique across Azure. If you don't provide a unique name, you receive an error:
+### Invalid resource location
+
+If you try to deploy a new storage account that has the same name as an existing storage account in your Azure subscription and is in the same resource group, but you specify a different location, you receive the following error. The error indicates that the storage account already exists and can't be created in the new location. Select a different name to create the new storage account.
```Output
-Code=StorageAccountAlreadyTaken
-Message=The storage account named mystorage is already taken.
+Code=InvalidResourceLocation
+Message=The resource 'storageckrexph7isnoc' already exists in location 'westus'
+in resource group 'demostorage'. A resource with the same name cannot be created in location 'eastus'.
+Please select a new resource name.
+```
+
+### Storage account in another resource group
+
+If you try to deploy a new storage account with the same name and location as an existing storage account, but in a different resource group in your subscription, you receive the following error:
+
+```Output
+Code=StorageAccountInAnotherResourceGroup
+Message=The account storageckrexph7isnoc is already in another resource group in this subscription.
```
-If you deploy a storage account with the same name as an existing storage account in your subscription, but in a different location, you receive an error. The error indicates the storage account already exists in a different location. Either delete the existing storage account or use the same location as the existing storage account.
+### Storage account already taken
+
+If you try to deploy a new storage account with the same name as a storage account that already exists anywhere in Azure, you receive the following error. The existing storage account might be in your subscription or tenant, or in any other subscription, because storage account names must be globally unique across Azure.
+
+```Output
+Code=StorageAccountAlreadyTaken
+Message=The storage account named storageckrexph7isnoc is already taken.
+```
## Cause
-Storage account names must be between 3 and 24 characters in length and only use numbers and lowercase letters. No uppercase letters or special characters. The name must be unique.
+Common causes of an error are a storage account name that uses invalid characters or a name that duplicates an existing storage account. Storage account names must meet the following criteria (a quick local check is sketched after this list):
+
+- Length between 3 and 24 characters with only lowercase letters and numbers.
+- Must be globally unique across Azure. Storage account names can't be duplicated in Azure.
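
For a quick local sanity check of the character and length rules, you can test a candidate name with a simple pattern, as sketched below. This check can't verify global uniqueness; that's only determined when you attempt the deployment.

```python
import re

def is_valid_storage_account_name(name: str) -> bool:
    # 3 to 24 characters, lowercase letters and numbers only.
    # Global uniqueness across Azure can't be checked locally.
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None

print(is_valid_storage_account_name("storageckrexph7isnoc"))   # True
print(is_valid_storage_account_name("S!torageckrexph7isnoc"))  # False: uppercase letter and '!'
```
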
## Solution
-You can create a unique name by concatenating your naming convention with the result of the `uniqueString` function.
+You can create a unique name by concatenating a prefix or suffix with a value from the `uniqueString` function.
+
+The following examples specify a prefix with the string `storage` that's concatenated with the value from `uniqueString`.
# [Bicep](#tab/bicep) Bicep uses [string interpolation](../bicep/bicep-functions-string.md#concat) with [uniqueString](../bicep/bicep-functions-string.md#uniquestring). ```bicep
-resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
name: 'storage${uniqueString(resourceGroup().id)}' ```
ARM templates use [concat](../templates/template-functions-string.md#concat) wit
Make sure your storage account name doesn't exceed 24 characters. The `uniqueString` function returns 13 characters. If you want to concatenate a prefix or suffix, provide a value that's 11 characters or less.
-The following examples use a parameter that creates a prefix with a maximum of 11 characters.
+The following examples use a parameter named `storageNamePrefix` that creates a prefix with a maximum of 11 characters.
# [Bicep](#tab/bicep)
param storageNamePrefix string = 'storage'
-You then concatenate the parameter value with the `uniqueString` value to create a storage account name.
+You then concatenate the `storageNamePrefix` parameter's value with the `uniqueString` value to create a storage account name.
# [Bicep](#tab/bicep)
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
Azure SignalR Service uses Managed Identity to access your Key Vault. In order t
:::image type="content" alt-text="Screenshot of enabling managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+Depending on how you configure your Key Vault permission model, you may need to grant permissions in different places.
+
+#### [Vault access policy](#tab/vault-access-policy)
+
+If you're using the Key Vault built-in access policy as the Key Vault permission model:
+
+ :::image type="content" alt-text="Screenshot of built-in access policy selected as Key Vault permission model." source="media\howto-custom-domain\portal-key-vault-perm-model-access-policy.png" :::
+ 1. Go to your Key Vault resource. 1. In the menu pane, select **Access configuration**. Click **Go to access policies**. 1. Click **Create**. Select **Secret Get** permission and **Certificate Get** permission. Click **Next**.
Azure SignalR Service uses Managed Identity to access your Key Vault. In order t
1. Skip **Application (optional)**. Click **Next**. 1. In **Review + create**, click **Create**.
+#### [Azure role-based access control](#tab/azure-rbac)
+
+If you're using Azure role-based access control as the Key Vault permission model:
+
+ :::image type="content" alt-text="Screenshot of Azure RBAC selected as Key Vault permission model." source="media\howto-custom-domain\portal-key-vault-perm-model-rbac.png" :::
+
+1. Go to your Key Vault resource.
+1. In the menu pane, select **Access control (IAM)**.
+1. Click **Add**. Select **Add role assignment**.
+
+ :::image type="content" alt-text="Screenshot of Key Vault IAM." source="media\howto-custom-domain\portal-key-vault-iam.png" :::
+
+1. Under the **Role** tab, select **Key Vault Secrets User**. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of role tab when adding role assignment to Key Vault." source="media\howto-custom-domain\portal-key-vault-role.png" :::
+
+1. Under the **Members** tab, select **Managed identity**.
+1. Search for the Azure SignalR Service resource name or the user assigned identity name. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of members tab when adding role assignment to Key Vault." source="media\howto-custom-domain\portal-key-vault-members.png" :::
+
+1. Click **Review + assign**.
+
+--
+ ### Step 2: Create a custom certificate 1. In the Azure portal, go to your Azure SignalR Service resource.
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
Classic and ARM (Azure Resource Manager) are both paid accounts with similar dat
Going forward, ARM accounts will support more Azure native features and integrations, such as Azure Monitor, private endpoints, service tags, and CMK (customer-managed keys). **The recommended paid account type is the ARM-based account.**
+### To generate an access token
+ | | ARM-based |Classic| Trial| ||||| |Get access token | [ARM REST API](https://aka.ms/avam-arm-api) |[Get access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token)|Same as classic |Share account| [Azure RBAC(role based access control)](../role-based-access-control/overview.md)| [Invite users](invite-users.md) |Same as classic - A trial Azure Video Indexer account has limitation on number of videos, support, and SLA. ### Indexing
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Prev
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 05/10/2022 Last updated : 08/08/2022
The following diagram demonstrates a typical architecture of Azure NetApp Files
Before you begin the prerequisites, review the [Performance best practices](#performance-best-practices) section to learn about optimal performance of NFS datastores on Azure NetApp Files volumes.
-1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud in a configured virtual network. For more information, see [Network planning checklist](./tutorial-network-checklist.md) and [Configure networking for your VMware private cloud](https://review.docs.microsoft.com/azure/azure-vmware/tutorial-configure-networking?).
-1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network as the Azure VMware Solution private cloud.
+1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud and a dedicated virtual network connected via ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md).
+1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network created in the previous step.
1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP. 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
azure-web-pubsub Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-custom-domain.md
Azure Web PubSub Service uses Managed Identity to access your Key Vault. In orde
:::image type="content" alt-text="Screenshot of enabling managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+Depending on how you configure your Key Vault permission model, you may need to grant permissions in different places.
+
+#### [Vault access policy](#tab/vault-access-policy)
+
+If you're using the Key Vault built-in access policy as the Key Vault permission model:
+
+ :::image type="content" alt-text="Screenshot of built-in access policy selected as Key Vault permission model." source="media\howto-custom-domain\portal-key-vault-perm-model-access-policy.png" :::
+ 1. Go to your Key Vault resource. 1. In the menu pane, select **Access configuration**. Click **Go to access policies**. 1. Click **Create**. Select **Secret Get** permission and **Certificate Get** permission. Click **Next**.
Azure Web PubSub Service uses Managed Identity to access your Key Vault. In orde
1. Skip **Application (optional)**. Click **Next**. 1. In **Review + create**, click **Create**.
+#### [Azure role-based access control](#tab/azure-rbac)
+
+If you're using Azure role-based access control as the Key Vault permission model:
+
+ :::image type="content" alt-text="Screenshot of Azure RBAC selected as Key Vault permission model." source="media\howto-custom-domain\portal-key-vault-perm-model-rbac.png" :::
+
+1. Go to your Key Vault resource.
+1. In the menu pane, select **Access control (IAM)**.
+1. Click **Add**. Select **Add role assignment**.
+
+ :::image type="content" alt-text="Screenshot of Key Vault IAM." source="media\howto-custom-domain\portal-key-vault-iam.png" :::
+
+1. Under the **Role** tab, select **Key Vault Secrets User**. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of role tab when adding role assignment to Key Vault." source="media\howto-custom-domain\portal-key-vault-role.png" :::
+
+1. Under the **Members** tab, select **Managed identity**.
+1. Search for the Azure Web PubSub Service resource name or the user assigned identity name. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of members tab when adding role assignment to Key Vault." source="media\howto-custom-domain\portal-key-vault-members.png" :::
+
+1. Click **Review + assign**.
+
+--
+ ### Step 2: Create a custom certificate 1. In the Azure portal, go to your Azure Web PubSub Service resource.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
bastion Bastion Connect Vm Rdp Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-linux.md
Title: 'Connect to a Linux VM using RDP' description: Learn how to use Azure Bastion to connect to Linux VM using RDP.- Previously updated : 10/12/2021 Last updated : 08/08/2022
-# Create an RDP connection to a Linux VM using Azure
+# Create an RDP connection to a Linux VM using Azure Bastion
-This article shows you how to securely and seamlessly create an RDP connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Linux VM using SSH. For information, see [Create an SSH connection to a Linux VM](bastion-connect-vm-ssh-linux.md).
+This article shows you how to securely and seamlessly create an RDP connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also [connect to a Linux VM using SSH](bastion-connect-vm-ssh-linux.md).
-Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md).
+Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see [What is Azure Bastion?](bastion-overview.md)
-> [!NOTE]
-> Using RDP to connect to a Linux virtual machine requires the Azure Bastion Standard SKU.
->
+## Prerequisites
-When using Azure Bastion to connect to a Linux virtual machine using RDP, you must use username/password for authentication.
+Before you begin, verify that you've met the following criteria:
-## Prerequisites
+* Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
-Before you begin, verify that you have met the following criteria:
+* To use RDP with a Linux virtual machine, you must also ensure that you have xrdp installed and configured on the Linux VM. To learn how to do this, see [Use xrdp with Linux](../virtual-machines/linux/use-remote-desktop.md).
-Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
+* Bastion must be configured with the [Standard SKU](configuration-settings.md#skus).
-To RDP to a Linux virtual machine, you must also ensure that you have xrdp installed and configured on your Linux virtual machine. To learn how to do this, see [Use xrdp with Linux](../virtual-machines/linux/use-remote-desktop.md).
+* You must use username/password authentication.
### Required roles
In order to make a connection, the following roles are required:
To connect to the Linux VM via RDP, you must have the following ports open on your VM: * Inbound port: RDP (3389) *or*
-* Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion)
-
-### Supported configurations
-
-Currently, Azure Bastion only supports connecting to Linux VMs via RDP using **xrdp**.
+* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
## <a name="rdp"></a>Connect [!INCLUDE [Connect to a Linux VM using RDP](../../includes/bastion-vm-rdp-linux.md)]
-
+ ## Next steps Read the [Bastion FAQ](bastion-faq.md).
bastion Bastion Connect Vm Rdp Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-windows.md
Title: 'Connect to a Windows VM using RDP' description: Learn how to use Azure Bastion to connect to Windows VM using RDP.- - Previously updated : 11/29/2021 Last updated : 08/08/2022
This article shows you how to securely and seamlessly create an RDP connection to your Windows VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Windows VM using SSH. For information, see [Create an SSH connection to a Windows VM](bastion-connect-vm-ssh-windows.md).
-Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md).
+Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see [What is Azure Bastion?](bastion-overview.md)
## Prerequisites
-Before you begin, verify that you have met the following criteria:
+Before you begin, verify that you've met the following criteria:
* A VNet with the Bastion host already installed.
- * Make sure that you have set up an Azure Bastion host for the virtual network in which the VM is located. Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in the virtual network.
+ * Make sure that you have set up an Azure Bastion host for the virtual network in which the VM is located. Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in the virtual network.
* To set up an Azure Bastion host, see [Create a bastion host](tutorial-create-host-portal.md#createhost). If you plan to configure custom port values, be sure to select the Standard SKU when configuring Bastion. * A Windows virtual machine in the virtual network.
Before you begin, verify that you have met the following criteria:
To connect to the Windows VM, you must have the following ports open on your Windows VM:
-* Inbound port: RDP (3389) ***or***
-* Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion)
+* Inbound port: RDP (3389) ***or***
+* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
> [!NOTE] > If you want to specify a custom port value, Azure Bastion must be configured using the Standard SKU. The Basic SKU does not allow you to specify custom ports.
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
cognitive-services Multivariate Anomaly Detection Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md
+
+ Title: "Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics"
+
+description: Learn how to use the Multivariate Anomaly Detector with Azure Synapse Analytics.
++++++ Last updated : 08/03/2022+++
+# Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics
+
+Use this tutorial to detect anomalies among multiple variables in very large datasets and databases by using Azure Synapse Analytics. This solution is well suited for scenarios like equipment predictive maintenance. The underlying power comes from the integration with [SynapseML](https://microsoft.github.io/SynapseML/), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. SynapseML can be installed and used on any Spark 3 infrastructure, including your **local machine**, **Databricks**, **Synapse Analytics**, and others.
+
+For more information, see [SynapseML estimator for Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Use Azure Synapse Analytics to detect anomalies among multiple variables.
+> * Train a Multivariate Anomaly Detector model and run inference in separate notebooks in Synapse Analytics.
+> * Get anomaly detection results and root cause analysis for each anomaly.
+
+## Prerequisites
+
+In this section, you'll create the following resources in the Azure portal:
+
+* An **Anomaly Detector** resource to get access to the capability of Multivariate Anomaly Detector.
+* An **Azure Synapse Analytics** resource to use the Synapse Studio.
+* A **Storage account** to upload your data for model training and anomaly detection.
+* A **Key Vault** resource to hold the key of Anomaly Detector and the connection string of the Storage Account.
+
+### Create Anomaly Detector and Azure Synapse Analytics resources
+
+* [Create a resource for Azure Synapse Analytics](https://portal.azure.com/#create/Microsoft.Synapse) in the Azure portal, fill in all the required items.
+* [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal.
+* Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) using your subscription and Workspace name.
+
+ ![A screenshot of the Synapse Analytics landing page.](../media/multivariate-anomaly-detector-synapse/synapse-workspace-welcome-page.png)
+
+### Create a storage account resource
+
+* [Create a storage account resource](https://portal.azure.com/#create/Microsoft.StorageAccount) in the Azure portal. After your storage account is built, **create a container** to store intermediate data, since SynapseML will transform your original data to a schema that Multivariate Anomaly Detector supports. (Refer to Multivariate Anomaly Detector [input schema](../how-to/multivariate-how-to.md#input-data-schema))
+
+ > [!NOTE]
+ > For the purposes of this example only we are setting the security on the container to allow anonymous read access for containers and blobs since it will only contain our example .csv data. For anything other than demo purposes this is **not recommended**.
+
+ ![A screenshot of the creating a container in a storage account.](../media/multivariate-anomaly-detector-synapse/create-a-container.png)
+
+### Create a Key Vault to hold Anomaly Detector Key and storage account connection string
+
+* Create a key vault and configure secrets and access
+ 1. Create a [key vault](https://portal.azure.com/#create/Microsoft.KeyVault) in the Azure portal.
+ 2. Go to Key Vault > Access policies, and grant the [Azure Synapse workspace](/azure/data-factory/data-factory-service-identity?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) permission to read secrets from Azure Key Vault.
+
+ ![A screenshot of granting permission to Synapse.](../media/multivariate-anomaly-detector-synapse/grant-synapse-permission.png)
+
+* Create a secret in Key Vault to hold the Anomaly Detector key
+ 1. Go to your Anomaly Detector resource, **Anomaly Detector** > **Keys and Endpoint**. Then copy either of the two keys to the clipboard.
+ 2. Go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret, and then paste the key from the previous step into the **Value** field. Finally, select **Create**.
+
+ ![A screenshot of the creating a secret.](../media/multivariate-anomaly-detector-synapse/create-a-secret.png)
+
+* Create a secret in Key Vault to hold Connection String of Storage account
+ 1. Go to your Storage account resource, select **Access keys** to copy one of your Connection strings.
+
+ ![A screenshot of copying connection string.](../media/multivariate-anomaly-detector-synapse/copy-connection-string.png)
+
+ 2. Then go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret (like *myconnectionstring*), and then paste the Connection string from the previous step into the **Value** field. Finally, select **Create**.
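+
+As an alternative to the portal steps above, here's a minimal sketch that creates both secrets with the `azure-identity` and `azure-keyvault-secrets` Python packages; the vault name, secret names, and values below are placeholders:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+# Placeholders: your Key Vault name and the secret values copied from the portal.
+vault_url = "https://<key_vault_name>.vault.azure.net"
+client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
+
+client.set_secret("anomaly-detector-key", "<anomaly_detector_key>")
+client.set_secret("myconnectionstring", "<storage_connection_string>")
+```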
+
+## Using a notebook to conduct Multivariate Anomaly Detection in Synapse Analytics
+
+### Create a notebook and a Spark pool
+
+1. Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) and create a new notebook for coding.
+
+ ![A screenshot of creating notebook in Synapse.](../media/multivariate-anomaly-detector-synapse/create-a-notebook.png)
+
+2. Select **Manage pools** on the notebook page to create a new Apache Spark pool if you don't have one.
+
+ ![A screenshot of creating spark pool.](../media/multivariate-anomaly-detector-synapse/create-spark-pool.png)
+
+### Writing code in the notebook
+
+1. Install the latest version of SynapseML with the Anomaly Detection Spark models. You can also install SynapseML in Spark Packages, Databricks, Docker, and other environments. For more information, see the [SynapseML homepage](https://microsoft.github.io/SynapseML/).
+
+    If you're using **Spark 3.1**, use the following configuration:
+
+ ```python
+ %%configure -f
+ {
+ "name": "synapseml",
+ "conf": {
+ "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT",
+ "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
+ "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12",
+ "spark.yarn.user.classpath.first": "true"
+ }
+ }
+ ```
+
+    If you're using **Spark 3.2**, use the following configuration:
+
+ ```python
+ %%configure -f
+ {
+ "name": "synapseml",
+ "conf": {
+        "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5",
+ "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
+ "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,io.netty:netty-tcnative-boringssl-static",
+ "spark.yarn.user.classpath.first": "true"
+ }
+ }
+ ```
+
+2. Import the necessary modules and libraries.
+
+ ```python
+ from synapse.ml.cognitive import *
+ from notebookutils import mssparkutils
+ import numpy as np
+ import pandas as pd
+ import pyspark
+ from pyspark.sql.functions import col
+ from pyspark.sql.functions import lit
+ from pyspark.sql.types import DoubleType
+ import synapse.ml
+ ```
+
+3. Load your data. Compose your data in the following format, and upload it to a cloud storage that Spark supports, such as an Azure storage account. The timestamp column should be in `ISO 8601` format, and the feature columns should be `string` type. **Download the sample data [here](https://sparkdemostorage.blob.core.windows.net/mvadcsvdata/spark-demo-data.csv)**.
+
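+    For illustration only, here's a minimal sketch of the expected shape of the data; the timestamps and sensor values below are made up:
+
+    ```python
+    # Hypothetical sample rows: an ISO 8601 timestamp column plus one string column per sensor.
+    sample = spark.createDataFrame(
+        [
+            ("2021-01-01T00:00:00Z", "1.0", "2.0", "3.0"),
+            ("2021-01-01T00:01:00Z", "1.1", "2.1", "2.9"),
+        ],
+        ["timestamp", "sensor_1", "sensor_2", "sensor_3"],
+    )
+    sample.show()
+    ```
+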
+ ```python
+ df = spark.read.format("csv").option("header", True).load("wasbs://[container_name]@[storage_account_name].blob.core.windows.net/[csv_file_name].csv")
+
+ df = df.withColumn("sensor_1", col("sensor_1").cast(DoubleType())) \
+ .withColumn("sensor_2", col("sensor_2").cast(DoubleType())) \
+ .withColumn("sensor_3", col("sensor_3").cast(DoubleType()))
+
+ df.show(10)
+ ```
+
+ ![A screenshot of raw data.](../media/multivariate-anomaly-detector-synapse/raw-data.png)
+
+4. Train a multivariate anomaly detection model.
+
+ ![A screenshot of training parameter.](../media/multivariate-anomaly-detector-synapse/training-parameter.png)
+
+ ```python
+    # Retrieve the Anomaly Detector key from Key Vault: pass your Key Vault name and the name of the secret that holds the key.
+    anomalyKey = mssparkutils.credentials.getSecret("[key_vault_name]","[anomaly_key_secret_name]")
+    # Retrieve the storage connection string from Key Vault: pass your Key Vault name and the name of the secret that holds the connection string.
+    connectionString = mssparkutils.credentials.getSecret("[key_vault_name]", "[connection_string_secret_name]")
+
+ #Specify information about your data.
+ startTime = "2021-01-01T00:00:00Z"
+ endTime = "2021-01-02T09:18:00Z"
+ timestampColumn = "timestamp"
+ inputColumns = ["sensor_1", "sensor_2", "sensor_3"]
+    # Specify the container you created in your storage account. You can also provide a new name here, and the container will be created automatically.
+    containerName = "[container_name]"
+    # Set a folder name in the storage account to store the intermediate data.
+ intermediateSaveDir = "intermediateData"
+
+ simpleMultiAnomalyEstimator = (FitMultivariateAnomaly()
+ .setSubscriptionKey(anomalyKey)
+        # In .setLocation, specify the region of your Anomaly Detector resource in lowercase, for example: eastus.
+ .setLocation("[anomaly_detector_region]")
+ .setStartTime(startTime)
+ .setEndTime(endTime)
+ .setContainerName(containerName)
+ .setIntermediateSaveDir(intermediateSaveDir)
+ .setTimestampCol(timestampColumn)
+ .setInputCols(inputColumns)
+ .setSlidingWindow(200)
+ .setConnectionString(connectionString))
+ ```
+
+    Trigger the training process with the following code.
+
+ ```python
+ model = simpleMultiAnomalyEstimator.fit(df)
+ type(model)
+ ```
+
+5. Trigger the inference process.
+
+ ```python
+ startInferenceTime = "2021-01-02T09:19:00Z"
+ endInferenceTime = "2021-01-03T01:59:00Z"
+ result = (model
+ .setStartTime(startInferenceTime)
+ .setEndTime(endInferenceTime)
+ .setOutputCol("results")
+ .setErrorCol("errors")
+ .setTimestampCol(timestampColumn)
+ .setInputCols(inputColumns)
+ .transform(df))
+ ```
+
+6. Get inference results.
+
+ ```python
+    # Collect the inference window as a pandas DataFrame, ordered by timestamp.
+    rdf = (result.select("timestamp", *inputColumns, "results.contributors", "results.isAnomaly", "results.severity")
+           .orderBy('timestamp', ascending=True)
+           .filter(col('timestamp') >= lit(startInferenceTime))
+           .toPandas())
+
+    # Flatten each row's contributors into a {variable: contribution score} dict;
+    # rows with no contributors get a zero score for every variable.
+    def parse(x):
+        if type(x) is list:
+            return dict([item[::-1] for item in x])
+        else:
+            return {'series_0': 0, 'series_1': 0, 'series_2': 0}
+
+    rdf['contributors'] = rdf['contributors'].apply(parse)
+    # Expand the per-variable contribution scores into their own columns.
+    rdf = pd.concat([rdf.drop(['contributors'], axis=1), pd.json_normalize(rdf['contributors'])], axis=1)
+    rdf
+ ```
+
+    The inference results will look as follows. The `severity` is a number between 0 and 1 that indicates how severe an anomaly is. The last three columns indicate the `contribution score` of each sensor; the higher the number, the more anomalous that sensor is.
+ ![A screenshot of inference result.](../media/multivariate-anomaly-detector-synapse/inference-result.png)
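+
+    As a small follow-up sketch, you could keep only the rows flagged as anomalies above a chosen severity; the 0.3 threshold below is an arbitrary value for illustration:
+
+    ```python
+    # Keep only detected anomalies whose severity is at least 0.3 (threshold chosen for illustration).
+    anomalies = rdf[(rdf["isAnomaly"]) & (rdf["severity"] >= 0.3)]
+    print(anomalies.head())
+    ```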
+
+## Clean up intermediate data (optional)
+
+By default, the anomaly detector automatically uploads data to a storage account so that the service can process it. To clean up the intermediate data, you can run the following code.
+
+```python
+simpleMultiAnomalyEstimator.cleanUpIntermediateData()
+model.cleanUpIntermediateData()
+```
+
+## Use trained model in another notebook with model ID (optional)
+
+If you need to run the training code and inference code in separate notebooks in Synapse, you can first get the model ID, and then use that ID to load the model in another notebook by creating a new object.
+
+1. Get the model ID in the training notebook.
+
+ ```python
+ model.getModelId()
+ ```
+
+2. Load the model in the inference notebook.
+
+ ```python
+ retrievedModel = (DetectMultivariateAnomaly()
+ .setSubscriptionKey(anomalyKey)
+ .setLocation("eastus")
+ .setOutputCol("result")
+ .setStartTime(startTime)
+ .setEndTime(endTime)
+ .setContainerName(containerName)
+ .setIntermediateSaveDir(intermediateSaveDir)
+ .setTimestampCol(timestampColumn)
+ .setInputCols(inputColumns)
+ .setConnectionString(connectionString)
+ .setModelId('5bXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXe9'))
+ ```
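+
+    As a minimal sketch, assuming `df` and the variables used above are also defined in this notebook, inference with the retrieved model is then a single transform call:
+
+    ```python
+    # Run inference with the retrieved model; the output column was set to "result" above.
+    result = retrievedModel.transform(df)
+    result.select("timestamp", "result.isAnomaly", "result.severity").show(10)
+    ```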
+
+## Learn more
+
+### About Anomaly Detector
+
+* Learn about [what is Multivariate Anomaly Detector](../overview-multivariate.md).
+* SynapseML documentation with [Multivariate Anomaly Detector feature](https://microsoft.github.io/SynapseML/docs/documentation/estimators/estimators_cognitive/#fitmultivariateanomaly).
+* Recipe: [Cognitive Services - Multivariate Anomaly Detector](https://microsoft.github.io/SynapseML/docs/next/features/cognitive_services/CognitiveServices).
+* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u).
+
+### About Synapse
+
+* Quick start: [Configure prerequisites for using Cognitive Services in Azure Synapse Analytics](/azure/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse#create-a-key-vault-and-configure-secrets-and-access).
+* Visit the [SynapseML website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples.
+* Learn more about [Synapse Analytics](/azure/synapse-analytics/).
+* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
* Speech SDK 1.23.0 and Speech CLI 1.23.0 were released in July 2022. See details below. * Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
-* TTS Service March 2022, public preview of Cheerful and Sad styles with fr-FR-DeniseNeural.
+* TTS Service July 2022: new voices in public preview and a new viseme blend shapes feature were released. See details below.
## Release notes
cognitive-services Encryption Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/encryption-data-at-rest.md
Previously updated : 05/24/2022 Last updated : 08/08/2022 #Customer intent: As a user of the Language service, I want to learn how encryption at rest works.
You must use Azure Key Vault to store your customer-managed keys. You can either
### Customer-managed keys for Language services
-To request the ability to use customer-managed keys, fill out and submit theΓÇ»[Language Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with Language services, you'll need to create a new Language resource from the Azure portal.
+To request the ability to use customer-managed keys, fill out and submit the [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with Language services, you'll need to create a new Language resource from the Azure portal.
### Enable customer-managed keys
To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more
## Next steps
-* [Language Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
+* [Language service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk)
* [Learn more about Azure Key Vault](../../../key-vault/general/overview.md)
cognitive-services Migrate Language Service Latest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/migrate-language-service-latest.md
Title: Migrate to the latest version of Azure Cognitive Service for Language
-description: Learn how to move your Text Analytics applications to use the latest version of the Language Service.
+description: Learn how to move your Text Analytics applications to use the latest version of the Language service.
Previously updated : 07/13/2022 Last updated : 08/08/2022
cognitive-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/use-asynchronously.md
Title: "How to: Use Language Service features asynchronously"
+ Title: "How to: Use Language service features asynchronously"
-description: Learn how to send Language Service API requests asynchronously.
+description: Learn how to send Language service API requests asynchronously.
Previously updated : 08/02/2022 Last updated : 08/08/2022
-# How to use Language Service features asynchronously
+# How to use Language service features asynchronously
The Language service enables you to send API requests asynchronously, using either the REST API or client library. You can also include multiple different Language service features in your request, to be performed on your data at the same time.
To submit an asynchronous job, review the [reference documentation](/rest/api/la
1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object. 1. You can optionally: 1. Choose a specific [version of the model](model-lifecycle.md) used on your data.
- 1. Include additional Language Service features in the `tasks` object, to be performed on your data at the same time.
+    1. Include additional Language service features in the `tasks` object, to be performed on your data at the same time.
Once you've created the JSON body for your request, add your key to the `Ocp-Apim-Subscription-Key` header. Then send your API request to job creation endpoint. For example:
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/train-model.md
Previously updated : 05/05/2022 Last updated : 08/08/2022
It is recommended to make sure that all your classes are adequately represented
Custom text classification supports two methods for data splitting:
-* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The recommended percentage split is 80% for training and 20% for testing.
+* **Automatically splitting the testing set from training data**: The system will split your labeled data between the training and testing sets, according to the percentages you choose. The system will attempt to have a representation of all classes in your training set. The recommended percentage split is 80% for training and 20% for testing.
> [!NOTE] > If you choose the **Automatically splitting the testing set from training data** option, only the data assigned to training set will be split according to the percentages provided.
cognitive-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/azure-resources.md
description: Question answering uses several Azure sources, each with a differen
Previously updated : 10/10/2021 Last updated : 08/08/2022
Use these keys when making requests to the service through APIs.
|Name|Location|Purpose| |--|--|--|
-|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language Service APIs). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new resource.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys and Endpoint** page.|
+|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language service APIs. These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new resource.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys and Endpoint** page.|
|Azure Cognitive Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure cognitive search service deployed in the user's Azure subscription. When you associate an Azure Cognitive Search resource with the custom question answering feature, the admin key is automatically passed to question answering. <br><br>You can find these keys on the **Azure Cognitive Search** resource on the **Keys** page.| ### Find authoring keys in the Azure portal
cognitive-services Migrate Qnamaker To Question Answering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md
Title: Migrate from QnA Maker to Question Answering
description: Details on features, requirements, and examples for migrating from QnA Maker to Question Answering ++
+ms.
Previously updated : 6/9/2022 Last updated : 08/08/2022 # Migrate from QnA Maker to Question Answering
When you are looking at migrating to Question Answering, please consider the fol
- Knowledge base/project content or size has no implications on pricing -- ΓÇ£Text RecordsΓÇ¥ in Question Answering features refer to the query submitted by the user to the runtime, and it is a concept common to all features within Language Service
+- "Text Records" in Question Answering features refer to the query submitted by the user to the runtime, and it is a concept common to all features within Language service
Here you can find the pricing details for [Question Answering](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/) and [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/).
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
Title: Capabilities for Teams guests
+ Title: Capabilities for Teams external user
-description: Calling capabilities of Azure Communication Services support for Teams guests
+description: Calling capabilities of Azure Communication Services support for Teams external users
Last updated 7/9/2022
-# Capabilities for Teams guests
+# Capabilities for Teams external users
-In this article, you will learn which capabilities are supported for Teams guests using Azure Communication Services SDKs.
+In this article, you will learn which capabilities are supported for Teams external users using Azure Communication Services SDKs.
## Client capabilities The following table shows supported client-side capabilities available in Azure Communication Services SDKs:
The following table shows supported client-side capabilities available in Azure
| Apply background effects | ❌ | | See together mode video stream | ❌ |
-When Teams guest leaves the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
+When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
## Server capabilities
The following table shows supported Teams capabilities:
## Next steps -- [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)-- [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
+- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).
communication-services Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/limitations.md
Title: Known issues and limitations
-description: Known issues and limitations of Azure Communication Services support for Teams guests
+description: Known issues and limitations of Azure Communication Services support for Teams external users
Last updated 7/9/2022
## Next steps -- [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)-- [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
+- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md
Title: Communication as Teams guest
+ Title: Communication as Teams external user
-description: Introduction to Azure Communication Services support for Teams guests
+description: Introduction to Azure Communication Services support for Teams external users
Last updated 7/9/2022
-# Communication as Teams guest
+# Communication as Teams external user
-You can use Azure Communication Services to build applications that enable external users to join and participate in Teams meetings as Teams anonymous users (Guests). Customers can join Teams meetings from within your applications or websites. The main benefits are:
+You can use Azure Communication Services to build applications that enable external users to join and participate in Teams meetings as Teams anonymous users. Customers can join Teams meetings from within your applications or websites. The main benefits are:
- No requirement to download Teams desktop, mobile or web clients for external users - External users don't lose context by switching to another application - Browser support on mobile devices
Developers can experiment with the capabilities on multiple levels to evaluate,
### Low code or no-code
-You can create an identity and access token for Teams guests on Azure portal without a single line of code. [Here are steps how to do it](../../../quickstarts/identity/quick-create-identity.md).
+You can create an identity and access token for Teams external users in the Azure portal without a single line of code. [Here's how to do it](../../../quickstarts/identity/quick-create-identity.md).
With a valid identity, access token, and Teams meeting URL, you can use [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/story/composites-call-with-chat-jointeamsmeeting--join-teams-meeting) to join Teams meeting without any code. ### Single-click deployment
-The [Azure Communication Services Calling Hero Sample](../../../samples/calling-hero-sample.md) demonstrates how developers can use Azure Communication Services Calling Web SDK to join a Teams meeting from a web application as a Teams guest. You can experiment with the capability with single-click deployment to Azure.
+The [Azure Communication Services Calling Hero Sample](../../../samples/calling-hero-sample.md) demonstrates how developers can use Azure Communication Services Calling Web SDK to join a Teams meeting from a web application as a Teams external user. You can experiment with the capability with single-click deployment to Azure.
+
+The [Azure Communication Services Authentication Hero Sample](../../../samples/trusted-auth-sample.md) demonstrates how developers can use Azure Communication Services Identity SDK to get access tokens as Teams users. You can clone the GitHub repository and follow a simple guide to set up your service for authentication in Azure.
### Coding
-The data flow for joining Teams meetings is available at the [client and server architecture page](../../client-and-server-architecture.md). When implementing the experience, you must implement client logic for real-time communication and server logic for authentication. The following articles will guide you in implementing the communication for Teams guests.
+The data flow for joining Teams meetings is available at the [client and server architecture page](../../client-and-server-architecture.md). When implementing the experience, you must implement client logic for real-time communication and server logic for authentication. The following articles will guide you in implementing the communication for Teams external users.
High-level coding articles:
-1. [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)
-1. [Stateful Client (Meeting)](https://azure.github.io/communication-ui-library/?path=/story/composites-meeting-basicexample--basic-example)
+- [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md)
+- [Stateful Client (Meeting)](https://azure.github.io/communication-ui-library/?path=/story/composites-meeting-basicexample--basic-example)
Low-level coding articles:
-1. [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
-1. [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
-1. [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
+- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md)
## Supported use cases
-The following table show supported use cases for Teams guest with Azure Communication
+The following table shows supported use cases for Teams external users with Azure Communication
| Scenario | Supported | | | |
The following table show supported use cases for Teams guest with Azure Communic
| Join Teams 1:1 or group call | ❌ | | Join Teams 1:1 or group chat | ❌ | -- [1] Teams guests can join a channel Teams meeting with audio and video, but they won't be able to send or receive any chat messages-- [2] Teams guest users may join a Teams webinar. However, the presenter and attendee roles aren't honored for Teams guests. Thus Teams guests on Azure Communication Services SDKs could perform actions not intended for attendees, such as screen sharing, turning their camera on/off, or unmuting themselves, if your application provides UX for those actions.
+- [1] Teams external users can join a channel Teams meeting with audio and video, but they won't be able to send or receive any chat messages
+- [2] Teams external users may join a Teams webinar. However, the presenter and attendee roles aren't honored for Teams external users. Thus Teams external users on Azure Communication Services SDKs could perform actions not intended for attendees, such as screen sharing, turning their camera on/off, or unmuting themselves, if your application provides UX for those actions.
## Pricing Any licensed Teams users can schedule Teams meetings and share the invite with external users. External users can join the Teams meeting experience via existing Teams desktop, mobile, and web clients without additional charge. External users joining via Azure Communication Services SDKs will pay
Any licensed Teams users can schedule Teams meetings and share the invite with e
## Next steps -- [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)-- [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
+- [Authenticate as Teams external user](../../../quickstarts/identity/access-token-teams-external-users.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/privacy.md
Title: User privacy for Teams guests
+ Title: User privacy for Teams external users
-description: User privacy requirements in Azure Communication Services support for Teams guests
+description: User privacy requirements in Azure Communication Services support for Teams external users
Last updated 7/9/2022
Microsoft will indicate to you via the Azure Communication Services API that rec
## Chat storage
-All chat messages sent by Teams users or Communication Services users during a Teams meeting are stored in the geographic region associated with the Microsoft 365 organization hosting the meeting. For more information, review the article [Location of data in Microsoft Teams](/microsoftteams/location-of-data-in-teams). For each Teams guest joining via Azure Communication Services SDKs in the meetings, there is a copy of the most recently sent message stored in the geographic region associated with the Communication Services resource used to develop the Communication Services application. Review the article [Region availability and data residency](./privacy.md).
+All chat messages sent by Teams users or Communication Services users during a Teams meeting are stored in the geographic region associated with the Microsoft 365 organization hosting the meeting. For more information, review the article [Location of data in Microsoft Teams](/microsoftteams/location-of-data-in-teams). For each Teams external user joining via Azure Communication Services SDKs in the meetings, there is a copy of the most recently sent message stored in the geographic region associated with the Communication Services resource used to develop the Communication Services application. Review the article [Region availability and data residency](./privacy.md).
Azure Communication Services will delete all copies of the most recently sent message per Teams retention policies. If no retention policy is defined, Azure Communication Services deletes data after 30 days. For more information about Teams retention policies, review the article [Learn about retention for Microsoft Teams](/microsoft-365/compliance/retention-policies-teams). ## Next steps -- [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)-- [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
+- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
Title: Teams controls for Teams guest
+ Title: Teams controls for Teams external user
-description: Teams administrator controls to impact Azure Communication Services support for Teams guests
+description: Teams administrator controls to impact Azure Communication Services support for Teams external users
Last updated 7/9/2022
# Teams administrator controls
-Teams administrators have the following policies to control the experience for Teams guests in Teams meetings.
+Teams administrators have the following policies to control the experience for Teams external users in Teams meetings.
|Setting name|Policy scope|Description| Supported | | - | -| -| |
-| [Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | organization-wide | If disabled, Teams guests can't join Teams meeting | ✔️ |
-| [Let anonymous people join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams guests can't join Teams meeting | ✔️ |
-| [Let anonymous people start a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings)| per-organizer | If enabled, Teams guests can start a Teams meeting without Teams user | ✔️ |
-| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | per-organizer | If set to "Everyone", Teams guests can bypass lobby. Otherwise, Teams guests have to wait in the lobby until an authenticated user admits them.| ✔️ |
+| [Anonymous users can join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | organization-wide | If disabled, Teams external users can't join Teams meeting | ✔️ |
+| [Let anonymous people join a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) | per-organizer | If disabled, Teams external users can't join Teams meeting | ✔️ |
+| [Let anonymous people start a meeting](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings)| per-organizer | If enabled, Teams external users can start a Teams meeting without Teams user | ✔️ |
+| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | per-organizer | If set to "Everyone", Teams external users can bypass lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ |
| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | per-user | Controls who in the Teams meeting can share screen | ❌ |
-| [Blocked anonymous join client types](/powershell/module/skype/set-csteamsmeetingpolicy) | per-organizer | If property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", the Teams guests via Azure Communication Services can join Teams meeting | ✔️ |
+| [Blocked anonymous join client types](/powershell/module/skype/set-csteamsmeetingpolicy) | per-organizer | If property "BlockedAnonymousJoinClientTypes" is set to "Teams" or "Null", the Teams external users via Azure Communication Services can join Teams meeting | ✔️ |
Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users. ## Next steps -- [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)-- [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
+- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).
communication-services Teams Client Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-client-experience.md
Title: Teams client experience for Teams guest
+ Title: Teams client experience for Teams external user
-description: Teams client experience of Azure Communication Services support for Teams guests
+description: Teams client experience of Azure Communication Services support for Teams external users
Last updated 7/9/2022
# Experience for users in Teams client
-Teams guest joining Teams meeting with Azure Communication Services SDKs will be represented in Teams client as any other Teams anonymous user. Teams guests will be marked as "external" in the participant's lists as Teams clients. As Teams anonymous users, their capabilities in the Teams meeting will be limited regardless of the assigned Teams meeting role.
+Teams external users joining a Teams meeting with Azure Communication Services SDKs are represented in the Teams client like any other Teams anonymous user. Teams external users are marked as "external" in the participant lists of Teams clients. As Teams anonymous users, their capabilities in the Teams meeting are limited regardless of the assigned Teams meeting role.
## Next steps -- [Authenticate as Teams guest](../../../quickstarts/access-tokens.md)-- [Join Teams meeting audio and video as Teams guest](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as Teams guest](../../../quickstarts/chat/meeting-interop.md)
+- [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
+- [Join Teams meeting audio and video as Teams external user](../../../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md)
- [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).
communication-services Azure Ad Api Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/azure-ad-api-permissions.md
+
+ Title: Azure AD API permissions for communication as Teams user
+
+description: This article describes Azure AD API permissions for communication as a Teams user with Azure Communication Services.
+++++ Last updated : 08/01/2022++++
+# Azure AD permissions for communication as Teams user
+In this article, you will learn about Azure AD permissions available for communication as a Teams user in Azure Communication Services.
+
+## Delegated permissions
+
+| Permission | Display string | Description | Admin consent required | Microsoft account supported |
+|: |: |: |: |: |
+| _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_ | Manage calls in Teams | Start, join, forward, transfer, or leave Teams calls and update call properties. | No | No |
+| _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ | Manage chats in Teams | Create, read, update, and delete 1:1 or group chat threads on behalf of the signed-in user. Read, send, update, and delete messages in chat threads on behalf of the signed-in user. | No | No |
+
+## Application permissions
+
+None.
+
+## Roles for granting consent on behalf of a company
+
+- Global admin
+- Application admin
+- Cloud application admin
+
+Find more details in the [Azure Active Directory documentation](/azure/active-directory/roles/permissions-reference).
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md
There are two categories of Communication Service data:
### Identities
-Azure Communication Services maintains a directory of identities, use the [DeleteIdentity](/rest/api/communication/communicationidentity/communication-identity/delete) API to remove them. Deleting an identity will revoke all associated access tokens and delete their chat messages. For more information on how to remove an identity [see this page](../quickstarts/access-tokens.md).
+Azure Communication Services maintains a directory of identities. Use the [DeleteIdentity](/rest/api/communication/communication-identity/delete?tabs=HTTP) API to remove them. Deleting an identity will revoke all associated access tokens and delete their chat messages. For more information on how to remove an identity, [see this page](../quickstarts/access-tokens.md).
- DeleteIdentity
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
Development of Calling and Chat applications can be accelerated by the [Azure C
|--|-||-| | Azure Resource Manager | [REST](/rest/api/communication/communicationservice)| Service| Provision and manage Communication Services resources| | Common | N/A | Client & Service | Provides base types for other SDKs |
-| Identity | [REST](/rest/api/communication/communicationidentity/communication-identity) | Service| Manage users, access tokens|
+| Identity | [REST](/rest/api/communication/communication-identity) | Service| Manage users, access tokens|
| Phone numbers| [REST](/rest/api/communication/phonenumbers) | Service| Acquire and manage phone numbers | | SMS | [REST](/rest/api/communication/sms) | Service| Send and receive SMS messages| | Chat | [REST](/rest/api/communication/) with proprietary signaling | Client & Service | Add real-time text chat to your applications |
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md
[!INCLUDE [Public Preview](../includes/public-preview-include-document.md)]
-You can use Azure Communication Services and Graph API to integrate communication as Teams user into your products to communicate with other people in and outside your organization. With Azure Communication Services supporting Teams identities and Graph API, you can customize a voice, video, chat, and screen-sharing experience for Teams users.
+You can use Azure Communication Services and Graph API to integrate communication as Teams users into your products. Teams users can communicate with other people in and outside their organization. The benefits for enterprises are:
+- No requirement to download Teams desktop, mobile or web clients for Teams users
+- Teams users don't lose context by switching between applications for day-to-day work and Teams client for communication
+- Teams is a single source for chat messages and call history within the organization
+- Teams policies control communication across applications
-You can use the Azure Communication Services Identity SDK to exchange Azure Active Directory (Azure AD) access tokens of Teams users for Communication Identity access tokens. The diagrams in the next sections demonstrate multitenant use cases, where fictional company Fabrikam is the customer of fictional company Contoso. Contoso builds multi-tenant SaaS product that Fabrikam's administrator purchases for its employees.
+The benefits of the API surface for developers are:
+- Browser support on mobile devices
+- User interface (UI) customization
+- No additional Teams licenses are required
+- Tenants bring policies and configurations inside your app without extra work
-## Calling
-Voice, video, and screen-sharing capabilities are provided via [Azure Communication Services Calling SDKs](./interop/teams-user-calling.md). The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with Azure Communication Services support Teams identities. You can learn more details about the [authentication](./interop/custom-teams-endpoint-authentication-overview.md) and [used packages](../quickstarts/manage-teams-identity.md).
+You can also use Graph API to implement [chat](/graph/api/resources/chat) and [calling](/graph/api/resources/call) capabilities on the server side. This article concentrates on the client experience.
-![Diagram of the process to integrate the calling capabilities into your product with Azure Communication Services.](./media/teams-identities/teams-identity-calling-overview.svg)
+## Use cases
+Here are real-world examples of applications:
+- An independent software vendor (ISV) builds a customer service web application for receptionists to route calls within an organization. Receptionists in multiple organizations use this product, tailored for their needs, to route calls to subject matter experts (SMEs) within the organization.
+- A manufacturer of augmented reality headsets adds video calling capability into the product to enable remote assistance, with subject matter experts joining via Teams clients. The Teams user sees an incoming call from a frontline worker who shares their augmented reality view, and the Teams user provides guidance directly from the Teams client.
+- An independent software vendor (ISV) builds an application for customer outreach via multiple channels. The ISV adds Teams chat and calling capabilities into their product to enable communication with enterprise users directly from their application.
+- A bank decides to replace its limited Teams application for wealth management with direct integration of calling as a Teams user into its existing wealth management application. The application now integrates calling capability as part of the process instead of incorporating processes inside the Teams client.
-## Chat
+## Prototyping
+Developers can experiment with the capabilities on multiple levels to evaluate, learn and customize the product. Low/no-code options are currently in development.
-Optionally, you can also use Graph API to integrate chat capabilities into your product. For more information about the Graph API, see the [chat resource type](/graph/api/channel-post-messages) documentation.
+### Single-click deployment
-![Diagram of the process to integrate the chat capabilities into your product with Graph API.](./media/teams-identities/teams-identity-chat-overview.png)
+The [Azure Communication Services Authentication Hero Sample](../samples/trusted-auth-sample.md) demonstrates how developers can use Azure Communication Services Identity SDK to get access tokens as Teams users. You can clone the GitHub repository and follow a simple guide to set up your service for authentication in Azure.
+
+The calling and chat hero sample for Teams users is currently in development.
-## Azure Communication Services permissions
+### Coding
-### Delegated permissions
+Communication as a Teams user uses Graph API for chat and Azure Communication Services for calling. In each case, you need to authenticate the Teams user and then implement the logic for communication.
-| Permission | Display string | Description | Admin consent required | Microsoft account supported |
-|: |: |: |: |: |
-| _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_ | Manage calls in Teams | Start, join, forward, transfer, or leave Teams calls and update call properties. | No | No |
-| _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ | Manage chats in Teams | Create, read, update, and delete 1:1 or group chat threads on behalf of the signed-in user. Read, send, update, and delete messages in chat threads on behalf of the signed-in user. | No | No |
+The diagrams in the next sections demonstrate multi-tenant use cases where the fictional company Fabrikam is a customer of the fictional company Contoso. Contoso builds a multi-tenant SaaS product that Fabrikam's administrator purchases for its employees.
-### Application permissions
+#### Calling
-None.
+Voice, video, and screen-sharing capabilities are provided via [Azure Communication Services Calling SDKs](./interop/teams-user-calling.md). The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with Azure Communication Services support for Teams identities.
-### Roles for granting consent on behalf of a company
+You can use the Azure Communication Services Identity SDK to exchange Azure Active Directory (Azure AD) access tokens of Teams users for Communication Identity access tokens.
+
+![Diagram of the process to integrate the calling capabilities into your product with Azure Communication Services.](./media/teams-identities/teams-identity-calling-overview.svg)
-- Global admin-- Application admin-- Cloud application admin
+The following articles will guide you in implementing the calling for Teams users:
+- [Authenticate as Teams user](../quickstarts/manage-teams-identity.md)
+- [Add video calling as Teams user to your client app](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
+- [How-to use calling SDK as Teams user](../how-tos/cte-calling-sdk/manage-calls.md)
+
+#### Chat
+
+Use Graph API to integrate 1:1 chat, group chat, meeting chat, and channel capabilities into your product.
+
+![Diagram of the process to integrate the chat capabilities into your product with Graph API.](./media/teams-identities/teams-identity-chat-overview.png)
-Find more details in [Azure Active Directory documentation](../../active-directory/roles/permissions-reference.md).
+The following articles will guide you in implementing the chat for Teams users:
+- [Authenticate as Teams user](/graph/auth-v2-user)
+- [Send message as Teams user](/graph/api/chat-post-messages)
+- [Receive message as Teams user on webhook](/graph/teams-changenotifications-chatMessage) and then push message to the client with, for example, [SignalR](/azure/azure-signalr/signalr-overview).
+- [Poll messages for Teams user](/graph/api/chat-list-messages)
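+
+As a rough illustration of the Graph call linked above for sending a message (the chat ID and token are placeholders; this sketch uses the `requests` package directly rather than the Graph SDK):
+
+```python
+import requests
+
+# Placeholders: a delegated Graph access token for the Teams user and the target chat ID.
+graph_token = "<graph_access_token>"
+chat_id = "<chat_id>"
+
+# POST /chats/{chat-id}/messages sends a chat message on behalf of the signed-in Teams user.
+response = requests.post(
+    f"https://graph.microsoft.com/v1.0/chats/{chat_id}/messages",
+    headers={"Authorization": f"Bearer {graph_token}"},
+    json={"body": {"content": "Hello from my custom application"}},
+)
+print(response.status_code, response.json())
+```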
+
+## Supported use cases
+
+The following table shows supported use cases for Teams users with Azure Communication Services and Graph API:
+
+| Scenario | Supported |
+| | |
+| Make a voice-over-IP (VoIP) call to Teams user | ✔️ |
+| Make a phone (PSTN) call | ✔️ |
+| Accept incoming voice-over-IP (VoIP) call for Teams user | ✔️ |
+| Accept incoming phone (PSTN) call for Teams user | ✔️ |
+| Join Teams meeting | ✔️ |
+| Join channel Teams meeting | ✔️ |
+| Join Teams webinar [1] | ✔️ |
+| [Join Teams live events](/microsoftteams/teams-live-events/what-are-teams-live-events).| ❌ |
+| Join [Teams meeting scheduled in an application for personal use](https://www.microsoft.com/microsoft-teams/teams-for-home) | ❌ |
+| Join Teams 1:1 or group call | ❌ |
+| Send a message to 1:1 chat, group chat or Teams meeting chat| ✔️ |
+| Get messages from 1:1 chat, group chat or Teams meeting chat | ✔️ |
+
+- [1] Teams users may join a Teams webinar. However, the presenter and attendee roles aren't honored for Teams users. Thus Teams users on Azure Communication Services SDKs could perform actions not intended for attendees, such as screen sharing, turning their camera on/off, or unmuting themselves if your application provides UX for those actions.
+
+## Pricing
+Teams users can join the Teams meeting experience, manage calls, and manage chats via existing Teams desktop, mobile, and web clients or Graph API without additional charge. Teams users using Azure Communication Services SDKs will pay
+[standard Azure Communication Services consumption](https://azure.microsoft.com/pricing/details/communication-services/) for audio and video. There's no additional fee for the interoperability capability itself. You can find more details on [Teams interoperability pricing here](./pricing/teams-interop-pricing.md).
## Next steps
Find more details in [Azure Active Directory documentation](../../active-directo
Find more details in the following articles: - [Teams interoperability](./teams-interop.md) - [Issue a Teams access token](../quickstarts/manage-teams-identity.md)-- [Start a call with Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
+- [Start a call to Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
# Teams interoperability > [!IMPORTANT]
-> Bring your own identity (BYOI) interoperability for Teams meetings is now generally available to all Communication Services applications and Teams organizations.
+> Teams external user interoperability for Teams meetings is now generally available to all Communication Services applications and Teams organizations.
>
-> Interoperability with Communication Services SDK with Teams identities is in public preview and available to Web-based applications.
+> Support for Teams users in Azure Communication Services SDK is in public preview and available to Web-based applications.
> > Preview APIs and SDKs are provided without a service-level agreement and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure Communication Services can be used to build custom applications and experi
Azure Communication Services supports two types of Teams interoperability depending on the identity of the user: -- **[Guest/Bring your own identity (BYOI)](#guestbring-your-own-identity).** You control user authentication and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. This model allows you to build custom applications for non-Teams users to connect and communicate with Teams users.-- **[Teams identity](#teams-identity).** User authentication is controlled by Azure Active Directory and users of your custom application must have Teams licenses. This model allows you to build custom applications for Teams users to enable specialized workflows or experiences that are not possible with the existing Teams clients.
+- **[External user](#external-user).** You control user authentication, and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. This model allows you to build custom applications for non-Teams users to connect and communicate with Teams users.
+- **[Teams user](#teams-user).** Azure Active Directory controls user authentication, and users of your custom application must have Teams licenses. This model allows you to build custom applications for Teams users to enable specialized workflows or experiences that are impossible with the existing Teams clients.
Applications can implement both authentication models and leave the choice of authentication up to the user. The following table compares two models:
-|Feature|Bring your own identity| Teams identity|
+|Feature|External user| Teams user|
|||| |Target user base|Customers|Enterprise| |Identity provider|Any|Azure Active Directory|
Applications can implement both authentication models and leave the choice of au
\* Server logic issuing access tokens can perform any custom authentication and authorization of the request.
-## Guest/Bring your own identity
+## External user
-The bring your own identity (BYOI) authentication model allows you to build custom applications for non-Teams users to connect and communicate with Teams users. You control user authentication and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. The first scenario that has been enabled allows users of your application to join Microsoft Teams meetings as external accounts, similar to [anonymous users that join meetings](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) using the Teams web application. This is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application) into a meeting experience. In the future, we will be enabling additional scenarios including direct calling and chat which will allow your application to initiate calls and chats with Teams users outside the context of a Teams meeting.
+The bring your own identity (BYOI) authentication model allows you to build custom applications for external users to connect and communicate with Teams users. You control user authentication, and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. The first scenario that has been enabled allows users of your application to join Microsoft Teams meetings as external accounts, similar to [anonymous users that join meetings](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) using the Teams web application. This is ideal for business-to-consumer applications that combine employees (familiar with Teams) and external users (using a custom application) into a meeting experience. In the future, we will enable additional scenarios, including direct calling and chat, which will allow your application to initiate calls and chats with Teams users outside the context of a Teams meeting.
For more information, see [Join a Teams meeting](join-teams-meeting.md). It is currently not possible for a Teams user to join a call that was initiated using the Azure Communication Services Calling SDK.
-## Teams identity
+## Teams user
-Developers can use [Communication Services Calling SDK with Teams identity](./interop/teams-user-calling.md) to build custom applications for Teams users. Custom applications can enable specialized workflows for Teams users such as management of incoming and outgoing PSTN calls or bring Teams calling experience into devices that are not supported with the standard Teams client. Teams identities are authenticated by Azure Active Directory, and all attributes and details about the user are bound to their Azure Active Directory account.
+Developers can use [Communication Services Calling SDK with Teams identity](./interop/teams-user-calling.md) to build custom applications for Teams users. Custom applications can enable specialized workflows for Teams users, such as managing incoming and outgoing phone calls or bringing Teams calling experience into devices not supported with the standard Teams client. Azure Active Directory authenticates Teams users, and all attributes and details about the user are bound to their Azure Active Directory account.
-When a Communication Services endpoint connects to a Teams meeting or Teams call using a Teams identity, the endpoint is treated like a Teams user with a Teams client and the experience is driven by policies assigned to users within and outside of the organization. Teams users can join Teams meetings, place calls to other Teams users, receive calls from phone numbers, transfer an ongoing call to the Teams call queue or share screen.
+When a Communication Services endpoint connects to a Teams meeting or Teams call using a Teams identity, the endpoint is treated like a Teams user with a Teams client. The experience is driven by policies assigned to users within and outside of the organization. Teams users can join Teams meetings, place calls to other Teams users, receive calls from phone numbers, transfer an ongoing call to the Teams call queue, or share their screen.
-Teams users are authenticated against Azure Active Directory in the client application. Authentication tokens received from Azure Active Directory are exchanged for Communication Services access tokens via the Communication Services Identity SDK. This creates a connection between Azure Active Directory and Communication Services. You are encouraged to implement an exchange of tokens in your backend services as exchange requests are signed by credentials for Azure Communication Services. In your backend services, you can require any additional authentication.
+Teams users authenticate against Azure Active Directory in the client application. Developers then exchange authentication tokens from Azure Active Directory for access tokens via the Communication Services Identity SDK. This exchange creates a connection between Azure Active Directory and Communication Services. You're encouraged to implement the token exchange in your backend services, because exchange requests are signed with credentials for Azure Communication Services. In your backend services, you can require any additional authentication.
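As a rough illustration of that exchange, here's a minimal sketch using the Python Identity SDK (`azure-communication-identity`); the Azure AD token acquisition is omitted, and the exact `get_token_for_teams_user` signature can vary by SDK version:

```python
from azure.communication.identity import CommunicationIdentityClient

# Runs in your trusted backend: it holds the Communication Services connection string.
identity_client = CommunicationIdentityClient.from_connection_string(
    "<communication-services-connection-string>"
)

def exchange_for_acs_token(aad_token: str, client_id: str, user_object_id: str) -> str:
    # Exchange the Azure AD token issued to the Teams user for a
    # Communication Services access token.
    token_response = identity_client.get_token_for_teams_user(
        aad_token, client_id, user_object_id
    )
    return token_response.token  # return to the client for use with the Calling SDK
```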
## Teams meeting and calling experiences
There are several ways that users can join a Teams meeting:
- Via Teams clients as authenticated **Teams users**. This includes the desktop, mobile, and web Teams clients. - Via Teams clients as unauthenticated **Anonymous users**. -- Via custom Communication Services applications as **Guest/BYOI users** using the bring your own identity authentication model.
+- Via custom Communication Services applications as **External users** using the bring your own identity authentication model.
- Via custom Communication Services applications as **Teams users** using the Teams identity authentication model. ![Overview of multiple interoperability scenarios within Azure Communication Services](./media/teams-identities/teams-interop-overview-v2.png)
Using the Teams identity authentication model, a Communication Services applicat
![Overview of interoperability scenarios within Azure Communication Services](./media/teams-identities/teams-interop-microsoft365-identity-interop-overview-v2.png) ## Privacy
-Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
+Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when recording or transcription is enabled in a Teams call or meeting.
-Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact, in real-time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
+Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced, and you must communicate this fact, in real-time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation.
## Pricing
-All usage of Azure Communication Service APIs and SDKs increments [Azure Communication Service billing meters](https://azure.microsoft.com/pricing/details/communication-services/). Interactions with Microsoft Teams, such as joining a meeting or initiating a phone call using a Teams allocated number, will increment these meters but there is no additional fee for the Teams interoperability capability itself, and there is no pricing distinction between the Guest/BYOI and Microsoft 365 authentication options.
+All usage of Azure Communication Service APIs and SDKs increments [Azure Communication Service billing meters](https://azure.microsoft.com/pricing/details/communication-services/). Interactions with Microsoft Teams, such as joining a meeting or initiating a phone call using a Teams allocated number, will increment these meters. However, there is no additional fee for the Teams interoperability capability itself, and there is no pricing distinction between the BYOI and Microsoft 365 authentication options.
If a user of your Azure application spends 10 minutes in a meeting with a Microsoft Teams user, those two users combined consume 20 calling minutes. The 10 minutes exercised through the custom application and using Azure APIs and SDKs will be billed to your resource. However, the 10 minutes consumed by the user in the native Teams application is covered by the applicable Teams license and is not metered by Azure.
Azure Communication Services interoperability isn't compatible with Teams deploy
## Next steps
-Find more details for Guest/BYOI interoperability:
-- [Get access tokens for Guest/BYOI](../quickstarts/access-tokens.md)-- [Join Teams meeting call as a Guest/BYOI](../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Join Teams meeting chat as a Guest/BYOI](../quickstarts/chat/meeting-interop.md)
+Find more details for external user interoperability:
+- [Get access tokens for external users](../quickstarts/access-tokens.md)
+- [Join Teams meeting call as an external user](../quickstarts/voice-video-calling/get-started-teams-interop.md)
+- [Join Teams meeting chat as an external user](../quickstarts/chat/meeting-interop.md)
Find more details for Teams user interoperability: - [Get access tokens for Teams users](../quickstarts/manage-teams-identity.md)
communication-services Access Token Teams External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/access-token-teams-external-users.md
+
+ Title: Quickstart - Create and manage access tokens for Teams external users
+
+description: Learn how to manage identities and access tokens for Teams external users by using the Azure Communication Services Identity SDK.
++++ Last updated : 08/05/2022+++
+zone_pivot_groups: acs-azcli-js-csharp-java-python
+++
+# Quickstart: Create and manage access tokens for Teams external users
+
+Teams external users are authenticated as Azure Communication Services users in Teams. With an access token for Azure Communication Services users, you can use the chat and calling SDKs to join Teams meeting audio, video, and chat as a Teams external user. This quickstart is identical to [identity and access token management of Azure Communication Services users](../access-tokens.md).
+
+In this quickstart, you'll learn how to use the Azure Communication Services SDKs to create identities and manage your access tokens. For production use cases, we recommend that you generate access tokens on a [server-side service](../../concepts/client-and-server-architecture.md).
++++++
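As a minimal sketch of the server-side flow described above, assuming the Python Identity SDK (`azure-communication-identity`) and an illustrative connection string placeholder:

```python
from azure.communication.identity import CommunicationIdentityClient

# Create the client from your Communication Services resource connection string.
client = CommunicationIdentityClient.from_connection_string(
    "<communication-services-connection-string>"
)

# Create an identity and issue a token scoped for calling and chat, which a
# Teams external user needs to join Teams meeting audio, video, and chat.
user, token_response = client.create_user_and_token(scopes=["voip", "chat"])

print("Created identity:", user)
print("Token expires on:", token_response.expires_on)
```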
+## Use identity for monitoring and metrics
+
+The user ID is a primary key for logs and metrics collected through Azure Monitor. To view all of a user's calls, for example, you can set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a single user.
+
+Learn more about [authentication concepts](../../concepts/authentication.md), call diagnostics through [log analytics](../../concepts/analytics/log-analytics.md), and [metrics](../../concepts/metrics.md) that are available to you.
+
+## Clean up resources
+
+Delete the resource or resource group to clean up and remove a Communication Services subscription. Deleting a resource group also deletes any other resources that are associated with it. For more information, see the "Clean up resources" section of [Create and manage Communication Services resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+In this quickstart, you learned how to:
+
+> [!div class="checklist"]
+> * Manage Teams external user identity
+> * Issue access tokens for Teams external users
+> * Use the Communication Services Identity SDK
++
+> [!div class="nextstepaction"]
+> [Add Teams meeting voice to your app](../voice-video-calling/get-started-teams-interop.md)
+
+You might also want to:
+
+ - [Learn about authentication](../../concepts/authentication.md)
+ - [Add Teams meeting chat to your app](../chat/meeting-interop.md)
+ - [Learn about client and server architecture](../../concepts/client-and-server-architecture.md)
+ - [Deploy trusted authentication service hero sample](../../samples/trusted-auth-sample.md)
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
First, you'll need to create a user-assigned identity resource.
Note the `id` property of the new identity.
-1. Run the `az containerapps identity assign` command to assign the identity to the app. The identities parameter is a space separated list.
+1. Run the `az containerapp identity assign` command to assign the identity to the app. The identities parameter is a space-separated list.
```azurecli az containerapp identity assign --resource-group <GROUP_NAME> --name <APP_NAME> \
For more information on the REST endpoint, see [REST endpoint reference](#rest-e
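Once an identity is assigned, application code in the container app can request tokens with it. A minimal sketch, assuming the `azure-identity` Python package; the client ID placeholder and token scope are illustrative:

```python
from azure.identity import ManagedIdentityCredential

# For a user-assigned identity, pass its client ID; omit client_id to use the
# system-assigned identity instead.
credential = ManagedIdentityCredential(client_id="<USER_ASSIGNED_IDENTITY_CLIENT_ID>")

# Request a token for whatever resource the app needs to call.
token = credential.get_token("https://management.azure.com/.default")
print("Token expires at:", token.expires_on)
```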
You can show the system-assigned and user-assigned managed identities using the following Azure CLI command. The output will show the managed identity type, tenant IDs and principal IDs of all managed identities assigned to your container app. ```azurecli
-az containerapps identity show --name <APP_NAME> --resource-group <GROUP_NAME>
+az containerapp identity show --name <APP_NAME> --resource-group <GROUP_NAME>
``` ## Remove a managed identity
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
the link in the **Version** column to view the source on the
## Policy definitions ## Next steps
container-instances How To Reuse Dns Names https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/how-to-reuse-dns-names.md
For [Azure portal](https://portal.azure.com) users, you can set the DNS name reu
![Screenshot of DNS name reuse policy dropdown menu, PNG.](./media/how-to-reuse-dns-names/portal-dns-name-reuse-policy.png)
-For ARM template users, see the [Resource Manager reference](/azure/templates/microsoft.containerinstance/containergroups.md) to see how the dnsNameLabelReusePolicy field fits into the existing schema.
+For ARM template users, see the [Resource Manager reference](/azure/templates/microsoft.containerinstance/containergroups) to see how the dnsNameLabelReusePolicy field fits into the existing schema.
For YAML template users, see the [YAML reference](container-instances-reference-yaml.md) to see how the dnsNameLabelReusePolicy field fits into the existing schema.
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md
Previously updated : 07/12/2021 Last updated : 08/03/2022 # Azure Cosmos DB resource model+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Azure Cosmos DB is a fully managed platform-as-a-service (PaaS). To begin using Azure Cosmos DB, you should initially create an Azure Cosmos account in your Azure resource group in the required subscription, and then databases, containers, items under it. This article describes the Azure Cosmos DB resource model and different entities in the resource model hierarchy.
+Azure Cosmos DB is a fully managed platform-as-a-service (PaaS). To begin using Azure Cosmos DB, create an Azure Cosmos DB account in an Azure resource group in your subscription. You then create databases and containers within the account.
-The Azure Cosmos account is the fundamental unit of global distribution and high availability. Your Azure Cosmos account contains a unique DNS name and you can manage an account by using the Azure portal or the Azure CLI, or by using different language-specific SDKs. For more information, see [how to manage your Azure Cosmos account](how-to-manage-database-account.md). For globally distributing your data and throughput across multiple Azure regions, you can add and remove Azure regions to your account at any time. You can configure your account to have either a single region or multiple write regions. For more information, see [how to add and remove Azure regions to your account](how-to-manage-database-account.md). You can configure the [default consistency](consistency-levels.md) level on an account.
+Your Azure Cosmos DB account has a unique DNS name and can be managed by using the Azure portal, ARM or Bicep templates, Azure PowerShell, the Azure CLI, any of the Azure management SDKs, or the REST API. For more information, see [how to manage your Azure Cosmos DB account](how-to-manage-database-account.md). To replicate your data and throughput across multiple Azure regions, you can add and remove Azure regions for your account at any time. You can configure your account to have either a single region or multiple write regions. For more information, see [how to add and remove Azure regions to your account](how-to-manage-database-account.md). You can configure the [default consistency](consistency-levels.md) level on an account.
## Elements in an Azure Cosmos DB account
-An Azure Cosmos container is the fundamental unit of scalability. You can virtually have an unlimited provisioned throughput (RU/s) and storage on a container. Azure Cosmos DB transparently partitions your container using the logical partition key that you specify in order to elastically scale your provisioned throughput and storage.
-
-Currently, you can create a maximum of 50 Azure Cosmos accounts under an Azure subscription (this is a soft limit that can be increased via support request). A single Azure Cosmos account can virtually manage an unlimited amount of data and provisioned throughput. To manage your data and provisioned throughput, you can create one or more Azure Cosmos databases under your account and within that database, you can create one or more containers. The following image shows the hierarchy of elements in an Azure Cosmos account:
+Currently, you can create a maximum of 50 Azure Cosmos DB accounts under an Azure subscription (this is a soft limit that can be increased via support request). A single Azure Cosmos DB account can virtually manage an unlimited amount of data and provisioned throughput. To manage your data and provisioned throughput, you can create one or more databases within your account, then one or more containers to store your data. The following image shows the hierarchy of elements in an Azure Cosmos DB account:
-
-After you create an account under your Azure subscription, you can manage the data in your account by creating databases, containers, and items.
The following image shows the hierarchy of different entities in an Azure Cosmos DB account: ## Azure Cosmos DB databases
-You can create one or multiple Azure Cosmos databases under your account. A database is analogous to a namespace. A database is the unit of management for a set of Azure Cosmos containers. The following table shows how a database is mapped to various API-specific entities:
+In Azure Cosmos DB, a database is similar to a namespace. A database is simply a group of containers. The following table shows how a database is mapped to various API-specific entities:
-| Azure Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| Azure Cosmos DB entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
| | | | | | | |Azure Cosmos database | Database | Keyspace | Database | Database | NA | > [!NOTE]
-> With Table API accounts, when you create your first table, a default database is automatically created in your Azure Cosmos account.
-
-### Operations on an Azure Cosmos database
-
-You can interact with an Azure Cosmos database with Azure Cosmos APIs as described in the following table:
-
-| Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
-| | | | | | | |
-|Enumerate all databases| Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA |
-|Read database| Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA |
-|Create new database| Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA |
-|Update database| Yes | Yes | Yes (database is mapped to a keyspace) | Yes | NA | NA |
+> With Table API accounts, to maintain compatibility with Azure Storage Tables, tables in Azure Cosmos DB are created at the account level.
## Azure Cosmos DB containers
-An Azure Cosmos container is the unit of scalability both for provisioned throughput and storage. A container is horizontally partitioned and then replicated across multiple regions. The items that you add to the container are automatically grouped into logical partitions, which are distributed across physical partitions, based on the partition key. The throughput on a container is evenly distributed across the physical partitions. To learn more about partitioning and partition keys, see [Partition data](partitioning-overview.md).
+An Azure Cosmos DB container is where data is stored. Unlike most relational databases, which scale up with larger VM sizes, Azure Cosmos DB scales out. Data is stored on one or more servers, called partitions. To increase throughput or storage, more partitions are added. This design provides virtually unlimited throughput and storage for a container. When you create a container, you need to supply a partition key: a property you select from your items whose value is used to route data to the partition where it's written, updated, or deleted. The partition key can also be used in the WHERE clause of queries for efficient data retrieval.
+
+The underlying storage mechanism for data in Azure Cosmos DB is called a physical partition. Physical partitions can have throughput of up to 10,000 RU/s and store up to 50 GB of data. Azure Cosmos DB abstracts them with logical partitions, each of which can store up to 20 GB of data. Logical partitions allow the service to provide greater elasticity and better management of data on the underlying physical partitions as more partitions are added. To learn more about partitioning and partition keys, see [Partition data](partitioning-overview.md).
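For example, a minimal sketch of creating a container with a partition key using the Python SDK (`azure-cosmos`); the endpoint, key, and property names are placeholders:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.create_database_if_not_exists("AppDatabase")

# "/customerId" is the partition key path; each item's value for that property
# routes it to a logical (and underlying physical) partition.
container = database.create_container_if_not_exists(
    id="Orders",
    partition_key=PartitionKey(path="/customerId"),
)
```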
When you create a container, you configure throughput in one of the following modes:
-* **Dedicated provisioned throughput mode**: The throughput provisioned on a container is exclusively reserved for that container and it is backed by the SLAs. To learn more, see [How to provision throughput on a container](how-to-provision-container-throughput.md).
+* **Dedicated throughput**: The throughput provisioned on a container is exclusively reserved for that container. There are two types of dedicated throughput: standard and autoscale. To learn more, see [How to provision throughput on a container](how-to-provision-container-throughput.md).
-* **Shared provisioned throughput mode**: These containers share the provisioned throughput with the other containers in the same database (excluding containers that have been configured with dedicated provisioned throughput). In other words, the provisioned throughput on the database is shared among all the "shared throughput" containers. To learn more, see [How to provision throughput on a database](how-to-provision-database-throughput.md).
+* **Shared throughput**: Throughput is specified at the database level and then shared with up to 25 containers within the database (excluding containers that have been configured with dedicated throughput). This can be a good option when all of the containers in the database have similar request and storage needs, or when you don't need predictable performance on the data. To learn more, see [How to provision throughput on a database](how-to-provision-database-throughput.md).
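As a rough sketch of both modes with the Python SDK (`azure-cosmos`), where the names and RU/s values are illustrative:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")

# Shared throughput: provisioned on the database and shared by its containers.
shared_db = client.create_database_if_not_exists("SharedDb", offer_throughput=400)
shared_db.create_container_if_not_exists(
    id="SmallWorkload", partition_key=PartitionKey(path="/id")
)

# Dedicated (standard) throughput: provisioned on the container itself.
dedicated_db = client.create_database_if_not_exists("DedicatedDb")
dedicated_db.create_container_if_not_exists(
    id="HotWorkload",
    partition_key=PartitionKey(path="/tenantId"),
    offer_throughput=1000,
)
```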
> [!NOTE]
-> You can configure shared and dedicated throughput only when creating the database and container. To switch from dedicated throughput mode to shared throughput mode (and vice versa) after the container is created, you have to create a new container and migrate the data to the new container. You can migrate the data by using the Azure Cosmos DB change feed feature.
-
-An Azure Cosmos container can scale elastically, whether you create containers by using dedicated or shared provisioned throughput modes.
+> You can't switch a container between dedicated and shared throughput. Containers created in a shared throughput database can't be updated to have dedicated throughput. To change a container from shared to dedicated throughput, you must create a new container and copy the data to it.
-A container is a schema-agnostic container of items. Items in a container can have arbitrary schemas. For example, an item that represents a person and an item that represents an automobile can be placed in the *same container*. By default, all items that you add to a container are automatically indexed without requiring explicit index or schema management. You can customize the indexing behavior by configuring the [indexing policy](index-overview.md) on a container.
+Containers are schema-agnostic. Items within a container can have arbitrary schemas or represent different entities, so long as they share the same partition key. For example, an item that represents a customer and one or more items representing all their orders can be placed in the *same container*. By default, all data added to a container is automatically indexed without requiring explicit indexing. You can customize the indexing for a container by configuring its [indexing policy](index-overview.md).
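Continuing the earlier partition-key sketch, a small illustrative example of co-locating two item shapes in the `Orders` container (the property names are hypothetical):

```python
# Both item shapes carry the same partition key property ("/customerId"), so a
# customer and that customer's orders are co-located in one logical partition.
container.upsert_item({
    "id": "customer-1",
    "customerId": "customer-1",
    "type": "customer",
    "name": "Contoso",
})
container.upsert_item({
    "id": "order-1001",
    "customerId": "customer-1",
    "type": "order",
    "total": 42.50,
})
```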
-You can set [Time to Live (TTL)](time-to-live.md) on selected items in a container or for the entire container to gracefully purge those items from the system. Azure Cosmos DB automatically deletes the items when they expire. It also guarantees that a query performed on the container doesn't return the expired items within a fixed bound. To learn more, see [Configure TTL on your container](how-to-time-to-live.md).
+You can set [Time to Live (TTL)](time-to-live.md) on selected items in a container, or for the entire container, to delete those items automatically. Deletion happens silently in the background using spare throughput, so it doesn't affect performance. Even if expired data hasn't been physically deleted yet, it isn't returned by any reads. To learn more, see [Configure TTL on your container](how-to-time-to-live.md).
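A minimal sketch of both container-level and item-level TTL, continuing the Python SDK examples above (values are illustrative):

```python
# Container-level default TTL: items expire 30 days after their last write
# unless they override it with their own "ttl" property.
sessions = database.create_container_if_not_exists(
    id="Sessions",
    partition_key=PartitionKey(path="/userId"),
    default_ttl=30 * 24 * 60 * 60,
)

# Item-level override: this item expires one hour after its last write.
sessions.upsert_item({"id": "s1", "userId": "u1", "ttl": 3600})
```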
-You can use [change feed](change-feed.md) to subscribe to the operations log that is managed for each logical partition of your container. Change feed provides the log of all the updates performed on the container, along with the before and after images of the items. For more information, see [Build reactive applications by using change feed](serverless-computing-database.md). You can also configure the retention duration for the change feed by using the change feed policy on the container.
+Azure Cosmos DB provides a built-in change data capture capability called [change feed](change-feed.md) that you can use to subscribe to all the changes to data within your container. For more information, see [Change feed in Azure Cosmos DB](change-feed.md).
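A rough sketch of reading the change feed with the Python SDK, continuing the examples above; parameter support can vary by SDK version:

```python
# Read all changes from the beginning of the container's change feed.
for change in container.query_items_change_feed(is_start_from_beginning=True):
    print(change["id"], change["_ts"])
```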
You can register [stored procedures, triggers, user-defined functions (UDFs)](stored-procedures-triggers-udfs.md), and [merge procedures](how-to-manage-conflicts.md) for your container.
-You can specify a [unique key constraint](unique-keys.md) on your Azure Cosmos container. By creating a unique key policy, you ensure the uniqueness of one or more values per logical partition key. If you create a container by using a unique key policy, no new or updated items with values that duplicate the values specified by the unique key constraint can be created. To learn more, see [Unique key constraints](unique-keys.md).
+Data within a container must have a unique `id` property value within each logical partition. This can be useful when you want a unique constraint within your container. You can also specify a [unique key constraint](unique-keys.md) on your Azure Cosmos DB container that uses one or more other properties and ensures the uniqueness of one or more values per logical partition key. If you create a container by using a unique key policy, no new or updated items with values that duplicate the values specified by the unique key constraint can be created. To learn more, see [Unique key constraints](unique-keys.md).
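A minimal sketch of a unique key policy with the Python SDK, continuing the examples above (the `/tenantId` and `/email` paths are illustrative):

```python
# Within each logical partition (here, per "/tenantId" value), no two items
# may share the same "/email" value.
users = database.create_container_if_not_exists(
    id="Users",
    partition_key=PartitionKey(path="/tenantId"),
    unique_key_policy={"uniqueKeys": [{"paths": ["/email"]}]},
)
```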
A container is specialized into API-specific entities as shown in the following table:
A container is specialized into API-specific entities as shown in the following
|Azure Cosmos container | Container | Table | Collection | Graph | Table | > [!NOTE]
-> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+> When creating containers, make sure you don't create two containers with the same name but different casing. Some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
### Properties of an Azure Cosmos DB container
-An Azure Cosmos container has a set of system-defined properties. Depending on which API you use, some properties might not be directly exposed. The following table describes the list of system-defined properties:
+An Azure Cosmos DB container has a set of system-defined properties. Depending on which API you use, some properties might not be directly exposed. The following table describes the list of system-defined properties:
| System-defined property | System-generated or user-configurable | Purpose | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API | | | | | | | | | |
An Azure Cosmos container has a set of system-defined properties. Depending on w
|\_etag | System-generated | Entity tag used for optimistic concurrency control | Yes | No | No | No | No | |\_ts | System-generated | Last updated timestamp of the container | Yes | No | No | No | No | |\_self | System-generated | Addressable URI of the container | Yes | No | No | No | No |
-|id | User-configurable | User-defined unique name of the container | Yes | Yes | Yes | Yes | Yes |
-|indexingPolicy | User-configurable | Provides the ability to change the index path, index type, and index mode | Yes | No | No | No | Yes |
+|id | User-configurable | Name of the container | Yes | Yes | Yes | Yes | Yes |
+|indexingPolicy | User-configurable | Provides the ability to change indexes | Yes | No | Yes | Yes | Yes |
|TimeToLive | User-configurable | Provides the ability to delete items automatically from a container after a set time period. For details, see [Time to Live](time-to-live.md). | Yes | No | No | No | Yes | |changeFeedPolicy | User-configurable | Used to read changes made to items in a container. For details, see [Change feed](change-feed.md). | Yes | No | No | No | Yes | |uniqueKeyPolicy | User-configurable | Used to ensure the uniqueness of one or more values in a logical partition. For more information, see [Unique key constraints](unique-keys.md). | Yes | No | No | No | Yes | |AnalyticalTimeToLive | User-configurable | Provides the ability to delete items automatically from a container after a set time period. For details, see [Time to Live](analytical-store-introduction.md). | Yes | No | Yes | No | No |
-### Operations on an Azure Cosmos DB container
-
-An Azure Cosmos container supports the following operations when you use any of the Azure Cosmos APIs:
-
-| Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
-| | | | | | | |
-| Enumerate containers in a database | Yes | Yes | Yes | Yes | NA | NA |
-| Read a container | Yes | Yes | Yes | Yes | NA | NA |
-| Create a new container | Yes | Yes | Yes | Yes | NA | NA |
-| Update a container | Yes | Yes | Yes | Yes | NA | NA |
-| Delete a container | Yes | Yes | Yes | Yes | NA | NA |
- ## Azure Cosmos DB items
-Depending on which API you use, an Azure Cosmos item can represent either a document in a collection, a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific entities to an Azure Cosmos item:
+Depending on which API you use, data can represent either an item in a container, a document in a collection, a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific entities to an Azure Cosmos item:
| Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API | | | | | | | |
-|Azure Cosmos item | Item | Row | Document | Node or edge | Item |
+| Azure Cosmos DB item | Item | Row | Document | Node or edge | Item |
### Properties of an item Every Azure Cosmos item has the following system-defined properties. Depending on which API you use, some of them might not be directly exposed.
-| System-defined property | System-generated or user-configurable| Purpose | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| System-defined property | System-generated or user-defined| Purpose | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
| | | | | | | | | |\_rid | System-generated | Unique identifier of the item | Yes | No | No | No | No | |\_etag | System-generated | Entity tag used for optimistic concurrency control | Yes | No | No | No | No |
Every Azure Cosmos item has the following system-defined properties. Depending o
|Arbitrary user-defined properties | User-defined | User-defined properties represented in API-native representation (including JSON, BSON, and CQL) | Yes | Yes | Yes | Yes | Yes | > [!NOTE]
-> Uniqueness of the `id` property is only enforced within each logical partition. Multiple documents can have the same `id` property with different partition key values.
+> Uniqueness of the `id` property is enforced within each logical partition. Multiple documents can have the same `id` property with different partition key values.
### Operations on items Azure Cosmos items support the following operations. You can use any of the Azure Cosmos APIs to perform the operations.
-| Operation | Azure CLI | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
-| | | | | | | |
-| Insert, Replace, Delete, Upsert, Read | No | Yes | Yes | Yes | Yes | Yes |
+| Operation | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
+| | | | | | |
+| Insert, Replace, Delete, Upsert, Read | Yes | Yes | Yes | Yes | Yes |
## Next steps Learn how to manage your Azure Cosmos account and other concepts: * To learn more, see the [Azure Cosmos DB SQL API](/learn/modules/intro-to-azure-cosmos-db-core-api/) learn module.
-* [How-to manage your Azure Cosmos account](how-to-manage-database-account.md)
+* [How-to manage your Azure Cosmos DB account](how-to-manage-database-account.md)
* [Global distribution](distribute-data-globally.md) * [Consistency levels](consistency-levels.md)
-* [VNET service endpoint for your Azure Cosmos account](how-to-configure-vnet-service-endpoint.md)
-* [IP-firewall for your Azure Cosmos account](how-to-configure-firewall.md)
-* [How-to add and remove Azure regions to your Azure Cosmos account](how-to-manage-database-account.md)
-* [Azure Cosmos DB SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/)
+* [VNET service endpoint for your Azure Cosmos DB account](how-to-configure-vnet-service-endpoint.md)
+* [IP-firewall for your Azure Cosmos DB account](how-to-configure-firewall.md)
+* [How-to add and remove Azure regions to your Azure Cosmos DB account](how-to-manage-database-account.md)
+* [Azure Cosmos DB SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/)
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
This Blob storage connector supports the following authentication types. See the
- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication) >[!NOTE]
->- If want to use the public Azure integration runtime to connect to your Blob storage by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity).
+>- If you want to use the public Azure integration runtime to connect to your Blob storage by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity). For more information about the Azure Storage firewall settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
>- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Blob storage is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the [Managed identity authentication](#managed-identity) section for more configuration prerequisites. >[!NOTE]
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
The Azure Data Lake Storage Gen2 connector supports the following authentication
- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication) >[!NOTE]
->- If want to use the public Azure integration runtime to connect to the Data Lake Storage Gen2 by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity).
+>- If you want to use the public Azure integration runtime to connect to Data Lake Storage Gen2 by leveraging the **Allow trusted Microsoft services to access this storage account** option enabled on Azure Storage firewall, you must use [managed identity authentication](#managed-identity). For more information about the Azure Storage firewall settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
>- When you use PolyBase or COPY statement to load data into Azure Synapse Analytics, if your source or staging Data Lake Storage Gen2 is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the [managed identity authentication](#managed-identity) section with more configuration prerequisites. ### Account key authentication
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-marketing-cloud.md
Previously updated : 09/09/2021 Last updated : 08/02/2022 # Copy data from Salesforce Marketing Cloud using Azure Data Factory or Synapse Analytics
To copy data from Salesforce Marketing Cloud, set the source type in the copy ac
] ```
+>[!Note]
+> The Contacts table is not supported.
+ ## Lookup activity properties To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
Previously updated : 03/22/2022 Last updated : 07/20/2022 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
The following sections provide details about properties that are used to define
## Linked service properties
-The following properties are supported for the SQL Server linked service:
+This SQL Server connector supports the following authentication types. See the corresponding sections for details.
+
+- [SQL authentication](#sql-authentication)
+- [Windows authentication](#windows-authentication)
+
+>[!TIP]
+>If you hit an error with the error code "UserErrorFailedToConnectToSqlServer" and a message like "The session limit for the database is XXX and has been reached," add `Pooling=false` to your connection string and try again.
+
+### SQL authentication
+
+To use SQL authentication, the following properties are supported:
| Property | Description | Required | |: |: |: | | type | The type property must be set to **SqlServer**. | Yes |
-| connectionString |Specify **connectionString** information that's needed to connect to the SQL Server database by using either SQL authentication or Windows authentication. Refer to the following samples.<br/>You also can put a password in Azure Key Vault. If it's SQL authentication, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
-| userName |Specify a user name if you use Windows authentication. An example is **domainname\\username**. |No |
-| password |Specify a password for the user account you specified for the user name. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |No |
+| connectionString | Specify **connectionString** information that's needed to connect to the SQL Server database. Specify a login name as your user name, and ensure the database that you want to connect to is mapped to this login. Refer to the following samples. | Yes |
+| password | If you want to put a password in Azure Key Vault, pull the `password` configuration out of the connection string. For more information, see the JSON example following the table and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md). |No |
| alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see the JSON example following the table and [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No | | connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure integration runtime is used. |No |
-> [!NOTE]
-> Windows authentication is not supported in data flow.
-
->[!TIP]
->If you hit an error with the error code "UserErrorFailedToConnectToSqlServer" and a message like "The session limit for the database is XXX and has been reached," add `Pooling=false` to your connection string and try again.
-
-**Example 1: Use SQL authentication**
+**Example: Use SQL authentication**
```json {
The following properties are supported for the SQL Server linked service:
} } ```-
-**Example 2: Use SQL authentication with a password in Azure Key Vault**
+**Example: Use SQL authentication with a password in Azure Key Vault**
```json {
The following properties are supported for the SQL Server linked service:
"type": "SqlServer", "typeProperties": { "connectionString": "Data Source=<servername>\\<instance name if using named instance>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;",
- "password": { 
- "type": "AzureKeyVaultSecret", 
- "store": { 
- "referenceName": "<Azure Key Vault linked service name>", 
- "type": "LinkedServiceReference" 
- }, 
- "secretName": "<secretName>" 
+ "password": {
+ "type": "AzureKeyVaultSecret",
+ "store": {
+ "referenceName": "<Azure Key Vault linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "secretName": "<secretName>"
} }, "connectVia": {
The following properties are supported for the SQL Server linked service:
} } ```
+**Example: Use Always Encrypted**
-**Example 3: Use Windows authentication**
+```json
+{
+ "name": "SqlServerLinkedService",
+ "properties": {
+ "type": "SqlServer",
+ "typeProperties": {
+ "connectionString": "Data Source=<servername>\\<instance name if using named instance>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;Password=<password>;"
+ },
+ "alwaysEncryptedSettings": {
+ "alwaysEncryptedAkvAuthType": "ServicePrincipal",
+ "servicePrincipalId": "<service principal id>",
+ "servicePrincipalKey": {
+ "type": "SecureString",
+ "value": "<service principal key>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+### Windows authentication
+
+To use Windows authentication, the following properties are supported:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **SqlServer**. | Yes |
+| connectionString | Specify **connectionString** information that's needed to connect to the SQL Server database. Refer to the following samples. | Yes |
+| userName | Specify a user name. An example is **domainname\\username**. |Yes |
+| password | Specify a password for the user account you specified for the user name. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+| alwaysEncryptedSettings | Specify **alwaysencryptedsettings** information that's needed to enable Always Encrypted to protect sensitive data stored in SQL server by using either managed identity or service principal. For more information, see [Using Always Encrypted](#using-always-encrypted) section. If not specified, the default always encrypted setting is disabled. |No |
+| connectVia | This [integration runtime](concepts-integration-runtime.md) is used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure integration runtime is used. |No |
+
+> [!NOTE]
+> Windows authentication is not supported in data flow.
+
+**Example: Use Windows authentication**
```json {
The following properties are supported for the SQL Server linked service:
} ```
-**Example 4: Use Always Encrypted**
+**Example: Use Windows authentication with a password in Azure Key Vault**
```json { "name": "SqlServerLinkedService", "properties": {
+ "annotations": [],
"type": "SqlServer", "typeProperties": {
- "connectionString": "Data Source=<servername>\\<instance name if using named instance>;Initial Catalog=<databasename>;Integrated Security=False;User ID=<username>;Password=<password>;"
- },
- "alwaysEncryptedSettings": {
- "alwaysEncryptedAkvAuthType": "ServicePrincipal",
- "servicePrincipalId": "<service principal id>",
- "servicePrincipalKey": {
- "type": "SecureString",
- "value": "<service principal key>"
+ "connectionString": "Data Source=<servername>\\<instance name if using named instance>;Initial Catalog=<databasename>;Integrated Security=True;",
+ "userName": "<domain\\username>",
+ "password": {
+ "type": "AzureKeyVaultSecret",
+ "store": {
+ "referenceName": "<Azure Key Vault linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "secretName": "<secretName>"
} }, "connectVia": {
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 07/26/2022 Last updated : 08/08/2022
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
The following table summarizes what's included in each plan.
| **Unified view** | The Defender for Cloud portal displays Defender for Endpoint alerts. You can then drill down into Defender for Endpoint portal, with additional information such as the alert process tree, the incident graph, and a detailed machine timeline showing historical data up to six months.| :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Automatic MDE provisioning** | Automatic provisioning of Defender for Endpoint on Azure, AWS, and GCP resources. | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Microsoft threat and vulnerability management** | Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, without needing other agents or periodic scans. [Learn more](deploy-vulnerability-assessment-tvm.md). | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Security Policy and Regulatory Compliance** | Customize a security policy for your subscription, and compare the configuration of your resources with requirements in industry standards, regulations, and benchmarks. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **Integrated vulnerability assessment powered by Qualys** | Use the Qualys scanner for real-time identification of vulnerabilities in Azure and hybrid VMs. Everything's handled by Defender for Cloud. You don't need a Qualys license or even a Qualys account. [Learn more](deploy-vulnerability-assessment-vm.md). | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Log Analytics 500 MB free data ingestion** | Defender for Cloud leverages Azure Monitor to collect data from Azure VMs and servers, using the Log Analytics agent. | | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **Threat detection** | Defender for Cloud detects threats at the OS level, network layer, and control plane. | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Title: Endpoint protection recommendations in Microsoft Defender for Clouds
+ Title: Endpoint protection recommendations in Microsoft Defender for Cloud
description: How the endpoint protection solutions are discovered and identified as healthy.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-event-aggregation.md
Defender for IoT agents aggregate events during the send interval defined in the
When the agent collects an event similar to one already stored in memory, it increases that event's hit count to reduce the agent's memory footprint. When the aggregation time window passes, the agent sends the hit count of each type of event that occurred. Event aggregation is the aggregation of the hit counts of similar events. For example, network activity with the same remote host and on the same port is aggregated as one event, instead of as a separate event for each packet.
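A small, purely illustrative Python sketch of the hit-count aggregation idea described above (not the agent's actual implementation):

```python
from collections import defaultdict

# Similar events (for example, network activity keyed by remote host and port)
# collapse into one entry whose hit count grows until the send interval elapses.
hit_counts = defaultdict(int)

def record_network_event(remote_host, port):
    hit_counts[(remote_host, port)] += 1

def flush_on_send_interval():
    events = [
        {"remote_host": host, "port": port, "hit_count": count}
        for (host, port), count in hit_counts.items()
    ]
    hit_counts.clear()
    return events  # sent to the cloud as aggregated events
```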
+> [!NOTE]
+> By default, the micro agent sends logs and telemetry to the cloud for troubleshooting and monitoring purposes. This behavior can be configured or turned off through the twin.
+ ## Next steps For more information, see:
defender-for-iot Concept Micro Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
Configure the micro agent using the following collector-specific properties and
| **Process_Mode** | `1` = Auto <br>`2` = Netlink <br>`3`= Polling | Determines the Process collector mode. In `Auto` mode, the agent first tries to enable the Netlink mode. <br><br>If that fails, it will automatically fall back / switch to the Polling mode.| `1` | | **Process_CacheSize** | Positive integer | The number of Process events (after aggregation) to keep in the cache between send intervals. Beyond that number, older events will be dropped (lost).| `256` |
+### Log collector-specific settings
+
+| Setting Name | Setting options | Description | Default |
+|--|--|--|--|
+| **LogCollector_Disabled** | `True`/`False` | Disables the Logs collector. | `False` |
+| **LogCollector_MessageFrequency** | `Low`/`Medium`/`High` | Defines how frequently Log events are sent. | `Low` |
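Because the configuration is applied through the module twin, a rough sketch of patching these settings with the Python `azure-iot-hub` service SDK follows; the module ID placeholder and the exact placement of the settings in the twin are assumptions, so check the configuration reference for your agent version:

```python
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, TwinProperties

registry_manager = IoTHubRegistryManager("<iothub-service-connection-string>")

DEVICE_ID = "<device-id>"
MODULE_ID = "<micro-agent-module-id>"  # assumed; depends on your deployment

# Patch desired properties with settings from the table above. The exact
# nesting of these settings in the twin is an assumption here.
current_twin = registry_manager.get_module_twin(DEVICE_ID, MODULE_ID)
patch = Twin(properties=TwinProperties(desired={
    "LogCollector_Disabled": "False",
    "LogCollector_MessageFrequency": "Medium",
}))
registry_manager.update_module_twin(DEVICE_ID, MODULE_ID, patch, current_twin.etag)
```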
+ ## Next steps For more information, see:
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
The **Secure your IoT solution** button will only appear if the IoT Hub hasn't a
:::image type="content" source="media/quickstart-onboard-iot-hub/threat-prevention.png" alt-text="Screenshot showing that Defender for IoT is enabled." lightbox="media/quickstart-onboard-iot-hub/threat-prevention-expanded.png":::
+## Configure data collection
+
+Configure data collection settings for Defender for IoT in your IoT hub, such as a Log Analytics workspace and other advanced settings.
+
+**To configure Defender for IoT data collection**:
+
+1. In your IoT hub, select **Defender for IoT > Settings**. The **Enable Microsoft Defender for IoT** option is toggled on by default.
+
+1. In the **Workspace configuration** area, toggle the **On** option to connect to a Log Analytics workspace, and then select the Azure subscription and Log Analytics workspace you want to connect to.
+
+ If you need to create a new workspace, select the **Create New Workspace** link.
+
+    Select **Access to raw security data** to export raw security events from your devices to the Log Analytics workspace that you selected in the previous step.
+
+1. In the **Advanced settings** area, the following options are selected by default. Clear the selection as needed:
+
+ - **In-depth security recommendations and custom alerts**. Allows Defender for IoT access to the device's twin data in order to generate alerts based on that data.
+
+ - **IP data collection**. Allows Defender for IoT access to the device's incoming and outgoing IP addresses to generate alerts based on suspicious connections.
+
+1. Select **Save** to save your settings.
+ ## Next steps Advance to the next article to add a resource group to your solution.
defender-for-iot Understand Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/understand-network-architecture.md
When planning your network monitoring, you must understand your system network architecture and how it will need to connect to Defender for IoT. Also, understand where each of your system elements falls in the Purdue Reference model for Industrial Control System (ICS) OT network segmentation.
-Defender for IoT network sensors receive traffic from multiple sources, either by switch mirror ports (SPAN ports) or network TAPs. The network sensor's management port connects to the business, corporate, or sensor management network for network management from the Azure portal or an on-premises management system.
+Defender for IoT network sensors receive traffic from two main sources, either by switch mirror ports (SPAN ports) or network TAPs. The network sensor's management port connects to the business, corporate, or sensor management network for network management from the Azure portal or an on-premises management system.
For example:
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
For more information, see [Azure roles](../../role-based-access-control/rbac-and
### Supported service regions
-Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *Central US* regional datacenter.
+Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
-If you're using a legacy version of the sensor traffic and are connecting through your own IoT Hub, the IoT Hub supported regions are also relevant for your organization. For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
+If you're using a legacy experience of Defender for IoT and are connecting through your own IoT Hub, the IoT Hub supported regions are also relevant for your organization. For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
## Identify and plan your OT solution architecture
devtest-labs Devtest Lab Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-redeploy-vm.md
To redeploy a VM in a lab in Azure DevTest Labs, take the following steps:
6. In the Virtual Machine page for your VM, select **Redeploy** under **OPERATIONS** in the left menu. ![Screen capture shows the Virtual Machine page with Redeploy selected.](media/devtest-lab-redeploy-vm/redeploy.png)
-7. Read the information on the page, and select **Redeploy** button. 9. Check the status of the redeploy operation in the **Notifications** window.
+7. Read the information on the page, and select **Redeploy** button.
+8. Check the status of the redeploy operation in the **Notifications** window.
![Redeploy status](media/devtest-lab-redeploy-vm/redeploy-status.png)
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
This section contains an example of what that data would look like for each even
"rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "communicationUser": { "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
- }
+ },
+ "role": "{role}"
}, "serverCallId": "{serverCallId}", "group": { "id": "00000000-0000-0000-0000-000000000000" },
- "isTwoParty": true
+ "room": {
+ "id": "{roomId}"
+ },
+ "isTwoParty": false,
+ "correlationId": "{correlationId}",
+ "isRoomsCall": true
}, "eventType": "Microsoft.Communication.CallStarted", "dataVersion": "1.0",
This section contains an example of what that data would look like for each even
"rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "communicationUser": { "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
- }
+ },
+ "role": "{role}"
}, "serverCallId": "{serverCallId}", "group": { "id": "00000000-0000-0000-0000-000000000000" },
- "isTwoParty": true
+ "room": {
+ "id": "{roomId}"
+ },
+ "isTwoParty": false,
+ "correlationId": "{correlationId}",
+ "isRoomsCall": true
}, "eventType": "Microsoft.Communication.CallEnded", "dataVersion": "1.0",
This section contains an example of what that data would look like for each even
"rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "communicationUser": { "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
- }
+ },
+ "role": "{role}"
}, "displayName": "Sharif Edge", "participantId": "041e3b8a-1cce-4ebf-b587-131312c39410",
This section contains an example of what that data would look like for each even
"rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "communicationUser": { "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
- }
+ },
+ "role": "{role}"
}, "serverCallId": "{serverCallId}", "group": { "id": "00000000-0000-0000-0000-000000000000" },
- "isTwoParty": true
+ "room": {
+ "id": "{roomId}"
+ },
+ "isTwoParty": false,
+ "correlationId": "{correlationId}",
+ "isRoomsCall": true
}, "eventType": "Microsoft.Communication.CallParticipantAdded", "dataVersion": "1.0",
This section contains an example of what that data would look like for each even
"rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8", "communicationUser": { "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-27cc-07fd-0848220077d8"
- }
+ },
+ "role": "{role}"
}, "displayName": "Sharif Chrome", "participantId": "750a1442-3156-4914-94d2-62cf73796833",
This section contains an example of what that data would look like for each even
"rawId": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1", "communicationUser": { "id": "8:acs:bc360ba8-d29b-4ef2-b698-769ebef85521_0000000c-1fb9-4878-07fd-0848220077e1"
- }
+ },
+ "role": "{role}"
}, "serverCallId": "aHR0cHM6Ly9jb252LWRldi0yMS5jb252LWRldi5za3lwZS5uZXQ6NDQzL2NvbnYvbVQ4NnVfempBMG05QVM4VnRvSWFrdz9pPTAmZT02Mzc2Nzc3MTc2MDAwMjgyMzA", "group": { "id": "00000000-0000-0000-0000-000000000000" },
- "isTwoParty": false
+ "room": {
+ "id": "{roomId}"
+ },
+ "isTwoParty": false,
+ "correlationId": "{correlationId}",
+ "isRoomsCall": true
}, "eventType": "Microsoft.Communication.CallParticipantRemoved", "dataVersion": "1.0",
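For orientation, here's a minimal, hypothetical Azure Functions (Python) Event Grid handler that reads the fields shown in these payloads; the function layout and bindings are assumptions, not part of the event schema itself.

```python
import logging
import azure.functions as func

def main(event: func.EventGridEvent):
    # event.get_json() returns the "data" portion of the Event Grid event.
    data = event.get_json()

    if event.event_type == "Microsoft.Communication.CallStarted":
        # Fields shown in the sample payloads above.
        server_call_id = data.get("serverCallId")
        correlation_id = data.get("correlationId")
        is_rooms_call = data.get("isRoomsCall", False)
        room_id = data.get("room", {}).get("id") if is_rooms_call else None
        logging.info(
            "Call started: serverCallId=%s correlationId=%s roomId=%s",
            server_call_id, correlation_id, room_id,
        )
```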
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul, Seoul2 | | **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported | Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC | | **LG CNS** |Supported |Supported | Busan, Seoul |
-| **[Liquid Telecom](https://liquidcloud.africa/connect/)** |Supported |Supported | Cape Town, Johannesburg |
+| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** |Supported |Supported | Cape Town, Johannesburg |
| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul | | **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported |Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, Queretaro (Mexico), San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich | | **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** |Supported |Supported | London |
If you are remote and do not have fiber connectivity or you want to explore othe
| **[United Information Highway (UIH)](https://www.uih.co.th/en/internet-solution/cloud-direct/uih-cloud-direct-for-microsoft-azure-expressroute)**| Equinix | Singapore | | **[Venha Pra Nuvem](https://venhapranuvem.com.br/)** | Equinix | Sao Paulo | | **[Webair](https://opti9tech.com/partners/)**| Megaport | New York |
-| **[Windstream](https://www.windstreambusiness.com/solutions/cloud-services/cloud-and-managed-hosting-services)**| Equinix | Chicago, Silicon Valley, Washington DC |
+| **[Windstream](https://www.windstreamenterprise.com/solutions/)**| Equinix | Chicago, Silicon Valley, Washington DC |
| **[X2nsat Inc.](https://www.x2nsat.com/expressroute/)** |Coresite |Silicon Valley, Silicon Valley 2| | **Zain** |Equinix |London| | **[Zertia](https://www.zertia.es)**| Level 3 | Madrid |
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 07/26/2022 Last updated : 08/08/2022
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 07/26/2022 Last updated : 08/08/2022
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-compute](../../../../includes/policy/reference/bycat/policies-compute.md)]
-## Container App
+## Container Apps
## Container Instance
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-managed-identity](../../../../includes/policy/reference/bycat/policies-managed-identity.md)]
-## Managed Labs
-- ## Maps [!INCLUDE [azure-policy-reference-policies-maps](../../../../includes/policy/reference/bycat/policies-maps.md)]
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-multi-step-updates.md
Example Update Manifest with one Reference Step:
} ```
+> [!NOTE]
+> In the [update manifest](https://docs.microsoft.com/azure/iot-hub-device-update/update-manifest), each step should have a different "installedCriteria" string if that string is being used to determine whether the step should be performed.
+ ## Parent Update vs. Child Update For Public Preview Refresh, we will refer to the top-level Update Manifest as `Parent Update` and refer to an Update Manifest specified in a Reference Step as `Child Update`.
Inline step(s) specified in `Parent Update` will be applied to the Host Device.
> [!NOTE] > See [Steps Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler/README.md) and [Implementing a custom component-Aware Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) for more details.
+> [!NOTE]
+> Steps Content Handler:
+> IsInstalled validation logic: The Device Update agent's [step handler](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md) checks whether a particular update is already installed (that is, whether IsInstalled() returns result code "900", meaning "is installed" is true). To avoid installing an update that is already on the device, the Device Update agent skips those steps, because IsInstalled() is used to determine whether each step should be performed.
+> Reporting an update result: The result of a step handler execution must be written to an ADUC_Result struct in the result file specified by the --result-file option ([learn more](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md#steps-content-handler)). Based on the results of the execution, return 0 for success, and return -1 or 0xFF for any fatal errors.
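To make the reporting contract concrete, here's a hypothetical script-based step sketched in Python: it writes a result file at the path passed via `--result-file` and uses the exit codes described above. The JSON field names and result code values are illustrative assumptions; consult the steps handler README for the exact ADUC_Result format.

```python
import argparse
import json
import sys

SUCCESS_RESULT_CODE = 900  # assumption: reuses the "installed" code mentioned above

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--result-file", required=True)
    args = parser.parse_args()

    try:
        # ... perform this step's install/apply work here ...
        result = {"resultCode": SUCCESS_RESULT_CODE, "extendedResultCode": 0}
        exit_code = 0    # success
    except Exception:
        result = {"resultCode": 0, "extendedResultCode": 1}  # illustrative failure values
        exit_code = -1   # fatal error (0xFF is also accepted)

    # Write the ADUC_Result-style payload where the agent expects to find it.
    with open(args.result_file, "w") as f:
        json.dump(result, f)
    sys.exit(exit_code)

if __name__ == "__main__":
    main()
```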
+ ### Reference Step In Parent Update Reference step(s) specified in `Parent Update` will be applied to the component on, or the components connected to, the Host Device. A **Reference Step** is a step that contains the update identifier of another update, called a `Child Update`. When processing a Reference Step, the Steps Handler will download the Detached Update Manifest file specified in the Reference Step data, then validate the file integrity.
iot-hub Iot Hub Csharp Csharp C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-csharp-csharp-c2d.md
[!INCLUDE [iot-hub-selector-c2d](../../includes/iot-hub-selector-c2d.md)]
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end. The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstart shows how to create an IoT hub, provision a device identity in it, and code a device app that sends device-to-cloud messages.
+Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article builds on [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp). It shows you how to do the following tasks:
+This article shows you how to:
-* From your solution back end, send cloud-to-device messages to a single device through IoT Hub.
+* Send cloud-to-device messages from your solution back end to a single device through IoT Hub
-* Receive cloud-to-device messages on a device.
+* Receive cloud-to-device messages on a device
-* From your solution back end, request delivery acknowledgment (*feedback*) for messages sent to a device from IoT Hub.
+* Request delivery acknowledgment (*feedback*) in your solution back end for messages sent to a device from IoT Hub
-You can find more information on cloud-to-device messages in [D2C and C2D Messaging with IoT Hub](iot-hub-devguide-messaging.md).
At the end of this article, you run two .NET console apps.
-* **SimulatedDevice**. This app connects to your IoT hub and receives cloud-to-device messages. This app is a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp).
+* **SimulatedDevice**: a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp), which connects to your IoT hub and receives cloud-to-device messages.
-* **SendCloudToDevice**. This app sends a cloud-to-device message to the device app through IoT Hub, and then receives its delivery acknowledgment.
+* **SendCloudToDevice**: sends a cloud-to-device message to the device app through IoT Hub and then receives its delivery acknowledgment.
> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages, including C, Java, Python, and JavaScript, through [Azure IoT device SDKs](iot-hub-devguide-sdks.md). For step-by-step instructions on how to connect your device to this article's code, and generally to Azure IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
->
+> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
+
+You can find more information on cloud-to-device messages in [D2C and C2D Messaging with IoT Hub](iot-hub-devguide-messaging.md).
## Prerequisites * Visual Studio
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
+* A complete working version of the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstart or the [Configure message routing with IoT Hub](tutorial-routing.md) article. This cloud-to-device article builds on the quickstart.
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Receive messages in the device app
-In this section, modify the device app you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) to receive cloud-to-device messages from the IoT hub.
+In this section, you modify your device app to receive cloud-to-device messages from the IoT hub.
1. In Visual Studio, in the **SimulatedDevice** project, add the following method to the **SimulatedDevice** class.
With AMQP and HTTPS, but not MQTT, the device can also:
If something happens that prevents the device from completing, abandoning, or rejecting the message, IoT Hub will, after a fixed timeout period, queue the message for delivery again. For this reason, the message processing logic in the device app must be *idempotent*, so that receiving the same message multiple times produces the same result.
-For more detailed information about how IoT Hub processes cloud-to-device messages, including details of the cloud-to-device message lifecycle, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
> [!NOTE] > When using HTTPS instead of MQTT or AMQP as a transport, the `ReceiveAsync` method returns immediately. The supported pattern for cloud-to-device messages with HTTPS is intermittently connected devices that check for messages infrequently (a minimum of every 25 minutes). Issuing more HTTPS receives results in IoT Hub throttling the requests. For more information about the differences between MQTT, AMQP, and HTTPS support, see [Cloud-to-device communications guidance](iot-hub-devguide-c2d-guidance.md) and [Choose a communication protocol](iot-hub-devguide-protocols.md).
->
## Get the IoT hub connection string
-In this article, you create a back-end service to send cloud-to-device messages through the IoT hub you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp). To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
+In this article, you create a back-end service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
[!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)] ## Send a cloud-to-device message
-In this section, you create a .NET console app that sends cloud-to-device messages to the simulated device app.
+In this section, you create a .NET console app that sends cloud-to-device messages to the simulated device app. You need your device ID and your IoT hub connection string.
-1. In the current Visual Studio solution, select **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
+1. In Visual Studio, select **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
1. Name the project *SendCloudToDevice*, then select **Next**.
- ![Configure a new project in Visual Studio](./media/iot-hub-csharp-csharp-c2d/sendcloudtodevice-project-configure.png)
+ :::image type="content" source="./media/iot-hub-csharp-csharp-c2d/sendcloudtodevice-project-configure.png" alt-text="Screenshot of the 'Configure a new project' popup in Visual Studio." lightbox="./media/iot-hub-csharp-csharp-c2d/sendcloudtodevice-project-configure.png":::
1. Accept the most recent version of the .NET Framework. Select **Create** to create the project.
In this section, you create a .NET console app that sends cloud-to-device messag
## Receive delivery feedback
-It is possible to request delivery (or expiration) acknowledgments from IoT Hub for each cloud-to-device message. This option enables the solution back end to easily inform retry or compensation logic. For more information about cloud-to-device feedback, see [D2C and C2D Messaging with IoT Hub](iot-hub-devguide-messaging.md).
+It's possible to request delivery (or expiration) acknowledgments from IoT Hub for each cloud-to-device message. This option enables the solution back end to easily inform retry or compensation logic. For more information about cloud-to-device feedback, see [D2C and C2D Messaging with IoT Hub](iot-hub-devguide-messaging.md).
In this section, you modify the **SendCloudToDevice** app to request feedback, and receive it from the IoT hub.
In this section, you modify the **SendCloudToDevice** app to request feedback, a
## Next steps
-In this how-to, you learned how to send and receive cloud-to-device messages.
+In this article, you learned how to send and receive cloud-to-device messages.
To learn more about developing solutions with IoT Hub, see the [IoT Hub developer guide](iot-hub-devguide.md).
iot-hub Iot Hub Ios Swift C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ios-swift-c2d.md
[!INCLUDE [iot-hub-selector-c2d](../../includes/iot-hub-selector-c2d.md)]
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end. The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md) quickstart shows how to create an IoT hub, provision a device identity in it, and code a simulated device app that sends device-to-cloud messages.
+Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
This article shows you how to:
-* Receive cloud-to-device messages on a device.
-
-You can find more information on cloud-to-device messages in the [messaging section of the IoT Hub developer guide](iot-hub-devguide-messaging.md).
+* Receive cloud-to-device messages on a device
At the end of this article, you run the following Swift iOS project:
-* **sample-device**, the same app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md), which connects to your IoT hub and receives cloud-to-device messages.
+* **sample-device**: the same app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md), which connects to your IoT hub and receives cloud-to-device messages.
> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. For step-by-step instructions on how to connect your device to this article's code, and generally to Azure IoT Hub, see the [Azure IoT Developer Center](https://www.azure.com/develop/iot).
+> IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-## Prerequisites
+You can find more information on cloud-to-device messages in the [messaging section of the IoT Hub developer guide](iot-hub-devguide-messaging.md).
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
+## Prerequisites
* An active IoT hub in Azure.
iot-hub Iot Hub Java Java C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-c2d.md
[!INCLUDE [iot-hub-selector-c2d](../../includes/iot-hub-selector-c2d.md)]
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end. The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) quickstart shows how to create an IoT hub, provision a device identity in it, and code a simulated device app that sends device-to-cloud messages.
+Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article builds on [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java). It shows you how to do the following:
+This article shows you how to:
-* From your solution back end, send cloud-to-device messages to a single device through IoT Hub.
+* Send cloud-to-device messages from your solution back end to a single device through IoT Hub
-* Receive cloud-to-device messages on a device.
+* Receive cloud-to-device messages on a device
-* From your solution back end, request delivery acknowledgment (*feedback*) for messages sent to a device from IoT Hub.
+* Request delivery acknowledgment (*feedback*) in your solution back end for messages sent to a device from IoT Hub
-You can find more information on [cloud-to-device messages in the IoT Hub developer guide](iot-hub-devguide-messaging.md).
At the end of this article, you run two Java console apps:
-* **simulated-device**, a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java), which connects to your IoT hub and receives cloud-to-device messages.
+* **simulated-device**: a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java), which connects to your IoT hub and receives cloud-to-device messages.
-* **send-c2d-messages**, which sends a cloud-to-device message to the simulated device app through IoT Hub, and then receives its delivery acknowledgment.
+* **send-c2d-messages**: sends a cloud-to-device message to the device app through IoT Hub and then receives its delivery acknowledgment.
> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. For step-by-step instructions on how to connect your device to this article's code, and generally to Azure IoT Hub, see the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot).
+> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
+
+You can find more information on [cloud-to-device messages in the IoT Hub developer guide](iot-hub-devguide-messaging.md).
## Prerequisites
-* A complete working version of the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) quickstart or the [Configure message routing with IoT Hub](tutorial-routing.md) article.
+* A complete working version of the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) quickstart or the [Configure message routing with IoT Hub](tutorial-routing.md) article. This cloud-to-device article builds on the quickstart.
* [Java SE Development Kit 8](/java/azure/jdk/). Make sure you select **Java 8** under **Long-term support** to get to downloads for JDK 8. * [Maven 3](https://maven.apache.org/download.cgi)
-* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
- * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Receive messages in the simulated device app
-In this section, you modify the simulated device app you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) to receive cloud-to-device messages from the IoT hub.
+In this section, you modify your device app to receive cloud-to-device messages from the IoT hub.
1. Using a text editor, open the simulated-device\src\main\java\com\mycompany\app\App.java file.
With AMQP and HTTPS, but not MQTT, the device can also:
If something happens that prevents the device from completing, abandoning, or rejecting the message, IoT Hub will, after a fixed timeout period, queue the message for delivery again. For this reason, the message processing logic in the device app must be *idempotent*, so that receiving the same message multiple times produces the same result.
-For more detailed information about how IoT Hub processes cloud-to-device messages, including details of the cloud-to-device message lifecycle, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
> [!NOTE] > If you use HTTPS instead of MQTT or AMQP as the transport, the **DeviceClient** instance checks for messages from IoT Hub infrequently (a minimum of every 25 minutes). For more information about the differences between MQTT, AMQP, and HTTPS support, see [Cloud-to-device communications guidance](iot-hub-devguide-c2d-guidance.md) and [Choose a communication protocol](iot-hub-devguide-protocols.md). ## Get the IoT hub connection string
-In this article you create a backend service to send cloud-to-device messages through the IoT hub you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java). To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
+In this article you create a backend service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
[!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)] ## Send a cloud-to-device message
-In this section, you create a Java console app that sends cloud-to-device messages to the simulated device app. You need the device ID of the device you added in the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) quickstart. You also need the the IoT hub connection string you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
+In this section, you create a Java console app that sends cloud-to-device messages to the simulated device app. You need your device ID and your IoT hub connection string.
1. Create a Maven project called **send-c2d-messages** using the following command at your command prompt. Note this command is a single, long command:
In this section, you create a Java console app that sends cloud-to-device messag
``` > [!NOTE]
- > For simplicity, this article does not implement any retry policy. In production code, you should implement retry policies (such as exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
+ > For simplicity, this article does not implement a retry policy. In production code, you should implement retry policies (such as exponential backoff) as suggested in the article [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
9. To build the **simulated-device** app using Maven, execute the following command at the command prompt in the simulated-device folder:
iot-hub Iot Hub Node Node C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-c2d.md
[!INCLUDE [iot-hub-selector-c2d](../../includes/iot-hub-selector-c2d.md)]
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end. The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart shows how to create an IoT hub, provision a device identity in it, and code a simulated device app that sends device-to-cloud messages.
+Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
+This article shows you how to:
-This article builds on [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs). It shows you how to:
+* Send cloud-to-device messages from your solution back end to a single device through IoT Hub
-* From your solution back end, send cloud-to-device messages to a single device through IoT Hub.
-* Receive cloud-to-device messages on a device.
-* From your solution back end, request delivery acknowledgment (*feedback*) for messages sent to a device from IoT Hub.
+* Receive cloud-to-device messages on a device
-You can find more information on cloud-to-device messages in the [IoT Hub developer guide](iot-hub-devguide-messaging.md).
+* Request delivery acknowledgment (*feedback*) in your solution back end for messages sent to a device from IoT Hub
+ At the end of this article, you run two Node.js console apps:
-* **SimulatedDevice**, a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs), which connects to your IoT hub and receives cloud-to-device messages.
+* **SimulatedDevice**: a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs), which connects to your IoT hub and receives cloud-to-device messages.
-* **SendCloudToDeviceMessage**, which sends a cloud-to-device message to the simulated device app through IoT Hub, and then receives its delivery acknowledgment.
+* **SendCloudToDeviceMessage**: sends a cloud-to-device message to the device app through IoT Hub and then receives its delivery acknowledgment.
> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through Azure IoT device SDKs. For step-by-step instructions on how to connect your device to this article's code, and generally to Azure IoT Hub, see the [Azure IoT Developer Center](https://azure.microsoft.com/develop/iot).
->
+> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
+
+You can find more information on cloud-to-device messages in the [IoT Hub developer guide](iot-hub-devguide-messaging.md).
## Prerequisites
-* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
+* A complete working version of the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart or the [Configure message routing with IoT Hub](tutorial-routing.md) article. This article builds on the quickstart.
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial) in just a couple of minutes.)
+* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Receive messages in the simulated device app
-In this section, you modify the simulated device app you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) to receive cloud-to-device messages from the IoT hub.
+In this section, you modify your device app to receive cloud-to-device messages from the IoT hub.
1. Using a text editor, open the **SimulatedDevice.js** file. This file is located in the **iot-hub\Quickstarts\simulated-device** folder off of the root folder of the Node.js sample code you downloaded in the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart.
With AMQP and HTTPS, but not MQTT, the device can also:
If something happens that prevents the device from completing, abandoning, or rejecting the message, IoT Hub will, after a fixed timeout period, queue the message for delivery again. For this reason, the message processing logic in the device app must be *idempotent*, so that receiving the same message multiple times produces the same result.
-For more detailed information about how IoT Hub processes cloud-to-device messages, including details of the cloud-to-device message lifecycle, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
> [!NOTE] > If you use HTTPS instead of MQTT or AMQP as the transport, the **DeviceClient** instance checks for messages from IoT Hub infrequently (a minimum of every 25 minutes). For more information about the differences between MQTT, AMQP, and HTTPS support, see [Cloud-to-device communications guidance](iot-hub-devguide-c2d-guidance.md) and [Choose a communication protocol](iot-hub-devguide-protocols.md).
->
## Get the IoT hub connection string
-In this article, you create a backend service to send cloud-to-device messages through the IoT hub you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs). To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
+In this article, you create a backend service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
[!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)] ## Send a cloud-to-device message
-In this section, you create a Node.js console app that sends cloud-to-device messages to the simulated device app. You need the device ID of the device you added in the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) quickstart. You also need the IoT hub connection string you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
+In this section, you create a Node.js console app that sends cloud-to-device messages to the simulated device app. You need your device ID and your IoT hub connection string.
1. Create an empty folder called **sendcloudtodevicemessage**. In the **sendcloudtodevicemessage** folder, create a package.json file using the following command at your command prompt. Accept all the defaults:
iot-hub Iot Hub Python Python C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-c2d.md
[!INCLUDE [iot-hub-selector-c2d](../../includes/iot-hub-selector-c2d.md)]
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end. The [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) quickstart shows how to create an IoT hub, provision a device identity in it, and code a simulated device app that sends device-to-cloud messages.
+Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article builds on [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python). It shows you how to:
+This article shows you how to:
-* From your solution back end, send cloud-to-device messages to a single device through IoT Hub.
+* Send cloud-to-device messages from your solution back end to a single device through IoT Hub
-* Receive cloud-to-device messages on a device.
+* Receive cloud-to-device messages on a device
-You can find more information on cloud-to-device messages in the [IoT Hub developer guide](iot-hub-devguide-messaging.md).
At the end of this article, you run two Python console apps:
-* **SimulatedDevice.py**, a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python), which connects to your IoT hub and receives cloud-to-device messages.
+* **SimulatedDevice.py**: a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python), which connects to your IoT hub and receives cloud-to-device messages.
-* **SendCloudToDeviceMessage.py**, which sends cloud-to-device messages to the simulated device app through IoT Hub.
+* **SendCloudToDeviceMessage.py**: sends cloud-to-device messages to the simulated device app through IoT Hub.
[!INCLUDE [iot-hub-include-python-sdk-note](../../includes/iot-hub-include-python-sdk-note.md)]
+You can find more information on cloud-to-device messages in the [IoT Hub developer guide](iot-hub-devguide-messaging.md).
+ ## Prerequisites
+* A complete working version of the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) quickstart or the [Configure message routing with IoT Hub](tutorial-routing.md) article. This cloud-to-device article builds on the quickstart.
+ [!INCLUDE [iot-hub-include-python-v2-installation-notes](../../includes/iot-hub-include-python-v2-installation-notes.md)] * Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Receive messages in the simulated device app
-In this section, you create a Python console app to simulate the device and receive cloud-to-device messages from the IoT hub.
+In this section, you create a Python console app to simulate a device and receive cloud-to-device messages from the IoT hub.
1. From a command prompt in your working directory, install the **Azure IoT Hub Device SDK for Python**:
In this section, you create a Python console app to simulate the device and rece
1. Save and close the **SimulatedDevice.py** file.
+For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
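As a point of reference, the receive logic that SimulatedDevice.py builds up can be sketched like this with the `azure-iot-device` package (the connection string is a placeholder for your own device's value):

```python
from azure.iot.device import IoTHubDeviceClient

CONNECTION_STRING = "<your device connection string>"  # placeholder

def message_handler(message):
    # message.data holds the payload bytes; custom application properties
    # arrive in message.custom_properties.
    print("Message received:", message.data)
    print("Properties:", message.custom_properties)

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.on_message_received = message_handler

try:
    input("Waiting for cloud-to-device messages, press Enter to exit\n")
finally:
    client.shutdown()
```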
+ ## Get the IoT hub connection string
-In this article, you create a backend service to send cloud-to-device messages through the IoT hub you created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python). To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
+In this article, you create a backend service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
[!INCLUDE [iot-hub-include-find-service-connection-string](../../includes/iot-hub-include-find-service-connection-string.md)] ## Send a cloud-to-device message
-In this section, you create a Python console app that sends cloud-to-device messages to the simulated device app. You need the device ID of the device you added in the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) quickstart. You also need the IoT hub connection string you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string).
+In this section, you create a Python console app that sends cloud-to-device messages to the simulated device app. You need your device ID and your IoT hub connection string.
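Before you walk through the numbered steps, here's a minimal sketch of where they end up, using the `azure-iot-hub` service SDK (the connection string and device ID are placeholders); the steps below build the full app.

```python
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "<your IoT hub service connection string>"  # placeholder
DEVICE_ID = "<your device id>"                                         # placeholder

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Queue a cloud-to-device message for the target device; optional application
# properties travel alongside the payload.
registry_manager.send_c2d_message(DEVICE_ID, "Cloud-to-device message", properties={"prop1": "value1"})
```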
1. In your working directory, open a command prompt and install the **Azure IoT Hub Service SDK for Python**.
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys-details.md
Key Vault, including Managed HSM, supports the following operations on key objec
- **Get**: Allows a client to retrieve the public parts of a given key in a Key Vault. - **Backup**: Exports a key in a protected form. - **Restore**: Imports a previously backed up key.-- **Release**: It securly releases a key to authorized code running within confidential compute. It requires release policy generated within hardware-based Trusted Execution Environment (TEE).
+- **Release**: Securely releases a key to authorized code running within a confidential compute environment. It requires an attestation that the Trusted Execution Environment (TEE) meets the requirements of the key's release_policy.
- **Rotate**: Rotate an existing key by generating new version of the key (Key Vault only). For more information, see [Key operations in the Key Vault REST API reference](/rest/api/keyvault).
The following permissions can be granted, on a per user / service principal basi
- Permissions for privileged operations - *purge*: Purge (permanently delete) a deleted key
- - *release*: Release a key to confidential compute workloads
+ - *release*: Release a key to a confidential compute environment which matches the release_policy of the key
- Permissions for rotation policy operations - *rotate*: Rotate an existing key by generating new version of the key (Key Vault only)
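For example, a caller granted the *rotate* permission can invoke the Rotate operation on demand; here's a minimal sketch with the Python `azure-keyvault-keys` client (the vault URL and key name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",  # placeholder
    credential=DefaultAzureCredential(),
)

# Rotate creates and returns a new version of the key (Key Vault only).
rotated_key = client.rotate_key("<your-key-name>")
print("New key version:", rotated_key.properties.version)
```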
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
lab-services Add Lab Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/add-lab-creator.md
To provide educators the permission to create labs for their classes, add them t
1. On the **Lab Plan** resource, select **Access control (IAM)**
-1. Select **Add** > **Add role assignment (Preview)**.
+1. Select **Add** > **Add role assignment**.
![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
Title: Administrator guide | Microsoft Docs description: This guide helps administrators who create and manage lab plans by using Azure Lab Services. Previously updated : 01/22/2022 Last updated : 07/04/2022+ # Azure Lab Services - Administrator guide
Last updated 01/22/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] > [!NOTE]
-> If using a version of Azure Lab Services prior to the [April 2022 Update (preview)](lab-services-whats-new.md), see [Administrator guide when using lab accounts](administrator-guide-1.md).
+> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Administrator guide when using lab accounts](administrator-guide-1.md).
Information technology (IT) administrators who manage a university's cloud resources are ordinarily responsible for setting up the lab plan for their school. After they have set up a lab plan, administrators or educators create the labs that are associated with the lab plan. This article provides a high-level overview of the Azure resources that are involved and the guidance for creating them.
When you create a lab plan, you must configure the resource group that contains
A resource group is also required when you create an [Azure Compute Gallery](#azure-compute-gallery). You can place your lab plan and compute gallery in the same resource group or in two separate resource groups. You might want to take this second approach if you plan to share the compute gallery across various solutions.
-We recommend that you invest time up front to plan the structure of your resource groups. It's *not* possible to change a lab plan or compute gallery resource group once itΓÇÖs created. If you need to change the resource group for these resources, youΓÇÖll need to delete and re-create them.
+We recommend that you invest time up front to plan the structure of your resource groups. It's *not* possible to change a lab plan or compute gallery resource group once it's created. If you need to change the resource group for these resources, you'll need to delete and re-create them.
## Lab plan
-A lab plan set of configurations that influence the creation of a lab. A lab plan can be associated with zero or more labs. When youΓÇÖre getting started with Azure Lab Services, itΓÇÖs most common to have a single lab plan. As your lab usage scales up, you can choose to create more lab plans later.
+A lab plan is a set of configurations that influence the creation of a lab. A lab plan can be associated with zero or more labs. When you're getting started with Azure Lab Services, it's most common to have a single lab plan. As your lab usage scales up, you can choose to create more lab plans later.
The following list highlights scenarios where more than one lab plan might be beneficial:
The following list highlights scenarios where more than one lab plan might be be
A lab contains VMs that are each assigned to a single student. In general, you can expect to: - Have one lab for each class.-- Create a new set of labs for each semester, quarter, or other academic system youΓÇÖre using. For classes that need to use the same image, you should use a [compute gallery](#azure-compute-gallery). This way, you can reuse images across labs and academic periods.
+- Create a new set of labs for each semester, quarter, or other academic system you're using. For classes that need to use the same image, you should use a [compute gallery](#azure-compute-gallery). This way, you can reuse images across labs and academic periods.
-When youΓÇÖre determining how to structure your labs, consider the following points:
+When you're determining how to structure your labs, consider the following points:
- **All VMs within a lab are deployed with the same image that's published.**
When youΓÇÖre determining how to structure your labs, consider the following poi
- **The usage quota is set at the lab level and applies to all users within the lab**
- To set different quotas for users, you must create separate labs. However, itΓÇÖs possible to add more hours to specific users after you have set the quota.
+ To set different quotas for users, you must create separate labs. However, it's possible to add more hours to specific users after you have set the quota.
- **The startup or shutdown schedule is set at the lab level and applies to all VMs within the lab**
An Azure Compute Gallery is attached to a lab plan and serves as a central repos
Educators can publish an image version from the compute gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the most recent version during lab creation. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. For more information about versioning, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions).
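To make the version ordering concrete, here's a small sketch with made-up version numbers showing how "most recent" is determined by MajorVersion, then MinorVersion, then Patch:

```powershell
# Made-up image version numbers for illustration; [version] compares Major, then Minor,
# then Build (the "Patch" position), which mirrors the selection rule described above.
$imageVersions = '1.2.10', '1.10.0', '2.0.1' | ForEach-Object { [version]$_ }

# The version educators get at lab creation is the highest one; here that's 2.0.1.
($imageVersions | Sort-Object -Descending | Select-Object -First 1).ToString()
```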
-The compute gallery is an optional resource that you might not need immediately if youΓÇÖre starting with only a few labs. However, a compute gallery offers many benefits that are helpful as you scale up to more labs:
+The compute gallery is an optional resource that you might not need immediately if you're starting with only a few labs. However, a compute gallery offers many benefits that are helpful as you scale up to more labs:
- **You can save and manage versions of a template VM image**
- ItΓÇÖs useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, itΓÇÖs common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to the compute gallery. You can then use these image versions when you create new labs.
+ It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to the compute gallery. You can then use these image versions when you create new labs.
- **You can share and reuse template VM images across labs**
- You can save and reuse an image so that you donΓÇÖt have to configure it from scratch each time that you create a new lab. For example, if multiple classes need to use the same image, you can create it once and export it to the compute gallery so that it can be shared across labs.
+ You can save and reuse an image so that you don't have to configure it from scratch each time that you create a new lab. For example, if multiple classes need to use the same image, you can create it once and export it to the compute gallery so that it can be shared across labs.
- **You can upload your own custom images from other environments outside of labs**
For more information about naming other Azure resources, see [Naming conventions
## Regions
-When you set up your Azure Lab Services resources, youΓÇÖre required to provide a region or location of the data center that will host the resources. Lab plans can enable one or more regions in which labs may be created. The next sections describe how a region or location might affect each resource that is involved with setting up a lab.
+When you set up your Azure Lab Services resources, you're required to provide a region or location of the data center that will host the resources. Lab plans can enable one or more regions in which labs may be created. The next sections describe how a region or location might affect each resource that is involved with setting up a lab.
- **Resource group**. The region specifies the datacenter where information about a resource group is stored. Azure resources contained within the resource group can be in a different region from that of their parent. - **Lab plan**. A lab plan's location indicates the region that a resource exists in. When a lab plan is connected to your own virtual network, the network must be in the same region as the lab plan. Also, labs will be created in the same Azure region as that virtual network.-- **Lab**. The location that a lab exists in varies, and doesnΓÇÖt need to be in the same location as the lab plan. Administrators control which regions labs can be created in through the lab plan settings. A general rule is to set a resource's region to one that is closest to its users. For labs, this means creating the lab that is closest to your students. For online courses whose students are located all over the world, use your best judgment to create a lab that is centrally located. Or you can split a class into multiple labs according to your students' regions.
+- **Lab**. The location that a lab exists in varies, and doesn't need to be in the same location as the lab plan. Administrators control which regions labs can be created in through the lab plan settings. A general rule is to set a resource's region to one that is closest to its users. For labs, this means creating the lab that is closest to your students. For online courses whose students are located all over the world, use your best judgment to create a lab that is centrally located. Or you can split a class into multiple labs according to your students' regions.
> [!NOTE] > To help ensure that a region has sufficient VM capacity, it's important to first [request capacity](capacity-limits.md#request-a-limit-increase).
By using [Azure role-based access control (RBAC)](../role-based-access-control/o
- Change the lab plan settings. - Create and manage all labs in the lab plan.
- However, the Contributor *canΓÇÖt* grant other users access to either lab plans or labs.
+ However, the Contributor *can't* grant other users access to either lab plans or labs.
- **Lab Creator**
- When set on the lab plan, this role enables the user account to create labs from the lab plan. The user account can also see existing labs that are in the same resource group as the lab plan. When applied to a resource group, this role enables the user to view existing lab and create new labs. TheyΓÇÖll have full control over any labs they create as theyΓÇÖre assigned as Owner to those created labs. For more information, see [Add a user to the Lab Creator role](./tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role).
+ When set on the lab plan, this role enables the user account to create labs from the lab plan. The user account can also see existing labs that are in the same resource group as the lab plan. When applied to a resource group, this role enables the user to view existing labs and create new labs. They'll have full control over any labs they create because they're assigned as Owner of those labs. For more information, see [Add a user to the Lab Creator role](./tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role).
- **Lab Contributor** When applied to an existing lab, this role enables the user to fully manage the lab. When applied to a resource group, this role enables the user account to fully manage existing labs and create new labs in that resource group.
- A key difference between the lab Owner and Contributor roles is that only an Owner can grant other users access to manage a lab. A Contributor *canΓÇÖt* grant other users access to manage a lab.
+ A key difference between the lab Owner and Contributor roles is that only an Owner can grant other users access to manage a lab. A Contributor *can't* grant other users access to manage a lab.
- **Lab Operator**
- When applied to a resource group or a lab, this role enables the user to have limited ability to manage existing labs. This role wonΓÇÖt give the user the ability to create new labs. In an existing lab, the user can manage users, adjust individual usersΓÇÖ quota, manage schedules, and start/stop VMs. The user account will be able to publish a lab. The user wonΓÇÖt have the ability to change lab capacity or change quota at the lab level. The user wonΓÇÖt be able to change the template title or description.
+ When applied to a resource group or a lab, this role enables the user to have limited ability to manage existing labs. This role won't give the user the ability to create new labs. In an existing lab, the user can manage users, adjust individual users' quota, manage schedules, and start/stop VMs. The user account will be able to publish a lab. The user won't have the ability to change lab capacity or change quota at the lab level. The user won't be able to change the template title or description.
- **Lab Assistant**
By using [Azure role-based access control (RBAC)](../role-based-access-control/o
- **Lab Services Reader**
- When applied to a resource group, enables the user to view, but not change, all lab plans and lab resources. External resources like image galleries and virtual networks that may be connected to a lab plan arenΓÇÖt included.
+ When applied to a resource group, enables the user to view, but not change, all lab plans and lab resources. External resources like image galleries and virtual networks that may be connected to a lab plan aren't included.
-When youΓÇÖre assigning roles, it helps to follow these tips:
+When you're assigning roles, it helps to follow these tips:
- Ordinarily, only administrators should be members of a lab plan Owner or Contributor role. The lab plan might have more than one Owner or Contributor. - To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role.-- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that theyΓÇÖll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab.
+- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab.
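These role assignments can also be scripted. The following is a minimal sketch that grants an educator the built-in Lab Creator role at the lab plan scope; the sign-in name, resource group, and lab plan name are placeholders.

```powershell
# Minimal sketch, assuming the Az and Az.LabServices modules and an account that is
# allowed to assign roles. The user, resource group, and lab plan names are placeholders.
$labPlan = Get-AzLabServicesLabPlan -Name 'lp-engineering' -ResourceGroupName 'rg-labs-engineering'

# Let the educator create labs from this lab plan; they become Owner of labs they create.
New-AzRoleAssignment -SignInName 'educator@contoso.com' `
    -RoleDefinitionName 'Lab Creator' `
    -Scope $labPlan.Id
```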
## Content filtering
-Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesnΓÇÖt offer built-in support for content filtering.
+Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering.
There are two approaches that schools typically consider for content filtering:

- Configure a firewall to filter content at the network level.
- Install third-party software directly on each computer that performs content filtering.
-By default, Azure Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. YouΓÇÖll need to use [advanced networking](how-to-connect-vnet-injection.md) in the lab plan. Make sure to check known limitations of VNet injection before proceeding.
+By default, Azure Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. To filter content at the network level with your own firewall, you'll need to use [advanced networking](how-to-connect-vnet-injection.md) in the lab plan. Make sure to check the known limitations of VNet injection before proceeding.
We recommend the second approach, which is to install third-party software on each lab's template VM. There are a few key points to highlight as part of this solution: -- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), youΓÇÖll need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings will fail to enable for the lab.-- You may also want to have each student use a non-admin account on their VM so that they canΓÇÖt uninstall the content filtering software. Adding a non-admin account must be done when creating the lab.
+- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you'll need to unblock several Azure host names in the third-party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, the auto-shutdown settings can't be enabled for the lab.
+- You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. Adding a non-admin account must be done when creating the lab.
If your school needs to do content filtering, contact us via the [Azure Lab Services' Q&A](https://aka.ms/azlabs/questions) for more information.
If your school needs to do content filtering, contact us via the [Azure Lab Serv
Many endpoint management tools, such as [Microsoft Endpoint Manager](https://techcommunity.microsoft.com/t5/azure-lab-services/configuration-manager-azure-lab-services/ba-p/1754407), require Windows VMs to have unique machine security identifiers (SIDs). Using SysPrep to create a *generalized* image typically ensures that each Windows machine will have a new, unique machine SID generated when the VM boots from the image.
-With Lab Services, if you create a lab with a template, the lab VMs will have the same SID. Even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when itΓÇÖs published to create the student VMs.
+With Lab Services, if you create a lab with a template, the lab VMs will have the same SID. Even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when it's published to create the student VMs.
To obtain lab VMs with unique SIDs, create a lab without a template VM. You must use a *generalized* image from the Azure Marketplace or an attached Azure Compute Gallery. To use your own Azure Compute Gallery, see [Attach or detach a compute gallery in Azure Lab Services](how-to-attach-detach-shared-image-gallery.md). The machine SIDs can be verified by using a tool such as [PsGetSid](/sysinternals/downloads/psgetsid).
-If you plan to use an endpoint management tool or similar software, we recommend that you donΓÇÖt use template VMs for your labs.
+If you plan to use an endpoint management tool or similar software, we recommend that you don't use template VMs for your labs.
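To spot-check whether lab VMs ended up with unique SIDs, you can run PsGetSid on two or more lab VMs and compare the output. A minimal sketch, assuming the tool has been copied to a hypothetical `C:\Tools` folder on the VM:

```powershell
# Print this VM's machine SID using Sysinternals PsGetSid (run with no arguments).
# The C:\Tools path is only an example location for the downloaded tool.
& 'C:\Tools\PsGetSid.exe'

# Repeat on another VM in the same lab. Matching SIDs indicate the VMs were published
# from a template VM rather than directly from a generalized image.
```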
## Pricing
Billing entries in Azure Cost Management are per lab VM. Tags for lab plan ID an
You also need to consider the pricing for the compute gallery service if you plan to use compute galleries for storing and managing image versions.
-Creating a compute gallery and attaching it to your lab plan is free. No cost is incurred until you save an image version to the gallery. The pricing for using a compute gallery is ordinarily fairly negligible, but itΓÇÖs important to understand how itΓÇÖs calculated, because it isnΓÇÖt included in the pricing for Azure Lab Services.
+Creating a compute gallery and attaching it to your lab plan is free. No cost is incurred until you save an image version to the gallery. The pricing for using a compute gallery is ordinarily fairly negligible, but it's important to understand how it's calculated, because it isn't included in the pricing for Azure Lab Services.
#### Storage charges
To store image versions, a compute gallery uses standard hard disk drive (HDD) m
#### Replication and network egress charges
-When you save an image version by using a lab template VM, Azure Lab Services first stores it in a source region. However, youΓÇÖll most likely need to replicate the source image version to one or more target regions.
+When you save an image version by using a lab template VM, Azure Lab Services first stores it in a source region. However, you'll most likely need to replicate the source image version to one or more target regions.
A network egress charge occurs when an image version is replicated from the source region to other target regions. The amount charged is based on the size of the image version when the image's data is initially transferred outbound from the source region. For pricing details, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/).
For information about costs to store images and their replications, see [billing
#### Cost management
-ItΓÇÖs important for lab plan administrators to manage costs by routinely deleting unneeded image versions from the gallery.
+It's important for lab plan administrators to manage costs by routinely deleting unneeded image versions from the gallery.
Be wary of removing replication to specific regions as a way to reduce the costs. Replication changes might have adverse effects on the ability of Azure Lab Services to publish VMs from images saved within a compute gallery.
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md
Title: Capacity limits in Azure Lab Services description: Learn about VM capacity limits in Azure Lab Services. Previously updated : 02/01/2022 Last updated : 07/04/2022+ # Capacity limits in Azure Lab Services Azure Lab Services sets default capacity limits on Azure subscriptions to adhere to Azure Compute quota limitations and to mitigate fraud. All Azure subscriptions will have an initial capacity limit, which can vary based on subscription type, number of standard compute cores, and GPU cores available inside Azure Lab Services. This limit restricts how many virtual machines you can create inside your lab before you need to request a limit increase.
-If youΓÇÖre close to or have reached your subscriptionΓÇÖs core limit, youΓÇÖll see messages from Azure Lab Services. Actions that are affected by core limits include:
+If you're close to or have reached your subscription's core limit, you'll see messages from Azure Lab Services. Actions that are affected by core limits include:
- Create a lab - Publish a lab
These actions may be disabled if there are no more cores that can be enabled for you
## Request a limit increase
-If you reach the cores limit, you can request a limit increase to continue using Azure Lab Services. The request process is a checkpoint to ensure your subscription isnΓÇÖt involved in any cases of fraud or unintentional, sudden large-scale deployments.
+If you reach the cores limit, you can request a limit increase to continue using Azure Lab Services. The request process is a checkpoint to ensure your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
To create a support request, you must be an [Owner](../role-based-access-control/built-in-roles.md), [Contributor](../role-based-access-control/built-in-roles.md), or be assigned to the [Support Request Contributor](../role-based-access-control/built-in-roles.md) role at the subscription level. For information about creating support requests in general, see [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
The admin can follow these steps to request a limit increase:
1. On the **Details** page, enter the following information in the **Description** field.
    - VM size. For size details, see [VM sizing](administrator-guide.md#vm-sizing).
    - Number of VMs.
- - Location. Location will be a [geography](https://azure.microsoft.com/global-infrastructure/geographies/#geographies) or region, if using the [April 2022 Update (preview)](lab-services-whats-new.md).
+ - Location. Location will be a [geography](https://azure.microsoft.com/global-infrastructure/geographies/#geographies) or region, if using the [August 2022 Update](lab-services-whats-new.md).
1. Under **Advanced diagnostic information**, select **No**.
1. Under the **Support method** section, select your preferred contact method. Verify contact information is correct.
1. Select **Next: Review + create**.
1. On the **Review + create** page, select **Create** to submit the support request.
-Once you submit the support request, weΓÇÖll review the request. If necessary, weΓÇÖll contact you to get more details.
+Once you submit the support request, we'll review the request. If necessary, we'll contact you to get more details.
## Subscriptions with default limit of zero cores
-Some rare subscription types that are more commonly used for fraud can have a default limit of zero standard cores and zero GPU cores. If youΓÇÖre using one of these subscription types, your admin needs to request a limit increase before you can use Azure Lab Services.
+Some rare subscription types that are more commonly used for fraud can have a default limit of zero standard cores and zero GPU cores. If you're using one of these subscription types, your admin needs to request a limit increase before you can use Azure Lab Services.
## Per-customer assigned capacity
-Azure Lab Services hosts lab resources, including VMs, within special Microsoft-managed Azure subscriptions that arenΓÇÖt visible to customers. With the [April 2022 Update (preview)](lab-services-whats-new.md), VM capacity is dedicated to each customer. Previous to this update, VM capacity was available from a large pool shared by customers.
+Azure Lab Services hosts lab resources, including VMs, within special Microsoft-managed Azure subscriptions that aren't visible to customers. With the [August 2022 Update](lab-services-whats-new.md), VM capacity is dedicated to each customer. Previous to this update, VM capacity was available from a large pool shared by customers.
Before you set up a large number of VMs across your labs, we recommend that you open a support ticket to pre-request VM capacity. Requests should include VM size, number, and location. Requesting capacity before lab creation helps us to ensure that you create your labs in a region that has a sufficient number of VM cores for the VM size that you need for your labs.
lab-services Class Type Deep Learning Natural Language Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-deep-learning-natural-language-processing.md
Title: Set up a lab focused on deep learning using Azure Lab Services | Microsoft Docs description: Learn how to set up a lab focused on deep learning in natural language processing (NLP) using Azure Lab Services. Previously updated : 04/06/2022 Last updated : 07/04/2022
For instructions on how to create a lab, see [Tutorial: Set up a lab](tutorial-s
| Lab settings | Value |
| -- | -- |
| Virtual machine (VM) size | **Small GPU (Compute)**. This size is best suited for compute-intensive and network-intensive applications like Artificial Intelligence and Deep Learning. |
-| VM image | [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804). This image provides deep learning frameworks and tools for machine learning and data science. To view the full list of installed tools on this image, see [WhatΓÇÖs included on the DSVM?](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). |
+| VM image | [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804). This image provides deep learning frameworks and tools for machine learning and data science. To view the full list of installed tools on this image, see [What's included on the DSVM?](../machine-learning/data-science-virtual-machine/overview.md#whats-included-on-the-dsvm). |
| Enable remote desktop connection | Optionally, check **Enable remote desktop connection**. The Data Science image is already configured to use X2Go so that teachers and students can connect using a GUI remote desktop. X2Go *doesn't* require the **Enable remote desktop connection** setting to be enabled. |
-| Template Virtual Machine Settings | Optionally, choose **Use a virtual machine image without customization**. If you're using the [April 2022 Update (preview)](lab-services-whats-new.md) and the DSVM has all the tools that your class requires, you can skip the template customization step. |
+| Template Virtual Machine Settings | Optionally, choose **Use a virtual machine image without customization**. If you're using the [August 2022 Update](lab-services-whats-new.md) and the DSVM has all the tools that your class requires, you can skip the template customization step. |
> [!IMPORTANT] > We recommend that you use the X2Go with the Data Science image. However, if you choose to use RDP instead, you'll need to connect to the Linux VM using SSH and install the RDP and GUI packages before publishing the lab. Then, students can connect to the Linux VM using RDP later. For more information, see [Enable graphical remote desktop for Linux VMs](how-to-enable-remote-desktop-linux.md).
lab-services Classroom Labs Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-concepts.md
Title: Labs concepts - Azure Lab Services | Microsoft Docs description: Learn the basic concepts of Lab Services, and how it can make it easy to create and manage labs. Previously updated : 01/27/2022 Last updated : 07/04/2022+ # Labs concepts
For more information, see [Configure automatic shutdown of VMs for a lab plan](h
A template VM in a lab is a base image from which all students' VMs are created. Educators configure the template VM with the software needed to complete the lab. When educators [publish a template VM](tutorial-setup-lab.md#publish-a-lab), Azure Lab Services creates or updates student lab VMs to match the template VM.
-Labs can be created without needing a template VM, if using the [April 2022 Update (preview)](lab-services-whats-new.md). The Marketplace or Azure Compute Gallery image is used as-is to create the student's VMs.
+Labs can be created without needing a template VM, if using the [August 2022 Update](lab-services-whats-new.md). The Marketplace or Azure Compute Gallery image is used as-is to create the student's VMs.
## Lab plans
Lab plans are an Azure resource and contain settings used when creating new labs
## User profiles
-Azure Lab Services was designed with three major personas in mind: administrators, educators, and students. You'll see these three roles mentioned throughout Azure Lab Services documentation. This section describes each persona and the tasks theyΓÇÖre typically responsible for.
+Azure Lab Services was designed with three major personas in mind: administrators, educators, and students. You'll see these three roles mentioned throughout Azure Lab Services documentation. This section describes each persona and the tasks they're typically responsible for.
### Administrator
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
Azure Lab Services is a SaaS (software as a service) solution, which means that
Azure Lab Services does provide a couple of areas that allow you to use your own resources with Lab Services. For more information about using VMs on your own network, see [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) to use virtual network injection instead of virtual network peering. To reuse images from an Azure Compute Gallery, see how to [attach a compute gallery](how-to-attach-detach-shared-image-gallery.md).
-Below is the basic architecture of a lab. The lab plan is hosted in your subscription. The student VMs, along with the resources needed to support the VMs are hosted in a subscription owned by Azure Lab Services. LetΓÇÖs talk about what is in Azure Lab Service's subscriptions in more detail.
+Below is the basic architecture of a lab without advanced networking enabled. The lab plan is hosted in your subscription. The student VMs, along with the resources needed to support the VMs, are hosted in a subscription owned by Azure Lab Services. Let's talk about what is in Azure Lab Service's subscriptions in more detail.
:::image type="content" source="./media/classroom-labs-fundamentals/labservices-basic-architecture.png" alt-text="Architecture diagram of basic lab in Azure Lab Services.":::
These subscriptions are monitored for suspicious activity. It's important to no
## Virtual Network
-Each lab is isolated by its own virtual network. If the lab is using [advanced networking](how-to-connect-vnet-injection.md), then each lab using the same subnet that has been delegated to Azure Lab Services and connected to the lab plan.
+By default, each lab is isolated by its own virtual network.
Students connect to their virtual machine through a load balancer. No student virtual machines have a public IP address; they only have a private IP address. The connection string for the student will be the public IP address of the load balancer and a random port between:
Students connect to their virtual machine through a load balancer. No student v
Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the appropriate virtual machine. An NSG prevents outside traffic on any other ports.
+If the lab is using [advanced networking](how-to-connect-vnet-injection.md), then each lab is using the same subnet that has been delegated to Azure Lab Services and connected to the lab plan. You'll also be responsible for creating an [NSG with an inbound security rule to allow RDP and SSH traffic](how-to-connect-vnet-injection.md#associate-delegated-subnet-with-nsg) so students can connect to their VMs.
+ ## Access control to the virtual machines

Lab Services handles the student's ability to perform actions like start and stop on their virtual machines. It also controls access to their VM connection information.
lab-services Cost Management Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/cost-management-guide.md
Title: Cost management guide for Azure Lab Services
description: Understand the different ways to view costs for Lab Services. Previously updated : 02/03/2022 Last updated : 07/04/2022 + # Cost management for Azure Lab Services
The cost analysis is for reviewing the previous month's usage to help you determ
The Cost analysis dashboard allows in-depth cost analysis, including the ability to export to different file types on a schedule. For more information, see [Cost Management + Billing overview](../cost-management-billing/cost-management-billing-overview.md).
-You can filter by service or resource type. To see only costs associated with Azure Lab Services, set the **service name** filter equal to **azure lab services**. If filtering on **resource type**, include `Microsoft.Labservices/labaccounts` resource type. If using the [April 2022 Update (preview)](lab-services-whats-new.md), also include the `Microsoft.LabServices/labs` resource type.
+You can filter by service or resource type. To see only costs associated with Azure Lab Services, set the **service name** filter equal to **azure lab services**. If filtering on **resource type**, include `Microsoft.Labservices/labaccounts` resource type. If using the [August 2022 Update](lab-services-whats-new.md), also include the `Microsoft.LabServices/labs` resource type.
### Understand the entries
In this example, adding the first and second rows (both start with "aaalab / doc
:::image type="content" source="./media/cost-management-guide/cost-analysis.png" alt-text="Screenshot that shows an example cost analysis for a subscription for Azure Lab Services associated costs." lightbox="./media/cost-management-guide/cost-analysis.png":::
-If you're using the [April 2022 Update (preview)](lab-services-whats-new.md), the entries in are formatted differently. The **Resource** column will show entries in the form `{lab name}/{number}` for Azure Lab Services. Some tags are added automatically to each entry when using the April 2022 Update.
+If you're using the [August 2022 Update](lab-services-whats-new.md), the entries are formatted differently. The **Resource** column will show entries in the form `{lab name}/{number}` for Azure Lab Services. Some tags are added automatically to each entry when using the August 2022 Update.
| Tag name | Value |
| -- | -- |
If you're using the [April 2022 Update (preview)](lab-services-whats-new.md), th
| ms-labname | Name of the lab. |
| ms-labplanid | Full resource ID of the lab plan used when creating the lab. |

To get the cost for the entire lab, don't forget to include external resources. Azure Compute Gallery related charges are under the `Microsoft.Compute` namespace. The advanced networking charges are under the `Microsoft.Network` namespace.
Since cost entries are tied to the lab account, some schools use the lab account
In the cost analysis pane, add a filter based on the resource group name for the class. Then, only the costs for that class will be visible. Grouping by resource group allows a clearer delineation between the classes when you're viewing the costs. You can use the [scheduled export](../cost-management-billing/costs/tutorial-export-acm-data.md) feature of the cost analysis to download the costs of each class in separate files.
-In the [April 2022 Update (preview)](lab-services-whats-new.md):
+In the [August 2022 Update](lab-services-whats-new.md):
- Cost entries are tied to a lab VM, *not* the lab plan.
- Cost entries get tagged with the name of the lab the VM is tied to. You can filter by the lab name tag to view the total cost across VMs in that lab.
lab-services How To Add User Lab Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-add-user-lab-owner.md
This article shows you how you, as an administrator, can add additional owners t
![Select the lab ](./media/how-to-add-user-lab-owner/select-lab.png) 1. In the navigation menu, select **Access control (IAM)**.
-1. Select **Add** > **Add role assignment (Preview)**.
+1. Select **Add** > **Add role assignment**.
![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
This article shows you how you, as an administrator, can add additional owners t
1. On the **Lab Account** page, select **Access control (IAM)**
-1. Select **Add** > **Add role assignment (Preview)**.
+1. Select **Add** > **Add role assignment**.
![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
Title: Attach or detach an Azure Compute Gallery in Azure Lab Services | Microsoft Docs description: This article describes how to attach an Azure Compute Gallery to a lab in Azure Lab Services. Previously updated : 04/06/2022 Last updated : 07/04/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] > [!NOTE]
-> If using a version of Azure Lab Services prior to the [April 2022 Update (preview)](lab-services-whats-new.md), see [Attach or detach a shared image gallery to a lab account in Azure Lab Services](how-to-attach-detach-shared-image-gallery-1.md).
+> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Attach or detach a shared image gallery to a lab account in Azure Lab Services](how-to-attach-detach-shared-image-gallery-1.md).
This article shows you how to attach or detach an Azure Compute Gallery to a lab plan.
lab-services How To Configure Firewall Settings 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-firewall-settings-1.md
+
+ Title: Firewall settings for labs when using lab accounts
+description: Learn how to determine the public IP address of VMs in a lab created using a lab account so information can be added to firewall rules.
+ Last updated : 07/04/2022++++
+# Firewall settings for labs when using lab accounts
++
+Each organization or school will configure their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration may be needed to allow students to access their VM when connecting from the campus network.
+
+Each lab uses a single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of the lab. Each VM will have a different port number. The port number range is 49152 - 65535. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs.
+
+>[!IMPORTANT]
+>Each lab will have a different public IP address.
+
+> [!NOTE]
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+
+## Find public IP for a lab
+
+The public IP addresses for each lab are listed in the **All labs** page of the Lab Services lab account. For directions how to find the **All labs** page, see [View labs in a lab account](manage-labs-1.md#view-labs-in-a-lab-account).
++
+>[!NOTE]
+>You won't see the public IP address if the template machine for your lab isn't published yet.
+
+## Conclusion
+
+Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port range 49152 - 65535. Once the rules are updated, students can access their VMs without the network firewall blocking access.
+
+## Next steps
+
+- As an admin, [enable labs to connect your vnet](how-to-connect-vnet-injection.md).
+- As an educator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md).
+- As an educator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm).
lab-services How To Configure Firewall Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-firewall-settings.md
Title: Firewall settings for Azure Lab Services
-description: Learn how to determine the public IP address of VMs in a lab so information can be added to firewall rules.
-- Previously updated : 02/01/2022
+description: Learn how to determine the public IP address of VMs in a lab created using a lab plan so information can be added to firewall rules.
+ms.lab-
Last updated : 08/01/2022 + # Firewall settings for Azure Lab Services +
+> [!NOTE]
+> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Firewall settings for labs when using lab accounts](how-to-configure-firewall-settings-1.md).
+ Each organization or school will configure their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration may be needed to allow students to access their VM when connecting from the campus network.
-Each lab uses single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address wonΓÇÖt change for the life of lab. Each VM will have a different port number. The port numbers range is 49152 - 65535. If using the April 2022 Update (preview), the port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs.
+Each lab uses a single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of the lab. Each VM will have a different port number. The port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs.
>[!IMPORTANT] >Each lab will have a different public IP address.
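For example, if a lab's public IP address were 203.0.113.20 (a made-up address), student connections would look something like the following; the ports are arbitrary examples taken from the RDP and SSH ranges above.

```powershell
# Hypothetical public IP address and ports, for illustration only.
mstsc.exe /v:203.0.113.20:7000         # RDP to a Windows lab VM (ranges 4990-4999 and 7000-8999)
ssh.exe -p 5000 student@203.0.113.20   # SSH to a Linux lab VM (ranges 4980-4989 and 5000-6999)
```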
Each lab uses single public IP address and multiple ports. All VMs, both the te
## Find public IP for a lab
-The public IP addresses for each lab are listed in the **All labs** page of the Lab Services lab account. For directions how to find the **All labs** page, see [View labs in a lab account](manage-labs-1.md#view-labs-in-a-lab-account).
+If you're using a customizable lab, you can get the public IP address anytime after the lab is created. If you're using a non-customizable lab, the lab must be published and have a capacity of at least 1 before you can get the public IP address for the lab.
+
+We're going to use the Az.LabServices PowerShell module to get the public IP address for a lab. For more examples of using the Az.LabServices PowerShell module, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](quick-create-lab-plan-powershell.md) and [Quickstart: Create a lab using PowerShell and the Azure module](quick-create-lab-powershell.md). For more information about cmdlets available in the Az.LabServices PowerShell module, see the [Az.LabServices reference](/powershell/module/az.labservices/).
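If the module isn't installed in your session yet, a minimal install sketch is:

```powershell
# One-time setup: install the Az.LabServices module for the current user from the PowerShell Gallery.
Install-Module -Name Az.LabServices -Scope CurrentUser
```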
+
+```powershell
+$ResourceGroupName = "MyResourceGroup"
+$LabName = "MyLab"
+$LabPublicIP = $null
+
+$lab = Get-AzLabServicesLab -Name $LabName -ResourceGroupName $ResourceGroupName
+if (-not $lab){
+ Write-Error "Could find lab $($LabName) in resource group $($ResourceGroupName)."
+}
+
+if($lab.NetworkProfilePublicIPId){
+ #Lab is using advanced networking
+ # Get public IP from networking properties
+ $LabPublicIP = Get-AzResource -ResourceId $lab.NetworkProfilePublicIPId | Get-AzPublicIpAddress | Select-Object -expand IpAddress
+}else{
+ #Get first VM from lab
+ # If customizable lab, this is the template VM
+ # If non-customizable lab, this is the first VM published.
+ $vm = $lab | Get-AzLabServicesVM | Select -First 1
+
+ if ($vm){
+ if($vm.ConnectionProfileSshAuthority){
+ $connectionAuthority = $vm.ConnectionProfileSshAuthority.Split(":")[0]
+ }else{
+ $connectionAuthority = $vm.ConnectionProfileRdpAuthority.Split(":")[0]
+ }
+ $LabPublicIP = [System.Net.DNS]::GetHostByName($connectionAuthority).AddressList.IPAddressToString | Where-Object {$_} | Select -First 1
+ }
+}
->[!NOTE]
->You wonΓÇÖt see the public IP address if the template machine for your lab isnΓÇÖt published yet.
+if ($LabPublicIP){
+ Write-Output "Public IP for $($lab.Name) is $LabPublicIP."
+}else{
+ Write-Error "Lab must be published to get public IP address."
+}
+```
## Conclusion
-Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port range 49152 - 65535. Once the rules are updated, students can access their VMs without the network firewall blocking access.
+Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port ranges 4980-4989, 5000-6999, and 7000-8999. Once the rules are updated, students can access their VMs without the network firewall blocking access.
## Next steps - As an admin, [enable labs to connect your vnet](how-to-connect-vnet-injection.md). - As an educator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md).
+- As an educator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm).
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
Title: Connect to your virtual network in Azure Lab Services | Microsoft Docs description: Learn how to connect a lab to one of your networks. Previously updated : 2/11/2022 Last updated : 07/04/2022+ # Use advanced networking (virtual network injection) to connect to your virtual network in Azure Lab Services
Last updated 2/11/2022
This article provides information about connecting a [lab plan](tutorial-setup-lab-plan.md) to your virtual network.
-Some organizations have advanced network requirements and configurations that they want to apply to labs. For example, network requirements can include a network traffic control, ports management, access to resources in an internal network, etc.
+Some organizations have advanced network requirements and configurations that they want to apply to labs. For example, network requirements can include network traffic control, port management, access to resources in an internal network, and so on. Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, network security group, and the lab plan all must be in the same region.
-In the Azure Lab Services [April 2022 Update (preview)](lab-services-whats-new.md), customers may take control of the network for the labs using virtual network (VNet) injection. You can now tell us which virtual network to use, and weΓÇÖll inject the necessary resources into your network. VNet injection replaces the [peering to your virtual network](how-to-connect-peer-virtual-network.md), as was done in previous versions.
-
-With VNet injection, you can connect to on premise resources such as licensing servers and use user defined routes (UDRs).
-
-## Overview
-
-You can connect to your own virtual network to your lab plan when you create the lab plan.
+In the Azure Lab Services [August 2022 Update](lab-services-whats-new.md), customers may take control of the network for the labs using virtual network (VNet) injection. You can now tell Lab Services which virtual network to use, and we'll inject the necessary resources into your network. With VNet injection, you can connect to on-premises resources such as licensing servers and use user-defined routes (UDRs). VNet injection replaces the [peering to your virtual network](how-to-connect-peer-virtual-network.md), as was done in previous versions.
> [!IMPORTANT]
-> VNet injection must be configured when creating a lab plan. It can't be added later.
+> Advanced networking (VNet injection) must be configured when creating a lab plan. It can't be added later.
-Before you configure VNet injection for your lab plan:
+> [!NOTE]
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
-- [Create a virtual network](../virtual-network/quick-create-portal.md). The virtual network must be in the same region as the lab plan.-- [Create a subnet](../virtual-network/virtual-network-manage-subnet.md) for the virtual network.-- [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md) and apply it to the subnet.-- [Delegate the subnet](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan) to **Microsoft.LabServices/labplans**.
+## Prerequisites
-Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, network security group, and the lab plan all must be in the same region.
+Before you configure advanced networking for your lab plan, complete the following tasks:
-> [!NOTE]
-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+1. [Create a virtual network](../virtual-network/quick-create-portal.md). The virtual network must be in the same region as the lab plan.
+1. [Create a subnet](../virtual-network/virtual-network-manage-subnet.md) for the virtual network.
+1. [Delegate the subnet](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan) to **Microsoft.LabServices/labplans**.
+1. [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md).
+1. [Create an inbound rule to allow traffic from SSH and RDP ports](/azure/virtual-network/manage-network-security-group).
+1. [Associate the NSG to the delegated subnet](#associate-delegated-subnet-with-nsg).
+
+Now that the prerequisites have been completed, you can [use advanced networking to connect your virtual network during lab plan creation](#connect-the-virtual-network-during-lab-plan-creation).
## Delegate the virtual network subnet for use with a lab plan
After you create a subnet for your virtual network, you must [delegate the subne
Only one lab plan at a time can be delegated for use with one subnet.
-1. Create a [virtual network](../virtual-network/manage-virtual-network.md), [subnet](../virtual-network/virtual-network-manage-subnet.md), and [network security group (NSG)](../virtual-network/manage-network-security-group.md) if not done already.
-1. Open the **Subnets** page for your virtual network.
-1. Select the subnet you wish to delegate to Lab Services to open the property window for that subnet.
-1. For the **Delegate subnet to a service** property, select **Microsoft.LabServices/labplans**. Select **Save**.
+1. Create a [virtual network](../virtual-network/manage-virtual-network.md) and [subnet](../virtual-network/virtual-network-manage-subnet.md).
+2. Open the **Subnets** page for your virtual network.
+3. Select the subnet you wish to delegate to Lab Services and open the property window for that subnet.
+4. For the **Delegate subnet to a service** property, select **Microsoft.LabServices/labplans**. Select **Save**.
:::image type="content" source="./media/how-to-connect-vnet-injection/delegate-subnet-for-azure-lab-services.png" alt-text="Screenshot of properties windows for subnet. The Delegate subnet to a service property is highlighted and set to Microsoft dot Lab Services forward slash lab plans.":::
-1. For the **Network security group** property, select the NSG you created earlier.
-
- > [!WARNING]
- > An NSG is required to allow access to the template and lab VMs. For more information about Lab Services architecture, see [Architecture Fundamentals in Azure Lab Services](classroom-labs-fundamentals.md).
-
- :::image type="content" source="./media/how-to-connect-vnet-injection/subnet-select-nsg.png" alt-text="Screenshot of properties windows for subnet. The Network security group property is highlighted.":::
-
-1. Verify the lab plan service appears in the **Delegated to** column. Verify the NSG appears in the **Security group** column.
+5. Verify the lab plan service appears in the **Delegated to** column.
:::image type="content" source="./media/how-to-connect-vnet-injection/delegated-subnet.png" alt-text="Screenshot of list of subnets for a virtual network. The Delegated to and Security group columns are highlighted." lightbox="./media/how-to-connect-vnet-injection/delegated-subnet.png":::
+## Associate delegated subnet with NSG
+
+> [!WARNING]
+> An NSG with inbound rules for RDP and/or SSH is required to allow access to the template and lab VMs.
+
+To allow connectivity to lab VMs, you must associate an NSG with the subnet delegated to Lab Services. We'll create an NSG, add an inbound rule to allow both SSH and RDP traffic, and then associate the NSG with the delegated subnet.
+
+1. [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md), if not done already.
+2. Create an inbound security rule allowing RDP and SSH traffic.
+ 1. Select **Inbound security rules** on the left menu.
+ 2. Select **+ Add** from the top menu bar. Fill in the details for adding the inbound security rule as follows:
+ 1. For **Source**, select **Any**.
+ 2. For **Source port ranges**, select **\***.
+ 3. For **Destination**, select **IP Addresses**.
+ 4. For **Destination IP addresses/CIDR ranges**, enter the address range of the previously created subnet.
+ 5. For **Service**, select **Custom**.
+ 6. For **Destination port ranges**, enter **22, 3389**. Port 22 is for Secure Shell protocol (SSH). Port 3389 is for Remote Desktop Protocol (RDP).
+ 7. For **Protocol**, select **Any**.
+ 8. For **Action**, select **Allow**.
+ 9. For **Priority**, select **1000**. The rule must take precedence over (have a lower priority number than) any **Deny** rules for RDP and/or SSH.
+ 10. For **Name**, enter **AllowRdpSshForLabs**.
+ 11. Select **Add**.
+
+ :::image type="content" source="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" lightbox="media/how-to-connect-vnet-injection/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for Network security group.":::
+ 3. Wait for the rule to be created.
+ 4. Select **Refresh** on the menu bar. Our new rule will now show in the list of rules.
+3. Associate the NSG with the delegated subnet.
+ 1. Select **Subnets** on the left menu.
+ 1. Select **+ Associate** from the top menu bar.
+ 1. On the **Associate subnet** page, do the following actions:
+ 1. For **Virtual network**, select the previously created virtual network.
+ 2. For **Subnet**, select the previously created subnet.
+ 3. Select **OK**.
+
+ :::image type="content" source="media/how-to-connect-vnet-injection/associate-nsg-with-subnet.png" lightbox="media/how-to-connect-vnet-injection/associate-nsg-with-subnet.png" alt-text="Screenshot of Associate subnet page in the Azure portal.":::
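+
+If you prefer to script these steps, the following Azure PowerShell sketch creates an equivalent NSG and inbound rule, then attaches the NSG to the delegated subnet. The resource group, virtual network, subnet name, region, and address range are placeholders; substitute the values from your own environment.
+
+```azurepowershell
+# Sketch only: names, region, and address range below are placeholders.
+$rg       = 'MyResourceGroup'
+$location = 'eastus'
+
+# Inbound rule that allows RDP (3389) and SSH (22) traffic to the delegated subnet's address range.
+$rule = New-AzNetworkSecurityRuleConfig -Name 'AllowRdpSshForLabs' -Direction Inbound -Access Allow `
+    -Protocol '*' -SourceAddressPrefix '*' -SourcePortRange '*' `
+    -DestinationAddressPrefix '10.0.1.0/24' -DestinationPortRange ('22','3389') -Priority 1000
+
+# Create the NSG with that rule.
+$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $rg -Location $location -Name 'MyNsg' -SecurityRules $rule
+
+# Attach the NSG to the delegated subnet. Re-specify the delegation so the update doesn't drop it.
+$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name 'MyVirtualNetwork'
+$delegation = New-AzDelegation -Name 'labservices' -ServiceName 'Microsoft.LabServices/labplans'
+Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'labservices-subnet' `
+    -AddressPrefix '10.0.1.0/24' -NetworkSecurityGroup $nsg -Delegation $delegation
+$vnet | Set-AzVirtualNetwork
+```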
+ ## Connect the virtual network during lab plan creation 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-1. Search for **lab plan**. (**Lab plan (preview)** can also be found under the **DevOps** category.)
+1. Search for **lab plan**. (**Lab plan** can also be found under the **DevOps** category.)
1. Enter required information on the **Basics** tab of the **Create a lab plan** page. For more information, see [Tutorial: Create a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md). 1. From the **Basics** tab of the **Create a lab plan** page, select **Next: Networking** at the bottom of the page. 1. Select **Enable advanced networking**.
Once you have a lab plan configured with advanced networking, all labs created w
- Deleting your virtual network or subnet will cause the lab to stop working - Changing the DNS label on the public IP will cause the **Connect** button for lab VMs to stop working.-- Azure Firewall isnΓÇÖt currently supported.
+- Azure Firewall isn't currently supported.
## Next steps
lab-services How To Create A Lab With Shared Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource.md
Title: How to Create a Lab with a Shared Resource | Azure Lab Services
description: Learn how to create a lab that requires a resource shared among the students. Previously updated : 03/03/2022 Last updated : 07/04/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] > [!NOTE]
-> If using a version of Azure Lab Services prior to the [April 2022 Update (preview)](lab-services-whats-new.md), see [How to create a lab with a shared resource in Azure Lab Services when using lab accounts](how-to-create-a-lab-with-shared-resource-1.md).
+> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [How to create a lab with a shared resource in Azure Lab Services when using lab accounts](how-to-create-a-lab-with-shared-resource-1.md).
-When you're creating a lab, there might be some resources that need to be shared among all the students in a lab. For example, you have a licensing server or SQL Server for a database class. This article will discuss the steps to enable the shared resource for a lab. WeΓÇÖll also talk about how to limit access to the shared resource.
+When you're creating a lab, there might be some resources that need to be shared among all the students in a lab. For example, you have a licensing server or SQL Server for a database class. This article will discuss the steps to enable the shared resource for a lab. We'll also talk about how to limit access to the shared resource.
## Architecture
lab-services How To Create Manage Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-manage-template.md
Title: Manage a template of a lab in Azure Lab Services | Microsoft Docs description: Learn how to create and manage a lab template in Azure Lab Services. Previously updated : 01/31/2022 Last updated : 07/04/2022+ # Create and manage a template in Azure Lab Services
-A template in a lab is a base VM image from which all usersΓÇÖ virtual machines are created. Modify the template VM so that itΓÇÖs configured with exactly what you want to provide to the lab users. You can provide a name and description of the template that the lab users see. Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab using the template. The number of VMs created during publish equals lab capacity. If using [Teams integration](lab-services-within-teams-overview.md), or [Canvas integration](lab-services-within-canvas-overview.md), the number of VMs created during publish equals the number of users in the lab. All virtual machines have the same configuration as the template.
+A template in a lab is a base VM image from which all users' virtual machines are created. Modify the template VM so that it's configured with exactly what you want to provide to the lab users. You can provide a name and description of the template that the lab users see. Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab using the template. The number of VMs created during publish equals lab capacity. If using [Teams integration](lab-services-within-teams-overview.md), or [Canvas integration](lab-services-within-canvas-overview.md), the number of VMs created during publish equals the number of users in the lab. All virtual machines have the same configuration as the template.
-When you create a lab, the template VM is created but itΓÇÖs not started. You can start it, connect to it, and install any pre-requisite software for the lab, and then publish it. When you publish the template VM, itΓÇÖs automatically shut down for you if you havenΓÇÖt done so. This article describes how to manage a template VM of a lab.
+When you create a lab, the template VM is created but it's not started. You can start it, connect to it, and install any pre-requisite software for the lab, and then publish it. When you publish the template VM, it's automatically shut down for you if you haven't done so. This article describes how to manage a template VM of a lab.
> [!NOTE] > Template VMs incur cost when running, so ensure that the template VM is shutdown when you aren't using it.
Use the following steps to update a template VM.
1. On the **Template** page for the lab, select **Start template** on the toolbar. 1. Wait until the template VM is started, and then select **Connect to template** on the toolbar to connect to the template VM. Depending on the setting for the lab, you'll connect using Remote Desktop Protocol (RDP) or Secure Shell (SSH).
-1. Once you connect to the template and make changes, it will no longer have the same setup as the virtual machines last published to your users. Template changes wonΓÇÖt be reflected on your students' existing virtual machines until after you publish again.
+1. Once you connect to the template and make changes, it will no longer have the same setup as the virtual machines last published to your users. Template changes won't be reflected on your students' existing virtual machines until after you publish again.
![Connect to the template VM](./media/how-to-create-manage-template/connect-template-vm.png)
In this step, you publish the template VM. When you publish the template VM, Azu
2. On the **Publish template** page, enter the number of virtual machines you want to create in the lab, and then select **Publish**. ![Publish template - number of VMs](./media/how-to-create-manage-template/publish-template-number-vms.png)
-3. You see the **status of publishing** the template on page. If using [Azure Lab Services April 2022 Update (preview)](lab-services-whats-new.md), publishing can take up to 20 minutes.
+3. You see the **status of publishing** the template on page. If using [Azure Lab Services August 2022 Update](lab-services-whats-new.md), publishing can take up to 20 minutes.
![Publish template - progress](./media/how-to-create-manage-template/publish-template-progress.png)
-4. Wait until the publishing is complete and then switch to the **Virtual machines pool** page by selecting **Virtual machines** on the left menu or by selecting **Virtual machines** tile. Confirm that you see virtual machines that are in **Unassigned** state. These VMs arenΓÇÖt assigned to students yet. They should be in **Stopped** state. You can start a student VM, connect to the VM, stop the VM, and delete the VM on this page. You can start them in this page or let your students start the VMs.
+4. Wait until the publishing is complete and then switch to the **Virtual machines pool** page by selecting **Virtual machines** on the left menu or by selecting **Virtual machines** tile. Confirm that you see virtual machines that are in **Unassigned** state. These VMs aren't assigned to students yet. They should be in **Stopped** state. You can start a student VM, connect to the VM, stop the VM, and delete the VM on this page. You can start them in this page or let your students start the VMs.
![Virtual machines in stopped state](./media/how-to-create-manage-template/virtual-machines-stopped.png)
lab-services How To Enable Shutdown Disconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-shutdown-disconnect.md
Title: Configure automatic shutdown of VMs for a lab in Azure Lab Services description: Learn how to enable or disable automatic shutdown of VMs when a remote desktop connection is disconnected. Previously updated : 02/04/2022 Last updated : 07/04/2022+ # Configure automatic shutdown of VMs for a lab
This article shows you how you can configure [automatic shut-down](classroom-lab
A lab plan administrator can configure automatic shutdown policies for the lab plan that you use create labs. For more information, see [Configure automatic shutdown of VMs for a lab plan](how-to-configure-auto-shutdown-lab-plans.md). As a lab owner, you can override the settings when creating a lab or after the lab is created. > [!IMPORTANT]
-> Prior to the [April 2022 Update (preview)](lab-services-whats-new.md), Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) image.
+> Prior to the [August 2022 Update](lab-services-whats-new.md), Linux labs only support automatic shut down when users disconnect and when VMs are started but users don't connect. Support also varies depending on [specific distributions and versions of Linux](../virtual-machines/extensions/diagnostics-linux.md#supported-linux-distributions). Shutdown settings are not supported by the [Data Science Virtual Machine - Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804) image.
## Configure for the lab level
lab-services How To Manage Classroom Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-classroom-labs.md
To set up a lab in a lab account, you must be a member of the **Lab Creator** ro
4. Then, select **Next** on the **Virtual machine credentials** page. 6. On the **Lab policies** page, do the following steps: 1. Enter the number of hours allotted for each user (**quota for each user**) outside the scheduled time for the lab.
- 2. For the **Auto-shutdown of virtual machines** option, specify whether you want the VM to be automatically shutdown when user disconnects. You can also specify how long the VM should wait for the user to reconnect before automatically shutting down.. For more information, see [Enable automatic shutdown of VMs on disconnect](how-to-enable-shutdown-disconnect.md).
+ 2. For the **Auto-shutdown of virtual machines** option, specify whether you want the VM to be automatically shut down when user disconnects. You can also specify how long the VM should wait for the user to reconnect before automatically shutting down. For more information, see [Enable automatic shutdown of VMs on disconnect](how-to-enable-shutdown-disconnect.md).
3. Then, select **Finish**. ![Quota for each user](./media/how-to-manage-classroom-labs/quota-for-each-user.png)
To switch to another lab from the current, select the drop-down list of labs in
You can also create a new lab using the **New lab** in this drop-down list. > [!NOTE]
-> You can also use the Az.LabServices PowerShell module (preview) to manage labs. For more information, see the [Az.LabServices home page on GitHub](https://aka.ms/azlabs/samples/PowerShellModule).
+> You can also use the Az.LabServices PowerShell module to manage labs. For more information, see the [Az.LabServices home page on GitHub](https://aka.ms/azlabs/samples/PowerShellModule).
To switch to a different lab account, select the drop-down next to the lab account and select the other lab account.
lab-services How To Manage Lab Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-lab-accounts.md
The **Shut down virtual machines when users do not connect** setting will catch
## Next steps - As an admin, [configure automatic shutdown settings for a lab account](how-to-configure-lab-accounts.md).-- As an admin, use the [Az.LabServices PowerShell module (preview)](https://aka.ms/azlabs/samples/PowerShellModule) to manage lab accounts.
+- As an admin, use the [Az.LabServices PowerShell module](https://aka.ms/azlabs/samples/PowerShellModule) to manage lab accounts.
- As an educator, [configure automatic shutdown settings for a lab](how-to-enable-shutdown-disconnect.md).
lab-services How To Manage Vm Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool.md
Title: Manage a VM pool in Azure Lab Services description: Learn how to manage a VM pool in Azure Lab Services Previously updated : 01/21/2022 Last updated : 07/21/2022+ # Manage a VM pool in Lab Services
On the **Reset virtual machine(s)** dialog box, select **Reset**.
### Redeploy VMs
-In the [April 2022 Update (preview)](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs. For more information and instructions on how students can redeploy their VMs, see: [Redeploy VMs](how-to-reset-and-redeploy-vm.md#redeploy-vms).
+In the [April 2022 Update](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs. For more information and instructions on how students can redeploy their VMs, see: [Redeploy VMs](how-to-reset-and-redeploy-vm.md#redeploy-vms).
## Connect to VMs
lab-services How To Reset And Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-reset-and-redeploy-vm.md
On the **Reset virtual machine(s)** dialog box, select **Reset**.
### Redeploy VMs
-In the [April 2022 Update (preview)](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs.
+In the [April 2022 Update](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs.
If you're facing difficulties accessing your VM, redeploying the VM may resolve the issue. Unlike resetting, redeploying doesn't cause data on the OS disk to be lost. When you [redeploy a VM](/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows), Azure Lab Services will shut down the VM, move it to a new host, and restart it. You can think of it as a refresh of the underlying VM for your machine. You don't need to re-register to the lab or perform any other action. Any data you saved on the OS disk (usually the C: drive) of the VM will still be available after the redeploy operation. Anything saved on the temporary disk (usually the D: drive) will be lost.
lab-services How To Set Virtual Machine Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-set-virtual-machine-passwords.md
By enabling the **Use same password for all virtual machines** option on this pa
![Set password dialog box](./media/how-to-set-virtual-machine-passwords/set-password.png) > [!NOTE]
-> Reset password option is not available for labs created without a template using the [April 2022 Updates (preview)](lab-services-whats-new.md).
+> Reset password option is not available for labs created without a template using the [April 2022 Updates](lab-services-whats-new.md).
## Next steps
lab-services How To Use Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-lab.md
Learn how to register for a lab. Also learn how to view, start, stop, and conne
Notice that the status toggle is in the on position. Select the status toggle again to **stop** the VM.
-Using the [Azure Lab Services portal](https://labs.azure.com/virtualmachines) is the preferred method for a student to stop their lab VM. However, with the [April 2022 Updates (preview)](lab-services-whats-new.md), Azure Lab Services will detect when a student shuts down their VM using the OS shutdown command. After a long delay to ensure the VM wasn't being restarted, the lab VM will be marked as stopped and billing will discontinue.
+Using the [Azure Lab Services portal](https://labs.azure.com/virtualmachines) is the preferred method for a student to stop their lab VM. However, with the [April 2022 Updates](lab-services-whats-new.md), Azure Lab Services will detect when a student shuts down their VM using the OS shutdown command. After a long delay to ensure the VM wasn't being restarted, the lab VM will be marked as stopped and billing will discontinue.
## Connect to the VM
lab-services Lab Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-overview.md
Title: About Azure Lab Services | Microsoft Docs description: Learn how Lab Services can make it easy to create, manage, and secure labs with VMs for educators and students. Previously updated : 01/04/2022 Last updated : 07/04/2022+ # An introduction to Azure Lab Services
The service creates and manages resources in a subscription managed by Microsoft
Azure Lab Services supports the following key capabilities and features: -- **Fast and flexible setup of a lab**. Using Azure Lab Services, lab owners can quickly [set up a lab](tutorial-setup-lab.md) for their needs. The service takes care of all Azure infrastructure including built-in scaling and resiliency of infrastructure for labs.
+- **Fast and flexible setup of a lab**. Lab owners can quickly [set up a lab](tutorial-setup-lab.md) for their needs. Azure Lab Services takes care of all Azure infrastructure including built-in scaling and resiliency of infrastructure for labs.
-- **Simplified experience for lab users**. Students who are invited to a lab get immediate access to the resources you give them inside your labs. They just need to sign in to see the full list of virtual machines for all labs that they can access. They can select a single button to connect to the virtual machines and start working. Users donΓÇÖt need Azure subscriptions to use the service. [Lab users can register](how-to-use-lab.md) to a lab with a registration code and can access the lab anytime to use the labΓÇÖs resources.
+- **Simplified experience for lab users**. Students who are invited to a lab get immediate access to the resources you give them inside your labs. They just need to sign in to see the full list of virtual machines for all labs that they can access. They can select a single button to connect to the virtual machines and start working. Users don't need Azure subscriptions to use the service. [Lab users can register](how-to-use-lab.md) to a lab with a registration code and can access the lab anytime to use the lab's resources.
-- **Cost optimization and analysis**. [Keep your budget in check](cost-management-guide.md) by controlling exactly how many hours your lab users can use the virtual machines. Set up [schedules](how-to-create-schedules.md) in the lab to allow users to use the virtual machines only during designated time slots. Set up [auto-shutdown policies](how-to-configure-auto-shutdown-lab-plans.md) to avoid unneeded VM usage. Keep track of [individual usersΓÇÖ usage](how-to-manage-classroom-labs.md) and [set limits](how-to-configure-student-usage.md#set-quotas-for-users).
+- **Cost optimization and analysis**. [Keep your budget in check](cost-management-guide.md) by controlling exactly how many hours your lab users can use the virtual machines. Set up [schedules](how-to-create-schedules.md) in the lab to allow users to use the virtual machines only during designated time slots. Set up [auto-shutdown policies](how-to-configure-auto-shutdown-lab-plans.md) to avoid unneeded VM usage. Keep track of [individual users' usage](how-to-manage-classroom-labs.md) and [set limits](how-to-configure-student-usage.md#set-quotas-for-users).
-- **Automatic management of Azure infrastructure and scale** Azure Lab Services is a managed service, which means that provisioning and management of a labΓÇÖs underlying infrastructure is handled automatically by the service. You can just focus on preparing the right lab experience for your users. Let the service handle the rest and roll out your labΓÇÖs virtual machines to your audience. Scale your lab to hundreds of virtual machines with a single action.
+- **Automatic management of Azure infrastructure and scale** Azure Lab Services is a managed service, which means that provisioning and management of a lab's underlying infrastructure is handled automatically by the service. You can just focus on preparing the right lab experience for your users. Let the service handle the rest and roll out your lab's virtual machines to your audience. Scale your lab to hundreds of virtual machines with a single action.
Here are some of the **use cases for managed labs**: -- Provide students with a lab of virtual machines configured with exactly whatΓÇÖs needed for a class. Give each student a limited number of hours for using the VMs for homework or personal projects.
+- Provide students with a lab of virtual machines configured with exactly what's needed for a class. Give each student a limited number of hours for using the VMs for homework or personal projects.
- Set up a pool of high-performance compute VMs to perform compute-intensive or graphics-intensive research. Run the VMs as needed, and clean up the machines once you're done.-- Move your schoolΓÇÖs physical computer lab into the cloud. Automatically scale the number of VMs only to the maximum usage and cost threshold that you set on the lab. -- Quickly create a lab of virtual machines for hosting a hackathon. Delete the lab with a single action once youΓÇÖre done.
+- Move your school's physical computer lab into the cloud. Automatically scale the number of VMs only to the maximum usage and cost threshold that you set on the lab.
+- Quickly create a lab of virtual machines for hosting a hackathon. Delete the lab with a single action once you're done.
## Example class types
You can set up labs for several types of classes with Azure Lab Services. See th
Visit the [Azure Global Infrastructure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services) page to learn where Azure Lab Services is available.
-[Azure Lab Services April 2022 Update (preview](lab-services-whats-new.md)) doesnΓÇÖt move or store customer data outside the region itΓÇÖs deployed in. However, accessing Azure Lab Services resources through the Azure Lab Services portal may cause customer data to cross regions.
+[Azure Lab Services August 2022 Update](lab-services-whats-new.md) doesn't move or store customer data outside the region it's deployed in. However, accessing Azure Lab Services resources through the Azure Lab Services portal may cause customer data to cross regions.
-There are no guarantees customer data will stay in the region itΓÇÖs deployed to when using Azure Lab Services previous to the April 2022 Update (preview).
+There are no guarantees customer data will stay in the region it's deployed to when using Azure Lab Services previous to the August 2022 Update.
+
+## Data at rest
+
+Azure Lab Services encrypts all content using a Microsoft-managed encryption key.
## Next steps
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
Title: What's New in Azure Lab Services | Microsoft Docs
-description: Learn what's new in the Azure Lab Services April 2022 Updates.
+description: Learn what's new in the Azure Lab Services August 2022 Updates.
Previously updated : 04/14/2022 Last updated : 07/04/2022+
-# What's new in Azure Lab Services April 2022 Update (preview)
+# What's new in Azure Lab Services August 2022 Update
We've made fundamental improvements for the service to boost performance, reliability, and scalability. In this article, we'll describe all the great changes and new features that are available in this preview!
We've made fundamental improvements for the service to boost performance, reliab
**[Lab plans replace lab accounts](#lab-plans-replace-lab-accounts).** The lab account concept is being replaced with a new concept called a lab plan. Although similar in functionality, there are some fundamental differences between the two concepts. The lab plan serves as a collection of configurations and settings that apply to the labs created from it. Also, labs are now an Azure resource in their own right and a sibling resource to lab plans.
-**[Canvas Integration](how-to-get-started-create-lab-within-canvas.md)**. Now, educators donΓÇÖt have to leave Canvas to create their labs. Students can connect to a virtual machine from inside their course.
+**[Canvas Integration](how-to-get-started-create-lab-within-canvas.md)**. Now, educators don't have to leave Canvas to create their labs. Students can connect to a virtual machine from inside their course.
**[Per customer assigned capacity](capacity-limits.md#per-customer-assigned-capacity)**. No more sharing capacity with others. If your organization has requested more quota, Azure Lab Services will save it just for you.
We've made fundamental improvements for the service to boost performance, reliab
**[Improved auto-shutdown](how-to-configure-auto-shutdown-lab-plans.md)**. Auto-shutdown settings are now available for *all* operating systems!
-**[More built-in roles](administrator-guide.md#rbac-roles)**. Previously, there was only the Lab Creator built-in role. WeΓÇÖve added a few more roles including Lab Operator and Lab Assistant. Lab operators can manage existing labs, but not create new ones. Lab assistants can only help students by starting, stopping, or redeploying virtual machines. Lab assistants can't adjust quota or set schedules.
+**[More built-in roles](administrator-guide.md#rbac-roles)**. Previously, there was only the Lab Creator built-in role. We've added a few more roles including Lab Operator and Lab Assistant. Lab operators can manage existing labs, but not create new ones. Lab assistants can only help students by starting, stopping, or redeploying virtual machines. Lab assistants can't adjust quota or set schedules.
**[Improved cost tracking in Azure Cost Management](cost-management-guide.md#separate-the-costs)**. Lab virtual machines are now the cost unit tracked in Azure Cost Management. Tags for lab plan ID and lab name are automatically added to each cost entry. If you want to track the cost of a single lab, group the lab VM cost entries together by the lab name tag. Custom tags on labs will also propagate to Azure Cost Management entries to allow further cost analysis.
-**[Updates to lab owner experience](how-to-manage-labs.md)**. Choose to skip the template creation process when creating a new lab if you already have an image ready to use. WeΓÇÖve also added the ability to add a non-admin user to lab VMs.
+**[Updates to lab owner experience](how-to-manage-labs.md)**. Choose to skip the template creation process when creating a new lab if you already have an image ready to use. We've also added the ability to add a non-admin user to lab VMs.
**[Updates to student experience](how-to-manage-vm-pool.md#redeploy-vms)**. Students can now redeploy their VM without losing data. We also updated the registration experience for some scenarios. A lab VM is assigned to students *automatically* if the lab is set up to use Azure AD group sync, Teams, or Canvas.
For the new version of Lab Services, the lab account concept is being replaced w
|-|-| |Lab account was the only resource that administrators could interact with inside the Azure portal.|Administrators can now manage two types of resources, lab plan and lab, in the Azure portal.| |Lab account served as the **parent** for the labs.|Lab plan is a **sibling** resource to the lab resource. Grouping of labs is now done by the resource group.|
-|Lab account served as a container for the labs. A change to the lab account often affected the labs under it.|The lab plan serves as a collection of configurations and settings that are applied when a lab is **created**. If you change a lab planΓÇÖs settings, these changes wonΓÇÖt affect any existing labs that were previously created from the lab plan. (The exception is the internal help information, which will affect all labs.)|
+|Lab account served as a container for the labs. A change to the lab account often affected the labs under it.|The lab plan serves as a collection of configurations and settings that are applied when a lab is **created**. If you change a lab plan's settings, these changes won't affect any existing labs that were previously created from the lab plan. (The exception is the internal help information, which will affect all labs.)|
Lab accounts and labs have a parental relationship. Moving to a sibling relationship between the lab plan and lab provides an upgraded experience. The following table compares the previous experience with a lab account and the new improved experience with a lab plan.
Configuration that applies to all labs:
Remember, changes made to the lab settings from the lab plan will apply only to new labs created after the settings change is saved.
-Don't forget to assign user permissions on the lab plan and the lab planΓÇÖs resource group. Permission assignments for new labs may also be required if labs are created for educators instead of by them.
+Don't forget to assign user permissions on the lab plan and the lab plan's resource group. Permission assignments for new labs may also be required if labs are created for educators instead of by them.
## Getting started
-Use the following checklist to get started with Azure Lab Services April 2022 Update (preview):
+Use the following checklist to get started with Azure Lab Services August 2022 Update:
> [!div class="checklist"] > * Configure shared resources.
Use the following checklist to get started with Azure Lab Services April 2022 Up
> * Create labs. > * Update cost management reports.
-As you migrate, there likely will be a time when you're using both the April 2022 Update (preview) and the current version of Azure Lab Services. You might have both lab accounts and lab plans that coexist in your subscription and that access the same external resources.
+As you migrate, there likely will be a time when you're using both the August 2022 Update and the current version of Azure Lab Services. You might have both lab accounts and lab plans that coexist in your subscription and that access the same external resources.
With all the new enhancements, it's a good time to revisit your overall lab structure. More than one lab plan might be needed depending on your scenario. For example, the math department may only require one lab plan in one resource group. The computer science department might require multiple lab plans. One lab plan can enable advanced networking and a few custom images. Another lab plan can use basic networking and not enable custom images. Both lab plans can be kept in the same resource group.
-Let's cover each step to get started with the April 2022 Update (preview) in more detail.
+Let's cover each step to get started with the August 2022 Update in more detail.
-1. **Configure shared resources**. Optionally, [configure licensing servers](how-to-create-a-lab-with-shared-resource.md). For VMs that require access to a licensing server, create a lab using a lab plan with [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation). You can reuse the same Azure Compute Gallery and the licensing servers that you use with your lab accounts.
+1. **Configure shared resources**. Optionally, [configure licensing servers](how-to-create-a-lab-with-shared-resource.md). For VMs that require access to a licensing server, create a lab using a lab plan with [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation). You can reuse the same Azure Compute Gallery and the licensing servers that you use with your lab accounts.
1. **Create Lab plans.** 1. [Create](tutorial-setup-lab-plan.md) and [configure lab plans](#configure-a-lab-plan). If you plan to use a license server, don't forget to enable [advanced networking](how-to-connect-vnet-injection.md#connect-the-virtual-network-during-lab-plan-creation) when creating your lab plans.
Let's cover each step to get started with the April 2022 Update (preview) in mor
1. Optionally, [attach an Azure Compute Gallery](how-to-attach-detach-shared-image-gallery.md). 1. **Request capacity**. Forecast and [request dedicated VM capacity](capacity-limits.md#request-a-limit-increase). Even if enrollment isn't finalized, you can use preliminary estimates for your initial capacity request. You can request more capacity later, if needed.
-1. **Validate images**. Each of the VM sizes has been remapped to use a newer Azure VM Compute SKU. If using an [attached compute gallery](how-to-attach-detach-shared-image-gallery.md), validate images with new [Azure VM Compute SKUs](administrator-guide.md#vm-sizing). Validate that each image in the compute gallery is replicated to regions the lab plans and labs are in.
-1. **Configure integrations**. Optionally, configure [integration with Canvas](lab-services-within-canvas-overview.md) including [adding the app and linking lab plans](how-to-get-started-create-lab-within-canvas.md). Alternately, configure [integration with Teams](lab-services-within-teams-overview.md) by [adding the app to Teams groups](how-to-get-started-create-lab-within-teams.md).
-1. **Create labs**. Create labs to test educator and student experience in preparation for general availability of the updates. Lab administrators and educators should validate performance based on common student workloads.
-1. **Update cost management reports.** Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, for labs created using the April 2022 Update (preview). [Built-in and custom tags](cost-management-guide.md#understand-the-entries) allow for [grouping](../cost-management-billing/costs/quick-acm-cost-analysis.md) in cost analysis. For more information about tracking costs, see [Cost management for Azure Lab Services](cost-management-guide.md).
+1. **Validate images**. Each of the VM sizes has been remapped to use a newer Azure VM Compute SKU. If using an [attached compute gallery](how-to-attach-detach-shared-image-gallery.md), validate images with new [Azure VM Compute SKUs](administrator-guide.md#vm-sizing). Validate that each image in the compute gallery is replicated to regions the lab plans and labs are in.
+1. **Configure integrations**. Optionally, configure [integration with Canvas](lab-services-within-canvas-overview.md) including [adding the app and linking lab plans](how-to-get-started-create-lab-within-canvas.md). Alternately, configure [integration with Teams](lab-services-within-teams-overview.md) by [adding the app to Teams groups](how-to-get-started-create-lab-within-teams.md).
+1. **Create labs**. Create labs to test educator and student experience in preparation for general availability of the updates. Lab administrators and educators should validate performance based on common student workloads.
+1. **Update cost management reports.** Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, for labs created using the August 2022 Update. [Built-in and custom tags](cost-management-guide.md#understand-the-entries) allow for [grouping](../cost-management-billing/costs/quick-acm-cost-analysis.md) in cost analysis. For more information about tracking costs, see [Cost management for Azure Lab Services](cost-management-guide.md).
## Next steps
lab-services Quick Create Lab Plan Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-portal.md
The following steps show how to use the Azure portal to create a lab plan.
1. In the [Azure portal](https://portal.azure.com), select **Create a resource** at the top left of the screen. 1. Select **All services** in the left menu. Search for **Lab plans**.
-1. Select the **Lab plans (preview)** tile, select **Create**.
+1. Select the **Lab plans** tile, select **Create**.
:::image type="content" source="./media/quick-create-lab-plan-portal/select-lab-plans-service.png" alt-text="Screenshot that shows the Lab plan tile for Azure Marketplace.":::
lab-services Quick Create Lab Plan Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-template.md
The Azure portal is used here to deploy the template. You can also use Azure Pow
You can either use the Azure portal to check the lab plan, or use the Azure PowerShell script to list the lab plan created.
-To use Azure PowerShell, first verify the Az.LabServices (preview) module is installed. Then use the **Get-AzLabServicesLabPlan** cmdlet.
+To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLabPlan** cmdlet.
```azurepowershell-interactive Import-Module Az.LabServices
lab-services Quick Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-template.md
The Azure portal is used here to deploy the template. You can also use Azure Pow
You can either use the Azure portal to check the lab, or use the Azure PowerShell script to list the lab resource created.
-To use Azure PowerShell, first verify the Az.LabServices (preview) module is installed. Then use the **Get-AzLabServicesLab** cmdlet.
+To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLab** cmdlet.
```azurepowershell-interactive Import-Module Az.LabServices
lab-services Reference Powershell Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reference-powershell-module.md
Title: PowerShell module for Azure Lab Services
description: Learn how to install and launch Az.LabServices PowerShell module Previously updated : 04/06/2022 Last updated : 06/29/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-note.md)] > [!NOTE]
-> To learn more about the integrated Az module experience available with the April 2022 Update (preview), see [Quickstart: Create a lab plan using PowerShell and the Azure modules](quick-create-lab-plan-powershell.md).
+> To learn more about the integrated Az module experience available with the August 2022 Update, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](quick-create-lab-plan-powershell.md).
-The [Az.LabServices (preview)](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Modules/Library) PowerShell module simplifies the management of Azure Lab Services. This module provides composable functions to create, query, update and delete resources, such as labs, lab accounts, VMs, and images.
+The [Az.LabServices](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Modules/Library) PowerShell module simplifies the management of Azure Lab Services. This module provides composable functions to create, query, update and delete resources, such as labs, lab accounts, VMs, and images.
## Install and launch
lab-services Specify Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/specify-marketplace-images.md
Title: Specify marketplace images for a lab in Azure Lab Services description: This article shows you how to specify which Marketplace images can be used during lab creation. Previously updated : 03/04/2022 Last updated : 07/04/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)] > [!NOTE]
-> If you're using a version of Azure Lab Services prior to the [April 2022 Update (preview)](lab-services-whats-new.md), see [Specify Marketplace images available to lab creators](specify-marketplace-images-1.md).
+> If you're using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Specify Marketplace images available to lab creators](specify-marketplace-images-1.md).
As an admin, you can specify the Marketplace images that educators can use when creating labs.
lab-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot.md
+
+ Title: Troubleshooting lab creation
+description: This guide helps to fix common issues you might experience when using Azure Lab Services to create labs.
+ Last updated : 07/14/2022++
+# Troubleshooting lab creation in Azure Lab Services
+
+This article provides several common reasons why an educator might not be able to create a lab successfully and what to do to resolve the issue.
+
+## You can't see a virtual machine image
+
+Possible issues:
+
+- The Azure Compute Gallery is not connected to the lab plan. To connect an Azure Compute Gallery, see [Attach or detach a compute gallery](/azure/lab-services/how-to-attach-detach-shared-image-gallery).
+
+- The image is not enabled by the administrator. This applies to both Marketplace images and Azure Compute Gallery images. To enable images, see [Specify marketplace images for labs](specify-marketplace-images.md).
+
+- The image in the attached Azure Compute Gallery is not replicated to the same location as the lab plan. For more information, see [Store and share images in an Azure Compute Gallery](/azure/virtual-machines/shared-image-galleries).
+
+- Images larger than 127 GB, or images with multiple disks, aren't supported.
+
+## The preferred virtual machine size is not available
+
+Possible issues:
+
+- Quota hasn't been requested yet, or more quota is needed. To request quota, see [Request a limit increase](capacity-limits.md#request-a-limit-increase).
+
+- Quota is granted in a location other than a location enabled for the selected lab plan. For more information, see [Request a limit increase](capacity-limits.md#request-a-limit-increase).
+
+>[!NOTE]
+> You can run a script to query for lab quotas across all your regions. For more information, see the [PowerShell Quota script](https://aka.ms/azlabs/scripts/quota-powershell).
+
+## You don't see multiple regions/locations to choose from
+
+Possible issues:
+
+- The administrator only enabled one region for the lab plan. To specify regions, see [Configure regions for labs](create-and-configure-labs-admin.md).
+
+- The lab plan uses advanced networking. The lab plan and all of its labs must be in the same region as the virtual network. For more information, see [Use advanced networking](how-to-connect-vnet-injection.md).
+
+## Next steps
+
+For more information about setting up and managing labs, see:
+
+- [Manage lab plans](how-to-manage-lab-plans.md)
+- [Lab setup guide](setup-guide.md)
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
+
+ Title: Use advanced networking in Azure Lab Services | Microsoft Docs
+description: Create an Azure Lab Services lab plan with advanced networking. Create two labs and verify they share the same virtual network when published.
++ Last updated : 07/27/2022+++
+# Tutorial: Set up lab to lab communication with advanced networking
++
+Azure Lab Services provides a feature called advanced networking. Advanced networking enables you to control the network for labs created using lab plans. It enables scenarios such as [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using a [hub-spoke model for Azure networking](/azure/architecture/reference-architectures/hybrid-networking/), and lab to lab communication.
+
+Let's focus on the lab to lab communication scenario. For our example, we'll create labs for a web development class. Each student will need access to both a server VM and a client VM. The server and client VMs must be able to communicate with each other. We'll test communication by configuring Internet Control Message Protocol (ICMP) for each VM and allowing the VMs to ping each other.
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a resource group
+> * Create a virtual network and subnet
+> * Delegate subnet to Azure Lab Services
+> * Create a network security group
+> * Update the network security group inbound rules
+> * Associate the network security group to virtual network
+> * Create a lab plan using advanced networking
+> * Create two labs
+> * Enable ICMP on the templates VMs
+> * Publish both labs
+> * Test communication between lab VMs
+
+## Prerequisites
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+
+## Create a resource group
++
+The following steps show how to use the Azure portal to [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal). For simplicity, we'll put all resources for this tutorial in the same resource group. An Azure PowerShell equivalent is sketched after the steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Resource groups**.
+1. Select **+ Create** from the top menu.
+1. On the **Basics** tab of the **Create a resource group** page, do the following actions:
+ 1. For **Subscription**, choose the subscription in which you want to create your labs.
+ 1. For **Resource group**, type **MyResourceGroup**.
+ 1. For **Region**, select the region closest to you. For more information about available regions, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies).
+ :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-resource-group.png" alt-text="Screenshot of create new resource group page in the Azure portal.":::
+1. Select **Review + Create**.
+1. Review the summary, and select **Create**.
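+
+If you prefer Azure PowerShell, a minimal equivalent of this step looks like the following sketch; the region is only an example.
+
+```azurepowershell
+# Sign in first if you haven't already.
+Connect-AzAccount
+
+# Create the resource group that will hold all resources for this tutorial.
+New-AzResourceGroup -Name 'MyResourceGroup' -Location 'eastus'
+```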
+
+## Create a virtual network and subnet
+
+The following steps show how to use the Azure portal to create a virtual network and subnet that can be used with Azure Lab Services. An Azure PowerShell sketch of the same setup follows the steps.
+
+> [!IMPORTANT]
+> When using Azure Lab Services with advanced networking, the virtual network, subnet, lab plan and lab must all be in the same region. For more information about which regions are supported by various products, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services).
+
+1. Open **MyResourceGroup** created previously.
+1. Select **+ Create** in the upper left corner of the Azure portal and search for "virtual network".
+1. Select the **Virtual network** tile and then select **Create**.
+1. On the **Basics** tab of the **Create virtual network** page, do the following actions:
+ 1. For **Subscription**, choose the same subscription as the resource group.
+ 1. For **Resource group**, choose **MyResourceGroup**.
+ 1. For **Name**, enter **MyVirtualNetwork**.
+    1. For **Region**, choose a region that's also supported by Azure Lab Services. For more information about supported regions, see [Azure Lab Services by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services).
+ 1. Select **Next: IP Addresses**.
+
+ :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-virtual-network-basics-page.png" alt-text="Screenshot of Basics tab of Create virtual network page in the Azure portal.":::
+1. On the **IP Addresses** tab, create a subnet that will be used by the labs.
+ 1. Select **+ Add subnet**
+ 1. For **Subnet name**, enter **labservices-subnet**.
+    1. For **Subnet address range**, enter a range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4,000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](/azure/virtual-network/virtual-network-manage-subnet).
+ 1. Select **OK**.
+1. Select **Review + Create**.
++
+1. Once validation passes, select **Create**.
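+
+As a scripted alternative, this Azure PowerShell sketch creates the same virtual network and subnet. The address space is an example; a /24 subnet leaves 2^8 - 5 = 251 usable addresses for lab VMs because Azure reserves five addresses per subnet.
+
+```azurepowershell
+# Example address space; adjust the region and prefixes to your environment.
+$subnet = New-AzVirtualNetworkSubnetConfig -Name 'labservices-subnet' -AddressPrefix '10.0.1.0/24'
+
+New-AzVirtualNetwork -ResourceGroupName 'MyResourceGroup' -Location 'eastus' `
+    -Name 'MyVirtualNetwork' -AddressPrefix '10.0.0.0/16' -Subnet $subnet
+```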
+
+## Delegate subnet to Azure Lab Services
+
+In this section, we'll configure the subnet to be used with Azure Lab Services. To allow Azure Lab Services to use the subnet, it must be [delegated to the service](/azure/virtual-network/manage-subnet-delegation). A PowerShell sketch of the delegation follows the steps.
+
+1. Open the **MyVirtualNetwork** resource.
+1. Select the **Subnets** item on the left menu.
+1. Select **labservices-subnet** subnet.
+1. Under the **Subnet delegation** section, select **Microsoft.LabServices/labplans** for the **Delegate subnet to a service** setting.
+1. Select **Save**.
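+
+The same delegation can be applied with Azure PowerShell, as in this sketch that uses the names from this tutorial (the delegation name itself is arbitrary):
+
+```azurepowershell
+$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'MyResourceGroup' -Name 'MyVirtualNetwork'
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'labservices-subnet'
+
+# Delegate the subnet to Azure Lab Services lab plans.
+Add-AzDelegation -Name 'labservices-delegation' -ServiceName 'Microsoft.LabServices/labplans' -Subnet $subnet
+
+# Persist the change on the virtual network.
+Set-AzVirtualNetwork -VirtualNetwork $vnet
+```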
++
+## Create a network security group
++
+An NSG is required when using advanced networking in Azure Lab Services. In this section, we'll create the NSG. In the following section, we'll add some inbound rules needed to access lab VMs.
+
+To create an NSG, complete the following steps:
+
+1. Select **+ Create a Resource** in the upper left corner of the Azure portal and search for "network security group".
+1. Select the **Network security group** tile and then select **Create**.
+1. On the **Basics** tab of the **Create network security group** page, do the following actions:
+ 1. For **Subscription**, choose the same subscription as used previously.
+ 1. For **Resource group**, choose **MyResourceGroup**.
+ 1. For the **Name**, enter **MyNsg**.
+    1. For **Region**, choose the same region as the previously created **MyVirtualNetwork**.
+ 1. Select **Review + Create**.
+ :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-network-security-group-basics-tab.png" alt-text="Screenshot of the Basics tab of the Create Network security group page in the Azure portal.":::
+1. When validation passes, select **Create**.
+
+## Update the network security group inbound rules
+
+To ensure that students can connect to the lab VMs over RDP (Windows) or SSH (Linux), we need to create an **Allow** security rule. Let's create a rule that allows both RDP and SSH traffic, using the subnet range defined in the previous section.
+
+1. Open **MyNsg**.
+1. Select **Inbound security rules** on the left menu.
+1. Select **+ Add** from the top menu bar. Fill in the details for adding the inbound security rule as follows:
+ 1. For **Source**, select **Any**.
+ 1. For **Source port ranges**, select **\***.
+ 1. For **Destination**, select **IP Addresses**.
+    1. For **Destination IP addresses/CIDR ranges**, enter the address range of the previously created **labservices-subnet**.
+ 1. For **Service**, select **Custom**.
+ 1. For **Destination port ranges**, enter **22, 3389**. Port 22 is for Secure Shell protocol (SSH). Port 3389 is for Remote Desktop Protocol (RDP).
+ 1. For **Protocol**, select **Any**.
+ 1. For **Action**, select **Allow**.
+    1. For **Priority**, enter **1000**. The rule must have higher priority (a lower number) than any **Deny** rules for RDP or SSH.
+ 1. For **Name**, enter **AllowRdpSshForLabs**.
+ 1. Select **Add**.
+
+ :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for Network security group.":::
+1. Wait for the rule to be created.
+1. Select **Refresh** on the menu bar. Our new rule will now show in the list of rules.
+
+## Associate network security group to virtual network
+
+We now have an NSG with an inbound security rule that allows connections to the lab VMs on the virtual network. Let's associate the NSG with the subnet of the virtual network we created earlier. An optional PowerShell check to confirm the configuration is shown after these steps.
+
+1. Open **MyVirtualNetwork**.
+1. Select **Subnets** on the left menu.
+1. Select **+ Associate** from the top menu bar.
+1. On the **Associate subnet** page, do the following actions:
+ 1. For **Virtual network**, select **MyVirtualNetwork**.
+ 1. For **Subnet**, select **labservices-subnet**.
+ 1. Select **OK**.
++
+> [!WARNING]
+> Connecting the network security group to the subnet is a **required step**. Students will not be able to connect to their VMs if there is no network security group associated with the subnet.
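+
+Optionally, you can confirm from Azure PowerShell that the subnet now carries both the delegation and the NSG. A quick check, using the names from this tutorial:
+
+```azurepowershell
+$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'MyResourceGroup' -Name 'MyVirtualNetwork'
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'labservices-subnet'
+
+# Expect a delegation to Microsoft.LabServices/labplans.
+$subnet.Delegations | Select-Object Name, ServiceName
+
+# Expect the resource ID of MyNsg.
+$subnet.NetworkSecurityGroup.Id
+```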
+
+## Create a lab plan using advanced networking
+
+Now that we have the network created and configured, we can create the lab plan.
+
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+1. Search for **lab plan**.
+1. On the **Lab plan** tile, select the **Create** dropdown and choose **Lab plan**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/select-lab-plans-service.png" alt-text="All Services -> Lab Services":::
+1. On the **Basics** tab of the **Create a lab plan** page, do the following actions:
+ 1. For **Azure subscription**, select the subscription used earlier.
+ 2. For **Resource group**, select an existing resource group or select **Create new**, and enter a name for the new resource group.
+    3. For **Name**, enter a lab plan name, such as **MyLabPlan**. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices).
+ 4. For **Region**, select a location/region in which you want to create the lab plan.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-plan-basics-page.png" alt-text="Screenshot of the basics page for lab plan creation.":::
+1. Select **Next: Networking**.
+1. On the **Networking** tab, do the following actions:
+ 1. Check **Enable advanced networking**.
+ 1. For **Virtual network**, choose **MyVirtualNetwork**.
+ 1. For **Subnet**, choose **labservices-subnet**.
+ 1. Select **Review + Create**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-plan-networking-page.png" alt-text="Screenshot of the networking page for lab plan creation.":::
+1. When the validation succeeds, select **Create**.
+
+> [!NOTE]
+> Advanced networking can only be enabled when lab plans are created. Advanced networking can't be added later.
+
+## Create two labs
+
+Next, let's create two labs that are using advanced networking. These labs will use the **labservices-subnet** we associated with Azure Lab Services. Any lab VMs created using **MyLabPlan** will be able to communicate with each other. Communication can be restricted by using NSGs, firewalls, etc.
+
+To create a lab, use the following steps. We'll run them twice: once to create the lab with the server VMs, and once to create the lab with the client VMs.
+
+1. Navigate to the Lab Services website: [https://labs.azure.com](https://labs.azure.com).
+1. Select **Sign in** and enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts.
+1. Select **MyResourceGroup** from the dropdown on the menu bar.
+1. Select **New lab**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-button.png" alt-text="Screenshot of Azure Lab Services portal. New lab button is highlighted.":::
+1. In the **New Lab** window, do the following actions:
+ 1. Specify a **name**. The name should be easily identifiable. We'll use **MyServerLab** for the lab with the server VMs and **MyClientLab** for the lab with the client VMs. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices).
+ 1. Choose a **virtual machine image**. For simplicity we'll use **Windows 11 Pro**, but you can choose another available image if you want. For more information about enabling virtual machine images, see [Specify Marketplace images available to lab creators](specify-marketplace-images.md).
+ 1. For **size**, select **Medium**.
+ 1. **Region** will show only one option. When a lab uses advanced networking, the lab must be in the same region as the associated subnet.
+ 1. Select **Next**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-window.png" alt-text="Screenshot of the New lab window for Azure Lab Services.":::
+
+1. On the **Virtual machine credentials** page, specify default administrator credentials for all VMs in the lab. Specify the **name** and **password** for the administrator. By default all the student VMs will have the same password as the one specified here. Select **Next**.
+
+ :::image type="content" source="./media/tutorial-setup-lab/virtual-machine-credentials.png" alt-text="Screenshot that shows the Virtual machine credentials window when creating a new Azure Lab Services lab.":::
+
+ > [!IMPORTANT]
+ > Make a note of user name and password. They won't be shown again.
+
+1. On the **Lab policies** page, leave the default selections and select **Next**.
+
+ :::image type="content" source="./media/tutorial-setup-lab/quota-for-each-user.png" alt-text="Screenshot of the Lab policy window when creating a new Azure Lab Services lab.":::
+
+1. On the **Template virtual machine settings** window, leave the selection on **Create a template virtual machine**. Select **Finish**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/template-virtual-machine-settings.png" alt-text="Screenshot of the Template virtual machine settings windows when creating a new Azure Lab Services lab.":::
+
+1. You should see the following screen that shows the status of the template VM creation.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/create-template-vm-progress.png" alt-text="Screenshot of status of the template VM creation.":::
+
+1. Wait for the template VM to be created.
+
+## Enable ICMP on the lab templates
+
+Once the labs have been created, we'll enable ICMP (ping). Using ping is a simple way to show that the template and lab VMs from different labs can communicate with each other. First, we'll enable ICMP on the template VMs for both labs. Enabling ICMP on the template VM also enables it on the lab VMs. Once the labs are published, the lab VMs will be able to ping each other.
+
+To enable ICMP, complete the following steps for each template VM in each lab.
+
+1. On the **Template** page for the lab, start and connect to the template VM.
+ 1. Select **Start template**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-start-template.png" alt-text="Screenshot of Azure Lab Services template page. The Start template menu button is highlighted.":::
+
+ > [!NOTE]
+ > Template VMs incur **cost** when running, so ensure that the template VM is shut down when you don't need it to be running.
+
+ 1. Once the template is started, select **Connect to template**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-connect-to-template.png" alt-text="Screenshot of Azure Lab Services template page. The Connect to template menu button is highlighted.":::
+
+Now that we're logged on to the template VM, let's modify the firewall rules on the VM to allow ICMP. Since we're using Windows 11, we can use PowerShell and the [Enable-NetFirewallRule](/powershell/module/netsecurity/enable-netfirewallrule) cmdlet. To open a PowerShell window:
+
+1. Select the Start button.
+1. Type "PowerShell"
+1. Select the **Windows PowerShell** app.
+
+Run the following code:
+
+```powershell
+# Allow inbound and outbound ICMPv4 echo requests (ping) through the Windows firewall
+Enable-NetFirewallRule -Name CoreNet-Diag-ICMP4-EchoRequest-In
+Enable-NetFirewallRule -Name CoreNet-Diag-ICMP4-EchoRequest-Out
+```
+
+On the **Template** page for the lab, select **Stop** to stop the template VM.
+
+## Publish both labs
+
+In this step, you publish the lab. When you publish the template VM, Azure Lab Services creates VMs in the lab by using the template. All virtual machines have the same configuration as the template.
+
+1. On the **Template** page, select **Publish**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-publish-template.png" alt-text="Screenshot of Azure Lab Services template page. The Publish menu button is highlighted.":::
+1. Enter the number of machines that are needed for the lab, then select **Publish**.
+
+ :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/publish-template-number-vms.png" alt-text="Screenshot of confirmation window for publish action of Azure.":::
+
+ > [!WARNING]
+ > Publishing is an irreversible action! It can't be undone.
+
+1. You'll see the **status of publishing** on the template page. Wait until the publishing is complete.
+
+## Test communication between lab VMs
+
+In this section, we'll wrap up by showing that the two student virtual machines in different labs are able to communicate with each other.
+
+First, let's start and connect to a lab VM from each lab. Complete the following steps for each lab.
+
+1. Open the lab in the [Azure Lab Services website](https://labs.azure.com).
+1. Select **Virtual machine pool** on the left menu.
+1. Select a single VM listed in the virtual machine pool.
+1. Take note of the **Private IP Address** for the VM. We'll need the private IP addresses of both the server lab and client lab VMs later.
+1. Select the **State** slider to change the state from **Stopped** to **Starting**.
+
+ > [!NOTE]
+ > When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-configure-student-usage.md?#set-quotas-for-users).
+1. Once the **State** is **Running**, select the connect icon for the running VM. Open the downloaded RDP file to connect to the VM. For more information about connection experiences on different operating systems, see [Connect to a lab VM](connect-virtual-machine.md).
++
+Now we can use the ping utility to test cross-lab communication. From the lab VM in the server lab, open a command prompt and run `ping {ip-address}`, where `{ip-address}` is the **Private IP Address** of the client lab VM that we noted previously. You can also run the same test from the client lab VM to the lab VM in the server lab.
++
+When done, navigate to the **Virtual machine pool** page for each lab, select the lab VM and select the **State** slider to stop the lab VM.
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete the virtual network, network security group, lab plan and labs with the following steps:
+
+1. In the [Azure portal](https://portal.azure.com), select the resource group you want to delete.
+1. Select **Delete resource group**.
+1. To confirm the deletion, type the name of the resource group.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+>[Add students to the labs](how-to-configure-student-usage.md)
lab-services Tutorial Setup Lab Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-account.md
To set up a lab in a lab account, the user must be a member of the **Lab Creator
1. On the **Lab Account** page, select **Access control (IAM)**
-1. Select **Add** > **Add role assignment (Preview)**.
+1. Select **Add** > **Add role assignment**.
![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
lab-services Tutorial Setup Lab Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-plan.md
The following steps illustrate how to use the Azure portal to create a lab plan
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-1. Search for **lab plan**. (**Lab plan (preview)** can also be found under the **DevOps** category.)
-1. On the **Lab plan (preview)** tile, select the **Create** dropdown and choose **Lab plan**.
+1. Search for **lab plan**. (**Lab plan** can also be found under the **DevOps** category.)
+1. On the **Lab plan** tile, select the **Create** dropdown and choose **Lab plan**.
:::image type="content" source="./media/tutorial-setup-lab-plan/select-lab-plans-service.png" alt-text="All Services -> Lab Services"::: 1. On the **Basics** tab of the **Create a lab plan** page, do the following actions:
The following steps illustrate how to use the Azure portal to create a lab plan
:::image type="content" source="./media/tutorial-setup-lab-plan/lab-plan-page.png" alt-text="Lab plan page"::: ## Add a user to the Lab Creator role+ [!INCLUDE [Add Lab Creator role](./includes/lab-services-add-lab-creator.md)] ## Next steps
lab-services Tutorial Setup Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab.md
In this step, you publish the lab. When you publish the template VM, Azure Lab S
1. On the **Template** page, select **Publish** on the toolbar.
- :::image type="content" source="./media/tutorial-setup-lab/template-page-publish-button.png" alt-text="Screenshot of Azure Lab Services template page. The Publish template menu button is highlighted.":::
+ :::image type="content" source="./media/tutorial-setup-lab/template-page-publish-button.png" alt-text="Screenshot of Azure Lab Services template page. The Publish template menu button is highlighted.":::
- > [!WARNING]
- > Publishing is an irreversible action! It can't be undone.
+ > [!WARNING]
+ > Publishing is an irreversible action! It can't be undone.
2. On the **Publish template** page, select **Publish**. Select **OK** when warned that publishing is a permanent action.
- :::image type="content" source="./media/tutorial-setup-lab/publish-template-number-vms.png" alt-text="Screenshot of confirmation window for publish action of Azure.":::
3. You see the **status of publishing** the template on page.
- :::image type="content" source="./media/tutorial-setup-lab/publish-template-progress.png" alt-text="Screenshot of Azure Lab Services template page. The publishing in progress message is highlighted.":::
+ :::image type="content" source="./media/tutorial-setup-lab/publish-template-progress.png" alt-text="Screenshot of Azure Lab Services template page. The publishing in progress message is highlighted.":::
4. Wait until the publishing is complete. 5. Select **Virtual machine pool** on the left menu or select **Virtual machines** tile on the dashboard page to see the list of available machines. Confirm that you see virtual machines that are in **Unassigned** state. These VMs aren't assigned to students yet. They should be in **Stopped** state. For more information about managing the virtual machine pool, see [Manage a VM pool in Lab Services](how-to-manage-vm-pool.md).
- :::image type="content" source="./media/tutorial-setup-lab/virtual-machines-stopped.png" alt-text="Screenshot of virtual machines stopped. The virtual machine pool menu is highlighted.":::
+ :::image type="content" source="./media/tutorial-setup-lab/virtual-machines-stopped.png" alt-text="Screenshot of virtual machines stopped. The virtual machine pool menu is highlighted.":::
- > [!NOTE]
- > When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-configure-student-usage.md?#set-quotas-for-users).
+> [!NOTE]
+> When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-configure-student-usage.md?#set-quotas-for-users).
## Set a schedule for the lab
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022 ms.suite: integration
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
description: 'Prebuilt Docker images for inference (scoring) in Azure Machine Le
--++ Last updated 07/14/2022
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
Send the following to your labelers, after filling in your workspace and project
* Learn more about [working with a data labeling vendor company](how-to-outsource-data-labeling.md) * [Create an image labeling project and export labels](how-to-create-image-labeling-projects.md)
-* [Create a text labeling project and export labels (preview)](how-to-create-text-labeling-projects.md)
+* [Create a text labeling project and export labels (preview)](how-to-create-text-labeling-projects.md)
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Previously updated : 03/23/2022 Last updated : 08/01/2022
The workspace admin also cannot create a new role. It can only assign existing b
} ```
-<a name="labeler"></a>
-### Data labeler
+### Data labeling
+
+# [Data labeler](#tab/labeler)
Allows you to define a role scoped only to labeling data:
Allows you to define a role scoped only to labeling data:
} ```
-### Labeling Team Lead
+# [Labeling team lead](#tab/team-lead)
Allows you to review and reject the labeled dataset and view labeling insights. In addition to it, this role also allows you to perform the role of a labeler. `labeling_team_lead_custom_role.json` : ```json {
- "properties": {
- "roleName": "Labeling Team Lead",
- "description": "Team lead for Labeling Projects",
- "assignableScopes": [
- "/subscriptions/<subscription_id>"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.MachineLearningServices/workspaces/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/write",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read"
- ],
- "notActions": [
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/write",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/delete",
- "Microsoft.MachineLearningServices/workspaces/labeling/export/action"
- ],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
+ "Name": "Labeling Team Lead",
+ "IsCustom": true,
+ "Description": "Team lead for Labeling Projects",
+ "Actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/write",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read"
+ ],
+ "NotActions": [
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/write",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/delete",
+ "Microsoft.MachineLearningServices/workspaces/labeling/export/action"
+ ],
+ "AssignableScopes": [
+ "/subscriptions/<subscription_id>"
+ ]
} ```
+# [Vendor account manager](#tab/vendor-admin)
+
+A vendor account manager can help manage all the vendor roles and perform any labeling action. They cannot modify projects or view MLAssist experiments.
+
+`Vendor_admin_role.json` :
+```json
+{
+ "Name": "Vendor account admin",
+ "IsCustom": true,
+ "Description": "Vendor account admin for Labeling Projects",
+ "Actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/write",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/export/action",
+ "Microsoft.MachineLearningServices/workspaces/datasets/registered/read"
+ ],
+ "AssignableScopes": [
+ "/subscriptions/<subscription_id>"
+ ]
+}
+```
+
+# [Customer QA](#tab/customer-qa)
+
+A customer quality assurance role can view project dashboards, preview datasets, export a labeling project, and review submitted labels. This role can't submit labels.
+
+`customer_qa_role.json` :
+```json
+{
+ "Name": "Customer QA",
+ "IsCustom": true,
+ "Description": "Customer QA for Labeling Projects",
+ "Actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/export/action",
+ "Microsoft.MachineLearningServices/workspaces/datasets/registered/read"
+ ],
+ "AssignableScopes": [
+ "/subscriptions/<subscription_id>"
+ ]
+}
+```
+
+# [Vendor QA](#tab/vendor-qa)
+
+A vendor quality assurance role can perform the same tasks as the customer quality assurance role, but can't preview the dataset.
+
+`vendor_qa_role.json`:
+```json
+{
+ "Name": "Vendor QA",
+ "IsCustom": true,
+ "Description": "Vendor QA for Labeling Projects",
+ "Actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/export/action"
+ ],
+ "AssignableScopes": [
+ "/subscriptions/<subscription_id>"
+ ]
+}
+```
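+
+To create any of these custom roles at the subscription scope, you can use the Azure CLI (`az role definition create --role-definition <file>.json`) or the Azure SDK. Below is a minimal Python sketch assuming the `azure-mgmt-authorization` and `azure-identity` packages; the subscription ID is a placeholder, and the actions list is abbreviated, so copy the full list from the JSON file you chose.
+
+```python
+# Sketch: create one of the labeling custom roles at subscription scope.
+import uuid
+
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.authorization import AuthorizationManagementClient
+from azure.mgmt.authorization.models import Permission, RoleDefinition
+
+subscription_id = "<subscription_id>"  # placeholder
+scope = f"/subscriptions/{subscription_id}"
+
+client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
+
+role = RoleDefinition(
+    role_name="Vendor QA",
+    description="Vendor QA for Labeling Projects",
+    role_type="CustomRole",
+    permissions=[
+        Permission(
+            actions=[
+                "Microsoft.MachineLearningServices/workspaces/read",
+                # ...remaining actions from vendor_qa_role.json...
+            ],
+            not_actions=[],
+        )
+    ],
+    assignable_scopes=[scope],
+)
+
+client.role_definitions.create_or_update(scope, str(uuid.uuid4()), role)
+```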
++ ## Troubleshooting Here are a few things to be aware of while you use Azure role-based access control (Azure RBAC):
machine-learning How To Attach Kubernetes Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md
In this article, you can learn about steps to configure an existing Kubernetes c
## Limitations -- [Using a service principal with AKS](../aks/kubernetes-service-principal.md) is **not supported** by Azure Machine Learning. The AKS cluster must use a managed identity instead.
+- [Using a service principal with AKS](../aks/kubernetes-service-principal.md) is **not supported** by Azure Machine Learning. The AKS cluster must use a **system-assigned managed identity** instead.
- [Disabling local accounts](../aks/managed-aad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When deploying an AKS Cluster, local accounts are enabled by default. - If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions. Without access to the API server, the machine learning pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
+- If you have previously followed the steps from [AzureML AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, please use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.
## Deploy AzureML extension to Kubernetes cluster
Otherwise, if a user-assigned managed identity is specified in Azure Machine Lea
|Azure resource name |Role to be assigned|Description| |--|--|--| |Azure Relay|Azure Relay Owner|Only applicable for Arc-enabled Kubernetes cluster. Azure Relay isn't created for AKS cluster without Arc connected.|
-|Azure Arc-enabled Kubernetes|Reader|Applicable for both Arc-enabled Kubernetes cluster and AKS cluster.|
+|Azure Arc-enabled Kubernetes or AKS|Reader|Applicable for both Arc-enabled Kubernetes cluster and AKS cluster.|
Azure Relay resource is created during the extension deployment under the same Resource Group as the Arc-enabled Kubernetes cluster.
Set the `--type` argument to `Kubernetes`. Use the `identity_type` argument to e
> [!IMPORTANT] > `--user-assigned-identities` is only required for `UserAssigned` managed identities. Although you can provide a list of comma-separated user managed identities, only the first one is used when you attach your cluster.-
+>
+> Compute attach won't create the Kubernetes namespace automatically or validate whether the Kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster; otherwise, any AzureML workloads submitted to this compute will fail.
### [Python](#tab/python) [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
Attaching a Kubernetes cluster makes it available to your workspace for training
1. Enter a compute name and select your Kubernetes cluster from the dropdown.
- * **(Optional)** Enter Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster.
+ * **(Optional)** Enter the Kubernetes namespace, which defaults to `default`. All machine learning workloads will be sent to the specified Kubernetes namespace in the cluster. Compute attach won't create the Kubernetes namespace automatically or validate whether the Kubernetes namespace exists. You need to verify that the specified namespace exists in your cluster; otherwise, any AzureML workloads submitted to this compute will fail.
* **(Optional)** Assign system-assigned or user-assigned managed identity. Managed identities eliminate the need for developers to manage credentials. For more information, see [managed identities overview](../active-directory/managed-identities-azure-resources/overview.md) .
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
description: Learn to deploy your AutoML model as a web service that's automatic
-+ -+ Last updated 05/11/2022
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
description: Learn how to use a custom container to use open-source servers in A
--++ Last updated 05/11/2022
machine-learning How To Deploy Managed Online Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md
description: Learn to deploy your machine learning model to Azure using Python S
-+ -+ Last updated 05/25/2022
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
description: Learn to deploy your MLflow model as a web service that's automatic
--++ Last updated 03/31/2022
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
Last updated 06/10/2022 --++ ms.devlang: azurecli
machine-learning How To Extend Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-extend-prebuilt-docker-image-inference.md
description: 'Extend Prebuilt docker images in Azure Machine Learning'
--++ Last updated 10/21/2021
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
Title: Azure Machine Learning inference HTTP server description: Learn how to enable local development with Azure machine learning inference http server.--++
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Azure Machine Learning allows you to work with different types of models. In thi
* The Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install). * The Azure Machine Learning [CLI v2](how-to-configure-cli.md).
+## Supported paths
+
+When you register a model, you need to specify a `path` parameter that points to either the location of the model data or the job that produced it. The following table shows the locations supported in Azure Machine Learning and examples for the `path` parameter:
++
+|Location | Examples |
+|||
+|A path on your local computer | `mlflow-model/model.pkl` |
+|A path on an AzureML Datastore | `azureml://datastores/<datastore-name>/paths/<path_on_datastore>` |
+|A path from an AzureML job | `azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location>` |
+|A path from an MLflow job | `runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location>` |
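+
+For illustration, here's a minimal Python (SDK v2) sketch that registers a model from a datastore path; the workspace details and datastore path are placeholders, and the CLI equivalent appears later in this article.
+
+```python
+# Sketch: register a model from a datastore path (placeholders for workspace details).
+from azure.ai.ml import MLClient
+from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml.entities import Model
+from azure.identity import DefaultAzureCredential
+
+ml_client = MLClient(
+    credential=DefaultAzureCredential(),
+    subscription_id="<subscription-id>",
+    resource_group_name="<resource-group>",
+    workspace_name="<workspace-name>",
+)
+
+model = Model(
+    name="my-model",
+    version="1",
+    type=AssetTypes.CUSTOM_MODEL,
+    path="azureml://datastores/<datastore-name>/paths/<path_on_datastore>",
+)
+
+ml_client.models.create_or_update(model)
+```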
+
+## Supported modes
+
+When you run a job with model inputs/outputs, you can specify the *mode* - for example, whether you would like the model to be read-only mounted or downloaded to the compute target. The table below shows the possible modes for different type/mode/input/output combinations:
+
+| Type | Input/Output | `direct` | `download` | `ro_mount` |
+| --- | --- | :---: | :---: | :---: |
+| `custom` file | Input | ✓ | | |
+| `custom` folder | Input | ✓ | ✓ | ✓ |
+| `mlflow` | Input | | ✓ | ✓ |
+| `custom` file | Output | ✓ | ✓ | ✓ |
+| `custom` folder | Output | ✓ | ✓ | ✓ |
+| `mlflow` | Output | ✓ | ✓ | ✓ |
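+
+As a quick illustration, here's a minimal sketch (Python SDK v2) that passes a registered MLflow model to a command job with an explicit mode; the script, environment, and compute names are placeholders.
+
+```python
+# Sketch: consume a registered model as a job input with an explicit mode.
+# Assumptions: "my-model" is registered, and the compute and environment exist.
+from azure.ai.ml import Input, command
+from azure.ai.ml.constants import AssetTypes, InputOutputModes
+
+job = command(
+    code="./src",  # placeholder: folder containing score.py
+    command="python score.py --model ${{inputs.model_dir}}",
+    inputs={
+        "model_dir": Input(
+            type=AssetTypes.MLFLOW_MODEL,
+            path="azureml:my-model:1",
+            mode=InputOutputModes.DOWNLOAD,  # or RO_MOUNT, per the table above
+        )
+    },
+    environment="azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
+    compute="cpu-cluster",
+)
+# Submit with an MLClient configured as in the earlier sketch:
+# ml_client.jobs.create_or_update(job)
+```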
++ ## Create a model in the model registry [Model registration](concept-model-management-and-deployment.md) allows you to store and version your models in the Azure cloud, in your workspace. The model registry helps you organize and keep track of your trained models.
You can create a model from a cloud path by using any one of the following suppo
az ml model create --name my-model --version 1 --path azureml://datastores/myblobstore/paths/models/cifar10/cifar.pt ```
-The examples use the shorthand `azureml` scheme for pointing to a path on the `datastore` by using the syntax `azureml://datastores/${{datastore-name}}/paths/${{path_on_datastore}}`.
+The examples use the shorthand `azureml` scheme for pointing to a path on the `datastore` by using the syntax `azureml://datastores/<datastore-name>/paths/<path_on_datastore>`.
For a complete example, see the [CLI reference](/cli/azure/ml/model).
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Previously updated : 07/28/2022 Last updated : 08/08/2022
The following table compares how services access different parts of an Azure Mac
| Scenario | Workspace | Associated resources | Training compute environment | Inferencing compute environment | |-|-|-|-|-|-| |**No virtual network**| Public IP | Public IP | Public IP | Public IP |
-|**Public workspace, all other resources in a virtual network** | Public IP | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Private IP | Private IP |
+|**Public workspace, all other resources in a virtual network** | Public IP | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Public IP | Private IP |
|**Secure resources in a virtual network**| Private IP (private endpoint) | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Private IP | Private IP | * **Workspace** - Create a private endpoint for your workspace. The private endpoint connects the workspace to the vnet through several private IP addresses.
machine-learning How To Prebuilt Docker Images Inference Python Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prebuilt-docker-images-inference-python-extensibility.md
description: 'Extend prebuilt docker images with Python package extensibility so
--++ Last updated 10/21/2021
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
description: Safe rollout for online endpoints using Python SDK v2 (preview).
-+ -+ Last updated 05/25/2022
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-e
## Next steps
-* [Deploy machine learning models to Azure](how-to-deploy-and-where.md)
+* [Deploy machine learning models to Azure](/azure/machine-learning/how-to-deploy-managed-online-endpoints)
* [Deploy and score a machine learning model by using a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md) * [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
machine-learning How To Troubleshoot Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-prebuilt-docker-image-inference.md
description: 'Troubleshooting steps for using prebuilt Docker images for inferen
--++ Last updated 10/21/2021
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
az ml job download --name <sweep-job> --output-name model
## Next steps * [Track an experiment](how-to-log-view-metrics.md)
-* [Deploy a trained model](how-to-deploy-managed-online-endpoint-sdk-v2.md)
+* [Deploy a trained model](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
--++ Last updated 05/25/2022
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
--++ Last updated 10/21/2021
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
In the list of global Azure regions, there are several regions that serve specif
* Azure Government regions **US-Arizona** and **US-Virginia**. * Azure China 21Vianet region **China-East-2**.
-Azure Machine Learning is still in development in Airgap Regions.
+Azure Machine Learning is still in development in air-gapped regions.
The information in the rest of this document provides information on what features of Azure Machine Learning are available in these regions, along with region-specific information on using these features. ## Azure Government
The information in the rest of this document provides information on what featur
| Network isolation for managed online endpoints | Public Preview | NO | NO | | **Compute** | | | | | [quota management across workspaces](how-to-manage-quotas.md) | GA | YES | YES |
+| [Kubernetes compute](./how-to-attach-kubernetes-anywhere.md) | GA | NO | NO |
| **[Data for machine learning](concept-data.md)** | | | | | Create, view, or edit datasets and datastores from the SDK | GA | YES | YES | | Create, view, or edit datasets and datastores from the UI | GA | YES | YES |
The information in the rest of this document provides information on what featur
| **Machine learning lifecycle** | | | | | [Model profiling](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL | | [The Azure ML CLI 1.0](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
-| [FPGA-based Hardware Accelerated Models](how-to-deploy-fpga-web-service.md) | GA | NO | NO |
+| [FPGA-based Hardware Accelerated Models](./v1/how-to-deploy-fpga-web-service.md) | GA | NO | NO |
| [Visual Studio Code integration](how-to-setup-vs-code.md) | Public Preview | NO | NO | | [Event Grid integration](how-to-use-event-grid.md) | Public Preview | NO | NO | | [Integrate Azure Stream Analytics with Azure Machine Learning](../stream-analytics/machine-learning-udf.md) | Public Preview | NO | NO |
The information in the rest of this document provides information on what featur
| **Inference** | | | | | Managed online endpoints | GA | YES | YES | | [Batch inferencing](tutorial-pipeline-batch-scoring-classification.md) | GA | YES | YES |
-| [Azure Stack Edge with FPGA](how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
+| [Azure Stack Edge with FPGA](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
| **Other** | | | | | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES | | [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |
The information in the rest of this document provides information on what featur
| Network isolation for managed online endpoints | Preview | NO | N/A | | **Compute** | | | | | quota management across workspaces | GA | YES | N/A |
+| [Kubernetes compute](./how-to-attach-kubernetes-anywhere.md) | GA | NO | NO |
| **Data for machine learning** | | | | | Create, view, or edit datasets and datastores from the SDK | GA | YES | N/A | | Create, view, or edit datasets and datastores from the UI | GA | YES | N/A |
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
For a low code experience, see how to use the [Azure Machine Learning studio to
Datastores currently support storing connection information to the storage services listed in the following matrix. > [!TIP]
-> **For unsupported storage solutions**, and to save data egress cost during ML experiments, [move your data](#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution.
+> **For unsupported storage solutions** (those not listed in the table below), you may run into issues connecting and working with your data. We suggest you [move your data](#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. Doing this may also help with additional scenarios, like saving data egress cost during ML experiments.
| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; REST API](/rest/api/azureml/) | VS Code ||||||
Azure Data Factory provides efficient and resilient data transfer with more than
* [Create an Azure machine learning dataset](how-to-create-register-datasets.md) * [Train a model](../how-to-set-up-training-targets.md)
-* [Deploy a model](../how-to-deploy-and-where.md)
+* [Deploy a model](../how-to-deploy-and-where.md)
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-fpga-web-service.md
+
+ Title: Deploy ML models to FPGAs
+
+description: Learn about field-programmable gate arrays. You can deploy a web service on an FPGA with Azure Machine Learning for ultra-low latency inference.
++++++ Last updated : 10/21/2021++++
+# Deploy ML models to field-programmable gate arrays (FPGAs) with Azure Machine Learning
++
+In this article, you learn about FPGAs and how to deploy your ML models to an Azure FPGA using the [hardware-accelerated models Python package](/python/api/azureml-accel-models/azureml.accel) from [Azure Machine Learning](../overview-what-is-azure-machine-learning.md).
+
+## What are FPGAs?
+
+FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects. The interconnects allow these blocks to be configured in various ways after manufacturing. Compared to other chips, FPGAs provide a combination of programmability and performance.
+
+FPGAs make it possible to achieve low latency for real-time inference (or model scoring) requests. Asynchronous requests (batching) aren't needed. Batching can cause latency, because more data needs to be processed. Implementations of neural processing units don't require batching; therefore the latency can be many times lower, compared to CPU and GPU processors.
+
+You can reconfigure FPGAs for different types of machine learning models. This flexibility makes it easier to accelerate the applications based on the most optimal numerical precision and memory model being used. Because FPGAs are reconfigurable, you can stay current with the requirements of rapidly changing AI algorithms.
+
+![Diagram of Azure Machine Learning FPGA comparison](./media/how-to-deploy-fpga-web-service/azure-machine-learning-fpga-comparison.png)
+
+|Processor| Abbreviation |Description|
+||:-:||
+|Application-specific integrated circuits|ASICs|Custom circuits, such as Google's Tensor Processor Units (TPU), provide the highest efficiency. They can't be reconfigured as your needs change.|
+|Field-programmable gate arrays|FPGAs|FPGAs, such as those available on Azure, provide performance close to ASICs. They're also flexible and reconfigurable over time, to implement new logic.|
+|Graphics processing units|GPUs|A popular choice for AI computations. GPUs offer parallel processing capabilities, making them faster at image rendering than CPUs.|
+|Central processing units|CPUs|General-purpose processors, the performance of which isn't ideal for graphics and video processing.|
+
+## FPGA support in Azure
+
+Microsoft Azure is the world's largest cloud investment in FPGAs. Microsoft uses FPGAs for deep neural networks (DNN) evaluation, Bing search ranking, and software defined networking (SDN) acceleration to reduce latency, while freeing CPUs for other tasks.
+
+FPGAs on Azure are based on Intel's FPGA devices, which data scientists and developers use to accelerate real-time AI calculations. This FPGA-enabled architecture offers performance, flexibility, and scale, and is available on Azure.
+
+Azure FPGAs are integrated with Azure Machine Learning. Azure can parallelize pre-trained DNN across FPGAs to scale out your service. The DNNs can be pre-trained, as a deep featurizer for transfer learning, or fine-tuned with updated weights.
+
+|Scenarios & configurations on Azure|Supported DNN models|Regional support|
+|--|--|-|
+|+ Image classification and recognition scenarios<br/>+ TensorFlow deployment (requires Tensorflow 1.x)<br/>+ Intel FPGA hardware|- ResNet 50<br/>- ResNet 152<br/>- DenseNet-121<br/>- VGG-16<br/>- SSD-VGG|- East US<br/>- Southeast Asia<br/>- West Europe<br/>- West US 2|
+
+To optimize latency and throughput, your client sending data to the FPGA model should be in one of the regions above (the one you deployed the model to).
+
+The **PBS Family of Azure VMs** contains Intel Arria 10 FPGAs. It will show as "Standard PBS Family vCPUs" when you check your Azure quota allocation. The PB6 VM has six vCPUs and one FPGA. PB6 VM is automatically provisioned by Azure Machine Learning during model deployment to an FPGA. It's only used with Azure ML, and it can't run arbitrary bitstreams. For example, you won't be able to flash the FPGA with bitstreams to do encryption, encoding, etc.
+
+## Deploy models on FPGAs
+
+You can deploy a model as a web service on FPGAs with [Azure Machine Learning Hardware Accelerated Models](/python/api/azureml-accel-models/azureml.accel). Using FPGAs provides ultra-low latency inference, even with a single batch size.
+
+In this example, you create a TensorFlow graph to preprocess the input image, featurize it by using ResNet 50 on an FPGA, and then run the features through a classifier trained on the ImageNet data set. Then, the model is deployed to an AKS cluster.
+
+### Prerequisites
+
+- An Azure subscription. If you don't have one, create a [pay-as-you-go](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) account (free Azure accounts aren't eligible for FPGA quota).
+
+- An Azure Machine Learning workspace and the Azure Machine Learning SDK for Python installed, as described in [Create a workspace](../how-to-manage-workspace.md).
+
+- The hardware-accelerated models package: `pip install --upgrade azureml-accel-models[cpu]`
+
+- The [Azure CLI](/cli/azure/install-azure-cli)
+
+- FPGA quota. Submit a [request for quota](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2nac9-PZhBDnNSV2ITz0LNUN0U5S0hXRkNITk85QURTWk9ZUUFUWkkyTC4u), or run this CLI command to check quota:
+
+ ```azurecli-interactive
+ az vm list-usage --location "eastus" -o table --query "[?localName=='Standard PBS Family vCPUs']"
+ ```
+
+ Make sure you have at least 6 vCPUs under the __CurrentValue__ returned.
+
+### Define the TensorFlow model
+
+Begin by using the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro) to create a service definition. A service definition is a file describing a pipeline of graphs (input, featurizer, and classifier) based on TensorFlow. The deployment command compresses the definition and graphs into a ZIP file, and uploads the ZIP to Azure Blob storage. The DNN is already deployed to run on the FPGA.
+
+1. Load Azure Machine Learning workspace
+
+ ```python
+ import os
+ import tensorflow as tf
+
+ from azureml.core import Workspace
+
+ ws = Workspace.from_config()
+ print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
+ ```
+
+1. Preprocess image. The input to the web service is a JPEG image. The first step is to decode the JPEG image and preprocess it. The JPEG images are treated as strings, and the results are tensors that will be the input to the ResNet 50 model.
+
+ ```python
+ # Input images as a two-dimensional tensor containing an arbitrary number of images represented as strings
+ import azureml.accel.models.utils as utils
+ tf.reset_default_graph()
+
+ in_images = tf.placeholder(tf.string)
+ image_tensors = utils.preprocess_array(in_images)
+ print(image_tensors.shape)
+ ```
+
+1. Load featurizer. Initialize the model and download a TensorFlow checkpoint of the quantized version of ResNet50 to be used as a featurizer. Replace "QuantizedResnet50" in the code snippet to import other deep neural networks:
+
+ - QuantizedResnet152
+ - QuantizedVgg16
+ - Densenet121
+
+ ```python
+ from azureml.accel.models import QuantizedResnet50
+ save_path = os.path.expanduser('~/models')
+ model_graph = QuantizedResnet50(save_path, is_frozen=True)
+ feature_tensor = model_graph.import_graph_def(image_tensors)
+ print(model_graph.version)
+ print(feature_tensor.name)
+ print(feature_tensor.shape)
+ ```
+
+1. Add a classifier. This classifier was trained on the ImageNet data set.
+
+ ```python
+ classifier_output = model_graph.get_default_classifier(feature_tensor)
+ print(classifier_output)
+ ```
+
+1. Save the model. Now that the preprocessor, ResNet 50 featurizer, and the classifier have been loaded, save the graph and associated variables as a model.
+
+ ```python
+ model_name = "resnet50"
+ model_save_path = os.path.join(save_path, model_name)
+ print("Saving model in {}".format(model_save_path))
+
+ with tf.Session() as sess:
+ model_graph.restore_weights(sess)
+ tf.saved_model.simple_save(sess, model_save_path,
+ inputs={'images': in_images},
+ outputs={'output_alias': classifier_output})
+ ```
+
+1. Save input and output tensors **as you will use them for model conversion and inference requests**.
+
+ ```python
+ input_tensors = in_images.name
+ output_tensors = classifier_output.name
+
+ print(input_tensors)
+ print(output_tensors)
+ ```
+
+ The following models are listed with their classifier output tensors for inference if you used the default classifier.
+
+ + Resnet50, QuantizedResnet50
+ ```python
+ output_tensors = "classifier_1/resnet_v1_50/predictions/Softmax:0"
+ ```
+ + Resnet152, QuantizedResnet152
+ ```python
+ output_tensors = "classifier/resnet_v1_152/predictions/Softmax:0"
+ ```
+ + Densenet121, QuantizedDensenet121
+ ```python
+ output_tensors = "classifier/densenet121/predictions/Softmax:0"
+ ```
+ + Vgg16, QuantizedVgg16
+ ```python
+ output_tensors = "classifier/vgg_16/fc8/squeezed:0"
+ ```
+ + SsdVgg, QuantizedSsdVgg
+ ```python
+ output_tensors = ['ssd_300_vgg/block4_box/Reshape_1:0', 'ssd_300_vgg/block7_box/Reshape_1:0', 'ssd_300_vgg/block8_box/Reshape_1:0', 'ssd_300_vgg/block9_box/Reshape_1:0', 'ssd_300_vgg/block10_box/Reshape_1:0', 'ssd_300_vgg/block11_box/Reshape_1:0', 'ssd_300_vgg/block4_box/Reshape:0', 'ssd_300_vgg/block7_box/Reshape:0', 'ssd_300_vgg/block8_box/Reshape:0', 'ssd_300_vgg/block9_box/Reshape:0', 'ssd_300_vgg/block10_box/Reshape:0', 'ssd_300_vgg/block11_box/Reshape:0']
+ ```
+
+### Convert the model to the Open Neural Network Exchange format (ONNX)
+
+Before you can deploy to FPGAs, convert the model to the [ONNX](https://onnx.ai/) format.
+
+1. [Register](../concept-model-management-and-deployment.md) the model by using the SDK with the ZIP file in Azure Blob storage. Adding tags and other metadata about the model helps you keep track of your trained models.
+
+ ```python
+ from azureml.core.model import Model
+
+ registered_model = Model.register(workspace=ws,
+ model_path=model_save_path,
+ model_name=model_name)
+
+ print("Successfully registered: ", registered_model.name,
+ registered_model.description, registered_model.version, sep='\t')
+ ```
+
+ If you've already registered a model and want to load it, you may retrieve it.
+
+ ```python
+ from azureml.core.model import Model
+ model_name = "resnet50"
+ # By default, the latest version is retrieved. You can specify the version, i.e. version=1
+ registered_model = Model(ws, name="resnet50")
+ print(registered_model.name, registered_model.description,
+ registered_model.version, sep='\t')
+ ```
+
+1. Convert the TensorFlow graph to the ONNX format. You must provide the names of the input and output tensors, so your client can use them when you consume the web service.
+
+ ```python
+ from azureml.accel import AccelOnnxConverter
+
+ convert_request = AccelOnnxConverter.convert_tf_model(
+ ws, registered_model, input_tensors, output_tensors)
+
+ # If it fails, you can run wait_for_completion again with show_output=True.
+ convert_request.wait_for_completion(show_output=False)
+
+ # If the above call succeeded, get the converted model
+ converted_model = convert_request.result
+ print("\nSuccessfully converted: ", converted_model.name, converted_model.url, converted_model.version,
+ converted_model.id, converted_model.created_time, '\n')
+ ```
+
+### Containerize and deploy the model
+
+Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image.
+
+ ```python
+ from azureml.core.image import Image
+ from azureml.accel import AccelContainerImage
+
+ image_config = AccelContainerImage.image_configuration()
+ # Image name must be lowercase
+ image_name = "{}-image".format(model_name)
+
+ image = Image.create(name=image_name,
+ models=[converted_model],
+ image_config=image_config,
+ workspace=ws)
+ image.wait_for_creation(show_output=False)
+ ```
+
+ List the images by tag and get the detailed logs for any debugging.
+
+ ```python
+ for i in Image.list(workspace=ws):
+ print('{}(v.{} [{}]) stored at {} with build log {}'.format(
+ i.name, i.version, i.creation_state, i.image_location, i.image_build_log_uri))
+ ```
+
+#### Deploy to an Azure Kubernetes Service Cluster
+
+1. To deploy your model as a high-scale production web service, use AKS. You can create a new one using the Azure Machine Learning SDK, CLI, or [Azure Machine Learning studio](https://ml.azure.com).
+
+ ```python
+ from azureml.core.compute import AksCompute, ComputeTarget
+
+ # Specify the Standard_PB6s Azure VM and location. Values for location may be "eastus", "southeastasia", "westeurope", or "westus2". If no value is specified, the default is "eastus".
+ prov_config = AksCompute.provisioning_configuration(vm_size = "Standard_PB6s",
+ agent_count = 1,
+ location = "eastus")
+
+ aks_name = 'my-aks-cluster'
+ # Create the cluster
+ aks_target = ComputeTarget.create(workspace=ws,
+ name=aks_name,
+ provisioning_configuration=prov_config)
+ ```
+
+ The AKS deployment may take around 15 minutes. Check to see if the deployment succeeded.
+
+ ```python
+ aks_target.wait_for_completion(show_output=True)
+ print(aks_target.provisioning_state)
+ print(aks_target.provisioning_errors)
+ ```
+
+1. Deploy the container to the AKS cluster.
+
+ ```python
+ from azureml.core.webservice import Webservice, AksWebservice
+
+ # For this deployment, set the web service configuration without enabling auto-scaling or authentication for testing
+ aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
+ num_replicas=1,
+ auth_enabled=False)
+
+ aks_service_name = 'my-aks-service'
+
+ aks_service = Webservice.deploy_from_image(workspace=ws,
+ name=aks_service_name,
+ image=image,
+ deployment_config=aks_config,
+ deployment_target=aks_target)
+ aks_service.wait_for_deployment(show_output=True)
+ ```
+
+#### Deploy to a local edge server
+
+All [Azure Stack Edge devices](../../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+
+### Consume the deployed model
+
+Lastly, use the sample client to call into the Docker image to get predictions from the model. Sample client code is available:
+- [Python](https://github.com/Azure/aml-real-time-ai/blob/master/pythonlib/amlrealtimeai/client.py)
+- [C#](https://github.com/Azure/aml-real-time-ai/blob/master/sample-clients/csharp)
+
+The Docker image supports gRPC and the TensorFlow Serving "predict" API.
+
+You can also download a sample client for TensorFlow Serving.
+
+```python
+# Using the grpc client in Azure ML Accelerated Models SDK package
+from azureml.accel import PredictionClient
+
+address = aks_service.scoring_uri
+ssl_enabled = address.startswith("https")
+address = address[address.find('/')+2:].strip('/')
+port = 443 if ssl_enabled else 80
+
+# Initialize Azure ML Accelerated Models client
+client = PredictionClient(address=address,
+ port=port,
+ use_ssl=ssl_enabled,
+ service_name=aks_service.name)
+```
+
+Since this classifier was trained on the ImageNet data set, map the classes to human-readable labels.
+
+```python
+import requests
+classes_entries = requests.get(
+ "https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt").text.splitlines()
+
+# Score image with input and output tensor names
+results = client.score_file(path="./snowleopardgaze.jpg",
+ input_name=input_tensors,
+ outputs=output_tensors)
+
+# map results [class_id] => [confidence]
+results = enumerate(results)
+# sort results by confidence
+sorted_results = sorted(results, key=lambda x: x[1], reverse=True)
+# print top 5 results
+for top in sorted_results[:5]:
+ print(classes_entries[top[0]], 'confidence:', top[1])
+```
+
+### Clean up resources
+
+To avoid unnecessary costs, clean up your resources **in this order**: web service, then image, and then the model.
+
+```python
+aks_service.delete()
+aks_target.delete()
+image.delete()
+registered_model.delete()
+converted_model.delete()
+```
+
+## Next steps
+
++ Learn how to [secure your web services](how-to-secure-web-service.md).
++ Learn about FPGA and [Azure Machine Learning pricing and costs](https://azure.microsoft.com/pricing/details/machine-learning/).
++ [Hyperscale hardware: ML at scale on top of Azure + FPGA: Build 2018 (video)](/events/Build/2018/BRK3202)
++ [Project Brainwave for real-time AI](https://www.microsoft.com/research/project/project-brainwave/)
++ [Automated optical inspection system](https://blogs.microsoft.com/ai/build-2018-project-brainwave/)
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-inferencing-gpus.md
+
+ Title: Deploy a model for inference with GPU
+
+description: This article teaches you how to use Azure Machine Learning to deploy a GPU-enabled TensorFlow deep learning model as a web service and score inference requests.
++++++ Last updated : 08/08/2022++++
+# Deploy a deep learning model for inference with GPU
++
+This article teaches you how to use Azure Machine Learning to deploy a GPU-enabled model as a web service. The information in this article is based on deploying a model on Azure Kubernetes Service (AKS). The AKS cluster provides a GPU resource that is used by the model for inference.
+
+Inference, or model scoring, is the phase where the deployed model is used to make predictions. Using GPUs instead of CPUs offers performance advantages on highly parallelizable computation.
++
+> [!IMPORTANT]
+> When using the Azure ML __SDK v1__, GPU inference is only supported on Azure Kubernetes Service. When using the Azure ML __SDK v2__ or __CLI v2__, you can use an online endpoint for GPU inference. For more information, see [Deploy and score a machine learning model with an online endpoint](../how-to-deploy-managed-online-endpoints.md).
+
+> For inference using a __machine learning pipeline__, GPUs are only supported on Azure Machine Learning Compute. For more information on using ML pipelines, see [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](../tutorial-pipeline-batch-scoring-classification.md).
+
+> [!TIP]
+> Although the code snippets in this article use a TensorFlow model, you can apply the information to any machine learning framework that supports GPUs.
+
+> [!NOTE]
+> The information in this article builds on the information in the [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) article. Where that article generally covers deployment to AKS, this article covers GPU specific deployment.
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../quickstart-create-resources.md).
+
+* A Python development environment with the Azure Machine Learning SDK installed. For more information, see [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
+
+* A registered model that uses a GPU.
+
+ * To learn how to register models, see [Deploy Models](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+
+ * To create and register the TensorFlow model used in this article, see [How to Train a TensorFlow Model](../how-to-train-tensorflow.md).
+
+* A general understanding of [How and where to deploy models](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+
+## Connect to your workspace
+
+To connect to an existing workspace, use the following code:
+
+> [!IMPORTANT]
+> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create workspace resources](../quickstart-create-resources.md). For more information on saving the configuration to file, see [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+
+```python
+from azureml.core import Workspace
+
+# Connect to the workspace
+ws = Workspace.from_config()
+```
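+
+If the model called out in the prerequisites isn't registered in this workspace yet, the following is a minimal sketch of registering it. The local folder path `./model` is an assumption (a folder containing the trained TensorFlow checkpoint files), and the model name `tf-dnn-mnist` matches the name used later in this article; adjust both to your own setup.
+
+```python
+from azureml.core.model import Model
+
+# Register a locally saved TensorFlow model folder with the workspace.
+# The folder is assumed to contain the checkpoint files loaded by the entry script.
+registered_model = Model.register(workspace=ws,
+                                  model_path="./model",
+                                  model_name="tf-dnn-mnist")
+print(registered_model.name, registered_model.version)
+```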
+
+## Create a Kubernetes cluster with GPUs
+
+Azure Kubernetes Service provides many different GPU options. You can use any of them for model inference. See [the list of N-series VMs](https://azure.microsoft.com/pricing/details/virtual-machines/linux/#n-series) for a full breakdown of capabilities and costs.
+
+The following code demonstrates how to create a new AKS cluster for your workspace:
+
+```python
+from azureml.core.compute import ComputeTarget, AksCompute
+from azureml.exceptions import ComputeTargetException
+
+# Choose a name for your cluster
+aks_name = "aks-gpu"
+
+# Check to see if the cluster already exists
+try:
+ aks_target = ComputeTarget(workspace=ws, name=aks_name)
+ print('Found existing compute target')
+except ComputeTargetException:
+ print('Creating a new compute target...')
+ # Provision AKS cluster with GPU machine
+ prov_config = AksCompute.provisioning_configuration(vm_size="Standard_NC6")
+
+ # Create the cluster
+ aks_target = ComputeTarget.create(
+ workspace=ws, name=aks_name, provisioning_configuration=prov_config
+ )
+
+ aks_target.wait_for_completion(show_output=True)
+```
+
+> [!IMPORTANT]
+> Azure will bill you as long as the AKS cluster exists. Make sure to delete your AKS cluster when you're done with it.
+
+For more information on using AKS with Azure Machine Learning, see [How to deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md).
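+
+If you already have a GPU-enabled AKS cluster in your subscription, you can attach it to the workspace instead of creating a new one. The following is a sketch only; the resource group name `my-rg` and cluster name `my-gpu-aks` are placeholders for your own values:
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+
+# Attach an existing AKS cluster (hypothetical names) to the workspace
+attach_config = AksCompute.attach_configuration(resource_group="my-rg",
+                                                cluster_name="my-gpu-aks")
+aks_target = ComputeTarget.attach(workspace=ws,
+                                  name="aks-gpu",
+                                  attach_configuration=attach_config)
+aks_target.wait_for_completion(show_output=True)
+```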
+
+## Write the entry script
+
+The entry script receives data submitted to the web service, passes it to the model, and returns the scoring results. The following script loads the Tensorflow model on startup, and then uses the model to score data.
+
+> [!TIP]
+> The entry script is specific to your model. For example, the script must know the framework to use with your model, data formats, etc.
+
+```python
+import json
+import numpy as np
+import os
+import tensorflow as tf
+
+from azureml.core.model import Model
++
+def init():
+ global X, output, sess
+ tf.reset_default_graph()
+ model_root = os.getenv('AZUREML_MODEL_DIR')
+ # the name of the folder in which to look for tensorflow model files
+ tf_model_folder = 'model'
+ saver = tf.train.import_meta_graph(
+ os.path.join(model_root, tf_model_folder, 'mnist-tf.model.meta'))
+ X = tf.get_default_graph().get_tensor_by_name("network/X:0")
+ output = tf.get_default_graph().get_tensor_by_name("network/output/MatMul:0")
+
+ sess = tf.Session()
+ saver.restore(sess, os.path.join(model_root, tf_model_folder, 'mnist-tf.model'))
++
+def run(raw_data):
+ data = np.array(json.loads(raw_data)['data'])
+ # make prediction
+ out = output.eval(session=sess, feed_dict={X: data})
+ y_hat = np.argmax(out, axis=1)
+ return y_hat.tolist()
+```
+
+This file is named `score.py`. For more information on entry scripts, see [How and where to deploy](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
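+
+Optionally, you can sanity-check the entry script on your own machine before deploying. The snippet below is a rough sketch that assumes TensorFlow 1.x is installed locally and that the `model` folder with the checkpoint files sits in the current directory; `AZUREML_MODEL_DIR` is normally set by the hosting service, so here it is set manually:
+
+```python
+# Append to the bottom of score.py (or run in the same module) for a quick local test.
+import json
+import os
+
+os.environ["AZUREML_MODEL_DIR"] = "."         # folder that contains the 'model' subfolder
+init()                                        # loads the graph and starts the session
+sample = json.dumps({"data": [[0.0] * 784]})  # one blank 28x28 MNIST image
+print(run(sample))
+```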
+
+## Define the conda environment
+
+The conda environment file specifies the dependencies for the service. It includes dependencies required by both the model and the entry script. You must list azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service. The following YAML defines the environment for a TensorFlow model. It specifies `tensorflow-gpu`, which makes use of the GPU used in this deployment:
+
+```yaml
+name: project_environment
+dependencies:
+ # The Python interpreter version.
+ # Currently Azure ML only supports 3.5.2 and later.
+- python=3.6.2
+
+- pip:
+ # You must list azureml-defaults as a pip dependency
+ - azureml-defaults>=1.0.45
+ - numpy
+ - tensorflow-gpu==1.12
+channels:
+- conda-forge
+```
+
+For this example, the file is saved as `myenv.yml`.
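+
+If you prefer to generate this file from Python instead of writing the YAML by hand, a roughly equivalent sketch using the SDK's `CondaDependencies` helper is shown below. The package pins mirror the YAML above and are assumptions you can adjust:
+
+```python
+from azureml.core.conda_dependencies import CondaDependencies
+
+# Build the same dependency set programmatically and write it to myenv.yml
+conda_deps = CondaDependencies.create(
+    python_version="3.6.2",
+    pip_packages=["azureml-defaults>=1.0.45", "numpy", "tensorflow-gpu==1.12"])
+
+with open("myenv.yml", "w") as f:
+    f.write(conda_deps.serialize_to_string())
+```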
+
+## Define the deployment configuration
+
+> [!IMPORTANT]
+> AKS does not allow pods to share GPUs, so you can have only as many replicas of a GPU-enabled web service as there are GPUs in the cluster.
+
+The deployment configuration defines the Azure Kubernetes Service environment used to run the web service:
+
+```python
+from azureml.core.webservice import AksWebservice
+
+gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
+ num_replicas=3,
+ cpu_cores=2,
+ memory_gb=4)
+```
+
+For more information, see the reference documentation for [AksWebservice.deploy_configuration](/python/api/azureml-core/azureml.core.webservice.akswebservice#deploy-configuration-autoscale-enabled-none--autoscale-min-replicas-none--autoscale-max-replicas-none--autoscale-refresh-seconds-none--autoscale-target-utilization-none--collect-model-data-none--auth-enabled-none--cpu-cores-none--memory-gb-none--enable-app-insights-none--scoring-timeout-ms-none--replica-max-concurrent-requests-none--max-request-wait-time-none--num-replicas-none--primary-key-none--secondary-key-none--tags-none--properties-none--description-none--gpu-cores-none--period-seconds-none--initial-delay-seconds-none--timeout-seconds-none--success-threshold-none--failure-threshold-none--namespace-none--token-auth-enabled-none--compute-target-name-none-).
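+
+As a variation, and keeping in mind that the number of replicas can never exceed the number of GPUs in the cluster, you could let the service autoscale instead of pinning the replica count. This is a sketch only, not the configuration used in the rest of this article:
+
+```python
+# Autoscaling variant; the replica bounds are assumptions limited by available GPUs
+gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
+                                                    autoscale_min_replicas=1,
+                                                    autoscale_max_replicas=3,
+                                                    cpu_cores=2,
+                                                    memory_gb=4)
+```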
+
+## Define the inference configuration
+
+The inference configuration points to the entry script and an environment object, which uses a Docker image with GPU support. Note that the YAML file used for the environment definition must list azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.
+
+```python
+from azureml.core.model import InferenceConfig
+from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE
+
+myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
+myenv.docker.base_image = DEFAULT_GPU_IMAGE
+inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
+```
+
+For more information on environments, see [Create and manage environments for training and deployment](how-to-use-environments.md).
+For more information, see the reference documentation for [InferenceConfig](/python/api/azureml-core/azureml.core.model.inferenceconfig).
+
+## Deploy the model
+
+Deploy the model to your AKS cluster and wait for it to create your service.
+
+```python
+from azureml.core.model import Model
+
+# Name of the web service that is deployed
+aks_service_name = 'aks-dnn-mnist'
+# Get the registered model
+model = Model(ws, "tf-dnn-mnist")
+# Deploy the model
+aks_service = Model.deploy(ws,
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=gpu_aks_config,
+ deployment_target=aks_target,
+ name=aks_service_name)
+
+aks_service.wait_for_deployment(show_output=True)
+print(aks_service.state)
+```
+
+For more information, see the reference documentation for [Model](/python/api/azureml-core/azureml.core.model.model).
+
+## Issue a sample query to your service
+
+Send a test query to the deployed model. The following code sample downloads MNIST test data and then selects a random test image to send to the service, which scores it and returns the prediction.
+
+```python
+# Used to test your webservice
+import os
+import urllib
+import gzip
+import numpy as np
+import struct
+import requests
+
+# load compressed MNIST gz files and return numpy arrays
+def load_data(filename, label=False):
+ with gzip.open(filename) as gz:
+ struct.unpack('I', gz.read(4))
+ n_items = struct.unpack('>I', gz.read(4))
+ if not label:
+ n_rows = struct.unpack('>I', gz.read(4))[0]
+ n_cols = struct.unpack('>I', gz.read(4))[0]
+ res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)
+ res = res.reshape(n_items[0], n_rows * n_cols)
+ else:
+ res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)
+ res = res.reshape(n_items[0], 1)
+ return res
+
+# one-hot encode a 1-D array
+def one_hot_encode(array, num_of_classes):
+ return np.eye(num_of_classes)[array.reshape(-1)]
+
+# Download test data
+os.makedirs('./data/mnist', exist_ok=True)
+urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename='./data/mnist/test-images.gz')
+urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename='./data/mnist/test-labels.gz')
+
+# Load test data from model training
+X_test = load_data('./data/mnist/test-images.gz', False) / 255.0
+y_test = load_data('./data/mnist/test-labels.gz', True).reshape(-1)
+
+# send a random row from the test set to score
+random_index = np.random.randint(0, len(X_test)-1)
+input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
+
+api_key = aks_service.get_keys()[0]
+headers = {'Content-Type': 'application/json',
+ 'Authorization': ('Bearer ' + api_key)}
+resp = requests.post(aks_service.scoring_uri, input_data, headers=headers)
+
+print("POST to url", aks_service.scoring_uri)
+print("label:", y_test[random_index])
+print("prediction:", resp.text)
+```
+
+For more information on creating a client application, see [Create client to consume deployed web service](../how-to-consume-web-service.md).
+
+## Clean up the resources
+
+If you created the AKS cluster specifically for this example, delete your resources after you're done.
+
+> [!IMPORTANT]
+> Azure bills you based on how long the AKS cluster is deployed. Make sure to clean it up after you are done with it.
+
+```python
+aks_service.delete()
+aks_target.delete()
+```
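+
+If you no longer have the `aks_service` or `aks_target` objects in memory, you can look them up by name before deleting them. The names below are the ones used earlier in this article:
+
+```python
+from azureml.core.webservice import Webservice
+from azureml.core.compute import ComputeTarget
+
+# Retrieve the deployed service and compute target by name, then delete them
+Webservice(workspace=ws, name="aks-dnn-mnist").delete()
+ComputeTarget(workspace=ws, name="aks-gpu").delete()
+```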
+
+## Next steps
+
+* [Deploy model on FPGA](how-to-deploy-fpga-web-service.md)
+* [Deploy model with ONNX](../concept-onnx.md#deploy-onnx-models-in-azure)
+* [Train TensorFlow DNN Models](../how-to-train-tensorflow.md)
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
Data that's collected by the Azure Migrate appliance is stored in the Azure loca
Here's more information about how data is stored: -- The collected data is securely stored in CosmosDB in a Microsoft subscription. The data is deleted when you delete the project. Storage is handled by Azure Migrate. You can't specifically choose a storage account for collected data.
+- The collected data is securely stored in Cosmos DB in a Microsoft subscription. The data is deleted when you delete the project. Storage is handled by Azure Migrate. You can't specifically choose a storage account for collected data.
- If you use [dependency visualization](concepts-dependency-visualization.md), the data that's collected is stored in an Azure Log Analytics workspace created in your Azure subscription. The data is deleted when you delete the Log Analytics workspace in your subscription. ## How much data is uploaded during continuous profiling?
Only the appliance and the appliance agents are updated by these automatic updat
Yes. In the portal, go to the **Agent health** page for the Azure Migrate: Discovery and assessment or Azure Migrate: Server Migration tool. There, you can check the connection status between Azure and the discovery and assessment agents on the appliance.
-## Can I add multiple server credentials on VMware appliance?
+## Can I add multiple server credentials on the appliance?
-Yes, we now support multiple server credentials to perform software inventory (discovery of installed applications), agentless dependency analysis, and discovery of SQL Server instances and databases. [Learn more](tutorial-discover-vmware.md#provide-server-credentials) on how to provide credentials on the appliance configuration manager.
+Yes, we now support multiple server credentials to perform software inventory (discovery of installed applications), agentless dependency analysis, and discovery of SQL Server instances and databases. [Learn more](add-server-credentials.md) on how to provide credentials on the appliance configuration manager.
-## What type of server credentials can I add on the VMware appliance?
+## What type of server credentials can I add on the appliance?
You can provide domain/ Windows(non-domain)/ Linux(non-domain)/ SQL Server authentication credentials on the appliance configuration manager. [Learn more](add-server-credentials.md) about how to provide credentials and how we handle them.
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
You can discover up to 10,000 servers from VMware environment, up to 5,000 serve
## How do I choose the assessment type? - Use **Azure VM assessments** when you want to assess servers from your on-premises [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs. [Learn More](concepts-assessment-calculation.md).-- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server from your VMware environment for migration to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
+- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server instances running in VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as on IaaS servers of other public clouds such as AWS and GCP, for migration to SQL Server on Azure VM, Azure SQL Database, or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
- Use assessment type **Azure App Service** when you want to assess your on-premises ASP.NET web apps running on IIS web server from your VMware environment for migration to Azure App Service. [Learn More](concepts-assessment-calculation.md). - Use **Azure VMware Solution (AVS)** assessments when you want to assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md). - You can use a common group with VMware machines only to run both types of assessments. If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines.
There could be two reasons:
## I want to try out the new Azure SQL assessment
-Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of SQL Server instances and databases running in your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as on IaaS servers of other public clouds such as AWS and GCP, is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I want to try out the new Azure App Service assessment
Discovery and assessment of .NET web apps running in your VMware environment is
## I can't see some servers when I am creating an Azure SQL assessment - Azure SQL assessment can only be done on servers running where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a SQL instance from the group.
+- If you are not able to see a previously created group while creating the assessment, remove any server without a SQL instance from the group.
- If you are running Azure SQL assessments in Azure Migrate for the first time, it is advisable to create a new group of servers. ## I can't see some servers when I am creating an Azure App Service assessment
The Azure SQL assessment only includes databases that are in online status. In c
## I want to compare costs for running my SQL instances on Azure VM vs Azure SQL Database/Azure SQL Managed Instance
-You can create an assessment with type **Azure VM** on the same group that was used in your **Azure SQL** assessment. You can then compare the two reports side by side. Though, Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine. When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) for SQL Server on Azure virtual machines.
+You can create a single **Azure SQL** assessment that includes the desired SQL Server instances across your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as IaaS servers on other public clouds such as AWS and GCP. A single assessment covers readiness, SKUs, estimated costs, and migration blockers for all the available SQL migration targets in Azure: Azure SQL Managed Instance, Azure SQL Database, and SQL Server on Azure VM. You can then compare the assessment output for the desired targets. [Learn More](./concepts-azure-sql-assessment-calculation.md)
## The storage cost in my Azure SQL assessment is zero
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
Last updated 05/05/2022
# Assessment Overview (migrate to Azure SQL)
-This article provides an overview of assessments for migrating on-premises SQL Server instances from a VMware environment to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
+This article provides an overview of assessments for migrating on-premises SQL Server instances from VMware, Microsoft Hyper-V, and physical environments to SQL Server on Azure VM, Azure SQL Database, or Azure SQL Managed Instance using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
## What's an assessment? An assessment with the Discovery and assessment tool is a point in time snapshot of data and measures the readiness and estimates the effect of migrating on-premises servers to Azure.
There are three types of assessments you can create using the Azure Migrate: Dis
**Assessment Type** | **Details** | **Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type.
-**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. <br/><br/> If your SQL servers are running on a non-VMware platform, you can assess readiness by using the [Data Migration Assistant](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
+**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware, Microsoft Hyper-V, and physical environments to SQL Server on Azure VM, Azure SQL Database, or Azure SQL Managed Instance.
**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware environment to Azure App Service. **Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).
migrate Discovered Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discovered-metadata.md
Architecture | uname
Azure Migrate appliance used for discovery of VMware VMs can also collect data on SQL Server instances and databases.
-> [!Note]
-> Currently this feature is only available for servers running in your VMware environment.
- ### SQL database metadata **Database Metadata** | **Views/ SQL Server properties**
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
Last updated 06/27/2022
As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity. This article shows you how to assess discovered SQL instances in preparation for migration to Azure SQL, using the Azure Migrate: Discovery and assessment tool.
-> [!Note]
-> Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
- ## Before you start - Make sure you've [created](./create-manage-projects.md) an Azure Migrate project and have the Azure Migrate: Discovery and assessment tool added.-- To create an assessment, you need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md). The appliance discovers on-premises servers, and sends metadata and performance data to Azure Migrate. [Learn more](migrate-appliance.md).
+- To create an assessment, you need to set up an Azure Migrate appliance for the VMware, Hyper-V, or physical environment, whichever is applicable. The appliance discovers on-premises servers, and sends metadata and performance data to Azure Migrate. [Learn more](migrate-appliance.md).
## Azure SQL assessment overview You can create an Azure SQL assessment with sizing criteria as **Performance-based**.
migrate How To Discover Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-applications.md
This article describes how to discover installed software inventory, web apps, a
Performing software inventory helps identify and tailor a migration path to Azure for your workloads. Software inventory uses the Azure Migrate appliance to perform discovery, using server credentials. It is completely agentless- no agents are installed on the servers to collect this data. > [!Note]
-> Currently the discovery of ASP.NET web apps and SQL Server instances and databases is only available with appliance used for discovery of servers running in your VMware enviornment. These features are not available for servers running in your Hyper-V enviornment and for physical servers or servers running on other clouds like AWS, GCP etc.
+> Currently, the discovery of ASP.NET web apps is only available with the appliance used for discovery of servers running in your VMware environment. This feature is not available for servers running in your Hyper-V environment, for physical servers, or for servers running on other clouds such as AWS and GCP.
## Before you start
Performing software inventory helps identify and tailor a migration path to Azur
2. As you configure the appliance, you need to specify the following in the appliance configuration - The details of the source environment (vCenter Server(s)/Hyper-V host(s) or cluster(s)/physical servers) which you want to discover. - Server credentials, which can be domain/ Windows (non-domain)/ Linux (non-domain) credentials. [Learn more](add-server-credentials.md) about how to provide credentials and how the appliance handles them.
- - Verify the permissions required to perform software inventory.You need a guest user account for Windows servers, and a regular/normal user account (non-sudo access) for all Linux servers.
+ - Verify the permissions required to perform software inventory. You need a guest user account for Windows servers, and a regular/normal user account (non-sudo access) for all Linux servers.
### Add credentials and initiate discovery
The software inventory is exported and downloaded in Excel format. The **Softwar
## Discover SQL Server instances and databases -- Software inventory also identifies the SQL Server instances running in your VMware environment.
+- Software inventory also identifies the SQL Server instances running in your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as on IaaS servers of other public clouds.
- If you have not provided Windows authentication or SQL Server authentication credentials on the appliance configuration manager, then add the credentials so that the appliance can use them to connect to respective SQL Server instances. > [!NOTE]
Once connected, appliance gathers configuration and performance data of SQL Serv
- After the appliance is connected, it gathers configuration data for IIS web server and ASP.NET web apps. Web apps configuration data is updated once every 24 hours. > [!Note]
-> Currently the discovery of ASP.NET web apps and SQL Server instances and databases is only available with appliance used for discovery of servers running in your VMware environment.
+> Currently, the discovery of ASP.NET web apps is only available with the appliance used for discovery of servers running in your VMware environment.
## Next steps
migrate How To Discover Sql Existing Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md
This discovery process is agentless that is, nothing is installed on the target
- Created an [Azure Migrate project](./create-manage-projects.md) before the announcement of SQL and web apps assessment feature for your region - Added the [Azure Migrate: Discovery and assessment](./how-to-assess.md) tool to a project - Review [app-discovery support and requirements](./migrate-support-matrix-vmware.md#vmware-requirements).-- Make sure servers where you're running app-discovery have PowerShell version 2.0 or later installed, and VMware Tools (later than 10.2.0) is installed.
+- If you are discovering assets in a VMware environment, make sure that the servers where you're running app-discovery have PowerShell version 2.0 or later installed, and that VMware Tools (later than 10.2.0) is installed.
- Check the [requirements](./migrate-appliance.md) for deploying the Azure Migrate appliance. - Verify that you have the [required roles](./create-manage-projects.md#verify-permissions) in the subscription to create resources. - Ensure that your appliance has access to the internet
+> [!Note]
+> Even though the processes in this document are described for VMware, they are similar for Microsoft Hyper-V and physical environments.
+> Discovery and assessment of SQL Server instances and databases is also available in Microsoft Hyper-V and physical environments.
+ ## Enable discovery of ASP.NET web apps and SQL Server instances and databases 1. In your Azure Migrate project, either
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
The Azure Migrate appliance is used in the following scenarios.
| | **Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments. **Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment. <br/><br/> Replicate servers without installing any agents on them.
-**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Collect server configuration and performance metadata for assessments.<br/><br/> Perform discovery of installed software inventory and agentless dependency analysis.
-**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Collect server configuration and performance metadata for assessments.<br/><br/> Perform discovery of installed software inventory and agentless dependency analysis.
+**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Perform discovery of installed software inventory, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
+**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
## Deployment methods
The appliance has the following
- **Discovery agent**: The agent collects server configuration metadata, which can be used to create as on-premises assessments. - **Assessment agent**: The agent collects server performance metadata, which can be used to create performance-based assessments. - **Auto update service**: The service keeps all the agents running on the appliance up-to-date. It automatically runs once every 24 hours.
+- **SQL discovery and assessment agent**: sends the configuration and performance metadata of SQL Server instances and databases to Azure.
- **DRA agent**: Orchestrates server replication, and coordinates communication between replicated servers and Azure. Used only when replicating servers to Azure using agentless migration. - **Gateway**: Sends replicated data to Azure. Used only when replicating servers to Azure using agentless migration.-- **SQL discovery and assessment agent**: sends the configuration and performance metadata of SQL Server instances and databases to Azure. - **Web apps discovery and assessment agent**: sends the web apps configuration data to Azure. > [!Note]
-> The last 4 services are only available in the appliance used for discovery and assessment of servers running in your VMware environment.
+> The last 3 services are only available in the appliance used for discovery and assessment of servers running in your VMware environment.
## Appliance - VMware
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Requirement** | **Hyper-V** | **Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
+**Appliance services** | The appliance has the following
**Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances. **Discovery limits** | An appliance can discover up to 5000 servers running in Hyper-V environment.<br> An appliance can connect to up to 300 Hyper-V hosts. **Supported deployment** | Deploy as server running on a Hyper-V host using a VHD template.<br><br> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Requirement** | **Physical** | **Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
+**Appliance services** | The appliance has the following
**Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances.<br> **Discovery limits** | An appliance can discover up to 1000 physical servers. **Supported deployment** | Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
The appliance communicates with the discovery sources using the following proces
**Start discovery** | The appliance communicates with the vCenter server on TCP port 443 by default. If the vCenter server listens on a different port, you can configure it in the appliance configuration manager. | The appliance communicates with the Hyper-V hosts on WinRM port 5985 (HTTP). | The appliance communicates with Windows servers over WinRM port 5985 (HTTP) with Linux servers over port 22 (TCP). **Gather configuration and performance metadata** | The appliance collects the metadata of servers running on vCenter Server(s) using vSphere APIs by connecting on port 443 (default port) or any other port each vCenter Server listens on. | The appliance collects the metadata of servers running on Hyper-V hosts using a Common Information Model (CIM) session with hosts on port 5985.| The appliance collects metadata from Windows servers using Common Information Model (CIM) session with servers on port 5985 and from Linux servers using SSH connectivity on port 22. **Send discovery data** | The appliance sends the collected data to Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits. | The appliance sends the collected data to Azure Migrate: Discovery and assessment over SSL port 443.<br/><br/> The appliance can connect to Azure over the internet or via ExpressRoute private peering or Microsoft peering circuits.
-**Data collection frequency** | Configuration metadata is collected and sent every 15 minutes. <br/><br/> Performance metadata is collected every 50 minutes to send a data point to Azure. <br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 15 minutes.<br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours.| Configuration metadata is collected and sent every 3 hours. <br/><br/> Performance metadata is collected every 5 minutes to send a data point to Azure.<br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours.
+**Data collection frequency** | Configuration metadata is collected and sent every 15 minutes. <br/><br/> Performance metadata is collected every 50 minutes to send a data point to Azure. <br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours. <br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. <br/><br/> The web apps configuration data is updated once every 24 hours. Performance data is not captured for web apps.| Configuration metadata is collected and sent every 30 minutes. <br/><br/> Performance metadata is collected every 30 seconds and is aggregated to send a data point to Azure every 15 minutes.<br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours.<br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds.| Configuration metadata is collected and sent every 3 hours. <br/><br/> Performance metadata is collected every 5 minutes to send a data point to Azure.<br/><br/> Software inventory data is sent to Azure once every 24 hours. <br/><br/> Agentless dependency data is collected every 5 minutes, aggregated on appliance and sent to Azure every 6 hours.<br/><br/> The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds.
**Assess and migrate** | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.<br/><br/>In addition, you can also start migrating servers running in your VMware environment using Azure Migrate: Server Migration tool to orchestrate agentless server replication.| You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool. | You can create assessments from the metadata collected by the appliance using Azure Migrate: Discovery and assessment tool.
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
Support | Details
**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP). **Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
+## SQL Server instance and database discovery requirements
+
+[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials that are provided in the appliance configuration manager. The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+
+After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. SQL Server configuration data is updated once every 24 hours. Performance data is captured every 30 seconds.
+
+Support | Details
+ |
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as on IaaS servers of other public clouds such as AWS and GCP. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
+**Windows servers** | Windows Server 2008 and later are supported.
+**Linux servers** | Currently not supported.
+**Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
+**SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
+**SQL Server versions** | SQL Server 2008 and later are supported.
+**SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
+**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported.<br /><br /> Identification of Failover Cluster and Always On availability groups is not supported.
+**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) is not supported.
+
+> [!NOTE]
+> By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
+>
+> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
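+
+Purely as an illustration of what these connection properties mean (this is not how the appliance itself is implemented), the following hypothetical `pyodbc` snippet shows a connection that encrypts the channel but trusts the server certificate. The server name and credentials are placeholders:
+
+```python
+import pyodbc
+
+# Illustrative only: encrypt the channel but skip certificate chain validation,
+# mirroring TrustServerCertificate=true. Server name and credentials are placeholders.
+conn = pyodbc.connect(
+    "DRIVER={ODBC Driver 17 for SQL Server};"
+    "SERVER=myserver.contoso.local;"
+    "UID=discovery_user;PWD=placeholder-password;"
+    "Encrypt=yes;TrustServerCertificate=yes;"
+)
+```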
+ ## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure.The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details |
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
Support | Details
**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP). **Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
+## SQL Server instance and database discovery requirements
+
+[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials that are provided in the appliance configuration manager. The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+
+After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. SQL Server configuration data is updated once every 24 hours. Performance data is captured every 30 seconds.
+
+Support | Details
+ |
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as on IaaS servers of other public clouds such as AWS and GCP. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
+**Windows servers** | Windows Server 2008 and later are supported.
+**Linux servers** | Currently not supported.
+**Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
+**SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
+**SQL Server versions** | SQL Server 2008 and later are supported.
+**SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
+**Supported SQL configuration** | Currently, only discovery for standalone SQL Server instances and corresponding databases is supported.<br /><br /> Identification of Failover Cluster and Always On availability groups is not supported.
+**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) is not supported.
+
+> [!NOTE]
+> By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
+>
+> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+ ## Dependency analysis requirements (agentless) [Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
Support | Details
| **Supported servers** | You can perform software inventory on up to 10,000 servers running across vCenter Server(s) added to each Azure Migrate appliance. **Operating systems** | Servers running all Windows and Linux versions are supported.
-**Server requirements** | For software inventory, VMware Tools must be installed and running on your servers.The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
+**Server requirements** | For software inventory, VMware Tools must be installed and running on your servers. The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
**vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account that's used for assessment must have privileges for guest operations on VMware VMs. **Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers. **Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory.
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | Currently supported only for servers running SQL Server in your VMware environment. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and physical/bare-metal environments, as well as on IaaS servers of other public clouds such as AWS and GCP. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
Support | Details
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure.The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details |
Requirement | Details
## Next steps - Review [assessment best practices](best-practices-assessment.md).-- Learn how to [prepare for a VMware assessment](./tutorial-discover-vmware.md).
+- Learn how to [prepare for a VMware assessment](./tutorial-discover-vmware.md).
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
migrate Troubleshoot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-discovery.md
Creating assessment on top of servers containing SQL instances that weren't disc
## Common SQL Server instances and database discovery errors
-Azure Migrate supports discovery of SQL Server instances and databases running on on-premises machines by using Azure Migrate: Discovery and assessment. SQL discovery is currently supported for VMware only. See the [Discovery](tutorial-discover-vmware.md) tutorial to get started.
+Azure Migrate supports discovery of SQL Server instances and databases running on on-premises machines by using Azure Migrate: Discovery and assessment. See the [Discovery](tutorial-discover-vmware.md) tutorial to get started.
Typical SQL discovery errors are summarized in the following table.
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
In this tutorial, you learn how to:
> [!NOTE] > Tutorials show the quickest path for trying out a scenario, and use default options where possible.
-> [!NOTE]
-> If SQL Servers are running on non-VMware platforms, [assess the readiness of a SQL Server data estate migrating to Azure SQL Database using the Data Migration Assistant](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
- ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
Click on **Start discovery**, to kick off discovery of the successfully validate
* It takes approximately 2 minutes to complete discovery of 100 servers and their metadata to appear in the Azure portal. * [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
+* The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
+* SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
+* By default, Azure Migrate uses the most secure way of connecting to SQL instances that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+ :::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
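If you want to confirm, independently of the appliance, that a SQL Server instance accepts the kind of encrypted connection described above, a minimal client-side sketch follows. The server name, login, and ODBC driver version are placeholder assumptions; only the `Encrypt` and `TrustServerCertificate` options correspond to the default behavior described above.

```python
# Minimal sketch (not the appliance's own code): open an encrypted connection that
# skips certificate chain validation, mirroring the appliance's default behavior.
# Server, login, and driver name below are hypothetical placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01.contoso.local,1433;"       # hypothetical SQL Server instance
    "UID=migrate-reader;PWD=<password>;"     # SQL Server authentication credential
    "Encrypt=yes;"                           # encrypt the channel
    "TrustServerCertificate=yes;"            # bypass certificate chain validation
)

with pyodbc.connect(conn_str, timeout=10) as conn:
    # encrypt_option reports whether this session's traffic is actually encrypted.
    row = conn.execute(
        "SELECT encrypt_option FROM sys.dm_exec_connections WHERE session_id = @@SPID"
    ).fetchone()
    print("Connection encrypted:", row[0])
```

Setting `TrustServerCertificate=no` is the stricter posture, in which the certificate's root authority must be trusted, similar to what the **Edit SQL Server connection properties** option on the appliance lets you choose.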
## Verify servers in the portal After discovery finishes, you can verify that the servers appear in the portal.
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
Click **Start discovery**, to kick off discovery of the successfully validated s
## How discovery works + * It takes approximately 2 minutes to complete discovery of 100 servers and for their metadata to appear in the Azure portal. * [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
+* The appliance can connect only to SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
+* Data for SQL Server instances and databases begins to appear in the portal within 24 hours after you start discovery.
+* By default, Azure Migrate uses the most secure way of connecting to SQL instances; that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypasses the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+ :::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
## Verify servers in the portal After discovery finishes, you can verify that the servers appear in the portal.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
SHA256 | 0ad60e7299925eff4d1ae9f1c7db485dc9316ef45b0964148a3c07c80761ade2
### Create an account to access servers
-The user account on your servers must have the required permissions to initiate discovery of installed applications and enable agentless dependency analysis. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
+The user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
-* For Windows servers, create an account (local or domain) that has administrator permissions on the servers.
+* For Windows servers, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
* For Linux servers, provide the root user account details or create an account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files. > [!NOTE]
-> You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications and enable agentless dependency analysis. You can add multiple domain, Windows (non-domain) or Linux (non-domain)credentials. Learn how to [add server credentials](add-server-credentials.md).
-
+> You can add multiple server credentials in the Azure Migrate appliance configuration manager to initiate discovery of installed applications, agentless dependency analysis, and SQL Server instances and databases. You can add multiple domain, Windows (non-domain), Linux (non-domain), or SQL Server authentication credentials. Learn how to [add server credentials](add-server-credentials.md).
## Set up a project Set up a new project.
Connect from the appliance to Hyper-V hosts or clusters, and start server discov
### Provide server credentials
-In **Step 3: Provide server credentials to perform software inventory and agentless dependency analysis.**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can disable the slider and proceed with discovery of servers running on Hyper-V hosts/clusters. You can change this option at any time.
+In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, and discovery of SQL Server instances and databases in your Microsoft Hyper-V environment**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can disable the slider and proceed with discovery of servers running on Hyper-V hosts/clusters. You can change this option at any time.
:::image type="content" source="./media/tutorial-discover-hyper-v/appliance-server-credentials-mapping.png" alt-text="Screenshot that shows providing credentials for software inventory and dependency analysis.":::
To add server credentials:
1. Select **Add Credentials**. 1. In the dropdown menu, select **Credentials type**.
- You can provide domain/, Windows(non-domain)/, Linux(non-domain) credentials. Learn how to [provide credentials](add-server-credentials.md) and how we handle them.
+ You can provide domain, Windows (non-domain), Linux (non-domain), and SQL Server authentication credentials. Learn how to [provide credentials](add-server-credentials.md) and how we handle them.
1. For each type of credentials, enter: * A friendly name. * A username.
To add server credentials:
Select **Save**. If you choose to use domain credentials, you also must enter the FQDN for the domain. The FQDN is required to validate the authenticity of the credentials with the Active Directory instance in that domain.
-1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications and agentless dependency analysis.
+1. Review the [required permissions](add-server-credentials.md#required-permissions) on the account for discovery of installed applications, agentless dependency analysis, and discovery of SQL Server instances and databases.
1. To add multiple credentials at once, select **Add more** to save credentials, and then add more credentials.
- When you select **Save** or **Add more**, the appliance validates the domain credentials with the domain's Active Directory instance for authentication. Validation is made after each addition to avoid account lockouts as during discovery, the appliance iterates to map credentials to respective servers.
+ When you select **Save** or **Add more**, the appliance validates the domain credentials with the domain's Active Directory instance for authentication. Validation is performed after each addition to avoid account lockouts while the appliance iterates to map credentials to the respective servers.
To check validation of the domain credentials: In the configuration manager, in the credentials table, see the **Validation status** for domain credentials. Only domain credentials are validated.
-If validation fails, you can select the **Failed** status to see the validation error. Fix the issue, and then select **Revalidate credentials** to reattempt validation of the credentials.
+If validation fails, you can select a **Failed** status to see the validation error. Fix the issue, and then select **Revalidate credentials** to reattempt validation of the credentials.
:::image type="content" source="./media/tutorial-discover-hyper-v/add-server-credentials-multiple.png" alt-text="Screenshot that shows providing and validating multiple credentials.":::
Click on **Start discovery**, to kick off server discovery from the successfully
* It takes approximately 2 minutes per host for metadata of discovered servers to appear in the Azure portal. * If you have provided server credentials, [software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers running on Hyper-V host(s)/cluster(s) is finished.
+* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
+* The appliance can connect only to SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
+* Data for SQL Server instances and databases begins to appear in the portal within 24 hours after you start discovery.
+* By default, Azure Migrate uses the most secure way of connecting to SQL instances; that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypasses the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+
+ :::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
## Verify servers in the portal
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
For Windows servers, use a domain account for domain-joined servers, and a local
> [!Note] > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers. +
+ > [!Note]
+ > To discover SQL Server databases on Windows Servers, both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
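As a quick sanity check before adding the credential in the appliance configuration manager, you can confirm sysadmin membership yourself. The sketch below is illustrative only; the server name and connection options are assumptions.

```python
# Illustrative check (not part of Azure Migrate): confirm that the account you plan
# to provide on the appliance is a member of the sysadmin server role.
# Server name and connection options are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01.contoso.local;"
    "Trusted_Connection=yes;"                # Windows authentication; use UID/PWD for SQL Server authentication
    "Encrypt=yes;TrustServerCertificate=yes;"
)
# IS_SRVROLEMEMBER returns 1 when the current login holds the named server role.
is_sysadmin = conn.execute("SELECT IS_SRVROLEMEMBER('sysadmin')").fetchone()[0]
print("sysadmin" if is_sysadmin == 1 else "not sysadmin - assign the role before discovery")
conn.close()
```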
++ **Linux servers** For Linux servers, you can create a user account in one of three ways:
For Linux servers, you can create a user account in one of three ways:
- You need a root account on the servers that you want to discover. This account can be used to pull configuration and performance metadata and perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity. > [!Note]
-> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Windows servers, it recommended to use Option 1.
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it is recommended to use Option 1.
### Option 2 - To discover the configuration and performance metadata from Linux servers, you can provide a user account with sudo permissions.
Click on **Start discovery**, to kick off discovery of the successfully validate
* It takes approximately 2 minutes to complete discovery of 100 servers and for their metadata to appear in the Azure portal. * [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished.
+* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
+* The appliance can connect only to SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
+* Data for SQL Server instances and databases begins to appear in the portal within 24 hours after you start discovery.
+* By default, Azure Migrate uses the most secure way of connecting to SQL instances; that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypasses the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+
+ :::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
## Verify servers in the portal
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (August 2022)
+
+- SQL discovery and assessment for Microsoft Hyper-V and physical/bare-metal environments, as well as IaaS services of other public clouds.
+ ## Update (June 2022) - Perform at-scale agentless migration of ASP.NET web apps running on IIS web servers hosted on a Windows OS in a VMware environment. [Learn more.](tutorial-migrate-webapps.md)
For more information, see [ASP.NET app containerization and migration to Azure K
## Update (July 2020) -- Agentless VMware migration now supports concurrent replication of 300 VMs per vCenter
+- Agentless VMware migration now supports concurrent replication of 300 VMs per vCenter.
## Update (June 2020) - Assessments for migrating on-premises VMware VMs to [Azure VMware Solution (AVS)](./concepts-azure-vmware-solution-assessment-calculation.md) are now supported. [Learn more](how-to-create-azure-vmware-solution-assessment.md) - Support for multiple credentials on appliance for physical server discovery.-- Support to allow Azure sign in from appliance for tenant where tenant restriction has been configured.
+- Support to allow Azure sign-in from the appliance for tenants where tenant restriction has been configured.
## Update (April 2020)
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Azure Database for MySQL
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
- Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) to fill out the **Technology type** and **Custom location** fields. - Select the recommended packet core version in the **Version** field.
+ - Ensure **AKS-HCI** is selected in the **Platform** field.
1. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. Note the following:
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
Let's take an example of a sample manufacturing company working across multiple
## Microsoft Purview - Profisee integration SaaS deployment on Azure Kubernetes Service (AKS) guide
-1. [Create a user-assigned managed identity in Azure](/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
+1. [Create a user-assigned managed identity in Azure](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
- Contributor role to the resource group where AKS will be deployed. It can either be assigned directly to the resource group **OR** at the subscription level and down. - DNS Zone Contributor role to the particular DNS zone where the entry will be created **OR** Contributor role to the DNS Zone resource group. This DNS role is needed only if updating DNS hosted in Azure. - Application Administrator role in Azure Active Directory so the required permissions that are needed for the application registration can be assigned.
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
# Automate threat response in Microsoft Sentinel with automation rules - This article explains what Microsoft Sentinel automation rules are, and how to use them to implement your Security Orchestration, Automation and Response (SOAR) operations, increasing your SOC's effectiveness and saving you time and resources. ## What are automation rules?
Automation rules apply to the following categories of use cases:
- Inspect the contents of an incident (alerts, entities, and other properties) and take further action by calling a playbook. -- Automation rules can also be [the mechanism by which you run a playbook](whats-new.md#automation-rules-for-alerts) in response to an **alert** *not associated with an incident*.
+- Automation rules can also be [the mechanism by which you run a playbook](whats-new.md#automation-rules-for-alerts-preview) in response to an **alert** *not associated with an incident*.
> [!IMPORTANT] >
sentinel Best Practices Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-data.md
Title: Best practices for data collection in Microsoft Sentinel description: Learn about best practices to employ when connecting data sources to Microsoft Sentinel.--++ Last updated 11/09/2021
# Data collection best practices - This section reviews best practices for collecting data using Microsoft Sentinel data connectors. For more information, see [Connect data sources](connect-data-sources.md), [Microsoft Sentinel data connectors reference](data-connectors-reference.md), and the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md). ## Prioritize your data connectors
Standard configuration for data collection may not work well for your organizati
|Challenge / Requirement |Possible solutions |Considerations | ||||
-|**Requires log filtering** | Use Logstash <br><br>Use Azure Functions <br><br> Use LogicApps <br><br> Use custom code (.NET, Python) | While filtering can lead to cost savings, and ingests only the required data, some Microsoft Sentinel features are not supported, such as [UEBA](identify-threats-with-entity-behavior-analytics.md), [entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages), [machine learning](bring-your-own-ml.md), and [fusion](fusion.md). <br><br>When configuring log filtering, you'll need to make updates in resources such as threat hunting queries and analytics rules |
+|**Requires log filtering** | Use Logstash <br><br>Use Azure Functions <br><br> Use LogicApps <br><br> Use custom code (.NET, Python) | While filtering can lead to cost savings and ensures that only the required data is ingested, some Microsoft Sentinel features are not supported, such as [UEBA](identify-threats-with-entity-behavior-analytics.md), [entity pages](entity-pages.md), [machine learning](bring-your-own-ml.md), and [fusion](fusion.md). <br><br>When configuring log filtering, you'll need to make updates in resources such as threat hunting queries and analytics rules. |
|**Agent cannot be installed** |Use Windows Event Forwarding, supported with the [Azure Monitor Agent](connect-windows-security-events.md#connector-options) | Using Windows Event forwarding lowers load-balancing events per second from the Windows Event Collector, from 10,000 events to 500-1000 events.| |**Servers do not connect to the internet** | Use the [Log Analytics gateway](../azure-monitor/agents/gateway.md) | Configuring a proxy to your agent requires extra firewall rules to allow the Gateway to work. | |**Requires tagging and enrichment at ingestion** |Use Logstash to inject a ResourceID <br><br>Use an ARM template to inject the ResourceID into on-premises machines <br><br>Ingest the resource ID into separate workspaces | Log Analytics doesn't support RBAC for custom tables <br><br>Microsoft Sentinel doesn't support row-level RBAC <br><br>**Tip**: You may want to adopt cross workspace design and functionality for Microsoft Sentinel. |
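For the **Requires log filtering** row above, the "custom code" option is conceptually just a predicate applied before ingestion. The sketch below is a generic illustration under assumed field names (`EventID`, `Severity`); it is not a supported Microsoft Sentinel component, and the forwarding step is left as a stub for whichever ingestion path you use (for example, a Logstash output or an Azure Function).

```python
# Generic illustration of pre-ingestion log filtering ("custom code" option above).
# Field names and the noisy-event list are assumptions; adapt them to your source.
from typing import Iterable, List

NOISY_EVENT_IDS = {4662, 5156}   # hypothetical high-volume, low-value event IDs

def filter_events(events: Iterable[dict]) -> List[dict]:
    """Drop events that should never reach the workspace (and never be billed)."""
    kept = []
    for event in events:
        if event.get("EventID") in NOISY_EVENT_IDS:
            continue
        if event.get("Severity") == "Verbose":
            continue
        kept.append(event)
    return kept

def forward_to_workspace(events: List[dict]) -> None:
    # Stub: send the surviving events through your ingestion path of choice.
    raise NotImplementedError

if __name__ == "__main__":
    sample = [
        {"EventID": 4624, "Severity": "Informational"},
        {"EventID": 4662, "Severity": "Verbose"},
    ]
    print(filter_events(sample))  # only the 4624 event remains
```

Keep the considerations column in mind: data you filter out is never available to UEBA, entity pages, machine learning, or fusion, so filter only what you are certain you won't need.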
sentinel Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entities.md
Title: Use entities to classify and analyze data in Microsoft Sentinel | Microsoft Docs
+ Title: Use entities to classify and analyze data in Microsoft Sentinel
description: Assign entity classifications (users, hostnames, IP addresses) to data items in Microsoft Sentinel, and use them to compare, analyze, and correlate data from multiple sources. Previously updated : 11/09/2021 Last updated : 07/26/2022 # Classify and analyze data using entities in Microsoft Sentinel -
-## What are entities?
- When alerts are sent to or generated by Microsoft Sentinel, they contain data items that Sentinel can recognize and classify into categories as **entities**. When Microsoft Sentinel understands what kind of entity a particular data item represents, it knows the right questions to ask about it, and it can then compare insights about that item across the full range of data sources, and easily track it and refer to it throughout the entire Sentinel experience - analytics, investigation, remediation, hunting, and so on. Some common examples of entities are users, hosts, files, processes, IP addresses, and URLs.
-### Entity identifiers
+## Entity identifiers
Microsoft Sentinel supports a wide variety of entity types. Each type has its own unique attributes, including some that can be used to identify a particular entity. These attributes are represented as fields in the entity, and are called **identifiers**. See the full list of supported entities and their identifiers below.
-#### Strong and weak identifiers
+### Strong and weak identifiers
As noted just above, for each type of entity there are fields, or sets of fields, that can identify it. These fields or sets of fields can be referred to as **strong identifiers** if they can uniquely identify an entity without any ambiguity, or as **weak identifiers** if they can identify an entity under some circumstances, but are not guaranteed to uniquely identify an entity in all cases. In many cases, though, a selection of weak identifiers can be combined to produce a strong identifier.
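As a toy illustration of that last point, the sketch below combines two weak host identifiers into a single stronger key. The fields and the rule are illustrative assumptions, not Microsoft Sentinel's actual entity-matching logic.

```python
# Toy illustration of combining weak identifiers into a stronger one.
# The fields and the rule below are illustrative assumptions, not
# Microsoft Sentinel's actual entity-matching logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HostEntity:
    azure_resource_id: Optional[str] = None   # strong identifier on its own
    hostname: Optional[str] = None            # weak: not unique across domains
    dns_domain: Optional[str] = None          # weak on its own

def strong_key(host: HostEntity) -> Optional[str]:
    if host.azure_resource_id:                               # a strong identifier by itself
        return host.azure_resource_id.lower()
    if host.hostname and host.dns_domain:                    # two weak identifiers combined
        return f"{host.hostname}.{host.dns_domain}".lower()  # FQDN-style key acts as a stronger identifier
    return None                                              # only weak identifiers: still ambiguous

print(strong_key(HostEntity(hostname="FILESRV01", dns_domain="contoso.com")))
# -> filesrv01.contoso.com
```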
If, however, one of your resource providers creates an alert in which an entity
In order to minimize the risk of this happening, you should verify that all of your alert providers properly identify the entities in the alerts they produce. Additionally, synchronizing user account entities with Azure Active Directory may create a unifying directory, which will be able to merge user account entities.
-#### Supported entities
+### Supported entities
The following types of entities are currently identified in Microsoft Sentinel:
Learn [which identifiers strongly identify an entity](entities-reference.md).
## Entity pages
-When you encounter a user or host entity (IP address entities are in preview) in an entity search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
-
-Entity pages consist of three parts:
--- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Microsoft Defender for Cloud, CEF/Syslog, and Microsoft 365 Defender.--- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, [anomalies](soc-ml-anomalies.md), and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing. --- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify [anomalies](soc-ml-anomalies.md) and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.-
-> [!NOTE]
-> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md).
-
-### The timeline
--
-The timeline is a major part of the entity page's contribution to behavior analytics in Microsoft Sentinel. It presents a story about entity-related events, helping you understand the entity's activity within a specific time frame.
-
-You can choose the **time range** from among several preset options (such as *last 24 hours*), or set it to any custom-defined time frame. Additionally, you can set filters that limit the information in the timeline to specific types of events or alerts.
-
-The following types of items are included in the timeline:
--- Alerts - any alerts in which the entity is defined as a **mapped entity**. Note that if your organization has created [custom alerts using analytics rules](./detect-threats-custom.md), you should make sure that the rules' entity mapping is done properly.--- Bookmarks - any bookmarks that include the specific entity shown on the page.--- Anomalies - UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.--- Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.-
-### Entity Insights
-
-Entity insights are queries defined by Microsoft security researchers to help your analysts investigate more efficiently and effectively. The insights are presented as part of the entity page, and provide valuable security information on hosts and users, in the form of tabular data and charts. Having the information here means you don't have to detour to Log Analytics. The insights include data regarding sign-ins, group additions, anomalous events and more, and include advanced ML algorithms to detect anomalous behavior.
-
-The insights are based on the following data sources:
--- Syslog (Linux)-- SecurityEvent (Windows)-- AuditLogs (Azure AD)-- SigninLogs (Azure AD)-- OfficeActivity (Office 365)-- BehaviorAnalytics (Microsoft Sentinel UEBA)-- Heartbeat (Azure Monitor Agent)-- CommonSecurityLog (Microsoft Sentinel)-
-### How to use entity pages
-
-Entity pages are designed to be part of multiple usage scenarios, and can be accessed from incident management, the investigation graph, bookmarks, or directly from the entity search page under **Entity behavior analytics** in the Microsoft Sentinel main menu.
--
-Entity page information is stored in the **BehaviorAnalytics** table, described in detail in the [Microsoft Sentinel UEBA reference](ueba-reference.md).
+Information about entity pages can now be found at [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md).
## Next steps
sentinel Entity Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entity-pages.md
+
+ Title: Investigate entities with entity pages in Microsoft Sentinel
+description: Use entity pages to get information about entities that you come across in your incident investigations. Gain insights into entity activities and assess risk.
+++ Last updated : 07/26/2022++
+# Investigate entities with entity pages in Microsoft Sentinel
+
+When you come across a user account, a hostname / IP address, or an Azure resource in an incident investigation, you may decide you want to know more about it. For example, you might want to know its activity history, whether it's appeared in other alerts or incidents, whether it's done anything unexpected or out of character, and so on. In short, you want information that can help you determine what sort of threat these entities represent and guide your investigation accordingly.
+
+## Entity pages
+
+In these situations, you can select the entity (it will appear as a clickable link) and be taken to an **entity page**, a datasheet full of useful information about that entity. You can also arrive at an entity page by searching directly for entities on the Microsoft Sentinel **entity behavior** page. The types of information you will find on entity pages include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
+
+More specifically, entity pages consist of three parts:
+
+- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Azure Activity, Azure Resource Manager, Microsoft Defender for Cloud, CEF/Syslog, and Microsoft 365 Defender (with all its components).
+
+- The center panel shows a [graphical and textual timeline](#the-timeline) of notable events related to the entity, such as alerts, bookmarks, [anomalies](soc-ml-anomalies.md), and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing.
+
+- The right-side panel presents [behavioral insights](#entity-insights) on the entity. These insights are continuously developed by Microsoft security research teams. They are based on various data sources and provide context for the entity and its observed activities, helping you to quickly identify [anomalous behavior](soc-ml-anomalies.md) and security threats.
+
+## The timeline
++
+The timeline is a major part of the entity page's contribution to behavior analytics in Microsoft Sentinel. It presents a story about entity-related events, helping you understand the entity's activity within a specific time frame.
+
+You can choose the **time range** from among several preset options (such as *last 24 hours*), or set it to any custom-defined time frame. Additionally, you can set filters that limit the information in the timeline to specific types of events or alerts.
+
+The following types of items are included in the timeline:
+
+- Alerts - any alerts in which the entity is defined as a **mapped entity**. Note that if your organization has created [custom alerts using analytics rules](./detect-threats-custom.md), you should make sure that the rules' entity mapping is done properly.
+
+- Bookmarks - any bookmarks that include the specific entity shown on the page.
+
+- Anomalies - UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.
+
+- Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.
+
+## Entity insights
+
+Entity insights are queries defined by Microsoft security researchers to help your analysts investigate more efficiently and effectively. The insights are presented as part of the entity page, and provide valuable security information on hosts and users, in the form of tabular data and charts. Having the information here means you don't have to detour to Log Analytics. The insights include data regarding sign-ins, group additions, anomalous events and more, and include advanced ML algorithms to detect anomalous behavior.
+
+The insights are based on the following data sources:
+
+- Syslog (Linux)
+- SecurityEvent (Windows)
+- AuditLogs (Azure AD)
+- SigninLogs (Azure AD)
+- OfficeActivity (Office 365)
+- BehaviorAnalytics (Microsoft Sentinel UEBA)
+- Heartbeat (Azure Monitor Agent)
+- CommonSecurityLog (Microsoft Sentinel)
+
+## How to use entity pages
+
+Entity pages are designed to be part of multiple usage scenarios, and can be accessed from incident management, the investigation graph, bookmarks, or directly from the entity search page under **Entity behavior** in the Microsoft Sentinel main menu.
++
+Entity page information is stored in the **BehaviorAnalytics** table, described in detail in the [Microsoft Sentinel UEBA reference](ueba-reference.md).
+
+## Supported entity pages
+
+Microsoft Sentinel currently offers the following entity pages:
+
+- User account
+- Host
+- IP address (**Preview**)
+
+ > [!NOTE]
+ > The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md).
+
+- Azure resource (**Preview**)
+
+## Next steps
+
+In this document, you learned about getting information about entities in Microsoft Sentinel using entity pages. For more information about entities and how you can use them, see the following articles:
+
+- [Classify and analyze data using entities in Microsoft Sentinel](entities.md).
+- [Customize activities on entity page timelines](customize-entity-activities.md).
+- [Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel](identify-threats-with-entity-behavior-analytics.md)
+- [Enable entity behavior analytics](./enable-entity-behavior-analytics.md) in Microsoft Sentinel.
+- [Hunt for security threats](./hunting.md).
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Title: Identify advanced threats with User and Entity Behavior Analytics (UEBA)
description: Create behavioral baselines for entities (users, hostnames, IP addresses) and use them to detect anomalous behavior and identify zero-day advanced persistent threats (APT). Previously updated : 11/09/2021 Last updated : 08/08/2022 # Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
-> [!IMPORTANT]
->
-> - The **IP address entity** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-## What is User and Entity Behavior Analytics (UEBA)?
Identifying threats inside your organization and their potential impact - whether a compromised entity or a malicious insider - has always been a time-consuming and labor-intensive process. Sifting through alerts, connecting the dots, and active hunting all add up to massive amounts of time and effort expended with minimal returns, and the possibility of sophisticated threats simply evading discovery. Particularly elusive threats like zero-day, targeted, and advanced persistent threats can be the most dangerous to your organization, making their detection all the more critical. The UEBA capability in Microsoft Sentinel eliminates the drudgery from your analysts' workloads and the uncertainty from their efforts, and delivers high-fidelity, actionable intelligence, so they can focus on investigation and remediation.
+## What is User and Entity Behavior Analytics (UEBA)?
+ As Microsoft Sentinel collects logs and alerts from all of its connected data sources, it analyzes them and builds baseline behavioral profiles of your organization's entities (such as users, hosts, IP addresses, and applications) across time and peer group horizon. Using a variety of techniques and machine learning capabilities, Microsoft Sentinel can then identify anomalous activity and help you determine if an asset has been compromised. Not only that, but it can also figure out the relative sensitivity of particular assets, identify peer groups of assets, and evaluate the potential impact of any given compromised asset (its "blast radius"). Armed with this information, you can effectively prioritize your investigation and incident handling. ### UEBA analytics architecture
Each activity is scored with "Investigation Priority Score" - which determ
See how behavior analytics is used in [Microsoft Defender for Cloud Apps](https://techcommunity.microsoft.com/t5/microsoft-security-and/prioritize-user-investigations-in-cloud-app-security/ba-p/700136) for an example of how this works.
-## Entity Pages
- Learn more about [entities in Microsoft Sentinel](entities.md) and see the full list of [supported entities and identifiers](entities-reference.md).
-When you encounter a user or host entity (IP address entities are in preview) in an entity search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
-
-Entity pages consist of three parts:
--- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Microsoft Defender for Cloud, CEF/Syslog, and Microsoft 365 Defender.--- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, [anomalies](soc-ml-anomalies.md), and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing. --- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify [anomalies](soc-ml-anomalies.md) and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.-
-> [!NOTE]
-> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md).
-
-### The timeline
--
-The timeline is a major part of the entity page's contribution to behavior analytics in Microsoft Sentinel. It presents a story about entity-related events, helping you understand the entity's activity within a specific time frame.
-
-You can choose the **time range** from among several preset options (such as *last 24 hours*), or set it to any custom-defined time frame. Additionally, you can set filters that limit the information in the timeline to specific types of events or alerts.
-
-The following types of items are included in the timeline:
--- Alerts - any alerts in which the entity is defined as a **mapped entity**. Note that if your organization has created [custom alerts using analytics rules](./detect-threats-custom.md), you should make sure that the rules' entity mapping is done properly.--- Bookmarks - any bookmarks that include the specific entity shown on the page.--- Anomalies - UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.--- Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.-
-### Entity Insights
-
-Entity insights are queries defined by Microsoft security researchers to help your analysts investigate more efficiently and effectively. The insights are presented as part of the entity page, and provide valuable security information on hosts and users, in the form of tabular data and charts. Having the information here means you don't have to detour to Log Analytics. The insights include data regarding sign-ins, group additions, anomalous events and more, and include advanced ML algorithms to detect anomalous behavior.
-
-The insights are based on the following data sources:
-- Syslog (Linux)-- SecurityEvent (Windows)-- AuditLogs (Azure AD)-- SigninLogs (Azure AD)-- OfficeActivity (Office 365)-- BehaviorAnalytics (Microsoft Sentinel UEBA)-- Heartbeat (Azure Monitor Agent)-- CommonSecurityLog (Microsoft Sentinel)-- ThreatIntelligenceIndicators (Microsoft Sentinel)-
-### How to use entity pages
-
-Entity pages are designed to be part of multiple usage scenarios, and can be accessed from incident management, the investigation graph, bookmarks, or directly from the entity search page under **Entity behavior analytics** in the Microsoft Sentinel main menu.
-
+### Entity pages
-Entity page information is stored in the **BehaviorAnalytics** table, described in detail in the [Microsoft Sentinel UEBA reference](ueba-reference.md).
+Information about **entity pages** can now be found at [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md).
## Querying behavior analytics data
sentinel Investigate With Ueba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-with-ueba.md
Title: Investigate incidents with UEBA data | Microsoft Docs description: Learn how to use UEBA data while investigating to gain greater context to potentially malicious activity occurring in your organization.-+ Last updated 11/09/2021-+ # Tutorial: Investigate incidents with UEBA data - This article describes common methods and sample procedures for using [user entity behavior analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md) in your regular investigation workflows. > [!IMPORTANT]
A common example of a false positive is when impossible travel activity is detec
### Analyze a false positive
-For example, for an **Impossible travel** incident, after confirming with the user that a VPN was used, navigate from the incident to the user entity page. Use the data displayed there to determine whether the locations captured are included in the user's commonly-known locations.
+For example, for an **Impossible travel** incident, after confirming with the user that a VPN was used, navigate from the incident to the user entity page. Use the data displayed there to determine whether the locations captured are included in the user's commonly known locations.
For example:
For example, to investigate a password spray incident with UEBA insights, you mi
1. Select the administrative user entity in the map, and then select **Insights** on the right to find more details, such as the graph of sign-ins over time.
-1. Select **Info** on the right, and then select **View full details** to jump to the [user entity page](identify-threats-with-entity-behavior-analytics.md#entity-pages) to drill down further.
+1. Select **Info** on the right, and then select **View full details** to jump to the [user entity page](entity-pages.md) to drill down further.
- For example, note whether this is the user's first Potential Password spray incident, or watch the user's sign in history to understand whether the failures were anomalous.
+ For example, note whether this is the user's first Potential Password spray incident, or watch the user's sign-in history to understand whether the failures were anomalous.
> [!TIP] > You can also run the **Anomalous Failed Logon** [hunting query](hunting.md) to monitor all of an organization's anomalous failed logins. Use the results from the query to start investigations into possible password spray attacks.
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
# Microsoft Sentinel skill-up training - This article walks you through a level 400 training to help you skill up on Microsoft Sentinel. The training comprises 21 modules that present relevant product documentation, blog posts, and other resources. The modules listed here are split into five parts following the life cycle of a Security Operation Center (SOC):
This skill-up training is a level-400 training that's based on the [Microsoft Se
* Although the skill-up training is extensive, it naturally has to follow a script and can't expand on every topic. See the referenced documentation for information about each article. * You can now become certified with the new certification [SC-200: Microsoft Security Operations Analyst](/learn/certifications/exams/sc-200), which covers Microsoft Sentinel. For a broader, higher-level view of the Microsoft Security suite, you might also want to consider [SC-900: Microsoft Security, Compliance, and Identity Fundamentals](/learn/certifications/exams/sc-900) or [AZ-500: Microsoft Azure Security Technologies](/learn/certifications/exams/az-500).
-* If you're already skilled up on Microsoft Sentinel, keep track of [what's new](whats-new.md) or join the [Private Preview](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-kibZAPJAVBiU46J6wWF_5URDFSWUhYUldTWjdJNkFMVU1LTEU4VUZHMy4u) program for an earlier view into upcoming releases.
+* If you're already skilled up on Microsoft Sentinel, keep track of [what's new](whats-new.md) or join the [Microsoft Cloud Security Private Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-kibZAPJAVBiU46J6wWF_5URDFSWUhYUldTWjdJNkFMVU1LTEU4VUZHMy4u) program for an earlier view into upcoming releases.
* Do you have a feature idea to share with us? Let us know on the [Microsoft Sentinel user voice page](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8). * Are you a premier customer? You might want the on-site or remote, four-day _Microsoft Sentinel Fundamentals Workshop_. Contact your Customer Success Account Manager for more details. * Do you have a specific issue? Ask (or answer others) on the [Microsoft Sentinel Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel). Or you can email your question or issue to us at <MicrosoftSentinel@microsoft.com>.
Use watchlists to help you with following scenarios:
* **Investigate threats and respond to incidents quickly**: Rapidly import IP addresses, file hashes, and other data from CSV files. After you import the data, use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries.
-* **Import business data as a watchlist**: For example, import lists of users with privileged system access, or terminated employees. Then, use the watchlist to create allow lists and block lists to detect or prevent those users from logging in to the network.
+* **Import business data as a watchlist**: For example, import lists of users with privileged system access, or terminated employees. Then, use the watchlist to create allowlists and blocklists to detect or prevent those users from logging in to the network.
-* **Reduce alert fatigue**: Create allow lists to suppress alerts from a group of users, such as users from authorized IP addresses who perform tasks that would normally trigger the alert. Prevent benign events from becoming alerts.
+* **Reduce alert fatigue**: Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses who perform tasks that would normally trigger the alert. Prevent benign events from becoming alerts.
* **Enrich event data**: Use watchlists to enrich your event data with name-value combinations that are derived from external data sources.
After you build your SOC, you need to start using it. The "day in an SOC analyst
To help enable your teams to collaborate seamlessly across the organization and with external stakeholders, see [Integrating with Microsoft Teams directly from Microsoft Sentinel](collaborate-in-microsoft-teams.md). And view the ["Decrease your SOCΓÇÖs MTTR (Mean Time to Respond) by integrating Microsoft Sentinel with Microsoft Teams"](https://www.youtube.com/watch?v=0REgc2jB560&ab_channel=MicrosoftSecurityCommunity) webinar.
-You might also want to read the [documentation article on incident investigation](investigate-cases.md). As part of the investigation, you'll also use the [entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages) to get more information about entities that are related to your incident or identified as part of your investigation.
+You might also want to read the [documentation article on incident investigation](investigate-cases.md). As part of the investigation, you'll also use the [entity pages](entity-pages.md) to get more information about entities related to your incident or identified as part of your investigation.
Incident investigation in Microsoft Sentinel extends beyond the core incident investigation functionality. You can build additional investigation tools by using workbooks and notebooks. Notebooks are discussed in the next section, [Module 17: Hunting](#module-17-hunting). You can also build more investigation tools or modify existing ones to fit your specific needs. Examples include:
sentinel Ueba Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ueba-reference.md
These are the data sources from which the UEBA engine collects and analyzes data
## UEBA enrichments
-This section describes the enrichments UEBA adds to Microsoft Sentinel entities, along with all their details, that you can use to focus and sharpen your security incident investigations. These enrichments are displayed on [entity pages](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages) and can be found in the following Log Analytics tables, the contents and schema of which are listed below:
+This section describes the enrichments UEBA adds to Microsoft Sentinel entities, along with all their details, that you can use to focus and sharpen your security incident investigations. These enrichments are displayed on [entity pages](entity-pages.md#how-to-use-entity-pages) and can be found in the following Log Analytics tables, the contents and schema of which are listed below:
- The **BehaviorAnalytics** table is where UEBA's output information is stored.
This section describes the enrichments UEBA adds to Microsoft Sentinel entities,
<a name="baseline-explained"></a>User activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which the dynamic baseline is derived. The lookback period is specified in the [**Baseline**](#activityinsights-field) column in this table. -- The **IdentityInfo** table is where identity information synchronized to UEBA from Azure Active Directory is stored.
+- The **IdentityInfo** table is where identity information synchronized to UEBA from Azure Active Directory (and from on-premises Active Directory via Microsoft Defender for Identity) is stored.
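As a rough sketch of how the two tables can be queried together (the workspace ID is a placeholder, and the column names are assumed from the public BehaviorAnalytics and IdentityInfo schemas rather than taken from this article):

```powershell
# Minimal sketch: recent high-priority UEBA results joined with identity context.
$query = @'
BehaviorAnalytics
| where TimeGenerated > ago(7d) and InvestigationPriority > 5
| join kind=leftouter (
    IdentityInfo
    | summarize arg_max(TimeGenerated, *) by AccountUPN
  ) on $left.UserPrincipalName == $right.AccountUPN
| project TimeGenerated, UserPrincipalName, ActivityType, InvestigationPriority, Department
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```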
### BehaviorAnalytics table
-The following table describes the behavior analytics data displayed on each [entity details page](identify-threats-with-entity-behavior-analytics.md#how-to-use-entity-pages) in Microsoft Sentinel.
+The following table describes the behavior analytics data displayed on each [entity details page](entity-pages.md#how-to-use-entity-pages) in Microsoft Sentinel.
| Field | Type | Description |
| ----- | ---- | ----------- |
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
Title: Archive for What's new in Azure Sentinel description: A description of what's new and changed in Azure Sentinel from six months ago and earlier.--++ Last updated 11/22/2021
Azure Sentinel now supports the IP address entity, and you can now view IP entit
Like the user and host entity pages, the IP page includes general information about the IP, a list of activities the IP has been found to be a part of, and more, giving you an ever-richer store of information to enhance your investigation of security incidents.
-For more information, see [Entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages).
+For more information, see [Entity pages](entity-pages.md).
### Activity customization (Public preview)
Our collection of third-party integrations continues to grow, with thirty connec
### UEBA insights in the entity page (Public preview)
-The Azure Sentinel entity details pages provide an [Insights pane](identify-threats-with-entity-behavior-analytics.md#entity-insights), which displays behavioral insights on the entity and help to quickly identify anomalies and security threats.
+The Azure Sentinel entity details pages provide an [Insights pane](entity-pages.md#entity-insights), which displays behavioral insights on the entity and helps to quickly identify anomalies and security threats.
If you have [UEBA enabled](ueba-reference.md), and have selected a timeframe of at least four days, this Insights pane will now also include the following new sections for UEBA insights:
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 03/01/2022 Last updated : 08/08/2022
If you're looking for items older than six months, you'll find them in the [Arch
## August 2022
+- [Azure resource entity page (Preview)](#azure-resource-entity-page-preview)
+- [New data sources for User and entity behavior analytics (UEBA) (Preview)](#new-data-sources-for-user-and-entity-behavior-analytics-ueba-preview)
+- [Microsoft Sentinel Solution for SAP is now generally available](#microsoft-sentinel-solution-for-sap-is-now-generally-available)
+
+### Azure resource entity page (Preview)
+
+Azure resources such as Azure Virtual Machines, Azure Storage Accounts, Azure Key Vault, Azure DNS, and more are essential parts of your network. Threat actors might attempt to obtain sensitive data from your storage account, gain access to your key vault and the secrets it contains, or infect your virtual machine with malware. The new [Azure resource entity page](entity-pages.md) is designed to help your SOC investigate incidents that involve Azure resources in your environment, hunt for potential attacks, and assess risk.
+
+You can now gain a 360-degree view of your resource security with the new entity page, which provides several layers of security information about your resources.
+
+First, it provides some basic details about the resource: where it's located, when it was created, which resource group it belongs to, the Azure tags it carries, and so on. It then surfaces access management information: how many owners, contributors, and other roles are authorized to access the resource, which networks are allowed to reach it, and resource-specific security settings such as the key vault's permission model or whether public blob access is allowed on the storage account. Finally, the page includes integrations, such as Microsoft Defender for Cloud, Defender for Endpoint, and Microsoft Purview, that enrich the information about the resource.
+
+### New data sources for User and entity behavior analytics (UEBA) (Preview)
+
+The [Security Events data source](ueba-reference.md#ueba-data-sources) for UEBA, which until now included only event ID 4624 (An account was successfully logged on), now includes four more event IDs and types, currently in **PREVIEW**:
+
+ - 4625: An account failed to log on.
+ - 4648: A logon was attempted using explicit credentials.
+ - 4672: Special privileges assigned to new logon.
+ - 4688: A new process has been created.
+
+Having user data for these new event types in your workspace gives you more, higher-quality insights into the described user activities: Active Directory and Azure AD enrichments, anomalous activity detection, and matching against internal Microsoft threat intelligence, all of which help your incident investigations piece together the attack story.
+
+As before, to use this data source you must enable the [Windows Security Events data connector](data-connectors-reference.md#windows-security-events-via-ama). If you have enabled the Security Events data source for UEBA, you will automatically begin receiving these new event types without having to take any additional action.
+
+It's likely that the inclusion of these new event types will result in the ingestion of somewhat more *Security Events* data, billed accordingly. Individual event IDs can't be enabled or disabled independently; you can only enable or disable the whole Security Events data set. You can, however, filter the event data at the source if you're using the new [AMA-based version of the Windows Security Events data connector](data-connectors-reference.md#windows-security-events-via-ama).
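To get a quick sense of whether these event types are already being ingested into your workspace, you could run a query like the following (a minimal sketch; the workspace ID is a placeholder):

```powershell
# Minimal sketch: count the UEBA-relevant Windows security events from the last day.
$query = @'
SecurityEvent
| where TimeGenerated > ago(1d)
| where EventID in (4624, 4625, 4648, 4672, 4688)
| summarize Events = count() by EventID
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```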
+
### Microsoft Sentinel Solution for SAP is now generally available

The Microsoft Sentinel Solution for SAP is now generally available (GA). [Learn about billing and offer details](/pricing/offers/microsoft-sentinel-sap-promo/).
Use the solution to:
## July 2022 -- [Sync user entities from your on-premises Active Directory with Microsoft Sentinel](#sync-user-entities-from-your-on-premises-active-directory-with-microsoft-sentinel)-- [Automation rules for alerts](#automation-rules-for-alerts)
+- [Sync user entities from your on-premises Active Directory with Microsoft Sentinel (Preview)](#sync-user-entities-from-your-on-premises-active-directory-with-microsoft-sentinel-preview)
+- [Automation rules for alerts (Preview)](#automation-rules-for-alerts-preview)
-### Sync user entities from your on-premises Active Directory with Microsoft Sentinel
+### Sync user entities from your on-premises Active Directory with Microsoft Sentinel (Preview)
Until now, you've been able to bring your user account entities from your Azure Active Directory (Azure AD) into the IdentityInfo table in Microsoft Sentinel, so that User and Entity Behavior Analytics (UEBA) can use that information to provide context and give insight into user activities, to enrich your investigations.
If you have Microsoft Defender for Identity, [enable and configure User and Enti
Learn more about the [requirements for using Microsoft Defender for Identity](/defender-for-identity/prerequisites) this way.
-### Automation rules for alerts
+### Automation rules for alerts (Preview)
In addition to their incident-management duties, [automation rules](automate-incident-handling-with-automation-rules.md) have a new, added function: they are the preferred mechanism for running playbooks built on the **alert trigger**.
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
To enable a zone resilient Azure Service Fabric managed cluster, you must includ
{ "apiVersion": "2021-05-01", "type": "Microsoft.ServiceFabric/managedclusters",
- "zonalResiliency": "true"
+ "properties": {
+ ...
+ "zonalResiliency": "true",
+ ...
+ }
} ```
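As a quick post-deployment check (a minimal sketch; the resource ID is a placeholder), you can read the property back with the generic Get-AzResource cmdlet:

```powershell
# Minimal sketch: confirm zonalResiliency is set on the managed cluster resource.
$clusterId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ServiceFabric/managedclusters/<cluster-name>"
(Get-AzResource -ResourceId $clusterId -ExpandProperties).Properties.zonalResiliency
```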
Requirements:
[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png [sf-multi-az-arch]: ./media/service-fabric-cross-availability-zones/sf-multi-az-topology.png
-[sfmc-multi-az-nodes]: ./media/how-to-managed-cluster-availability-zones/sfmc-multi-az-nodes.png
+[sfmc-multi-az-nodes]: ./media/how-to-managed-cluster-availability-zones/sfmc-multi-az-nodes.png
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 07/26/2022 Last updated : 08/08/2022 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Service Fabric Cluster Upgrade Version Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-version-azure.md
Using a supported target version information, you can use following PowerShell s
3) Invoke the API ```PowerShell $params = @{ "TargetVersion" = "<target version>"}
- Invoke-AzResourceAction -ResourceId -ResourceId <cluster resource id> -Parameters $params -Action listUpgradableVersions -Force
+ Invoke-AzResourceAction -ResourceId <cluster resource id> -Parameters $params -Action listUpgradableVersions -Force
``` Example:
service-fabric Service Fabric Tutorial Create Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-dotnet-app.md
Open **Views/Home/Index.cshtml**, the view specific to the Home controller. Rep
</div> <div class="row top-buffer" ng-repeat="vote in votes.data"> <div class="col-xs-8">
- <button class="btn btn-success text-left btn-block" ng-click="add(vote.key)">
+ <button class="btn btn-success text-left btn-block" ng-click="add(vote.Key)">
<span class="pull-left"> {{vote.key}} </span>
Open **Views/Home/Index.cshtml**, the view specific to the Home controller. Rep
</button> </div> <div class="col-xs-4">
- <button class="btn btn-danger pull-right btn-block" ng-click="remove(vote.key)">
+ <button class="btn btn-danger pull-right btn-block" ng-click="remove(vote.Key)">
<span class="glyphicon glyphicon-remove" aria-hidden="true"></span> Remove </button>
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent**
--- | --- | --- | --- | --- | ---
+[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0
[Rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 9.49.6395.1 | 5.1.7418.0 | 9.49.6395.1 | 5.1.7418.0 | 2.0.9248.0 [Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0 [Rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) | 9.47.6219.1 | 5.1.7127.0 | 9.47.6219.1 | 5.1.7127.0 | 2.0.9241.0 [Rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 9.46.6149.1 | 5.1.7029.0 | 9.46.6149.1 | 5.1.7030.0 | 2.0.9239.0
-[Rollup 58](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 9.45.6096.1 | 5.1.6952.0 | 9.45.6096.1 | 5.1.6952.0 | 2.0.9237.0
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (August 2022)
+
+### Update Rollup 63
+
+[Update rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) provides the following updates:
+
+**Update** | **Details**
+--- | ---
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for Oracle Linux 8.6 Linux distro.
+**VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.6 Linux distro.<br/><br/> Introduced the migration capability to move existing replications from classic to modernized experience for disaster recovery of VMware virtual machines, enabled using Azure Site Recovery. [Learn more](move-from-classic-to-modernized-vmware-disaster-recovery.md).
++
## Updates (July 2022)

### Update Rollup 62
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-performance.md
To determine whether most of your requests are metadata-centric, start by follow
- Check to see whether the application can be modified to reduce the number of metadata operations.
- Add a virtual hard disk (VHD) on the file share and mount the VHD from the client to perform file operations against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers and no writers. Because the file system is owned by the client rather than Azure Files, this allows metadata operations to be local. The setup offers performance similar to that of local, directly attached storage.
  - To mount a VHD on a Windows client, use the [Mount-DiskImage](/powershell/module/storage/mount-diskimage) PowerShell cmdlet (see the sketch after this list).
- - To mount a VHD on Linux, consult the documentation for your Linux distribution.
+ - To mount a VHD on Linux, consult the documentation for your Linux distribution.
+- If you're continuously hitting the metadata operations limit that a single Azure file share can accommodate (2,000 operations per file share), we suggest separating the file share into multiple file shares within the same storage account.
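A minimal sketch of the VHD approach from a Windows client (the UNC path is a placeholder for your storage account and share):

```powershell
# Minimal sketch: mount a VHD stored on an Azure file share so metadata operations stay local.
$vhdPath = "\\<storage-account>.file.core.windows.net\<share>\data.vhd"
Mount-DiskImage -ImagePath $vhdPath
# ... perform file operations against the mounted volume ...
Dismount-DiskImage -ImagePath $vhdPath
```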
### Cause 3: Single-threaded application
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
The following screenshot shows a finished Stream Analytics job. It highlights al
Azure Event Hubs is a big-data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters. To configure an event hub as an input for your job, select the **Event Hub** symbol. A tile appears in the diagram view, including a side pane for its configuration and connection.
+When connecting to your event hub in the no code editor, we recommend that you create a new consumer group (the default option). This helps keep the event hub from reaching its concurrent readers' limit. To understand more about consumer groups and whether you should select an existing consumer group or create a new one, see [Consumer groups](../event-hubs/event-hubs-features.md). If your event hub is in the Basic tier, you can only use the existing $Default consumer group. If your event hub is in the Standard or Premium tier, you can create a new consumer group.
-After you set up your Event Hubs credentials and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed. When Stream Analytics job detect the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
+ ![Consumer group selection while setting up Event Hub](./media/no-code-stream-processing/consumer-group-nocode.png)
+
+When connecting to the Event Hub, if you choose 'Managed Identity' as Authentication mode, then the Azure Event Hubs Data Owner role will be granted to the Managed Identity for the Stream Analytics job. To learn more about Managed Identity for Event Hub, see [Event Hubs Managed Identity](event-hubs-managed-identity.md).
+Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+
+ ![Authentication method is selected as Managed Identity](./media/no-code-stream-processing/msi-eh-nocode.png)
+
+After you set up your Event Hub's details and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed. When the Stream Analytics job detects the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
You can always edit the field names, or remove or change the data type, by selecting the three dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image.
The available data types are:
- **Record** - Nested object with multiple records - **String** - Text
+## Reference data inputs
+
+Reference data is either static or changes slowly over time. It's typically used to enrich the incoming stream and to do lookups in your job. For example, you might join data in the data stream input to data in the reference data, much as you would perform a SQL join to look up static values. For more information about reference data inputs, see [Using reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md).
+
+### ADLS Gen2 as reference data
+
+Reference data is modeled as a sequence of blobs in ascending order of the date/time specified in the blob name. Blobs can only be added to the end of the sequence by using a date/time greater than the one specified by the last blob in the sequence. Blobs are defined in the input configuration. For more information, see [Use reference data from Blob Storage for a Stream Analytics job](data-protection.md).
+
+First, you have to select your ADLS Gen2 account. To see details about each field, see the Azure Blob Storage section in [Azure Blob Storage reference data input](stream-analytics-use-reference-data.md).
+
+ ![Configure ADLS Gen2 as input in no code editor](./media/no-code-stream-processing/msi-eh-nocode.png)
+
+Then, upload a JSON array file; the fields in the file will be detected. Use this reference data to perform transformations with the streaming input data from the Event Hub.
+
+ ![Upload JSON for reference data](./media/no-code-stream-processing/blob-referencedata-upload-nocode.png)
+++
## Transformations

Streaming data transformations are inherently different from batch data transformations. Almost all streaming data has a time component, which affects any data preparation tasks involved.
Data Lake Storage Gen2 makes Azure Storage the foundation for building enterpris
Select **ADLS Gen2** as output for your Stream Analytics job and select the container where you want to send the output of the job. For more information about Azure Data Lake Gen2 output for a Stream Analytics job, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
+When connecting to ADLS Gen2, if you choose ‘Managed Identity’ as Authentication mode, then the Storage Blob Data Contributor role will be granted to the Managed Identity for the Stream Analytics job. To learn more about Managed Identity for ADLS Gen2, see [Storage Blob Managed Identity](blob-output-managed-identity.md). Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+
+ ![Managed identity for ADLS Gen2](./media/no-code-stream-processing/msi-adls-nocode.png)
+ ### Azure Synapse Analytics Azure Stream Analytics jobs can output to a dedicated SQL pool table in Azure Synapse Analytics and can process throughput rates up to 200MB/sec. It supports the most demanding real-time analytics and hot-path data processing needs for workloads such as reporting and dashboarding.
Azure Cosmos DB is a globally distributed database service that offers limitless
Select **CosmosDB** as output for your Stream Analytics job. For more information about Cosmos DB output for a Stream Analytics job, see [Azure Cosmos DB output from Azure Stream Analytics](azure-cosmos-db-output.md).
-## Data preview and errors
+When connecting to Azure Cosmos DB, if you choose ‘Managed Identity’ as Authentication mode, then the Contributor role will be granted to the Managed Identity for the Stream Analytics job. To learn more about Managed Identity for Cosmos DB, see [Cosmos DB Managed Identity](cosmos-db-managed-identity.md). Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+
+ ![Managed identity for Cosmos DB](./media/no-code-stream-processing/msi-cosmosdb-nocode.png)
+
+## Data preview, errors and metrics
The no code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data.
Runtime errors are warning/Error/Critical level errors. These errors are helpful
:::image type="content" source="./media/no-code-stream-processing/runtime-errors.png" alt-text="Screenshot showing the Runtime errors tab where you can select a timespan to filter error events." lightbox="./media/no-code-stream-processing/runtime-errors.png" :::
+### Metrics
+
+If the job is running, you can monitor its health by navigating to the **Metrics** tab. The four metrics shown by default are Watermark delay, Input events, Backlogged input events, and Output events. You can use these to understand whether events are flowing in and out of the job without any input backlog. You can select more metrics from the list. To understand all the metrics in detail, see [Stream Analytics metrics](stream-analytics-job-metrics.md).
+
+ ![Metrics for jobs created from no code editor](./media/no-code-stream-processing/metrics-nocode.png)
+ ## Start a Stream Analytics job
-Once you have configured Event Hubs, operations and Streaming outputs for the job, you Save and Start the job.
+You can save the job at any time while creating it. Once you have configured the Event Hub, transformations, and streaming outputs for the job, you can start the job.
+**Note**: While the no code editor is in Preview, the Azure Stream Analytics service is Generally Available.
:::image type="content" source="./media/no-code-stream-processing/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/no-code-stream-processing/no-code-save-start.png" :::
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
# Quickstart: Create a new Azure Machine Learning linked service in Synapse
-[!IMPORTANT]
-**The Azure ML integration is not currently supported in Synapse Workspaces with Data Exfiltration Protection.** If you are **not** using data exfiltration protection and want to connect to Azure ML using private endpoints, you can set up a managed AzureML private endpoint in your Synapse workspace. [Read more about managed private endpoints](../security/how-to-create-managed-private-endpoints.md)
+> **Important: note the following limitations:**
+> - **The Azure ML integration is not currently supported in Synapse Workspaces with Data Exfiltration Protection.** If you are **not** using data exfiltration protection and want to connect to Azure ML using private endpoints, you can set up a managed AzureML private endpoint in your Synapse workspace. [Read more about managed private endpoints](../security/how-to-create-managed-private-endpoints.md)
+> - **AzureML linked service is not supported with self hosted integration runtimes.** This applies to Synapse workspaces with and without Data Exfiltration Protection.
In this quickstart, you'll link an Azure Synapse Analytics workspace to an Azure Machine Learning workspace. Linking these workspaces allows you to leverage Azure Machine Learning from various experiences in Synapse.
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Title: Create an autoscale scaling plan for Azure Virtual Desktop
description: How to create an autoscale scaling plan to optimize deployment costs. Previously updated : 08/03/2022 Last updated : 08/08/2022
Autoscale lets you scale your session host virtual machines (VMs) in a host pool
> - Autoscale doesn't support scaling of ephemeral disks. > - Autoscale doesn't support scaling of generalized VMs. > - You can't use autoscale and [scale session hosts using Azure Automation and Azure Logic Apps](scaling-automation-logic-apps.md) on the same host pool. You must use one or the other.
+> - Autoscale is currently only available in the public cloud.
For best results, we recommend using autoscale with VMs you deployed with Azure Virtual Desktop Azure Resource Manager templates or first-party tools from Microsoft. >[!IMPORTANT]
->Deploying scaling plans with autoscale is currently limited to the following Azure regions:
+>Deploying scaling plans with autoscale is currently limited to the following Azure regions in the public cloud:
> > - Australia East > - Canada Central
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure virtual machine scale sets description: Lists Azure Policy built-in policy definitions for Azure virtual machine scale sets. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md
Some properties may only be changed to certain values if the VMs in the scale se
- **SKU Name**- If the new VM SKU is not supported on the hardware the scale set is currently on, you need to deallocate the VMs in the scale set before you modify the SKU name. For more information, see [how to resize an Azure VM](../virtual-machines/resize-vm.md). ## VM-specific updates
-Certain modifications may be applied to specific VMs instead of the global scale set properties. Currently, the only VM-specific update that is supported is to attach/detach data disks to/from VMs in the scale set. This feature is in preview. For more information, see the [preview documentation](https://github.com/Azure/vm-scale-sets/tree/master/preview/disk).
+Certain modifications may be applied to specific VMs instead of the global scale set properties. Currently, the only VM-specific update that is supported is to attach/detach data disks to/from VMs in the scale set. This feature is in preview. For more information, see the [preview documentation](https://github.com/Azure/vm-scale-sets/tree/master/z_deprecated/preview/disk).
## Scenarios
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
Track the state of the overall reservation through the following properties:
- `virtualMachinesAllocated` = List of VMs allocated against the Capacity Reservation and count towards consuming the capacity. These VMs are either *Running*, *Stopped* (*Allocated*), or in a transitional state such as *Starting* or *Stopping*. This list doesn't include the VMs that are in deallocated state, referred to as *Stopped* (*deallocated*).
- `virtualMachinesAssociated` = List of VMs associated with the Capacity Reservation. This list has all the VMs that have been configured to use the reservation, including the ones that are in deallocated state.
-The previous example will start with `capacity` as 2 and length of `virutalMachinesAllocated` and `virtualMachinesAssociated` as 0.
+The previous example will start with `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 0.
When a VM is then allocated against the Capacity Reservation, it will logically consume one of the reserved capacity instances: ![Capacity Reservation image 2.](./media/capacity-reservation-overview/capacity-reservation-2.jpg)
-The status of the Capacity Reservation will now show `capacity` as 2 and length of `virutalMachinesAllocated` and `virtualMachinesAssociated` as 1.
+The status of the Capacity Reservation will now show `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 1.
Allocations against the Capacity Reservation will succeed as long as the VMs have matching properties and there is at least one empty capacity instance.
Using our example, when a third VM is allocated against the Capacity Reservation
![Capacity Reservation image 3.](./media/capacity-reservation-overview/capacity-reservation-3.jpg)
-The `capacity` is 2 and the length of `virutalMachinesAllocated` and `virtualMachinesAssociated` is 3.
+The `capacity` is 2 and the length of `virtualMachinesAllocated` and `virtualMachinesAssociated` is 3.
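At any point in this walkthrough you could inspect the association list from PowerShell. This is a minimal sketch with placeholder names; it assumes the returned object mirrors the REST model's `virtualMachinesAssociated` property, and note that the `virtualMachinesAllocated` list additionally requires the instance view (the REST API's `$expand=instanceView` option):

```powershell
# Minimal sketch: count the VMs currently associated with a capacity reservation.
$reservation = Get-AzCapacityReservation `
  -ResourceGroupName "<rg>" `
  -ReservationGroupName "<reservation-group>" `
  -Name "<reservation-name>"
$reservation.VirtualMachinesAssociated.Count
```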
Now suppose the application scales down to the minimum of two VMs. Since VM 0 needs an update, it is chosen for deallocation. The reservation automatically shifts to this state:
Get started reserving Compute capacity. Check out our other related Capacity Res
- [Remove a VM](capacity-reservation-remove-vm.md) - [Associate a VM scale set - Flexible](capacity-reservation-associate-virtual-machine-scale-set-flex.md) - [Associate a VM scale set - Uniform](capacity-reservation-associate-virtual-machine-scale-set.md)-- [Remove a VM scale set](capacity-reservation-remove-virtual-machine-scale-set.md)
+- [Remove a VM scale set](capacity-reservation-remove-virtual-machine-scale-set.md)
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
virtual-machines Change Availability Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/change-availability-set.md
This article was last tested on 2/12/2019 using the [Azure Cloud Shell](https://
> [!WARNING] > This is just an example and in some cases it will need to be updated for your specific deployment.
->
+>
+> Make sure the disks are set to `detach` as the [delete](../delete.md) option. If they're set to `delete`, update the disks' delete option on the VMs before deleting the VMs (a PowerShell sketch follows this note).
+>
> If your VM is attached to a load balancer, you will need to update the script to handle that case. > > Some extensions may also need to be reinstalled after you finish this process.
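A minimal sketch of checking and updating the delete option before you run the script (the resource group and VM names are placeholders):

```powershell
# Minimal sketch: ensure the OS disk and data disks are detached, not deleted, with the VM.
$vm = Get-AzVM -ResourceGroupName "<rg>" -Name "<vm-name>"
$vm.StorageProfile.OsDisk.DeleteOption = 'Detach'
$vm.StorageProfile.DataDisks | ForEach-Object { $_.DeleteOption = 'Detach' }
Update-AzVM -ResourceGroupName "<rg>" -VM $vm
```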
virtual-machines Change Drive Letter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/change-drive-letter.md
First, you'll need to attach the data disk to the virtual machine. To do this us
## Temporarily move pagefile.sys to C drive 1. Connect to the virtual machine. 2. Right-click the **Start** menu and select **System**.
-3. In the left-hand menu, select **Advanced system settings**.
+3. In the left-hand menu, search for and select **View advanced system settings**.
4. In the **Performance** section, select **Settings**. 5. Select the **Advanced** tab. 6. In the **Virtual memory** section, select **Change**.
First, you'll need to attach the data disk to the virtual machine. To do this us
## Move pagefile.sys back to the temporary storage drive 1. Right-click the **Start** menu and select **System**
-2. In the left-hand menu, select **Advanced system settings**.
+2. In the left-hand menu, search for and select **View advanced system settings**.
3. In the **Performance** section, select **Settings**. 4. Select the **Advanced** tab. 5. In the **Virtual memory** section, select **Change**.
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
description: Configure your deployment control plane for the SAP deployment auto
Previously updated : 11/17/2021 Last updated : 8/8/2022
virtual-machines Automation Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-webapp.md
Title: Configure a Deployer UX Web Application for SAP Deployment Automation Framework
+ Title: Configure a Deployer Web Application for SAP Deployment Automation Framework
description: Configure a web app as a part of the control plane to help creating and deploying SAP workload zones and systems on Azure. Previously updated : 06/21/2022 Last updated : 08/1/2022
-# Configure the Control Plane UX Web Application
+# Configure the Control Plane Web Application
-As a part of the SAP automation framework control plane, you can optionally create an interactive web application that will assist you in creating the required configuration files and deploying SAP workload zones and systems using Azure DevOps Pipelines.
+As a part of the SAP automation framework control plane, you can optionally create an interactive web application that will assist you in creating the required configuration files and deploying SAP workload zones and systems using Azure Pipelines.
:::image type="content" source="./media/automation-deployment-framework/webapp-front-page.png" alt-text="Web app front page":::
+> [!IMPORTANT]
+> Control Plane Web Application is currently in PREVIEW and not yet available in the main branch.
+ ## Create an app registration If you would like to use the web app, you must first create an app registration for authentication purposes. Open the Azure Cloud Shell and execute the following commands:
rm ./manifest.json
```
-## Deploy via Azure DevOps (pipelines)
+## Deploy via Azure Pipelines
For full instructions on setting up the web app using Azure DevOps, see [Use SAP Deployment Automation Framework from Azure DevOps Services](automation-configure-devops.md)
For full instructions on setting up the web app using the Azure CLI, see [Deploy
## Using the web app
-The web app allows you to create SAP workload zone objects and system infrastructure objects. These are essentially another representation of the Terraform configuration file.
+The web app allows you to create SAP workload zone objects and system infrastructure objects. These objects are essentially another representation of the Terraform configuration file.
If deploying using Azure Pipelines, you have ability to deploy these workload zones and system infrastructures right from the web app. If deploying using the Azure CLI, you can download the parameter file for any landscape or system object you create, and use that in your command line deployments.
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
cd sap-automation/deploy/scripts
The script will install Terraform and Ansible and configure the deployer.
-## Deploy the web app software
+## Deploy the Control Plane Web Application
+
+> [!IMPORTANT]
+> Control Plane Web Application is currently in PREVIEW and not yet available in the main branch.
If you would like to use the web app, follow the steps below. If not, ignore this section.
virtual-network Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-overview.md
This table lists the methods that you can use to create a virtual network and su
## Network security groups
-A [network security group (NSG)](../virtual-network/virtual-network-vnet-plan-design-arm.md) contains a list of Access Control List (ACL) rules that allow or deny network traffic to subnets, NICs, or both. NSGs can be associated with either subnets or individual NICs connected to a subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VMs in that subnet. Traffic to an individual NIC can be restricted by associating an NSG directly to a NIC.
+A [network security group (NSG)](../virtual-network/network-security-groups-overview.md) contains a list of Access Control List (ACL) rules that allow or deny network traffic to subnets, NICs, or both. NSGs can be associated with either subnets or individual NICs connected to a subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VMs in that subnet. Traffic to an individual NIC can be restricted by associating an NSG directly to a NIC.
NSGs contain two sets of rules, inbound and outbound. The priority for a rule must be unique within each set.
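As a minimal sketch of these concepts in Azure PowerShell (all resource names, the location, and the address prefix are hypothetical placeholders; the address prefix must match the subnet's existing range):

```powershell
# Minimal sketch: an NSG with one inbound rule, associated with a subnet so the
# rule applies to every VM connected to that subnet.
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-https" -Access Allow -Protocol Tcp `
  -Direction Inbound -Priority 100 -SourceAddressPrefix Internet -SourcePortRange * `
  -DestinationAddressPrefix * -DestinationPortRange 443

$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "<rg>" -Location "eastus" `
  -Name "my-nsg" -SecurityRules $rule

$vnet = Get-AzVirtualNetwork -ResourceGroupName "<rg>" -Name "<vnet-name>"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet-name>" `
  -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg | Set-AzVirtualNetwork
```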
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/26/2022 Last updated : 08/08/2022
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Front Door.
|Bot300300|General purpose HTTP clients and SDKs| |Bot300400|Service agents| |Bot300500|Site health monitoring services|
-|Bot300600|Unknown bots detected by threat intelligence|
+|Bot300600|Unknown bots detected by threat intelligence<br />(This rule also includes IP addresses matched to the Tor network.)|
|Bot300700|Other bots|
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
You can monitor Web Application Firewall resources using logs. You can save perf
You can use different types of logs in Azure to manage and troubleshoot application gateways. You can access some of these logs through the portal. All logs can be extracted from Azure Blob storage and viewed in different tools, such as [Azure Monitor logs](../../azure-monitor/insights/azure-networking-analytics.md), Excel, and Power BI. You can learn more about the different types of logs from the following list: * **Activity log**: You can use [Azure activity logs](../../azure-monitor/essentials/activity-log.md) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal.
-* **Access Resource log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 300 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.
+* **Access Resource log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. This log contains individual records for each request and associates each request with the unique Application Gateway instance that processed it. Unique Application Gateway instances can be identified by the instanceId property (see the query sketch after this list).
* **Performance Resource log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, failed request count, and healthy and unhealthy back-end instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](../../application-gateway/application-gateway-metrics.md) for performance data.
* **Firewall Resource log**: You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall.
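As a minimal sketch of querying the access log from a Log Analytics workspace (the workspace ID is a placeholder, and the `instanceId_s` column name is assumed from the AzureDiagnostics string-suffix convention):

```powershell
# Minimal sketch: count access-log records per Application Gateway instance.
# 'instanceId_s' is an assumed column name; adjust it to match your workspace schema.
$query = @'
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| summarize Requests = count() by Resource, instanceId_s
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```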