Updates from: 08/25/2021 03:28:17
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Save the files you changed: *TrustFrameworkBase.xml*, and *TrustFrameworkExtensi
## Test the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
+1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
To return the promo code claim back to the relying party application, add an out
## Test the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
+1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkExtensions.xml* and *SignUpOrSignin.xml*.
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-change-policy.md
In Azure Active Directory B2C (Azure AD B2C), you can enable users who are signe
## Upload and test the policy 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your tenant.
-3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-4. Select **Identity Experience Framework**.
-5. On the Custom Policies page, click **Upload Policy**.
-6. Select **Overwrite the policy if it exists**, and then search for and select the *TrustFrameworkExtensions.xml* file.
-7. Click **Upload**.
-8. Repeat steps 5 through 7 for the relying party file, such as *ProfileEditPasswordChange.xml*.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Select **Identity Experience Framework**.
+1. On the Custom Policies page, click **Upload Policy**.
+1. Select **Overwrite the policy if it exists**, and then search for and select the *TrustFrameworkExtensions.xml* file.
+1. Click **Upload**.
+1. Repeat steps 5 through 7 for the relying party file, such as *ProfileEditPasswordChange.xml*.
### Run the policy
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-reset-policy.md
The self-service password reset experience can be configured for the **Sign-in (
To enable self-service password reset for the sign-up or sign-in user flow: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. Select the **Directories + Subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Select **User flows**. 1. Select a sign-up or sign-in user flow (of type **Recommended**) that you want to customize.
Your application might need to detect whether the user signed in via the Forgot
### Upload the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. Select the **Directories + Subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the two policy files that you changed in the following order:
active-directory-b2c Add Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-in-policy.md
If you haven't already done so, [register a web application in Azure Active Dire
To add sign-in policy: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. Select the **Directories + Subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**. 1. On the **Create a user flow** page, select the **Sign in** user flow.
The **SelfAsserted-LocalAccountSignin-Email** technical profile is a [self-asser
## Update and test your policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directory + subscription** filter in the top menu and choosing the directory that contains your Azure AD tenant.
+1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy file that you changed, *TrustFrameworkExtensions.xml*.
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
If you haven't already done so, [register a web application in Azure Active Dire
The sign-up and sign-in user flow handles both sign-up and sign-in experiences with a single configuration. Users of your application are led down the right path depending on the context. 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+1. Select the **Directories + Subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**.
active-directory-b2c Add Web Api Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-web-api-application.md
To register an application in your Azure AD B2C tenant, you can use our new unif
#### [App registrations](#tab/app-reg-ga/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
+1. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. Select **App registrations**, and then select **New registration**. 1. Enter a **Name** for the application. For example, *webapi1*.
If you have an application that implements the implicit grant flow, for example
#### [Applications (Legacy)](#tab/applications-legacy/) 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your tenant.
-3. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-4. Select **Applications (Legacy)**, and then select **Add**.
-5. Enter a name for the application. For example, *webapi1*.
-6. For **Include web app/ web API** and **Allow implicit flow**, select **Yes**.
-7. For **Reply URL**, enter an endpoint where Azure AD B2C should return any tokens that your application requests. In your production application, you might set the reply URL to a value such as `https://localhost:44332`. For testing purposes, set the reply URL to `https://jwt.ms`.
-8. For **App ID URI**, enter the identifier used for your web API. The full identifier URI including the domain is generated for you. For example, `https://contosotenant.onmicrosoft.com/api`.
-9. Click **Create**.
-10. On the properties page, record the application ID that you'll use when you configure the web application.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Select **Applications (Legacy)**, and then select **Add**.
+1. Enter a name for the application. For example, *webapi1*.
+1. For **Include web app/ web API** and **Allow implicit flow**, select **Yes**.
+1. For **Reply URL**, enter an endpoint where Azure AD B2C should return any tokens that your application requests. In your production application, you might set the reply URL to a value such as `https://localhost:44332`. For testing purposes, set the reply URL to `https://jwt.ms`.
+1. For **App ID URI**, enter the identifier used for your web API. The full identifier URI including the domain is generated for you. For example, `https://contosotenant.onmicrosoft.com/api`.
+1. Click **Create**.
+1. On the properties page, record the application ID that you'll use when you configure the web application.
* * *
active-directory-b2c Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/age-gating.md
Azure AD B2C uses the information that the user enters to identify whether they'
To use age gating in a user flow, you need to configure your tenant to have extra properties. 1. Use [this link](https://portal.azure.com/?Microsoft_AAD_B2CAdmin_agegatingenabled=true#blade/Microsoft_AAD_B2CAdmin/TenantManagementMenuBlade/overview) to try the age gating preview.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directory + subscription** filter in the top menu. Select the directory that contains your tenant.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. Select **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Properties** for your tenant in the menu on the left. 1. Under **Age gating**, select **Configure**.
active-directory-b2c Analytics With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/analytics-with-application-insights.md
When you use Application Insights, consider the following:
When you use Application Insights with Azure AD B2C, all you need to do is create a resource and get the instrumentation key. For information, see [Create an Application Insights resource](../azure-monitor/app/create-new-resource.md). 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Make sure you're using the directory that has your Azure subscription. Select the **Directory + subscription** filter in the top menu and choose the directory that contains your Azure subscription. This tenant isn't your Azure AD B2C tenant.
+1. Make sure you're using the directory that has your Azure subscription, and not your Azure AD B2C directory. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find the Azure AD directory that has your subscription in the **Directory name** list, and then select **Switch**.
1. Choose **Create a resource** in the upper-left corner of the Azure portal, and then search for and select **Application Insights**. 1. Select **Create**. 1. For **Name**, enter a name for the resource.
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
Your final configuration file should look like the following JSON:
## Step 6: Run the sample web app 1. Build and run the project.
-1. Browse to [https://localhost:5000](https://localhost:5000).
+1. Browse to `https://localhost:5000`.
1. Complete the sign-up or sign-in process. After successful authentication, you'll see your display name in the navigation bar. To view the claims that the Azure AD B2C token returns to your app, select **TodoList**.
active-directory-b2c Configure Authentication Sample Wpf Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-wpf-desktop-app.md
# Configure authentication in a sample WPF desktop application using Azure Active Directory B2C
-This article uses a sample [WPF desktop](/visualstudio/designers/getting-started-with-wpf.md) application to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your desktop apps.
+This article uses a sample [WPF desktop](/visualstudio/designers/getting-started-with-wpf) application to illustrate how to add Azure Active Directory B2C (Azure AD B2C) authentication to your desktop apps.
## Overview
public static string ApiEndpoint = "https://contoso.azurewebsites.net/hello";
## Step 6: Run and test the desktop app
-1. [Restore the NuGet packages](/nuget/consume-packages/package-restore.md).
+1. [Restore the NuGet packages](/nuget/consume-packages/package-restore).
1. Press **F5** to build and run the sample. 1. Select **Sign In**. Then sign up or sign in with your Azure AD B2C local or social account.
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Follow these steps to create a Front Door for your Azure AD B2C tenant. For more
The frontend host is the domain name used by your application. When you create a Front Door, the default frontend host is a subdomain of `azurefd.net`.
-Azure Front Door provides the option of associating a custom domain with the frontend host. With this option, you associate the Azure AD B2C user interface with a custom domain in your URL instead of a Front Door owned domain name. For example, https://login.contoso.com.
+Azure Front Door provides the option of associating a custom domain with the frontend host. With this option, you associate the Azure AD B2C user interface with a custom domain in your URL instead of a Front Door owned domain name. For example, `https://login.contoso.com`.
To add a frontend host, follow these steps:
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml-options.md
The following example demonstrates an authorization request with **AllowCreate**
</samlp:AuthnRequest> ```
+### Force authentication
+
+You can force the external SAML IDP to prompt the user for authentication by passing the `ForceAuthN` property in the SAML authentication request. Your identity provider must also support this property.
+
+The `ForceAuthN` property is a Boolean `true` or `false` value. By default, Azure AD B2C sets the ForceAuthN value to `false`. You can change this behavior by setting ForceAuthN to `true` so that when there is a valid session, the initiating request forces authentication (for example, by sending `prompt=login` in the OpenID Connect request).
+
+The following example shows the `ForceAuthN` property set to `true`:
+
+```xml
+<Metadata>
+ ...
+ <Item Key="ForceAuthN">true</Item>
+ ...
+</Metadata>
+```
+
+The following example shows the `ForceAuthN` property in an authorization request:
+
+```xml
+<samlp:AuthnRequest AssertionConsumerServiceURL="https://..." ...
+ ForceAuthN="true">
+ ...
+</samlp:AuthnRequest>
+```
+ ### Include authentication context class references A SAML authorization request may contain an **AuthnContext** element, which specifies the context of an authorization request. The element can contain an authentication context class reference, which tells the SAML identity provider which authentication mechanism to present to the user.
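For illustration only (this sketch doesn't come from the article), an authentication request carrying an authentication context class reference might look like the following, assuming the standard password-protected-transport class:

```xml
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    AssertionConsumerServiceURL="https://...">
  <!-- Tells the identity provider which authentication mechanism to present to the user -->
  <samlp:RequestedAuthnContext Comparison="exact">
    <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
  </samlp:RequestedAuthnContext>
</samlp:AuthnRequest>
```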
The following SAML authorization request contains the authentication context cla
## Include custom data in the authorization request
-You can optionally include protocol message extension elements that are agreed to by both Azure AD B2C and your identity provider. The extension is presented in XML format. You include extension elements by adding XML data inside the CDATA element `<![CDATA[Your IDP metadata]]>`. Check your identity provider's documentation to see if the extensions element is supported.
+You can optionally include protocol message extension elements that are agreed to by both Azure AD B2C and your identity provider. The extension is presented in XML format. You include extension elements by adding XML data inside the CDATA element `<![CDATA[Your Custom XML]]>`. Check your identity provider's documentation to see if the extensions element is supported.
The following example illustrates the use of extension data:
The following example illustrates the use of extension data:
</Metadata> ```
+> [!NOTE]
+> Per the SAML specification, the extension data must be namespace-qualified XML (for example, 'urn:ext:custom' shown in the sample above), and it must not be one of the SAML-specific namespaces.
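As a sketch only (the metadata key name below is a placeholder, not a key taken from the article), namespace-qualified extension XML wrapped in a CDATA section might look like this:

```xml
<Metadata>
  <!-- "SamlMessageExtensions" is a placeholder key name used for illustration -->
  <Item Key="SamlMessageExtensions"><![CDATA[
    <ext:SessionInfo xmlns:ext="urn:ext:custom">
      <ext:SessionType>Federated</ext:SessionType>
    </ext:SessionInfo>
  ]]></Item>
</Metadata>
```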
+ When using the SAML protocol message extension, the SAML response will look like the following example: ```xml
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **OutputClaimsTransformations** element may contain a collection of **Output
| IncludeKeyInfo | No | Indicates whether the SAML authentication request contains the public key of the certificate when the binding is set to `HTTP-POST`. Possible values: `true` or `false`. | | IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. | |SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
+|ForceAuthN| No| Passes the ForceAuthN value in the SAML authentication request to determine if the external SAML IDP will be forced to prompt the user for authentication. By default, Azure AD B2C sets the ForceAuthN value to `false`. You can change this behavior by setting ForceAuthN to `true` so that when there is a valid session, the initiating request forces authentication (for example, by sending `prompt=login` in the OpenID Connect request). Possible values: `true` or `false`.|
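A minimal sketch of where this metadata item sits in a SAML identity provider technical profile (the technical profile ID and display name below are illustrative):

```xml
<TechnicalProfile Id="Contoso-SAML2">
  <DisplayName>Contoso SAML identity provider</DisplayName>
  <Protocol Name="SAML2" />
  <Metadata>
    <!-- Force the external SAML IdP to prompt the user for credentials -->
    <Item Key="ForceAuthN">true</Item>
  </Metadata>
</TechnicalProfile>
```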
+ ## Cryptographic keys
Example:
See the following articles for examples of working with SAML identity providers in Azure AD B2C: - [Add ADFS as a SAML identity provider using custom policies](identity-provider-adfs.md)-- [Sign in by using Salesforce accounts via SAML](identity-provider-salesforce-saml.md)
+- [Sign in by using Salesforce accounts via SAML](identity-provider-salesforce-saml.md)
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Following the steps below will delete your existing customappsso job and create
10. Run the command below to create a new provisioning job that has the latest service fixes. `POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs`
- `{ templateId: "scim" }`
+ `{ "templateId": "scim" }`
11. In the results of the last step, copy the full "ID" string that begins with "scim". Optionally, reapply your old attribute-mappings by running the command below, replacing [new-job-id] with the new job ID you copied, and entering the JSON output from step #7 as the request body.
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 08/10/2021 Last updated : 08/24/2021
After the ECMA Connector Host schema mapping has been configured, start the serv
| Error | Resolution | | -- | -- | | Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache\8b514472-c18a-4641-9a44-732c296534e8\Microsoft.IAM.Connector.GenericSql.dll' or one of its dependencies. Access is denied. | Ensure that the network service account has 'full control' permissions over the cache folder. |
-| Invalid LDAP style of object's DN. DN: username@domain.com" | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host.|
+| Invalid LDAP style of object's DN. DN: username@domain.com" | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
## Understand incoming SCIM requests
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Previously updated : 03/31/2021 Last updated : 08/24/2021
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout can be integrated with hybrid deployments that use password hash s
When using [pass-through authentication](../hybrid/how-to-connect-pta.md), the following considerations apply:
-* The Azure AD lockout threshold is **less** than the AD DS account lockout threshold. Set the values so that the AD DS account lockout threshold is at least two or three times longer than the Azure AD lockout threshold.
+* The Azure AD lockout threshold is **less** than the AD DS account lockout threshold. Set the values so that the AD DS account lockout threshold is at least two or three times greater than the Azure AD lockout threshold.
* The Azure AD lockout duration must be set longer than the AD DS reset account lockout counter after duration. The Azure AD duration is set in seconds, while the AD duration is set in minutes.
-For example, if you want your Azure AD counter to be higher than AD DS, then Azure AD would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds).
For example, if you want your Azure AD smart lockout duration to be longer than AD DS, then Azure AD would be 120 seconds (2 minutes) while your on-premises AD is set to 1 minute (60 seconds). If you want your Azure AD lockout threshold to be 5, then you want your on-premises AD lockout threshold to be 10. This configuration ensures that smart lockout prevents your on-premises AD accounts from being locked out by brute force attacks on your Azure AD accounts.
> [!IMPORTANT] > Currently, an administrator can't unlock the users' cloud accounts if they have been locked out by the Smart Lockout capability. The administrator must wait for the lockout duration to expire. However, the user can unlock by using self-service password reset (SSPR) from a trusted device or location.
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-support-help-options.md
Previously updated : 03/09/2021 Last updated : 08/19/2021
If you need help with one of the Microsoft Authentication Libraries (MSAL), open
| MSAL for iOS and macOS| https://github.com/AzureAD/microsoft-authentication-library-for-objc/issues | | MSAL Java | https://github.com/AzureAD/microsoft-authentication-library-for-java/issues | | MSAL.js | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues |
-|MSAL.NET| https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues |
+| MSAL.NET| https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues |
| MSAL Node | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues | | MSAL Python | https://github.com/AzureAD/microsoft-authentication-library-for-python/issues | | MSAL React | https://github.com/AzureAD/microsoft-authentication-library-for-js/issues |
-## Submit feedback on Azure Feedback
-
-<div class='icon is-large'>
- <img alt='UserVoice' src='https://docs.microsoft.com/media/logos/logo-uservoice.svg'>
-</div>
-
-To request new features, post them on Azure Feedback. Share your ideas for making the Microsoft identity platform work better for the applications you develop.
-
-| Service | Azure Feedback URL |
-|-||
-| Azure Active Directory | https://feedback.azure.com/forums/169401-azure-active-directory |
-| Azure Active Directory - Developer experiences | https://feedback.azure.com/forums/169401-azure-active-directory?category_id=164757 |
-| Azure Active Directory - Authentication | https://feedback.azure.com/forums/169401-azure-active-directory?category_id=167256 |
- ## Stay informed of updates and new releases <div class='icon is-large'>
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-configure-publisher-domain.md
To set your app's publisher domain, follow these steps.
If your app is registered in a tenant, you'll see two tabs to select from: **Select a verified domain** and **Verify a new domain**.
-If your app isn't registered in a tenant, you'll only see the option to verify a new domain for your application.
+If your domain isn't registered in the tenant, you'll only see the option to verify a new domain for your application.
### To verify a new domain for your app
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
Each code sample includes a _README.md_ file that describes how to build the pro
These samples show how to write a single-page application secured with Microsoft identity platform. These samples use one of the flavors of MSAL.js. > [!div class="mx-tdCol2BreakAll"]
-> | Language/<br/>Platform | Code sample | Description | Auth libraries | Auth flow |
-> | - | -- | | - | -- |
-> | Angular | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial) | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Call .NET Core web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage & App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Proof of Possession (PoP)|
-> | Blazor WebAssembly | [GitHub repo](https://github.com/Azure-Samples/ms-identity-blazor-wasm) | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/MyOrg/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/B2C/README.md)<br/>&#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-graph-user/Call-MSGraph/README.md)<br/>&#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/Deploy-to-Azure/README.md) | MSAL.js | Auth code flow (with PKCE) |
-> | JavaScript | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-tutorial) | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Call Node.js web API via OBO & CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Conditional Access (CA) |
-> | React | [GitHub repo](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial) | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Call Node.js web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage & App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploy to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Auth code flow (with PKCE)<br/>&#8226; On-behalf-of (OBO) flow<br/>&#8226; Conditional Access (CA)<br/>&#8226; Proof of Possession (PoP) |
+> | Language/<br/>Platform | Code sample(s) <br/>on GitHub | Auth<br/> libraries | Auth flow |
+> | - | -- | - | -- |
+> | Angular | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call .NET Core web API](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call .NET Core web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Call .NET Core web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/7-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/4-Deployment/README.md)| MSAL Angular | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Proof of Possession (PoP)|
+> | Blazor WebAssembly | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/MyOrg/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-OIDC/B2C/README.md)<br/>&#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/WebApp-graph-user/Call-MSGraph/README.md)<br/>&#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-blazor-wasm/blob/main/Deploy-to-Azure/README.md) | MSAL.js | Authorization code with PKCE |
+> | JavaScript | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/1-call-api/README.md)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/3-Authorization-II/2-call-api-b2c/README.md)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph/README.md)<br/>&#8226; [Call Node.js web API via OBO and CA](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/2-call-api-api-c)| MSAL.js | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access (CA) |
+> | React | &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/1-sign-in/README.md)<br/>&#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/1-Authentication/2-sign-in-b2c/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/2-Authorization-I/1-call-graph/README.md)<br/>&#8226; [Call Node.js web API](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api)<br/>&#8226; [Call Node.js web API (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c)<br/>&#8226; [Call Microsoft Graph via OBO](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/1-call-api-obo/README.md)<br/>&#8226; [Call Node.js web API using PoP](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/6-AdvancedScenarios/2-call-api-pop/README.md)<br/>&#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/1-call-api-roles/README.md)<br/>&#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl/2-call-api-groups/README.md)<br/>&#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/1-deploy-storage/README.md)<br/>&#8226; [Deploy to Azure Static Web Apps](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/4-Deployment/2-deploy-static/README.md)| MSAL React | &#8226; Authorization code with PKCE<br/>&#8226; On-behalf-of (OBO) <br/>&#8226; Conditional Access (CA)<br/>&#8226; Proof of Possession (PoP) |
## Web applications The following samples illustrate web applications that sign in users. Some samples also demonstrate the application calling Microsoft Graph, or your own web API with the user's identity. > [!div class="mx-tdCol2BreakAll"]
-> | Language/<br/>Platform | Code sample<br/>on GitHub | Description | Authentication libraries used | Authentication flow |
-> | - | -- | | - | -- |
-> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2) | ASP.NET Core Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/README.md) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/1-5-B2C/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md) <br/> &#8226; [Customize token cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-2-TokenCache/README.md) <br/> &#8226; [Call Graph (multi-tenant)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md) <br/> &#8226; [Call Azure REST APIs](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/3-WebApp-multi-APIs/README.md) <br/> &#8226; [Protect web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-1-MyOrg/README.md) <br/> &#8226; [Protect web API (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-2-B2C/README.md) <br/> &#8226; [Protect multi-tenant web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-3-AnyOrg/Readme.md) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md) <br/> &#8226; [Deploy to Azure Storage & App Service](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/6-Deploy-to-Azure/README.md) | &#8226; [MSAL.NET](https://aka.ms/msal-net) <br/> &#8226; [Microsoft.Identity.Web](https://aka.ms/microsoft-identity-web) | &#8226; [OIDC flow](./v2-protocols-oidc.md) <br/> &#8226; [Auth code flow](./v2-oauth2-auth-code-flow.md) <br/> &#8226; [On-Behalf-Of (OBO) flow](./v2-oauth2-on-behalf-of-flow.md) |
-> | Blazor | [GitHub repo](https://github.com/Azure-Samples/ms-identity-blazor-server/) | Blazor Server Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/MyOrg) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/B2C) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/MyOrg) <br/> &#8226; [Call web API (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/B2C) | MSAL.NET | |
-> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | [Advanced Token Cache Scenarios](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | &#8226; [MSAL.NET](https://aka.ms/msal-net) <br/> &#8226; [Microsoft.Identity.Web](https://aka.ms/microsoft-identity-web) | [On-Behalf-Of (OBO) flow](./v2-oauth2-on-behalf-of-flow.md) |
-> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | [Use the Conditional Access auth context to perform step\-up authentication ](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | &#8226; [MSAL.NET](https://aka.ms/msal-net) <br/> &#8226; [Microsoft.Identity.Web](https://aka.ms/microsoft-identity-web) | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | ASP.NET Core|[GitHub repo](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | [Active Directory FS to Azure AD migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | [MSAL.NET](https://aka.ms/msal-net) | |
-> | ASP.NET |[GitHub repo](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | [MSAL.NET](https://aka.ms/msal-net) | |
-> | ASP.NET |[GitHub repo](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [MSAL.NET](https://aka.ms/msal-net) | |
-> | ASP.NET |[GitHub repo](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [Admin Restricted Scopes <br/> &#8226; Sign in users <br/> &#8226; call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [MSAL.NET](https://aka.ms/msal-net) | |
-> | ASP.NET |[GitHub repo](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) | Microsoft Graph Training Sample | [MSAL.NET](https://aka.ms/msal-net) | |
-> | Java </p> Spring |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial) | Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java <br/> AAD Boot Starter | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Java </p> Servlets |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication) | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Java |[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapp) | Sign in users, call Microsoft Graph | MSAL Java | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Java </p> Spring|[GitHub repo](https://github.com/Azure-Samples/ms-identity-java-webapi) | Sign in users & call Microsoft Graph via OBO </p> &#8226; web API | MSAL Java | &#8226; [Auth code flow](./v2-oauth2-auth-code-flow.md) <br/> &#8226; [On-Behalf-Of (OBO) flow](./v2-oauth2-on-behalf-of-flow.md) |
-> | Node.js </p> Express |[GitHub repo](https://github.com/Azure-Samples/ms-identity-node) | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | MSAL Node | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | Flask Series <br/> &#8226; Sign in users <br/> &#8226; Sign in users (B2C) <br/> &#8226; Call Microsoft Graph <br/> &#8226; Deploy to Azure App Service | MSAL Python | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Python </p> Django |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-django-tutorial) | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Python </p> Flask |[GitHub repo](https://github.com/Azure-Samples/ms-identity-python-webapp) | Flask standalone sample <br/> [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) | MSAL Python | [Auth code flow](./v2-oauth2-auth-code-flow.md) |
-> | Ruby |[GitHub repo](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | Graph Training <br/> &#8226; [Sign in and Microsoft Graph](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | | |
-
-## Desktop and mobile public client apps
-
-The following samples show public client applications (desktop or mobile applications) that access the Microsoft Graph API, or your own web API in the name of a user. Apart from the *Desktop (Console) with WAM* sample, all these client applications use the Microsoft Authentication Library (MSAL).
-
-| Client application | Platform | Flow/grant | Calls Microsoft Graph | Calls an ASP.NET Core web API |
-| --- | --- | --- | --- | --- |
-| Desktop tutorial (.NET Core) - Optionally using:</p>- the cross platform token cache</p>- custom web UI | .NET/C# | [Authorization code](msal-authentication-flows.md#authorization-code) | [ms-identity-dotnet-desktop-tutorial](https://github.com/azure-samples/ms-identity-dotnet-desktop-tutorial) | |
-| Desktop (WPF) | .NET desktop/C# | [Authorization code](msal-authentication-flows.md#authorization-code) | [dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | [dotnet-native-aspnetcore-v2](https://aka.ms/msidentity-aspnetcore-webapi) |
-| Desktop (Console) | .NET/C# (Desktop) | [Integrated Windows Authentication](msal-authentication-flows.md#integrated-windows-authentication) | [dotnet-iwa-v2](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | |
-| Desktop (Console) | .NET/C# (Desktop) | | | |
-| Desktop (Console) <br> Use certificates instead of secrets | .NET/C# (Desktop) | [Authorization code](msal-authentication-flows.md#authorization-code) | [active-directory-dotnetcore-daemon-v2](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph#variation-daemon-application-using-client-credentials-with-certificates) | [active-directory-dotnetcore-daemon-v2](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi#variation-daemon-application-using-client-credentials-with-certificates) |
-| Desktop (Console) | Java | [Integrated Windows Authentication](msal-authentication-flows.md#integrated-windows-authentication) | [ms-identity-java-desktop](https://github.com/Azure-Samples/ms-identity-java-desktop/) | |
-| Desktop (Console) | .NET/C# (Desktop) | [Username/password](msal-authentication-flows.md#usernamepassword) | [dotnetcore-up-v2](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | |
-| Desktop (Console) with WAM | ![This is the logo for .NET/C# (Desktop)](media/sample-v2-code/logo_NETcore.png) | Interactive with [Web Account Manager](/windows/uwp/security/web-account-manager) (WAM) |[dotnet-native-uwp-wam](https://github.com/azure-samples/active-directory-dotnet-native-uwp-wam) | |
-| Desktop (Console) | Java | [Username/password](msal-authentication-flows.md#usernamepassword) | [ms-identity-java-desktop](https://github.com/Azure-Samples/ms-identity-java-desktop/) | |
-| Desktop (Console) | Python | [Username/password](msal-authentication-flows.md#usernamepassword) | [ms-identity-python-desktop](https://github.com/Azure-Samples/ms-identity-python-desktop) | |
-| Desktop (Electron) | Node.js | [Authorization code](msal-authentication-flows.md#authorization-code) | [ms-identity-javascript-nodejs-desktop](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | |
-| Mobile (Android, iOS, UWP) | .NET/C# (Xamarin) | [Authorization code](msal-authentication-flows.md#authorization-code) | [xamarin-native-v2](https://github.com/azure-samples/active-directory-xamarin-native-v2) | |
-| Mobile (iOS) | iOS/Objective-C or Swift | [Authorization code](msal-authentication-flows.md#authorization-code) | [ios-swift-objc-native-v2](https://github.com/azure-samples/active-directory-ios-swift-native-v2) </p> [ios-native-nxoauth2-v2](https://github.com/azure-samples/active-directory-ios-native-nxoauth2-v2) | |
-| Desktop (macOS) | macOS | [Authorization code](msal-authentication-flows.md#authorization-code) |[macOS-swift-objc-native-v2](https://github.com/Azure-Samples/ms-identity-macOS-swift-objc) | |
-| Mobile (Android-Java) | Android | [Authorization code](msal-authentication-flows.md#authorization-code) | [android-Java](https://github.com/Azure-Samples/ms-identity-android-java) | |
-| Mobile (Android-Kotlin) | Android | [Authorization code](msal-authentication-flows.md#authorization-code) | [android-Kotlin](https://github.com/Azure-Samples/ms-identity-android-kotlin) | |
-
-## Daemon applications
+> | Language/<br/>Platform | Code sample(s)<br/> on GitHub | Auth<br/> libraries | Auth flow |
+> | - | | - | -- |
+> | ASP.NET Core| ASP.NET Core Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/README.md) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/1-5-B2C/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md) <br/> &#8226; [Customize token cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-2-TokenCache/README.md) <br/> &#8226; [Call Graph (multi-tenant)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md) <br/> &#8226; [Call Azure REST APIs](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/3-WebApp-multi-APIs/README.md) <br/> &#8226; [Protect web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-1-MyOrg/README.md) <br/> &#8226; [Protect web API (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-2-B2C/README.md) <br/> &#8226; [Protect multi-tenant web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-3-AnyOrg/Readme.md) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md) <br/> &#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/6-Deploy-to-Azure/README.md) | &#8226; MSAL.NET<br/> &#8226; Microsoft.Identity.Web | &#8226; OpenID connect <br/> &#8226; Authorization code <br/> &#8226; On-Behalf-Of|
+> | Blazor | Blazor Server Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/MyOrg) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/B2C) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/MyOrg) <br/> &#8226; [Call web API (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/B2C) | MSAL.NET | Authorization code with PKCE|
+> | ASP.NET Core|[Advanced Token Cache Scenarios](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | &#8226; MSAL.NET <br/> &#8226; Microsoft.Identity.Web | On-Behalf-Of (OBO) |
+> | ASP.NET Core|[Use the Conditional Access auth context to perform step\-up authentication](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | &#8226; MSAL.NET <br/> &#8226; Microsoft.Identity.Web | Authorization code |
+> | ASP.NET Core|[Active Directory FS to Azure AD migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | MSAL.NET | &#8226; SAML <br/> &#8226; OpenID connect |
+> | ASP.NET | &#8226; [Microsoft Graph Training Sample](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) <br/> &#8226; [Sign in users and call Microsoft Graph with admin restricted scope](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) <br/> &#8226; [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | MSAL.NET | &#8226; OpenID connect <br/> &#8226; Authorization code |
+> | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-spring-tutorial/tree/main/4-Deployment/deploy-to-azure-app-service) | &#8226; MSAL Java <br/> &#8226; Azure AD Boot Starter | Authorization code |
+> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication/tree/main/4-Deployment/deploy-to-azure-app-service) | MSAL Java | Authorization code |
+> | Java | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-webapp)| MSAL Java | Authorization code |
+> | Java </p> Spring| Sign in users and call Microsoft Graph via OBO </p> &#8226; [Web API](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | &#8226; Authorization code <br/> &#8226; On-Behalf-Of (OBO) |
> | Node.js </p> Express | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) <br/> &#8226; [Web app that signs in users](https://github.com/Azure-Samples/ms-identity-node) | MSAL Node | Authorization code |
+> | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | MSAL Python | Authorization code |
+> | Python </p> Django | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| MSAL Python | Authorization code |
+> | Ruby | Graph Training <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | OmniAuth OAuth2 | Authorization code |
+
+## Web API
-The following samples show an application that accesses the Microsoft Graph API with its own identity (with no user).
-
-| Client application | Platform | Flow/Grant | Calls Microsoft Graph |
-| --- | --- | --- | --- |
-| Console | .NET Core | [Client credentials](msal-authentication-flows.md#client-credentials) | [dotnetcore-daemon-v2](https://github.com/azure-samples/active-directory-dotnetcore-daemon-v2) |
-| Web app | ASP.NET | [Client credentials](msal-authentication-flows.md#client-credentials) | [dotnet-daemon-v2](https://github.com/azure-samples/active-directory-dotnet-daemon-v2) |
-| Console | Java | [Client credentials](msal-authentication-flows.md#client-credentials) | [ms-identity-java-daemon](https://github.com/Azure-Samples/ms-identity-java-daemon) |
-| Console | Node.js | [Client credentials](msal-authentication-flows.md#client-credentials) | [ms-identity-javascript-nodejs-console](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) |
-| Console | Python | [Client credentials](msal-authentication-flows.md#client-credentials) | [ms-identity-python-daemon](https://github.com/Azure-Samples/ms-identity-python-daemon) |
-
-## Headless applications
+The following samples show how to protect a web API with the Microsoft identity platform, and how to call a downstream API from the web API.
-The following sample shows a public client application running on a device without a web browser. The app can be a command-line tool, an app running on Linux or Mac, or an IoT application. The sample features an app accessing the Microsoft Graph API, in the name of a user who signs-in interactively on another device (such as a mobile phone). This client application uses the Microsoft Authentication Library (MSAL).
+> [!div class="mx-tdCol2BreakAll"]
+>| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+>| -- | -- |-- |-- |
+>| ASP.NET | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapi-onbehalfof) | MSAL.NET | On-Behalf-Of (OBO) |
+>| Java | [Sign in users](https://github.com/Azure-Samples/ms-identity-java-webapi) | MSAL Java | On-Behalf-Of (OBO) |
+>| Node.js | &#8226; [Protect a Node.js web API](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2) <br/> &#8226; [Protect a Node.js Web API with Azure AD B2C](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | MSAL Node | Authorization bearer |
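To make the pattern these web API samples share more concrete, here is a minimal sketch of the On-Behalf-Of exchange using MSAL Python. It is illustrative only: the tenant ID, client ID, client secret, and downstream scope are hypothetical placeholders, and the samples listed above use MSAL.NET, MSAL Java, or MSAL Node rather than this exact code.

```python
import msal

# Hypothetical registration values for the protected web API -- replace with your own.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<web-api-client-id>"
CLIENT_SECRET = "<web-api-client-secret>"

cca = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

def get_downstream_token(incoming_access_token: str) -> str:
    """Exchange the token the client sent to this API for a Microsoft Graph token (OBO)."""
    result = cca.acquire_token_on_behalf_of(
        user_assertion=incoming_access_token,
        scopes=["https://graph.microsoft.com/User.Read"],
    )
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "On-Behalf-Of request failed"))
    return result["access_token"]
```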
-| Client application | Platform | Flow/Grant | Calls Microsoft Graph |
-| --- | --- | --- | --- |
-| Desktop (Console) | .NET/C# (Desktop) | [Device code](msal-authentication-flows.md#device-code) | [dotnetcore-devicecodeflow-v2](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) |
-| Desktop (Console) | Java | [Device code](msal-authentication-flows.md#device-code) | [ms-identity-java-devicecodeflow](https://github.com/Azure-Samples/ms-identity-java-devicecodeflow) |
-| Desktop (Console) | Python | [Device code](msal-authentication-flows.md#device-code) | [ms-identity-python-devicecodeflow](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) |
+## Desktop
-## Multi-tenant SaaS applications
+The following samples show public client desktop applications that access the Microsoft Graph API, or your own web API, in the name of the user. Apart from the *Desktop (Console) with Web Account Manager (WAM)* sample, all these client applications use the Microsoft Authentication Library (MSAL).
-The following samples show how to configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. Configuring your application to be *multi-tenant* means that you can offer a **Software as a Service** (SaaS) application to many organizations, allowing their users to be able to sign-in to your application after providing consent.
+> [!div class="mx-tdCol2BreakAll"]
+> | Language/<br/>Platform | Code sample(s) <br/> on GitHub | Auth<br/> libraries | Auth flow |
+> | - | -- | - | -- |
>| .NET Core | &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/1-Calling-MSGraph/1-1-AzureAD) <br/> &#8226; [Call Microsoft Graph with token cache](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/2-TokenCache) <br/> &#8226; [Call Microsoft Graph with custom web UI HTML](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-1-CustomHTML) <br/> &#8226; [Call Microsoft Graph with custom web browser](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/3-CustomWebUI/3-2-CustomBrowser) <br/> &#8226; [Sign in users with device code flow](https://github.com/Azure-Samples/ms-identity-dotnet-desktop-tutorial/tree/master/4-DeviceCodeFlow) | MSAL.NET | &#8226; Authorization code with PKCE <br/> &#8226; Device code |
+>| .NET | &#8226; [Call Microsoft Graph with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API with daemon console](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/2-Call-OwnApi/README.md) | MSAL.NET | Authorization code with PKCE |
>| .NET | [Invoke protected API with Integrated Windows Authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | MSAL.NET | Integrated Windows Authentication |
+>| ASP.NET | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | MSAL.NET | Credentials grant |
>| Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-desktop/) | MSAL Java | Integrated Windows Authentication |
+>| Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | MSAL Node | Authorization code with PKCE |
>| PowerShell | [Call Microsoft Graph by signing in users with username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | MSAL.NET | Resource owner password credentials |
+>| Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | MSAL Python | Authorization code with PKCE |
>| Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-native-uwp-wam) | Web Account Manager (WAM) API | Integrated Windows Authentication |
+>| XAML | &#8226; [Sign in users and call ASP.NET core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | MSAL.NET | Authorization code with PKCE |
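As a rough illustration of what these desktop samples do, the following MSAL Python sketch signs a user in interactively (MSAL runs the authorization code flow with PKCE in the system browser) and reuses the token cache on later calls. The client ID and scopes are hypothetical placeholders; the listed samples use MSAL.NET, MSAL Java, or MSAL Node for the same pattern.

```python
import msal

CLIENT_ID = "<desktop-app-client-id>"   # hypothetical public client registration
AUTHORITY = "https://login.microsoftonline.com/common"
SCOPES = ["User.Read"]

app = msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)

# Prefer a cached token; fall back to an interactive sign-in in the system browser.
result = None
accounts = app.get_accounts()
if accounts:
    result = app.acquire_token_silent(SCOPES, account=accounts[0])
if not result:
    result = app.acquire_token_interactive(scopes=SCOPES)

if "access_token" in result:
    print("Signed in; token acquired for Microsoft Graph.")
else:
    print(result.get("error_description"))
```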
+
+## Mobile
+
+The following samples show public client mobile applications that access the Microsoft Graph API, or your own web API in the name of the user. These client applications use the Microsoft Authentication Library (MSAL).
-| Platform | Description | Link |
-| --- | --- | --- |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | Multi-tenant SPA calls multi-tenant custom web API |[ms-identity-javascript-angular-spa-aspnet-webapi-multitenant](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter2) |
-| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png) [.NET Core (MSAL.NET)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ASP.NET Core MVC web application calls Graph API |[active-directory-aspnetcore-webapp-openidconnect-v2](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-3-Multi-Tenant) |
-| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png) [.NET Core (MSAL.NET)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ASP.NET Core MVC web application calls ASP.NET Core Web API |[active-directory-aspnetcore-webapp-openidconnect-v2](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-3-AnyOrg) |
+> [!div class="mx-tdCol2BreakAll"]
+>| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+>| -- | -- |-- |-- |
+>| iOS | &#8226; [Call Microsoft Graph native](https://github.com/Azure-Samples/ms-identity-mobile-apple-swift-objc) <br/> &#8226; [Call Microsoft Graph with Azure AD nxoauth](https://github.com/azure-samples/active-directory-ios-native-nxoauth2-v2) | MSAL iOS | Authorization code with PKCE |
+>| Java | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-java) | MSAL Android | Authorization code with PKCE |
+>| Kotlin | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-kotlin) | MSAL Android | Authorization code with PKCE |
+>| Xamarin | &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/1-Basic) <br/>&#8226; [Sign in users with broker and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/2-With-broker) | MSAL.NET | Authorization code with PKCE |
-## Web APIs
+## Service / daemon
-The following samples show how to protect a web API with the Microsoft identity platform, and how to call a downstream API from the web API.
+The following samples show an application that accesses the Microsoft Graph API with its own identity (with no user).
-| Platform | Sample |
-| -- | - |
-| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png)</p>ASP.NET Core | ASP.NET Core web API (service) of [dotnet-native-aspnetcore-v2](https://aka.ms/msidentity-aspnetcore-webapi-calls-msgraph) |
-| ![This image shows the ASP.NET logo](media/sample-v2-code/logo_NET.png)</p>ASP.NET MVC | Web API (service) of [ms-identity-aspnet-webapi-onbehalfof](https://github.com/Azure-Samples/ms-identity-aspnet-webapi-onbehalfof) |
-| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) | Web API (service) of [ms-identity-java-webapi](https://github.com/Azure-Samples/ms-identity-java-webapi) |
-| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js (Passport.js)| Web API (service) of [active-directory-javascript-nodejs-webapi-v2](https://github.com/Azure-Samples/active-directory-javascript-nodejs-webapi-v2) |
-| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js (Passport.js)| B2C Web API (service) of [active-directory-b2c-javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) |
+> [!div class="mx-tdCol2BreakAll"]
+>| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+>| -- | -- |-- |-- |
+>| ASP.NET| &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)<br/> &#8226; [Call own web API](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop) <br/> &#8226; [Using managed identity and Azure key vault](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/3-Using-KeyVault) <br/> &#8226; [Multi-tenant with Microsoft identity platform endpoint](https://github.com/Azure-Samples/ms-identity-aspnet-daemon-webapp) | MSAL.NET | Client credentials grant|
+>| Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-java-daemon)| MSAL Java| Client credentials grant|
+>| Node.js | [Sign in users and call web API](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console) | MSAL Node | Client credentials grant |
+>| Python | &#8226; [Call Microsoft Graph with secret](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/1-Call-MsGraph-WithSecret) <br/> &#8226; [Call Microsoft Graph with certificate](https://github.com/Azure-Samples/ms-identity-python-daemon/tree/master/2-Call-MsGraph-WithCertificate) | MSAL Python| Client credentials grant|
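For orientation, here is a hedged MSAL Python sketch of the client credentials grant these daemon samples rely on: the app authenticates as itself (no user) and calls Microsoft Graph with application permissions. The tenant ID, client ID, and secret are placeholders, not values from any sample above.

```python
import msal
import requests

TENANT_ID = "<tenant-id>"              # hypothetical values -- replace with your own
CLIENT_ID = "<daemon-client-id>"
CLIENT_SECRET = "<daemon-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# /.default requests every application permission already granted to this app.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/users",
        headers={"Authorization": f"Bearer {result['access_token']}"},
        timeout=30,
    )
    print(resp.status_code)
else:
    print(result.get("error_description"))
```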
## Azure Functions as web APIs

The following samples show how to protect an HTTP-triggered Azure Function that exposes a web API with the Microsoft identity platform, and how to call a downstream API from the web API.
-| Platform | Sample |
-| -- | - |
-| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png)</p>ASP.NET Core | ASP.NET Core web API (service) Azure Function of [dotnet-native-aspnetcore-v2](https://github.com/Azure-Samples/ms-identity-dotnet-webapi-azurefunctions) |
-| ![This image shows the Python logo](media/sample-v2-code/logo_python.png)</p>Python | Web API (service) of [Python](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) |
-| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js (Passport.js)| Web API (service) of [Node.js and passport-azure-ad](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-azurefunctions) |
-| ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js (Passport.js)| Web API (service) of [Node.js and passport-azure-ad using on behalf of](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-onbehalfof-azurefunctions) |
+> [!div class="mx-tdCol2BreakAll"]
+>| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+>| -- | -- |-- |-- |
+>| .NET | [.NET Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-dotnet-webapi-azurefunctions) | MSAL.NET | Authorization code |
+>| Node.js | [Node.js Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-azurefunctions) | MSAL Node | Authorization bearer |
+>| Node.js | [Call Microsoft Graph API on behalf of a user](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-onbehalfof-azurefunctions) | MSAL Node| On-Behalf-Of (OBO)|
+>| Python | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | MSAL Python | Authorization code |
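As a rough sketch of what "secured by Azure AD" means for an HTTP-triggered function, the following Python example checks the incoming bearer token against the tenant's signing keys before responding. It is an assumption-laden illustration: it uses PyJWT rather than the libraries in the samples above, and the tenant ID and audience values are hypothetical. A production API would also validate the issuer and the required scopes or app roles.

```python
import azure.functions as func
import jwt                      # PyJWT
from jwt import PyJWKClient

TENANT_ID = "<tenant-id>"                           # hypothetical values
AUDIENCE = "api://<function-app-client-id>"
jwks_client = PyJWKClient(
    f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
)

def main(req: func.HttpRequest) -> func.HttpResponse:
    auth_header = req.headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        return func.HttpResponse("Missing bearer token.", status_code=401)
    token = auth_header[len("Bearer "):]
    try:
        # Verify the signature with the tenant's published keys and check the audience.
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(token, signing_key.key, algorithms=["RS256"], audience=AUDIENCE)
    except jwt.PyJWTError as ex:
        return func.HttpResponse(f"Token validation failed: {ex}", status_code=401)
    return func.HttpResponse(f"Hello, {claims.get('name', 'caller')}.", status_code=200)
```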
+
+## Headless
-## Other Microsoft Graph samples
+The following sample shows a public client application running on a device without a web browser. The app can be a command-line tool, an app running on Linux or Mac, or an IoT application. The sample features an app accessing the Microsoft Graph API in the name of a user who signs in interactively on another device (such as a mobile phone). This client application uses the Microsoft Authentication Library (MSAL).
+
+> [!div class="mx-tdCol2BreakAll"]
+>| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+>| -- | -- |-- |-- |
+>| .NET core | [Invoke protected API from text-only device](https://github.com/azure-samples/active-directory-dotnetcore-devicecodeflow-v2) | MSAL.NET | Device code|
+>| Java | [Sign in users and invoke protected API](https://github.com/Azure-Samples/ms-identity-java-devicecodeflow) | MSAL Java | Device code |
+>| Python | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-devicecodeflow) | MSAL Python | Device code |
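The device code flow used by these headless samples looks roughly like the MSAL Python sketch below: the app prints a verification URL and a one-time code, the user completes sign-in on another device, and the app polls until a token is issued. The client ID and scope are hypothetical placeholders.

```python
import msal

CLIENT_ID = "<public-client-id>"   # hypothetical registration
AUTHORITY = "https://login.microsoftonline.com/organizations"

app = msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)

flow = app.initiate_device_flow(scopes=["User.Read"])
if "user_code" not in flow:
    raise RuntimeError(f"Could not start device code flow: {flow}")

# Tells the user to open https://microsoft.com/devicelogin and enter the one-time code.
print(flow["message"])

# Blocks, polling the token endpoint, until the user finishes signing in or the flow expires.
result = app.acquire_token_by_device_flow(flow)
print("Token acquired." if "access_token" in result else result.get("error_description"))
```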
+
+## Multi-tenant SaaS
+
+The following samples show how to configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. Configuring your application to be *multi-tenant* means that you can offer a **Software as a Service** (SaaS) application to many organizations, allowing their users to sign in to your application after providing consent.
+
+> [!div class="mx-tdCol2BreakAll"]
+>| Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
+>| -- | -- |-- |-- |
+>| ASP.NET Core | [ASP.NET Core MVC web application calls Microsoft Graph API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-3-Multi-Tenant) | MSAL.NET | OpenID connect |
+>| ASP.NET Core | [ASP.NET Core MVC web application calls ASP.NET Core Web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-3-AnyOrg) | MSAL.NET | Authorization code |
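The essential multi-tenant configuration change is the authority the app signs users in against. The hedged MSAL Python sketch below uses the tenant-agnostic `organizations` authority instead of a single tenant ID; the client ID, secret, and redirect URI are hypothetical, and the samples above implement the same idea with MSAL.NET.

```python
import msal

CLIENT_ID = "<multi-tenant-app-client-id>"   # hypothetical values
CLIENT_SECRET = "<client-secret>"

# A single-tenant app would pin the authority to one directory, for example
# https://login.microsoftonline.com/<tenant-id>. A multi-tenant app uses a
# tenant-agnostic authority so users from any Azure AD organization can sign in
# after they (or their admin) consent.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority="https://login.microsoftonline.com/organizations",
)

# The web app then runs the usual authorization code flow, starting with a sign-in URL:
auth_url = app.get_authorization_request_url(
    scopes=["User.Read"],
    redirect_uri="https://localhost:5000/getAToken",   # hypothetical redirect URI
)
print(auth_url)
```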
-To learn about [samples](https://github.com/microsoftgraph/msgraph-community-samples/tree/master/samples#aspnet) and tutorials that demonstrate different usage patterns for the Microsoft Graph API, including authentication with Azure AD, see [Microsoft Graph Community samples & tutorials](https://github.com/microsoftgraph/msgraph-community-samples).
+## Next steps
-## See also
+If you'd like to delve deeper into more sample code, see:
-[Microsoft Graph API conceptual and reference](/graph/use-the-api?context=graph%2fapi%2fbeta&view=graph-rest-beta&preserve-view=true)
+- [Sign in users and call the Microsoft Graph API from an Angular single-page application](tutorial-v2-angular-auth-code.md)
+- [Sign in users in a Node.js and Express web app](tutorial-v2-nodejs-webapp-msal.md)
+- [Call the Microsoft Graph API from a Universal Windows Platform (UWP) application](tutorial-v2-windows-uwp.md)
active-directory Scenario Desktop Production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-production.md
For Microsoft personal account users, reprompting for consent on each native cli
## Next steps
-To try out additional samples, see [Desktop and mobile public client apps](sample-v2-code.md#desktop-and-mobile-public-client-apps).
+To try out additional samples, see [Desktop public client applications](sample-v2-code.md#desktop).
active-directory Scenario Mobile Production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-production.md
For each Microsoft Authentication Library (MSAL) type, you can find sample code
## Next steps
-To try out additional samples, see [Desktop and mobile public client apps](sample-v2-code.md#desktop-and-mobile-public-client-apps).
+To try out additional samples, see [Mobile public client applications](sample-v2-code.md#mobile).
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/single-sign-on-saml-protocol.md
Previously updated : 05/18/2020 Last updated : 08/24/2021
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/single-sign-out-saml-protocol.md
Previously updated : 03/22/2021 Last updated : 08/24/2021
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-admin-takeover.md
The supported service plans include:
External admin takeover is not supported for any service that has service plans that include SharePoint, OneDrive, or Skype For Business; for example, through an Office free subscription.
+> [!NOTE]
+> External admin takeover is not supported across cloud boundaries (for example, from Azure Commercial to Azure Government). In these scenarios, we recommend that you perform external admin takeover into another Azure Commercial tenant, and then delete the domain from that tenant so you can verify it successfully in the destination Azure Government tenant.
+ You can optionally use the [**ForceTakeover** option](#azure-ad-powershell-cmdlets-for-the-forcetakeover-option) to remove the domain name from the unmanaged organization and verify it on the desired organization.

#### More information about RMS for individuals
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-self-service-management.md
The group settings enable to control who can create security and Microsoft 365 g
| Setting | Value | Effect on your tenant |
| --- | :---: | --- |
| Users can create security groups in Azure portals, API or PowerShell | Yes | All users in your Azure AD organization are allowed to create new security groups and add members to these groups in Azure portals, API, or PowerShell. These new groups would also show up in the Access Panel for all other users. If the policy setting on the group allows it, other users can create requests to join these groups. |
-| | No | Users can't security create groups and can't change existing groups for which they are an owner. However, they can still manage the memberships of those groups and approve requests from other users to join their groups. |
+| | No | Users can't create security groups and can't change existing groups for which they are an owner. However, they can still manage the memberships of those groups and approve requests from other users to join their groups. |
| Users can create Microsoft 365 groups in Azure portals, API or PowerShell | Yes | All users in your Azure AD organization are allowed to create new Microsoft 365 groups and add members to these groups in Azure portals, API, or PowerShell. These new groups would also show up in the Access Panel for all other users. If the policy setting on the group allows it, other users can create requests to join these groups. |
| | No | Users can't create Microsoft 365 groups and can't change existing groups for which they are an owner. However, they can still manage the memberships of those groups and approve requests from other users to join their groups. |
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) | | Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Office 365 A3 for faculty | ENTERPRISEPACKPLUS_FACULTY | e578b273-6db4-4691-bba0-8d691f4da603 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2(94a54592-cd8b-425e-87c6-97868b000b91)<br/> YAMMER_EDU(2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 
(95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A3 for students | ENTERPRISEPACKPLUS_STUDENT | 98b6e773-24d4-4c0d-a968-6e787a1f8204 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 
365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A5 for faculty| ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing 
(3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A5 for students | ENTERPRISEPREMIUM_STUDENT | ee656612-49fa-43e5-b67e-cb1fdf7699df | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 
(96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 
365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 Advanced Compliance | EQUIVIO_ANALYTICS | 1b1b1f7a-8355-43b6-829f-336cfccb744c | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) |
active-directory Delegate Invitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/delegate-invitations.md
Previously updated : 06/04/2021 Last updated : 08/24/2021
By default, all users, including guests, can invite guest users.
> [!NOTE] > If **Members can invite** is set to **No** and **Admins and users in the guest inviter role can invite** is set to **Yes**, users in the **Guest Inviter** role will still be able to invite guests.
-6. Under **Email one-time passcode for guests**, choose the appropriate settings (for more information, see [Email one-time passcode authentication](one-time-passcode.md)):
-
- - **Automatically enable email one-time passcode for guests in October 2021**. (Default) If the email one-time passcode feature is not already enabled for your tenant, it will be automatically turned on in October 2021. No further action is necessary if you want the feature enabled at that time. If you've already enabled or disabled the feature, this option will be unavailable.
-
- - **Enable email one-time passcode for guests effective now**. Turns on the email one-time passcode feature for your tenant.
-
- - **Disable email one-time passcode for guests**. Turns off the email one-time passcode feature for your tenant, and prevents the feature from turning on in October 2021.
-
- > [!NOTE]
- > Instead of the options above, you'll see the following toggle if you've enabled or disabled this feature or if you've previously opted in to the preview:
- >
- >![Enable Email one-time passcode opted in](media/delegate-invitations/enable-email-otp-opted-in.png)
-
-7. Under **Enable guest self-service sign up via user flows**, select **Yes** if you want to be able to create user flows that let users sign up for apps. For more information about this setting, see [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md).
+6. Under **Enable guest self-service sign up via user flows**, select **Yes** if you want to be able to create user flows that let users sign up for apps. For more information about this setting, see [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md).
![Self-service sign up via user flows setting](./media/delegate-invitations/self-service-sign-up-setting.png)
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
Previously updated : 08/17/2021 Last updated : 08/24/2021
The following are known scenarios that will impact Gmail users:
This change does not affect: - Web apps
+- Microsoft 365 services that are accessed through a website (for example, SharePoint Online, Office web apps, and Teams web app)
- Mobile apps using system web-views for authentication ([SFSafariViewController](https://developer.apple.com/documentation/safariservices/sfsafariviewcontroller) on iOS, [Custom Tabs](https://developer.chrome.com/docs/android/custom-tabs/overview/) on Android). - Google Workspace identities, for example when you're using [SAML-based federation](direct-federation.md) with Google Workspace
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/user-properties.md
Previously updated : 08/04/2021 Last updated : 08/24/2021
This article describes the properties and states of an invited Azure Active Dire
Depending on the inviting organization's needs, an Azure AD B2B collaboration user can be in one of the following account states: -- State 1: Homed in an external instance of Azure AD and represented as a guest user in the inviting organization. In this case, the B2B user signs in by using an Azure AD account that belongs to the invited tenant. If the partner organization doesn't use Azure AD, the guest user in Azure AD is still created. The requirements are that they redeem their invitation and Azure AD verifies their email address. This arrangement is also called a just-in-time (JIT) tenancy or a "viral" tenancy.
+- State 1: Homed in an external instance of Azure AD and represented as a guest user in the inviting organization. In this case, the B2B user signs in by using an Azure AD account that belongs to the invited tenant. If the partner organization doesn't use Azure AD, the guest user in Azure AD is still created. The requirements are that they redeem their invitation and Azure AD verifies their email address. This arrangement is also called a just-in-time (JIT) tenancy, a "viral" tenancy, or an unmanaged Azure AD tenancy.
> [!IMPORTANT] > **Starting October 2021**, Microsoft will no longer support the redemption of invitations by creating unmanaged Azure AD accounts and tenants for B2B collaboration scenarios. In preparation, we encourage customers to opt into [email one-time passcode authentication](one-time-passcode.md), which is now generally available.
active-directory Active Directory Troubleshooting Support Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
Title: Find help and open a support ticket - Azure Active Directory | Microsoft Docs description: Instructions about how to get help and open a support ticket for Azure Active Directory.
# Find help and open a support ticket for Azure Active Directory
-Microsoft provides global technical, pre-sales, billing, and subscription support for Azure Active Directory (Azure AD). Support is available both online and by phone for Microsoft Azure paid and trial subscriptions. Phone support and online billing support are available in additional languages.
+
+Microsoft provides global technical, pre-sales, billing, and subscription support for Azure Active Directory (Azure AD). Support is available both online and by phone for Microsoft Azure paid and trial subscriptions. Phone support and online billing support are available in additional languages.
## Find help without opening a support ticket
Before creating a support ticket, check out the following resources for answers
* The [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for our IT pro partners and customers to collaborate, share, and learn. The [Microsoft Technical Community Info Center](https://techcommunity.microsoft.com/t5/Community-Info-Center/ct-p/Community-Info-Center) is used for announcements, blog posts, ask-me-anything (AMA) interactions with experts, and more. You can also [join the community to submit your ideas](https://techcommunity.microsoft.com/t5/Communities/ct-p/communities). - ## Open a support ticket If you are unable to find answers by using self-help resources, you can open an online support ticket. You should open each support ticket for only a single problem, so that we can connect you to the support engineers who are subject matter experts for your problem. Also, Azure Active Directory engineering teams prioritize their work based on incidents that are generated, so you're often contributing to service improvements.
active-directory Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/support-help-options.md
+
+ Title: Support and help options for Azure Active Directory
+description: Learn where to get help and find answers to your questions as you build and configure identity and access management (IAM) solutions that integrate with Azure Active Directory (Azure AD).
++++++++ Last updated : 08/23/2021++++
+# Support and help options for Azure Active Directory
+
+If you need an answer to a question or help in solving a problem not covered in our documentation, it might be time to reach out to experts for help. Here are several suggestions for getting answers to your questions as you use Azure Active Directory (Azure AD).
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're an IT admin managing your organization's tenant, a developer just starting your cloud journey, or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- If you're not an Azure customer, you can open a support request with [Microsoft Support for business](https://support.serviceshub.microsoft.com/supportforbusiness).
+
+## Post a question to Microsoft Q&A
+
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='../develop/media/common/question-mark-icon.png'>
+</div>
+
+Get answers to your identity and access management questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs), and members of our expert community.
+
+[Microsoft Q&A](/answers/products/) is Azure's recommended source of community support.
+
+If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Use one of the following tags when you ask your [high-quality question](/answers/articles/24951/how-to-write-a-quality-question.html):
+
+| Component/area| Tags |
+|||
+| Active Directory Authentication Library (ADAL) | [[adal]](/answers/topics/azure-ad-adal-deprecation.html) |
+| Microsoft Authentication Library (MSAL) | [[msal]](/answers/topics/azure-ad-msal.html) |
+| Open Web Interface for .NET (OWIN) middleware | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
+| [Azure AD B2B / External Identities](../external-identities/what-is-b2b.md) | [[azure-ad-b2b]](/answers/topics/azure-ad-b2b.html) |
+| [Azure AD B2C](https://azure.microsoft.com/services/active-directory-b2c/) | [[azure-ad-b2c]](/answers/topics/azure-ad-b2c.html) |
+| [Microsoft Graph API](https://developer.microsoft.com/graph/) | [[azure-ad-graph]](/answers/topics/azure-ad-graph.html) |
+| All other authentication and authorization areas | [[azure-active-directory]](/answers/topics/azure-active-directory.html) |
+
+## Stay informed of updates and new releases
+
+<div class='icon is-large'>
+ <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+</div>
+
+- [Azure Updates](https://azure.microsoft.com/updates/?category=identity): Learn about important product updates, roadmap, and announcements.
+
+- [What's new in Azure AD](whats-new.md): Get to know what's new in Azure AD including the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.
+
+- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.
+
+- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage and learn from experts.
active-directory Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/common-scenarios.md
description: Centralize application management with Azure AD
-+ Last updated 03/02/2019
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Refer to the following list to configure managed identity for Azure App Service
Azure Arc enabled Kubernetes currently [supports system assigned identity](../../azure-arc/kubernetes/quickstart-connect-cluster.md). The managed service identity certificate is used by all Azure Arc enabled Kubernetes agents for communication with Azure.
-### Azure Arc enabled servers
+### Azure Arc-enabled servers
| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
| --- | :-: | :-: | :-: | :-: |
| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
| User assigned | Not available | Not available | Not available | Not available |
-All Azure Arc enabled servers have a system assigned identity. You cannot disable or change the system assigned identity on an Azure Arc enabled server. Refer to the following resources to learn more about how to consume managed identities on Azure Arc enabled servers:
+All Azure Arc-enabled servers have a system assigned identity. You cannot disable or change the system assigned identity on an Azure Arc-enabled server. Refer to the following resources to learn more about how to consume managed identities on Azure Arc-enabled servers:
-- [Authenticate against Azure resources with Arc enabled servers](../../azure-arc/servers/managed-identity-authentication.md)-- [Using a managed identity with Arc enabled servers](../../azure-arc/servers/security-overview.md#using-a-managed-identity-with-arc-enabled-servers)
+- [Authenticate against Azure resources with Arc-enabled servers](../../azure-arc/servers/managed-identity-authentication.md)
+- [Using a managed identity with Arc-enabled servers](../../azure-arc/servers/security-overview.md#using-a-managed-identity-with-arc-enabled-servers)
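As a quick illustration of the pattern those articles describe, the following minimal sketch acquires an Azure AD access token from the server's system-assigned identity by using the `azure-identity` Python package. The package choice, the target scope, and the assumption that the Azure Connected Machine agent is running on the machine are illustrative, not part of the original guidance.

```python
# Minimal sketch: request a token from the system-assigned managed identity
# of an Azure Arc-enabled server (assumes the azure-identity package is
# installed and the Azure Connected Machine agent is running locally).
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()

# Ask for a token scoped to Azure Resource Manager; substitute the scope of
# whatever Azure service you actually need to call.
token = credential.get_token("https://management.azure.com/.default")
print("Token expires at (epoch seconds):", token.expires_on)
```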
### Azure Automanage
active-directory 15Five Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/15five-tutorial.md
Previously updated : 05/27/2021 Last updated : 08/20/2021 # Tutorial: Azure Active Directory integration with 15Five
To get started, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment. * 15Five supports **SP** initiated SSO.
+* 15Five supports [Automated user provisioning](15five-provisioning-tutorial.md).
## Add 15Five from the gallery
active-directory 4Me Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/4me-tutorial.md
Previously updated : 06/09/2021 Last updated : 08/20/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* 4me supports **SP** initiated SSO. * 4me supports **Just In Time** user provisioning.
+* 4me supports [Automated user provisioning](4me-provisioning-tutorial.md).
## Add 4me from the gallery
active-directory Airstack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/airstack-tutorial.md
Previously updated : 07/29/2019 Last updated : 08/20/2021
In this tutorial, you'll learn how to integrate Airstack with Azure Active Direc
* Enable your users to be automatically signed-in to Airstack with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Airstack single sign-on (SSO) enabled subscription. ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Airstack supports **SP and IDP** initiated SSO
+* Airstack supports **SP and IDP** initiated SSO.
+* Airstack supports [Automated user provisioning](airstack-provisioning-tutorial.md).
-## Adding Airstack from the gallery
+## Add Airstack from the gallery
To configure the integration of Airstack into Azure AD, you need to add Airstack from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Airstack** in the search box. 1. Select **Airstack** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Airstack
Configure and test Azure AD SSO with Airstack using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Airstack.
-To configure and test Azure AD SSO with Airstack, complete the following building blocks:
+To configure and test Azure AD SSO with Airstack, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
2. **[Configure Airstack SSO](#configure-airstack-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-5. **[Create Airstack test user](#create-airstack-test-user)** - to have a counterpart of B.Simon in Airstack that is linked to the Azure AD representation of user.
+ 1. **[Create Airstack test user](#create-airstack-test-user)** - to have a counterpart of B.Simon in Airstack that is linked to the Azure AD representation of user.
6. **[Test SSO](#test-sso)** - to verify whether the configuration works. ### Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Airstack** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Airstack** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png) 1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
- ![Airstack Domain and URLs single sign-on information](common/preintegrated.png)
- 1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL:
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/copy-metadataurl.png)
-### Configure Airstack SSO
-
-To configure single sign-on on **Airstack** side, you need to send the **App Federation Metadata Url** to [Airstack support team](mailto:jsinger@lenovo.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Airstack**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure Airstack SSO
+
+To configure single sign-on on **Airstack** side, you need to send the **App Federation Metadata Url** to [Airstack support team](mailto:jsinger@lenovo.com). They set this setting to have the SAML SSO connection set properly on both sides.
++ ### Create Airstack test user In this section, you create a user called B.Simon in Airstack. Work with [Airstack support team](mailto:jsinger@lenovo.com) to add the users in the Airstack platform. Users must be created and activated before you use single sign-on.
-### Test SSO
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the Airstack Sign-on URL where you can initiate the login flow.
+
+* Go to Airstack Sign-on URL directly and initiate the login flow from there.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+#### IDP initiated:
-When you click the Airstack tile in the Access Panel, you should be automatically signed in to the Airstack for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Airstack for which you set up the SSO.
-## Additional resources
+You can also use Microsoft My Apps to test the application in any mode. When you click the Airstack tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Airstack for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md) -- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Airstack, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Alertmedia Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/alertmedia-tutorial.md
Previously updated : 05/13/2021 Last updated : 08/20/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* AlertMedia supports **IDP** initiated SSO. * AlertMedia supports **Just In Time** user provisioning.
+* AlertMedia supports [Automated user provisioning](alertmedia-provisioning-tutorial.md).
## Add AlertMedia from the gallery
active-directory Baldwin Safety & Compliance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/baldwin-safety-&-compliance-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Baldwin Safety & Compliance | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Baldwin Safety & Compliance.
++++++++ Last updated : 08/16/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Baldwin Safety & Compliance
+
+In this tutorial, you'll learn how to integrate Baldwin Safety & Compliance with Azure Active Directory (Azure AD). When you integrate Baldwin Safety & Compliance with Azure AD, you can:
+
+* Control in Azure AD who has access to Baldwin Safety & Compliance.
+* Enable your users to be automatically signed-in to Baldwin Safety & Compliance with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Baldwin Safety & Compliance single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Baldwin Safety & Compliance supports **IDP** initiated SSO.
+
+## Add Baldwin Safety & Compliance from the gallery
+
+To configure the integration of Baldwin Safety & Compliance into Azure AD, you need to add Baldwin Safety & Compliance from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Baldwin Safety & Compliance** in the search box.
+1. Select **Baldwin Safety & Compliance** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Baldwin Safety & Compliance
+
+Configure and test Azure AD SSO with Baldwin Safety & Compliance using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Baldwin Safety & Compliance.
+
+To configure and test Azure AD SSO with Baldwin Safety & Compliance, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Baldwin Safety and Compliance SSO](#configure-baldwin-safety-and-compliance-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Baldwin Safety and Compliance test user](#create-baldwin-safety-and-compliance-test-user)** - to have a counterpart of B.Simon in Baldwin Safety & Compliance that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Baldwin Safety & Compliance** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
+
+1. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog.
+
+ ![Edit SAML Signing Certificate](common/edit-certificate.png)
+
+1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer.
+
+ ![Copy Thumbprint value](common/copy-thumbprint.png)
+
+1. On the **Set up Baldwin Safety & Compliance** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
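As an alternative to the portal steps above, the same test user can be created programmatically. The sketch below calls the Microsoft Graph `/users` endpoint directly; the access token, domain, and password values are placeholders you supply yourself, and this scripted route is an assumption rather than a step the tutorial requires.

```python
# Hypothetical alternative to the portal steps: create the B.Simon test user
# through Microsoft Graph. Requires an access token with the
# User.ReadWrite.All permission; token, domain, and password are placeholders.
import requests

GRAPH_TOKEN = "<ACCESS_TOKEN>"                    # placeholder - acquire via your usual flow
USER_PRINCIPAL_NAME = "B.Simon@contoso.com"       # placeholder domain

payload = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": USER_PRINCIPAL_NAME,
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<INITIAL_PASSWORD>",         # placeholder
    },
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=payload,
)
response.raise_for_status()
print("Created user with object id:", response.json()["id"])
```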
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Baldwin Safety & Compliance.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Baldwin Safety & Compliance**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Baldwin Safety and Compliance SSO
+
+To configure single sign-on on **Baldwin Safety & Compliance** side, you need to send the **Thumbprint Value** and appropriate copied URLs from Azure portal to [Baldwin Safety & Compliance support team](mailto:support@baldwinaviation.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Baldwin Safety and Compliance test user
+
+In this section, you create a user called Britta Simon in Baldwin Safety & Compliance. Work with [Baldwin Safety & Compliance support team](mailto:support@baldwinaviation.com) to add the users in the Baldwin Safety & Compliance platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Baldwin Safety & Compliance for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Baldwin Safety & Compliance tile in the My Apps, you should be automatically signed in to the Baldwin Safety & Compliance for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Baldwin Safety & Compliance, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Comm100livechat Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/comm100livechat-tutorial.md
Previously updated : 10/22/2019 Last updated : 08/17/2021
In this tutorial, you'll learn how to integrate Comm100 Live Chat with Azure Act
* Enable your users to be automatically signed-in to Comm100 Live Chat with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Comm100 Live Chat supports **SP** initiated SSO
+* Comm100 Live Chat supports **SP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Comm100 Live Chat from the gallery
+## Add Comm100 Live Chat from the gallery
To configure the integration of Comm100 Live Chat into Azure AD, you need to add Comm100 Live Chat from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Comm100 Live Chat** in the search box. 1. Select **Comm100 Live Chat** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Comm100 Live Chat
+## Configure and test Azure AD SSO for Comm100 Live Chat
Configure and test Azure AD SSO with Comm100 Live Chat using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Comm100 Live Chat.
-To configure and test Azure AD SSO with Comm100 Live Chat, complete the following building blocks:
+To configure and test Azure AD SSO with Comm100 Live Chat, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Comm100 Live Chat SSO](#configure-comm100-live-chat-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Comm100 Live Chat test user](#create-comm100-live-chat-test-user)** - to have a counterpart of B.Simon in Comm100 Live Chat that is linked to the Azure AD representation of user.
+ 1. **[Create Comm100 Live Chat test user](#create-comm100-live-chat-test-user)** - to have a counterpart of B.Simon in Comm100 Live Chat that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Comm100 Live Chat** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Comm100 Live Chat** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.comm100.com/AdminManage/LoginSSO.aspx?siteId=<SITEID>`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Comm100 Live Chat**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. On the top right side of the page, click **My Account**.
- ![Comm100 Live Chat myaccount](./media/comm100livechat-tutorial/tutorial_comm100livechat_account.png)
+ ![Comm100 Live Chat my account.](./media/comm100livechat-tutorial/account.png)
1. From the left side of menu, click **Security** and then click **Agent Single Sign-On**.
- ![Screenshot that shows the left-side account menu with "Security" and "Agent Single Sign-On" highlighted.](./media/comm100livechat-tutorial/tutorial_comm100livechat_security.png)
+ ![Screenshot that shows the left-side account menu with "Security" and "Agent Single Sign-On" highlighted.](./media/comm100livechat-tutorial/security.png)
1. On the **Agent Single Sign-On** page, perform the following steps:
- ![Comm100 Live Chat security](./media/comm100livechat-tutorial/tutorial_comm100livechat_singlesignon.png)
+ ![Comm100 Live Chat security.](./media/comm100livechat-tutorial/certificate.png)
a. Copy the first highlighted link and paste it into the **Sign-on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
To enable Azure AD users to sign in to Comm100 Live Chat, they must be provision
2. On the top right side of the page, click **My Account**.
- ![Comm100 Live Chat myaccount](./media/comm100livechat-tutorial/tutorial_comm100livechat_account.png)
+ ![Comm100 Live Chat my account.](./media/comm100livechat-tutorial/account.png)
3. From the left side of menu, click **Agents** and then click **New Agent**.
- ![Comm100 Live Chat agent](./media/comm100livechat-tutorial/tutorial_comm100livechat_agent.png)
+ ![Comm100 Live Chat agent.](./media/comm100livechat-tutorial/agent.png)
4. On the **New Agent** page, perform the following steps:
- ![Comm100 Live Chat new agent](./media/comm100livechat-tutorial/tutorial_comm100livechat_newagent.png)
+ ![Comm100 Live Chat new agent.](./media/comm100livechat-tutorial/new-agent.png)
a. In the **Email** text box, enter the email of the user, like **B.simon\@contoso.com**.
To enable Azure AD users to sign in to Comm100 Live Chat, they must be provision
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Comm100 Live Chat tile in the Access Panel, you should be automatically signed in to the Comm100 Live Chat for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to Comm100 Live Chat Sign-on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to Comm100 Live Chat Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Comm100 Live Chat tile in the My Apps, this will redirect to Comm100 Live Chat Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Comm100 Live Chat with Azure AD](https://aad.portal.azure.com/)
+Once you configure Comm100 Live Chat, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Contrast Security Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/contrast-security-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Contrast Security | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Contrast Security.
++++++++ Last updated : 08/16/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Contrast Security
+
+In this tutorial, you'll learn how to integrate Contrast Security with Azure Active Directory (Azure AD). When you integrate Contrast Security with Azure AD, you can:
+
+* Control in Azure AD who has access to Contrast Security.
+* Enable your users to be automatically signed-in to Contrast Security with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Contrast Security single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Contrast Security supports **SP and IDP** initiated SSO.
+* Contrast Security supports **Just In Time** user provisioning.
+
+## Add Contrast Security from the gallery
+
+To configure the integration of Contrast Security into Azure AD, you need to add Contrast Security from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Contrast Security** in the search box.
+1. Select **Contrast Security** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Contrast Security
+
+Configure and test Azure AD SSO with Contrast Security using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Contrast Security.
+
+To configure and test Azure AD SSO with Contrast Security, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Contrast Security SSO](#configure-contrast-security-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Contrast Security test user](#create-contrast-security-test-user)** - to have a counterpart of B.Simon in Contrast Security that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Contrast Security** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://<customerUrl>/Contrast/saml/metadata` |
+ | `https://<customerDNS>:port/Contrast/saml/metadata` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://<customerUrl>/Contrast/saml/SSO` |
+ | `https://<customerDNS>:port/Contrast/saml/SSO` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ |--|
+ | `https://<customername>.contrastsecurity.com/Contrast`|
+ | `https://<customerDNS>:port/Contrast` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Contrast Security Client support team](mailto:support@contrastsecurity.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Contrast Security.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Contrast Security**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Contrast Security SSO
+
+To configure single sign-on on **Contrast Security** side, you need to send the **App Federation Metadata Url** to [Contrast Security support team](mailto:support@contrastsecurity.com). They set this setting to have the SAML SSO connection set properly on both sides.
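Before handing the URL to the support team, it can help to confirm that it resolves and actually contains a signing certificate. The sketch below is a hypothetical check rather than a tutorial step; the metadata URL is a placeholder for the value you copied from the Azure portal.

```python
# Hypothetical sanity check: fetch the App Federation Metadata Url copied from
# the Azure portal and confirm it contains at least one signing certificate.
import xml.etree.ElementTree as ET

import requests

METADATA_URL = "<APP_FEDERATION_METADATA_URL>"  # placeholder - paste the copied value

response = requests.get(METADATA_URL, timeout=30)
response.raise_for_status()

root = ET.fromstring(response.content)
namespaces = {"ds": "http://www.w3.org/2000/09/xmldsig#"}

# The metadata root is an EntityDescriptor whose entityID identifies the app.
print("entityID:", root.attrib.get("entityID"))
certificates = root.findall(".//ds:X509Certificate", namespaces)
print(f"Signing certificates found: {len(certificates)}")
```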
+
+### Create Contrast Security test user
+
+In this section, a user called Britta Simon is created in Contrast Security. Contrast Security supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Contrast Security, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the Contrast Security Sign-on URL where you can initiate the login flow.
+
+* Go to Contrast Security Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Contrast Security for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Contrast Security tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Contrast Security for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Contrast Security, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Draup Inc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/draup-inc-tutorial.md
Previously updated : 07/16/2021 Last updated : 08/19/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+1. On the **Basic SAML Configuration** page, perform the following step:
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- a. In the **Sign-on URL** text box, type the URL:
+ In the **Sign-on URL** text box, type the URL:
`https://platform.draup.com/saml2/login/` 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
active-directory Metatask Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/metatask-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Metatask | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Metatask.
++++++++ Last updated : 08/16/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Metatask
+
+In this tutorial, you'll learn how to integrate Metatask with Azure Active Directory (Azure AD). When you integrate Metatask with Azure AD, you can:
+
+* Control in Azure AD who has access to Metatask.
+* Enable your users to be automatically signed-in to Metatask with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Metatask single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Metatask supports **SP and IDP** initiated SSO.
+* Metatask supports **Just In Time** user provisioning.
+
+## Add Metatask from the gallery
+
+To configure the integration of Metatask into Azure AD, you need to add Metatask from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Metatask** in the search box.
+1. Select **Metatask** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Metatask
+
+Configure and test Azure AD SSO with Metatask using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Metatask.
+
+To configure and test Azure AD SSO with Metatask, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Metatask SSO](#configure-metatask-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Metatask test user](#create-metatask-test-user)** - to have a counterpart of B.Simon in Metatask that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Metatask** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<DOMAIN_NAME>.metatask.io/api/authenticate/saml`
+
+ b. In the **Relay State** textbox, type a value using the following pattern:
+ `<DOMAIN_NAME>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign-on URL and Relay State. Contact [Metatask Client support team](mailto:support@metatask.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Metatask application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Metatask application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements. A short sketch for inspecting these attributes in a captured SAML response follows these configuration steps.
+
+ | Name | Source Attribute |
+ | | |
+ | display_name | user.displayname |
+ | email | user.mail |
+ | family_name | user.surname |
+ | first_name | user.givenname |
+ | location | user.userprincipalname |
+ | username | user.objectid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
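As an illustrative aid rather than a tutorial step, the sketch below decodes a captured `SAMLResponse` value and lists the attribute names and values Azure AD sent, which makes it easy to verify that the mappings in the table above came through as expected. The base64 string is a placeholder you would capture from a browser trace of an HTTP-POST sign-in.

```python
# Hypothetical debugging aid: decode a captured SAMLResponse (base64, HTTP-POST
# binding) and list the attributes Azure AD included, so the claim mappings
# configured above can be verified.
import base64
import xml.etree.ElementTree as ET

SAML_RESPONSE_B64 = "<PASTE_SAMLRESPONSE_VALUE_HERE>"  # placeholder from a browser trace

xml_bytes = base64.b64decode(SAML_RESPONSE_B64)
root = ET.fromstring(xml_bytes)

ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
for attribute in root.findall(".//saml:Attribute", ns):
    values = [value.text for value in attribute.findall("saml:AttributeValue", ns)]
    print(attribute.get("Name"), "->", values)
```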
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Metatask.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Metatask**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Metatask SSO
+
+To configure single sign-on on **Metatask** side, you need to send the **App Federation Metadata Url** to [Metatask support team](mailto:support@metatask.io). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Metatask test user
+
+In this section, a user called Britta Simon is created in Metatask. Metatask supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Metatask, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Metatask Sign on URL where you can initiate the login flow.
+
+* Go to Metatask Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Metatask for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Metatask tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Metatask for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Metatask, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Nimbus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/nimbus-tutorial.md
Previously updated : 09/17/2020 Last updated : 08/17/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Nimbus supports **SP and IDP** initiated SSO
-* Nimbus supports **Just In Time** user provisioning
+* Nimbus supports **SP and IDP** initiated SSO.
+* Nimbus supports **Just In Time** user provisioning.
-## Adding Nimbus from the gallery
+## Add Nimbus from the gallery
To configure the integration of Nimbus into Azure AD, you need to add Nimbus from the gallery to your list of managed SaaS apps.
To configure the integration of Nimbus into Azure AD, you need to add Nimbus fro
1. In the **Add from the gallery** section, type **Nimbus** in the search box. 1. Select **Nimbus** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Nimbus Configure and test Azure AD SSO with Nimbus using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Nimbus.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Nimbus** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<CUSTOMER_NAME>.time2work.com/Security/ADFS.aspx`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you test your Azure AD single sign-on configuration with follow
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Nimbus for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Nimbus for which you set up the SSO.
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Nimbus tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Nimbus for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Nimbus tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Nimbus for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Scclifecycle Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/scclifecycle-tutorial.md
Previously updated : 03/22/2019 Last updated : 08/17/2021 # Tutorial: Azure Active Directory integration with SCC LifeCycle
-In this tutorial, you learn how to integrate SCC LifeCycle with Azure Active Directory (Azure AD).
-Integrating SCC LifeCycle with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate SCC LifeCycle with Azure Active Directory (Azure AD). When you integrate SCC LifeCycle with Azure AD, you can:
-* You can control in Azure AD who has access to SCC LifeCycle.
-* You can enable your users to be automatically signed-in to SCC LifeCycle (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to SCC LifeCycle.
+* Enable your users to be automatically signed-in to SCC LifeCycle with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with SCC LifeCycle, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* SCC LifeCycle single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* SCC LifeCycle single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* SCC LifeCycle supports **SP** initiated SSO
+* SCC LifeCycle supports **SP** initiated SSO.
-## Adding SCC LifeCycle from the gallery
+## Add SCC LifeCycle from the gallery
To configure the integration of SCC LifeCycle into Azure AD, you need to add SCC LifeCycle from the gallery to your list of managed SaaS apps.
-**To add SCC LifeCycle from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **SCC LifeCycle**, select **SCC LifeCycle** from result panel then click **Add** button to add the application.
-
- ![SCC LifeCycle in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with SCC LifeCycle based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in SCC LifeCycle needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SCC LifeCycle** in the search box.
+1. Select **SCC LifeCycle** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with SCC LifeCycle, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for SCC LifeCycle
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure SCC LifeCycle Single Sign-On](#configure-scc-lifecycle-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create SCC LifeCycle test user](#create-scc-lifecycle-test-user)** - to have a counterpart of Britta Simon in SCC LifeCycle that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with SCC LifeCycle using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SCC LifeCycle.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with SCC LifeCycle, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SCC LifeCycle SSO](#configure-scc-lifecycle-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SCC LifeCycle test user](#create-scc-lifecycle-test-user)** - to have a counterpart of B.Simon in SCC LifeCycle that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with SCC LifeCycle, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **SCC LifeCycle** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **SCC LifeCycle** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![SCC LifeCycle Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |-|
+ | `https://bs1.scc.com/<entity>` |
+ | `https://lifecycle.scc.com/<entity>` |
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<sub-domain>.scc.com/ic7/welcome/customer/PICTtest.aspx`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
-
- - `https://bs1.scc.com/<entity>`
- - `https://lifecycle.scc.com/<entity>`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [SCC LifeCycle Client support team](mailto:lifecycle.support@scc.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [SCC LifeCycle Client support team](mailto:lifecycle.support@scc.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with SCC LifeCycle, perform the following s
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure SCC LifeCycle Single Sign-On
-
-To configure single sign-on on **SCC LifeCycle** side, you need to send the downloaded **Metadata XML** and appropriate copied URLs from Azure portal to [SCC LifeCycle support team](mailto:lifecycle.support@scc.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
- > [!NOTE]
- > Single sign-on has to be enabled by the [SCC LifeCycle support team](mailto:lifecycle.support@scc.com).
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to SCC LifeCycle.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **SCC LifeCycle**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **SCC LifeCycle**.
-
- ![The SCC LifeCycle link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SCC LifeCycle.
- ![The "Users and groups" link](common/users-groups-blade.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SCC LifeCycle**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure SCC LifeCycle SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+To configure single sign-on on the **SCC LifeCycle** side, you need to send the downloaded **Metadata XML** and appropriate copied URLs from the Azure portal to the [SCC LifeCycle support team](mailto:lifecycle.support@scc.com). They use them to configure the SAML SSO connection properly on both sides.
-7. In the **Add Assignment** dialog click the **Assign** button.
+ > [!NOTE]
+ > Single sign-on has to be enabled by the [SCC LifeCycle support team](mailto:lifecycle.support@scc.com).
### Create SCC LifeCycle test user
When an assigned user tries to log into SCC LifeCycle, an SCC LifeCycle account
> [!NOTE] > The Azure Active Directory account holder receives an email and follows a link to confirm their account before it becomes active.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SCC LifeCycle tile in the Access Panel, you should be automatically signed in to the SCC LifeCycle for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal. This will redirect to SCC LifeCycle Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to SCC LifeCycle Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the SCC LifeCycle tile in the My Apps, this will redirect to SCC LifeCycle Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure SCC LifeCycle, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory X Point Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/x-point-cloud-tutorial.md
Previously updated : 07/15/2021 Last updated : 08/23/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.atledcloud.jp`
- b. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL (Assertion Consumer Service URL)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.atledcloud.jp/xpoint/saml/acs`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.atledcloud.jp/xpoint` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [X-point Cloud Client support team](mailto:x-point@atled.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [X-point Cloud Client support team](mailto:x-point@atled.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificateraw.png)
1. On the **Set up X-point Cloud** section, copy the appropriate URL(s) based on your requirement.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure X-point Cloud SSO
-To configure single sign-on on **X-point Cloud** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [X-point Cloud support team](mailto:x-point@atled.jp). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **X-point Cloud** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from the Azure portal to the [X-point Cloud support team](mailto:x-point@atled.jp). They use them to configure the SAML SSO connection properly on both sides.
### Create X-point Cloud test user
active-directory Zivver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zivver-tutorial.md
Previously updated : 04/22/2019 Last updated : 08/17/2021 # Tutorial: Azure Active Directory integration with ZIVVER
-In this tutorial, you learn how to integrate ZIVVER with Azure Active Directory (Azure AD).
-Integrating ZIVVER with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ZIVVER with Azure Active Directory (Azure AD). When you integrate ZIVVER with Azure AD, you can:
-* You can control in Azure AD who has access to ZIVVER.
-* You can enable your users to be automatically signed-in to ZIVVER (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ZIVVER.
+* Enable your users to be automatically signed-in to ZIVVER with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with ZIVVER, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* ZIVVER single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* ZIVVER single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ZIVVER supports **IDP** initiated SSO
-
-## Adding ZIVVER from the gallery
-
-To configure the integration of ZIVVER into Azure AD, you need to add ZIVVER from the gallery to your list of managed SaaS apps.
-
-**To add ZIVVER from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **ZIVVER**, select **ZIVVER** from result panel then click **Add** button to add the application.
+* ZIVVER supports **IDP** initiated SSO.
- ![ZIVVER in the results list](common/search-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Configure and test Azure AD single sign-on
+## Add ZIVVER from the gallery
-In this section, you configure and test Azure AD single sign-on with ZIVVER based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ZIVVER needs to be established.
-
-To configure and test Azure AD single sign-on with ZIVVER, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ZIVVER Single Sign-On](#configure-zivver-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ZIVVER test user](#create-zivver-test-user)** - to have a counterpart of Britta Simon in ZIVVER that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of ZIVVER into Azure AD, you need to add ZIVVER from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ZIVVER** in the search box.
+1. Select **ZIVVER** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with ZIVVER, perform the following steps:
+## Configure and test Azure AD SSO for ZIVVER
-1. In the [Azure portal](https://portal.azure.com/), on the **ZIVVER** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with ZIVVER using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ZIVVER.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with ZIVVER, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ZIVVER SSO](#configure-zivver-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ZIVVER test user](#create-zivver-test-user)** - to have a counterpart of B.Simon in ZIVVER that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **ZIVVER** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![ZIVVER Domain and URLs single sign-on information](common/idp-identifier.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
- In the **Identifier** text box, type a URL:
+ In the **Identifier** text box, type the URL:
`https://app.zivver.com/SAML/Zivver` 5. The ZIVVER application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped with **user.userprincipalname**. The ZIVVER application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the attribute mapping.
To configure Azure AD single sign-on with ZIVVER, perform the following steps:
6. In addition to the above, the ZIVVER application expects a few more attributes to be passed back in the SAML response. In the **User Claims** section of the **User Attributes** dialog, perform the following steps to add the SAML token attribute shown in the table below:
- | Name | Namespace | Source Attribute|
- | | |
+ | Name | Namespace | Source Attribute |
+ | -- | -- | -- |
| ZivverAccountKey | https:\//zivver.com/SAML/Attributes | user.objectid | >[!NOTE]
To configure Azure AD single sign-on with ZIVVER, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure AD Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- c. Logout URL
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ZIVVER.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ZIVVER**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure ZIVVER Single Sign-On
+## Configure ZIVVER SSO
1. In a different web browser window, sign in to your ZIVVER company [site](https://app.zivver.com/login) as an administrator.
To configure Azure AD single sign-on with ZIVVER, perform the following steps:
7. Click **SAVE**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ZIVVER.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ZIVVER**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **ZIVVER**.
-
- ![The ZIVVER link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create ZIVVER test user In this section, you create a user called Britta Simon in ZIVVER. Work with [ZIVVER support team](https://support.zivver.com/) to add the users in the ZIVVER platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the ZIVVER tile in the Access Panel, you should be automatically signed in to the ZIVVER for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the ZIVVER for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ZIVVER tile in the My Apps, you should be automatically signed in to the ZIVVER for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ZIVVER, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/availability-zones.md
Kubernetes is aware of Azure availability zones since version 1.12. You can depl
When *creating* an AKS cluster, if you explicitly define a [null value in a template][arm-template-null] with syntax such as `"availabilityZones": null`, the Resource Manager template treats the property as if it doesn't exist, which means your cluster won't have availability zones enabled. Also, if you create a cluster with a Resource Manager template that omits the availability zones property, availability zones are disabled.
-You can't update settings for availability zones on an existing cluster, so the behavior is different when updating am AKS cluster with Resource Manager templates. If you explicitly set a null value in your template for availability zones and *update* your cluster, there are no changes made to your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and **fails**.
+You can't update settings for availability zones on an existing cluster, so the behavior is different when updating an AKS cluster with Resource Manager templates. If you explicitly set a null value in your template for availability zones and *update* your cluster, there are no changes made to your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and **fails**.
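+
+Because availability zones can only be configured when a cluster or node pool is created, a minimal CLI sketch of creating a zone-enabled cluster is shown below. The resource names and zone numbers are placeholders.
+
+```azurecli
+# Sketch: create a new AKS cluster with availability zones (placeholder values)
+az aks create \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --node-count 3 \
+  --zones 1 2 3 \
+  --generate-ssh-keys
+```
+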
## Overview of availability zones for AKS clusters
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-portal.md
For existing clusters, you may need to enable the Kubernetes resource view. To e
> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `--api-server-authorized-ip-ranges` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with below command or by searching "what is my IP address" in an internet browser. ```bash # Retrieve your IP address
-CURRENT_IP=$(dig @resolver1.opendns.com ANY myip.opendns.com +short)
+CURRENT_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
# Add to AKS approved list az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32
This article showed you how to access Kubernetes resources for your AKS cluster.
[aks-managed-aad]: managed-aad.md [cli-aad-upgrade]: managed-aad.md#upgrading-to-aks-managed-azure-ad-integration [enable-monitor]: ../azure-monitor/containers/container-insights-enable-existing-clusters.md
-[portal-cluster]: kubernetes-walkthrough-portal.md
+[portal-cluster]: kubernetes-walkthrough-portal.md
aks Kubernetes Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough.md
Output for successfully created resource group:
## Enable cluster monitoring
-1. Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
+Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
- ```azurecli
- az provider show -n Microsoft.OperationsManagement -o table
- az provider show -n Microsoft.OperationalInsights -o table
- ```
+```azurecli
+az provider show -n Microsoft.OperationsManagement -o table
+az provider show -n Microsoft.OperationalInsights -o table
+```
- If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
+If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
- ```azurecli
- az provider register --namespace Microsoft.OperationsManagement
- az provider register --namespace Microsoft.OperationalInsights
- ```
-
-2. Enable [Azure Monitor for containers][azure-monitor-containers] using the *--enable-addons monitoring* parameter.
+```azurecli
+az provider register --namespace Microsoft.OperationsManagement
+az provider register --namespace Microsoft.OperationalInsights
+```
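+
+Registration can take a few minutes. If you script this step, a sketch like the following can poll until both providers report `Registered` (the polling interval is arbitrary):
+
+```bash
+# Sketch: wait until both resource providers are registered before continuing
+for ns in Microsoft.OperationsManagement Microsoft.OperationalInsights; do
+  until [ "$(az provider show -n $ns --query registrationState -o tsv)" = "Registered" ]; do
+    echo "Waiting for $ns to register..."
+    sleep 10
+  done
+done
+```
+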
## Create AKS cluster
-Create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a cluster named *myAKSCluster* with one node:
+Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Azure Monitor for containers][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
```azurecli-interactive az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/configure-custom-domain.md
editor: ''
Previously updated : 01/13/2020 Last updated : 08/24/2021 # Configure a custom domain name for your Azure API Management instance
-When you create an Azure API Management service instance, Azure assigns it a subdomain of `azure-api.net` (for example, `apim-service-name.azure-api.net`). However, you can expose your API Management endpoints using your own custom domain name, such as **contoso.com**. This tutorial shows you how to map an existing custom DNS name to endpoints exposed by an API Management instance.
+When you create an Azure API Management service instance, Azure assigns it a `azure-api.net` subdomain (for example, `apim-service-name.azure-api.net`). You can also expose your API Management endpoints using your own custom domain name, such as **`contoso.com`**. This tutorial shows you how to map an existing custom DNS name to endpoints exposed by an API Management instance.
> [!IMPORTANT]
-> API Management accepts only requests with [host header](https://tools.ietf.org/html/rfc2616#section-14.23) values matching the default domain name or any of the configured custom domain names.
+> API Management accepts only requests with [host header](https://tools.ietf.org/html/rfc2616#section-14.23) values matching:
+>
+>* The default domain name
+>* Any of the configured custom domain names
> [!WARNING]
-> Customers who wish to use certificate pinning to improve the security of their applications must use a custom domain name and certificate which they manage, not the default certificate. Customers that pin the default certificate instead will be taking a hard dependency on the properties of the certificate they don't control, which is not a recommended practice.
+> If you wish to improve the security of your applications with certificate pinning, you must use a custom domain name and certificate that you manage, not the default certificate. Pinning the default certificate takes a hard dependency on the properties of the certificate you don't manage, which we do not recommend.
## Prerequisites
-To perform the steps described in this article, you must have:
--- An active Azure subscription.-
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-
+- An active Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
- An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). - A custom domain name that is owned by you or your organization. This topic does not provide instructions on how to procure a custom domain name.-- A CNAME record hosted on a DNS server that maps the custom domain name to the default domain name of your API Management instance. This topic does not provide instructions on how to host a CNAME record.-- You must have a valid certificate with a public and private key (.PFX). Subject or subject alternative name (SAN) has to match the domain name (this enables API Management instance to securely expose URLs over TLS).
+- A [CNAME-record hosted on a DNS server](#dns-configuration) that maps the custom domain name to the default domain name of your API Management instance. This topic does not provide instructions on how to host a CNAME-record.
+- A valid certificate with a public and private key (.PFX). Subject or subject alternative name (SAN) has to match the domain name (this enables API Management instance to securely expose URLs over TLS).
## Use the Azure portal to set a custom domain name
To perform the steps described in this article, you must have:
There are a number of endpoints to which you can assign a custom domain name. Currently, the following endpoints are available:
- - **Gateway** (default is: `<apim-service-name>.azure-api.net`),
- - **Developer portal (legacy)** (default is: `<apim-service-name>.portal.azure-api.net`),
- - **Developer portal** (default is: `<apim-service-name>.developer.azure-api.net`).
- - **Management** (default is: `<apim-service-name>.management.azure-api.net`),
- - **SCM** (default is: `<apim-service-name>.scm.azure-api.net`),
+ | Endpoint | Default |
+ | -- | -- |
+ | **Gateway** | Default is: `<apim-service-name>.azure-api.net`. Gateway is the only endpoint available for configuration in the Consumption tier. |
+ | **Developer portal (legacy)** | Default is: `<apim-service-name>.portal.azure-api.net` |
+ | **Developer portal** | Default is: `<apim-service-name>.developer.azure-api.net` |
+ | **Management** | Default is: `<apim-service-name>.management.azure-api.net` |
+ | **SCM** | Default is: `<apim-service-name>.scm.azure-api.net` |
> [!NOTE]
- > Only the **Gateway** endpoint is available for configuration in the Consumption tier.
- > You can update all of the endpoints or some of them. Commonly, customers update **Gateway** (this URL is used to call the API exposed through API Management) and **Portal** (the developer portal URL).
- > **Management** and **SCM** endpoints are used internally by the API Management instance owners only and thus are less frequently assigned a custom domain name.
+ > You can update any of the endpoints. Typically, customers update **Gateway** (this URL is used to call the API exposed through API Management) and **Portal** (the developer portal URL).
+ >
+ > Only API Management instance owners can use **Management** and **SCM** endpoints internally. These endpoints are less frequently assigned a custom domain name.
+ >
> The **Premium** and **Developer** tiers support setting multiple host names for the **Gateway** endpoint.
-1. Select the endpoint that you want to update.
-1. In the window on the right, click **Custom**.
-
- - In the **Custom domain name**, specify the name you want to use. For example, `api.contoso.com`.
- - In the **Certificate**, select a certificate from Key Vault. You can also upload a valid .PFX file and provide its **Password**, if the certificate is protected with a password.
+1. Select **+Add**, or select an existing endpoint that you want to update.
+1. In the window on the right, select the **Type** of endpoint for the custom domain.
+1. In the **Hostname** field, specify the name you want to use. For example, `api.contoso.com`.
+1. Under **Certificate**, select either **Key Vault** or **Custom**.
+ - **Key Vault**
+ - Click **Select**.
+ - Select the **Subscription** from the dropdown list.
+ - Select the **Key vault** from the dropdown list.
+ - Once the certificates have loaded, select the **Certificate** from the dropdown list.
+ - Click **Select**.
+ - **Custom**
+ - Select the **Certificate file** field to select and upload a certificate.
+ - Upload a valid .PFX file and provide its **Password**, if the certificate is protected with a password.
+1. When configuring a Gateway endpoint, select or deselect [other options as necessary](#clients-calling-with-server-name-indication-sni-header), like **Negotiate client certificate** or **Default SSL binding**.
+1. Select **Update**.
> [!NOTE]
- > Wildcard domain names, e.g. `*.contoso.com` are supported in all tiers except the Consumption tier.
+ > Wildcard domain names, like `*.contoso.com`, are supported in all tiers except the Consumption tier.
> [!TIP]
- > We recommend using [Azure Key Vault for managing certificates](../key-vault/certificates/about-certificates.md) and setting them to autorenew.
+ > We recommend using [Azure Key Vault for managing certificates](../key-vault/certificates/about-certificates.md) and setting them to `autorenew`.
+ >
> If you use Azure Key Vault to manage the custom domain TLS/SSL certificate, make sure the certificate is inserted into Key Vault [as a _certificate_](/rest/api/keyvault/createcertificate/createcertificate), not a _secret_. >
- > To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate. When using Azure portal all the necessary configuration steps will be completed automatically. When using command line tools or management API, these permissions must be granted manually. This is done in two steps. First, use Managed identities page on your API Management instance to make sure that Managed Identity is enabled and make a note of the principal id shown on that page. Second, give permission list and get secrets permissions to this principal id on the Azure Key Vault containing the certificate.
+ > To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate.
>
- > If the certificate is set to autorenew, API Management will pick up the latest version automatically without any downtime to the service (if your API Management tier has SLA - i. e. in all tiers except the Developer tier).
+ >* When using Azure portal, all the necessary configuration steps will be completed automatically.
+ >* When using command line tools or management API, these permissions must be granted manually, in two steps:
+ > * Using the **Managed identities** page on your API Management instance, ensure that Managed Identity is enabled and note the principal id on that page.
+ > * Grant the list and get secrets permissions to this principal ID on the Azure Key Vault containing the certificate (a CLI sketch follows these steps).
+ >
+ > If the certificate is set to `autorenew` and your API Management tier has an SLA (that is, in all tiers except the Developer tier), API Management will pick up the latest version automatically, without any downtime to the service.
1. Click Apply. > [!NOTE]
- > The process of assigning the certificate may take 15 minutes or more depending on size of deployment. Developer SKU has downtime, Basic and higher SKUs do not have downtime.
+ > The process of assigning the certificate may take 15 minutes or more depending on size of deployment. Developer SKU has downtime, while Basic and higher SKUs do not.
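+
+If you grant the Key Vault permissions from the command line rather than the portal, a minimal sketch might look like the following. The API Management and Key Vault names are placeholders, and it assumes a system-assigned managed identity is already enabled on the instance.
+
+```azurecli
+# Sketch: grant the API Management managed identity get/list access to Key Vault secrets
+principalId=$(az apim show --name contoso-apim --resource-group myResourceGroup \
+  --query identity.principalId -o tsv)
+
+az keyvault set-policy --name contoso-kv \
+  --object-id "$principalId" \
+  --secret-permissions get list
+```
+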
[!INCLUDE [api-management-custom-domain](../../includes/api-management-custom-domain.md)] ## DNS configuration
-When configuring DNS for your custom domain name, you have two options:
+When configuring DNS for your custom domain name, you can either:
-- Configure a CNAME-record that points to the endpoint of your configured custom domain name.
+- Configure a CNAME-record that points to the endpoint of your configured custom domain name, or
- Configure an A-record that points to your API Management gateway IP address.
+While CNAME-records (or alias records) and A-records both allow you to associate a domain name with a specific server or service, they work differently.
+
+### CNAME or Alias record
+A CNAME-record maps a *specific* domain (such as `contoso.com` or www\.contoso.com) to a canonical domain name. Once created, the CNAME creates an alias for the domain. The CNAME entry will resolve to the IP address of your custom domain service automatically, so if the IP address changes, you do not have to take any action.
+
+> [!NOTE]
+> Some domain registrars only allow you to map subdomains when using a CNAME-record, such as www\.contoso.com, and not root names, such as contoso.com. For more information on CNAME-records, see the documentation provided by your registrar, [the Wikipedia entry on CNAME-record](https://en.wikipedia.org/wiki/CNAME_record), or the [IETF Domain Names - Implementation and Specification](https://tools.ietf.org/html/rfc1035) document.
+
+### A-record
+An A-record maps a domain, such as `contoso.com` or **www\.contoso.com**, *or a wildcard domain*, such as **\*.contoso.com**, to an IP address. Since an A-record is mapped to a static IP address, it cannot automatically resolve changes to the IP address. We recommend using the more stable CNAME-record instead of an A-record.
+ > [!NOTE]
-> Although the API Managment instance IP address is static, it may change in a few scenarios. Because of this it's recommended to use CNAME when configuring custom domain. Take that into consideration when choosing DNS configuration method. Read more in the [the IP documentation article](api-management-howto-ip-addresses.md#changes-to-the-ip-addresses) and the [API Management FAQ](api-management-faq.yml#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-back-end-services-).
+> Although the API Management instance IP address is static, it may change in a few scenarios. When choosing DNS configuration method, we recommend using a CNAME-record when configuring custom domain, as it is more stable than an A-record in case the IP changes. Read more in the [the IP documentation article](api-management-howto-ip-addresses.md#changes-to-the-ip-addresses) and the [API Management FAQ](./api-management-faq.yml#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-back-end-services-).
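+
+If you host your domain in Azure DNS, a minimal sketch of creating the recommended CNAME-record with the Azure CLI is shown below. The zone, record, and API Management host names are placeholders.
+
+```azurecli
+# Sketch: map api.contoso.com to the default API Management gateway host name
+az network dns record-set cname set-record \
+  --resource-group myResourceGroup \
+  --zone-name contoso.com \
+  --record-set-name api \
+  --cname apim-service-name.azure-api.net
+
+# Verify the record resolves (works with any DNS provider)
+dig +short api.contoso.com CNAME
+```
+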
## Next steps
-[Upgrade and scale your service](upgrade-and-scale.md)
+[Upgrade and scale your service](upgrade-and-scale.md)
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Deploying the API Management gateway on an Arc-enabled Kubernetes cluster expand
## Prerequisites
-* [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within [a supported Azure Arc region](../azure-arc/kubernetes/overview.md#supported-regions).
+* [Connect your Kubernetes cluster](../azure-arc/kubernetes/quickstart-connect-cluster.md) within [a supported Azure Arc region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
* Install the `k8s-extension` Azure CLI extension: ```azurecli
api-management Monetization Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/monetization-overview.md
+
+ Title: Monetization with Azure API Management
+description: Learn how to set up your monetization strategy for Azure API Management in six simple stages.
+ Last updated: 08/23/2021
+# Monetization with Azure API Management
+
+Modern web APIs underpin the digital economy. They provide a company's intellectual property (IP) to third parties and generate revenue by:
+
+- Packaging IP in the form of data, algorithms, or processes.
+- Allowing other parties to discover and consume useful IP in a consistent, frictionless manner.
+- Offering a mechanism for direct or indirect payment for this usage.
+
+A common theme across API success stories is a *healthy business model*. Value is created and exchanged between all parties, in a sustainable way.
+
+Start-ups, established organizations, and everything in between typically begin their digital transformation with the business model. APIs allow that business model to be realized, offering an easier and more cost-efficient way to market, adopt, consume, and scale the underlying IP.
+
+Organizations publishing their first API face a complex set of decisions. While the Azure API Management platform reduces risk and accelerates key parts of the work, organizations still need to configure and build their API around their unique technical and business model.
+
+## Developing a monetization strategy
+
+*Monetization* is the process of converting something into money - in this case, the value of your API. API interactions typically involve three distinct parties in the value chain: the API provider, the API consumer, and the consumer's end users.
+
+Categories of API monetization strategy include:
+
+| API monetization strategy | Description |
+| -- | -- |
+| **Free** | An API facilitates business-to-business integration, such as streamlining a supply chain. The API is not monetized, but delivers significant value by enabling business process efficiencies for both the API provider and API consumer. |
+| **Consumer pays** | API consumers pay based on the number of interactions they have with the API. We focus on this approach in this document. |
+| **Consumer gets paid** | For example, an API consumer uses the API to embed advertising in their website and receives a share of the generated revenue. |
+| **Indirect monetization** | API monetization is not driven by the number of interactions with the API, but through other sources of revenue facilitated by the API. |
+
+>[!NOTE]
+>The monetization strategy is set by the API provider and should be designed to meet the needs of the API consumer.
+
+Because so many factors influence the design, API monetization isn't a one-size-fits-all solution. A well-designed monetization strategy differentiates your API from your competitors' and maximizes the revenue you generate.
+
+The following steps explain how to implement a monetization strategy for your API.
+
+### Step 1 - Understand your customer
+
+1. Map out the stages in your API consumers' likely journey, from first discovery of your API to maximum scale.
+
+ For example, a set of customer stages could be:
+
+ | Customer stage | Description |
+ | -- | -- |
+ | **Investigation** | Enable the API Consumer to try out your API with zero cost and friction. |
+ | **Implementation** | Provide sufficient access to the API to support the development and testing work required to integrate with it. |
+ | **Preview** | Allow the customer to launch their offering and understand initial demand. |
+ | **Initial production usage** | Support early adoption of the API in production when usage levels aren't fully understood and a risk-averse approach may be necessary. |
+ | **Initial growth** | Enable the API Consumer to ramp up usage of the API in response to increased demand from end users. |
+ | **Scale** | Incentivize the API Consumer to commit to a higher volume of purchase once the API is consistently reaching high levels of usage each month. |
+ | **Global growth** | Reward the API users who are using the API at global scale by offering the optimal wholesale price. |
+
+1. Analyze the value that your API will be generating for the customer at each stage in their journey.
+1. Consider applying a value-based pricing strategy if the direct value of the API to the customer is well understood.
+1. Calculate the anticipated lifetime usage levels of the API for a customer and your expected number of customers over the lifetime of the API.
+
+### Step 2 - Quantify the costs
+
+Calculate the total cost of ownership for your API.
+
+| Cost | Description |
+| -- | -- |
+| **Cost of customer acquisition (COCA)** | The cost of marketing, sales, and onboarding. The most successful APIs tend to have a COCA that trends toward zero as adoption levels increase. Onboarding should be largely self-service, supported by good documentation and frictionless integration with payment systems. |
+| **Engineering costs** | The human resources required to build, test, operate, and maintain the API over its lifetime. This tends to be the most significant cost component. Where possible, exploit cloud PaaS and serverless technologies to minimize engineering costs. |
+| **Infrastructure costs** | The costs for the underlying platforms, compute, network, and storage required to support the API over its lifetime. Exploit cloud platforms to achieve an infrastructure cost model that scales in proportion to API usage levels. |
+
+### Step 3 - Conduct market research
+
+1. Research the market to identify competitors.
+1. Analyze competitors' monetization strategies.
+1. Understand the specific features (functional and non-functional) that they are offering with their API.
+
+### Step 4 - Design the revenue model
+
+Design a revenue model based on the outcome of the steps above. You can work across two dimensions:
+
+| Dimension | Description |
+| | -- |
+| **Quality of service** | Put constraints on the service level you are offering by setting a cap on API usage. Define a quota for the API calls that can be made over a period of time (for example, 50,000 calls per month) and then block calls once that quota is reached. <br> You can also set a rate limit, throttling the number of calls that can be made in a short period (for example, 100 calls per second). <br> Caps and rate limits are applied in conjunction, preventing users from consuming their monthly quota in a short intensive burst of API calls. |
+| **Price** | Define the unit price to be paid for each API call. |
+
+Maximize the lifetime value (LTV) you generate from each customer by designing a revenue model that supports your customer at each stage of the customer journey.
+
+1. Make it as easy as possible for your customers to scale and grow:
+ - Suggest customers move up to the next tier in the revenue model.
+ - For example, reward customers who purchase a higher volume of API calls with a lower unit price.
+1. Keep the revenue model as simple as possible:
+ - Balance the need to provide choice with the risk of overwhelming customers with an array of options.
+ - Keep down the number of dimensions used to differentiate across the revenue model tiers.
+1. Be transparent:
+ - Provide clear documentation about the different options.
+ - Give your customers tools for choosing the revenue model that best suits their needs.
+
+Identify the range of required pricing models. A *pricing model* describes a specific set of rules for the API provider to turn consumption by the API consumer into revenue.
+
+For example, to support the [customer stages above](#step-1understand-your-customer), we would need six types of subscription:
+
+| Subscription type | Description |
+| -- | -- |
+| `Free` | Enables the API consumer to trial the API with no obligation or cost, to determine whether it fulfills a use case. Removes all barriers to entry. |
+| `Freemium` | Allows the API consumer to use the API for free, but to transition into a paid service as demand increases. |
+| `Metered` | The API consumer can make as many calls as they want per month, and will pay a fixed amount per call. |
+| `Tier` | The API consumer pays for a set number of calls per month and cannot exceed this limit. If they regularly reach the limit, they can upgrade to the next tier. |
+| `Tier + Overage` | The API consumer pays for a set number of calls per month. If they exceed this limit, they pay a set amount per extra call. |
+| `Unit` | The API consumer pays for a set number of calls per month. If they exceed this limit, they pay for another unit of calls. |
+
+Your revenue model will define the set of API products. Each API product implements a specific pricing model to target a specific stage in the API consumer lifecycle.
+
+While pricing models generally shouldn't change, you may need to adapt the configuration and application of pricing models for your revenue model. For example, you may want to adjust your prices to match a competitor.
+
+Building on the examples above, the pricing models could be applied to create an overall revenue model as follows:
+
+| Customer lifecycle stage | Pricing model | Pricing model configuration | Quality of Service |
+| | - | | -- |
+| Investigation | Free | Not implemented. | Quota set to limit the Consumer to 100 calls/month. |
+| Implementation | Freemium | Graduated tiers: <ul> <li>First tier flat amount is $0.</li> <li>Next tiers' per-unit amount set to $0.20/100 calls.</li></ul> | No quotas set. Consumer can continue to make and pay for calls with a rate limit of 100 calls/minute. |
+| Preview | Metered | Price set to charge consumer $0.15/100 calls. | No quotas set. Consumer can continue to make and pay for calls at a rate limit of 200 calls/minute. |
+| Initial production usage | Tier | Price set to charge consumer $14.95/month. | Quota set to limit the consumer to 50,000 calls/month with a rate limit of 100 calls/minute. |
+| Initial growth | Tier + Overage | Graduated tiers: <ul><li>First tier flat amount is $89.95/month for the first 100,000 calls.</li><li>Next tiers' per-unit amount set to $0.10/100 calls.</li></ul> | No quotas set. Consumer can continue to make and pay for extra calls at a rate limit of 100 calls/minute. |
+| Scale | Tier + Overage | Graduated tiers:<ul><li>First tier flat amount is $449.95/month for the first 500,000 calls.</li><li>Next tiers' per-unit amount set to $0.06/100 calls.</li></ul> | No quotas set. Consumer can continue to make and pay for extra calls at a rate limit of 1,200 calls/minute. |
+| Global growth | Unit | Graduated tiers, where every tier's flat amount is $749.95/month per 1,500,000 calls. | No quotas set. Consumer can continue to make and pay for extra calls at a rate limit of 3,500 calls/minute. |
+
+**Two examples of how to interpret the revenue model based on the table above:**
+
+* **Tier pricing model**
+ Applied to support API consumers during the **Initial production phase** of the lifecycle. With the Tier pricing model configuration, the consumer:
+ * Pays $14.95/month.
+ * Can make up to 50,000 calls/month.
+ * Is rate limited to 100 calls/minute.
+
+* **Scale phase of the lifecycle**
+ Implemented by applying the **Tier + Overage** pricing model, where consumers:
+ * Pay $449.95/month for the first 500,000 calls.
+ * Are charged an extra $0.06/100 calls beyond the first 500,000.
+ * Are rate limited to 1,200 calls/minute.
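
As a worked example of reading the table: a consumer in the **Initial growth** stage who makes 250,000 calls in a month pays the $89.95 flat amount covering the first 100,000 calls, plus $0.10 per 100 calls for the remaining 150,000 calls (1,500 × $0.10 = $150), for a total of $239.95 that month.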
+
+### Step 5 - Calibrate
+
+Calibrate the pricing across the revenue model to:
+
+- Set the pricing to prevent overpricing or underpricing your API, based on the market research in step 3 above.
+- Avoid any points in the revenue model that appear unfair or encourage customers to work around the model to achieve more favorable pricing.
+- Ensure the revenue model is geared to generate a total lifetime value (TLV) sufficient to cover the total cost of ownership plus margin.
+- Verify the quality of your service offerings in each revenue model tier can be supported by your solution.
+ - For example, if you are offering to support 3,500 calls/minute, make sure your end-to-end solution can scale to support that throughput level.
+
+### Step 6 - Release and monitor
+
+Choose an appropriate solution to collect payment for usage of your APIs. Providers tend to fall into two groups:
+
+- **Payment platforms, like [Stripe](https://stripe.com/)**
+
+ Payment platforms calculate the payment based on raw API usage metrics by applying the specific revenue model that the customer has chosen. Configure the payment platform to reflect your monetization strategy.
+- **Payment providers, like [Adyen](https://www.adyen.com/)**
+
+ Payment providers are only concerned with facilitating the payment transaction. You will need to apply your monetization strategy (for example, translating API usage metrics into a payment amount) before calling this service.
+
+Use the built-in capabilities of Azure API Management to accelerate and de-risk the implementation. For more information about the specific features in API Management, see [how API Management supports monetization](monetization-support.md).
+
+Implement a solution that builds flexibility into how you codify your monetization strategy in the underlying systems using the same approach as the sample project. With flexible coding, you can respond dynamically and minimize the risk and cost of making changes.
+
+Follow the [monetization GitHub repo documentation](https://github.com/microsoft/azure-api-management-monetization) to implement the sample project in your own Azure subscription.
+
+Regularly monitor how your API is being consumed to enable you to make evidence-based decisions. For example, if evidence shows you are churning customers, repeat steps 1 to 5 above to uncover and address the source.
+
+## Ongoing evolution
+
+Review your monetization strategy regularly by revisiting and re-evaluating all of the steps above. You may need to evolve your monetization strategy over time as you learn more about your customers, what it costs to provide the API, and how you respond to shifting competition in the market.
+
+Remember that the monetization strategy is only one facet of a successful API implementation. Other facets include:
+* The developer experience
+* The quality of your documentation
+* The legal terms
+* Your ability to scale the API to meet the committed service levels
+
+## Next steps
+* [How API Management supports monetization](monetization-support.md).
+* Deploy a demo Adyen or Stripe integration via the associated [Git repo](https://github.com/microsoft/azure-api-management-monetization).
api-management Monetization Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/monetization-support.md
+
+ Title: Azure API Management support for monetization
+description: Learn how Azure API Management supports monetization strategies for your API products.
+ Last updated: 08/23/2021
+# How API Management supports monetization
+
+With the [Azure API Management](./api-management-key-concepts.md) service platform, you can:
+* Publish APIs, to which your consumers subscribe.
+* De-risk implementation.
+* Accelerate project timescales.
+* Scale your APIs with confidence.
+
+In this document, we focus on API Management features that enable the implementation of your monetization strategy, like providing a frictionless experience to:
+* Discover your public APIs.
+* Enter payment details.
+* Activate your subscription.
+* Consume the API.
+* Monitor usage.
+* Automatically pay for usage of the API.
+
+The diagram below introduces these key API Management features:
+
+## API discovery
+
+Launch your API and onboard API consumers using API Management's built-in developer portal. Emphasize good quality development content for the developer portal, enabling API consumers to explore and use your APIs seamlessly. Test the content and information provided for accessibility, thoroughness, and usability.
+
+For details about how to add content and control the branding of the developer portal, see the [overview of the developer portal](./api-management-howto-developer-portal.md).
+
+## API packaging
+
+API Management manages how your APIs are packaged and presented using the concept of *products* and *policies*.
+
+### Products
+
+APIs are published [via products](./api-management-howto-add-products.md). Products allow you to define:
+* Which APIs a subscriber can access.
+* Specific throttling [policies](./api-management-howto-policies.md), like limiting a specific subscription to a quota of calls per month.
+
+When an API consumer subscribes to a product, they receive an API key with which they make calls. Initially, the subscription is set to a `submitted` state. Activate the subscription to allow subscribers to use the APIs.
+
+Configure API Management products to package your underlying API so that it mirrors your revenue model, with a one-to-one relationship between each tier in your revenue model and a corresponding API Management product.
+
+Example projects use API Management products as the top-level means of codifying the monetization strategy. The API Management products mirror the revenue model tiers and index the specific pricing model for each tier. This setup provides a flexible, configuration-driven approach to preparing the monetization strategy.
+
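For example, a minimal Azure CLI sketch of creating a product for one revenue-model tier; the resource group, service name, product details, and API ID below are illustrative placeholders:

```azurecli
# Create a product that represents one tier of the revenue model.
az apim product create \
  --resource-group myResourceGroup \
  --service-name myApimService \
  --product-id tier \
  --product-name "Tier" \
  --description "50,000 calls/month for a fixed monthly fee" \
  --subscription-required true \
  --approval-required false \
  --state published

# Add an API to the product so subscribers can call it.
az apim product api add \
  --resource-group myResourceGroup \
  --service-name myApimService \
  --product-id tier \
  --api-id my-api
```
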
+### Policies
+
+Apply API Management policies to control the quality of service for each product. Example projects use two specific policy features to control quality of service, in line with the revenue model:
+
+| Policy feature | Description |
+| -- | -- |
+| **Quota** | Defines the total number of calls the user can make to the API over a specified time period. For example, "100 calls per month". Once the user reaches the quota, the calls to the API will fail and the caller will receive a `403 Forbidden` response status code. |
+| **Rate limit** | Defines the number of calls over a sliding time window that can be made to the API. For example, "200 calls per minute". Designed to prevent spikes in API usage beyond the paid quality of service with the chosen product. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code. |
+
+For more details about policies, see the [Policies in Azure API Management](./api-management-howto-policies.md) documentation.
+
+## API consumption
+
+Grant access for API consumers to your APIs via products using API subscriptions.
+
+1. API consumers establish API subscriptions when signing up for a specific API Management product.
+1. Integrate the subscription process with the payment provider using API Management delegation.
+1. Once they successfully provide payment details, users gain access to the API with a generated, unique security key for the subscription.
+
+For more information about subscriptions, see the [Subscriptions in Azure API Management](./api-management-subscriptions.md) documentation.
+
+## API usage monitoring
+
+Gain insights about your API usage and performance using API Management's built-in analytics. These analytics provide reports by:
+* API
+* Geography
+* API operations
+* Product
+* Request
+* Subscription
+* Time
+* User
+
+Review the analytics reports regularly to understand how your monetization strategy is being adopted by API consumers.
+
+For more information, see [Get API analytics in Azure API Management](./howto-use-analytics.md).
+
+## Security
+
+Control the access level for each user to each product using API Management's products, API policies, and subscriptions. Prevent misuse and abuse by granting subscription-level API access only after the user has successfully authenticated with the payment provider, even if the specific API product is free.
+
+## Integration
+
+Create a seamless monetization experience through both front-end and back-end integration between API Management and your chosen payment provider. Use API Management delegation for front-end integration and the REST API for back-end integration.
+
+### Delegation
+
+In the example projects, you can use [API Management delegation](./api-management-howto-setup-delegation.md) to make custom integrations with the third-party payment providers. The demo uses delegation for both the sign-up/sign-in and product subscription experiences.
+
+#### Sign-up/Sign-in workflow
+
+1. Developer clicks on the sign-in or sign-up link at the API Management developer portal.
+1. Browser redirects to the delegation endpoint (configured to a page in the custom billing portal app).
+1. Custom billing portal app presents a sign-in/sign-up UI.
+1. Upon successful sign-in/sign-up, user is authenticated and redirected back to the starting API Management developer portal page.
+
+#### Product subscription workflow
+
+1. Developer selects a product in the API Management developer portal and clicks on the **Subscribe** button.
+1. Browser redirects to the delegation endpoint (configured to a page in the custom billing portal app).
+1. Custom billing portal app:
+ * Presents a UI configured based on the payment provider (Stripe or Adyen).
+ * Takes user through the relevant checkout process.
+1. The user is redirected back to the starting API Management product page.
+ * The product will be active and the API keys will be available.
+
+### REST API
+
+Use the REST API for API Management to automate the operation of your monetization strategy.
+
+The sample projects use the API to programmatically:
+
+- Retrieve API Management products and policies to enable synchronized configuration of similar concepts in payment providers, such as Stripe.
+- Poll API Management regularly to retrieve API usage metrics for each subscription and drive the billing process.
+
+For more information, see the [Azure API Management REST API](https://docs.microsoft.com/rest/api/apimanagement/#rest-operation-groups) overview.
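
As a rough sketch, the kinds of calls the sample makes could look like the following `az rest` requests; the subscription ID, resource group, service name, and API version are placeholders:

```azurecli
# List API Management products, to mirror their configuration in the payment provider.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/products?api-version=2021-01-01-preview"

# Retrieve usage aggregated by subscription for a billing period.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/reports/bySubscription?api-version=2021-01-01-preview&\$filter=timestamp%20ge%20datetime'2021-08-01T00:00:00'%20and%20timestamp%20le%20datetime'2021-09-01T00:00:00'"
```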
+
+## DevOps
+
+Version-control and automate the deployment of changes to API Management using Azure Resource Manager, including the configuration of features that implement your monetization strategy, like:
+* Products
+* Policies
+* The developer portal
+
+In example projects, the Azure Resource Manager scripts are augmented by a JSON file, which defines each API Management product's pricing model. With this augmentation, you can synchronize the configuration between API Management and the chosen payment provider. The entire solution is managed under a single source control repository, to:
+* Coordinate all changes associated with the ongoing monetization strategy evolution as a single release.
+* Carry out the changes, following governance and auditing requirements.
+
+## Initialization and deployment
+
+API Management can be deployed either through:
+* The Azure portal UI, or
+* An "infrastructure as code" approach using [Azure Resource Manager templates](https://azure.microsoft.com/services/arm-templates).
+
+## Next steps
+
+* [Learn more about API Management monetization strategies](monetization-overview.md).
+* Deploy a demo Adyen or Stripe integration via the associated [Git repo](https://github.com/microsoft/azure-api-management-monetization).
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-bindings.md
In App Service, [TLS termination](https://wikipedia.org/wiki/TLS_termination_pro
Language specific configuration guides, such as the [Linux Node.js configuration](configure-language-nodejs.md#detect-https-session) guide, shows you how to detect an HTTPS session in your application code.
+## Renew certificate binding
+
+> [!NOTE]
+> To renew an [App Service certificate you purchased](configure-ssl-certificate.md#import-an-app-service-certificate), see [Export (an App Service) certificate](configure-ssl-certificate.md#export-certificate). App Service certificates can be automatically renewed and the binding can be automatically synced.
+
+When you replace an expiring certificate, how you update the certificate binding with the new certificate can adversely affect the user experience. For example, your inbound IP address can change when you delete a binding, even if that binding is IP-based. This is especially important when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app, follow these steps in order:
+
+1. Upload the new certificate.
+2. Bind the new certificate to the same custom domain without deleting the existing (expiring) certificate. This action replaces the binding instead of removing the existing certificate.
+3. Delete the existing certificate.
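
A rough Azure CLI sketch of that sequence (app name, file paths, passwords, and thumbprints below are placeholders):

```azurecli
# 1. Upload the new certificate; note the thumbprint it returns.
az webapp config ssl upload \
  --resource-group myResourceGroup \
  --name myWebApp \
  --certificate-file ./new-cert.pfx \
  --certificate-password "<pfx-password>"

# 2. Bind the new certificate to the same custom domain (use --ssl-type IP for an IP-based binding).
az webapp config ssl bind \
  --resource-group myResourceGroup \
  --name myWebApp \
  --certificate-thumbprint <new-certificate-thumbprint> \
  --ssl-type SNI

# 3. Delete the expiring certificate.
az webapp config ssl delete \
  --resource-group myResourceGroup \
  --certificate-thumbprint <old-certificate-thumbprint>
```
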
+ ## Automate with scripts ### Azure CLI
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
Once the rekey operation is complete, click **Sync**. The sync operation automat
### Renew certificate
+> [!NOTE]
+> To renew a [certificate you uploaded](#upload-a-private-certificate), see [Renew certificate binding](configure-ssl-bindings.md#renew-certificate-binding).
+ > [!NOTE] > The renewal process requires that [the well-known service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). This permission is configured for you when you import an App Service Certificate through the portal, and should not be removed from your key vault.
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
Imagine you have two applications (or one app with a slot) with Health check ena
In the scenario where all instances of your application are unhealthy, App Service will remove instances from the load balancer up to the percentage specified in `WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT`. In this scenario, taking all unhealthy app instances out of the load balancer rotation would effectively cause an outage for your application.
+### Does Health Check work on App Service Environments?
+
+Yes, on App Service Environments (ASEs), the platform will ping your instances on the specified path and remove any unhealthy instances from the load balancer so that requests aren't routed to them. However, unhealthy instances currently aren't replaced with new instances, even if they remain unhealthy for one hour.
+ ## Next steps - [Create an Activity Log Alert to monitor all Autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed Autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/quick-create-cli.md
az network application-gateway create \
--resource-group myResourceGroupAG \ --capacity 2 \ --sku Standard_v2 \
- --http-settings-cookie-based-affinity Enabled \
--public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \
automanage Automanage Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-arc.md
Title: Azure Automanage for Arc enabled servers
-description: Learn about the Azure Automanage for Arc enabled servers
+ Title: Azure Automanage for Arc-enabled servers
+description: Learn about Azure Automanage for Arc-enabled servers
-# Azure Automanage for Machines Best Practices - Arc enabled servers
+# Azure Automanage for Machines Best Practices - Arc-enabled servers
These Azure services are automatically onboarded for you when you use Automanage Machine Best Practices on an Arc-enabled server VM. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management).
For all of these services, we will auto-onboard, auto-configure, monitor for dri
## Supported operating systems
-Automanage supports the following operating systems for Arc enabled servers
+Automanage supports the following operating systems for Arc-enabled servers
- Windows Server 2012/R2 - Windows Server 2016
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-linux.md
Automanage supports the following Linux distributions and versions:
|[Azure Security Center](../security-center/security-center-introduction.md) |Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud. Learn [more](../security-center/security-center-introduction.md). Automanage will configure the subscription where your VM resides to the free-tier offering of Azure Security Center. If your subscription is already onboarded to Azure Security Center, then Automanage will not reconfigure it. |Production, Dev/Test |No | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No |
-|[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the Guest Configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
+|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the guest configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test |No | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-virtual-machines.md
Azure Automanage also automatically monitors for drift and corrects for it when
Automanage doesn't store/process customer data outside the geography your VMs are located. In the SoutheastAsia region, Automanage does not store/process data outside of SoutheastAsia. > [!NOTE]
-> Automanage can be enabled on Azure virtual machines as well as Arc enabled servers. Automanage is not available in US Government Cloud at this time.
+> Automanage can be enabled on Azure virtual machines as well as Arc-enabled servers. Automanage is not available in US Government Cloud at this time.
## Prerequisites
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows Server versions:
|[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. Learn [more](../security/fundamentals/antimalware.md). |Production, Dev/Test |Yes | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Production, Dev/Test |No | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Production, Dev/Test |No |
-|[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the Guest Configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
+|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. For Windows machines, the guest configuration service will automatically reapply the baseline settings if they are out of compliance. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test |No |
|[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test |No | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |No | |[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |No |
automanage Quick Create Virtual Machines Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/quick-create-virtual-machines-portal.md
Sign in to the [Azure portal](https://aka.ms/AutomanagePortal-Ignite21).
1. Check the checkbox of each virtual machine you want to onboard. 1. Click the **Select** button. > [!NOTE]
- > You may select both Azure VMs and Arc enabled servers.
+ > You may select both Azure VMs and Arc-enabled servers.
:::image type="content" source="media\quick-create-virtual-machine-portal\existing-vm-select-machine.png" alt-text="Select existing VM from list of available VMs.":::
automanage Virtual Machines Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/virtual-machines-best-practices.md
For all of these services, we will auto-onboard, auto-configure, monitor for dri
|Microsoft Antimalware |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. Learn [more](../security/fundamentals/antimalware.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |Yes | |Update Management |You can use Update Management in Azure Automation to manage operating system updates for your virtual machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. Learn [more](../automation/update-management/overview.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No | |Change Tracking & Inventory |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
-|Azure Guest Configuration | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the Guest Configuration extension. Learn [more](../governance/policy/concepts/guest-configuration.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
+|Guest configuration | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. Learn [more](../governance/policy/concepts/guest-configuration.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
|Azure Automation Account |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No | |Log Analytics Workspace |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
# Deploy a Linux Hybrid Runbook Worker
-You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly it and against resources in the environment to manage those local resources.
+You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources.
The Linux Hybrid Runbook Worker executes runbooks as a special user that can be elevated for running commands that need elevation. Azure Automation stores and manages runbooks and then delivers them to one or more designated machines. This article describes how to install the Hybrid Runbook Worker on a Linux machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group.
If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Mo
### Log Analytics agent
-The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) for the supported Linux operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc enabled servers](../azure-arc/servers/overview.md).
+The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) for the supported Linux operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md).
> [!NOTE] > After installing the Log Analytics agent for Linux, you should not change the permissions of the `sudoers.d` folder or its ownership. Sudo permission is required for the **nxautomation** account, which is the user context the Hybrid Runbook Worker runs under. The permissions should not be removed. Restricting this to certain folders or commands may result in a breaking change.
To install and configure a Linux Hybrid Runbook Worker, perform the following st
* For Azure VMs, install the Log Analytics agent for Linux using the [virtual machine extension for Linux](../virtual-machines/extensions/oms-linux.md). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, the Azure CLI, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account.
- * For non-Azure machines, you can install the Log Analytics agent using [Azure Arc enabled servers](../azure-arc/servers/overview.md). Arc enabled servers support deploying the Log Analytics agent using the following methods:
+ * For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). Arc-enabled servers support deploying the Log Analytics agent using the following methods:
- Using the VM extensions framework.
- This feature in Azure Arc enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc enabled servers:
+ This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers:
- The [Azure portal](../azure-arc/servers/manage-vm-extensions-portal.md) - The [Azure CLI](../azure-arc/servers/manage-vm-extensions-cli.md)
To install and configure a Linux Hybrid Runbook Worker, perform the following st
- Using Azure Policy.
- Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
+ Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc-enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy.
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-execution.md
External services, for example, Azure DevOps Services and GitHub, can start a ru
To share resources among all runbooks in the cloud, Azure uses a concept called fair share. Using fair share, Azure temporarily unloads or stops any job that has run for more than three hours. Jobs for [PowerShell runbooks](automation-runbook-types.md#powershell-runbooks) and [Python runbooks](automation-runbook-types.md#python-runbooks) are stopped and not restarted, and the job status becomes Stopped.
-For long-running Azure Automation tasks, it's recommended to use a Hybrid Runbook Worker. Hybrid Runbook Workers aren't limited by fair share, and don't have a limitation on how long a runbook can execute. The other job [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) apply to both Azure sandboxes and Hybrid Runbook Workers. While Hybrid Runbook Workers aren't limited by the three-hour fair share limit, you should develop runbooks to run on the workers that support restarts from unexpected local infrastructure issues.
+For long-running Azure Automation tasks, it's recommended to use a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md). Hybrid Runbook Workers aren't limited by fair share, and don't have a limitation on how long a runbook can execute. The other job [limits](../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits) apply to both Azure sandboxes and Hybrid Runbook Workers. While Hybrid Runbook Workers aren't limited by the three-hour fair share limit, you should develop runbooks to run on the workers that support restarts from unexpected local infrastructure issues.
Another option is to optimize a runbook by using child runbooks. For example, your runbook might loop through the same function on several resources, for example, with a database operation on several databases. You can move this function to a [child runbook](automation-child-runbooks.md) and have your runbook call it using [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook). Child runbooks execute in parallel in separate processes.
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
# Deploy a Windows Hybrid Runbook Worker
-You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
+You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
Azure Automation stores and manages runbooks and then delivers them to one or more designated machines. This article describes how to deploy a user Hybrid Runbook Worker on a Windows machine, how to remove the worker, and how to remove a Hybrid Runbook Worker group.
If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Mo
### Log Analytics agent
-The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) for the supported Windows operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc enabled servers](../azure-arc/servers/overview.md).
+The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) for the supported Windows operating system. For servers or machines hosted outside of Azure, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md).
### Supported Windows operating system
To install and configure a Windows Hybrid Runbook Worker, perform the following
* For Azure VMs, install the Log Analytics agent for Windows using the [virtual machine extension for Windows](../virtual-machines/extensions/oms-windows.md). The extension installs the Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace. You can use an Azure Resource Manager template, PowerShell, or Azure Policy to assign the [Deploy Log Analytics agent for *Linux* or *Windows* VMs](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Once the agent is installed, the machine can be added to a Hybrid Runbook Worker group in your Automation account.
- * For non-Azure machines, you can install the Log Analytics agent using [Azure Arc enabled servers](../azure-arc/servers/overview.md). Arc enabled servers support deploying the Log Analytics agent using the following methods:
+ * For non-Azure machines, you can install the Log Analytics agent using [Azure Arc-enabled servers](../azure-arc/servers/overview.md). Arc-enabled servers support deploying the Log Analytics agent using the following methods:
- Using the VM extensions framework.
- This feature in Azure Arc enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc enabled servers:
+ This feature in Azure Arc-enabled servers allows you to deploy the Log Analytics agent VM extension to a non-Azure Windows and/or Linux server. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers:
- The [Azure portal](../azure-arc/servers/manage-vm-extensions-portal.md) - The [Azure CLI](../azure-arc/servers/manage-vm-extensions-cli.md)
To install and configure a Windows Hybrid Runbook Worker, perform the following
- Using Azure Policy.
- Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
+ Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc-enabled server has the Log Analytics agent installed. If the agent is not installed, it automatically deploys it using a remediation task. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent.
We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy.
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/enable-from-automation-account.md
Sign in to Azure at https://portal.azure.com.
## Enable non-Azure VMs
-Machines not in Azure need to be added manually. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you also plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+Machines not in Azure need to be added manually. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you also plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
1. From your Automation account select **Inventory** or **Change tracking** under **Configuration Management**.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/overview.md
You can enable Change Tracking and Inventory in the following ways:
- From your [Automation account](enable-from-automation-account.md) for one or more Azure and non-Azure machines. -- Manually for non-Azure machines, including machines or servers registered with [Azure Arc enabled servers](../../azure-arc/servers/overview.md). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+- Manually for non-Azure machines, including machines or servers registered with [Azure Arc-enabled servers](../../azure-arc/servers/overview.md). For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. If you plan to also monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
- For a single Azure VM from the [Virtual machine page](enable-from-vm.md) in the Azure portal. This scenario is available for Linux and Windows VMs.
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/private-link-security.md
You can start runbooks by doing a POST on the webhook URL. For example, the URL
### Hybrid Runbook Worker scenario
-The user Hybrid Runbook Worker feature of Azure Automation enables you to run runbooks directly on the Azure or non-Azure machine, including servers registered with Azure Arc enabled servers. From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources.
+The user Hybrid Runbook Worker feature of Azure Automation enables you to run runbooks directly on the Azure or non-Azure machine, including servers registered with Azure Arc-enabled servers. From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources.
The hybrid worker uses a JRDS endpoint to start and stop runbooks, download runbooks to the worker, and send the job log stream back to the Automation service. After you enable the JRDS endpoint, the URL looks like this: `https://<automationaccountID>.jrds.<region>.privatelink.azure-automation.net`. This ensures that a hybrid worker connected to the Azure virtual network can run jobs without opening an outbound connection to the Internet.
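To confirm that the private endpoint is being used, one hedged check (not part of the original steps) is to resolve the JRDS name from a machine inside the virtual network; the account ID and region below are placeholders:

```console
# Placeholder values; run from a machine connected to the virtual network.
nslookup <automationaccountID>.jrds.<region>.privatelink.azure-automation.net
# The name should resolve to a private IP address from the virtual network's address space.
```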
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-management.md
Machines do appear in Azure Resource Graph query results, but still don't show u
5. If the machine is not set up as a system Hybrid Runbook Worker, review the methods to enable using one of the following methods:
- - From your [Automation account](../update-management/enable-from-automation-account.md) for one or more Azure and non-Azure machines, including Arc enabled servers.
+ - From your [Automation account](../update-management/enable-from-automation-account.md) for one or more Azure and non-Azure machines, including Arc-enabled servers.
- Using the **Enable-AutomationSolution** [runbook](../update-management/enable-from-runbook.md) to automate onboarding Azure VMs.
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
# How to deploy updates and review results
-This article describes how to schedule an update deployment and review the process after the deployment is complete. You can configure an update deployment from a selected Azure virtual machine, from the selected Arc enabled server, or from the Automation account across all configured machines and servers.
+This article describes how to schedule an update deployment and review the process after the deployment is complete. You can configure an update deployment from a selected Azure virtual machine, from the selected Arc-enabled server, or from the Automation account across all configured machines and servers.
-Under each scenario, the deployment you create targets that selected machine or server, or in the case of creating a deployment from your Automation account, you can target one or more machines. When you schedule an update deployment from an Azure VM or Arc enabled server, the steps are the same as deploying from your Automation account, with the following exceptions:
+Under each scenario, the deployment you create targets that selected machine or server, or in the case of creating a deployment from your Automation account, you can target one or more machines. When you schedule an update deployment from an Azure VM or Arc-enabled server, the steps are the same as deploying from your Automation account, with the following exceptions:
* The operating system is automatically pre-selected based on the OS of the machine * The target machine to update is set to target itself automatically
Scheduling an update deployment creates a [schedule](../shared-resources/schedul
>[!NOTE] >If you delete the schedule resource from the Azure portal or using PowerShell after creating the deployment, the deletion breaks the scheduled update deployment and presents an error when you attempt to reconfigure the schedule resource from the portal. You can only delete the schedule resource by deleting the corresponding deployment schedule.
-To schedule a new update deployment, perform the following steps. Depending on the resource selected (that is, Automation account, Arc enabled server, Azure VM), the steps below apply to all with minor differences while configuring the deployment schedule.
+To schedule a new update deployment, perform the following steps. Depending on the resource selected (that is, Automation account, Arc-enabled server, or Azure VM), the steps apply to all of them, with minor differences when you configure the deployment schedule.
1. In the portal, to schedule a deployment for: * One or more machines, navigate to **Automation accounts** and select your Automation account with Update Management enabled from the list. * For an Azure VM, navigate to **Virtual machines** and select your VM from the list.
- * For an Arc enabled server, navigate to **Servers - Azure Arc** and select your server from the list.
+ * For an Arc-enabled server, navigate to **Servers - Azure Arc** and select your server from the list.
2. Depending on the resource you selected, to navigate to Update Management: * If you selected your Automation account, go to **Update management** under **Update management**, and then select **Schedule update deployment**. * If you selected an Azure VM, go to **Guest + host updates**, and then select **Go to Update Management**.
- * If you selected an Arc enabled server, go to **Update Management**, and then select **Schedule update deployment**.
+ * If you selected an Arc-enabled server, go to **Update Management**, and then select **Schedule update deployment**.
3. Under **New update deployment**, in the **Name** field enter a unique name for your deployment. 4. Select the operating system to target for the update deployment. > [!NOTE]
- > This option is not available if you selected an Azure VM or Arc enabled server. The operating system is automatically identified.
+ > This option is not available if you selected an Azure VM or Arc-enabled server. The operating system is automatically identified.
5. In the **Groups to update** region, define a query that combines subscription, resource groups, locations, and tags to build a dynamic group of Azure VMs to include in your deployment. To learn more, see [Use dynamic groups with Update Management](configure-groups.md). > [!NOTE]
- > This option is not available if you selected an Azure VM or Arc enabled server. The machine is automatically targeted for the scheduled deployment.
+ > This option is not available if you selected an Azure VM or Arc-enabled server. The machine is automatically targeted for the scheduled deployment.
> [!IMPORTANT] > When building a dynamic group of Azure VMs, Update Management only supports a maximum of 500 queries that combine subscriptions or resource groups in the scope of the group.
To schedule a new update deployment, perform the following steps. Depending on t
6. In the **Machines to update** region, select a saved search, an imported group, or pick **Machines** from the dropdown menu and select individual machines. With this option, you can see the readiness of the Log Analytics agent for each machine. To learn about the different methods of creating computer groups in Azure Monitor logs, see [Computer groups in Azure Monitor logs](../../azure-monitor/logs/computer-groups.md). You can include up to a maximum of 1000 machines in a scheduled update deployment. > [!NOTE]
- > This option is not available if you selected an Azure VM or Arc enabled server. The machine is automatically targeted for the scheduled deployment.
+ > This option is not available if you selected an Azure VM or Arc-enabled server. The machine is automatically targeted for the scheduled deployment.
7. Use the **Update classifications** region to specify [update classifications](view-update-assessments.md#work-with-update-classifications) for products. For each product, deselect all supported update classifications but the ones to include in your update deployment.
To schedule a new update deployment, perform the following steps. Depending on t
9. Select **Schedule settings**. The default start time is 30 minutes after the current time. You can set the start time to any time from 10 minutes in the future. > [!NOTE]
- > This option is different if you selected an Arc enabled server. You can select **Update now** or a start time 20 minutes into the future.
+ > This option is different if you selected an Arc-enabled server. You can select **Update now** or a start time 20 minutes into the future.
10. Use the **Recurrence** to specify if the deployment occurs once or uses a recurring schedule, then select **OK**.
To schedule a new update deployment, perform the following steps. Depending on t
![Update Schedule Settings pane](./media/deploy-updates/manageupdates-schedule-win.png) > [!NOTE]
- > When you're finished configuring the deployment schedule for a selected Arc enabled server, select **Review + create**.
+ > When you're finished configuring the deployment schedule for a selected Arc-enabled server, select **Review + create**.
15. You're returned to the status dashboard. Select **Deployment schedules** to show the deployment schedule that you've created. A maximum of 500 schedules are listed. If you have more than 500 schedules and you want to review the complete list, see the [Software Update Configurations - List](/rest/api/automation/softwareupdateconfigurations/list) REST API method. Specify API version 2019-06-01 or higher.
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-automation-account.md
# Enable Update Management from an Automation account
-This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc enabled servers](../../azure-arc/servers/overview.md). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management.
+This article describes how you can use your Automation account to enable the [Update Management](overview.md) feature for VMs in your environment, including machines or servers registered with [Azure Arc-enabled servers](../../azure-arc/servers/overview.md). To enable Azure VMs at scale, you must enable an existing Azure VM using Update Management.
> [!NOTE] > When enabling Update Management, only certain regions are supported for linking a Log Analytics workspace and an Automation account. For a list of the supported mapping pairs, see [Region mapping for Automation account and Log Analytics workspace](../how-to/region-mappings.md).
This article describes how you can use your Automation account to enable the [Up
* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [Automation account](../automation-security-overview.md) to manage machines.
-* An [Azure virtual machine](../../virtual-machines/windows/quick-create-portal.md), or VM or server registered with Arc enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for Windows or Linux installed and reporting to the workspace linked to the Automation account Update Management is enabled in. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+* An [Azure virtual machine](../../virtual-machines/windows/quick-create-portal.md), or VM or server registered with Arc-enabled servers. Non-Azure VMs or servers need to have the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for Windows or Linux installed and reporting to the workspace linked to the Automation account that Update Management is enabled in. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to *Linux* or *Windows* Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com).
## Enable non-Azure VMs
-For machines or servers hosted outside of Azure, including the ones registered with Azure Arc enabled servers, perform the following steps to enable them with Update Management.
+For machines or servers hosted outside of Azure, including the ones registered with Azure Arc-enabled servers, perform the following steps to enable them with Update Management.
1. From your Automation account, select **Update management** under **Update management**.
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/operating-system-requirements.md
Software Requirements:
- Windows PowerShell 5.1 is required ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).) - The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-windows-hrw-install.md#prerequisites).
-Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
You can use Update Management with Microsoft Endpoint Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/agents/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment.
Software Requirements:
> [!NOTE] > Update assessment of Linux machines is only supported in certain regions. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
-For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
## Next steps
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/plan-deployment.md
The [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for
On Azure VMs, if the Log Analytics agent isn't already installed, when you enable Update Management for the VM it is automatically installed using the Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md). The agent is configured to report to the Log Analytics workspace linked to the Automation account Update Management is enabled in.
-Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](../../azure-monitor/vm/vminsights-overview.md), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+Non-Azure VMs or servers need to have the Log Analytics agent for Windows or Linux installed and reporting to the linked workspace. We recommend installing the Log Analytics agent for Windows or Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with [VM insights](../../azure-monitor/vm/vminsights-overview.md), instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
If you're enabling a machine that's currently managed by Operations Manager, a new agent isn't required. The workspace information is added to the agents configuration when you connect the management group to the Log Analytics workspace.
Enable Update Management and select machines to be managed using one of the foll
- Using an Azure [Resource Manager template](enable-from-template.md) to deploy Update Management to a new or existing Automation account and Azure Monitor Log Analytics workspace in your subscription. It does not configure the scope of machines that should be managed, this is performed as a separate step after using the template. -- From your [Automation account](enable-from-automation-account.md) for one or more Azure and non-Azure machines, including Arc enabled servers.
+- From your [Automation account](enable-from-automation-account.md) for one or more Azure and non-Azure machines, including Arc-enabled servers.
- Using the **Enable-AutomationSolution** [runbook](enable-from-runbook.md) to automate onboarding Azure VMs.
automation View Update Assessments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/view-update-assessments.md
# View update assessments in Update Management
-In Update Management, you can view information about your machines, missing updates, update deployments, and scheduled update deployments. You can view the assessment information scoped to the selected Azure virtual machine, from the selected Arc enabled server, or from the Automation account across all configured machines and servers.
+In Update Management, you can view information about your machines, missing updates, update deployments, and scheduled update deployments. You can view the assessment information scoped to the selected Azure virtual machine, from the selected Arc-enabled server, or from the Automation account across all configured machines and servers.
## Sign in to the Azure portal
In Update Management, you can view information about your machine, missing updat
[ ![Update Management assessment view for Azure VM](./media/view-update-assessments/update-assessment-azure-vm.png)](./media/view-update-assessments/update-assessment-azure-vm-expanded.png#lightbox)
-To view update assessment from an Arc enabled server, navigate to **Servers - Azure Arc** and select your server from the list. From the left menu, select **Guest and host updates**. On the **Guest + host updates** page, select **Go to Update Management**.
+To view update assessment from an Arc-enabled server, navigate to **Servers - Azure Arc** and select your server from the list. From the left menu, select **Guest and host updates**. On the **Guest + host updates** page, select **Go to Update Management**.
In Update Management, you can view information about your Arc-enabled machine, missing updates, update deployments, and scheduled update deployments.
-[ ![Update Management assessment view for Arc enabled servers](./media/view-update-assessments/update-assessment-arc-server.png)](./media/view-update-assessments/update-assessment-arc-server-expanded.png#lightbox)
+[ ![Update Management assessment view for Arc-enabled servers](./media/view-update-assessments/update-assessment-arc-server.png)](./media/view-update-assessments/update-assessment-arc-server-expanded.png#lightbox)
-To view update assessment across all machines, including Arc enabled servers from your Automation account, navigate to **Automation accounts** and select your Automation account with Update Management enabled from the list. In your Automation account, select **Update management** from the left menu.
+To view update assessment across all machines, including Arc-enabled servers from your Automation account, navigate to **Automation accounts** and select your Automation account with Update Management enabled from the list. In your Automation account, select **Update management** from the left menu.
The updates for your environment are listed on the **Update management** page. If any updates are identified as missing, a list of them is shown on the **Missing updates** tab.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
Start/Stop VM runbooks have been updated to use Az modules in place of Azure Res
**Type:** New feature
-Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc enabled servers DSC VM extension. For more information, read [Arc enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md).
+Use Azure Automation State Configuration to centrally store configurations and maintain the desired state of hybrid connected machines enabled through the Azure Arc-enabled servers DSC VM extension. For more information, read [Arc-enabled servers VM extensions overview](../azure-arc/servers/manage-vm-extensions.md).
### July 2020
azure-arc Create Data Controller Direct Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-direct-azure-portal.md
This article describes how to deploy the Azure Arc data controller in direct con
Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md).
->[!NOTE]
->You first need to deploy an Arc enabled Kubernetes data services extension using the Azure CLI.
->
-> To complete this, you will need to identify:
->
-> - `<connected_cluster_name>` - Name of your cluster.
-> - `<resource_group_name>` - Name of your resource group.
-> - `<namespace>` - The Kubernetes namespace that will contain your data services.
->
-> Use these values in the following script to create the extension:
->
->```azurecli
->az k8s-extension create -c "<connected_cluster_name>" -g "<resource_group_name>" --name "arcdataservices" --cluster-type "connectedClusters" --extension-type "microsoft.arcdataservices" --scope "cluster" --release-namespace "<namespace>" --config "Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper"
->```
- ## Deploy Azure Arc data controller Azure Arc data controller create flow can be launched from the Azure portal in one of the following ways:
azure-arc Create Data Controller Indirect Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-indirect-azure-portal.md
You can use the Azure portal to create an Azure Arc data controller, in indirect connectivity mode.
-Many of the creation experiences for Azure Arc start in the Azure portal even though the resource to be created or managed is outside of Azure infrastructure. The user experience pattern in these cases, especially when there is no direct connectivity between Azure and your environment, is to use the Azure portal to generate a script which can then be downloaded and executed in your environment to establish a secure connection back to Azure. For example, Azure Arc-enabled servers follows this pattern to [create Arc enabled servers](../servers/onboard-portal.md).
+Many of the creation experiences for Azure Arc start in the Azure portal even though the resource to be created or managed is outside of Azure infrastructure. The user experience pattern in these cases, especially when there is no direct connectivity between Azure and your environment, is to use the Azure portal to generate a script which can then be downloaded and executed in your environment to establish a secure connection back to Azure. For example, Azure Arc-enabled servers follows this pattern to [create Arc-enabled servers](../servers/onboard-portal.md).
When you use the indirect connect mode of Azure Arc-enabled data services, you can use the Azure portal to generate a notebook for you that can then be downloaded and run in Azure Data Studio against your Kubernetes cluster.
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
az postgres arc-server create -n postgres01 --workers 2 --k8s-namespace <namespa
This example assumes that your server group is hosted in an Azure Kubernetes Service (AKS) cluster and uses azurefile-premium as the storage class name. You may adjust the example below to match your environment. Note that **accessModes ReadWriteMany is required** for this configuration.
-First, create a YAML file that contains the below description of the backup PVC and name it CreateBackupPVC.yml for example:
+First, create a YAML file that contains the following description of the backup PVC (Persistent Volume Claim) and name it CreateBackupPVC.yml, for example:
```console apiVersion: v1 kind: PersistentVolumeClaim
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
| Endpoint | Port | |-|-| |`*.servicebus.windows.net` | 443 |
- |`*.guestnotificationservice.azure.com` | 443 |
+ |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | 443 |
## Usage
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/custom-locations.md
A conceptual overview of this feature is available in [Custom locations - Azure
- Verify completed provider registration for `Microsoft.ExtendedLocation`. 1. Enter the following commands:
- ```azurecli
- az provider register --namespace Microsoft.ExtendedLocation
- ```
+ ```azurecli
+ az provider register --namespace Microsoft.ExtendedLocation
+ ```
2. Monitor the registration process. Registration may take up to 10 minutes.
- ```azurecli
- az provider show -n Microsoft.ExtendedLocation -o table
- ```
+ ```azurecli
+ az provider show -n Microsoft.ExtendedLocation -o table
+ ```
+
+   Once registered, the `RegistrationState` value will be `Registered`.
- Verify you have an existing [Azure Arc enabled Kubernetes connected cluster](quickstart-connect-cluster.md). - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.1.0 or later.
->[!NOTE]
->**Supported regions for custom locations:**
->* East US
->* West Europe
- ## Enable custom locations on cluster If you are logged into Azure CLI as an Azure AD user, execute the following command to enable this feature on your cluster:
-```console
+```azurecli
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations ```
If you are logged into Azure CLI using a service principal, to enable this featu
1. Fetch the Object ID of the Azure AD application used by Azure Arc service:
- ```console
+ ```azurecli
az ad sp show --id 'bc313c14-388c-4e7d-a58e-70017303ee3b' --query objectId -o tsv ``` 1. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
- ```console
+ ```azurecli
az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations ```
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/overview.md
description: "This article provides an overview of Azure Arc enabled Kubernetes." keywords: "Kubernetes, Arc, Azure, containers"- # What is Azure Arc enabled Kubernetes?
Azure Arc enabled Kubernetes supports the following scenarios:
[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
-## Supported regions
-
-Azure Arc enabled Kubernetes is currently supported in these regions:
-
-* East US
-* West Europe
-* West Central US
-* South Central US
-* Southeast Asia
-* UK South
-* West US 2
-* Australia East
-* East US 2
-* North Europe
- ## Next steps Learn how to connect a cluster to Azure Arc.
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
| `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. | | `https://gbl.his.arc.azure.com` | Required to get the regional endpoint for pulling system-assigned Managed Service Identity (MSI) certificates. | | `https://<region-code>.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Service Identity (MSI) certificates. `<region-code>` mapping for Azure cloud regions: `eus` (East US), `weu` (West Europe), `wcus` (West Central US), `scus` (South Central US), `sea` (South East Asia), `uks` (UK South), `wus2` (West US 2), `ae` (Australia East), `eus2` (East US 2), `ne` (North Europe), `fc` (France Central). |
-|`*.servicebus.windows.net`, `*.guestnotificationservice.azure.com`, `sts.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
+|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
## 1. Register providers for Azure Arc enabled Kubernetes
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/overview.md
Azure Arc simplifies governance and management by delivering a consistent multi-
* Manage virtual machines, Kubernetes clusters, and databases as if they are running in Azure. * Use familiar Azure services and management capabilities, regardless of where they live. * Continue using traditional ITOps, while introducing DevOps practices to support new cloud native patterns in your environment.
-* Configure Custom Locations as an abstraction layer on top of Azure Arc enabled Kubernetes cluster, cluster connect, and cluster extensions.
+* Configure Custom Locations as an abstraction layer on top of Azure Arc-enabled Kubernetes cluster, cluster connect, and cluster extensions.
:::image type="content" source="./media/overview/azure-arc-control-plane.png" alt-text="Azure Arc management control plane diagram" border="false":::
Today, Azure Arc allows you to manage the following resource types hosted outsid
* Servers - both physical and virtual machines running Windows or Linux. * Kubernetes clusters - supporting multiple Kubernetes distributions. * Azure data services - Azure SQL Managed Instance and PostgreSQL Hyperscale services.
-* SQL Server - enroll instances from any location.
+* SQL Server - enroll instances from any location with [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview).
## What does Azure Arc deliver?
Key features of Azure Arc include:
* Run [Azure data services](../azure-arc/kubernetes/custom-locations.md) on any Kubernetes environment as if it runs in Azure (specifically Azure SQL Managed Instance and Azure Database for PostgreSQL Hyperscale, with benefits such as upgrades, updates, security, and monitoring). Use elastic scale and apply updates without any application downtime, even without continuous connection to Azure.
-* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc enabled Data Services](./dat).
+* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled Data Services](./dat).
-* A unified experience viewing your Azure Arc enabled resources whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
+* A unified experience viewing your Azure Arc-enabled resources whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
## How much does Azure Arc cost? The following are pricing details for the features available today with Azure Arc.
-### Arc enabled servers
+### Arc-enabled servers
The following Azure Arc control plane functionality is offered at no extra cost:
The following Azure Arc control plane functionality is offered at no ex
* Update management.
-Any Azure service that is used on Arc enabled servers, for example Azure Security Center or Azure Monitor, will be charged as per the pricing for that service. For more information, see the [Azure pricing page](https://azure.microsoft.com/pricing/).
+Any Azure service that is used on Arc-enabled servers, for example Azure Security Center or Azure Monitor, will be charged as per the pricing for that service. For more information, see the [Azure pricing page](https://azure.microsoft.com/pricing/).
-### Azure Arc enabled Kubernetes
+### Azure Arc-enabled Kubernetes
-Any Azure service that is used on Arc enabled Kubernetes, for example Azure Security Center or Azure Monitor, will be charged as per the pricing for that service. For more information on pricing for configurations on top of Azure Arc enabled Kubernetes, see [Azure pricing page](https://azure.microsoft.com/pricing/).
+Any Azure service that is used on Arc-enabled Kubernetes, for example Azure Security Center or Azure Monitor, will be charged as per the pricing for that service. For more information on pricing for configurations on top of Azure Arc-enabled Kubernetes, see [Azure pricing page](https://azure.microsoft.com/pricing/).
-### Azure Arc enabled data services
+### Azure Arc-enabled data services
-In the current preview phase, Azure Arc enabled data services are offered at no extra cost.
+In the current preview phase, Azure Arc-enabled data services are offered at no extra cost.
## Next steps
-* To learn more about Arc enabled servers, see the following [overview](./servers/overview.md)
+* To learn more about Azure Arc-enabled servers, see the following [overview](./servers/overview.md)
-* To learn more about Arc enabled Kubernetes, see the following [overview](./kubernetes/overview.md)
+* To learn more about Azure Arc-enabled Kubernetes, see the following [overview](./kubernetes/overview.md)
-* To learn more about Arc enabled data services, see the following [overview](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/)
+* To learn more about Azure Arc-enabled data services, see the following [overview](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/)
-* Experience Arc enabled services from the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/)
+* To learn more about SQL Server on Azure Arc-enabled servers, see the following [overview](/sql/sql-server/azure-arc/overview)
+
+* Experience Azure Arc-enabled services from the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/)
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
The Azure Connected Machine agent package contains several logical components, w
* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
-* The Guest Configuration agent provides In-Guest Policy and Guest Configuration functionality, such as assessing whether the machine complies with required policies.
+* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance.
- Note the following behavior with Azure Policy [Guest Configuration](../../governance/policy/concepts/guest-configuration.md) for a disconnected machine:
+ Note the following behavior with Azure Policy [guest configuration](../../governance/policy/concepts/guest-configuration.md) for a disconnected machine:
- * A Guest Configuration policy assignment that targets disconnected machines is unaffected.
+ * An Azure Policy assignment that targets disconnected machines is unaffected.
* Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. * Assignments are deleted after 14 days, and are not reassigned to the machine after the 14-day period.
Metadata information about the connected machine is collected after the Connecte
* Connected Machine agent heartbeat * Connected Machine agent version * Public key for managed identity
-* Policy compliance status and details (if using Azure Policy Guest Configuration policies)
-* Microsoft SQL Server installed (Boolean value)
+* Policy compliance status and details (if using guest configuration policies)
+* SQL Server installed (Boolean value)
* Cluster resource ID (for Azure Stack HCI nodes) The following metadata information is requested by the agent from Azure:
URLs:
|`login.windows.net`|Azure Active Directory| |`login.microsoftonline.com`|Azure Active Directory| |`dc.services.visualstudio.com`|Application Insights|
-|`*.guestconfiguration.azure.com` |Guest Configuration|
+|`*.guestconfiguration.azure.com` |Guest configuration|
|`*.his.arc.azure.com`|Hybrid Identity Service| |`*.blob.core.windows.net`|Download source for Arc-enabled servers extensions|
Preview agents (version 0.11 and lower) also require access to the following URL
| Agent resource | Description | |||
-|`agentserviceapi.azure-automation.net`|Guest Configuration|
-|`*-agentservice-prod-1.azure-automation.net`|Guest Configuration|
+|`agentserviceapi.azure-automation.net`|Guest configuration|
+|`*-agentservice-prod-1.azure-automation.net`|Guest configuration|
For a list of IP addresses for each service tag/region, see the JSON file - [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** service tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
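As an alternative to downloading the JSON file, the address prefixes can also be retrieved with the Azure CLI; this is a hedged sketch, and the location and JMESPath query are illustrative:

```azurecli
# Show the first few address prefixes for the AzureCloud service tag in a given region's download.
az network list-service-tags --location eastus \
  --query "values[?name=='AzureCloud'].properties.addressPrefixes[] | [0:5]" -o tsv
```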
After installing the Connected Machine agent for Windows, the following system-w
|%ProgramData%\AzureConnectedMachineAgent |Contains the agent configuration files.| |%ProgramData%\AzureConnectedMachineAgent\Tokens |Contains the acquired tokens.| |%ProgramData%\AzureConnectedMachineAgent\Config |Contains the agent configuration file `agentconfig.json` recording its registration information with the service.|
- |%ProgramFiles%\ArcConnectedMachineAgent\ExtensionService\GC | Installation path containing the Guest Configuration agent files. |
+ |%ProgramFiles%\ArcConnectedMachineAgent\ExtensionService\GC | Installation path containing the guest configuration agent files. |
|%ProgramData%\GuestConfig |Contains the (applied) policies from Azure.| |%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads | Extensions are downloaded from Azure and copied here.|
After installing the Connected Machine agent for Windows, the following system-w
|Service name |Display name |Process name |Description | |-|-|-|| |himds |Azure Hybrid Instance Metadata Service |himds |This service implements the Azure Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |GCArcService |Guest Configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
- |ExtensionService |Guest Configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
+ |GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
+ |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
* The following environmental variables are created during agent installation.
After installing the Connected Machine agent for Linux, the following system-wid
|-|| |/var/opt/azcmagent/ |Default installation path containing the agent support files.| |/opt/azcmagent/ |
- |/opt/GC_Ext | Installation path containing the Guest Configuration agent files.|
+ |/opt/GC_Ext | Installation path containing the guest configuration agent files.|
|/opt/DSC/ | |/var/opt/azcmagent/tokens |Contains the acquired tokens.| |/var/lib/GuestConfig |Contains the (applied) policies from Azure.|
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes.md
Version: 1.0 (General Availability)
- Support for preview agents (all versions older than 1.0) will be removed in a future service update. - Removed support for fallback endpoint `.azure-automation.net`. If you have a proxy, you need to allow the endpoint `*.his.arc.azure.com`. - If the Connected Machine agent is installed on a virtual machine hosted in Azure, VM extensions can't be installed or modified from the Arc-enabled servers resource. This is to avoid conflicting extension operations being performed from the virtual machine's **Microsoft.Compute** and **Microsoft.HybridCompute** resource. Use the **Microsoft.Compute** resource for the machine for all extension operations.-- Name of Guest Configuration process has changed, from *gcd* to *gcad* on Linux, and *gcservice* to *gcarcservice* on Windows.
+- Name of guest configuration process has changed, from *gcd* to *gcad* on Linux, and *gcservice* to *gcarcservice* on Windows.
### New features
azure-arc Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/data-residency.md
Metadata information about the connected machine is also collected. Specifically
* Connected Machine agent heartbeat * Connected Machine agent version * Public key for managed identity
-* Policy compliance status and details (if using Azure Policy Guest Configuration policies)
+* Policy compliance status and details (if using guest configuration policies)
Arc-enabled servers allow you to specify the region where your data is stored. Microsoft may replicate to other regions for data resiliency, but Microsoft does not replicate or move data outside the geography. This data is stored in the region where the Azure Arc machine resource is configured. For example, if the machine is registered with Arc in the East US region, this data is stored in the US region.
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
Title: Connect hybrid machine with Azure Arc enabled servers
-description: Learn how to connect and register your hybrid machine with Azure Arc enabled servers.
+ Title: Connect hybrid machine with Azure Arc-enabled servers
+description: Learn how to connect and register your hybrid machine with Azure Arc-enabled servers.
Last updated 12/15/2020
-# Quickstart: Connect hybrid machines with Azure Arc enabled servers
+# Quickstart: Connect hybrid machines with Azure Arc-enabled servers
-[Azure Arc enabled servers](../overview.md) enables you to manage and govern your Windows and Linux machines hosted across on-premises, edge, and multicloud environments. In this quickstart, you'll deploy and configure the Connected Machine agent on your Windows or Linux machine hosted outside of Azure for management by Arc enabled servers.
+[Azure Arc-enabled servers](../overview.md) enables you to manage and govern your Windows and Linux machines hosted across on-premises, edge, and multicloud environments. In this quickstart, you'll deploy and configure the Connected Machine agent on your Windows or Linux machine hosted outside of Azure for management by Arc-enabled servers.
## Prerequisites * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Deploying the Arc enabled servers Hybrid Connected Machine agent requires that you have administrator permissions on the machine to install and configure the agent. On Linux, by using the root account, and on Windows, with an account that is a member of the Local Administrators group.
+* Deploying the Arc-enabled servers Hybrid Connected Machine agent requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, with an account that is a member of the Local Administrators group.
* Before you get started, be sure to review the agent [prerequisites](../agent-overview.md#prerequisites) and verify the following:
Last updated 12/15/2020
* If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the URLs [listed](../agent-overview.md#networking-configuration) are not blocked.
- * Azure Arc enabled servers supports only the regions specified [here](../overview.md#supported-regions).
+ * Azure Arc-enabled servers supports only the regions specified [here](../overview.md#supported-regions).
> [!WARNING] > The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. See [Resolve reserved resource name errors](../../../azure-resource-manager/templates/error-reserved-resource-name.md) for a list of the reserved words.
Last updated 12/15/2020
## Register Azure resource providers
-Azure Arc enabled servers depends on the following Azure resource providers in your subscription in order to use this service:
+Azure Arc-enabled servers depends on the following Azure resource providers in your subscription in order to use this service:
* Microsoft.HybridCompute * Microsoft.GuestConfiguration
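You can register both providers with the Azure CLI; a minimal sketch (run against the subscription you plan to use, and note that registration can take several minutes):

```azurecli
az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'

# Check the registration state; it should report 'Registered' when complete.
az provider show --namespace 'Microsoft.HybridCompute' --query registrationState -o tsv
```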
The script to automate the download, installation, and establish the connection
1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Servers - Azure Arc**.
- :::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Search for Arc enabled servers in All Services" border="false":::
+ :::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Search for Arc-enabled servers in All Services" border="false":::
1. On the **Servers - Azure Arc** page, select **Add** at the upper left.
The script to automate the download, installation, and establish the connection
## Verify the connection with Azure Arc
-After you install the agent and configure it to connect to Azure Arc enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://aka.ms/hybridmachineportal).
+After you install the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machine in the [Azure portal](https://aka.ms/hybridmachineportal).
:::image type="content" source="./media/quick-enable-hybrid-vm/enabled-machine.png" alt-text="A successful machine connection" border="false":::
After you install the agent and configure it to connect to Azure Arc enabled ser
Now that you've enabled your Linux or Windows hybrid machine and successfully connected to the service, you are ready to enable Azure Policy to understand compliance in Azure.
-To learn how to identify Azure Arc enabled servers enabled machine that doesn't have the Log Analytics agent installed, continue to the tutorial:
+To learn how to identify a machine enabled with Azure Arc-enabled servers that doesn't have the Log Analytics agent installed, continue to the tutorial:
> [!div class="nextstepaction"] > [Create a policy assignment to identify non-compliant resources](tutorial-assign-policy-portal.md)
azure-arc Tutorial Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md
Last updated 04/21/2021
# Tutorial: Create a policy assignment to identify non-compliant resources
-The first step in understanding compliance in Azure is to identify the status of your resources. Azure Policy supports auditing the state of your Arc enabled server with Guest Configuration policies. Guest Configuration policies do not apply configurations, they only audit settings inside the machine. This tutorial steps you through the process of creating and assigning a policy, identifying which of your Arc enabled servers don't have the Log Analytics agent installed.
+The first step in understanding compliance in Azure is to identify the status of your resources. Azure Policy supports auditing the state of your Arc-enabled server with guest configuration policies. Azure Policy's guest configuration definitions can audit or apply settings inside the machine. This tutorial steps you through the process of creating and assigning a policy, identifying which of your Arc-enabled servers don't have the Log Analytics agent installed.
At the end of this process, you'll successfully identify machines that don't have the Log Analytics agent for Windows or Linux installed. They're _non-compliant_ with the policy assignment.
In this tutorial, you create a policy assignment and assign the _\[Preview]: Log
For a partial list of available built-in policies, see [Azure Policy samples](../../../governance/policy/samples/index.md). 1. Search through the policy definitions list to find the _\[Preview]: Log Analytics agent should be installed on your Windows Azure Arc machines_
- definition if you have enabled the Arc enabled servers agent on a Windows-based machine. For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
+ definition if you have enabled the Arc-enabled servers agent on a Windows-based machine. For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics agent should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
:::image type="content" source="./media/tutorial-assign-policy-portal/select-available-definition.png" alt-text="Find the correct policy definition" border="false":::
To remove the assignment created, follow these steps:
## Next steps
-In this tutorial, you assigned a policy definition to a scope and evaluated its compliance report. The policy definition validates that all the resources in the scope are compliant and identifies which ones aren't. Now you are ready to monitor your Azure Arc enabled servers machine by enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md).
+In this tutorial, you assigned a policy definition to a scope and evaluated its compliance report. The policy definition validates that all the resources in the scope are compliant and identifies which ones aren't. Now you are ready to monitor your Azure Arc-enabled server machine by enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md).
To learn how to monitor and view the performance, running process and their dependencies from your machine, continue to the tutorial:
azure-arc Tutorial Enable Vm Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md
Last updated 04/21/2021
# Tutorial: Monitor a hybrid machine with VM insights
-[Azure Monitor](../../../azure-monitor/overview.md) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically this would entail installing the [Log Analytics agent](../../../azure-monitor/agents/agents-overview.md#log-analytics-agent) on the machine using a script, manually, or automated method following your configuration management standards. Arc enabled servers recently introduced support to install the Log Analytics and Dependency agent [VM extensions](../manage-vm-extensions.md) for Windows and Linux, enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md) to collect data from your non-Azure VMs.
+[Azure Monitor](../../../azure-monitor/overview.md) can collect data directly from your hybrid machines into a Log Analytics workspace for detailed analysis and correlation. Typically this would entail installing the [Log Analytics agent](../../../azure-monitor/agents/agents-overview.md#log-analytics-agent) on the machine manually, by using a script, or by an automated method following your configuration management standards. Arc-enabled servers recently introduced support to install the Log Analytics and Dependency agent [VM extensions](../manage-vm-extensions.md) for Windows and Linux, enabling [VM insights](../../../azure-monitor/vm/vminsights-overview.md) to collect data from your non-Azure VMs.
This tutorial shows you how to configure and collect data from your Linux or Windows machines by enabling VM insights following a simplified set of steps, which streamlines the experience and takes a shorter amount of time.
Sign in to the [Azure portal](https://portal.azure.com).
1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Machines - Azure Arc**.
- :::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Search for Arc enabled servers in All Services" border="false":::
+ :::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Search for Arc-enabled servers in All Services" border="false":::
1. On the **Machines - Azure Arc** page, select the connected machine you created in the [quickstart](quick-enable-hybrid-vm.md) article.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-agent.md
You can perform a **Connect** and **Disconnect** manually while logged on intera
This parameter specifies that a resource representing the machine is created in Azure Resource Manager. The resource is in the subscription and resource group specified, and data about the machine is stored in the Azure region specified by the `--location` setting. The default resource name is the hostname of the machine if not specified.
-A certificate corresponding to the system-assigned identity of the machine is then downloaded and stored locally. Once this step is completed, the Azure Connected Machine Metadata Service and Guest Configuration Agent begin synchronizing with Azure Arc-enabled servers.
+A certificate corresponding to the system-assigned identity of the machine is then downloaded and stored locally. Once this step is completed, the Azure Connected Machine Metadata Service and the guest configuration agent service begin synchronizing with Azure Arc-enabled servers.
To connect using a service principal, run the following command:
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc-enabled servers description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 08/11/2021 Last updated : 08/24/2021
To learn about the Azure Connected Machine agent package and details about the E
> [!NOTE] > Support for the DSC VM extension was recently removed for Arc-enabled servers. Instead, we recommend using the Custom Script Extension to manage the post-deployment configuration of your server or machine.
-Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Azure Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md). For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
+Arc-enabled servers support moving machines with one or more VM extensions installed between resource groups or another Azure subscription without experiencing any impact to their configuration. The source and destination subscriptions must exist within the same [Azure Active Directory tenant](../../active-directory/develop/quickstart-create-new-tenant.md). This support is enabled starting with the Connected Machine agent version **1.8.21197.005**. For more information about moving resources and considerations before proceeding, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
### Windows extensions
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers
-description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources.
+ Title: Built-in policy definitions for Azure Arc-enabled servers
+description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated 08/20/2021
azure-arc Scenario Migrate To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/scenario-migrate-to-azure.md
List role assignments for the Arc-enabled servers resource, using [Azure PowerSh
If you're using a managed identity for an application or process running on an Arc-enabled server, you need to make sure the Azure VM has a managed identity assigned. To view the role assignment for a managed identity, you can use the Azure PowerShell `Get-AzADServicePrincipal` cmdlet. For more information, see [List role assignments for a managed identity](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-managed-identity).
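As a quick sketch of that check (the display name is a placeholder; a system-assigned identity typically shares the machine's name), you could run:

```powershell
# Look up the service principal that backs the machine's managed identity (hypothetical name).
$sp = Get-AzADServicePrincipal -DisplayName "<machine-name>"

# List the role assignments currently granted to that identity.
Get-AzRoleAssignment -ObjectId $sp.Id
```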
-A system-managed identity is also used when Azure Policy is used to audit settings inside a machine or server. With Arc-enabled servers, the Guest Configuration agent is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/policy/concepts/guest-configuration.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the Guest Configuration extension.
+A system-managed identity is also used when Azure Policy is used to audit or configure settings inside a machine or server. With Arc-enabled servers, the guest configuration agent service is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/policy/concepts/guest-configuration.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the guest configuration extension.
Update role assignment with any resources accessed by the managed identity to allow the new Azure VM identity to authenticate to those services. See the following to learn [how managed identities for Azure resources work for an Azure Virtual Machine (VM)](../../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
Before proceeding with the migration with Azure Migration, review the [Prepare o
After migration and completion of all post-migration configuration steps, you can now deploy the Azure VM extensions based on the VM extensions originally installed on your Arc-enabled server. Review [Azure virtual machine extensions and features](../../virtual-machines/extensions/overview.md) to help plan your extension deployment.
-To resume using audit settings inside a machine with Azure Policy Guest Configuration policy definitions, see [Enable Guest Configuration](../../governance/policy/concepts/guest-configuration.md#enable-guest-configuration).
+To resume using audit settings inside a machine with guest configuration policy definitions, see [Enable guest configuration](../../governance/policy/concepts/guest-configuration.md#enable-guest-configuration).
If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), remove the [exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) you created earlier. To use Azure Policy to enable Azure virtual machines, see [Deploy Azure Monitor at scale using Azure Policy](../../azure-monitor/deploy-scale.md#vm-insights).
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions currently supports the following languages: * **C#**: both [precompiled class libraries](../functions-dotnet-class-library.md) and [C# script](../functions-reference-csharp.md).
-* **JavaScript**: supported only for version 2.x of the Azure Functions runtime. Requires version 1.7.0 of the Durable Functions extension, or a later version.
+* **JavaScript**: supported only for version 2.x or later of the Azure Functions runtime. Requires version 1.7.0 of the Durable Functions extension, or a later version.
* **Python**: requires version 2.3.1 of the Durable Functions extension, or a later version. * **F#**: precompiled class libraries and F# script. F# script is only supported for version 1.x of the Azure Functions runtime. * **PowerShell**: Supported only for version 3.x of the Azure Functions runtime and PowerShell 7. Requires version 2.x of the bundle extensions.
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
The example host.json file below contains only the settings for version 5.0.0 an
"version": "2.0", "extensions": { "serviceBus": {
- "retryOptions":{
+ "clientRetryOptions":{
"mode": "exponential", "tryTimeout": "00:01:00", "delay": "00:00:00.80",
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
For Management and governance services availability in Azure Government, see [Pr
- By default, all data and saved queries are encrypted at rest using Microsoft-managed keys. Configure encryption at rest of your data in Azure Monitor [using customer-managed keys in Azure Key Vault](../azure-monitor/logs/customer-managed-keys.md). > [!IMPORTANT]
-> See additional guidance below for **Log Analytics**, which is a feature of Azure Monitor.
+> See additional guidance for **[Log Analytics](#log-analytics)**, which is a feature of Azure Monitor.
+
+### [Azure Policy](https://azure.microsoft.com/services/azure-policy/)
+
+Azure Policy supports Impact Level 5 workloads in Azure Government with no extra configuration required.
+
+### [Azure Policy's guest configuration](../governance/policy/concepts/guest-configuration.md)
+
+Azure Policy's guest configuration supports Impact Level 5 workloads in Azure Government with no extra configuration required.
#### [Log Analytics](../azure-monitor/logs/data-platform-logs.md)
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pricing.md
If you're not yet using Application Insights, you can use the [Azure Monitor pri
1. estimate your likely data ingestion based on what other similar applications generate, or 2. use of default monitoring and adaptive sampling, which is available in the ASP.NET SDK.
-### Learn from what similar applicatiopns collect
+### Learn from what similar applications collect
In the Azure Monitor pricing calculator for Application Insights, click to enable **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, in case you will collect client-side telemetry), and the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations (for example, some have default [sampling](./sampling.md), some have no sampling, and so on), so you still have control to reduce the volume of data you ingest far below the median level by using sampling.
azure-monitor Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/network-performance-monitor.md
The script creates registry keys required by the solution. It also creates Windo
### Configure the solution
-1. Add the Network Performance Monitor solution to your workspace from the [Azure marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/solarwinds.solarwinds-orion-network-performance-monitor?tab=Overview). You also can use the process described in [Add Azure Monitor solutions from the Solutions Gallery](./solutions.md).
+1. Add the Network Performance Monitor solution to your workspace by using the process described in [Add Azure Monitor solutions from the Solutions Gallery](./solutions.md#install-a-monitoring-solution). This step is required if you want to work with non-Azure endpoints within Connection Monitor.
2. Open your Log Analytics workspace, and select the **Overview** tile. 3. Select the **Network Performance Monitor** tile with the message *Solution requires additional configuration*.
Information on pricing is available [online](network-performance-monitor-pricing
* **Join our cohort:** We're always interested in having new customers join our cohort. As part of it, you get early access to new features and an opportunity to help us improve Network Performance Monitor. If you're interested in joining, fill out this [quick survey](https://aka.ms/npmcohort). ## Next steps
-Learn more about [Performance Monitor](network-performance-monitor-performance-monitor.md), [Service Connectivity Monitor](network-performance-monitor-performance-monitor.md), and [ExpressRoute Monitor](network-performance-monitor-expressroute.md).
+Learn more about [Performance Monitor](network-performance-monitor-performance-monitor.md), [Service Connectivity Monitor](network-performance-monitor-performance-monitor.md), and [ExpressRoute Monitor](network-performance-monitor-expressroute.md).
azure-monitor Azure Ad Authentication Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-ad-authentication-logs.md
+
+ Title: Azure AD authentication for Logs
+description: Learn how to enable Azure Active Directory (Azure AD) authentication for Log Analytics in Azure Monitor.
+ Last updated : 08/24/2021++
+# Azure AD authentication for Logs
+
+Azure Monitor can [collect data in Logs from multiple sources](data-platform-logs.md#data-collection) including agents on virtual machines, Application Insights, diagnostic settings for Azure resources, and Data Collector API.
+
+Log Analytics agents use a workspace key as an enrollment key to verify initial access and provision a certificate further used to establish a secure connection between the agent and Azure Monitor. To learn more, see [send data from agents](data-security.md#2-send-data-from-agents). Data Collector API uses the same workspace key to [authorize access](data-collector-api.md#authorization).
+
+These options can be cumbersome and pose risk because it's difficult to manage credentials, specifically workspace keys, at a large scale. You can choose to opt out of local authentication and ensure that only telemetry exclusively authenticated by using managed identities and Azure Active Directory is ingested into Azure Monitor. This feature enhances the security and reliability of the telemetry used to make both critical operational and business decisions.
+
+Use the following steps to enable Azure Active Directory integration for Azure Monitor Logs and remove reliance on these shared secrets.
+
+1. Azure Monitor Agent (AMA) doesn't require any keys but instead [requires a system-managed identity](../agents/azure-monitor-agent-overview.md#security). [Migrate to Azure Monitor Agent](../agents/azure-monitor-agent-migration.md) from the Log Analytics agents.
+2. [Disable local authentication for Log Analytics Workspaces](#disable-local-authentication-for-log-analytics).
+3. Ensure that only authenticated telemetry is ingested in your Application Insights resources with [Azure AD authentication for Application Insights (Preview)](../app/azure-ad-authentication.md).
+
+## Disable local authentication for Log Analytics
+
+After you've removed your reliance on the Log Analytics agent, you can choose to disable local authentication for Log Analytics workspaces. This allows you to ingest and query telemetry that is authenticated exclusively by Azure AD.
+
+Disabling local authentication may limit some available functionality, specifically:
+
+- Existing Log Analytics agents will stop functioning; only Azure Monitor Agent (AMA) is supported. Azure Monitor Agent is missing some capabilities that are available through the Log Analytics agent (for example, custom log collection and IIS log collection).
+- Data Collector API (preview) doesn't support Azure AD authentication and won't be available to ingest data.
+
+You can disable local authentication by using Azure Policy, or programmatically through an Azure Resource Manager template, PowerShell, or the CLI.
+
+### Azure Policy
+
+The Azure Policy for 'DisableLocalAuth' denies users the ability to create a new Log Analytics workspace unless this property is set to 'true'. The policy name is `Log Analytics Workspaces should block non-Azure Active Directory based ingestion`. To apply this policy definition to your subscription, [create a new policy assignment and assign the policy](../../governance/policy/assign-policy-portal.md).
+
+Below is the policy template definition:
+
+```json
+{
+ "properties": {
+ "displayName": "Log Analytics Workspaces should block non-Azure Active Directory based ingestion.",
+ "policyType": "BuiltIn",
+ "mode": "Indexed",
+ "description": "Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker which could lead to incorrect status, false alerts, and incorrect logs stored in the system.",
+ "metadata": {
+ "version": "1.0.0",
+ "category": "Monitoring"
+ },
+ "parameters": {
+ "effect": {
+ "type": "String",
+ "metadata": {
+ "displayName": "Effect",
+ "description": "Enable or disable the execution of the policy"
+ },
+ "allowedValues": [
+ "Deny",
+ "Audit",
+ "Disabled"
+ ],
+ "defaultValue": "Audit"
+ }
+ },
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.OperationalInsights/workspaces"
+ },
+ {
+ "field": "Microsoft.OperationalInsights/workspaces/features.disableLocalAuth",
+ "notEquals": "true"
+ }
+ ]
+ },
+ "then": {
+ "effect": "[parameters('effect')]"
+ }
+ }
+ },
+ "id": "/providers/Microsoft.Authorization/policyDefinitions/e15effd4-2278-4c65-a0da-4d6f6d1890e2",
+ "type": "Microsoft.Authorization/policyDefinitions",
+ "name": "e15effd4-2278-4c65-a0da-4d6f6d1890e2"
+}
+```
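If you prefer the command line to the portal for that assignment, a sketch like the following should work; the assignment name and scope are placeholders, and the definition ID is the one listed above:

```azurecli
az policy assignment create \
  --name "block-non-aad-log-ingestion" \
  --policy "e15effd4-2278-4c65-a0da-4d6f6d1890e2" \
  --scope "/subscriptions/<subscription-id>" \
  --params '{"effect": {"value": "Deny"}}'
```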
+
+### Azure Resource Manager
+
+Property `DisableLocalAuth` is used to disable any local authentication on your Log Analytics Workspace. When set to true, this property enforces that Azure AD authentication must be used for all access.
+
+Below is an example Azure Resource Manager template that you can use to disable local auth:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaces_name": {
+ "defaultValue": "workspace-name",
+ "type": "string"
+ },
+ "workspace_location": {
+ "defaultValue": "region-name",
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.OperationalInsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "name": "[parameters('workspaces_name')]",
+ "location": "[parameters('workspace_location')]",
+ "properties": {
+ "sku": {
+ "name": "PerGB2018"
+ },
+ "retentionInDays": 30,
+ "features": {
+          "disableLocalAuth": true,
+ "enableLogAccessUsingOnlyResourcePermissions": true
+ }
+ }
+ }
+ ]
+}
+
+```
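To deploy the template, a resource group deployment along these lines should work (the file name and parameter values are placeholders):

```azurecli
az deployment group create \
  --resource-group <resource-group-name> \
  --template-file workspace-disable-local-auth.json \
  --parameters workspaces_name=<workspace-name> workspace_location=<region-name>
```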
++
+### CLI
+
+Property `DisableLocalAuth` is used to disable any local authentication on your Log Analytics Workspace. When set to true, this property enforces that Azure AD authentication must be used for all access.
+
+Below is an example of CLI commands that you can use to disable local authentication:
+
+```azurecli
+ az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=True
+```
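To confirm the setting took effect, you can read the property back with a query such as the following sketch, which reuses the same placeholder resource ID:

```azurecli
az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --query "properties.features.disableLocalAuth"
```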
+
+### PowerShell
+
+Property `DisableLocalAuth` is used to disable any local authentication on your Log Analytics Workspace. When set to true, this property enforces that Azure AD authentication must be used for all access.
+
+Below is an example of PowerShell commands that you can use to disable local authentication:
+
+```powershell
+ $workspaceSubscriptionId = "[Your subscription ID]"
+ $workspaceResourceGroup = "[Your resource group]"
+ $workspaceName = "[Your workspace name]"
+ $disableLocalAuth = $true
+
+ # login
+ Connect-AzAccount
+
+ # select subscription
+ Select-AzSubscription -SubscriptionId $workspaceSubscriptionId
+
+ # get the Log Analytics workspace resource
+ $workspace = Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ResourceGroupName $workspaceResourceGroup -ResourceName $workspaceName -ApiVersion "2021-06-01"
+
+ # set DisableLocalAuth
+ $workspace.Properties.Features | Add-Member -MemberType NoteProperty -Name DisableLocalAuth -Value $disableLocalAuth -Force
+ $workspace | Set-AzResource -Force
+```
+
+## Next steps
+* [Azure AD authentication for Application Insights (Preview)](../app/azure-ad-authentication.md)
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na ms.devlang: na Previously updated : 07/28/2021 Last updated : 08/24/2021 # Resource limits for Azure NetApp Files
The service dynamically adjusts the maxfiles limit for a volume based on its pro
| > 3 TiB but <= 4 TiB | 80 million | | > 4 TiB | 100 million |
-If you have already allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#limit_increase) to increase the maxfiles (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the maxfiles limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+If you have already allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the maxfiles (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the maxfiles limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
You can increase the maxfiles limit to 500 million if your volume quota is at least 20 TiB. <!-- ANF-11854 --> ## Regional capacity quota
-Azure NetApp Files has a regional limit based on capacity. The standard capacity limit for each subscription is 25 TiB per region, across all service levels.
+You can click **Quota** under Settings of Azure NetApp Files to display the current and default quota sizes for the region.
-You can request a capacity increase by submitting a specific **Service and subscription limits (quotas)** support ticket as follows:
+For example:
-1. Go to **Support + Troubleshooting** in the portal to start the Support request process:
+![Screenshot that shows how to display quota information.](../media/azure-netapp-files/quota-display.png)
- ![Screenshot that shows the Support Troubleshooting menu.](../media/azure-netapp-files/support-troubleshoot-menu.png)
+You can [submit a support request](#request-limit-increase) for an increase of a regional capacity quota without incurring extra cost. The support request you submit will be sent to the Azure capacity management team for processing. You will receive a response typically within two business days. The Azure capacity management team might contact you if you have a large request.
-2. Select the **Service and subscription limits (quotas)** issue type and enter all relevant details:
+A regional capacity quota increase does not incur a billing increase. Billing is still based on the provisioned capacity pools.
+For example, if you currently have 25 TiB of provisioned capacity, you can request a quota increase to 35 TiB. Within two business days, your quota increase will be applied to the requested region. When the quota increase is applied, you still pay for only the current provisioned capacity (25 TiB). But when you actually provision the additional 10 TiB, you will be billed for 35 TiB.
- ![Screenshot that shows the Service and Subscription Limits menu.](../media/azure-netapp-files/service-subscription-limits-menu.png)
+The current [resource limits](#resource-limits) for Azure NetApp Files are not changing. You will still be able to provision a 500-TiB capacity pool. But before doing so, the regional capacity quota needs to be increased to 500 TiB.
-3. Click the **Enter details** link in the Details tab, then select the **TiBs per subscription** quota type:
-
- ![Screenshot that shows the Enter Details link in Details tab.](../media/azure-netapp-files/support-details.png)
-
- ![Screenshot that shows the Quota Details window.](../media/azure-netapp-files/support-quota-details.png)
-
-4. On the Support Method page, make sure to select **Severity Level B ΓÇô Moderate impact**:
-
- ![Screenshot that shows the Support Method window.](../media/azure-netapp-files/support-method-severity.png)
-
-5. Complete the request process to issue the request.
-
-After the ticket is submitted, the request will be sent to the Azure capacity management team for processing. You will receive a response typically within 2 business days. The Azure capacity management team might contact you for handling of large requests.
-
-A regional capacity quota increase does not incur a billing increase. Billing will still be based on the provisioned capacity pools.
-
-## Request limit increase <a name="limit_increase"></a>
+## Request limit increase
You can create an Azure support request to increase the adjustable limits from the [Resource Limits](#resource-limits) table.
-From Azure portal navigation plane:
+1. Go to **New Support Request** under **Support + troubleshooting**.
+1. Under the **Problem description** tab, provide the requested information.
+1. Under the **Additional details** tab, click **Enter details** in the Request Details field.
-1. Click **Help + support**.
-2. Click **+ New support request**.
-3. On the Basics tab, provide the following information:
- 1. Issue type: Select **Service and subscription limits (quotas)**.
- 2. Subscriptions: Select the subscription for the resource that you need the quota increased.
- 3. Quota type: Select **Storage: Azure NetApp Files limits**.
- 4. Click **Next: Solutions**.
-4. On the Details tab:
- 1. In the Description box, provide the following information for the corresponding resource type:
+ ![Screenshot that shows the Details tab and the Enter Details field.](../media/azure-netapp-files/quota-additional-details.png)
- | Resource | Parent resources | Requested new limits | Reason for quota increase |
- |-||||
- | Account | *Subscription ID* | *Requested new maximum **account** number* | *What scenario or use case prompted the request?* |
- | Pool | *Subscription ID, NetApp account URI* | *Requested new maximum **pool** number* | *What scenario or use case prompted the request?* |
- | Volume | *Subscription ID, NetApp account URI, capacity pool URI* | *Requested new maximum **volume** number* | *What scenario or use case prompted the request?* |
- | Maxfiles | *Subscription ID, NetApp account URI, capacity pool URI, volume URI* | *Requested new maximum **maxfiles** number* | *What scenario or use case prompted the request?* |
- | Cross-region replication data protection volumes | *Subscription ID, destination NetApp account URI, destination capacity pool URI, source NetApp account URI, source capacity pool URI, source volume URI* | *Requested new maximum number of **cross-region replication data protection volumes (destination volumes)*** | *What scenario or use case prompted the request?* |
+1. In the Quota Details window that appears:
- 2. Specify the appropriate support method and provide your contract information.
+ 1. In Quota Type, select the type of resource you want to increase.
+ For example:
+ * *Regional Capacity Quota per Subscription (TiB)*
+ * *Number of NetApp accounts per Azure region per subscription*
+ * *Number of volumes per subscription*
- 3. Click **Next: Review + create** to create the request.
+ 1. In Region Requested, select your region.
+ The current and default sizes are displayed under Quota State.
+ 1. Enter a value to request an increase for the quota type you specified.
+
+ ![Screenshot that shows how to display and request increase for regional quota.](../media/azure-netapp-files/quota-details-regional-request.png)
+1. Click **Next** and **Review + create** to create the request.
## Next steps
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
description: Customers who need assistance can use the Azure portal to find self
ms.assetid: fd6841ea-c1d5-4bb7-86bd-0c708d193b89 Previously updated : 05/25/2021 Last updated : 08/24/2021 # Create an Azure support request
Next, we collect additional details about the problem. Providing thorough and de
1. If possible, tell us when the problem started and any steps to reproduce it. You can upload a file, such as a log file or output from diagnostics. For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines).
-1. After we have all the information about the problem, choose how to get support. In the **Support method** section of **Details**, select the severity of impact. The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
+1. In the **Share diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. In some cases, there will be additional options to choose from, such as whether to allow access to a virtual machine's memory.
- By default the **Share diagnostic information** option is selected. This allows Azure support to gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. You can clear this option if you prefer not to share diagnostic information. In some cases, there is a second question that isn't selected by default, such as requesting access to a virtual machine's memory.
+1. In the **Support method** section of **Details**, select the severity of impact. The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
-1. Provide your preferred contact method, a good time to contact you, and your support language.
+1. Provide your preferred contact method, your availability, and your preferred support language.
1. Next, complete the **Contact info** section so we know how to contact you.
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
description: Describes how to view support requests, send messages, change the r
tags: billing ms.assetid: 86697fdf-3499-4cab-ab3f-10d40d3c1f70 Previously updated : 05/25/2021 Last updated : 08/24/2021 # To add: close and reopen, review request status, update contact info
On this page, you can search, filter, and sort support requests. Select a suppor
## Share diagnostic information with Azure support
-When you create a support request, the **Share diagnostic information** option is selected by default. This option allows Azure support to gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources that can potentially help resolve your issue.
+When you create a support request, you can select **Yes** or **No** in the **Share diagnostic information** section. This option determines whether Azure support can gather [diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources that can potentially help resolve your issue.
To change your **Share diagnostic information** selection after the request has been created:
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/linter.md
The Bicep linter can be used to analyze Bicep files. It checks syntax errors, an
## Install linter
-The linter can be used with Visual Studio code and Bicep CLI. It requires:
+The linter can be used with Visual Studio Code and Bicep CLI. It requires:
- Bicep CLI version 0.4 or later. - Bicep extension for Visual Studio Code version 0.4 or later.
The Bicep extension of Visual Studio Code provides intellisense for editing Bice
:::image type="content" source="./media/linter/bicep-linter-configure-intellisense.png" alt-text="The intellisense support in configuring bicepconfig.json.":::
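A bicepconfig.json for the linter typically looks something like the following sketch; the rule name shown is one of the core rules, and the exact set of rules and levels depends on your Bicep version:

```json
{
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "no-unused-params": {
          "level": "warning"
        }
      }
    }
  }
}
```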
-## Use in Visual Studio code
+## Use in Visual Studio Code
Install the Bicep extension 0.4 or later to use linter. The following screenshot shows linter in action:
Select the solution to fix the issue automatically.
## Use in Bicep CLI
-Install the Bicep CLI 0.4 or later to use linter. The following screenshot shows linter in action. The Bicep file is the same as used in [Use in Visual Studio code](#use-in-visual-studio-code).
+Install the Bicep CLI 0.4 or later to use linter. The following screenshot shows linter in action. The Bicep file is the same as used in [Use in Visual Studio Code](#use-in-visual-studio-code).
:::image type="content" source="./media/linter/bicep-linter-command-line.png" alt-text="Bicep linter usage in command line.":::
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
This article describes the steps to move App Service resources. There are specif
When moving a Web App across subscriptions, the following guidance applies:
+- Moving a resource to a new resource group or subscription is a metadata change that shouldn't affect anything about how the resource functions. For example, the inbound IP address for an app service doesn't change when moving the app service.
- The destination resource group must not have any existing App Service resources. App Service resources include: - Web Apps - App Service plans
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
Previously updated : 08/23/2021 Last updated : 08/24/2021 tags: azure-synapse # Data Discovery & Classification
An important aspect of the classification is the ability to monitor access to se
[ ![Audit log](./media/data-discovery-and-classification-overview/11_data_classification_audit_log.png)](./media/data-discovery-and-classification-overview/11_data_classification_audit_log.png#lightbox)
+The following activities are audited with sensitivity information:
+- ALTER TABLE ... DROP COLUMN
+- BULK INSERT
+- DELETE
+- INSERT
+- MERGE
+- UPDATE
+- UPDATETEXT
+- WRITETEXT
+- DROP TABLE
+- BACKUP
+- DBCC CloneDatabase
+- SELECT INTO
+- INSERT INTO EXEC
+- TRUNCATE TABLE
+- DBCC SHOW_STATISTICS
+- sys.dm_db_stats_histogram
+
+Use [sys.fn_get_audit_file](https://docs.microsoft.com/sql/relational-databases/system-functions/sys-fn-get-audit-file-transact-sql) to return information from an audit file stored in an Azure Storage account.
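As a sketch of reading those records (the storage path is a placeholder, and the assumption is that auditing writes to a storage account), classification details surface in the `data_sensitivity_information` column:

```sql
-- Placeholder storage path; adjust to where your audit logs are written.
SELECT event_time, server_principal_name, statement, data_sensitivity_information
FROM sys.fn_get_audit_file('https://<storage-account>.blob.core.windows.net/sqldbauditlogs/<server>/<database>/', DEFAULT, DEFAULT)
WHERE data_sensitivity_information IS NOT NULL;
```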
+ ## <a id="permissions"></a>Permissions These built-in roles can read the data classification of a database:
azure-sql Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-overview.md
Previously updated : 10/26/2020 Last updated : 08/23/2021 # An overview of Azure SQL Database and SQL Managed Instance security capabilities [!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
Authentication is the process of proving the user is who they claim to be. Azure
## Authorization
-Authorization refers to the permissions assigned to a user within a database in Azure SQL Database or Azure SQL Managed Instance, and determines what the user is allowed to do. Permissions are controlled by adding user accounts to [database roles](/sql/relational-databases/security/authentication-access/database-level-roles) and assigning database-level permissions to those roles or by granting the user certain [object-level permissions](/sql/relational-databases/security/permissions-database-engine). For more information, see [Logins and users](logins-create-manage.md)
+Authorization refers to controlling access to resources and commands within a database. This is done by assigning permissions to a user within a database in Azure SQL Database or Azure SQL Managed Instance. Permissions are ideally managed by adding user accounts to [database roles](/sql/relational-databases/security/authentication-access/database-level-roles) and assigning database-level permissions to those roles. Alternatively, an individual user can also be granted certain [object-level permissions](/sql/relational-databases/security/permissions-database-engine). For more information, see [Logins and users](logins-create-manage.md).
-As a best practice, create custom roles when needed. Add users to the role with the least privileges required to do their job function. Do not assign permissions directly to users. The server admin account is a member of the built-in db_owner role, which has extensive permissions and should only be granted to few users with administrative duties. For applications, use the [EXECUTE AS](/sql/t-sql/statements/execute-as-clause-transact-sql) to specify the execution context of the called module or use [Application Roles](/sql/relational-databases/security/authentication-access/application-roles) with limited permissions. This practice ensures that the application that connects to the database has the least privileges needed by the application. Following these best practices also fosters separation of duties.
+As a best practice, create custom roles when needed. Add users to the role with the least privileges required to do their job function. Do not assign permissions directly to users. The server admin account is a member of the built-in db_owner role, which has extensive permissions and should only be granted to few users with administrative duties. To further limit the scope of what a user can do, the [EXECUTE AS](/sql/t-sql/statements/execute-as-clause-transact-sql) clause can be used to specify the execution context of the called module. Following these best practices is also a fundamental step towards separation of duties.
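As a small illustration of that practice, with hypothetical role, schema, and user names:

```sql
-- Create a custom role and grant it only the permissions the job function needs (hypothetical names).
CREATE ROLE sales_readers;
GRANT SELECT ON SCHEMA::Sales TO sales_readers;
ALTER ROLE sales_readers ADD MEMBER [app_user];
```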
### Row-level security
Dynamic data masking limits sensitive data exposure by masking it to non-privile
### Data discovery and classification
-Data discovery and classification (currently in preview) provides advanced capabilities built into Azure SQL Database and SQL Managed Instance for discovering, classifying, labeling, and protecting the sensitive data in your databases. Discovering and classifying your utmost sensitive data (business/financial, healthcare, personal data, etc.) can play a pivotal role in your organizational Information protection stature. It can serve as infrastructure for:
+Data discovery and classification (currently in preview) provides basic capabilities built into Azure SQL Database and SQL Managed Instance for discovering, classifying, and labeling the sensitive data in your databases. Discovering and classifying your most sensitive data (business/financial, healthcare, personal data, etc.) can play a pivotal role in your organizational information protection stature. It can serve as infrastructure for:
- Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data. - Controlling access to, and hardening the security of, databases containing highly sensitive data.
In addition to the above features and functionality that can help your applicati
- For a discussion of the use of logins, user accounts, database roles, and permissions in SQL Database and SQL Managed Instance, see [Manage logins and user accounts](logins-create-manage.md). - For a discussion of database auditing, see [auditing](../../azure-sql/database/auditing-overview.md).-- For a discussion of threat detection, see [threat detection](threat-detection-configure.md).
+- For a discussion of threat detection, see [threat detection](threat-detection-configure.md).
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
In this tutorial, you learn how to:
- This setup requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Create a resource group
+## Create an Azure Web PubSub instance
+
+### Create a resource group
[!INCLUDE [Create a resource group](includes/cli-rg-creation.md)]
-## Create a Web PubSub instance
+### Create a Web PubSub instance
[!INCLUDE [Create a Web PubSub instance](includes/cli-awps-creation.md)]
-## Get the ConnectionString for future use
+### Get the ConnectionString for future use
[!INCLUDE [Get the connection string](includes/cli-awps-connstr.md)]
Copy the fetched **ConnectionString** and it will be used later in this tutorial
-### Create the application
+## Create the application
-In Azure Web PubSub, there are two roles, server and client. This concept is similar to the sever and client roles in a web application. Server is responsible for managing the clients, listen and respond to client messages, while client's role is to send user's messages to server, and receive messages from server and visualize them to end user.
+In Azure Web PubSub, there are two roles: server and client. This concept is similar to the server and client roles in a web application. The server is responsible for managing the clients and for listening and responding to client messages, while the client's role is to send the user's messages to the server, and to receive messages from the server and display them to the end user.
In this tutorial, we'll build a real-time chat web application. In a real web application, server's responsibility also includes authenticating clients and serving static web pages for the application UI.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
```bash dotnet add package Microsoft.Extensions.Azure ```
-2. DI the service client inside `ConfigureServices` and don't forget to replace `<connection_string>` with the one of your service.
+2. Register the service client for dependency injection inside `ConfigureServices`, and don't forget to replace `<connection_string>` with the connection string of your service.
```csharp public void ConfigureServices(IServiceCollection services)
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
}); ```
- This token generation code is very similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. User ID can be used to identify the identity of client so when you receive a message you know where the message is coming from.
+ This token generation code is similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. User ID can be used to identify the identity of client so when you receive a message you know where the message is coming from.
You can test this API by running `dotnet run` and accessing `http://localhost:5000/negotiate?id=<user-id>` and it will give you the full url of the Azure Web PubSub with an access token.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
</script> ```
- You can test it by open the home page, input your user name, then you'll see `connected` being printed out in browser console.
+ You can test it by opening the home page and entering your user name; you'll then see `connected` printed in the browser console.
# [JavaScript](#tab/javascript) We'll use [express.js](https://expressjs.com/), a popular web framework for node.js to achieve this job.
-First let's create an empty express app.
+First create an empty express app.
1. Install express.js
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
app.listen(8080, () => console.log('server started')); ```
- This token generation code is very similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. User ID can be used to identify the identity of client so when you receive a message you know where the message is coming from.
+ This token generation code is similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. User ID can be used to identify the identity of client so when you receive a message you know where the message is coming from.
You can test this API by running `node server "<connection-string>"` and accessing `http://localhost:8080/negotiate?id=<user-id>` and it will give you the full url of the Azure Web PubSub with an access token.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
</script> ```
- You can test it by open the home page, input your user name, then you'll see `connected` being printed out in browser console.
+ You can test it by opening the home page and entering your user name; you'll then see `connected` printed in the browser console.
-### Handle events
+## Handle events
-In Azure Web PubSub, when there are certain activities happening at client side (for example a client is connected or disconnected), service will send notifications to sever so it can react to these events.
+In Azure Web PubSub, when certain activities happen at the client side (for example, a client connects or disconnects), the service sends notifications to the server so it can react to these events.
Events are delivered to server in the form of Webhook. Webhook is served and exposed by the application server and registered at the Azure Web PubSub service side. The service invokes the webhooks whenever an event happens. Azure Web PubSub follows [CloudEvents](./reference-cloud-events.md) to describe the event data. # [C#](#tab/csharp)
-For now, you need to implement the event handler by your own in C#, the steps are pretty straight forward following [the protocol spec](./reference-cloud-events.md) as well as illustrated below.
+For now, you need to implement the event handler on your own in C#. The steps are straightforward following [the protocol spec](./reference-cloud-events.md) and are illustrated below.
1. Add event handlers inside `UseEndpoints`. Specify the endpoint path for the events, let's say `/eventhandler`.
For now, you need to implement the event handler by your own in C#, the steps ar
}); ```
-3. Then we'd like to check if the incoming requests are the events we expects. Let's say we now cares about the system `connected` event, which should contains the header `ce-type` as `azure.webpubsub.sys.connected`. We add the logic after abuse protection:
+3. Then we'd like to check if the incoming requests are the events we expect. Let's say we now care about the system `connected` event, which should contain the header `ce-type` as `azure.webpubsub.sys.connected`. We add the logic after abuse protection:
```csharp app.UseEndpoints(endpoints => {
For now, you need to implement the event handler by your own in C#, the steps ar
}); ```
-In the above code we simply print a message to console when a client is connected. You can see we use `context.Request.Headers["ce-userId"]` so we can see the identity of the connected client.
+In the above code, we simply print a message to console when a client is connected. You can see we use `context.Request.Headers["ce-userId"]` so we can see the identity of the connected client.
# [JavaScript](#tab/javascript)
let handler = new WebPubSubEventHandler(hubName, ['*'], {
app.use(handler.getMiddleware()); ```
-In the above code we simply print a message to console when a client is connected. You can see we use `req.context.userId` so we can see the identity of the connected client.
+In the above code, we simply print a message to console when a client is connected. You can see we use `req.context.userId` so we can see the identity of the connected client.
-### Set up the event handler
+## Set up the event handler
-#### Expose localhost
+### Expose localhost
Then we need to set the Webhook URL in the service so it knows where to call when there is a new event. But there is a problem: our server is running on localhost, so it doesn't have an internet-accessible endpoint. Here we use [ngrok](https://ngrok.com/) to expose our localhost to the internet.
Then we need to set the Webhook URL in the service so it can know where to call
ngrok http 8080 ```
-ngrok will print out an URL (`https://<domain-name>.ngrok.io`) that can be accessed from internet.
+ngrok will print a URL (`https://<domain-name>.ngrok.io`) that can be accessed from internet.
-#### Set event handler
+### Set event handler
Then we update the service event handler and set the Webhook URL. [!INCLUDE [update event handler](includes/cli-awps-update-event-handler.md)]
-After the update is completed, open the home page http://localhost:5000/https://docsupdatetracker.net/index.html, input your user name, youΓÇÖll see the connected message printed out in the server console.
+After the update is completed, open the home page http://localhost:5000/index.html and enter your user name; you'll see the connected message printed in the server console.
-### Message events
+## Handle Message events
Besides system events like `connected` or `disconnected`, client can also send messages through the WebSocket connection and these messages will be delivered to server as a special type of event called `message` event. We can use this event to receive messages from one client and broadcast them to all clients so they can talk to each other. # [C#](#tab/csharp)
-The `ce-type` of `message` event is always `azure.webpubsub.user.message`, details please see [Event message](./reference-cloud-events.md#message).
+The `ce-type` of the `message` event is always `azure.webpubsub.user.message`. For details, see [Event message](./reference-cloud-events.md#message).
1. Handle message event
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
You can see in the above code we use `WebSocket.send()` to send message and `WebSocket.onmessage` to listen to message from service.
-3. Finally let's also update the `onConnected` handler to broadcast the connected event to all clients so they can see who joined the chat room.
+3. Finally update the `onConnected` handler to broadcast the connected event to all clients so they can see who joined the chat room.
```csharp app.UseEndpoints(endpoints =>
The complete code sample of this tutorial can be found [here][code].
This event handler uses `WebPubSubServiceClient.sendToAll()` to broadcast the received message to all clients.
- You can see `handleUserEvent` also has a `res` object where you can send message back to the event sender. Here we simply call `res.success()` to make the WebHook return 200 (please note this is required even you don't want to return anything back to client, otherwise the WebHook will never return and client connection will be closed).
+ You can see `handleUserEvent` also has a `res` object where you can send a message back to the event sender. Here we simply call `res.success()` to make the WebHook return 200 (note this call is required even if you don't want to return anything to the client; otherwise the WebHook never returns and the client connection will be closed).
2. Update `index.html` to add the logic to send messages from the user to the server and display received messages in the page.
The complete code sample of this tutorial can be found [here][code].
You can see in the above code we use `WebSocket.send()` to send message and `WebSocket.onmessage` to listen to message from service.
-3. `sendToAll` accepts object as an input and send JSON text to the clients. In real scenarios, we probably need complex object to carry more information about the message. Finally let's also update the handlers to broadcast JSON objects to all clients:
+3. `sendToAll` accepts an object as input and sends JSON text to the clients. In real scenarios, we probably need a complex object to carry more information about the message. Finally, update the handlers to broadcast JSON objects to all clients:
```javascript let handler = new WebPubSubEventHandler(hubName, ['*'], {
azure-web-pubsub Tutorial Pub Sub Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-pub-sub-messages.md
In this tutorial, you learn how to:
- This setup requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Create a resource group
+## Create an Azure Web PubSub instance
+
+### Create a resource group
[!INCLUDE [Create a resource group](includes/cli-rg-creation.md)]
-## Create a Web PubSub instance
+### Create a Web PubSub instance
[!INCLUDE [Create a Web PubSub instance](includes/cli-awps-creation.md)]
-## Get the ConnectionString for future use
+### Get the ConnectionString for future use
[!INCLUDE [Get the connection string](includes/cli-awps-connstr.md)]
Copy the fetched **ConnectionString** and it will be used later in this tutorial
-### Set up the subscriber
+## Set up the subscriber
Clients connect to the Azure Web PubSub service through the standard WebSocket protocol using [JSON Web Token (JWT)](https://jwt.io/) authentication. The service SDK provides helper methods to generate the token. In this tutorial, the subscriber directly generates the token from *ConnectionString*. In real applications, we usually use a server-side application to handle the authentication/authorization workflow. Try the [Build a chat app](./tutorial-build-chat.md) tutorial to better understand the workflow.
Clients connect to the Azure Web PubSub service through the standard WebSocket p
-### Publish messages using service SDK
+## Publish messages using service SDK
Now let's use Azure Web PubSub SDK to publish a message to the connected client.
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-subprotocol.md
+
+ Title: Tutorial - Publish and subscribe messages between WebSocket clients using subprotocol in Azure Web PubSub service
+description: A tutorial to walk through how to use Azure Web PubSub service and its supported WebSocket subprotocol to sync between clients.
++++ Last updated : 08/16/2021++
+# Tutorial: Publish and subscribe messages between WebSocket clients using subprotocol
+
+In the [Build a chat app tutorial](./tutorial-build-chat.md), you've learned how to use WebSocket APIs to send and receive data with Azure Web PubSub. You can see there's no protocol needed when the client is communicating with the service. For example, you can use `WebSocket.send()` to send any data, and the server will receive the data as is. This is easy to use, but the functionality is also limited. You can't, for example, specify the event name when sending the event to the server, or publish messages to other clients instead of sending them to the server. In this tutorial, you'll learn how to use a subprotocol to extend the functionality of the client.
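+To make the contrast concrete, here's a rough, illustrative sketch (the `ws` variable and the group name `stream` are assumptions, not part of the chat sample): a raw frame versus a subprotocol frame.
+
+```javascript
+// Without a subprotocol: the payload is opaque to the service and is delivered
+// to your upstream server as-is.
+ws.send('Hello');
+
+// With the json.webpubsub.azure.v1 subprotocol: the JSON frame itself tells the
+// service what to do, for example publish straight to a group of clients.
+ws.send(JSON.stringify({ type: 'sendToGroup', group: 'stream', dataType: 'text', data: 'Hello' }));
+```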
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a Web PubSub service instance
+> * Generate the full URL to establish the WebSocket connection
+> * Publish messages between WebSocket clients using subprotocol
+++
+- This setup requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create an Azure Web PubSub instance
+
+### Create a resource group
++
+### Create a Web PubSub instance
++
+### Get the ConnectionString for future use
++
+Copy the fetched **ConnectionString**; it will be used later in this tutorial as the value of `<connection_string>`.
+
+## Set up the project
+
+### Prerequisites
+
+# [C#](#tab/csharp)
+
+* [ASP.NET Core 3.1 or above](/aspnet/core)
+
+# [JavaScript](#tab/javascript)
+
+* [Node.js 12.x or above](https://nodejs.org)
+
+# [Python](#tab/python)
+* [Python](https://www.python.org/)
+++
+## Using a subprotocol
+
+The client can start a WebSocket connection using a specific [subprotocol](https://datatracker.ietf.org/doc/html/rfc6455#section-1.9). Azure Web PubSub service supports a subprotocol called `json.webpubsub.azure.v1` to empower the clients to do publish/subscribe directly instead of a round trip to the upstream server. Check [Azure Web PubSub supported JSON WebSocket subprotocol](./reference-json-webpubsub-subprotocol.md) for details about the subprotocol.
+
+> If you use other protocol names, they will be ignored by the service and passed through to the server in the connect event handler, so you can build your own protocols.
+
+Now let's create a web application using the `json.webpubsub.azure.v1` subprotocol.
+
+1. Install dependencies
+
+ # [C#](#tab/csharp)
+ ```bash
+ mkdir logstream
+ cd logstream
+ dotnet new web
+ dotnet add package Microsoft.Extensions.Azure
+ dotnet add package Azure.Messaging.WebPubSub --prerelease
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ ```bash
+ mkdir logstream
+ cd logstream
+ npm init -y
+ npm install --save express
+ npm install --save ws
+ npm install --save node-fetch
+ npm install --save @azure/web-pubsub
+ ```
+
+ # [Python](#tab/python)
+
+ ```bash
+ mkdir logstream
+ cd logstream
+
+ # Create venv
+ python -m venv env
+
+ # Activate venv
+ ./env/Scripts/activate
+
+ # Or call .\env\Scripts\activate when you are using CMD under Windows
+
+ pip install azure-messaging-webpubsubservice
+ ```
+
+
+
+2. Create the server side to host the `/negotiate` API and web page.
+
+ # [C#](#tab/csharp)
+
+ Update `Startup.cs` with the below code.
+ - Update the `ConfigureServices` method to add the service client, and read the connection string from configuration.
+ - Update the `Configure` method to add `app.UseStaticFiles();` before `app.UseRouting();` to support static files.
+ - And update `app.UseEndpoints` to generate the client access token with `/negotiate` requests.
+
+ ```csharp
+ using Azure.Messaging.WebPubSub;
+
+ using Microsoft.AspNetCore.Builder;
+ using Microsoft.AspNetCore.Hosting;
+ using Microsoft.AspNetCore.Http;
+ using Microsoft.Extensions.Azure;
+ using Microsoft.Extensions.Configuration;
+ using Microsoft.Extensions.DependencyInjection;
+ using Microsoft.Extensions.Hosting;
+
+ namespace logstream
+ {
+ public class Startup
+ {
+ public Startup(IConfiguration configuration)
+ {
+ Configuration = configuration;
+ }
+
+ public IConfiguration Configuration { get; }
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddAzureClients(builder =>
+ {
+ builder.AddWebPubSubServiceClient(Configuration["Azure:WebPubSub:ConnectionString"], "stream");
+ });
+ }
+
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+ {
+ if (env.IsDevelopment())
+ {
+ app.UseDeveloperExceptionPage();
+ }
+
+ app.UseStaticFiles();
+
+ app.UseRouting();
+
+ app.UseEndpoints(endpoints =>
+ {
+ endpoints.MapGet("/negotiate", async context =>
+ {
+ var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient>();
+ var response = new
+ {
+ url = serviceClient.GenerateClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.stream", "webpubsub.joinLeaveGroup.stream" }).AbsoluteUri
+ };
+ await context.Response.WriteAsJsonAsync(response);
+ });
+ });
+ }
+ }
+ }
+
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ Create a `server.js` file and add the below code:
+
+ ```javascript
+ const express = require('express');
+ const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+ let endpoint = new WebPubSubServiceClient(process.argv[2], 'stream');
+ const app = express();
+
+ app.get('/negotiate', async (req, res) => {
+ let token = await endpoint.getAuthenticationToken({
+ roles: ['webpubsub.sendToGroup.stream', 'webpubsub.joinLeaveGroup.stream']
+ });
+ res.send({
+ url: token.url
+ });
+ });
+
+ app.use(express.static('public'));
+ app.listen(8080, () => console.log('server started'));
+ ```
+
+ # [Python](#tab/python)
+
+ Create a `server.py` to host the `/negotiate` API and web page.
+
+ ```python
+ import json
+ import sys
+ from http.server import HTTPServer, SimpleHTTPRequestHandler
+ from azure.messaging.webpubsubservice import (
+     build_authentication_token
+ )
+
+ class Request(SimpleHTTPRequestHandler):
+     def do_GET(self):
+         if self.path == '/':
+             self.path = 'public/index.html'
+             return SimpleHTTPRequestHandler.do_GET(self)
+         elif self.path == '/negotiate':
+             token = build_authentication_token(sys.argv[1], 'stream', roles=['webpubsub.sendToGroup.stream', 'webpubsub.joinLeaveGroup.stream'])
+             print(token)
+             self.send_response(200)
+             self.send_header('Content-Type', 'application/json')
+             self.end_headers()
+             self.wfile.write(json.dumps({
+                 'url': token['url']
+             }).encode())
+
+ if __name__ == '__main__':
+     if len(sys.argv) != 2:
+         print('Usage: python server.py <connection-string>')
+         exit(1)
+
+     server = HTTPServer(('localhost', 8080), Request)
+     print('server started')
+     server.serve_forever()
+ ```
+
+
+
+3. Create the web page
+
+ # [C#](#tab/csharp)
+ Create an HTML page with the below content and save it as `wwwroot/index.html`:
+
+ # [JavaScript](#tab/javascript)
+
+ Create an HTML page with the below content and save it as `public/index.html`:
+
+ # [Python](#tab/python)
+
+ Create an HTML page with the below content and save it as `public/index.html`:
+
+
+
+ ```html
+ <html>
+
+ <body>
+ <div id="output"></div>
+ <script>
+ (async function () {
+ let res = await fetch('/negotiate')
+ let data = await res.json();
+ let ws = new WebSocket(data.url, 'json.webpubsub.azure.v1');
+ ws.onopen = () => {
+ console.log('connected');
+ };
+
+ let output = document.querySelector('#output');
+ ws.onmessage = event => {
+ let d = document.createElement('p');
+ d.innerText = event.data;
+ output.appendChild(d);
+ };
+ })();
+ </script>
+ </body>
+
+ </html>
+ ```
+
+ It just connects to the service and prints any message received to the page. The main change is that we specify the subprotocol when creating the WebSocket connection.
+
+4. Run the server
+ # [C#](#tab/csharp)
+ We use the [Secret Manager](/aspnet/core/security/app-secrets#secret-manager) tool for .NET Core to set the connection string. Run the below command, replacing `<connection-string>` with the one fetched in the [previous step](#get-the-connectionstring-for-future-use), and open http://localhost:5000/index.html in a browser:
+
+ ```bash
+ dotnet user-secrets init
+ dotnet user-secrets set Azure:WebPubSub:ConnectionString "<connection-string>"
+ dotnet run
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ Now run the below command, replacing `<connection-string>` with the **ConnectionString** fetched in the [previous step](#get-the-connectionstring-for-future-use), and open http://localhost:8080 in a browser:
+
+ ```bash
+
+ node server "<connection-string>"
+ ```
+
+ # [Python](#tab/python)
+
+ Now run the below command, replacing `<connection-string>` with the **ConnectionString** fetched in the [previous step](#get-the-connectionstring-for-future-use), and open http://localhost:8080 in a browser:
+
+ ```bash
+ python server.py "<connection-string>"
+ ```
+
+
+ If you're using Chrome, you can press F12 or right-click -> **Inspect** -> **Developer Tools**, and select the **Network** tab. Load the web page, and you can see the WebSocket connection is established. Click to inspect the WebSocket connection; you can see the below `connected` event message received by the client, which includes the `connectionId` generated for this client.
+
+ ```json
+ {"type":"system","event":"connected","userId":null,"connectionId":"<the_connection_id>"}
+ ```
+
+You can see that with the help of the subprotocol, you can get some metadata about the connection when it's `connected`.
+
+Also note that, instead of plain text, the client now receives a JSON message that contains more information, like the message type and where it's from. You can use this information to do more processing on the message (for example, display the message in a different style if it's from a different source), as you'll see in later sections.
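+For example, a client could branch on those fields before rendering, along these lines (a sketch only; the tutorial's own message handling appears in the later steps):
+
+```javascript
+// Sketch: inspect the subprotocol envelope instead of treating event.data as plain text.
+ws.onmessage = event => {
+  let message = JSON.parse(event.data);
+  if (message.type === 'system') {
+    console.log(`system event: ${message.event}`);          // e.g. connected / disconnected
+  } else if (message.type === 'message' && message.from === 'group') {
+    console.log(`group "${message.group}" says: ${message.data}`);
+  } else {
+    console.log(`server says: ${message.data}`);
+  }
+};
+```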
+
+## Publish messages from client
+
+In the [Build a chat app](./tutorial-build-chat.md) tutorial, when the client sends a message through the WebSocket connection, it triggers a user event at the server side. With the subprotocol, the client gains more functionality by sending a JSON message. For example, you can publish messages directly from the client to other clients.
+
+This is useful if you want to stream a large amount of data to other clients in real time. Let's use this feature to build a log streaming application, which can stream console logs to the browser in real time.
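+At the protocol level, publishing from a client is just a JSON frame like the one below; this is a preview sketch, and the full stream programs in the following steps send exactly this shape of message.
+
+```javascript
+// Sketch of the frame a client sends to publish directly to a group,
+// skipping the round trip to the upstream server.
+ws.send(JSON.stringify({
+  type: 'sendToGroup',
+  group: 'stream',
+  dataType: 'text',
+  data: 'one line of console output\n'
+}));
+```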
+
+1. Create the streaming program
+
+ # [C#](#tab/csharp)
+ Create a `stream` program:
+ ```bash
+ mkdir stream
+ cd stream
+ dotnet new console
+ ```
+
+ Update `Program.cs` with the following content:
+ ```csharp
+ using System;
+ using System.Net.Http;
+ using System.Net.WebSockets;
+ using System.Text;
+ using System.Text.Json;
+ using System.Threading.Tasks;
+
+ namespace stream
+ {
+ class Program
+ {
+ private static readonly HttpClient http = new HttpClient();
+ static async Task Main(string[] args)
+ {
+ // Get client url from remote
+ var stream = await http.GetStreamAsync("http://localhost:5000/negotiate");
+ var url = (await JsonSerializer.DeserializeAsync<ClientToken>(stream)).url;
+ var client = new ClientWebSocket();
+ client.Options.AddSubProtocol("json.webpubsub.azure.v1");
+
+ await client.ConnectAsync(new Uri(url), default);
+
+ Console.WriteLine("Connected.");
+ var streaming = Console.ReadLine();
+ while (streaming != null)
+ {
+ if (!string.IsNullOrEmpty(streaming))
+ {
+ var message = JsonSerializer.Serialize(new
+ {
+ type = "sendToGroup",
+ group = "stream",
+ data = streaming + Environment.NewLine,
+ });
+ Console.WriteLine("Sending " + message);
+ await client.SendAsync(Encoding.UTF8.GetBytes(message), WebSocketMessageType.Text, true, default);
+ }
+
+ streaming = Console.ReadLine();
+ }
+
+ await client.CloseAsync(WebSocketCloseStatus.NormalClosure, null, default);
+ }
+
+ private sealed class ClientToken
+ {
+ public string url { get; set; }
+ }
+ }
+ }
+
+ ```
+
+ # [JavaScript](#tab/javascript)
+ Create a `stream.js` with the following content.
+
+ ```javascript
+ const WebSocket = require('ws');
+ const fetch = require('node-fetch');
+
+ async function main() {
+ let res = await fetch(`http://localhost:8080/negotiate`);
+ let data = await res.json();
+ let ws = new WebSocket(data.url, 'json.webpubsub.azure.v1');
+ ws.on('open', () => {
+ process.stdin.on('data', data => {
+ ws.send(JSON.stringify({
+ type: 'sendToGroup',
+ group: 'stream',
+ dataType: 'text',
+ data: data.toString()
+ }));
+ process.stdout.write(data);
+ });
+ });
+ process.stdin.on('close', () => ws.close());
+ }
+
+ main();
+ ```
+
+ The code above creates a WebSocket connection to the service, and then whenever it receives some data, it uses `ws.send()` to publish the data. To publish to others, you just need to set `type` to `sendToGroup` and specify a group name in the message.
+
+ # [Python](#tab/python)
+
+ Open another bash window for the `stream` program, and install the `websockets` dependency:
+
+ ```bash
+ mkdir stream
+ cd stream
+
+ # Create venv
+ python -m venv env
+
+ # Activate venv
+ ./env/Scripts/activate
+
+ # Or call .\env\Scripts\activate when you are using CMD under Windows
+ pip install websockets
+ ```
+
+ Create a `stream.py` with the following content.
+
+ ```python
+ import asyncio
+ import sys
+ import threading
+ import time
+ import websockets
+ import requests
+ import json
+
+ async def connect(url):
+     async with websockets.connect(url, subprotocols=['json.webpubsub.azure.v1']) as ws:
+         print('connected')
+         id = 1
+         while True:
+             data = input()
+             payload = {
+                 'type': 'sendToGroup',
+                 'group': 'stream',
+                 'dataType': 'text',
+                 'data': str(data + '\n'),
+                 'ackId': id
+             }
+             id = id + 1
+             await ws.send(json.dumps(payload))
+             await ws.recv()
+
+ res = requests.get('http://localhost:8080/negotiate').json()
+
+ try:
+     asyncio.get_event_loop().run_until_complete(connect(res['url']))
+ except KeyboardInterrupt:
+     pass
+
+ ```
+
+ The code above creates a WebSocket connection to the service, and then whenever it receives some data, it uses `ws.send()` to publish the data. To publish to others, you just need to set `type` to `sendToGroup` and specify a group name in the message.
+
+
+
+ You can see there's a new concept here: "group". A group is a logical concept in a hub where you can publish messages to a group of connections. In a hub, you can have multiple groups, and one client can subscribe to multiple groups at the same time. When using the subprotocol, you can only publish to a group instead of broadcasting to the whole hub. For details about these terms, check the [basic concepts](./key-concepts.md).
+
+2. Since we use a group here, we also need to update the web page `index.html` to join the group when the WebSocket connection is established, inside the `ws.onopen` callback.
+
+ ```javascript
+ ws.onopen = () => {
+ console.log('connected');
+ ws.send(JSON.stringify({
+ type: 'joinGroup',
+ group: 'stream'
+ }));
+ };
+ ```
+
+ You can see the client joins the group by sending a message of type `joinGroup`.
+
+3. Also update the `ws.onmessage` callback logic slightly to parse the JSON response and print only the messages from the `stream` group, so that it acts as a live stream printer.
+
+ ```javascript
+ ws.onmessage = event => {
+ let message = JSON.parse(event.data);
+ if (message.type === 'message' && message.group === 'stream') {
+ let d = document.createElement('span');
+ d.innerText = message.data;
+ output.appendChild(d);
+ window.scrollTo(0, document.body.scrollHeight);
+ }
+ };
+ ```
+
+4. For security reasons, by default a client can't publish or subscribe to a group by itself. That's why you may have noticed that we set `roles` for the client when generating the token:
+
+ # [C#](#tab/csharp)
+ Set the `roles` when calling `GenerateClientAccessUri` in `Startup.cs`, as shown below:
+ ```csharp
+ serviceClient.GenerateClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.stream", "webpubsub.joinLeaveGroup.stream" })
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ Add the `roles` when calling `getAuthenticationToken` in `server.js`, as shown below:
+
+ ```javascript
+ app.get('/negotiate', async (req, res) => {
+ let token = await endpoint.getAuthenticationToken({
+ roles: ['webpubsub.sendToGroup.stream', 'webpubsub.joinLeaveGroup.stream']
+ });
+ ...
+ });
+
+ ```
+
+ # [Python](#tab/python)
+
+ Update the token generation code to give the client these `roles` when calling `build_authentication_token` in `server.py`:
+
+ ```python
+ token = build_authentication_token(sys.argv[1], 'stream', roles=['webpubsub.sendToGroup.stream', 'webpubsub.joinLeaveGroup.stream'])
+
+ ```
+
+
+
+5. Finally, apply some style to `index.html` so it displays nicely.
+
+ ```html
+ <html>
+
+ <head>
+ <style>
+ #output {
+ white-space: pre;
+ font-family: monospace;
+ }
+ </style>
+ </head>
+ ```
+
+Now run the below command and type any text; it will be displayed in the browser in real time:
+
+# [C#](#tab/csharp)
+
+```bash
+ls -R | dotnet run
+
+# Or call `dir /s /b | dotnet run` when you are using CMD under Windows
+
+```
+
+Or you can make it slower so you can see the data being streamed to the browser in real time:
+
+```bash
+for i in $(ls -R); do echo $i; sleep 0.1; done | dotnet run
+```
+
+The complete code sample of this tutorial can be found [here][code-csharp].
+
+# [JavaScript](#tab/javascript)
+
+`node stream`
+
+Or you can use this app to pipe any output from another console app and stream it to the browser. For example:
+
+```bash
+ls -R | node stream
+
+# Or call `dir /s /b | node stream` when you are using CMD under Windows
+```
+
+Or you can make it slower so you can see the data being streamed to the browser in real time:
+
+```bash
+for i in $(ls -R); do echo $i; sleep 0.1; done | node stream
+```
+
+The complete code sample of this tutorial can be found [here][code-js].
+
+# [Python](#tab/python)
+
+Now you can run `python stream.py` and type any text; it will be displayed in the browser in real time.
+
+Or you can use this app to pipe any output from another console app and stream it to the browser. For example:
+
+```bash
+ls -R | python stream.py
+
+# Or call `dir /s /b | python stream.py` when you are using CMD under Windows
+```
+
+Or you can make it slower so you can see the data being streamed to the browser in real time:
+
+```bash
+for i in $(ls -R); do echo $i; sleep 0.1; done | python stream.py
+```
+
+The complete code sample of this tutorial can be found [here][code-python].
+++++
+## Next steps
+
+This tutorial gives you a basic idea of how to connect to the Web PubSub service and how to publish messages to the connected clients using a subprotocol.
+
+Check out the other tutorials to dive deeper into how to use the service.
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://aka.ms/awps/samples)
+
+[code-csharp]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/logstream/
+
+[code-js]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/logstream/
+
+[code-python]: https://github.com/Azure/azure-webpubsub/tree/main/samples/python/logstream/
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-client-automation.md
Title: Use PowerShell to back up Windows Server to Azure description: In this article, learn how to use PowerShell to set up Azure Backup on Windows Server or a Windows client, and manage backup and recovery. Previously updated : 12/2/2019 Last updated : 08/24/2021
PolicyState : Valid
Now the policy object is complete and has an associated backup schedule, retention policy, and an inclusion/exclusion list of files. This policy can now be committed for Azure Backup to use. Before you apply the newly created policy, ensure that there are no existing backup policies associated with the server by using the [Remove-OBPolicy](/powershell/module/msonlinebackup/remove-obpolicy) cmdlet. Removing the policy will prompt for confirmation. To skip the confirmation, use the `-Confirm:$false` flag with the cmdlet.
+>[!Note]
+>While running the cmdlet, if it prompts you to set a Security PIN, see the [Method 1 section](/azure/backup/backup-azure-delete-vault#method-1).
+
```powershell
Get-OBPolicy | Remove-OBPolicy
```
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 08/19/2021 Last updated : 08/24/2021
TenantId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Type : SystemAssigned ```
-### Assign user-assigned managed identity to the vault
+### Assign user-assigned managed identity to the vault (in preview)
+
+>[!Note]
+>- Vaults using user-assigned managed identities for CMK encryption don't support the use of private endpoints for Backup.
+>- Azure Key Vaults limiting access to specific networks aren't yet supported for use along with user-assigned managed identities for CMK encryption.
To assign the user-assigned managed identity for your Recovery Services vault, perform the following steps:
batch Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/monitoring-overview.md
Title: Monitor Azure Batch description: Learn about Azure monitoring services, metrics, diagnostic logs, and other monitoring features for Azure Batch. Previously updated : 04/05/2018 Last updated : 08/23/2021 # Monitor Batch solutions
-Azure and the Batch service provide a range of services, tools, and APIs to monitor your Batch solutions. This overview article helps you choose a monitoring approach that fits your needs.
-
-For an overview of the Azure components and services available to monitor Azure resources, see [Monitoring Azure applications and resources](../azure-monitor/overview.md).
+[Azure Monitor](../azure-monitor/overview.md) and the Batch service provide a range of services, tools, and APIs to monitor your Batch solutions. This overview article helps you choose a monitoring approach that fits your needs.
## Subscription-level monitoring
-At the subscription level, which includes Batch accounts, the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md) collects operational event data in [several categories](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
+At the subscription level, which includes Batch accounts, the [Azure activity log](../azure-monitor/essentials/activity-log.md) collects operational event data in several categories.
For Batch accounts specifically, the activity log collects events related to account creation and deletion and key management.
-One way to retrieve events from your activity log is to use the Azure portal. Click **All services** > **Activity Log**. Or, query for events using the Azure CLI, PowerShell cmdlets, or the Azure Monitor REST API. You can also export the activity log, or configure [activity log alerts](../azure-monitor/alerts/alerts-activity-log.md).
+You can view the activity log in the Azure portal, or query for events using the Azure CLI, PowerShell cmdlets, or the Azure Monitor REST API. You can also export the activity log, or configure [activity log alerts](../azure-monitor/alerts/alerts-activity-log.md).
## Batch account-level monitoring
-Monitor each Batch account using features of [Azure Monitor](../azure-monitor/overview.md). Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and optionally [diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) for resources scoped at the level of a Batch account, such as pools, jobs, and tasks. Collect and consume this data manually or programmatically to monitor activities in your Batch account and to diagnose issues. For details, see [Batch metrics, alerts, and logs for diagnostic evaluation and monitoring](batch-diagnostics.md).
-
+Monitor each Batch account using features of [Azure Monitor](../azure-monitor/overview.md). Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and optionally [resource logs](../azure-monitor/essentials/resource-logs.md) for resources within a Batch account, such as pools, jobs, and tasks. Collect and consume this data manually or programmatically to monitor activities in your Batch account and to diagnose issues. For more information, see [Batch metrics, alerts, and logs for diagnostic evaluation and monitoring](batch-diagnostics.md).
+ > [!NOTE]
-> Metrics are available by default in your Batch account without additional configuration, and they have a 30-day rolling history. You must enable diagnostic logging for a Batch account, and you may incur additional costs to store or process diagnostic log data.
+> Metrics are available by default in your Batch account without additional configuration, and they have a 30-day rolling history. You must create a diagnostic setting for a Batch account in order to send its resource logs to a Log Analytics workspace, and you may incur additional costs to store or process resource log data.
## Batch resource monitoring In your Batch applications, use the Batch APIs to monitor or query the status of your resources including jobs, tasks, nodes, and pools. For example:
-* [Count tasks and compute nodes by state](batch-get-resource-counts.md)
-* [Create queries to list Batch resources efficiently](batch-efficient-list-queries.md)
-* [Create task dependencies](batch-task-dependencies.md)
-* Use a [job manager task](/rest/api/batchservice/job/add#jobmanagertask)
-* Monitor the [task state](/rest/api/batchservice/task/list#taskstate)
-* Monitor the [node state](/rest/api/batchservice/computenode/list#computenodestate)
-* Monitor the [pool state](/rest/api/batchservice/pool/get#poolstate)
-* Monitor [pool usage in the account](/rest/api/batchservice/pool/listusagemetrics)
-* [Count pool nodes by state](/rest/api/batchservice/account/listpoolnodecounts)
+- [Count tasks and compute nodes by state](batch-get-resource-counts.md)
+- [Create queries to list Batch resources efficiently](batch-efficient-list-queries.md)
+- [Create task dependencies](batch-task-dependencies.md)
+- Use a [job manager task](/rest/api/batchservice/job/add#jobmanagertask)
+- Monitor the [task state](/rest/api/batchservice/task/list#taskstate)
+- Monitor the [node state](/rest/api/batchservice/computenode/list#computenodestate)
+- Monitor the [pool state](/rest/api/batchservice/pool/get#poolstate)
+- Monitor [pool usage in the account](/rest/api/batchservice/pool/listusagemetrics)
+- Count [pool nodes by state](/rest/api/batchservice/account/listpoolnodecounts)
-## VM performance counters and application monitoring
+## Additional monitoring solutions
-* [Application Insights](../azure-monitor/app/app-insights-overview.md) is an Azure service you can use to programmatically monitor the availability, performance, and usage of your Batch jobs and tasks. Easily get performance counters from compute nodes (VMs) and custom information for tasks off of the VMs.
+Use [Application Insights](../azure-monitor/app/app-insights-overview.md) to programmatically monitor the availability, performance, and usage of your Batch jobs and tasks. Application Insights lets you monitor performance counters from compute nodes (VMs) and retrieve custom information for the tasks that run on them.
- For an example, see [Monitor and debug a Batch .NET application with Application Insights](monitor-application-insights.md) and the accompanying [code sample](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights).
+For an example, see [Monitor and debug a Batch .NET application with Application Insights](monitor-application-insights.md) and the accompanying [code sample](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights).
- > [!NOTE]
- > You may incur additional costs to use Application Insights. See the [pricing options](https://azure.microsoft.com/pricing/details/application-insights/).
- >
-
-* [Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows. Optionally configure your Batch solution to [display Application Insights data](https://github.com/Azure/batch-insights) such as VM performance counters in Batch Explorer.
+> [!NOTE]
+> You may incur additional costs to use Application Insights. See the [pricing information](https://azure.microsoft.com/pricing/details/application-insights/).
+[Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows. Optionally, use [Azure Batch Insights](https://github.com/Azure/batch-insights) to get system statistics for your Batch nodes, such as VM performance counters, in Batch Explorer.
## Next steps
-* Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.
-* Learn more about [diagnostic logging](batch-diagnostics.md) with Batch.
+- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.
+- Learn more about [diagnostic logging](batch-diagnostics.md) with Batch.
batch Pool File Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/pool-file-shares.md
Title: Azure file share for Azure Batch pools description: How to mount an Azure Files share from compute nodes in a Linux or Windows pool in Azure Batch. Previously updated : 05/24/2018 Last updated : 08/23/2021 # Use an Azure file share with a Batch pool
-[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) protocol. This article provides information and code examples for mounting and using an Azure file share on pool compute nodes.
+[Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) protocol. You can mount and use an Azure file share on Batch pool compute nodes.
## Considerations for use with Batch
-* Consider using an Azure file share when you have pools that run a relatively low number of parallel tasks if using non-premium Azure Files. Review the [performance and scale targets](../storage/files/storage-files-scale-targets.md) to determine if Azure Files (which uses an Azure Storage account) should be used, given your expected pool size and number of asset files.
+Consider using an Azure file share when you have pools that run a relatively low number of parallel tasks, if you're using non-premium Azure Files. Review the [performance and scale targets](../storage/files/storage-files-scale-targets.md) to determine if Azure Files (which uses an Azure Storage account) should be used, given your expected pool size and number of asset files.
-* Azure file shares are [cost-efficient](https://azure.microsoft.com/pricing/details/storage/files/) and can be configured with data replication to another region so are globally redundant.
+Azure file shares are [cost-efficient](https://azure.microsoft.com/pricing/details/storage/files/) and can be configured with data replication to another region to be globally redundant.
-* You can mount an Azure file share concurrently from an on-premises computer. However, ensure that you understand [concurrency implications](../storage/blobs/concurrency-manage.md) especially when using REST APIs.
-
-* See also the general [planning considerations](../storage/files/storage-files-planning.md) for Azure file shares.
+You can mount an Azure file share concurrently from an on-premises computer. However, ensure that you understand [concurrency implications](../storage/blobs/concurrency-manage.md), especially when using REST APIs.
+See also the general [planning considerations](../storage/files/storage-files-planning.md) for Azure file shares.
## Create a file share
-[Create a file share](../storage/files/storage-how-to-create-file-share.md) in a storage account that is linked to your Batch account, or in a separate storage account.
+You can create an Azure file share in a storage account that is linked to your Batch account, or in a separate storage account. For more information, see [Create an Azure file share](../storage/files/storage-how-to-create-file-share.md).
-## Mount an Azure File share on a Batch pool
+## Mount an Azure file share on a Batch pool
-Please refer to the documentation on how to [Mount a virtual file system on a Batch pool](virtual-file-mount.md).
+For details on how to mount an Azure file share on a pool, see [Mount a virtual file system on a Batch pool](virtual-file-mount.md).
## Next steps
-* For other options to read and write data in Batch, see [Persist job and task output](batch-task-output.md).
-* See also the [Batch Shipyard](https://github.com/Azure/batch-shipyard) toolkit, which includes [Shipyard recipes](https://github.com/Azure/batch-shipyard/tree/master/recipes) to deploy file systems for Batch container workloads.
+- To learn about other options to read and write data in Batch, see [Persist job and task output](batch-task-output.md).
+- Explore the [Batch Shipyard](https://github.com/Azure/batch-shipyard) toolkit, which includes [Shipyard recipes](https://github.com/Azure/batch-shipyard/tree/master/recipes) to deploy file systems for Batch container workloads.
batch Pool File Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-php-create-web-role.md
Last updated 04/11/2018
# Create PHP web and worker roles
-## Overview
+
This guide will show you how to create PHP web or worker roles in a Windows development environment, choose a specific version of PHP from the "built-in" versions available, change the PHP configuration, enable extensions, and finally, deploy to Azure. It also describes how to configure a web or worker role to use a PHP runtime (with custom configuration and extensions) that you provide. Azure provides three compute models for running applications: Azure App Service, Azure Virtual Machines, and Azure Cloud Services. All three models support PHP. Cloud Services, which includes web and worker roles, provides *platform as a service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running or perpetual tasks independent of user interaction or input.
cloud-services Applications Dont Support Tls 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/applications-dont-support-tls-1-2.md
Last updated 03/16/2020
# Troubleshooting applications that don't support TLS 1.2
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article describes how to enable the older TLS protocols (TLS 1.0 and 1.1) as well as applying legacy cipher suites to support the additional protocols on the Windows Server 2019 cloud service web and worker roles.
cloud-services Automation Manage Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/automation-manage-cloud-services.md
# Managing Azure Cloud Services (classic) using Azure Automation
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
+ This guide will introduce you to the Azure Automation service, and how it can be used to simplify management of your Azure cloud services. ## What is Azure Automation?
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-allocation-failures.md
# Troubleshooting allocation failure when you deploy Cloud Services (classic) in Azure
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
## Summary
cloud-services Cloud Services Certs Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-certs-create.md
# Certificates overview for Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Certificates are used in Azure for cloud services ([service certificates](#what-are-service-certificates)) and for authenticating with the management API ([management certificates](#what-are-management-certificates)). This topic gives a general overview of both certificate types, how to [create](#create) and deploy them to Azure.
cloud-services Cloud Services Choose Me https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-choose-me.md
# Overview of Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Azure Cloud Services is an example of a [platform as a service](https://azure.microsoft.com/overview/what-is-paas/) (PaaS). Like [Azure App Service](../app-service/overview.md), this technology is designed to support applications that are scalable, reliable, and inexpensive to operate. In the same way that App Service is hosted on virtual machines (VMs), so too is Azure Cloud Services. However, you have more control over the VMs. You can install your own software on VMs that use Azure Cloud Services, and you can access them remotely.
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
# Configuring TLS for an application in Azure
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Transport Layer Security (TLS), previously known as Secure Socket Layer (SSL) encryption, is the most commonly used method of securing data sent across the internet. This common task discusses how to specify an HTTPS endpoint for a web role and how to upload a TLS/SSL certificate to secure your application.
cloud-services Cloud Services Connect To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-connect-to-custom-domain.md
# Connecting Azure Cloud Services (classic) Roles to a custom AD Domain Controller hosted in Azure
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
We will first set up a Virtual Network (VNet) in Azure. We will then add an Active Directory Domain Controller (hosted on an Azure Virtual Machine) to the VNet. Next, we will add existing cloud service roles to the pre-created VNet, then connect them to the Domain Controller.
cloud-services Cloud Services Custom Domain Name Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-custom-domain-name-portal.md
# Configuring a custom domain name for an Azure cloud service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
When you create a Cloud Service, Azure assigns it to a subdomain of **cloudapp.net**. For example, if your Cloud Service is named "contoso", your users will be able to access your application on a URL like `http://contoso.cloudapp.net`. Azure also assigns a virtual IP address.
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-diagnostics-powershell.md
# Enable diagnostics in Azure Cloud Services (classic) using PowerShell
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
You can collect diagnostic data like application logs, performance counters etc. from a Cloud Service using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article.
cloud-services Cloud Services Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-disaster-recovery-guidance.md
# What to do in the event of an Azure service disruption that impacts Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
At Microsoft, we work hard to make sure that our services are always available to you when you need them. Forces beyond our control sometimes impact us in ways that cause unplanned service disruptions.
cloud-services Cloud Services Dotnet Diagnostics Trace Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md
# Trace the flow of a Cloud Services (classic) application with Azure Diagnostics
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Tracing is a way for you to monitor the execution of your application while it is running. You can use the [System.Diagnostics.Trace](/dotnet/api/system.diagnostics.trace), [System.Diagnostics.Debug](/dotnet/api/system.diagnostics.debug), and [System.Diagnostics.TraceSource](/dotnet/api/system.diagnostics.tracesource) classes to record information about errors and application execution in logs, text files, or other devices for later analysis. For more information about tracing, see [Tracing and Instrumenting Applications](/dotnet/framework/debug-trace-profile/tracing-and-instrumenting-applications).
cloud-services Cloud Services Dotnet Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-diagnostics.md
# Enabling Azure Diagnostics in Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
See [Azure Diagnostics Overview](../azure-monitor/agents/diagnostics-extension-overview.md) for a background on Azure Diagnostics.
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-get-started.md
## Overview
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This tutorial shows how to create a multi-tier .NET application with an ASP.NET MVC front-end, and deploy it to an [Azure cloud service](cloud-services-choose-me.md). The application uses [Azure SQL Database](/previous-versions/azure/ee336279(v=azure.100)), the [Azure Blob service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/unstructured-blob-storage), and the [Azure Queue service](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/queue-centric-work-pattern). You can [download the Visual Studio project](https://code.msdn.microsoft.com/Simple-Azure-Cloud-Service-e01df2e4) from the MSDN Code Gallery.
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
# Install .NET on Azure Cloud Services (classic) roles
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article describes how to install versions of .NET Framework that don't come with the Azure Guest OS. You can use .NET on the Guest OS to configure your cloud service web and worker roles.
cloud-services Cloud Services Enable Communication Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-enable-communication-role-instances.md
# Enable communication for role instances in Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Cloud service roles communicate through internal and external connections. External connections are called **input endpoints** while internal connections are called **internal endpoints**. This topic describes how to modify the [service definition](cloud-services-model-and-package.md#csdef) to create endpoints.
cloud-services Cloud Services How To Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-configure-portal.md
# How to Configure an Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
You can configure the most commonly used settings for a cloud service in the Azure portal. Or, if you like to update your configuration files directly, download a service configuration file to update, and then upload the updated file and update the cloud service with the configuration changes. Either way, the configuration updates are pushed out to all role instances.
cloud-services Cloud Services How To Create Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-create-deploy-portal.md
# How to create and deploy an Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The Azure portal provides two ways for you to create and deploy a cloud service: *Quick Create* and *Custom Create*.
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-manage-portal.md
# Manage Cloud Services (classic) in the Azure portal
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
In the **Cloud Services** area of the Azure portal, you can:
cloud-services Cloud Services How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-monitor.md
# Introduction to Cloud Service (classic) Monitoring
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
You can monitor key performance metrics for any cloud service. Every cloud service role collects minimal data: CPU usage, network usage, and disk utilization. If the cloud service has the `Microsoft.Azure.Diagnostics` extension applied to a role, that role can collect additional points of data. This article provides an introduction to Azure Diagnostics for Cloud Services.
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-portal.md
# How to configure auto scaling for a Cloud Service (classic) in the portal
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Conditions can be set for a cloud service worker role that trigger a scale in or out operation. The conditions for the role can be based on the CPU, disk, or network load of the role. You can also set a condition based on a message queue or the metric of some other Azure resource associated with your subscription.
cloud-services Cloud Services How To Scale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-powershell.md
# How to scale an Azure Cloud Service (classic) in PowerShell
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
You can use Windows PowerShell to scale a web role or worker role in or out by adding or removing instances.
cloud-services Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-model-and-package.md
# What is the Cloud Service (classic) model and how do I package it?
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
A cloud service is created from three components, the service definition *(.csdef)*, the service config *(.cscfg)*, and a service package *(.cspkg)*. Both the **ServiceDefinition.csdef** and **ServiceConfig.cscfg** files are XML-based and describe the structure of the cloud service and how it's configured; collectively called the model. The **ServicePackage.cspkg** is a zip file that is generated from the **ServiceDefinition.csdef** and among other things, contains all the required binary-based dependencies. Azure creates a cloud service from both the **ServicePackage.cspkg** and the **ServiceConfig.cscfg**.
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
# Build a Node.js chat application with Socket.IO on an Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Socket.IO provides real time communication between your Node.js server and clients. This tutorial walks you through hosting a
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
# Build and deploy a Node.js application to an Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This tutorial shows how to create a simple Node.js application running in an Azure Cloud Service. Cloud Services are the building blocks of scalable cloud applications in Azure. They allow the separation and independent management and scale-out of front-end and back-end components of your application. Cloud Services provide a robust dedicated virtual machine for hosting each role reliably.
cloud-services Cloud Services Nodejs Develop Deploy Express App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md
# Build and deploy a Node.js web application using Express on an Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Node.js includes a minimal set of functionality in the core runtime. Developers often use third-party modules to provide additional
cloud-services Cloud Services Performance Testing Visual Studio Profiler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-performance-testing-visual-studio-profiler.md
# Testing the Performance of a Cloud Service (classic) Locally in the Azure Compute Emulator Using the Visual Studio Profiler
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
A variety of tools and techniques are available for testing the performance of cloud services. When you publish a cloud service to Azure, you can have Visual Studio collect profiling
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
# Use an Azure PowerShell command to create an empty cloud service (classic) container
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article explains how to quickly create a Cloud Services container by using Azure PowerShell cmdlets. Follow the steps below.
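At its simplest, creating the empty container boils down to a sketch like the following; the classic Azure module is assumed, and the service name, location, and label are placeholders.

```powershell
# Sign in with the classic (Azure Service Management) cmdlets.
Add-AzureAccount

# Placeholder name and location; the service name must be unique across Azure.
New-AzureService -ServiceName "my-empty-cloud-service" -Location "West US" -Label "Empty container"

# Confirm the container exists (it has no deployment yet).
Get-AzureService -ServiceName "my-empty-cloud-service"
```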
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
# Use service management from Python
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This guide shows you how to programmatically perform common service management tasks from Python. The **ServiceManagementService** class in the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python) supports programmatic access to much of the service management-related functionality that is available in the [Azure portal][management-portal]. You can use this functionality to create, update, and delete cloud services, deployments, data management services, and virtual machines. This functionality can be useful in building applications that need programmatic access to service management.
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-python-ptvs.md
# Python web and worker roles with Python Tools for Visual Studio
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article provides an overview of using Python web and worker roles using [Python Tools for Visual Studio][Python Tools for Visual Studio]. Learn how to use Visual Studio to create and deploy a basic Cloud Service that uses Python.
cloud-services Cloud Services Role Config Xpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-config-xpath.md
# Expose role configuration settings as an environment variable with XPath
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
In the cloud service worker or web role service definition file, you can expose runtime configuration values as environment variables. The following XPath values are supported (which correspond to API values).
cloud-services Cloud Services Role Enable Remote Desktop New Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md
# Enable Remote Desktop Connection for a Role in Azure Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
> [!div class="op_single_selector"] > * [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md)
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
# Enable Remote Desktop Connection for a Role in Azure Cloud Services (classic) using PowerShell
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
> [!div class="op_single_selector"] > * [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md)
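For orientation, a hedged sketch of enabling the extension from PowerShell follows; the classic Azure module is assumed, and the service name, role name, and expiration are placeholders.

```powershell
# Credentials that the Remote Desktop extension provisions on the role instances.
$credential = Get-Credential

# Placeholder service and role names; the expiration date is an arbitrary example.
Set-AzureServiceRemoteDesktopExtension -ServiceName "MyCloudService" `
    -Role "MyWebRole" `
    -Credential $credential `
    -Expiration (Get-Date).AddDays(30)

# Review the extension configuration that was applied.
Get-AzureServiceRemoteDesktopExtension -ServiceName "MyCloudService"
```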
cloud-services Cloud Services Role Enable Remote Desktop Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md
# Enable Remote Desktop Connection for a Role in Azure Cloud Services (classic) using Visual Studio
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
> [!div class="op_single_selector"] > * [Azure portal](cloud-services-role-enable-remote-desktop-new-portal.md)
cloud-services Cloud Services Role Lifecycle Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-lifecycle-dotnet.md
# Customize the Lifecycle of a Web or Worker role in .NET
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
When you create a worker role, you extend the [RoleEntryPoint](/previous-versions/azure/reference/ee758619(v=azure.100)) class, which provides methods for you to override that let you respond to lifecycle events. For web roles the class is optional, but if you want to respond to lifecycle events, you must use it.
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-sizes-specs.md
# Sizes for Cloud Services (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This topic describes the available sizes and options for Cloud Service role instances (web roles and worker roles). It also provides deployment considerations to be aware of when planning to use these resources. Each size has an ID that you put in your [service definition file](cloud-services-model-and-package.md#csdef). Prices for each size are available on the [Cloud Services Pricing](https://azure.microsoft.com/pricing/details/cloud-services/) page.
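If you'd rather enumerate the sizes from PowerShell, a sketch like the following lists them along with their capacities; the classic Azure module is assumed, and the property names shown may vary slightly by module version.

```powershell
# List the role sizes known to the classic cmdlets, with cores and memory.
Get-AzureRoleSize |
    Select-Object InstanceSize, Cores, MemoryInMb, SupportedByWebWorkerRoles |
    Format-Table -AutoSize

# Look up a single size by the ID you would put in the .csdef file.
Get-AzureRoleSize -InstanceSize "Standard_D2"
```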
cloud-services Cloud Services Startup Tasks Common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-startup-tasks-common.md
# Common Cloud Service (classic) startup tasks
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article provides some examples of common startup tasks you may want to perform in your cloud service. You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long running process.
cloud-services Cloud Services Startup Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-startup-tasks.md
# How to configure and run startup tasks for an Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
You can use startup tasks to perform operations before a role starts. Operations that you might want to perform include installing a component, registering COM components, setting registry keys, or starting a long running process.
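As a hedged illustration of the kind of work a startup task might do, the script below sets a registry key and installs a component before the role starts; the paths, key names, and installer are invented placeholders, and the script would normally be launched from a startup `.cmd` referenced in the service definition file.

```powershell
# startup.ps1 - example operations a startup task might perform (placeholder values).

# Create a registry key the application expects and flag that startup ran.
New-Item -Path "HKLM:\SOFTWARE\Contoso" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Contoso" -Name "StartupConfigured" -Value 1 -Type DWord

# Install a component shipped with the role (installer name is a placeholder).
Start-Process msiexec.exe -ArgumentList '/i', '.\MyComponent.msi', '/quiet' -Wait

# A non-zero exit code would cause the startup task, and the role start, to fail.
exit 0
```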
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
# Common issues that cause Azure Cloud Service (classic) roles to recycle
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article discusses some of the common causes of deployment problems and provides troubleshooting tips to help you resolve these problems. An indication that a problem exists with an application is when the role instance fails to start, or it cycles between the initializing, busy, and stopping states.
cloud-services Cloud Services Troubleshoot Constrained Allocation Failed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
Last updated 02/22/2021
# Troubleshoot ConstrainedAllocationFailed when deploying a Cloud service (classic) to Azure + In this article, you'll troubleshoot allocation failures where Azure Cloud services (classic) can't deploy because of allocation constraints. When you deploy instances to a Cloud service (classic) or add new web or worker role instances, Microsoft Azure allocates compute resources.
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
# Default TEMP folder size is too small on a cloud service (classic) web/worker role
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
+ The default temporary directory of a cloud service worker or web role has a maximum size of 100 MB, which may become full at some point. This article describes how to avoid running out of space for the temporary directory.
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
# Troubleshoot Azure Cloud Services (Classic) deployment problems
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
When you deploy a cloud service application package to Azure, you can obtain information about the deployment from the **Properties** pane in the Azure portal. You can use the details in this pane to help you troubleshoot problems with the cloud service, and you can provide this information to Azure Support when opening a new support request.
cloud-services Cloud Services Troubleshoot Fabric Internal Server Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md
Last updated 02/22/2021
# Troubleshoot FabricInternalServerError or ServiceAllocationFailure when deploying a Cloud service (classic) to Azure + In this article, you'll troubleshoot allocation failures where the fabric controller cannot allocate when deploying an Azure Cloud service (classic). When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources.
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
# Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service (classic) to Azure + In this article, you'll troubleshoot allocation failures where a Virtual Machine (VM) size isn't available when you deploy an Azure Cloud service (classic). When you deploy instances to a Cloud service (classic) or add new web or worker role instances, Microsoft Azure allocates compute resources.
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
Last updated 02/22/2021
# Troubleshoot OverconstrainedAllocationRequest when deploying Cloud services (classic) to Azure + In this article, you'll troubleshoot over constrained allocation failures that prevent deployment of Azure Cloud Services (classic). When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources.
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
# Troubleshoot Azure Cloud Service (classic) roles that fail to start
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Here are some common problems and solutions related to Azure Cloud Services roles that fail to start.
cloud-services Cloud Services Update Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-update-azure-service.md
# How to update an Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Updating a cloud service, including both its roles and guest OS, is a three-step process. First, the binaries and configuration files for the new cloud service or OS version must be uploaded. Next, Azure reserves compute and network resources for the cloud service based on the requirements of the new cloud service version. Finally, Azure performs a rolling upgrade to incrementally update the tenant to the new version or guest OS, while preserving your availability. This article discusses the details of this last step, the rolling upgrade.
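For the PowerShell route, a sketch along the following lines starts the rolling upgrade described in this last step; the classic Azure module is assumed, and the package, configuration, and service names are placeholders.

```powershell
# Placeholders: the package and configuration produced by your build, plus the target service.
$package = ".\ServicePackage.cspkg"
$config  = ".\ServiceConfiguration.Cloud.cscfg"

# Start a rolling (in-place) upgrade of the production deployment.
Set-AzureDeployment -Upgrade `
    -ServiceName "MyCloudService" `
    -Slot Production `
    -Mode Auto `
    -Package $package `
    -Configuration $config `
    -Label "Rolling upgrade $(Get-Date -Format s)"
```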
cloud-services Cloud Services Workflow Process https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-workflow-process.md
# Workflow of Windows Azure classic VM Architecture
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article provides an overview of the workflow processes that occur when you deploy or update an Azure resource such as a virtual machine.
cloud-services Diagnostics Extension To Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/diagnostics-extension-to-storage.md
# Store and view diagnostic data in Azure Storage
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Diagnostic data is not permanently stored unless you transfer it to the Microsoft Azure Storage Emulator or to Azure Storage. Once in storage, it can be viewed with one of several available tools.
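One hedged way to turn on that transfer from PowerShell is sketched below; the classic Azure module is assumed, and the storage account, key, role, and configuration file path are placeholders.

```powershell
# Storage account that receives the diagnostic data (name and key are placeholders).
$storageContext = New-AzureStorageContext `
    -StorageAccountName "mydiagstorage" `
    -StorageAccountKey "<storage-account-key>"

# Apply the diagnostics extension to a role, pointing at your public diagnostics
# configuration file (the .xml path below is a placeholder).
Set-AzureServiceDiagnosticsExtension `
    -ServiceName "MyCloudService" `
    -Slot Production `
    -Role "MyWebRole" `
    -StorageContext $storageContext `
    -DiagnosticsConfigurationPath ".\WebRole.PubConfig.xml"
```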
cloud-services Diagnostics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/diagnostics-performance-counters.md
# Collect performance counters for your Azure Cloud Service (classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
Performance counters provide a way for you to track how well your application and the host are performing. Windows Server provides many different performance counters related to hardware, applications, the operating system, and more. By collecting and sending performance counters to Azure, you can analyze this information to help make better decisions.
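Before you add a counter to your diagnostics configuration, it can help to sample it locally to confirm the counter path; the short sketch below uses only built-in Windows PowerShell and two common counters as examples.

```powershell
# Sample two common counters locally to verify their paths before adding them
# to the diagnostics configuration.
Get-Counter -Counter "\Processor(_Total)\% Processor Time", "\Memory\Available MBytes" `
    -SampleInterval 2 -MaxSamples 3 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```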
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/resource-health-for-cloud-services.md
# Resource Health Check (RHC) Support for Azure Cloud Services (Classic)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
This article talks about Resource Health Check (RHC) Support for [Microsoft Azure Cloud Services (Classic)](https://azure.microsoft.com/services/cloud-services)
cloud-services Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-file.md
# Azure Cloud Services (classic) Config Schema (.cscfg File)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The service configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role. If the service is part of a Virtual Network, configuration information for the network must be provided in the service configuration file, as well as in the virtual networking configuration file. The default extension for the service configuration file is .cscfg.
cloud-services Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-networkconfiguration.md
# Azure Cloud Services (classic) Config NetworkConfiguration Schema
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The `NetworkConfiguration` element of the service configuration file specifies Virtual Network and DNS values. These settings are optional for cloud services.
cloud-services Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-role.md
# Azure Cloud Services (classic) Config Role Schema
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The `Role` element of the configuration file specifies the number of role instances to deploy for each role in the service, the values of any configuration settings, and the thumbprints for any certificates associated with a role.
cloud-services Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-file.md
# Azure Cloud Services (classic) Definition Schema (.csdef File)
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The service definition file defines the service model for an application. The file contains the definitions for the roles that are available to a cloud service, specifies the service endpoints, and establishes configuration settings for the service. Configuration setting values are set in the service configuration file, as described by the [Cloud Service (classic) Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)).
cloud-services Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-loadbalancerprobe.md
# Azure Cloud Services (classic) Definition LoadBalancerProbe Schema
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The load balancer probe is a customer-defined health probe of UDP endpoints and endpoints in role instances. The `LoadBalancerProbe` is not a standalone element; it is combined with the web role or worker role in a service definition file. A `LoadBalancerProbe` can be used by more than one role.
cloud-services Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-networktrafficrules.md
# Azure Cloud Services (classic) Definition NetworkTrafficRules Schema
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The `NetworkTrafficRules` node is an optional element in the service definition file that specifies how roles communicate with each other. It limits which roles can access the internal endpoints of the specific role. The `NetworkTrafficRules` is not a standalone element; it is combined with two or more roles in a service definition file.
cloud-services Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-webrole.md
# Azure Cloud Services (classic) Definition WebRole Schema
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The Azure web role is a role that is customized for web application programming as supported by IIS 7, such as ASP.NET, PHP, Windows Communication Foundation, and FastCGI.
cloud-services Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-workerrole.md
# Azure Cloud Services (classic) Definition WorkerRole Schema
-> [!IMPORTANT]
-> [Azure Cloud Services (extended support)](../cloud-services-extended-support/overview.md) is a new Azure Resource Manager based deployment model for the Azure Cloud Services product. With this change, Azure Cloud Services running on the Azure Service Manager based deployment model have been renamed as Cloud Services (classic) and all new deployments should use [Cloud Services (extended support)](../cloud-services-extended-support/overview.md).
The Azure worker role is a role that is useful for generalized development, and may perform background processing for a web role.
cognitive-services Howtocallvisionapi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToCallVisionAPI.md
See the following list of possible errors and their causes:
* Timeout - Image processing timed out. * InternalServerError
+> [!TIP]
+> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](https://docs.microsoft.com/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](https://docs.microsoft.com/azure/architecture/patterns/circuit-breaker).
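To make that retry guidance concrete, here is a hedged PowerShell sketch of a simple retry-with-backoff wrapper around a REST call; the resource name, key, image URL, and the exact analyze path and query parameters are placeholders you should check against your own resource and API version.

```powershell
function Invoke-WithRetry {
    param(
        [scriptblock]$Operation,
        [int]$MaxAttempts = 4,
        [int]$InitialDelaySeconds = 2
    )
    for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
        try {
            return & $Operation
        }
        catch {
            if ($attempt -eq $MaxAttempts) { throw }
            $delay = $InitialDelaySeconds * [math]::Pow(2, $attempt - 1)
            Write-Warning "Attempt $attempt failed: $($_.Exception.Message). Retrying in $delay seconds."
            Start-Sleep -Seconds $delay
        }
    }
}

# Placeholder endpoint, key, and image URL; substitute values from your own resource.
$result = Invoke-WithRetry {
    Invoke-RestMethod -Method Post `
        -Uri "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/analyze?visualFeatures=Description" `
        -Headers @{ "Ocp-Apim-Subscription-Key" = "<your-key>" } `
        -ContentType "application/json" `
        -Body (@{ url = "https://example.com/image.jpg" } | ConvertTo-Json)
}
```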
++ ## Next steps To try out the REST API, go to the [Image Analysis API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b).
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
When the Edge compute role is set up on the Edge device, it creates two devices:
### Enable MPS on Azure Stack Edge
-1. Run a Windows PowerShell session as an Administrator.
+Follow these steps to remotely connect from a Windows client.
+
+1. Run a Windows PowerShell session as an administrator.
+2. Make sure that the Windows Remote Management service is running on your client. At the command prompt, type:
-2. Make sure that the Windows Remote Management service is running on your client. In the PowerShell terminal, use the following command
-
   ```powershell
   winrm quickconfig
   ```
-
- If you see warnings about a firewall exception, check your network connection type, and see the [Windows Remote Management](/windows/win32/winrm/installation-and-configuration-for-windows-remote-management) documentation.
-3. Assign a variable to the device IP address.
-
+ For more information, see [Installation and configuration for Windows Remote Management](/windows/win32/winrm/installation-and-configuration-for-windows-remote-management#quick-default-configuration).
+
+3. Assign a variable to the connection string used in the `hosts` file.
+ ```powershell
- $ip = "<device-IP-address>"
- ```
-
-4. To add the IP address of your device to the client's trusted hosts list, use the following command:
-
+ $Name = "<Node serial number>.<DNS domain of the device>"
+ ```
+
+ Replace `<Node serial number>` and `<DNS domain of the device>` with the node serial number and DNS domain of your device. You can get the values for node serial number from the **Certificates** page and DNS domain from the **Device** page in the local web UI of your device.
+
+4. To add this connection string for your device to the client's trusted hosts list, type the following command:
+ ```powershell
- Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Concatenate -Force
+ Set-Item WSMan:\localhost\Client\TrustedHosts $Name -Concatenate -Force
```
-5. Start a Windows PowerShell session on the device.
+5. Start a Windows PowerShell session on the device:
```powershell
- Enter-PSSession -ComputerName $ip -Credential $ip\EdgeUser -ConfigurationName Minishell
+ Enter-PSSession -ComputerName $Name -Credential ~\EdgeUser -ConfigurationName Minishell -UseSSL
```
-6. Provide the password when prompted. Use the same password that is used to sign into the local web UI. The default local web UI password is `Password1`.
+ If you see an error related to trust relationship, then check if the signing chain of the node certificate uploaded to your device is also installed on the client accessing your device.
-Type `Start-HcsGpuMPS` to start the MPS service on the device.
+6. Provide the password when prompted. Use the same password that is used to sign into the local web UI. The default local web UI password is *Password1*. When you successfully connect to the device using remote PowerShell, you see the following sample output:
-For help troubleshooting the Azure Stack Edge device, see [Troubleshooting the Azure Stack Edge device](spatial-analysis-logging.md#troubleshooting-the-azure-stack-edge-device)
+ ```
+ Windows PowerShell
+ Copyright (C) Microsoft Corporation. All rights reserved.
+
+ PS C:\WINDOWS\system32> winrm quickconfig
+ WinRM service is already running on this machine.
+ PS C:\WINDOWS\system32> $Name = "1HXQG13.wdshcsso.com"
+ PS C:\WINDOWS\system32> Set-Item WSMan:\localhost\Client\TrustedHosts $Name -Concatenate -Force
+ PS C:\WINDOWS\system32> Enter-PSSession -ComputerName $Name -Credential ~\EdgeUser -ConfigurationName Minishell -UseSSL
+
+ WARNING: The Windows PowerShell interface of your device is intended to be used only for the initial network configuration. Please engage Microsoft Support if you need to access this interface to troubleshoot any potential issues you may be experiencing. Changes made through this interface without involving Microsoft Support could result in an unsupported configuration.
+ [1HXQG13.wdshcsso.com]: PS>
+ ```
#### [Desktop machine](#tab/desktop-machine)
cognitive-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/text-moderation-api.md
The following example shows a sample response:
## Auto-correction
-Suppose the input text is (the 'lzay' and 'f0x' are intentional):
+Suppose the input text is (the "qu!ck," "f0x," and "lzay" are intentional):
> The qu!ck brown f0x jumps over the lzay dog.
The Content Moderator provides a [Term List API](https://westus.dev.cognitive.mi
## Next steps
-Test out the APIs with the [Text moderation API console](try-text-api.md). Also see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews.
+Test out the APIs with the [Text moderation API console](try-text-api.md). Also see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
keywords: facial recognition, facial recognition software, facial analysis, face
# What is the Azure Face service? > [!WARNING]
-> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States.
+> On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the [Responsible AI (RAI) documentation](https://go.microsoft.com/fwlink/?linkid=2164191) and will use this service in accordance with it.
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/sovereign-clouds.md
Previously updated : 01/07/2021 Last updated : 08/24/2021
Available to organizations with a business presence in China. See more informati
- [https://portal.azure.cn/](https://portal.azure.cn/) - **Regions:** - China East 2
+ - China North 2
- **Available pricing tiers:** - Free (F0) and Standard (S0). See more details [here](https://www.azure.cn/pricing/details/cognitive-services/https://docsupdatetracker.net/index.html) - **Supported features:**
Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your su
| | Region identifier | |--|--| | **China East 2** | `chinaeast2` |
+| **China North 2** | `chinanorth2` |
#### Speech SDK
Replace `subscriptionKey` with your Speech resource key. Replace `azCnHost` with
|--|--| | **China East 2** | | | Speech-to-text | `wss://chinaeast2.stt.speech.azure.cn` |
-| Text-to-Speech | `https://chinaeast2.tts.speech.azure.cn` |
+| Text-to-Speech | `https://chinaeast2.tts.speech.azure.cn` |
+| **China North 2** | |
+| Speech-to-text | `wss://chinanorth2.stt.speech.azure.cn` |
+| Text-to-Speech | `https://chinanorth2.tts.speech.azure.cn` |
cognitive-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/tutorial-use-personalizer-chat-bot.md
A chat bot is typically a back-and-forth conversation with a user. This specific
The chat bot needs to manage turns in conversation. The chat bot uses [Bot Framework](https://github.com/microsoft/botframework-sdk) to manage the bot architecture and conversation and uses the Cognitive Service, [Language Understanding](../LUIS/index.yml) (LUIS), to understand the intent of the natural language from the user.
-The chat bot is a web site with a specific route available to answer requests, `http://localhost:3978/api/messages`. You can use the bot emulator to visually interact with the running chat bot while you are developing a bot locally.
+The chat bot is a web site with a specific route available to answer requests, `http://localhost:3978/api/messages`. You can use the Bot Framework Emulator to visually interact with the running chat bot while you are developing a bot locally.
### User interactions with the bot
Once you have configured the `appsettings.json`, you are ready to build and run
Keep the web site running because the tutorial explains what the bot is doing, so you can interact with the bot.
-## Set up the bot emulator
+## Set up the Bot Framework Emulator
1. Open the Bot Framework Emulator, and select **Open Bot**.
- :::image type="content" source="media/tutorial-chat-bot/bot-emulator-startup.png" alt-text="Screenshot of bot emulator startup screen.":::
+ :::image type="content" source="media/tutorial-chat-bot/bot-emulator-startup.png" alt-text="Screenshot of Bot Framework Emulator startup screen.":::
1. Configure the bot with the following **bot URL** then select **Connect**: `http://localhost:3978/api/messages`
- :::image type="content" source="media/tutorial-chat-bot/bot-emulator-open-bot-settings.png" alt-text="Screenshot of bot emulator open bot settings.":::
+ :::image type="content" source="media/tutorial-chat-bot/bot-emulator-open-bot-settings.png" alt-text="Screenshot of Bot Framework Emulator open bot settings.":::
The emulator connects to the chat bot and displays the instructional text, along with logging and debug information helpful for local development.
- :::image type="content" source="media/tutorial-chat-bot/bot-emulator-bot-conversation-first-turn.png" alt-text="Screenshot of bot emulator in first turn of conversation.":::
+ :::image type="content" source="media/tutorial-chat-bot/bot-emulator-bot-conversation-first-turn.png" alt-text="Screenshot of Bot Framework Emulator in first turn of conversation.":::
-## Use the bot in the bot emulator
+## Use the bot in the Bot Framework Emulator
1. Ask to see the menu by entering `I would like to see the menu`. The chat bot displays the items. 1. Let the bot suggest an item by entering `Please suggest a drink for me.`
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/data-limits.md
Previously updated : 11/19/2020 Last updated : 04/07/2021
# Data and rate limits for the Text Analytics API <a name="data-limits"></a>
-Use this article to find the limits for the size, and rates that you can send data to Text Analytics API. Note that pricing is not affected by the data limits or rate limits. Pricing is subject to your Text Analytics resource's [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
+Use this article to find the limits for the size, and rates that you can send data to Text Analytics API.
## Data limits > [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
+> * Pricing is not affected by data or rate limits. Pricing is based on the number of text records you send to the API, and is subject to your Text Analytics resource's [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
+> * A text record is measured as 1000 characters.
+> * Data and rate limits are based on the number of documents you send to the API. If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+> * A document is a single string of text characters.
| Limit | Value | |||
Your rate limit will vary with your [pricing tier](https://azure.microsoft.com/p
Requests rates are measured for each Text Analytics feature separately. You can send the maximum number of requests for your pricing tier to each feature, at the same time. For example, if you're in the `S` tier and send 1000 requests at once, you wouldn't be able to send another request for 59 seconds.
+The S0 through S4 tiers have been deprecated; we encourage you to switch to the S tier.
+ ## See also * [What is the Text Analytics API](../overview.md)
container-registry Tasks Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/tasks-agent-pools.md
This feature is available in the **Premium** container registry service tier. Fo
## Preview limitations - Task agent pools currently support Linux nodes. Windows nodes aren't currently supported.-- Task agent pools are available in preview in the following regions: West US 2, South Central US, East US 2, East US, Central US, West Europe, North Europe, Canada Central, USGov Arizona, USGov Texas, and USGov Virginia.
+- Task agent pools are available in preview in the following regions: West US 2, South Central US, East US 2, East US, Central US, West Europe, North Europe, Canada Central, East Asia, USGov Arizona, USGov Texas, and USGov Virginia.
- For each registry, the default total vCPU (core) quota is 16 for all standard agent pools and is 0 for isolated agent pools. Open a [support request][open-support-ticket] for additional allocation. - You can't currently cancel a task run on an agent pool.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
The following constraints are applicable on the operational data in Azure Cosmos
* The deletion of all documents in a collection doesn't reset the analytical store schema. * There is no schema versioning. The last version inferred from the transactional store is what you will see in the analytical store.
-* Currently we do not support Azure Synapse Spark reading properties that contain blanks (white spaces) in their names. You will need to use Spark functions like `cast` or `replace` to be able to load the data into a Spark DataFrame.
+* Currently Azure Synapse Spark can't read properties that contain some special characters in their names, listed below. If this applies to you, contact the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com) for more information.
+ * : (Colon)
+ * ` (Grave accent)
+ * , (Comma)
+ * ; (Semicolon)
+ * {}
+ * ()
+ * \n
+ * \t
+ * = (Equal sign)
+ * " (Quotation mark)
+
+* Azure Synapse Spark now supports properties with whitespaces in their names.
### Schema representation
cosmos-db Migrate Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-java-v4-sdk.md
The following table lists different Azure Cosmos DB Java SDKs, the package name
| Java SDK| Release Date | Bundled APIs | Maven Jar | Java package name |API Reference | Release Notes | Retire date | |-||--|--|--|-||--|
-| Async 2.x.x | June 2018 | Async(RxJava) | `com.microsoft.azure::azure-cosmosdb` | `com.microsoft.azure.cosmosdb.rx` | [API](https://azure.github.io/azure-cosmosdb-jav) | - |
+| Async 2.x.x | June 2018 | Async(RxJava) | `com.microsoft.azure::azure-cosmosdb` | `com.microsoft.azure.cosmosdb.rx` | [API](https://azure.github.io/azure-cosmosdb-jav) | August 31, 2024 |
| Sync 2.x.x | Sept 2018 | Sync | `com.microsoft.azure::azure-documentdb` | `com.microsoft.azure.cosmosdb` | [API](https://azure.github.io/azure-cosmosdb-jav) | February 29, 2024 |
-| 3.x.x | July 2019 | Async(Reactor)/Sync | `com.microsoft.azure::azure-cosmos` | `com.azure.data.cosmos` | [API](https://azure.github.io/azure-cosmosdb-java/3.0.0/) | - | - |
+| 3.x.x | July 2019 | Async(Reactor)/Sync | `com.microsoft.azure::azure-cosmos` | `com.azure.data.cosmos` | [API](https://azure.github.io/azure-cosmosdb-java/3.0.0/) | - | August 31, 2024 |
| 4.0 | June 2020 | Async(Reactor)/Sync | `com.azure::azure-cosmos` | `com.azure.cosmos` | [API](/java/api/overview/azure/cosmosdb) | - | - | ## SDK level implementation changes
cosmos-db Performance Tips Async Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/performance-tips-async-java.md
> The performance tips in this article are for Azure Cosmos DB Async Java SDK v2 only. See the Azure Cosmos DB Async Java SDK v2 [Release notes](sql-api-sdk-async-java.md), [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb), and Azure Cosmos DB Async Java SDK v2 [troubleshooting guide](troubleshoot-java-async-sdk.md) for more information. >
+> [!IMPORTANT]
+> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+ Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using the [Azure Cosmos DB Async Java SDK v2](sql-api-sdk-async-java.md). So if you're asking "How can I improve my database performance?" consider the following options:
cosmos-db Performance Tips Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/performance-tips-java.md
> These performance tips are for Azure Cosmos DB Sync Java SDK v2 only. Please view the Azure Cosmos DB Sync Java SDK v2 [Release notes](sql-api-sdk-java.md) and [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-documentdb) for more information. >
+> [!IMPORTANT]
+> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+ Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You do not have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [how to provision container throughput](how-to-provision-container-throughput.md) or [how to provision database throughput](how-to-provision-database-throughput.md). However, because Azure Cosmos DB is accessed via network calls there are client-side optimizations you can make to achieve peak performance when using [Azure Cosmos DB Sync Java SDK v2](./sql-api-sdk-java.md). So if you're asking "How can I improve my database performance?" consider the following options:
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-async-java.md
The SQL API Async Java SDK differs from the SQL API Java SDK by providing asynch
> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide. >
+> [!IMPORTANT]
+> On August 31, 2024 the Azure Cosmos DB Async Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+ | | Links | ||| | **SDK Download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) |
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java.md
This is the original Azure Cosmos DB Sync Java SDK v2 for SQL API which supports
> This is *not* the latest Java SDK for Azure Cosmos DB! Consider using [Azure Cosmos DB Java SDK v4](sql-api-sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide. >
+> [!IMPORTANT]
+> On February 29, 2024 the Azure Cosmos DB Sync Java SDK v2.x
+> will be retired; the SDK and all applications using the SDK
+> **will continue to function**; Azure Cosmos DB will simply cease
+> to provide further maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
++ | | Links | ||| |**SDK Download**|[Maven](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.microsoft.azure%22%20AND%20a%3A%22azure-documentdb%22)|
Microsoft will provide notification at least **12 months** in advance of retirin
| Version | Release Date | Retirement Date | | | | |
-| [2.6.1](#2.6.1) |Dec 17, 2020 | |
-| [2.6.0](#2.6.0) |July 16, 2020 | |
-| [2.5.1](#2.5.1) |June 03, 2020 | |
-| [2.5.0](#2.5.0) |May 12, 2020 | |
-| [2.4.7](#2.4.7) |Feb 20, 2020 | |
-| [2.4.6](#2.4.6) |Jan 24, 2020 | |
-| [2.4.5](#2.4.5) |Nov 10, 2019 | |
-| [2.4.4](#2.4.4) |Oct 24, 2019 | |
-| [2.4.2](#2.4.2) |Sep 26, 2019 | |
-| [2.4.1](#2.4.1) |Jul 18, 2019 | |
-| [2.4.0](#2.4.0) |May 04, 2019 | |
-| [2.3.0](#2.3.0) |Apr 24, 2019 | |
-| [2.2.3](#2.2.3) |Apr 16, 2019 | |
-| [2.2.2](#2.2.2) |Apr 05, 2019 | |
-| [2.2.0](#2.2.0) |Mar 27, 2019 | |
-| [2.1.3](#2.1.3) |Mar 13, 2019 | |
-| [2.1.2](#2.1.2) |Mar 09, 2019 | |
-| [2.1.1](#2.1.1) |Dec 13, 2018 | |
-| [2.1.0](#2.1.0) |Nov 20, 2018 | |
-| [2.0.0](#2.0.0) |Sept 21, 2018 | |
+| [2.6.1](#2.6.1) |Dec 17, 2020 |Feb 29, 2024|
+| [2.6.0](#2.6.0) |July 16, 2020 |Feb 29, 2024|
+| [2.5.1](#2.5.1) |June 03, 2020 |Feb 29, 2024|
+| [2.5.0](#2.5.0) |May 12, 2020 |Feb 29, 2024|
+| [2.4.7](#2.4.7) |Feb 20, 2020 |Feb 29, 2024|
+| [2.4.6](#2.4.6) |Jan 24, 2020 |Feb 29, 2024|
+| [2.4.5](#2.4.5) |Nov 10, 2019 |Feb 29, 2024|
+| [2.4.4](#2.4.4) |Oct 24, 2019 |Feb 29, 2024|
+| [2.4.2](#2.4.2) |Sep 26, 2019 |Feb 29, 2024|
+| [2.4.1](#2.4.1) |Jul 18, 2019 |Feb 29, 2024|
+| [2.4.0](#2.4.0) |May 04, 2019 |Feb 29, 2024|
+| [2.3.0](#2.3.0) |Apr 24, 2019 |Feb 29, 2024|
+| [2.2.3](#2.2.3) |Apr 16, 2019 |Feb 29, 2024|
+| [2.2.2](#2.2.2) |Apr 05, 2019 |Feb 29, 2024|
+| [2.2.0](#2.2.0) |Mar 27, 2019 |Feb 29, 2024|
+| [2.1.3](#2.1.3) |Mar 13, 2019 |Feb 29, 2024|
+| [2.1.2](#2.1.2) |Mar 09, 2019 |Feb 29, 2024|
+| [2.1.1](#2.1.1) |Dec 13, 2018 |Feb 29, 2024|
+| [2.1.0](#2.1.0) |Nov 20, 2018 |Feb 29, 2024|
+| [2.0.0](#2.0.0) |Sept 21, 2018 |Feb 29, 2024|
| [1.16.4](#1.16.4) |Sept 10, 2018 |May 30, 2020 |
| [1.16.3](#1.16.3) |Sept 09, 2018 |May 30, 2020 |
| [1.16.2](#1.16.2) |June 29, 2018 |May 30, 2020 |
cosmos-db Tutorial Develop Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table/tutorial-develop-table-dotnet.md
[!INCLUDE [storage-table-applies-to-storagetable-and-cosmos](../../../includes/storage-table-applies-to-storagetable-and-cosmos.md)] + You can use the Azure Cosmos DB Table API or Azure Table storage to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design. Because Azure Cosmos DB Table API and Table storage are schemaless, it's easy to adapt your data as the needs of your application evolve. You can use Azure Cosmos DB Table API or Table storage to store flexible datasets such as user data for web applications, address books, device information, or other types of metadata your service requires.
-This tutorial describes a sample that shows you how to use the [Microsoft Azure Cosmos DB Table Library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) with Azure Cosmos DB Table API and Azure Table storage scenarios. You must use the connection specific to the Azure service. These scenarios are explored using C# examples that illustrate how to create tables, insert/ update data, query data and delete the tables.
+This tutorial describes a sample that shows you how to use the [Microsoft Azure Cosmos DB Table Library for .NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) with Azure Cosmos DB Table API and Azure Table storage scenarios. These scenarios are explored using C# examples that illustrate how to create tables, insert/update data, query data, and delete tables.
+
+While this walkthrough discusses the specifics of the Azure Cosmos DB implementation, you can create an Azure Table storage resource and use the same NuGet package and API to access it; only the resource creation is different. Regardless of which resource type you choose, you must use the connection string specific to the Azure service you created.
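+
+To make the connection-string difference concrete, the following is a hedged Azure PowerShell sketch that retrieves each one (assumes the `Az.CosmosDB` and `Az.Storage` modules; the account and resource group names are placeholders):
+
+```powershell
+# Azure Cosmos DB Table API: read the connection strings from the Cosmos DB account.
+Get-AzCosmosDBAccountKey -ResourceGroupName '<myResourceGroup>' -Name '<myCosmosAccount>' -Type 'ConnectionStrings'
+
+# Azure Table storage: build the connection string from a storage account key.
+$key = (Get-AzStorageAccountKey -ResourceGroupName '<myResourceGroup>' -Name '<myStorageAccount>')[0].Value
+"DefaultEndpointsProtocol=https;AccountName=<myStorageAccount>;AccountKey=$key;EndpointSuffix=core.windows.net"
+```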
## Prerequisites
cosmos-db Troubleshoot Java Async Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-java-async-sdk.md
> This article covers troubleshooting for Azure Cosmos DB Async Java SDK v2 only. See the Azure Cosmos DB Async Java SDK v2 [Release Notes](sql-api-sdk-async-java.md), [Maven repository](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb) and [performance tips](performance-tips-async-java.md) for more information. >
+> [!IMPORTANT]
+> On August 31, 2024, the Azure Cosmos DB Async Java SDK v2.x
+> will be retired; the SDK and all applications that use it
+> **will continue to function**; Azure Cosmos DB will simply no longer
+> provide maintenance and support for this SDK.
+> We recommend following the instructions above to migrate to
+> Azure Cosmos DB Java SDK v4.
+>
+ This article covers common issues, workarounds, diagnostic steps, and tools when you use the [Java Async SDK](sql-api-sdk-async-java.md) with Azure Cosmos DB SQL API accounts. The Java Async SDK provides client-side logical representation to access the Azure Cosmos DB SQL API. This article describes tools and approaches to help you if you run into any issues.
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-expression-builder.md
Previously updated : 04/29/2021 Last updated : 08/24/2021 # Build expressions in mapping data flow
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 06/07/2021 Last updated : 08/24/2021 # Mapping data flows performance and tuning guide
data-factory Concepts Datasets Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-datasets-linked-services.md
Previously updated : 08/24/2020 Last updated : 08/24/2021 # Datasets in Azure Data Factory and Azure Synapse Analytics
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-integration-runtime.md
Previously updated : 06/16/2021 Last updated : 08/24/2021 # Integration runtime in Azure Data Factory
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
Previously updated : 08/21/2020 Last updated : 08/24/2021 # Linked services in Azure Data Factory and Azure Synapse Analytics
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipeline-execution-triggers.md
Previously updated : 07/05/2018 Last updated : 08/24/2021
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-pipelines-activities.md
Previously updated : 06/19/2021 Last updated : 08/24/2021 # Pipelines and activities in Azure Data Factory and Azure Synapse Analytics
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 07/19/2021 Last updated : 08/24/2021 # Copy and transform data in Azure Blob storage by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db.md
Previously updated : 08/09/2021 Last updated : 08/24/2021 # Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 07/19/2021 Last updated : 08/24/2021 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-data-lake-store.md
Previously updated : 07/19/2021 Last updated : 08/24/2021 # Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
Previously updated : 06/16/2021 Last updated : 08/24/2021 # Copy data to and from Azure Databricks Delta Lake using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-data-warehouse.md
Previously updated : 08/15/2021 Last updated : 08/24/2021 # Copy and transform data in Azure Synapse Analytics by using Azure Data Factory or Synapse pipelines
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Previously updated : 06/15/2021 Last updated : 08/24/2021 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 03/17/2021 Last updated : 08/24/2021 # Copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
Previously updated : 03/29/2021 Last updated : 08/24/2021
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-http.md
Previously updated : 03/17/2021 Last updated : 08/24/2021 # Copy data from an HTTP endpoint by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-oracle.md
Previously updated : 03/17/2021 Last updated : 08/24/2021
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
Previously updated : 05/26/2021 Last updated : 08/24/2021
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
Previously updated : 07/27/2021 Last updated : 08/24/2021 # Copy data from and to a REST endpoint using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-salesforce.md
Previously updated : 03/17/2021 Last updated : 08/24/2021 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
Previously updated : 07/30/2021 Last updated : 08/24/2021 # Copy data from an SAP table using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sftp.md
Previously updated : 03/17/2021 Last updated : 08/24/2021 # Copy data from and to the SFTP server using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
Previously updated : 05/19/2020 Last updated : 08/24/2021 # Copy data from SharePoint Online List by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-snowflake.md
Previously updated : 03/16/2021 Last updated : 08/24/2021 # Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sql-server.md
Previously updated : 06/08/2021 Last updated : 08/24/2021 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
Previously updated : 08/18/2021 Last updated : 08/24/2021
data-factory Continuous Integration Deployment Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment-improvements.md
Previously updated : 02/02/2021 Last updated : 08/23/2021 # Automated publishing for continuous integration and delivery
Two commands are currently available in the package:
### Export ARM template
-Run `npm run start export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check prior to generating the ARM template. Here's an example:
+Run `npm run build export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check prior to generating the ARM template. Here's an example:
```dos
-npm run start export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
+npm run build export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
```
- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
npm run start export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxx
### Validate
-Run `npm run start validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
+Run `npm run build validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
```dos
-npm run start validate C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory
+npm run build validate C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory
```
- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
Previously updated : 07/30/2021 Last updated : 08/24/2021 # Azure Function activity in Azure Data Factory
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 05/20/2021 Last updated : 08/24/2021 # Data Flow activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
Previously updated : 07/16/2021 Last updated : 08/24/2021 # Expressions and functions in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
Previously updated : 01/23/2019 Last updated : 08/24/2021 # ForEach activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Get Metadata Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-get-metadata-activity.md
Previously updated : 02/25/2021 Last updated : 08/24/2021
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-lookup-activity.md
Previously updated : 08/10/2021 Last updated : 08/24/2021 # Lookup activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-set-variable-activity.md
Previously updated : 04/07/2020 Last updated : 08/24/2021
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
Previously updated : 06/12/2018 Last updated : 08/24/2021 # System variables supported by Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-web-activity.md
Previously updated : 12/19/2018 Last updated : 08/24/2021 # Web activity in Azure Data Factory and Azure Synapse Analytics
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-monitoring.md
Previously updated : 03/22/2021 Last updated : 08/24/2021 # Monitor copy activity
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-overview.md
Previously updated : 6/1/2021 Last updated : 08/24/2021
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-features.md
Previously updated : 09/24/2020 Last updated : 08/24/2021 # Copy activity performance optimization features
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance-troubleshooting.md
Previously updated : 01/07/2021 Last updated : 08/24/2021 # Troubleshoot copy activity performance
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance.md
Previously updated : 09/15/2020 Last updated : 08/24/2021 # Copy activity performance and scalability guide
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-schema-and-type-mapping.md
Previously updated : 06/22/2020 Last updated : 08/24/2021 # Schema and data type mapping in copy activity
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-integration-runtime.md
description: Learn how to create Azure integration runtime in Azure Data Factory
Previously updated : 06/04/2021 Last updated : 08/24/2021
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
Previously updated : 06/16/2021 Last updated : 08/24/2021
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
Previously updated : 06/18/2021 Last updated : 08/24/2021
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-alter-row.md
Previously updated : 05/06/2020 Last updated : 08/24/2021 # Alter row transformation in mapping data flow
data-factory Data Flow Derived Column https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-derived-column.md
Previously updated : 09/14/2020 Last updated : 08/24/2021 # Derived column transformation in mapping data flow
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 07/04/2021 Last updated : 08/24/2021 # Data transformation expressions in mapping data flow
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Previously updated : 07/27/2021 Last updated : 08/24/2021 # Sink transformation in mapping data flow
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-source.md
Previously updated : 07/27/2021 Last updated : 08/24/2021 # Source transformation in mapping data flow
data-factory Delete Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/delete-activity.md
Previously updated : 08/12/2020 Last updated : 08/24/2021 # Delete Activity in Azure Data Factory and Azure Synapse Analytics
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delimited-text.md
Previously updated : 03/23/2021 Last updated : 08/24/2021
data-factory Format Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-excel.md
Previously updated : 12/08/2020 Last updated : 08/24/2021
data-factory Format Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-json.md
Previously updated : 10/29/2020 Last updated : 08/24/2021
data-factory Format Parquet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-parquet.md
Previously updated : 09/27/2020 Last updated : 08/24/2021
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
Previously updated : 03/11/2021 Last updated : 08/24/2021 # Create a trigger that runs a pipeline in response to a storage event
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-schedule-trigger.md
Previously updated : 10/30/2020 Last updated : 08/24/2021
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-tumbling-window-trigger.md
Previously updated : 07/26/2021 Last updated : 08/24/2021 # Create a trigger that runs a pipeline on a tumbling window
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
This article shows you how to use the Data Factory copy data tool to copy data f
To assess upgrading from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2 in general, see [Upgrade your big data analytics solutions from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md). The following sections introduce best practices for using Data Factory for a data upgrade from Data Lake Storage Gen1 to Data Lake Storage Gen2.
-### Historical data copy
+### Initial snapshot data migration
-#### Performance tuning by proof-of-concept
+#### Performance
-Use a proof of concept to verify the end-to-end solution and test the copy throughput in your environment. Major proof-of-concept steps:
+ADF offers a serverless architecture that allows parallelism at different levels, so developers can build pipelines that fully utilize network bandwidth as well as storage IOPS and bandwidth to maximize data movement throughput for your environment.
-1. Create one Data Factory pipeline with a single copy activity to copy several TBs of data from Data Lake Storage Gen1 to Data Lake Storage Gen2 to get a copy performance baseline. Start with [data integration units (DIUs)](copy-activity-performance-features.md#data-integration-units) as 128. The [Parallel copy](copy-activity-performance-features.md#parallel-copy) is suggested to be set as **empty (default)**.
-2. Based on the copy throughput you get in step 1, calculate the estimated time that's required for the entire data migration. If the copy throughput is not good for you, identify and resolve the performance bottlenecks by following the [performance tuning steps](copy-activity-performance.md#performance-tuning-steps).
-3. If you have maximized the performance of a single copy activity, but have not yet achieved the throughput upper limits of your environment, you can run multiple copy activities in parallel. Each copy activity can be configured to copy one partition at a time, so that multiple copy activities can copy data from single Data Lake Storage Gen1 account cocurrently. The way to partition the files is to use **name range- listAfter/listBefore** in [copy activity property](connector-azure-data-lake-store.md#copy-activity-properties).
+Customers have successfully migrated petabytes of data consisting of hundreds of millions of files from Data Lake Storage Gen1 to Gen2, with a sustained throughput of 2 GBps and higher.
-If your total data size in Data Lake Storage Gen1 is less than 30 TB and the number of files is less than 1 million, you can copy all data in a single copy activity run. If you have a larger amount of data to copy, or you want the flexibility to manage data migration in batches and make each of them complete within a specific time frame, partition the data. Partitioning also reduces the risk of any unexpected issue.
+You can achieve great data movement speeds through different levels of parallelism:
+- A single copy activity can take advantage of scalable compute resources: when using Azure Integration Runtime, you can specify up to 256 [data integration units (DIUs)](copy-activity-performance-features.md#data-integration-units) for each copy activity in a serverless manner; when using self-hosted Integration Runtime, you can manually scale up the machine or scale out to multiple machines (up to 4 nodes), and a single copy activity will partition its file set across all nodes.
+- A single copy activity reads from and writes to the data store using multiple threads.
+- ADF control flow can start multiple copy activities in parallel, for example using a ForEach loop.
-#### Network bandwidth and storage I/O
+#### Data partitions
-If you see significant number of throttling errors from [copy activity monitoring](copy-activity-monitoring.md#monitor-visually), it indicates you have reached the capacity limit of your storage account. ADF will retry automatically to overcome each throttling error to make sure there will not be any data lost, but too many retries impact your copy throughput as well. In such case, you are encouraged to reduce the number of copy activities running cocurrently to avoid significant amounts of throttling errors. If you have been using single copy activity to copy data, then you are encouraged to reduce the number of [data integration units (DIUs)](copy-activity-performance-features.md#data-integration-units).
+If your total data size in Data Lake Storage Gen1 is less than 10 TB and the number of files is less than 1 million, you can copy all data in a single copy activity run. If you have a larger amount of data to copy, or you want the flexibility to manage data migration in batches and make each of them complete within a specific time frame, partition the data. Partitioning also reduces the risk of any unexpected issue.
+The way to partition the files is to use **name range- listAfter/listBefore** in [copy activity property](connector-azure-data-lake-store.md#copy-activity-properties). Each copy activity can be configured to copy one partition at a time, so that multiple copy activities can copy data from a single Data Lake Storage Gen1 account concurrently.
-### Incremental copy
+
+#### Rate limiting
+
+As a best practice, conduct a performance POC with a representative sample dataset, so that you can determine an appropriate partition size.
+
+1. Start with a single partition and a single copy activity with the default DIU setting. We suggest always leaving [Parallel copy](copy-activity-performance-features.md#parallel-copy) set as **empty (default)**. If the copy throughput is not good for you, identify and resolve the performance bottlenecks by following the [performance tuning steps](copy-activity-performance.md#performance-tuning-steps).
+
+2. Gradually increase the DIU setting until you reach the bandwidth limit of your network or the IOPS/bandwidth limit of the data stores, or until you reach the maximum of 256 DIUs allowed on a single copy activity.
+
+3. If you have maximized the performance of a single copy activity, but have not yet achieved the throughput upper limits of your environment, you can run multiple copy activities in parallel.
+
+When you see a significant number of throttling errors from [copy activity monitoring](copy-activity-monitoring.md#monitor-visually), it indicates you have reached the capacity limit of your storage account. ADF will retry automatically to overcome each throttling error to make sure no data is lost, but too many retries impact your copy throughput as well. In that case, you are encouraged to reduce the number of copy activities running concurrently to avoid significant amounts of throttling errors. If you have been using a single copy activity to copy data, then you are encouraged to reduce the DIU.
++
+### Delta data migration
You can use several approaches to load only the new or updated files from Data Lake Storage Gen1:
You can use several approaches to load only the new or updated files from Data L
The proper frequency to do incremental load depends on the total number of files in Azure Data Lake Storage Gen1 and the volume of new or updated files to be loaded every time.
+### Network security
+
+By default, ADF transfers data from Azure Data Lake Storage Gen1 to Gen2 using an encrypted connection over the HTTPS protocol. HTTPS provides data encryption in transit and prevents eavesdropping and man-in-the-middle attacks.
+
+Alternatively, if you do not want data to be transferred over the public internet, you can achieve higher security by transferring data over a private network.
### Preserve ACLs If you want to replicate the ACLs along with data files when you upgrade from Data Lake Storage Gen1 to Data Lake Storage Gen2, see [Preserve ACLs from Data Lake Storage Gen1](connector-azure-data-lake-storage.md#preserve-acls).
+### Resilience
+
+Within a single copy activity run, ADF has a built-in retry mechanism so it can handle a certain level of transient failures in the data stores or in the underlying network. If you migrate more than 10 TB of data, you are encouraged to partition the data to reduce the risk of any unexpected issues.
+
+You can also enable [fault tolerance](copy-activity-fault-tolerance.md) in copy activity to skip predefined errors. In addition, [data consistency verification](copy-activity-data-consistency.md) can be enabled in copy activity to verify that the data is not only successfully copied from the source to the destination store, but also consistent between the two.
++ ### Permissions In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md) supports service principal and managed identity for Azure resource authentications. The [Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md) supports account key, service principal, and managed identity for Azure resource authentications. To make Data Factory able to navigate and copy all the files or access control lists (ACLs) you need, grant high enough permissions for the account you provide to access, read, or write all files and set ACLs if you choose to. Grant it a super-user or owner role during the migration period.
data-factory Load Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-sql-data-warehouse.md
Previously updated : 07/28/2021 Last updated : 08/24/2021 # Load data into Azure Synapse Analytics using Azure Data Factory or a Synapse pipeline
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameterize-linked-services.md
Previously updated : 06/01/2021 Last updated : 08/24/2021
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameters-data-flow.md
Previously updated : 04/19/2021 Last updated : 08/24/2021 # Parameterizing mapping data flows
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
Previously updated : 05/31/2021 Last updated : 08/24/2021
data-factory Supported File Formats And Compression Codecs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/supported-file-formats-and-compression-codecs.md
Previously updated : 07/16/2020 Last updated : 08/24/2021
data-factory Transform Data Using Dotnet Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-dotnet-custom-activity.md
Previously updated : 11/26/2018 Last updated : 08/24/2021 # Use custom activities in an Azure Data Factory or Azure Synapse Analytics pipeline
data-factory Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data.md
Previously updated : 07/31/2018 Last updated : 08/24/2021 # Transform data in Azure Data Factory and Azure Synapse Analytics
databox-online Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md
Follow these steps to configure the Kubernetes cluster for Azure Arc management:
Add the `CloudEnvironment` parameter if you are using a cloud other than Azure public. You can set this parameter to `AZUREPUBLICCLOUD`, `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, and `AZUREUSGOVERNMENTCLOUD`. > [!NOTE]
- > - To deploy Azure Arc on your device, make sure that you are using a [Supported region for Azure Arc](../azure-arc/kubernetes/overview.md#supported-regions).
+ > - To deploy Azure Arc on your device, make sure that you are using a [Supported region for Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
> - Use the `az account list-locations` command to figure out the exact location name to pass in the `Set-HcsKubernetesAzureArcAgent` cmdlet. Location names are typically formatted without any spaces. > - `ClientId` and `ClientSecret` are required parameters. `ClientSecret` is a secure string.
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/overview.md
Azure Firewall Manager has the following known issues:
|Bulk IP address addition fails|The secure hub firewall goes into a failed state if you add multiple public IP addresses.|Add smaller public IP address increments. For example, add 10 at a time.| |DDoS Protection Standard not supported with secured virtual hubs|DDoS Protection Standard is not integrated with vWANs.|Investigating| |Activity logs not fully supported|Firewall policy does not currently support Activity logs.|Investigating|
+|Description of rules not fully supported|Firewall policy does not display the description of rules in an ARM export.|Investigating|
|Azure Firewall Manager overwrites static and custom routes causing downtime in virtual WAN hub.|You should not use Azure Firewall Manager to manage your settings in deployments configured with custom or static routes. Updates from Firewall Manager can potentially overwrite static or custom route settings.|If you use static or custom routes, use the Virtual WAN page to manage security settings and avoid configuration via Azure Firewall Manager.<br><br>For more information, see [Scenario: Azure Firewall - custom](../virtual-wan/scenario-route-between-vnets-firewall.md).| ## Next steps
governance Guest Configuration Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-assignments.md
+
+ Title: Understand guest configuration assignment resources
+description: Guest configuration creates extension resources named guest configuration assignments that map configurations to machines.
Last updated : 08/15/2021++
+# Understand guest configuration assignment resources
+
+When an Azure Policy is assigned, if it's in the category "Guest Configuration",
+there's metadata included to describe a guest assignment.
+
+[A video walk-through of this document is available](https://youtu.be/DmCphySEB7A).
+
+You can think of a guest assignment as a link between a machine and an Azure
+Policy scenario. For example, the following snippet associates the Azure Windows
+Baseline configuration with minimum version `1.0.0` to any machines in scope of
+the policy.
+
+```json
+"metadata": {
+ "category": "Guest Configuration",
+ "guestConfiguration": {
+ "name": "AzureWindowsBaseline",
+ "version": "1.*"
+ }
+//additional metadata properties exist
+```
+
+## How Azure Policy uses guest configuration assignments
+
+The metadata information is used by the guest configuration service to
+automatically create an audit resource for definitions with either
+**AuditIfNotExists** or **DeployIfNotExists** policy effects. The resource type
+is `Microsoft.GuestConfiguration/guestConfigurationAssignments`. Azure Policy
+uses the **complianceStatus** property of the guest assignment resource to
+report compliance status. For more information, see
+[getting compliance data](../how-to/get-compliance-data.md).
+
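+As a quick way to inspect these audit resources, the following sketch lists the guest configuration assignments attached to a virtual machine and shows their compliance status (assumes the `Az.Resources` module; the resource group and VM names are placeholders):
+
+```powershell
+$resourceDetails = @{
+    ResourceGroupName = '<myResourceGroupName>'
+    ResourceType      = 'Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments'
+    ResourceName      = '<myVMName>/Microsoft.GuestConfiguration'
+    ApiVersion        = '2020-06-25'
+}
+
+# complianceStatus is the property Azure Policy uses to report compliance.
+Get-AzResource @resourceDetails -ExpandProperties |
+    Select-Object Name, @{ Name = 'complianceStatus'; Expression = { $_.Properties.complianceStatus } }
+```
+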
+### Deletion of guest assignments from Azure Policy
+
+When an Azure Policy assignment is deleted, if a guest configuration assignment
+was created by the policy, the guest configuration assignment is also deleted.
+
+## Manually creating guest configuration assignments
+
+Guest assignment resources in Azure Resource Manager can be created by Azure
+Policy or any client SDK.
+
+An example deployment template:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "apiVersion": "2021-01-25",
+ "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
+ "name": "myMachine/Microsoft.GuestConfiguration/myConfig",
+ "location": "westus2",
+ "properties": {
+ "guestConfiguration": {
+ "name": "myConfig",
+ "contentUri": "https://mystorageaccount.blob.core.windows.net/mystoragecontainer/myConfig.zip?sv=SASTOKEN",
+ "contentHash": "SHA256HASH",
+ "version": "1.0.0",
+ "assignmentType": "ApplyAndMonitor",
+ "configurationParameter": {}
+ }
+ }
+ }
+ ]
+}
+```
+
+The following table describes each property of guest assignment resources.
+
+| Property | Description |
+|-|-|
+| name | Name of the configuration inside the content package MOF file. |
+| contentUri | HTTPS URI path to the content package (.zip). |
+| contentHash | A SHA256 hash value of the content package, used to verify it has not changed. |
+| version | Version of the content package. Only used for built-in packages and not used for custom content packages. |
+| assignmentType | Behavior of the assignment. Allowed values: `Audit`, `ApplyandMonitor`, and `ApplyandAutoCorrect`. |
+| configurationParameter | List of DSC resource type, name, and value in the content package MOF file to be overridden after it's downloaded in the machine. |
+
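+One way to create the assignment is to deploy the template above with a standard resource group deployment. A minimal sketch (assumes the template is saved locally as `guest-assignment.json`, a hypothetical file name, and is deployed to the resource group that contains the machine):
+
+```powershell
+New-AzResourceGroupDeployment `
+    -ResourceGroupName '<myResourceGroupName>' `
+    -TemplateFile './guest-assignment.json'
+```
+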
+### Deletion of manually created guest configuration assignments
+
+Guest configuration assignments created through any manual approach (such as
+an Azure Resource Manager template deployment) must be deleted manually.
+Deleting the parent resource (virtual machine or Arc-enabled machine) will also
+delete the guest configuration assignment.
+
+To manually delete a guest configuration assignment, use the following
+example. Make sure to replace all example strings, indicated by "\<\>" brackets.
+
+```PowerShell
+# First get details about the guest configuration assignment
+$resourceDetails = @{
+ ResourceGroupName = '<myResourceGroupName>'
+ ResourceType = 'Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments/'
+ ResourceName = '<myVMName>/Microsoft.GuestConfiguration'
+ ApiVersion = '2020-06-25'
+}
+$guestAssignment = Get-AzResource @resourceDetails
+
+# Review details of the guest configuration assignment
+$guestAssignment
+
+# After reviewing properties of $guestAssignment to confirm
+$guestAssignment | Remove-AzResource
+```
+
+## Next steps
+
+- Read the [guest configuration overview](./guest-configuration.md).
+- Set up a custom guest configuration package [development environment](../how-to/guest-configuration-create-setup.md).
+- [Create a package artifact](../how-to/guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](../how-to/guest-configuration-create-test.md)
+ from your development environment.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](../how-to/guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-custom.md
+
+ Title: Changes to behavior in PowerShell Desired State Configuration for guest configuration
+description: This article provides an overview of the platform used to deliver configuration changes to machines through Azure Policy.
Last updated : 05/31/2021++
+# Changes to behavior in PowerShell Desired State Configuration for guest configuration
+
+Before you begin, it's a good idea to read the overview of
+[guest configuration](./guest-configuration.md).
+
+[A video walk-through of this document is available](https://youtu.be/nYd55FiKpgs).
+
+Guest configuration uses
+[Desired State Configuration (DSC)](/powershell/scripting/dsc/overview/overview)
+version 3 to audit and configure machines. The DSC configuration defines the
+state that the machine should be in. There are many notable differences in how
+DSC is implemented in guest configuration.
+
+## Guest configuration uses PowerShell 7 cross platform
+
+Guest configuration is designed so the experience of managing Windows and Linux
+can be consistent. Across both operating system environments, someone with
+PowerShell DSC knowledge can create and publish configurations using scripting
+skills.
+
+Guest configuration only uses PowerShell DSC version 3 and doesn't rely on the
+previous implementation of
+[DSC for Linux](https://github.com/Microsoft/PowerShell-DSC-for-Linux)
+or the "nx" providers included in that repository.
+
+Guest configuration operates in PowerShell 7.1.3 for Windows and PowerShell 7.2
+preview 6 for Linux. Starting with version 7.2, the `PSDesiredStateConfiguration`
+module moved from being part of the PowerShell installation and is instead
+installed as a
+[module from the PowerShell Gallery](https://www.powershellgallery.com/packages/PSDesiredStateConfiguration).
+
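+For example, a sketch of pulling the module from the PowerShell Gallery (assumes PowerShellGet is available; `-AllowPrerelease` may be required while a given DSC version is still in preview):
+
+```powershell
+Install-Module -Name PSDesiredStateConfiguration -Repository PSGallery -AllowPrerelease
+
+# Confirm which versions are now available locally.
+Get-Module -Name PSDesiredStateConfiguration -ListAvailable | Select-Object Name, Version
+```
+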
+## Multiple configurations
+
+Guest configuration supports assigning multiple configurations to
+the same machine. There are no special steps required within the
+operating system or the guest configuration extension. There's no need to configure
+[partial configurations](/powershell/scripting/dsc/pull-server/partialConfigs).
+
+## Configuration mode is set in the package artifact
+
+When creating the configuration package, the mode is set using the following
+options:
+
+- _Audit_: Verifies the compliance of a machine. No changes are made.
+- _AuditandSet_: Verifies and remediates the compliance state of the machine.
+ Changes are made if the machine isn't compliant.
+
+The mode is set in the package rather than in the
+[Local Configuration Manager](/powershell/scripting/dsc/managing-nodes/metaConfig#basic-settings)
+service because it can be different per configuration, when multiple
+configurations are assigned.
+
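+For illustration, a sketch of setting that mode when building a package with the `GuestConfiguration` module (the `-Type` parameter and the compiled MOF path are assumptions about that module's interface, not something defined in this article):
+
+```powershell
+New-GuestConfigurationPackage `
+    -Name 'MyConfig' `
+    -Configuration './MyConfig/localhost.mof' `
+    -Type 'AuditAndSet' `
+    -Force
+```
+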
+## Parameter support through Azure Resource Manager
+
+Parameters set by the `configurationParameter` property array in
+[guest configuration assignments](guest-configuration-assignments.md)
+overwrite the static text within a configuration MOF file when the file is
+stored on a machine. Parameters allow for customization and changes to be controlled
+by an operator from the service API without needing to run commands within
+the machine.
+
+Parameters in Azure Policy that pass values to guest configuration
+assignments must be _string_ type. It isn't possible to pass arrays through
+parameters, even if the DSC resource supports arrays.
+
+## Sequence of events
+
+When guest configuration audits or configures a machine, the same
+sequence of events is used for both Windows and Linux. The notable change in
+behavior is that the `Get` method is called by the service to return details about
+the state of the machine.
+
+1. The agent first runs `Test` to determine whether the configuration is in the
+ correct state.
+1. If the package is set to `Audit`, the Boolean value returned by the function
+ determines
+ if the Azure Resource Manager status for the Guest Assignment should be
+ Compliant/Not-Compliant.
+1. If the package is set to `AuditandSet`, the Boolean value determines whether
+ to remediate the machine by applying the configuration using the `Set` method.
+ If the `Test` method returns False, `Set` is run. If `Test` returns True, then
+ `Set` isn't run.
+1. Last, the provider runs `Get` to return the current state of each setting so
+ details are available both about why a machine isn't compliant and to confirm
+ that the current state is compliant.
+
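+As a purely illustrative sketch (not the agent's actual implementation), the sequence above for a single script-based resource in an _AuditandSet_ package looks roughly like this; `$resourceParameters` is a hypothetical splat of the resource's properties:
+
+```powershell
+# 1. Test whether the resource is already in the desired state.
+$inDesiredState = Test-TargetResource @resourceParameters
+
+# 2./3. For AuditandSet packages, remediate only when Test returned $false.
+if (-not $inDesiredState) {
+    Set-TargetResource @resourceParameters
+}
+
+# 4. Get returns the current state (including Reasons) that the service reports.
+$currentState = Get-TargetResource @resourceParameters
+```
+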
+## Special requirements for Get
+
+The `Get` method has special requirements for Azure Policy guest
+configuration that haven't been needed for Windows PowerShell Desired State
+Configuration.
+
+- The hashtable that is returned should include a property named **Reasons**.
+- The Reasons property must be an array.
+- Each item in the array should be a hashtable with keys named **Code** and
+ **Phrase**.
+
+The Reasons property is used by the service to standardize how compliance
+information is presented. You can think of each item in Reasons as a "reason"
+that the resource is or isn't compliant. The property is an array because a
+resource could be out of compliance for more than one reason.
+
+The properties **Code** and **Phrase** are expected by the service. When
+authoring a custom resource, set the text (typically stdout) you would like to
+show as the reason the resource isn't compliant as the value for **Phrase**.
+**Code** has specific formatting requirements so reporting can clearly display
+information about the resource used to do the audit. This solution makes guest
+configuration extensible. Any command could be run as long as the output can be
+returned as a string value for the **Phrase** property.
+
+- **Code** (string): The name of the resource, repeated, and then a short name
+ with no spaces as an identifier for the reason. These three values should be
+ colon-delimited with no spaces.
+ - An example would be `registry:registry:keynotpresent`
+- **Phrase** (string): Human-readable text to explain why the setting isn't
+ compliant.
+ - An example would be `The registry key $key isn't present on the machine.`
+
+```powershell
+$reasons = @()
+$reasons += @{
+ Code = 'Name:Name:ReasonIdentifier'
+ Phrase = "Explain why the setting isn't compliant"
+}
+return @{
+ reasons = $reasons
+}
+```
+
+### The Reasons property embedded class
+
+In script-based resources (Windows only), the Reasons class is included in the
+schema MOF file as follows.
+
+```mof
+[ClassVersion("1.0.0.0")]
+class Reason
+{
+ [Read] String Phrase;
+ [Read] String Code;
+};
+
+[ClassVersion("1.0.0.0"), FriendlyName("ResourceName")]
+class ResourceName : OMI_BaseResource
+{
+ [Key, Description("Example description")] String Example;
+ [Read, EmbeddedInstance("Reason")] String Reasons[];
+};
+```
+
+In class-based resources (Windows and Linux), the `Reason` class is included in
+the PowerShell module as follows. Linux is case-sensitive, so the "C" in Code
+and "P" in Phrase must be capitalized.
+
+```powershell
+enum ensure {
+ Absent
+ Present
+}
+
+class Reason {
+ [DscProperty()]
+ [string] $Code
+
+ [DscProperty()]
+ [string] $Phrase
+}
+
+[DscResource()]
+class Example {
+
+ [DscProperty(Key)]
+ [ensure] $ensure
+
+ [DscProperty()]
+ [Reason[]] $Reasons
+
+ [Example] Get() {
+ # return the current state
+ }
+
+ [void] Set() {
+ # set the state
+ }
+
+ [bool] Test() {
+ # check whether state is correct
+ }
+}
+
+```
+
+If the resource has required properties, those properties should also be
+returned by `Get` in parallel with the `Reason` class. If `Reason` isn't
+included, the service includes a "catch-all" behavior that compares the values
+input to `Get` and the values returned by `Get`, and provides a detailed
+comparison as `Reason`.
+
+## Configuration names
+
+The name of the custom configuration must be consistent everywhere. The name of
+the `.zip` file for the content package, the configuration name in the MOF file,
+and the guest assignment name in the Azure Resource Manager template, must be
+the same.
+
+## Common DSC features not available during guest configuration public preview
+
+During public preview, guest configuration does not support
+[specifying cross-machine dependencies](/powershell/scripting/dsc/configurations/crossnodedependencies)
+using "WaitFor*" resources. It isn't possible for one
+machine to monitor and wait for another machine to reach a state before
+progressing.
+
+[Reboot handling](/powershell/scripting/dsc/configurations/reboot-a-node) isn't
+available in the public preview release of guest configuration; in particular,
+the `$global:DSCMachineStatus` variable isn't available. Configurations aren't able to reboot a node during or at the end of a configuration.
+
+## Coexistence with DSC version 3 and previous versions
+
+DSC version 3 in guest configuration can coexist with older versions installed in
+[Windows](/powershell/scripting/dsc/getting-started/wingettingstarted) and
+[Linux](/powershell/scripting/dsc/getting-started/lnxgettingstarted).
+The implementations are separate. However, there's no conflict detection
+across DSC versions, so don't attempt to manage the same settings.
+
+## Next steps
+
+- Read the [guest configuration overview](./guest-configuration.md).
+- Set up a custom guest configuration package [development environment](../how-to/guest-configuration-create-setup.md).
+- [Create a package artifact](../how-to/guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](../how-to/guest-configuration-create-test.md)
+ from your development environment.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](../how-to/guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration Policy Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-policy-effects.md
+
+ Title: Remediation options for guest configuration
+description: Azure Policy's guest configuration feature offers options for continuous remediation or control using remediation tasks.
Last updated : 07/12/2021++
+# Remediation options for guest configuration
+
+Before you begin, it's a good idea to read the overview page for
+[guest configuration](../concepts/guest-configuration.md).
+
+> [!IMPORTANT]
+> The guest configuration extension is required for Azure virtual machines. To
+> deploy the extension at scale across all machines, assign the following policy
+> initiative: `Deploy prerequisites to enable guest configuration policies on
+> virtual machines`
+>
+> To use guest configuration packages that apply configurations, Azure VM guest
+> configuration extension version **1.29.24** or later,
+> or Arc agent **1.10.0** or later, is required.
+>
+> Custom guest configuration policy definitions using **AuditIfNotExists** are
+> Generally Available, but definitions using **DeployIfNotExists** with guest
+> configuration are **in preview**.
+
+## How remediation (Set) is managed by guest configuration
+
+Guest configuration uses the policy effect
+[DeployIfNotExists](../concepts/effects.md#deployifnotexists)
+for definitions that deliver changes inside machines.
+Set the properties of a policy assignment to control how
+[evaluation](../concepts/effects.md#deployifnotexists-evaluation)
+delivers configurations automatically or on-demand.
+
+[A video walk-through of this document is available](https://youtu.be/rjAk1eNmDLk).
+
+### Guest configuration assignment types
+
+There are three available assignment types when guest assignments are created.
+The property is available as a parameter of guest configuration definitions
+that support **DeployIfNotExists**.
+
+| Assignment type | Behavior |
+|-|-|
+| Audit | Report on the state of the machine, but don't make changes. |
+| ApplyandMonitor | Applied to the machine once and then monitored for changes. If the configuration drifts and becomes NonCompliant, it won't be automatically corrected unless remediation is triggered. |
+| ApplyandAutoCorrect | Applied to the machine. If it drifts, the local service inside the machine makes a correction at the next evaluation. |
+
+In each of the three assignment types, when a new policy assignment is assigned
+to an existing machine, a guest assignment is automatically created to
+audit the state of the configuration first, providing information to make
+decisions about which machines need remediation.
+
+## Remediation on-demand (ApplyAndMonitor)
+
+By default, guest configuration assignments operate in a "remediation on
+demand" scenario. The configuration is applied and then allowed to drift out of
+compliance. The compliance status of the guest assignment is "Compliant"
+unless an error occurs while applying the configuration or if during the next
+evaluation the machine is no longer in the desired state. The agent reports
+the status as "NonCompliant" and doesn't automatically remediate.
+
+To enable this behavior, set the
+[assignmentType property](/rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype)
+of the guest configuration assignment to "ApplyandMonitor". Each time the
+assignment is processed within the machine, for each resource the
+[Test](/powershell/scripting/dsc/resources/get-test-set#test)
+method returns "true" the agent reports "Compliant"
+or if the method returns "false" the agent reports "NonCompliant".
+
+## Continuous remediation (AutoCorrect)
+
+Guest configuration supports the concept of "continuous remediation". If the machine drifts out of compliance for a configuration, the next time it's evaluated the configuration is corrected automatically. Unless an error occurs, the machine always reports status as "Compliant" for the configuration. There's no way to report when a drift was automatically corrected when using continuous remediation.
+
+To enable this behavior, set the
+[assignmentType property](/rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype)
+of the guest configuration assignment to "ApplyandAutoCorrect". Each time the
+assignment is processed within the machine, for each resource the
+[Test](/powershell/scripting/dsc/resources/get-test-set#test)
+method returns "false", the
+[Set](/powershell/scripting/dsc/resources/get-test-set#set)
+method runs automatically.
+
+## Disable remediation
+
+When the `assignmentType` property is set to "Audit", the agent only
+performs an audit of the machine and doesn't attempt to remediate the configuration
+if it isn't compliant.
+
+### Disable remediation of custom content
+
+You can override the assignment type property for custom content packages by
+adding a tag to the machine with name **CustomGuestConfigurationSetPolicy** and
+value **disable**. Adding the tag disables remediation for custom content
+packages only, not for built-in content provided by Microsoft.
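+
+As a hedged example (not from the original article), the following Azure PowerShell
+sketch adds that tag to a virtual machine resource with `Update-AzTag`; the resource
+group and machine names are placeholders.
+
+```powershell
+# Sketch: disable remediation of custom content packages on one machine by
+# tagging the machine resource (resource group and VM names are placeholders).
+$vm = Get-AzVM -ResourceGroupName '<resource_group>' -Name '<vm_name>'
+Update-AzTag -ResourceId $vm.Id -Tag @{ 'CustomGuestConfigurationSetPolicy' = 'disable' } -Operation Merge
+```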
+
+## Azure Policy enforcement
+
+Azure Policy assignments include a required property
+[Enforcement Mode](../concepts/assignment-structure.md#enforcement-mode)
+that determines behavior for new and existing resources.
+Use this property to control whether configurations are automatically applied to
+machines.
+
+**By default, enforcement is "Enabled"**. When a new machine is deployed **or the
+properties of a machine are updated**, if the machine is in the scope of an Azure
+Policy assignment with a policy definition in the category "Guest
+Configuration", Azure Policy automatically applies the configuration. **Update
+operations include actions that occur in Azure Resource Manager** such as adding
+or changing a tag, and for virtual machines, changes such as resizing or
+attaching a disk. Leave enforcement enabled if the configuration should be
+remediated when changes occur to the machine resource in Azure. Changes
+happening inside the machine don't trigger automatic remediation as long as they
+don't change the machine resource in Azure Resource Manager.
+
+If enforcement is set to "Disabled", the configuration assignment
+audits the state of the machine until the behavior is changed by a
+[remediation task](../how-to/remediate-resources.md). By default, guest configuration
+definitions update the
+[assignmentType property](/rest/api/guestconfiguration/guest-configuration-assignments/get#assignmenttype) from "Audit" to "ApplyandMonitor" so the configuration
+is applied one time and then it won't apply again until a remediation is
+triggered.
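+
+The following Azure PowerShell sketch (an illustration, not prescribed by this
+article) creates an assignment with enforcement disabled. The definition display
+name, scope, and region are placeholders; the identity and location are needed
+because guest configuration definitions that deliver changes use **DeployIfNotExists**.
+
+```powershell
+# Sketch: assign a guest configuration policy definition with enforcement
+# disabled, so it audits until a remediation task is triggered.
+$definition = Get-AzPolicyDefinition |
+    Where-Object { $_.Properties.DisplayName -eq '<guest configuration definition display name>' }
+
+New-AzPolicyAssignment -Name '<assignment_name>' `
+    -PolicyDefinition $definition `
+    -Scope '/subscriptions/<subscription_id>/resourceGroups/<resource_group>' `
+    -Location '<region>' `
+    -EnforcementMode DoNotEnforce `
+    -AssignIdentity
+```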
+
+## OPTIONAL: Remediate all existing machines
+
+If an Azure Policy assignment is created from the Azure portal, a
+"Create a remediation task" checkbox is available on the "Remediation" tab. When
+the box is checked, any resources that evaluate to "NonCompliant" after the
+policy assignment is created are automatically corrected by remediation tasks.
+
+The effect of this setting for guest configuration is that you can deploy a
+configuration across many machines simply by assigning a policy. You also won't
+have to run the remediation task manually for machines that aren't compliant.
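+
+If you prefer scripting, a hedged Azure PowerShell sketch using the
+`Az.PolicyInsights` module can create the same remediation task after the
+assignment exists; the assignment name and scope are placeholders.
+
+```powershell
+# Sketch: create a remediation task for an existing policy assignment
+# (assignment name and scope are placeholders).
+$assignment = Get-AzPolicyAssignment -Name '<assignment_name>' -Scope '/subscriptions/<subscription_id>'
+Start-AzPolicyRemediation -Name 'remediate-guest-configuration' -PolicyAssignmentId $assignment.ResourceId
+```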
+
+## Manually trigger remediation outside of Azure Policy
+
+It's also possible to orchestrate remediation outside of the Azure Policy
+experience by updating a guest assignment resource, even if the update
+doesn't make changes to the resource properties.
+
+When a guest configuration assignment is created, the
+[complianceStatus property](/rest/api/guestconfiguration/guest-configuration-assignments/get#compliancestatus)
+is set to "Pending".
+The guest configuration service inside the machine (delivered to Azure
+virtual machines by the
+[Guest configuration extension](../../../virtual-machines/extensions/guest-configuration.md)
+and included with Arc-enabled servers) requests a list of assignments every 5
+minutes.
+If the guest configuration assignment meets both requirements, a
+`complianceStatus` of "Pending" and a `configurationMode` of either
+"ApplyandMonitor" or "ApplyandAutoCorrect", the service in the machine
+applies the configuration. After the configuration is applied, at the
+[next interval](./guest-configuration.md#validation-frequency)
+the configuration mode dictates whether the behavior is to only report on
+compliance status and allow drift or to automatically correct.
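+
+A hedged sketch of this approach with `Invoke-AzRestMethod` follows: it reads the
+assignment and writes it back unchanged, which counts as an update. The resource
+names are placeholders and the API version matches the assignment examples
+published for this resource type.
+
+```powershell
+# Sketch: trigger remediation by updating a guest assignment in place
+# (subscription, resource group, VM, and assignment names are placeholders).
+$path = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>" +
+        "/providers/Microsoft.Compute/virtualMachines/<vm_name>" +
+        "/providers/Microsoft.GuestConfiguration/guestConfigurationAssignments/<assignment_name>" +
+        "?api-version=2020-06-25"
+$current = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json
+$payload = @{
+    location   = $current.location
+    properties = @{ guestConfiguration = $current.properties.guestConfiguration }
+} | ConvertTo-Json -Depth 10
+Invoke-AzRestMethod -Path $path -Method PUT -Payload $payload
+```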
+
+## Understanding combinations of settings
+
+|~| Audit | ApplyandMonitor | ApplyandAutoCorrect |
+|-|-|-|-|
+| Enforcement Enabled | Only reports status | Configuration applied on VM Create **and re-applied on Update** but otherwise allowed to drift | Configuration applied on VM Create and reapplied on Update and corrected on next interval if drift occurs |
+| Enforcement Disabled | Only reports status | Configuration applied but allowed to drift | Configuration applied on VM Create or Update and corrected on next interval if drift occurs |
+
+## Next steps
+
+- Read the [guest configuration overview](./guest-configuration.md).
+- Set up a custom guest configuration package [development environment](../how-to/guest-configuration-create-setup.md).
+- [Create a package artifact](../how-to/guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](../how-to/guest-configuration-create-test.md)
+ from your development environment.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](../how-to/guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
Title: Learn to audit the contents of virtual machines
-description: Learn how Azure Policy uses the Guest Configuration client to audit settings inside virtual machines.
Previously updated : 05/01/2021
+ Title: Understand the guest configuration feature of Azure Policy
+description: Learn how Azure Policy uses the guest configuration feature to audit or configure settings inside virtual machines.
Last updated : 07/15/2021
-# Understand Azure Policy's Guest Configuration
+# Understand the guest configuration feature of Azure Policy
-Azure Policy can audit settings inside a machine, both for machines running in Azure and
-[Arc Connected Machines](../../../azure-arc/servers/overview.md). The validation is performed by the
-Guest Configuration extension and client. The extension, through the client, validates settings such
-as:
+Azure Policy can audit or configure settings inside a machine, both for machines
+running in Azure and
+[Arc-enabled machines](../../../azure-arc/servers/overview.md).
+Each task is performed by the guest configuration agent in Windows and Linux.
+The guest configuration extension, through the agent, manages settings such as:
- The configuration of the operating system
- Application configuration or presence
- Environment settings
-At this time, most Azure Policy Guest Configuration policy definitions only audit settings inside
-the machine. They don't apply configurations. The exception is one built-in policy
-[referenced below](#applying-configurations-using-guest-configuration).
+[A video walk-through of this document is available](https://youtu.be/t9L8COY-BkM).
-[A video walk-through of this document is available](https://youtu.be/Y6ryD3gTHOs).
+## Enable guest configuration
-## Enable Guest Configuration
-
-To audit the state of machines in your environment, including machines in Azure and Arc Connected
-Machines, review the following details.
+To manage the state of machines in your environment, including machines in Azure
+and Arc-enabled servers, review the following details.
## Resource provider
-Before you can use Guest Configuration, you must register the resource provider.
-If assignment of a Guest Configuration policy is done through
-the portal, or if the subscription is enrolled in Azure Security Center, the resource
-provider is registered automatically. You can manually register through the
+Before you can use the guest configuration feature of Azure Policy, you must
+register the `Microsoft.GuestConfiguration` resource provider. If assignment of
+a guest configuration policy is done through the portal, or if the subscription
+is enrolled in Azure Security Center, the resource provider is registered
+automatically. You can manually register through the
[portal](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [Azure PowerShell](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell), or
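+
+For example, a minimal Azure PowerShell sketch for manual registration:
+
+```powershell
+# Register the guest configuration resource provider for the subscription in
+# the current Azure PowerShell context, then check the registration state.
+Register-AzResourceProvider -ProviderNamespace 'Microsoft.GuestConfiguration'
+Get-AzResourceProvider -ProviderNamespace 'Microsoft.GuestConfiguration' |
+    Select-Object ProviderNamespace, RegistrationState
+```
+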
## Deploy requirements for Azure virtual machines
-To audit settings inside a machine, a
-[virtual machine extension](../../../virtual-machines/extensions/overview.md) is enabled and the
-machine must have a system-managed identity. The extension downloads applicable policy assignment
-and the corresponding configuration definition. The identity is used to authenticate the machine as
-it reads and writes to the Guest Configuration service. The extension isn't required for Arc
-Connected Machines because it's included in the Arc Connected Machine agent.
+To manage settings inside a machine, a
+[virtual machine extension](../../../virtual-machines/extensions/overview.md) is
+enabled and the machine must have a system-managed identity. The extension
+downloads applicable guest configuration assignment and the corresponding
+dependencies. The identity is used to authenticate the machine as it reads and
+writes to the guest configuration service. The extension isn't required for Arc-enabled
+servers because it's included in the Arc Connected Machine agent.
> [!IMPORTANT]
-> The Guest Configuration extension and a managed identity is required to audit Azure virtual
-> machines. To deploy the extension at scale, assign the following policy initiative:
->
-> `Deploy prerequisites to enable Guest Configuration policies on virtual machines`
+> The guest configuration extension and a managed identity are required to
+> manage Azure virtual machines.
+
+To deploy the extension at scale across many machines, assign the policy initiative
+`Deploy prerequisites to enable guest configuration policies on virtual machines`
+to a management group, subscription, or resource group containing the machines
+that you plan to manage.
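+
+A hedged Azure PowerShell sketch of that assignment (the initiative display name is
+quoted from this article; the scope and region are placeholders) could look like the
+following.
+
+```powershell
+# Sketch: assign the prerequisites initiative to a resource group. A managed
+# identity and location are required because the initiative deploys resources.
+$initiative = Get-AzPolicySetDefinition |
+    Where-Object { $_.Properties.DisplayName -eq 'Deploy prerequisites to enable guest configuration policies on virtual machines' }
+
+New-AzPolicyAssignment -Name 'guest-configuration-prerequisites' `
+    -PolicySetDefinition $initiative `
+    -Scope '/subscriptions/<subscription_id>/resourceGroups/<resource_group>' `
+    -Location '<region>' `
+    -AssignIdentity
+```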
+
+If you prefer to deploy the extension and managed identity to a single machine,
+follow the guidance for each:
+
+- [Overview of the Azure Policy Guest Configuration extension](../../../virtual-machines/extensions/guest-configuration.md)
+- [Configure managed identities for Azure resources on a VM using the Azure portal](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
+
+To use guest configuration packages that apply configurations, Azure VM guest
+configuration extension version **1.29.24** or later is required.
### Limits set on the extension
-To limit the extension from impacting applications running inside the machine, the Guest
-Configuration isn't allowed to exceed more than 5% of CPU. This limitation exists for both built-in
-and custom definitions. The same is true for the Guest Configuration service in Arc Connected
-Machine agent.
+To limit the extension from impacting applications running inside the machine,
+the guest configuration agent isn't allowed to exceed more than 5% of CPU. This
+limitation exists for both built-in and custom definitions. The same is true for
+the guest configuration service in Arc Connected Machine agent.
### Validation tools
-Inside the machine, the Guest Configuration client uses local tools to run the audit.
+Inside the machine, the guest configuration agent uses local tools to perform
+tasks.
-The following table shows a list of the local tools used on each supported operating system. For
-built-in content, Guest Configuration handles loading these tools automatically.
+The following table shows a list of the local tools used on each supported
+operating system. For built-in content, guest configuration handles loading
+these tools automatically.
|Operating system|Validation tool|Notes|
|-|-|-|
-|Windows|[PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/overview) v2| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
+|Windows|[PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/overview) v3| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
+|Linux|[PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/overview) v3| Side-loaded to a folder only used by Azure Policy. PowerShell Core isn't added to system path.|
|Linux|[Chef InSpec](https://www.chef.io/inspec/) | Installs Chef InSpec version 2.2.61 in default location and added to system path. Dependencies for the InSpec package including Ruby and Python are installed as well. | ### Validation frequency
-The Guest Configuration client checks for new or changed guest assignments every 5 minutes. Once a
-guest assignment is received, the settings for that configuration are rechecked on a 15-minute
-interval. Results are sent to the Guest Configuration resource provider when the audit completes.
-When a policy [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers) occurs, the
-state of the machine is written to the Guest Configuration resource provider. This update causes
-Azure Policy to evaluate the Azure Resource Manager properties. An on-demand Azure Policy evaluation
-retrieves the latest value from the Guest Configuration resource provider. However, it doesn't
-trigger a new audit of the configuration within the machine. The status is simultaneously written to
-Azure Resource Graph.
+The guest configuration agent checks for new or changed guest assignments every
+5 minutes. Once a guest assignment is received, the settings for that
+configuration are rechecked on a 15-minute interval. If multiple configurations
+are assigned, each is evaluated sequentially. Long-running configurations impact
+the interval for all configurations, because the next will not run until the
+prior configuration has finished.
+
+Results are sent to the guest configuration service when the audit completes.
+When a policy
+[evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers)
+occurs, the state of the machine is written to the guest configuration resource
+provider. This update causes Azure Policy to evaluate the Azure Resource Manager
+properties. An on-demand Azure Policy evaluation retrieves the latest value from
+the guest configuration resource provider. However, it doesn't trigger a new
+activity within the machine. The status is then written to Azure
+Resource Graph.
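+
+To request the on-demand evaluation described above from Azure PowerShell, a
+hedged sketch with the `Az.PolicyInsights` module follows; the resource group
+name is a placeholder.
+
+```powershell
+# Sketch: start an on-demand Azure Policy compliance scan for a resource group.
+# This refreshes compliance results but doesn't start a new audit inside machines.
+Start-AzPolicyComplianceScan -ResourceGroupName '<resource_group>'
+```
+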
## Supported client types
-Guest Configuration policy definitions are inclusive of new versions. Older versions of operating
+Guest configuration policy definitions are inclusive of new versions. Older versions of operating
systems available in Azure Marketplace are excluded if the Guest Configuration client isn't compatible.

The following table shows a list of supported operating systems on Azure images. The ".x" text is symbolic to represent new minor versions of Linux distributions.

|Publisher|Name|Versions|
|-|-|-|
+|Amazon|Linux|2|
|Canonical|Ubuntu Server|14.04 - 20.x|
|Credativ|Debian|8 - 10.x|
|Microsoft|Windows Server|2012 - 2019|
|Microsoft|Windows Client|Windows 10|
+|Oracle|Oracle-Linux|7.x-8.x|
|OpenLogic|CentOS|7.3 - 8.x|
|Red Hat|Red Hat Enterprise Linux\*|7.4 - 8.x|
|SUSE|SLES|12 SP3-SP5, 15.x|

\* Red Hat CoreOS isn't supported.
-Custom virtual machine images are supported by Guest Configuration policy definitions as long as
-they're one of the operating systems in the table above.
+Custom virtual machine images are supported by guest configuration policy
+definitions as long as they're one of the operating systems in the table above.
## Network requirements
-Virtual machines in Azure can use either their local network adapter or a private link to
-communicate with the Guest Configuration service.
+Virtual machines in Azure can use either their local network adapter or a
+private link to communicate with the guest configuration service.
-Azure Arc machines connect using the on-premises network infrastructure to reach Azure services and
-report compliance status.
+Azure Arc machines connect using the on-premises network infrastructure to reach
+Azure services and report compliance status.
### Communicate over virtual networks in Azure
-To communicate with the Guest Configuration resource provider in Azure, machines require outbound
-access to Azure datacenters on port **443**. If a network in Azure doesn't allow outbound traffic,
-configure exceptions with [Network Security
-Group](../../../virtual-network/manage-network-security-group.md#create-a-security-rule) rules. The
-[service tags](../../../virtual-network/service-tags-overview.md) "AzureArcInfrastructure" and "Storage" can be
-used to reference the Guest Configuration and Storage services rather than manually maintaining the [list of IP
-ranges](https://www.microsoft.com/download/details.aspx?id=56519) for Azure datacenters. Both tags are required
-because Guest Configuration content packages are hosted by Azure Storage.
+To communicate with the guest configuration resource provider in Azure, machines
+require outbound access to Azure datacenters on port **443**. If a network in
+Azure doesn't allow outbound traffic, configure exceptions with
+[Network Security Group](../../../virtual-network/manage-network-security-group.md#create-a-security-rule)
+rules. The
+[service tags](../../../virtual-network/service-tags-overview.md)
+"AzureArcInfrastructure" and "Storage" can be used to reference the guest
+configuration and Storage services rather than manually maintaining the
+[list of IP ranges](https://www.microsoft.com/download/details.aspx?id=56519)
+for Azure datacenters. Both tags are required because guest configuration
+content packages are hosted by Azure Storage.
### Communicate over Private Link in Azure
-Virtual machines can use [private link](../../../private-link/private-link-overview.md) for
-communication to the Guest Configuration service. Apply tag with the name `EnablePrivateNetworkGC`
-and value `TRUE` to enable this feature. The tag can be applied before
-or after Guest Configuration policy definitions are applied to the machine.
+Virtual machines can use
+[private link](../../../private-link/private-link-overview.md)
+for communication to the guest configuration service. Apply a tag with the name
+`EnablePrivateNetworkGC` and value `TRUE` to enable this feature. The tag can be
+applied before or after guest configuration policy definitions are applied to
+the machine.
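+
+As an illustrative sketch (not from the original article), the tag can be applied
+with Azure PowerShell; the resource group and machine names are placeholders.
+
+```powershell
+# Sketch: opt a machine in to private link communication for guest configuration
+# by applying the tag described above.
+$vm = Get-AzVM -ResourceGroupName '<resource_group>' -Name '<vm_name>'
+Update-AzTag -ResourceId $vm.Id -Tag @{ 'EnablePrivateNetworkGC' = 'TRUE' } -Operation Merge
+```
+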
Traffic is routed using the Azure
-[virtual public IP address](../../../virtual-network/what-is-ip-address-168-63-129-16.md) to
-establish a secure, authenticated channel with Azure platform resources.
+[virtual public IP address](../../../virtual-network/what-is-ip-address-168-63-129-16.md)
+to establish a secure, authenticated channel with Azure platform resources.
-### Azure Arc connected machines
+### Azure Arc-enabled servers
-Nodes located outside Azure that are connected by Azure Arc require connectivity to the Guest
-Configuration service. Details about network and proxy requirements provided in the
+Nodes located outside Azure that are connected by Azure Arc require connectivity
+to the guest configuration service. Details about network and proxy requirements
+are provided in the
[Azure Arc documentation](../../../azure-arc/servers/overview.md).
-For Arc connected servers in private datacenters, allow traffic using the following patterns:
+For Arc-enabled servers in private datacenters, allow traffic using the
+following patterns:
- Port: Only TCP 443 required for outbound internet access
- Global URL: `*.guestconfiguration.azure.com`
-## Managed identity requirements
-
-Policy definitions in the initiative _Deploy prerequisites to enable Guest Configuration policies on
-virtual machines_ enable a system-assigned managed identity, if one doesn't exist. There are two
-policy definitions in the initiative that manage identity creation. The IF conditions in the policy
-definitions ensure the correct behavior based on the current state of the machine resource in Azure.
-
-If the machine doesn't currently have any managed identities, the effective policy will be:
-[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e)
-
-If the machine currently has a user-assigned system identity, the effective policy will be:
-[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6)
-
-## Guest Configuration definition requirements
-
-Guest Configuration policy definitions use the **AuditIfNotExists** effect. When the definition is
-assigned, a back-end service automatically handles the lifecycle of all requirements in the
-`Microsoft.GuestConfiguration` Azure resource provider.
-
-The **AuditIfNotExists** policy definitions won't return compliance results until all requirements
-are met on the machine. The requirements are described in section
-[Deploy requirements for Azure virtual machines](#deploy-requirements-for-azure-virtual-machines)
-
-> [!IMPORTANT]
-> In a prior release of Guest Configuration, an initiative was required to combine
-> **DeployIfNotExists** and **AuditIfNotExists** definitions. **DeployIfNotExists** definitions are
-> no longer required. The definitions and initiatives are labeled `[Deprecated]` but existing
-> assignments will continue to function. For information see the blog post:
-> [Important change released for Guest Configuration audit policies](https://techcommunity.microsoft.com/t5/azure-governance-and-management/important-change-released-for-guest-configuration-audit-policies/ba-p/1655316)
-
-### What is a Guest Assignment?
-
-When an Azure Policy is assigned, if it's in the category "Guest Configuration" there's metadata
-included to describe a Guest Assignment. You can think of a Guest Assignment as a link between a
-machine and an Azure Policy scenario. For example, the following snippet associates the Azure
-Windows Baseline configuration with minimum version `1.0.0` to any machines in scope of the policy.
-By default, the Guest Assignment will only perform an audit of the machine.
-
-```json
-"metadata": {
- "category": "Guest Configuration",
- "guestConfiguration": {
- "name": "AzureWindowsBaseline",
- "version": "1.*"
- }
-//additional metadata properties exist
-```
-
-Guest Assignments are created automatically per machine by the Guest Configuration service. The
-resource type is `Microsoft.GuestConfiguration/guestConfigurationAssignments`. Azure Policy uses the
-**complianceStatus** property of the Guest Assignment resource to report compliance status. For more
-information, see [getting compliance data](../how-to/get-compliance-data.md).
-
-#### Auditing operating system settings following industry baselines
-
-One initiative in Azure Policy audits operating system settings following a "baseline". The
-definition, _\[Preview\]: Windows machines should meet requirements for the Azure security baseline_
-includes a set of rules based on Active Directory Group Policy.
-
-Most of the settings are available as parameters. Parameters allow you to customize what is audited.
-Align the policy with your requirements or map the policy to third-party information such as
-industry regulatory standards.
+## Assigning policies to machines outside of Azure
-Some parameters support an integer value range. For example, the Maximum Password Age setting could
-audit the effective Group Policy setting. A "1,70" range would confirm that users are required to
-change their passwords at least every 70 days, but no less than one day.
-
-If you assign the policy using an Azure Resource Manager template (ARM template), use a parameters
-file to manage exceptions. Check in the files to a version control system such as Git. Comments
-about file changes provide evidence why an assignment is an exception to the expected value.
-
-#### Applying configurations using Guest Configuration
-
-Only the definition _Configure the time zone on Windows machines_ makes changes to the machine by
-configuring the time zone. Custom policy definitions for configuring settings inside machines aren't
-supported.
+The Audit policy definitions available for guest configuration include the
+**Microsoft.HybridCompute/machines** resource type. Any machines onboarded to
+[Azure Arc for servers](../../../azure-arc/servers/overview.md) that are in the
+scope of the policy assignment are automatically included.
-When assigning definitions that begin with _Configure_, you must also assign the definition _Deploy
-prerequisites to enable Guest Configuration Policy on Windows VMs_. You can combine these
-definitions in an initiative if you choose.
+## Managed identity requirements
-> [!NOTE]
-> The built-in time zone policy is the only definition that supports configuring settings inside
-> machines and custom policy definitions that configure settings inside machines aren't supported.
+Policy definitions in the initiative _Deploy prerequisites to enable guest
+configuration policies on virtual machines_ enable a system-assigned managed
+identity, if one doesn't exist. There are two policy definitions in the
+initiative that manage identity creation. The IF conditions in the policy
+definitions ensure the correct behavior based on the current state of the
+machine resource in Azure.
-#### Assigning policies to machines outside of Azure
+If the machine doesn't currently have any managed identities, the effective
+policy is:
+[Add system-assigned managed identity to enable guest configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e)
-The Audit policy definitions available for Guest Configuration include the
-**Microsoft.HybridCompute/machines** resource type. Any machines onboarded to
-[Azure Arc for servers](../../../azure-arc/servers/overview.md) that are in the scope of the policy
-assignment are automatically included.
+If the machine currently has a user-assigned system identity, the effective
+policy is:
+[Add system-assigned managed identity to enable guest configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6)
## Availability
For single resident region all customer data is stored and processed in the regi
## Troubleshooting guest configuration
-For more information about troubleshooting Guest Configuration, see
+For more information about troubleshooting guest configuration, see
[Azure Policy troubleshooting](../troubleshoot/general.md). ### Multiple assignments
-Guest Configuration policy definitions currently only support assigning the same Guest Assignment
-once per machine, even if the Policy assignment uses different parameters.
+Guest configuration policy definitions currently only support assigning the same
+guest assignment once per machine when the policy assignment uses different
+parameters.
+
+### Assignments to Azure Management Groups
+
+Azure Policy definitions in the category 'Guest Configuration' can be assigned
+to Management Groups only when the effect is 'AuditIfNotExists'. Policy
+definitions with effect 'DeployIfNotExists' aren't supported as assignments to
+Management Groups.
### Client log files
-The Guest Configuration extension writes log files to the following locations:
+The guest configuration extension writes log files to the following locations:
Windows: `C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log`

Linux

- Azure VM: `/var/lib/GuestConfig/gc_agent_logs/gc_agent.log`
-- Azure VM: `/var/lib/GuestConfig/arc_policy_logs/gc_agent.log`
+- Arc-enabled server: `/var/lib/GuestConfig/arc_policy_logs/gc_agent.log`
### Collecting logs remotely
-The first step in troubleshooting Guest Configuration configurations or modules should be to use the
-`Test-GuestConfigurationPackage` cmdlet following the steps how to
-[create a custom Guest Configuration audit policy for Windows](../how-to/guest-configuration-create.md#step-by-step-creating-a-custom-guest-configuration-audit-policy-for-windows).
+The first step in troubleshooting guest configuration configurations or modules
+should be to use the cmdlets following the steps in
+[How to test guest configuration package artifacts](../how-to/guest-configuration-create-test.md).
If that isn't successful, collecting client logs can help diagnose issues. #### Windows Capture information from log files using
-[Azure VM Run Command](../../../virtual-machines/windows/run-command.md), the following example
-PowerShell script can be helpful.
+[Azure VM Run Command](../../../virtual-machines/windows/run-command.md); the
+following example PowerShell script can be helpful.
```powershell $linesToIncludeBeforeMatch = 0
Select-String -Path $logPath -pattern 'DSCEngine','DSCManagedEngine' -CaseSensit
#### Linux Capture information from log files using
-[Azure VM Run Command](../../../virtual-machines/linux/run-command.md), the following example Bash
-script can be helpful.
+[Azure VM Run Command](../../../virtual-machines/linux/run-command.md); the
+following example Bash script can be helpful.
```bash linesToIncludeBeforeMatch=0
logPath=/var/lib/GuestConfig/gc_agent_logs/gc_agent.log
egrep -B $linesToIncludeBeforeMatch -A $linesToIncludeAfterMatch 'DSCEngine|DSCManagedEngine' $logPath | tail ```
-### Client files
+### Agent files
-The Guest Configuration client downloads content packages to a machine and extracts the contents.
-To verify what content has been downloaded and stored, view the folder locations given below.
+The guest configuration agent downloads content packages to a machine and
+extracts the contents. To verify what content has been downloaded and stored,
+view the folder locations given below.
Windows: `c:\programdata\guestconfig\configuration`

Linux: `/var/lib/GuestConfig/Configuration`
-## Guest Configuration samples
+## Guest configuration samples
-Guest Configuration built-in policy samples are available in the following locations:
+Guest configuration built-in policy samples are available in the following
+locations:
- [Built-in policy definitions - Guest Configuration](../samples/built-in-policies.md#guest-configuration)
- [Built-in initiatives - Guest Configuration](../samples/built-in-initiatives.md#guest-configuration)
- [Azure Policy samples GitHub repo](https://github.com/Azure/azure-policy/tree/master/built-in-policies/policySetDefinitions/Guest%20Configuration)
-### Video overview
-
-The following overview of Azure Policy Guest Configuration is from ITOps Talks 2021.
-
-[Governing baselines in hybrid server environments using Azure Policy Guest Configuration](https://techcommunity.microsoft.com/t5/itops-talk-blog/ops114-governing-baselines-in-hybrid-server-environments-using/ba-p/2109245)
## Next steps

-- Learn how to view the details each setting from the
- [Guest Configuration compliance view](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration)
-- Review examples at [Azure Policy samples](../samples/index.md).
-- Review the [Azure Policy definition structure](./definition-structure.md).
-- Review [Understanding policy effects](./effects.md).
-- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
-- Learn how to [get compliance data](../how-to/get-compliance-data.md).
-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
-- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+- Set up a custom guest configuration package [development environment](../how-to/guest-configuration-create-setup.md).
+- [Create a package artifact](../how-to/guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](../how-to/guest-configuration-create-test.md)
+ from your development environment.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](../how-to/guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/determine-non-compliance.md
_Non-compliant_ **Component** and **Component ID**.
:::image type="content" source="../media/getting-compliance-data/compliance-components.png" alt-text="Screenshot of Component Compliance tab and compliance details for a Resource Provider mode assignment." border="false":::
-## Compliance details for Guest Configuration
+## Compliance details for guest configuration
For _auditIfNotExists_ policies in the _Guest Configuration_ category, there could be multiple settings evaluated inside the virtual machine and you'll need to view per-setting details. For
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/get-compliance-data.md
Evaluations of assigned policies and initiatives happen as the result of various
pre-defined expectation of when the evaluation cycle completes. Once it completes, updated compliance results are available in the portal and SDKs. -- The [Guest Configuration](../concepts/guest-configuration.md) resource provider is updated with
+- The [guest configuration](../concepts/guest-configuration.md) resource provider is updated with
compliance details by a managed resource. - On-demand scan
governance Guest Configuration Azure Automation Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-azure-automation-migration.md
+
+ Title: Azure Automation State Configuration to guest configuration migration planning
+description: This article provides process and technical guidance for customers interested in moving from DSC version 2 in Azure Automation to version 3 in Azure Policy.
Last updated : 07/1/2021++
+# Azure Automation state configuration to guest configuration migration planning
+
+Guest configuration is the latest implementation of functionality
+that has been provided by Azure Automation State Configuration (also known as
+Azure Automation Desired State Configuration, or AADSC).
+When possible, you should plan to move your content and machines to the new service.
+This article provides guidance on developing a migration strategy from Azure
+Automation to guest configuration.
+
+New features in guest configuration address top asks from customers:
+
+- Increased size limit for configurations (100 MB)
+- Advanced reporting through Azure Resource Graph including resource ID and state
+- Manage multiple configurations for the same machine
+- When machines drift from the desired state, you control when remediation occurs
+- Linux and Windows both consume PowerShell-based DSC resources
+
+Before you begin, it's a good idea to read the conceptual overview
+information at the page
+[Azure Policy's guest configuration](../concepts/guest-configuration.md).
+
+## Understand migration
+
+The best approach to migration is to redeploy content first, and then
+migrate machines. The expected steps for migration are outlined below.
+
+- Export configurations from Azure Automation
+- Discover module requirements and load them in your environment
+- Compile configurations
+- Create and publish guest configuration packages
+- Test guest configuration packages
+- Onboard hybrid machines to Azure Arc
+- Unregister servers from Azure Automation State Configuration
+- Assign configurations to servers using guest configuration
+
+Guest configuration uses DSC version 3 with PowerShell version 7.
+DSC version 3 can coexist with older versions of DSC in
+[Windows](/powershell/scripting/dsc/getting-started/wingettingstarted) and
+[Linux](/powershell/scripting/dsc/getting-started/lnxgettingstarted).
+The implementations are separate. However, there's no conflict detection.
+
+Guest configuration doesn't require publishing modules or configurations into
+a service, or compiling in a service. Instead, content is developed and tested
+using purpose-built tooling and published anywhere the machine can reach over
+HTTPS (typically Azure Blob Storage).
+
+If you decide the right plan for your migration is to have machines in both
+services for some period of time, there are no technical barriers, although
+managing two services in parallel could be confusing. The two services are independent.
+
+## Export content from Azure Automation
+
+Start by discovering and exporting content from Azure Automation State
+Configuration into a development environment where you create, test, and publish
+content packages for guest configuration.
+
+### Configurations
+
+Only configuration scripts can be exported from Azure Automation. It isn't
+possible to export "Node configurations", or compiled MOF files.
+If you published MOF files directly into the Automation Account and no longer
+have access to the original file, you must recompile from your private
+configuration scripts, or possibly re-author the configuration if the original
+can't be found.
+
+To export configuration scripts from Azure Automation, first identify the Azure
+Automation account that contains the configurations and the name of the Resource
+Group where the Automation Account is deployed.
+
+Install the PowerShell module "Az.Automation".
+
+```powershell
+Install-Module Az.Automation
+```
+
+Next, use the "Get-AzAutomationAccount" command to identify your Automation
+Accounts and the Resource Group where they're deployed.
+The properties "ResourceGroupName" and "AutomationAccountName"
+are important for next steps.
+
+```powershell
+Get-AzAutomationAccount
+
+SubscriptionId : <your subscription id>
+ResourceGroupName : <your resource group name>
+AutomationAccountName : <your automation account name>
+Location : centralus
+State :
+Plan :
+CreationTime : 6/30/2021 11:56:17 AM -05:00
+LastModifiedTime : 6/30/2021 11:56:17 AM -05:00
+LastModifiedBy :
+Tags : {}
+```
+
+Discover the configurations in your Automation Account. The output
+contains one entry per configuration. If you have many, store the information
+as a variable so it's easier to work with.
+
+```powershell
+Get-AzAutomationDscConfiguration -ResourceGroupName <your resource group name> -AutomationAccountName <your automation account name>
+
+ResourceGroupName : <your resource group name>
+AutomationAccountName : <your automation account name>
+Location : centralus
+State : Published
+Name : <your configuration name>
+Tags : {}
+CreationTime : 6/30/2021 12:18:26 PM -05:00
+LastModifiedTime : 6/30/2021 12:18:26 PM -05:00
+Description :
+Parameters : {}
+LogVerbose : False
+```
+
+Finally, export each configuration to a local script file using the command
+"Export-AzAutomationDscConfiguration". The resulting file name uses the
+pattern `\ConfigurationName.ps1`.
+
+```powershell
+Export-AzAutomationDscConfiguration -OutputFolder /<location on your machine> -ResourceGroupName <your resource group name> -AutomationAccountName <your automation account name> -name <your configuration name>
+
+UnixMode User Group LastWriteTime Size Name
+-- - -- - - -
+ 12/31/1600 18:09
+```
+
+#### Export configurations using the PowerShell pipeline
+
+After you've discovered your accounts and the number of configurations,
+you might wish to export all configurations to a local folder on your machine.
+To automate this process, pipe the output of each command above to the next.
+
+The example exports 5 configurations. The output pattern is
+the only indication of success.
+
+```powershell
+Get-AzAutomationAccount | Get-AzAutomationDscConfiguration | Export-AzAutomationDSCConfiguration -OutputFolder /<location on your machine>
+
+UnixMode User Group LastWriteTime Size Name
+-- - -- - - -
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+ 12/31/1600 18:09
+```
+
+#### Consider decomposing complex configuration files
+
+Guest configuration can manage multiple configurations per machine.
+Many configurations written for Azure Automation State Configuration assumed the
+limitation of managing a single configuration per machine. To take advantage of
+the expanded capabilities offered by guest configuration, large
+configuration files can be divided into many smaller configurations where each
+handles a specific scenario.
+
+There's no orchestration in guest configuration to control the order in which
+configurations are applied, so keep steps that must happen sequentially together
+in one configuration package.
+
+### Modules
+
+It isn't possible to export modules from Azure Automation or automatically
+correlate which configurations require which module/version. You must
+have the modules in your local environment to create a new guest configuration
+package. To create a list of modules you need for migration, use PowerShell to
+query Azure Automation for the name and version of modules.
+
+If you are using modules that are custom authored and only exist in your private
+development environment, it isn't possible to export them from Azure
+Automation.
+
+If a custom module is required for a configuration and is in the account, but you
+can't find it in your environment, you won't be able to compile the
+configuration, which means you won't be able to migrate the configuration.
+
+#### List modules imported in Azure Automation
+
+To retrieve a list of all modules that are installed in your automation account,
+use the `Get-AzAutomationModule` command. The property "IsGlobal" tells you
+whether the module is always built into Azure Automation, or whether it was
+published to the account.
+
+For example, you can create a list of all modules published to any of your accounts:
+
+```powershell
+Get-AzAutomationAccount | Get-AzAutomationModule | ? IsGlobal -eq $false
+```
+
+You can also use the PowerShell Gallery as an aid in finding details about
+modules that are publicly available. For example, the list of modules that are
+built into new Automation Accounts, and that contain DSC resources, is produced
+by the following example.
+
+```powershell
+Get-AzAutomationAccount | Get-AzAutomationModule | ? IsGlobal -eq $true | Find-Module -erroraction silentlycontinue | ? {'' -ne $_.Includes.DscResource} | Select Name, Version -Unique | format-table -AutoSize
+
+Name Version
+- -
+AuditPolicyDsc 1.4.0
+ComputerManagementDsc 8.4.0
+PSDscResources 2.12.0
+SecurityPolicyDsc 2.10.0
+xDSCDomainjoin 1.2.23
+xPowerShellExecutionPolicy 3.1.0.0
+xRemoteDesktopAdmin 1.1.0.0
+```
+
+#### Download modules from PowerShell Gallery or PowerShellGet repository
+
+If the modules were imported from the PowerShell Gallery, you can pipe the output
+from `Find-Module` directly to `Install-Module`. Piping the output across commands
+provides a solution to load a developer environment with all modules currently in
+an Automation Account that are available publicly in the PowerShell Gallery.
+
+The same approach could be used to pull modules from a custom NuGet feed, if
+the feed is registered in your local environment as a
+[PowerShellGet repository](/powershell/scripting/gallery/how-to/working-with-local-psrepositories).
+
+The `Find-Module` command in the example doesn't suppress errors, meaning
+any modules not found in the gallery return an error message.
+
+```powershell
+Get-AzAutomationAccount | Get-AzAutomationModule | ? IsGlobal -eq $false | Find-Module | ? {'' -ne $_.Includes.DscResource} | Install-Module
+
+ Installing package xWebAdministration'
+
+ [ ]
+```
+
+#### Inspecting configuration scripts for module requirements
+
+If you've exported configuration scripts from Azure Automation, you can also
+review the contents for details about which modules are required to compile each
+configuration to a MOF file. This approach would only be needed if you find
+configurations in your Automation Accounts where the modules have been removed.
+The configurations would no longer be useful for machines, but they might still
+be in the account.
+
+Towards the top of each file, look for a line that includes 'Import-DscResource'.
+This command is only applicable inside a configuration, and is used to load modules
+at the time of compilation.
+
+For example, the "WindowsIISServerConfig" configuration in the PowerShell Gallery
+contains the lines in this example.
+
+```powershell
+configuration WindowsIISServerConfig
+{
+
+Import-DscResource -ModuleName @{ModuleName = 'xWebAdministration';ModuleVersion = '1.19.0.0'}
+Import-DscResource -ModuleName 'PSDesiredStateConfiguration'
+```
+
+The configuration requires you to have the "xWebAdministration" module version
+"1.19.0.0" and the module "PSDesiredStateConfiguration".
+
+### Test content in Azure guest configuration
+
+The best way to evaluate whether your content from Azure Automation State
+Configuration can be used with guest configuration is to follow
+the step-by-step tutorial in the page
+[How to create custom guest configuration package artifacts](./guest-configuration-create.md).
+
+When you reach the step
+[Author a configuration](./guest-configuration-create.md#author-a-configuration),
+the configuration script that generates a MOF file should be one of the scripts
+you exported from Azure Automation State Configuration. You must have the
+required PowerShell modules installed in your environment before you can compile
+the configuration to a MOF file and create a guest configuration package.
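+
+A condensed, hedged sketch of those steps follows. It assumes the `GuestConfiguration`
+module is installed, that the exported configuration targets `localhost`, and that the
+cmdlet parameters match the versions documented in the linked how-to articles; the
+configuration name and paths are placeholders.
+
+```powershell
+# Sketch: compile an exported configuration to a MOF file, then package and
+# test it with the GuestConfiguration module (names and paths are placeholders).
+. ./<your configuration name>.ps1                  # dot-source the exported script
+<your configuration name> -OutputPath ./compiled   # compile; produces ./compiled/localhost.mof
+
+# Assumption: the returned object's Path property points at the packaged .zip file.
+$package = New-GuestConfigurationPackage -Name '<your configuration name>' `
+    -Configuration ./compiled/localhost.mof `
+    -Path ./package
+
+Test-GuestConfigurationPackage -Path $package.Path
+```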
+
+#### What if a module does not work with guest configuration?
+
+Some modules might encounter compatibility issues with guest configuration. The
+most common problems are related to .NET framework vs .NET core. Detailed
+technical information is available on the page,
+[Differences between Windows PowerShell 5.1 and PowerShell (core) 7.x](/powershell/scripting/whats-new/differences-from-windows-powershell)
+
+One option to resolve compatibility issues is to run commands in Windows PowerShell
+from within a module that is imported in PowerShell 7, by running `powershell.exe`.
+You can review a sample module that uses this technique in the Azure-Policy repo
+where it is used to audit the state of
+[Windows DSC Configuration](https://github.com/Azure/azure-policy/blob/bbfc60104c2c5b7fa6dd5b784b5d4713ddd55218/samples/GuestConfiguration/package-samples/resource-modules/WindowsDscConfiguration/DscResources/WindowsDscConfiguration/WindowsDscConfiguration.psm1#L97).
+
+The example also illustrates a small proof of concept.
+
+```powershell
+# example function that could be loaded from module
+function New-TaskResolvedInPWSH7 {
+ # runs the fictitious command 'Get-myNotCompatibleCommand' in Windows PowerShell
+ $compatObject = & powershell.exe -noprofile -NonInteractive -command { Get-myNotCompatibleCommand }
+ # resulting object can be used in PowerShell 7
+ return $compatObject
+}
+```
+
+#### Will I have to add "Reasons" property to Get-TargetResource in all modules I migrate?
+
+Implementing the
+["Reasons" property](../concepts/guest-configuration-custom.md#special-requirements-for-get)
+provides a better experience when viewing
+the results of a configuration assignment from the Azure portal. If the `Get`
+method in a module doesn't include "Reasons", generic output is returned
+with details from the properties returned by the `Get` method. Therefore,
+it's optional for migration.
+
+## Machines
+
+After you've finished testing content from Azure Automation State Configuration
+in guest configuration, develop a plan for migrating machines.
+
+Azure Automation State Configuration is available for both virtual machines in
+Azure and hybrid machines located outside of Azure. You must plan for each of
+these scenarios using different steps.
+
+### Azure VMs
+
+Azure virtual machines already have a
+[resource](../../../azure-resource-manager/management/overview.md#terminology)
+in Azure, which means they're ready for guest configuration assignments that
+associate them with a configuration. The high-level tasks for migrating Azure
+virtual machines are to remove them from Azure Automation State Configuration
+and then assign configurations using guest configuration.
+
+To remove a machine from Azure Automation State Configuration, follow the steps
+in the page
+[How to remove a configuration and node from Automation State Configuration](../../../automation/state-configuration/remove-node-and-configuration-package.md).
+
+To assign configurations using guest configuration, follow the steps in the
+Azure Policy Quickstarts, such as
+[Quickstart: Create a policy assignment to identify non-compliant resources](../assign-policy-portal.md).
+In step 6 when selecting a policy definition, pick the definition that applies
+a configuration you migrated from Azure Automation State Configuration.
+
+### Hybrid machines
+
+Machines outside of Azure
+[can be registered to Azure Automation State Configuration](../../../automation/automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines),
+but they don't have a machine resource in Azure. The connection
+to Azure Automation is handled by Local Configuration Manager service inside
+the machine and the record of the node is managed as a resource in the Azure
+Automation provider type.
+
+Before removing a machine from Azure Automation State Configuration,
+onboard each node as an
+[Azure Arc-enabled server](../../../azure-arc/servers/overview.md).
+Onboarding to Azure Arc creates a machine resource in Azure so the machine
+can be managed by Azure Policy. The machine can be onboarded to Azure Arc at any
+time, but you can use Azure Automation State Configuration to automate the process.
+
+You can register a machine to Azure Arc-enabled servers by using PowerShell DSC.
+For details, view the page
+[How to install the Connected Machine agent using Windows PowerShell DSC](../../../azure-arc/servers/onboard-dsc.md).
+Remember, however, that Azure Automation State Configuration can manage only one
+configuration per machine, per Automation Account. This means you have the option
+to export, test, and prepare your content for guest configuration, and then
+"switch" the node configuration in Azure Automation to onboard to Azure Arc. As
+the last step, you remove the node registration from Azure Automation State
+Configuration and move forward only managing the machine state through guest
+configuration.
+
+## Troubleshooting issues when exporting content
+
+Details about known issues are provided below.
+
+### Exporting configurations results in "\\" character in file name
+
+When using PowerShell on macOS/Linux, you'll encounter issues dealing with the file
+names output by `Export-AzAutomationDSCConfiguration`.
+
+As a workaround, a module has been published to the PowerShell Gallery named
+[AADSCConfigContent](https://www.powershellgallery.com/packages/AADSCConfigContent/).
+The module has only one command, which exports the content
+of a configuration stored in Azure Automation by making a REST request to the
+service.
+
+## Next steps
+
+- [Create a package artifact](./guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](./guest-configuration-create-test.md)
+ from your development environment.
+- [Publish the package artifact](./guest-configuration-create-publish.md)
+ so it is accessible to your machines.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](./guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](./determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration Create Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-assignment.md
+
+ Title: How to create a guest configuration assignment using templates
+description: Learn how to deploy configurations to machines directly from Azure Resource Manager.
Last updated : 08/09/2021++
+# How to create a guest configuration assignment using templates
+
+The best way to
+[assign guest configuration packages](../concepts/guest-configuration-assignments.md)
+to multiple machines is using
+[Azure Policy](./guest-configuration-create-definition.md). You can also
+assign guest configuration packages to a single machine.
+
+## Built-in and custom configurations
+
+To assign a guest configuration package to a single machine, modify the following
+examples. There are two scenarios.
+
+- Apply a custom configuration to a machine using a link to a package that you
+ [published](./guest-configuration-create-publish.md).
+- Apply a [built-in](../samples/built-in-packages.md) configuration to a machine,
+ such as an Azure baseline.
+
+## Extending other resource types, such as Arc-enabled servers
+
+In each of the following sections, the example includes a **type** property
+where the name starts with `Microsoft.Compute/virtualMachines`. The guest
+configuration resource provider `Microsoft.GuestConfiguration` is an
+[extension resource](../../../azure-resource-manager/management/extension-resource-types.md)
+that must reference a parent type.
+
+To modify the example for other resource types such as
+[Arc-enabled servers](../../../azure-arc/servers/overview.md),
+change the parent type to the name of the resource provider.
+For Arc-enabled servers, the resource provider is
+`Microsoft.HybridCompute/machines`.
+
+Replace the following "<>" fields with values specific to your environment:
+
+- **<vm_name>**: Name of the machine resource where the configuration will be applied
+- **<configuration_name>**: Name of the configuration to apply
+- **<vm_location>**: Azure region where the guest configuration assignment will be created
+- **<Url_to_Package.zip>**: For custom content package, an HTTPS link to the .zip file
+- **<SHA256_hash_of_package.zip>**: For custom content package, a SHA256 hash of the .zip file
+
+## Assign a configuration using an Azure Resource Manager template
+
+You can deploy an
+[Azure Resource Manager template](../../../azure-resource-manager/templates/deployment-tutorial-local-template.md?tabs=azure-powershell)
+containing guest configuration assignment resources.
+
+The following example assigns a custom configuration.
+
+```json
+{
+ "apiVersion": "2020-06-25",
+ "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
+ "name": "<vm_name>/Microsoft.GuestConfiguration/<configuration_name>",
+ "location": "<vm_location>",
+ "dependsOn": [
+ "Microsoft.Compute/virtualMachines/<vm_name>"
+ ],
+ "properties": {
+ "guestConfiguration": {
+ "name": "<configuration_name>",
+ "contentUri": "<Url_to_Package.zip>",
+ "contentHash": "<SHA256_hash_of_package.zip>",
+ "assignmentType": "ApplyAndMonitor"
+ }
+ }
+ }
+```
+
+The following example assigns the `AzureWindowsBaseline` built-in configuration.
+
+```json
+{
+ "apiVersion": "2020-06-25",
+ "type": "Microsoft.Compute/virtualMachines/providers/guestConfigurationAssignments",
+ "name": "<vm_name>/Microsoft.GuestConfiguration/<configuration_name>",
+ "location": "<vm_location>",
+ "dependsOn": [
+ "Microsoft.Compute/virtualMachines/<vm_name>"
+ ],
+ "properties": {
+ "guestConfiguration": {
+ "name": "AzureWindowsBaseline",
+ "version": "1.*",
+ "assignmentType": "ApplyAndMonitor",
+ "configurationParameter": [
+ {
+ "name": "Minimum Password Length;ExpectedValue",
+ "value": "16"
+ },
+ {
+ "name": "Minimum Password Length;RemediateValue",
+ "value": "16"
+ },
+ {
+ "name": "Maximum Password Age;ExpectedValue",
+ "value": "75"
+ },
+ {
+ "name": "Maximum Password Age;RemediateValue",
+ "value": "75"
+ }
+ ]
+ }
+ }
+ }
+```
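+
+As a hedged convenience (not part of the original examples), after wrapping either
+snippet in a complete template file with `$schema`, `contentVersion`, and a
+`resources` array, you could deploy it with Azure PowerShell; the file and resource
+group names are placeholders.
+
+```powershell
+# Sketch: deploy a template file that contains one of the assignment examples above.
+New-AzResourceGroupDeployment -ResourceGroupName '<resource_group>' `
+    -TemplateFile ./guest-configuration-assignment.json
+```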
+
+## Assign a configuration using Bicep
+
+You can use
+[Azure Bicep](../../../azure-resource-manager/bicep/overview.md)
+to deploy guest configuration assignments.
+
+The following example assigns a custom configuration.
+
+```Bicep
+resource myVM 'Microsoft.Compute/virtualMachines@2021-03-01' existing = {
+ name: '<vm_name>'
+}
+
+resource myConfiguration 'Microsoft.GuestConfiguration/guestConfigurationAssignments@2020-06-25' = {
+ name: '<configuration_name>'
+ scope: myVM
+ location: resourceGroup().location
+ properties: {
+ guestConfiguration: {
+ name: '<configuration_name>'
+ contentUri: '<Url_to_Package.zip>'
+ contentHash: '<SHA256_hash_of_package.zip>'
+ version: '1.*'
+ assignmentType: 'ApplyAndMonitor'
+ }
+ }
+}
+```
+
+The following example assigns the `AzureWindowsBaseline` built-in configuration.
+
+```Bicep
+resource myWindowsVM 'Microsoft.Compute/virtualMachines@2021-03-01' existing = {
+ name: '<vm_name>'
+}
+
+resource AzureWindowsBaseline 'Microsoft.GuestConfiguration/guestConfigurationAssignments@2020-06-25' = {
+ name: 'AzureWindowsBaseline'
+ scope: myWindowsVM
+ location: resourceGroup().location
+ properties: {
+ guestConfiguration: {
+ name: 'AzureWindowsBaseline'
+ version: '1.*'
+ assignmentType: 'ApplyAndMonitor'
+ configurationParameter: [
+ {
+ name: 'Minimum Password Length;ExpectedValue'
+ value: '16'
+ }
+ {
+ name: 'Minimum Password Length;RemediateValue'
+ value: '16'
+ }
+ {
+ name: 'Maximum Password Age;ExpectedValue'
+ value: '75'
+ }
+ {
+ name: 'Maximum Password Age;RemediateValue'
+ value: '75'
+ }
+ ]
+ }
+ }
+}
+```
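+
+A Bicep file can be deployed the same way as an ARM template. The following
+sketch assumes a recent Az.Resources version with the Bicep CLI installed; the
+file name and resource group name are placeholders.
+
+```powershell
+# Az transpiles the .bicep file automatically when the Bicep CLI is available.
+New-AzResourceGroupDeployment `
+    -ResourceGroupName 'myResourceGroup' `
+    -TemplateFile './guest-config-assignment.bicep'
+```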
+
+## Assign a configuration using Terraform
+
+You can use
+[Terraform](https://www.terraform.io/)
+to
+[deploy](/azure/developer/terraform/get-started-windows-powershell)
+guest configuration assignments.
+
+> [!IMPORTANT]
+> The Terraform provider
+> [azurerm_policy_virtual_machine_configuration_assignment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_configuration_policy_assignment)
+> hasn't been updated to support the `assignmentType` property so only
+> configurations that perform audits are supported.
+
+The following example assigns a custom configuration.
+
+```Terraform
+resource "azurerm_virtual_machine_configuration_policy_assignment" "<configuration_name>" {
+ name = "<configuration_name>"
+ location = azurerm_windows_virtual_machine.example.location
+ virtual_machine_id = azurerm_windows_virtual_machine.example.id
+ configuration {
+ name = "<configuration_name>"
+    contentUri      = "<Url_to_Package.zip>"
+    contentHash     = "<SHA256_hash_of_package.zip>"
+    version         = "1.*"
+    assignmentType  = "ApplyAndMonitor"
+ }
+}
+```
+
+The following example assigns the `AzureWindowsBaseline` built-in configuration.
+
+```Terraform
+resource "azurerm_virtual_machine_configuration_policy_assignment" "AzureWindowsBaseline" {
+ name = "AzureWindowsBaseline"
+ location = azurerm_windows_virtual_machine.example.location
+ virtual_machine_id = azurerm_windows_virtual_machine.example.id
+ configuration {
+ name = "AzureWindowsBaseline"
+ version = "1.*"
+ parameter {
+ name = "Minimum Password Length;ExpectedValue"
+ value = "16"
+ }
+ parameter {
+ name = "Minimum Password Length;RemediateValue"
+ value = "16"
+ }
+ parameter {
+ name = "Minimum Password Age;ExpectedValue"
+ value = "75"
+ }
+ parameter {
+ name = "Minimum Password Age;RemediateValue"
+ value = "75"
+ }
+ }
+}
+```
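+
+After saving the configuration to a `.tf` file, the standard Terraform workflow
+applies. A minimal sketch from a PowerShell session, assuming the Terraform CLI
+is installed and authenticated to Azure:
+
+```powershell
+# Run from the directory that contains your Terraform configuration files.
+terraform init
+terraform plan -out main.tfplan
+terraform apply main.tfplan
+```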
+
+## Next steps
+
+- Read the [guest configuration overview](../concepts/guest-configuration.md).
+- Set up a custom guest configuration package [development environment](../how-to/guest-configuration-create-setup.md).
+- [Create a package artifact](../how-to/guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](../how-to/guest-configuration-create-test.md)
+ from your development environment.
+- [Publish the package artifact](./guest-configuration-create-publish.md)
+ so it is accessible to your machines.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](../how-to/guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-definition.md
+
+ Title: How to create custom guest configuration policy definitions
+description: Learn how to create a guest configuration policy.
Last updated : 07/22/2021++
+# How to create custom guest configuration policy definitions
+
+Before you begin, it's a good idea to read the overview page for
+[guest configuration](../concepts/guest-configuration.md),
+and the details about guest configuration policy effects in
+[How to configure remediation options for guest configuration](../concepts/guest-configuration-policy-effects.md).
+
+> [!IMPORTANT]
+> The guest configuration extension is required for Azure virtual machines. To
+> deploy the extension at scale across all machines, assign the following policy
+> initiative: `Deploy prerequisites to enable guest configuration policies on
+> virtual machines`
+>
+> To use guest configuration packages that apply configurations, Azure VM guest
+> configuration extension version **1.29.24** or later,
+> or Arc agent **1.10.0** or later, is required.
+>
+> Custom guest configuration policy definitions using **AuditIfNotExists** are
+> Generally Available, but definitions using **DeployIfNotExists** with guest
+> configuration are **in preview**.
+
+Use the following steps to create your own policies that audit compliance or
+manage the state of Azure or Arc-enabled machines.
+
+## Install PowerShell 7 and required PowerShell modules
+
+First, make sure you've followed all steps on the page
+[How to setup a guest configuration authoring environment](./guest-configuration-create-setup.md)
+to install the required version of PowerShell for your OS and the
+`GuestConfiguration` module.
+
+## Create and publish a guest configuration package artifact
+
+If you haven't already, follow all steps on the page
+[How to create custom guest configuration package artifacts](./guest-configuration-create.md)
+to create and publish a custom guest configuration package
+and
+[How to test guest configuration package artifacts](./guest-configuration-create-test.md) to validate the guest configuration package locally in your
+development environment.
+
+## Policy requirements for guest configuration
+
+The policy definition `metadata` section must include two properties for the
+guest configuration service to automate provisioning and reporting of guest
+configuration assignments. The `category` property must be set to "Guest
+Configuration" and a section named `guestConfiguration` must contain information
+about the guest configuration assignment. The `New-GuestConfigurationPolicy`
+cmdlet creates this text automatically.
+
+The following example demonstrates the `metadata` section that is automatically
+created by `New-GuestConfigurationPolicy`.
+
+```json
+ "metadata": {
+ "category": "Guest Configuration",
+ "guestConfiguration": {
+ "name": "test",
+ "version": "1.0.0",
+ "contentType": "Custom",
+ "contentUri": "CUSTOM-URI-HERE",
+ "contentHash": "CUSTOM-HASH-VALUE-HERE",
+ "configurationParameter": {}
+ }
+ },
+```
+
+The `category` property must be set to "Guest Configuration". If the definition
+effect is set to "DeployIfNotExists", the `then` section must contain deployment
+details about a guest configuration assignment. The
+`New-GuestConfigurationPolicy` cmdlet creates this text automatically.
+
+### Create an Azure Policy definition
+
+Once a guest configuration custom policy package has been created and uploaded,
+create the guest configuration policy definition. The `New-GuestConfigurationPolicy`
+cmdlet takes a custom policy package and creates a policy definition.
+
+The **PolicyId** parameter of `New-GuestConfigurationPolicy` requires a unique
+string. A globally unique identifier (GUID) is recommended. For new definitions,
+generate a new GUID using the cmdlet `New-GUID`. When making updates to the
+definition, use the same unique string for **PolicyId** to ensure the correct
+definition is updated.
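+
+For example, you can generate a GUID once and record it with your source files
+so the same value is reused for every update; a minimal sketch:
+
+```powershell
+# Generate a GUID to use as the PolicyId and keep the value for future updates.
+$policyId = New-Guid
+$policyId.Guid
+```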
+
+Parameters of the `New-GuestConfigurationPolicy` cmdlet:
+
+- **PolicyId**: A GUID or other unique string that identifies the definition.
+- **ContentUri**: Public HTTP(s) URI of guest configuration content package.
+- **DisplayName**: Policy display name.
+- **Description**: Policy description.
+- **Parameter**: Policy parameters provided in hashtable format.
+- **Version**: Policy version.
+- **Path**: Destination path where policy definitions are created.
+- **Platform**: Target platform (Windows/Linux) for guest configuration policy
+ and content package.
+- **Mode**: (ApplyAndMonitor, ApplyAndAutoCorrect, Audit) Chooses whether the policy
+  should audit or apply the configuration. The default is "Audit".
+- **Tag**: Adds one or more tag filters to the policy definition.
+- **Category**: Sets the category metadata field in the policy definition.
+
+For more information about the "Mode" parameter, see the page
+[How to configure remediation options for guest configuration](../concepts/guest-configuration-policy-effects.md).
+
+Create a policy definition that audits using a custom
+configuration package, in a specified path:
+
+```powershell
+New-GuestConfigurationPolicy `
+  -PolicyId 'My GUID' `
+ -ContentUri '<paste the ContentUri output from the Publish command>' `
+ -DisplayName 'My audit policy.' `
+ -Description 'Details about my policy.' `
+ -Path './policies' `
+ -Platform 'Windows' `
+ -Version 1.0.0 `
+ -Verbose
+```
+
+Create a policy definition that deploys a configuration using a custom
+configuration package, in a specified path:
+
+```powershell
+New-GuestConfigurationPolicy `
+  -PolicyId 'My GUID' `
+ -ContentUri '<paste the ContentUri output from the Publish command>' `
+ -DisplayName 'My audit policy.' `
+ -Description 'Details about my policy.' `
+ -Path './policies' `
+ -Platform 'Windows' `
+ -Version 1.0.0 `
+ -Mode 'ApplyAndAutoCorrect' `
+ -Verbose
+```
+
+The cmdlet output returns an object containing the definition display name and
+path of the policy files. Definition JSON files that create audit policy definitions
+have the name **auditIfNotExists.json** and files that create policy definitions to
+apply configurations have the name **deployIfNotExists.json**.
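+
+For example, you can confirm which definition files were generated by listing
+the output path used above; a minimal sketch:
+
+```powershell
+# List the definition JSON files created by New-GuestConfigurationPolicy.
+Get-ChildItem -Path './policies' -Filter '*.json' -Recurse
+```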
+
+#### Filtering guest configuration policies using tags
+
+The policy definitions created by cmdlets in the GuestConfiguration module can optionally include a
+filter for tags. The **Tag** parameter of `New-GuestConfigurationPolicy` supports an array of
+hashtables containing individual tag entries. The tags are added to the `If` section of the policy
+definition and can't be modified by a policy assignment.
+
+An example snippet of a policy definition that filters for tags is given below.
+
+```json
+"if": {
+ "allOf" : [
+ {
+ "allOf": [
+ {
+ "field": "tags.Owner",
+ "equals": "BusinessUnit"
+ },
+ {
+ "field": "tags.Role",
+ "equals": "Web"
+ }
+ ]
+ },
+ {
+ // Original guest configuration content
+ }
+ ]
+}
+```
+
+#### Using parameters in custom guest configuration policy definitions
+
+Guest configuration supports overriding properties of a Configuration at run time. This feature
+means that the values in the MOF file in the package don't have to be considered static. The
+override values are provided through Azure Policy and don't change how the Configurations are
+authored or compiled.
+
+The cmdlets `New-GuestConfigurationPolicy` and `Get-GuestConfigurationPackageComplianceStatus` include a
+parameter named **Parameter**. This parameter takes a hashtable definition including all details
+about each parameter and creates the required sections of each file used for the Azure Policy
+definition.
+
+The following example creates a policy definition to audit a service, where the user selects from a
+list at the time of policy assignment.
+
+```powershell
+# This DSC Resource text:
+Service 'UserSelectedNameExample'
+ {
+ Name = 'ParameterValue'
+ Ensure = 'Present'
+ State = 'Running'
+    }
+
+# Would require the following hashtable:
+$PolicyParameterInfo = @(
+ @{
+ Name = 'ServiceName' # Policy parameter name (mandatory)
+ DisplayName = 'windows service name.' # Policy parameter display name (mandatory)
+ Description = 'Name of the windows service to be audited.' # Policy parameter description (optional)
+ ResourceType = 'Service' # DSC configuration resource type (mandatory)
+ ResourceId = 'UserSelectedNameExample' # DSC configuration resource id (mandatory)
+ ResourcePropertyName = 'Name' # DSC configuration resource property name (mandatory)
+ DefaultValue = 'winrm' # Policy parameter default value (optional)
+ AllowedValues = @('BDESVC','TermService','wuauserv','winrm') # Policy parameter allowed values (optional)
+ }
+)
+
+New-GuestConfigurationPolicy `
+  -PolicyId 'My GUID' `
+ -ContentUri '<paste the ContentUri output from the Publish command>' `
+ -DisplayName 'Audit Windows Service.' `
+  -Description 'Audit if a Windows Service is not enabled on a Windows machine.' `
+ -Path '.\policies' `
+ -Parameter $PolicyParameterInfo `
+ -Version 1.0.0
+```
+
+### Publish the Azure Policy definition
+
+Finally, publish the policy definitions using the `Publish-GuestConfigurationPolicy` cmdlet. The
+cmdlet only has the **Path** parameter that points to the location of the JSON files created by
+`New-GuestConfigurationPolicy`.
+
+To run the Publish command, you need access to create policy definitions in Azure. The specific authorization
+requirements are documented in the [Azure Policy Overview](../overview.md) page. The recommended built-in
+role is **Resource Policy Contributor**.
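+
+If you need to grant that role, the following sketch assigns it at subscription
+scope; the user principal name and subscription ID are placeholders.
+
+```powershell
+# Hypothetical principal and scope; adjust for your environment.
+New-AzRoleAssignment -SignInName 'user@contoso.com' `
+    -RoleDefinitionName 'Resource Policy Contributor' `
+    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000'
+```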
+
+```powershell
+Publish-GuestConfigurationPolicy -Path '.\policies'
+```
+
+With the policy definition created in Azure, the last step is to assign the definition. See how to assign the
+definition with [Portal](../assign-policy-portal.md), [Azure CLI](../assign-policy-azurecli.md), and
+[Azure PowerShell](../assign-policy-powershell.md).
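+
+For example, with Azure PowerShell an assignment might look like the following
+sketch; the display name, assignment name, and scope are placeholders. Keep in
+mind that definitions that deploy configurations also require a managed
+identity on the assignment, as described in the linked pages.
+
+```powershell
+# Find the definition by its display name and assign it to a resource group.
+$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'My audit policy.' }
+New-AzPolicyAssignment -Name 'my-audit-policy-assignment' `
+    -PolicyDefinition $definition `
+    -Scope '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup'
+```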
+
+### Optional: Piping output from each command to the next
+
+The commands in the GuestConfiguration module accept pipeline input by property name.
+You can use the `|` operator to pipe the output of each command to the next.
+Piping is useful in development environments when you're iterating rapidly, because
+you don't need to copy and paste the output of each command.
+
+To run the sequence using the `|` operator:
+
+```powershell
+# End to end flow piping output of each command to the next
+$ConfigName = 'myConfigName'
+$ResourceGroupName = 'myResourceGroupName'
+$StorageAccountName = 'myStorageAccountName'
+$DisplayName = 'Configure Linux machine per my scenario.'
+$Description = 'Details about my policy.'
+New-GuestConfigurationPackage -Name $ConfigName -Configuration ./$ConfigName.mof -Path ./package/ -Type AuditAndSet -Force |
+Publish-GuestConfigurationPackage -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName -Force |
+New-GuestConfigurationPolicy -PolicyId 'My GUID' -DisplayName $DisplayName -Description $Description -Path './policies' -Platform 'Linux' -Version 1.0.0 -Mode 'ApplyAndAutoCorrect' |
+Publish-GuestConfigurationPolicy
+```
+
+## Policy lifecycle
+
+If you would like to release an update to the policy definition, make the change for both the guest
+configuration package and the Azure Policy definition details.
+
+> [!NOTE]
+> The `version` property of the guest configuration assignment only affects packages that
+> are hosted by Microsoft. The best practice for versioning custom content is to include
+> the version in the file name.
+
+First, when running `New-GuestConfigurationPackage`, specify a name for the package that makes it
+unique from previous versions. You can include a version number in the name such as
+`PackageName_1.0.0`. The number in this example is only used to make the package unique, not to
+specify that the package should be considered newer or older than other packages.
+
+Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet
+as described in the following list.
+
+- **Version**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version
+ number greater than what is currently published.
+- **contentUri**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a URI
+ to the location of the package. Including a package version in the file name will ensure the value
+ of this property changes in each release.
+- **contentHash**: This property is updated automatically by the `New-GuestConfigurationPolicy`
+ cmdlet. It's a hash value of the package created by `New-GuestConfigurationPackage`. The property
+ must be correct for the `.zip` file you publish. If only the **contentUri** property is updated,
+ the Extension won't accept the content package.
+
+The easiest way to release an updated package is to repeat the process described in this article and
+provide an updated version number. That process guarantees all properties have been correctly
+updated.
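+
+Putting it together, an update release might look like the following sketch,
+which mirrors the piping example above; the names are placeholders and version
+1.1.0 is assumed to supersede a previously published 1.0.0.
+
+```powershell
+# Re-create and re-publish the package, then update the definition with a higher version and the same PolicyId.
+New-GuestConfigurationPackage -Name 'MyConfig_1.1.0' -Configuration ./MyConfig.mof -Path ./package/ -Type AuditAndSet -Force |
+Publish-GuestConfigurationPackage -ResourceGroupName 'myResourceGroupName' -StorageAccountName 'myStorageAccountName' -Force |
+New-GuestConfigurationPolicy -PolicyId '<same GUID as version 1.0.0>' -DisplayName 'My audit policy.' -Description 'Details about my policy.' -Path './policies' -Platform 'Windows' -Version 1.1.0 -Mode 'ApplyAndAutoCorrect' |
+Publish-GuestConfigurationPolicy
+```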
+
+## Next steps
+
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](./determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration Create Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-group-policy.md
Title: How to create Guest Configuration policy definitions from Group Policy baseline for Windows
-description: Learn how to convert Group Policy from the Windows Server 2019 Security Baseline into a policy definition.
+ Title: How to create a guest configuration policy from Group Policy
+description: Learn how to convert Group Policy into a policy definition.
Last updated 03/31/2021
-# How to create Guest Configuration policy definitions from Group Policy baseline for Windows
-
-Before creating custom policy definitions, it's a good idea to read the conceptual overview
-information at [Azure Policy Guest Configuration](../concepts/guest-configuration.md). To learn
-about creating custom Guest Configuration policy definitions for Linux, see
-[How to create Guest Configuration policies for Linux](./guest-configuration-create-linux.md). To
-learn about creating custom Guest Configuration policy definitions for Windows, see
-[How to create Guest Configuration policies for Windows](./guest-configuration-create.md).
-
-When auditing Windows, Guest Configuration uses a
-[Desired State Configuration](/powershell/scripting/dsc/overview/overview) (DSC) resource module to
-create the configuration file. The DSC configuration defines the condition that the machine should
-be in. If the evaluation of the configuration is **non-compliant**, the policy effect
-*auditIfNotExists* is triggered.
-[Azure Policy Guest Configuration](../concepts/guest-configuration.md) only audits settings inside
-machines.
+# How to create a guest configuration policy from Group Policy
+
+Before you begin, it's a good idea to read the overview page for
+[guest configuration](../concepts/guest-configuration.md),
+and the details about guest configuration policy effects in
+[How to configure remediation options for guest configuration](../concepts/guest-configuration-policy-effects.md).
> [!IMPORTANT]
-> The Guest Configuration extension is required to perform audits in Azure virtual machines. To
-> deploy the extension at scale across all Windows machines, assign the following policy
-> definitions:
-> - [Deploy prerequisites to enable Guest Configuration Policy on Windows VMs.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ecd903d-91e7-4726-83d3-a229d7f2e293)
+> Converting Group Policy to guest configuration is **in preview**. Not all types
+> of Group Policy settings have corresponding DSC resources available for
+> PowerShell 7.
+>
+> All of the commands on this page must be run in **Windows PowerShell 5.1**.
+> The resulting output MOF files should then be packaged using the
+> `GuestConfiguration` module in PowerShell 7.1.3 or later.
+>
+> Custom guest configuration policy definitions using **AuditIfNotExists** are
+> Generally Available, but definitions using **DeployIfNotExists** with guest
+> configuration are **in preview**.
+>
+> The guest configuration extension is required for Azure virtual machines. To
+> deploy the extension at scale across all machines, assign the following policy
+> initiative: `Deploy prerequisites to enable guest configuration policies on
+> virtual machines`
> > Don't use secrets or confidential information in custom content packages.
-The DSC community has published the
-[BaselineManagement module](https://github.com/microsoft/BaselineManagement) to convert exported
-Group Policy templates to DSC format. Together with the GuestConfiguration cmdlet, the
-BaselineManagement module creates Azure Policy Guest Configuration package for Windows from Group
-Policy content. For details about using the BaselineManagement module, see the article
-[Quickstart: Convert Group Policy into DSC](/powershell/scripting/dsc/quickstarts/gpo-quickstart).
+The open source community has published the module
+[BaselineManagement](https://github.com/microsoft/BaselineManagement)
+to convert exported
+[Group Policy](/support/windows-server/group-policy/group-policy-overview)
+templates to PowerShell DSC format. Together with the `GuestConfiguration`
+module, you can create a guest configuration package for Windows
+from exported Group Policy Objects. The guest configuration package can then
+be used to audit or configure servers using local policy, even if they aren't
+domain joined.
-In this guide, we walk through the process to create an Azure Policy Guest Configuration package
-from a Group Policy Object (GPO). While the walkthrough outlines conversion of the Windows Server
-2019 Security Baseline, the same process can be applied to other GPOs.
+In this guide, we walk through the process to create an Azure Policy guest
+configuration package from a Group Policy Object (GPO).
-## Download Windows Server 2019 Security Baseline and install related PowerShell modules
+## Download required PowerShell modules
-To install the **DSC**, **GuestConfiguration**, **Baseline Management**, and related Azure modules
-in PowerShell:
+To install all required modules in PowerShell:
-1. From a PowerShell prompt, run the following command:
+```powershell
+Install-Module guestconfiguration
+Install-Module baselinemanagement
+```
- ```azurepowershell-interactive
- # Install the BaselineManagement module, Guest Configuration DSC resource module, and relevant Azure modules from PowerShell Gallery
- Install-Module az.resources, az.policyinsights, az.storage, guestconfiguration, gpregistrypolicyparser, securitypolicydsc, auditpolicydsc, baselinemanagement -scope currentuser -Repository psgallery -AllowClobber
- ```
+To back up Group Policy Objects (GPOs) from an Active Directory environment,
+you need the PowerShell commands available in the Remote Server Administration
+Tools (RSAT).
-1. Create a directory for and download the Windows Server 2019 Security Baseline from the Windows
- Security Compliance toolkit.
+To enable RSAT for Group Policy Management Console on Windows 10:
- ```azurepowershell-interactive
- # Download the 2019 Baseline files from https://docs.microsoft.com/windows/security/threat-protection/security-compliance-toolkit-10
- New-Item -Path 'C:\git\policyfiles\downloads' -Type Directory
- Invoke-WebRequest -Uri 'https://download.microsoft.com/download/8/5/C/85C25433-A1B0-4FFA-9429-7E023E7DA8D8/Windows%2010%20Version%201909%20and%20Windows%20Server%20Version%201909%20Security%20Baseline.zip' -Out C:\git\policyfiles\downloads\Server2019Baseline.zip
- ```
+```powershell
+Add-WindowsCapability -Online -Name 'Rsat.GroupPolicy.Management.Tools~~~~0.0.1.0'
+Add-WindowsCapability -Online -Name 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~0.0.1.0'
+```
-1. Unblock and expand the downloaded Server 2019 Baseline.
+## Export and convert Group Policy to guest configuration
- ```azurepowershell-interactive
- Unblock-File C:\git\policyfiles\downloads\Server2019Baseline.zip
- Expand-Archive -Path C:\git\policyfiles\downloads\Server2019Baseline.zip -DestinationPath C:\git\policyfiles\downloads\
- ```
+There are three options to export Group Policy files and convert them to DSC to
+use in guest configuration.
-1. Validate the Server 2019 Baseline contents using **MapGuidsToGpoNames.ps1**.
+- Export a single Group Policy Object
+- Export the merged Group Policy Objects for an OU
+- Export the merged Group Policy Objects from within a machine
- ```azurepowershell-interactive
- # Show content details of downloaded GPOs
- C:\git\policyfiles\downloads\Scripts\Tools\MapGuidsToGpoNames.ps1 -rootdir C:\git\policyfiles\downloads\GPOs\ -Verbose
- ```
+### Single Group Policy Object
-## Convert from Group Policy to Azure Policy Guest Configuration
+Identify the GUID of the Group Policy Object to export by using the commands in
+the `GroupPolicy` module. In a large environment, consider piping the output
+to `Where-Object` and filtering by name.
-Next, we convert the downloaded Server 2019 Baseline into a Guest Configuration Package using the
-Guest Configuration and Baseline Management modules.
+Run each of the following in a **Windows PowerShell 5.1** environment on a
+**domain joined** Windows machine:
-1. Convert the Group Policy to Desired State Configuration using the Baseline Management Module.
+```powershell
+# List all Group Policy Objects
+Get-GPO -all
+```
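+
+If the list is long, you can filter it by display name; a sketch with an
+illustrative name pattern:
+
+```powershell
+# Filter the list of GPOs by display name.
+Get-GPO -All | Where-Object { $_.DisplayName -like '*Baseline*' }
+```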
- ```azurepowershell-interactive
- ConvertFrom-GPO -Path 'C:\git\policyfiles\downloads\GPOs\{3657C7A2-3FF3-4C21-9439-8FDF549F1D68}\' -OutputPath 'C:\git\policyfiles\' -OutputConfigurationScript -Verbose
- ```
+Back up the Group Policy Object to files. The command also accepts a "Name" parameter,
+but using the GUID of the policy is less error prone.
-1. Rename, reformat, and run the converted scripts before creating a policy content package.
+```powershell
+Backup-GPO -Guid 'f0cf623e-ae29-4768-9bb4-406cce1f3cff' -Path C:\gpobackup\
+```
- ```azurepowershell-interactive
- Rename-Item -Path C:\git\policyfiles\DSCFromGPO.ps1 -NewName C:\git\policyfiles\Server2019Baseline.ps1
- (Get-Content -Path C:\git\policyfiles\Server2019Baseline.ps1).Replace('DSCFromGPO', 'Server2019Baseline') | Set-Content -Path C:\git\policyfiles\Server2019Baseline.ps1
- (Get-Content -Path C:\git\policyfiles\Server2019Baseline.ps1).Replace('PSDesiredStateConfiguration', 'PSDscResources') | Set-Content -Path C:\git\policyfiles\Server2019Baseline.ps1
- C:\git\policyfiles\Server2019Baseline.ps1
- ```
-1. Create an Azure Policy Guest Configuration content package.
+The output of the command returns the details of the files.
- ```azurepowershell-interactive
- New-GuestConfigurationPackage -Name Server2019Baseline -Configuration c:\git\policyfiles\localhost.mof -Verbose
- ```
+
+```
+ConfigurationScript                   Configuration                   Name
+-------------------                   -------------                   ----
+C:\convertfromgpo\myCustomPolicy1.ps1 C:\convertfromgpo\localhost.mof myCustomPolicy1
+```
-## Create Azure Policy Guest Configuration
+Review the exported PowerShell script to make sure all settings have been
+populated and no error messages were written. Create a new configuration package
+using the MOF file by following the guidance on the page
+[How to create custom guest configuration package artifacts](./guest-configuration-create.md).
+The steps to create and test the guest configuration package should be run in
+a PowerShell 7 environment.
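+
+For example, a minimal packaging sketch run in PowerShell 7, assuming the
+converted MOF from the earlier output (paths and the package name are
+illustrative):
+
+```powershell
+# Create a guest configuration package from the converted MOF file.
+New-GuestConfigurationPackage -Name 'myCustomPolicy1' -Configuration 'C:\convertfromgpo\localhost.mof' -Path 'C:\convertfromgpo\package\' -Type AuditAndSet -Force
+```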
-1. The next step is to publish the file to Azure Blob Storage. The command
- `Publish-GuestConfigurationPackage` requires the `Az.Storage` module.
+### Merged Group Policy Objects for an OU
- ```azurepowershell-interactive
- Publish-GuestConfigurationPackage -Path ./AuditBitlocker.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName
- ```
+Export the merged combination of Group Policy Objects (similar to a resultant
+set of policy) at a specified Organizational Unit. The merge operation takes
+into account link state, enforcement, and access, but not WMI filters.
-1. Once a Guest Configuration custom policy package has been created and uploaded, create the Guest
- Configuration policy definition. Use the `New-GuestConfigurationPolicy` cmdlet to create the
- Guest Configuration.
+```powershell
+Merge-GPOsFromOU -Path C:\mergedfromou\ -OUDistinguishedName 'OU=mySubOU,OU=myOU,DC=mydomain,DC=local' -OutputConfigurationScript
+```
- ```azurepowershell-interactive
- $NewGuestConfigurationPolicySplat = @{
- ContentUri = $Uri
- DisplayName = 'Server 2019 Configuration Baseline'
- Description 'Validation of using a completely custom baseline configuration for Windows VMs'
- Path = 'C:\git\policyfiles\policy'
- Platform = Windows
- }
- New-GuestConfigurationPolicy @NewGuestConfigurationPolicySplat
- ```
+The output of the command returns the details of the files.
-1. Publish the policy definitions using the `Publish-GuestConfigurationPolicy` cmdlet. The cmdlet
- only has the **Path** parameter that points to the location of the JSON files created by
- `New-GuestConfigurationPolicy`. To run the Publish command, you need access to create policy
- definitions in Azure. The specific authorization requirements are documented in the
- [Azure Policy Overview](../overview.md#getting-started) page. The best built-in role is
- **Resource Policy Contributor**.
+```powershell
+Configuration                                Name    ConfigurationScript
+-------------                                ----    -------------------
+C:\mergedfromou\mySubOU\output\localhost.mof mySubOU C:\mergedfromou\mySubOU\output\mySubOU.ps1
+```
- ```azurepowershell-interactive
- Publish-GuestConfigurationPolicy -Path C:\git\policyfiles\policy\ -Verbose
- ```
+### Merged Group Policy Objects from within a machine
-## Assign Guest Configuration policy definition
+You can also merge the policies applied to a specific machine by running the
+`Merge-GPOs` command from Windows PowerShell. WMI Filters are only evaluated
+if you merge from within a machine.
-With the policy created in Azure, the last step is to assign the initiative. See how to assign the
-initiative with [Portal](../assign-policy-portal.md), [Azure CLI](../assign-policy-azurecli.md), and
-[Azure PowerShell](../assign-policy-powershell.md).
+```powershell
+Merge-GPOs -OutputConfigurationScript -Path c:\mergedgpo
+```
-> [!IMPORTANT]
-> Guest Configuration policy definitions must **always** be assigned using the initiative that
-> combines the _AuditIfNotExists_ and _DeployIfNotExists_ policies. If only the _AuditIfNotExists_
-> policy is assigned, the prerequisites aren't deployed and the policy always shows that '0' servers
-> are compliant.
-
-Assigning a policy definition with _DeployIfNotExists_ effect requires an additional level of
-access. To grant the least privilege, you can create a custom role definition that extends
-**Resource Policy Contributor**. The following example creates a role named **Resource Policy
-Contributor DINE** with the additional permission _Microsoft.Authorization/roleAssignments/write_.
-
- ```azurepowershell-interactive
- $subscriptionid = '00000000-0000-0000-0000-000000000000'
- $role = Get-AzRoleDefinition "Resource Policy Contributor"
- $role.Id = $null
- $role.Name = "Resource Policy Contributor DINE"
- $role.Description = "Can assign Policies that require remediation."
- $role.Actions.Clear()
- $role.Actions.Add("Microsoft.Authorization/roleAssignments/write")
- $role.AssignableScopes.Clear()
- $role.AssignableScopes.Add("/subscriptions/$subscriptionid")
- New-AzRoleDefinition -Role $role
- ```
+The output of the command returns the details of the files.
+
+```powershell
+Configuration              Name                  ConfigurationScript                    PolicyDetails
+-------------              ----                  -------------------                    -------------
+C:\mergedgpo\localhost.mof MergedGroupPolicy_ws1 C:\mergedgpo\MergedGroupPolicy_ws1.ps1 {@{Name=myEnforcedPolicy; Ap...
+```
+
+## OPTIONAL: Download sample Group Policy files for testing
+
+If you aren't ready to export Group Policy files from an Active Directory environment, you can
+download the Windows Server security baseline from the Security Compliance Toolkit.
+
+Create a directory for and download the Windows Server 2019 Security Baseline from the Windows
+Security Compliance toolkit.
+
+```azurepowershell-interactive
+# Download the 2019 Baseline files from https://docs.microsoft.com/windows/security/threat-protection/security-compliance-toolkit-10
+New-Item -Path 'C:\git\policyfiles\downloads' -Type Directory
+Invoke-WebRequest -Uri 'https://download.microsoft.com/download/8/5/C/85C25433-A1B0-4FFA-9429-7E023E7DA8D8/Windows%2010%20Version%201909%20and%20Windows%20Server%20Version%201909%20Security%20Baseline.zip' -Out C:\git\policyfiles\downloads\Server2019Baseline.zip
+```
+
+Unblock and expand the downloaded Server 2019 Baseline.
+
+```azurepowershell-interactive
+Unblock-File C:\git\policyfiles\downloads\Server2019Baseline.zip
+Expand-Archive -Path C:\git\policyfiles\downloads\Server2019Baseline.zip -DestinationPath C:\git\policyfiles\downloads\
+```
+
+Validate the Server 2019 Baseline contents using **MapGuidsToGpoNames.ps1**.
+
+```azurepowershell-interactive
+# Show content details of downloaded GPOs
+C:\git\policyfiles\downloads\Scripts\Tools\MapGuidsToGpoNames.ps1 -rootdir C:\git\policyfiles\downloads\GPOs\ -Verbose
+```
## Next steps
-- Learn about auditing VMs with [Guest Configuration](../concepts/guest-configuration.md).
-- Understand how to [programmatically create policies](./programmatically-create.md).
-- Learn how to [get compliance data](./get-compliance-data.md).
+- [Create a package artifact](./guest-configuration-create.md)
+ for guest configuration.
+- [Test the package artifact](./guest-configuration-create-test.md)
+ from your development environment.
+- [Publish the package artifact](./guest-configuration-create-publish.md)
+ so it is accessible to your machines.
+- Use the `GuestConfiguration` module to
+ [create an Azure Policy definition](./guest-configuration-create-definition.md)
+ for at-scale management of your environment.
+- [Assign your custom policy definition](../assign-policy-portal.md) using
+ Azure portal.
+- Learn how to view
+ [compliance details for guest configuration](./determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.
governance Guest Configuration Create Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-linux.md
- Title: How to create Guest Configuration policies for Linux
-description: Learn how to create an Azure Policy Guest Configuration policy for Linux.
Previously updated : 03/31/2021---
-# How to create Guest Configuration policies for Linux
-
-Before creating custom policies, read the overview information at
-[Azure Policy Guest Configuration](../concepts/guest-configuration.md).
-
-To learn about creating Guest Configuration policies for Windows, see the page
-[How to create Guest Configuration policies for Windows](./guest-configuration-create.md)
-
-When auditing Linux, Guest Configuration uses [Chef InSpec](https://community.chef.io/tools/chef-inspec). The InSpec
-profile defines the condition that the machine should be in. If the evaluation of the configuration
-fails, the policy effect **auditIfNotExists** is triggered and the machine is considered
-**non-compliant**.
-
-[Azure Policy Guest Configuration](../concepts/guest-configuration.md) can only be used to audit
-settings inside machines. Remediation of settings inside machines isn't yet available.
-
-Use the following actions to create your own configuration for validating the state of an Azure or
-non-Azure machine.
-
-> [!IMPORTANT]
-> Custom policy definitions with Guest Configuration in the Azure Government and
-> Azure China 21Vianet environments is a Preview feature.
->
-> The Guest Configuration extension is required to perform audits in Azure virtual machines. To
-> deploy the extension at scale across all Linux machines, assign the following policy definition:
-> `Deploy prerequisites to enable Guest Configuration Policy on Linux VMs`
->
-> Don't use secrets or confidential information in custom content packages.
-
-## Install the PowerShell module
-
-The Guest Configuration module automates the process of creating custom content including:
--- Creating a Guest Configuration content artifact (.zip)-- Automated testing of the artifact-- Creating a policy definition-- Publishing the policy-
-The module can be installed on a machine running Windows, macOS, or Linux with PowerShell 6.2 or
-later running locally, or with [Azure Cloud Shell](https://shell.azure.com), or with the
-[Azure PowerShell Core Docker image](https://hub.docker.com/r/azuresdk/azure-powershell-core).
-
-> [!NOTE]
-> Compilation of configurations isn't supported on Linux.
-
-### Base requirements
-
-Operating Systems where the module can be installed:
--- Linux-- macOS-- Windows-
-> [!NOTE]
-> The cmdlet `Test-GuestConfigurationPackage` requires OpenSSL version 1.0, due to a dependency on
-> OMI. This causes an error on any environment with OpenSSL 1.1 or later.
->
-> Running the cmdlet `Test-GuestConfigurationPackage` is only supported on Windows
-> for Guest Configuration module version 2.1.0.
-
-The Guest Configuration resource module requires the following software:
--- PowerShell 6.2 or later. If it isn't yet installed, follow
- [these instructions](/powershell/scripting/install/installing-powershell).
-- Azure PowerShell 1.5.0 or higher. If it isn't yet installed, follow
- [these instructions](/powershell/azure/install-az-ps).
- - Only the Az modules 'Az.Accounts' and 'Az.Resources' are required.
-
-### Install the module
-
-To install the **GuestConfiguration** module in PowerShell:
-
-1. From a PowerShell prompt, run the following command:
-
- ```azurepowershell-interactive
- # Install the Guest Configuration DSC resource module from PowerShell Gallery
- Install-Module -Name GuestConfiguration
- ```
-
-1. Validate that the module has been imported:
-
- ```azurepowershell-interactive
- # Get a list of commands for the imported GuestConfiguration module
- Get-Command -Module 'GuestConfiguration'
- ```
-
-## Guest Configuration artifacts and policy for Linux
-
-Even in Linux environments, Guest Configuration uses Desired State Configuration as a language
-abstraction. The implementation is based in native code (C++) so it doesn't require loading
-PowerShell. However, it does require a configuration MOF describing details about the environment.
-DSC is acting as a wrapper for InSpec to standardize how it's executed, how parameters are provided,
-and how output is returned to the service. Little knowledge of DSC is required when working with
-custom InSpec content.
-
-#### Configuration requirements
-
-The name of the custom configuration must be consistent everywhere. The name of the .zip file for
-the content package, the configuration name in the MOF file, and the guest assignment name in the
-Azure Resource Manager template (ARM template), must be the same.
-
-PowerShell cmdlets assist in creating the package. No root level folder or version folder is
-required. The package format must be a .zip file. and cannot exceed a total size of 100 MB when
-uncompressed.
-
-### Custom Guest Configuration configuration on Linux
-
-Guest Configuration on Linux uses the `ChefInSpecResource` resource to provide the engine with the
-name of the [InSpec profile](https://docs.chef.io/inspec/profiles/). **Name** is the only
-required resource property. Create a YAML file and a Ruby script file, as detailed below.
-
-First, create the YAML file used by InSpec. The file provides basic information about the
-environment. An example is given below:
-
-```yaml
-name: linux-path
Title: Linux path
-maintainer: Test
-summary: Test profile
-license: MIT
-version: 1.0.0
-supports:
- - os-family: unix
-```
-
-Save this file with name `inspec.yml` to a folder named `linux-path` in your project directory.
-
-Next, create the Ruby file with the InSpec language abstraction used to audit the machine.
-
-```ruby
-describe file('/tmp') do
- it { should exist }
-end
-```
-
-Save this file with name `linux-path.rb` in a new folder named `controls` inside the `linux-path`
-directory.
-
-Finally, create a configuration, import the **PSDesiredStateConfiguration** resource module, and
-compile the configuration.
-
-```powershell
-# import PSDesiredStateConfiguration module
-import-module PSDesiredStateConfiguration
-
-# Define the configuration and import GuestConfiguration
-Configuration AuditFilePathExists
-{
- Import-DscResource -ModuleName 'GuestConfiguration'
-
- Node AuditFilePathExists
- {
- ChefInSpecResource 'Audit Linux path exists'
- {
- Name = 'linux-path'
- }
- }
-}
-
-# Compile the configuration to create the MOF files
-AuditFilePathExists -out ./Config
-```
-
-Save this file with name `config.ps1` in the project folder. Run it in PowerShell by executing
-`./config.ps1` in the terminal. A new MOF file is be created.
-
-The `Node AuditFilePathExists` command isn't technically required but it produces a file named
-`AuditFilePathExists.mof` rather than the default, `localhost.mof`. Having the .MOF file name follow
-the configuration makes it easy to organize many files when operating at scale.
-
-You should now have a project structure as below:
-
-```file
-/ AuditFilePathExists
- / Config
- AuditFilePathExists.mof
- / linux-path
- inspec.yml
- / controls
- linux-path.rb
-```
-
-The supporting files must be packaged together. The completed package is used by Guest Configuration
-to create the Azure Policy definitions.
-
-The `New-GuestConfigurationPackage` cmdlet creates the package. Parameters of the
-`New-GuestConfigurationPackage` cmdlet when creating Linux content:
--- **Name**: Guest Configuration package name.-- **Configuration**: Compiled configuration document full path.-- **Path**: Output folder path. This parameter is optional. If not specified, the package is created
- in current directory.
-- **ChefInspecProfilePath**: Full path to InSpec profile. This parameter is supported only when
- creating content to audit Linux.
-
-Run the following command to create a package using the configuration given in the previous step:
-
-```azurepowershell-interactive
-New-GuestConfigurationPackage `
- -Name 'AuditFilePathExists' `
- -Configuration './Config/AuditFilePathExists.mof' `
- -ChefInSpecProfilePath './'
-```
-
-After creating the Configuration package but before publishing it to Azure, you can test the package
-from your workstation or continuous integration and continuous deployment (CI/CD) environment. The
-GuestConfiguration cmdlet `Test-GuestConfigurationPackage` includes the same agent in your
-development environment as is used inside Azure machines. Using this solution, you can perform
-integration testing locally before releasing to billed cloud environments.
-
-Since the agent is actually evaluating the local environment, in most cases you need to run the
-Test- cmdlet on the same OS platform as you plan to audit.
-
-Parameters of the `Test-GuestConfigurationPackage` cmdlet:
--- **Name**: Guest Configuration policy name.-- **Parameter**: Policy parameters provided in hashtable format.-- **Path**: Full path of the Guest Configuration package.-
-Run the following command to test the package created by the previous step:
-
-```azurepowershell-interactive
-Test-GuestConfigurationPackage `
- -Path ./AuditFilePathExists/AuditFilePathExists.zip
-```
-
-The cmdlet also supports input from the PowerShell pipeline. Pipe the output of
-`New-GuestConfigurationPackage` cmdlet to the `Test-GuestConfigurationPackage` cmdlet.
-
-```azurepowershell-interactive
-New-GuestConfigurationPackage -Name AuditFilePathExists -Configuration ./Config/AuditFilePathExists.mof -ChefInspecProfilePath './' | Test-GuestConfigurationPackage
-```
-
-The next step is to publish the file to Azure Blob Storage. The command `Publish-GuestConfigurationPackage` requires the `Az.Storage`
-module.
-
-Parameters of the `Publish-GuestConfigurationPackage` cmdlet:
--- **Path**: Location of the package to be published-- **ResourceGroupName**: Name of the resource group where the storage account is located-- **StorageAccountName**: Name of the storage account where the package should be published-- **StorageContainerName**: (default: _guestconfiguration_) Name of the storage container in the
- storage account
-- **Force**: Overwrite existing package in the storage account with the same name-
-The following example publishes the package to a storage container name 'guestconfiguration'.
-
-```azurepowershell-interactive
-Publish-GuestConfigurationPackage -Path ./AuditFilePathExists/AuditFilePathExists.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName
-```
-
-Once a Guest Configuration custom policy package has been created and uploaded, create the Guest
-Configuration policy definition. The `New-GuestConfigurationPolicy` cmdlet takes a custom policy
-package and creates a policy definition.
-
-Parameters of the `New-GuestConfigurationPolicy` cmdlet:
--- **ContentUri**: Public HTTP(s) URI of Guest Configuration content package.-- **DisplayName**: Policy display name.-- **Description**: Policy description.-- **Parameter**: Policy parameters provided in hashtable format.-- **Version**: Policy version.-- **Path**: Destination path where policy definitions are created.-- **Platform**: Target platform (Windows/Linux) for Guest Configuration policy and content package.-- **Tag** adds one or more tag filters to the policy definition-- **Category** sets the category metadata field in the policy definition-
-The following example creates the policy definitions in a specified path from a custom policy
-package:
-
-```azurepowershell-interactive
-New-GuestConfigurationPolicy `
- -ContentUri 'https://storageaccountname.blob.core.windows.net/packages/AuditFilePathExists.zip?st=2019-07-01T00%3A00%3A00Z&se=2024-07-01T00%3A00%3A00Z&sp=rl&sv=2018-03-28&sr=b&sig=JdUf4nOCo8fvuflOoX%2FnGo4sXqVfP5BYXHzTl3%2BovJo%3D' `
- -DisplayName 'Audit Linux file path.' `
- -Description 'Audit that a file path exists on a Linux machine.' `
- -Path './policies' `
- -Platform 'Linux' `
- -Version 1.0.0 `
- -Verbose
-```
-
-The following files are created by `New-GuestConfigurationPolicy`:
-- **auditIfNotExists.json**
-
-The cmdlet output returns an object containing the initiative display name and path of the policy
-files.
-
-Finally, publish the policy definitions using the `Publish-GuestConfigurationPolicy` cmdlet. The
-cmdlet only has the **Path** parameter that points to the location of the JSON files created by
-`New-GuestConfigurationPolicy`.
-
-To run the Publish command, you need access to create Policies in Azure. The specific authorization
-requirements are documented in the [Azure Policy Overview](../overview.md) page. The best built-in
-role is **Resource Policy Contributor**.
-
-```azurepowershell-interactive
-Publish-GuestConfigurationPolicy `
- -Path './policies'
-```
-
-The `Publish-GuestConfigurationPolicy` cmdlet accepts the path from the PowerShell pipeline. This
-feature means you can create the policy files and publish them in a single set of piped commands.
-
-```azurepowershell-interactive
-New-GuestConfigurationPolicy `
- -ContentUri 'https://storageaccountname.blob.core.windows.net/packages/AuditFilePathExists.zip?st=2019-07-01T00%3A00%3A00Z&se=2024-07-01T00%3A00%3A00Z&sp=rl&sv=2018-03-28&sr=b&sig=JdUf4nOCo8fvuflOoX%2FnGo4sXqVfP5BYXHzTl3%2BovJo%3D' `
- -DisplayName 'Audit Linux file path.' `
- -Description 'Audit that a file path exists on a Linux machine.' `
- -Path './policies' `
-| Publish-GuestConfigurationPolicy
-```
-
-With the policy created in Azure, the last step is to assign the definition. See how to assign the
-definition with [Portal](../assign-policy-portal.md), [Azure CLI](../assign-policy-azurecli.md), and
-[Azure PowerShell](../assign-policy-powershell.md).
-
-### Using parameters in custom Guest Configuration policies
-
-Guest Configuration supports overriding properties of a Configuration at run time. This feature
-means that the values in the MOF file in the package don't have to be considered static. The
-override values are provided through Azure Policy and don't change how the Configurations are
-authored or compiled.
-
-With InSpec, parameters are typically handled as input either at runtime or as code using
-attributes. Guest Configuration obfuscates this process so input can be provided when policy is
-assigned. An attributes file is automatically created within the machine. You don't need to create
-and add a file in your project. There are two steps to adding parameters to your Linux audit
-project.
-
-Define the input in the Ruby file where you script what to audit on the machine. An example is given
-below.
-
-```ruby
-attr_path = attribute('path', description: 'The file path to validate.')
-
-describe file(attr_path) do
- it { should exist }
-end
-```
-
-Add the property **AttributesYmlContent** in your configuration with any string as the value. The
-Guest Configuration agent automatically creates the YAML file used by InSpec to store attributes.
-See the following example.
-
-```powershell
-Configuration AuditFilePathExists
-{
- Import-DscResource -ModuleName 'GuestConfiguration'
-
- Node AuditFilePathExists
- {
- ChefInSpecResource 'Audit Linux path exists'
- {
- Name = 'linux-path'
- AttributesYmlContent = "fromParameter"
- }
- }
-}
-```
-
-Recompile the MOF file using the examples given in this document.
-
-The cmdlets `New-GuestConfigurationPolicy` and `Test-GuestConfigurationPolicyPackage` include a
-parameter named **Parameter**. This parameter takes a hashtable including all details about each
-parameter and automatically creates all the required sections of the files used to create each Azure
-Policy definition.
-
-The following example creates a policy definition to audit a file path, where the user provides the
-path at the time of policy assignment.
-
-```azurepowershell-interactive
-$PolicyParameterInfo = @(
- @{
- Name = 'FilePath' # Policy parameter name (mandatory)
- DisplayName = 'File path.' # Policy parameter display name (mandatory)
- Description = 'File path to be audited.' # Policy parameter description (optional)
- ResourceType = 'ChefInSpecResource' # Configuration resource type (mandatory)
- ResourceId = 'Audit Linux path exists' # Configuration resource property name (mandatory)
- ResourcePropertyName = 'AttributesYmlContent' # Configuration resource property name (mandatory)
- DefaultValue = '/tmp' # Policy parameter default value (optional)
- }
-)
-
-# The hashtable also supports a property named 'AllowedValues' with an array of strings to limit input to a list
-
-$uri = 'https://storageaccountname.blob.core.windows.net/packages/AuditFilePathExists.zip?st=2019-07-01T00%3A00%3A00Z&se=2024-07-01T00%3A00%3A00Z&sp=rl&sv=2018-03-28&sr=b&sig=JdUf4nOCo8fvuflOoX%2FnGo4sXqVfP5BYXHzTl3%2BovJo%3D'
-
-New-GuestConfigurationPolicy -ContentUri $uri `
- -DisplayName 'Audit Linux file path.' `
- -Description 'Audit that a file path exists on a Linux machine.' `
- -Path './policies' `
- -Parameter $PolicyParameterInfo `
- -Platform 'Linux' `
- -Version 1.0.0
-```
-
-## Policy lifecycle
-
-If you would like to release an update to the policy, make the change for both the Guest
-Configuration package and the Azure Policy definition details.
-
-> [!NOTE]
-> The `version` property of the Guest Configuration assignment only effects packages that
-> are hosted by Microsoft. The best practice for versioning custom content is to include
-> the version in the file name.
-
-First, when running `New-GuestConfigurationPackage`, specify a name for the package that makes it
-unique from previous versions. You can include a version number in the name such as
-`PackageName_1.0.0`. The number in this example is only used to make the package unique, not to
-specify that the package should be considered newer or older than other packages.
-
-Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet following each of
-the following explanations.
--- **Version**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version
- number greater than what is currently published.
-- **contentUri**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a URI
- to the location of the package. Including a package version in the file name will ensure the value
- of this property changes in each release.
-- **contentHash**: This property is updated automatically by the `New-GuestConfigurationPolicy`
- cmdlet. It's a hash value of the package created by `New-GuestConfigurationPackage`. The property
- must be correct for the `.zip` file you publish. If only the **contentUri** property is updated,
- the Extension won't accept the content package.
-
-The easiest way to release an updated package is to repeat the process described in this article and
-provide an updated version number. That process guarantees all properties have been correctly
-updated.
-
-### Filtering Guest Configuration policies using Tags
-
-The policies created by cmdlets in the Guest Configuration module can optionally include a filter
-for tags. The **-Tag** parameter of `New-GuestConfigurationPolicy` supports an array of hashtables
-containing individual tag entires. The tags will be added to the `If` section of the policy
-definition and cannot be modified by a policy assignment.
-
-An example snippet of a policy definition that will filter for tags is given below.
-
-```json
-"if": {
- "allOf" : [
- {
- "allOf": [
- {
- "field": "tags.Owner",
- "equals": "BusinessUnit"
- },
- {
- "field": "tags.Role",
- "equals": "Web"
- }
- ]
- },
- {
- // Original Guest Configuration content will follow
- }
- ]
-}
-```
-
-## Optional: Signing Guest Configuration packages
-
-Guest Configuration custom policies use SHA256 hash to validate the policy package hasn't changed.
-Optionally, customers may also use a certificate to sign packages and force the Guest Configuration
-extension to only allow signed content.
-
-To enable this scenario, there are two steps you need to complete. Run the cmdlet to sign the
-content package, and append a tag to the machines that should require code to be signed.
-
-To use the Signature Validation feature, run the `Protect-GuestConfigurationPackage` cmdlet to sign
-the package before it's published. This cmdlet requires a 'Code Signing' certificate.
-
-Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
--- **Path**: Full path of the Guest Configuration package.-- **PublicGpgKeyPath**: Public GPG key path. This parameter is only supported when signing content
- for Linux.
-
-A good reference for creating GPG keys to use with Linux machines is provided by an article on
-GitHub, [Generating a new GPG key](https://help.github.com/en/articles/generating-a-new-gpg-key).
-
-GuestConfiguration agent expects the certificate public key to be present in the path
-`/usr/local/share/ca-certificates/extra` on Linux machines. For the node to verify signed content,
-install the certificate public key on the machine before applying the custom policy. This process
-can be done using any technique inside the VM, or by using Azure Policy. An example template is
-[provided here](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-push-certificate-windows).
-The Key Vault access policy must allow the Compute resource provider to access certificates during
-deployments. For detailed steps, see
-[Set up Key Vault for virtual machines in Azure Resource Manager](../../../virtual-machines/windows/key-vault-setup.md#use-templates-to-set-up-key-vault).
-
-After your content is published, append a tag with name `GuestConfigPolicyCertificateValidation` and
-value `enabled` to all virtual machines where code signing should be required. See the
-[Tag samples](../samples/built-in-policies.md#tags) for how tags can be delivered at scale using
-Azure Policy. Once this tag is in place, the policy definition generated using the
-`New-GuestConfigurationPolicy` cmdlet enables the requirement through the Guest Configuration
-extension.
-
-## Next steps
-- Learn about auditing VMs with [Guest Configuration](../concepts/guest-configuration.md).
-- Understand how to [programmatically create policies](./programmatically-create.md).
-- Learn how to [get compliance data](./get-compliance-data.md).
governance Guest Configuration Create Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-publish.md
+
+ Title: How to publish custom guest configuration package artifacts
+description: Learn how to publish a guest configuration package file to Azure Blob Storage and get a SAS token for secure access.
Last updated : 07/22/2021++
+# How to publish custom guest configuration package artifacts
+
+Before you begin, it's a good idea to read the overview page for
+[guest configuration](../concepts/guest-configuration.md).
+
+Guest configuration custom .zip packages must be stored in a location that is
+accessible via HTTPS by the managed machines. Examples include GitHub
+repositories, an Azure Repo, Azure storage, or a web server within your private
+datacenter.
+
+Configuration packages that support `Audit` and `AuditAndSet` are published the
+same way. There isn't a need to do anything special during publishing based on
+the package mode.
+
+## Publish a configuration package
+
+The preferred location to store a configuration package is Azure Blob Storage.
+There are no special requirements for the storage account, but it's a good idea
+to host the file in a region near your machines. If you prefer to not make the
+package public, you can include a
+[SAS token](../../../storage/common/storage-sas-overview.md)
+in the URL or implement a
+[service endpoint](../../../storage/common/storage-network-security.md#grant-access-from-a-virtual-network)
+for machines in a private network.
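+
+For example, a read-only SAS token with a multi-year expiration can be created
+with Azure PowerShell; a sketch with placeholder account, container, and blob
+names:
+
+```powershell
+# Generate a read-only SAS URI for the package blob (names are illustrative).
+$key = (Get-AzStorageAccountKey -ResourceGroupName 'myResourceGroupName' -Name 'myStorageAccountName')[0].Value
+$context = New-AzStorageContext -StorageAccountName 'myStorageAccountName' -StorageAccountKey $key
+New-AzStorageBlobSASToken -Context $context `
+    -Container 'guestconfiguration' `
+    -Blob 'MyConfig.zip' `
+    -Permission r `
+    -ExpiryTime (Get-Date).AddYears(3) `
+    -FullUri
+```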
+
+If you don't have a storage account, use the following example to create one.
+
+```powershell
+# Creates a new resource group, storage account, and container
+New-AzResourceGroup -name myResourceGroupN