Updates from: 06/04/2021 03:08:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-user-input.md
Previously updated : 05/25/2021 Last updated : 06/03/2021
zone_pivot_groups: b2c-policy-type
In this article, you collect a new attribute during your sign-up journey in Azure Active Directory B2C (Azure AD B2C). You'll obtain the user's city, configure it as a drop-down list, and define whether providing it is required.
-> [!NOTE]
+> [!IMPORTANT]
> This sample uses the built-in claim 'city'. Instead, you can choose one of the supported [Azure AD B2C built-in attributes](user-profile-attributes.md) or a custom attribute. To use a custom attribute, [enable custom attributes](user-flow-custom-attributes.md). To use a different built-in or custom attribute, replace 'city' with the attribute of your choice, for example the built-in attribute *jobTitle* or a custom attribute like *extension_loyaltyId*.

## Prerequisites
`LocalizedCollections` is an array of `Name` and `Value` pairs. The order of
"ElementType": "ClaimType", "ElementId": "city", "TargetCollection": "Restriction",
- "Override": false,
+ "Override": true,
"Items": [ { "Name": "Berlin",
Open the extensions file of your policy. For example, <em>`SocialAndLocalAccount
<DataType>string</DataType> <UserInputType>DropdownSingleSelect</UserInputType> <Restriction>
- <Enumeration Text="Bellevue" Value="bellevue" SelectByDefault="false" />
- <Enumeration Text="Redmond" Value="redmond" SelectByDefault="false" />
- <Enumeration Text="Kirkland" Value="kirkland" SelectByDefault="false" />
+        <Enumeration Text="Berlin" Value="berlin" />
+        <Enumeration Text="London" Value="london" />
+        <Enumeration Text="Seattle" Value="seattle" />
</Restriction> </ClaimType> <!--
Open the extensions file of your policy. For example, <em>`SocialAndLocalAccount
</BuildingBlocks>--> ```
+Include the [SelectByDefault](claimsschema.md#enumeration) attribute on an `Enumeration` element to make it selected by default when the page first loads. For example, to pre-select the *London* item, change the `Enumeration` element as shown in the following example:
+
+```xml
+<Restriction>
+  <Enumeration Text="Berlin" Value="berlin" />
+  <Enumeration Text="London" Value="london" SelectByDefault="true" />
+  <Enumeration Text="Seattle" Value="seattle" />
+</Restriction>
+```
+ ## Add a claim to the user interface
+
+ The following technical profiles are [self-asserted](self-asserted-technical-profile.md), invoked when a user is expected to provide input:
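As a sketch (the technical profile `Id` here is assumed from the custom policy starter pack; yours may differ), surfacing the *city* claim in a self-asserted technical profile means adding it as an output claim:

```xml
<TechnicalProfile Id="LocalAccountSignUpWithLogonEmail">
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="city" />
  </OutputClaims>
</TechnicalProfile>
```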
To return the city claim back to the relying party application, add an output cl
1. Select the **Run now** button.
1. From the sign-up or sign-in page, select **Sign up now** to sign up. Finish entering the user information, including the city name, and then select **Create**. You should see the contents of the token that was returned.
-You should
- ::: zone-end The sign-up screen should look similar to the following screenshot:
The token sent back to your application includes the `city` claim.
"email": "joe@outlook.com", "given_name": "Emily", "family_name": "Smith",
- "city": "Bellevue"
+ "city": "Berlin"
... } ``` ::: zone pivot="b2c-custom-policy"
+## [Optional] Localize the UI
+
+Azure AD B2C lets you localize your policy for different languages. For more information, [learn about customizing the language experience](language-customization.md). To localize the sign-up page, [set up the list of supported languages](language-customization.md#set-up-the-list-of-supported-languages) and [provide language-specific labels](language-customization.md#provide-language-specific-labels).
+
+> [!NOTE]
+> When using the `LocalizedCollection` with the language-specific labels, you can remove the `Restriction` collection from the [claim definition](#define-a-claim).
+
+The following example demonstrates how to provide the list of cities in English and Spanish. It sets the `Restriction` collection of the *city* claim with a list of items for each language. The [SelectByDefault](claimsschema.md#enumeration) attribute makes an item selected by default when the page first loads.
+
+```xml
+<!--
+<BuildingBlocks>-->
+ <Localization Enabled="true">
+ <SupportedLanguages DefaultLanguage="en" MergeBehavior="Append">
+ <SupportedLanguage>en</SupportedLanguage>
+ <SupportedLanguage>es</SupportedLanguage>
+ </SupportedLanguages>
+ <LocalizedResources Id="api.localaccountsignup.en">
+ <LocalizedCollections>
+ <LocalizedCollection ElementType="ClaimType" ElementId="city" TargetCollection="Restriction">
+ <Item Text="Berlin" Value="Berlin"></Item>
+ <Item Text="London" Value="London" SelectByDefault="true"></Item>
+ <Item Text="Seattle" Value="Seattle"></Item>
+ </LocalizedCollection>
+ </LocalizedCollections>
+ </LocalizedResources>
+ <LocalizedResources Id="api.localaccountsignup.es">
+ <LocalizedCollections>
+ <LocalizedCollection ElementType="ClaimType" ElementId="city" TargetCollection="Restriction">
+          <Item Text="Berlín" Value="Berlin"></Item>
+ <Item Text="Londres" Value="London" SelectByDefault="true"></Item>
+ <Item Text="Seattle" Value="Seattle"></Item>
+ </LocalizedCollection>
+ </LocalizedCollections>
+ </LocalizedResources>
+ </Localization>
+<!--
+</BuildingBlocks>-->
+```
+
+After you add the localization element, [edit the content definition with the localization](language-customization.md#edit-the-content-definition-with-the-localization). In the following example, English (en) and Spanish (es) custom localized resources are added to the sign-up page:
+
+```xml
+<!--
+<BuildingBlocks>
+ <ContentDefinitions> -->
+ <ContentDefinition Id="api.localaccountsignup">
+ <LocalizedResourcesReferences MergeBehavior="Prepend">
+ <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.localaccountsignup.en" />
+ <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.localaccountsignup.es" />
+ </LocalizedResourcesReferences>
+ </ContentDefinition>
+ <!--
+ </ContentDefinitions>
+</BuildingBlocks>-->
+```
+ ## Next steps
+
+ - Learn more about the [ClaimsSchema](claimsschema.md) element in the IEF reference.
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 04/21/2021 Last updated : 06/03/2021 zone_pivot_groups: b2c-policy-type
To localize the email, you must send localized strings to Mailjet, or your email
<LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Your code is</LocalizedString> <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sincerely</LocalizedString> </LocalizedStrings>
- </LocalizedStrings>
</LocalizedResources> <LocalizedResources Id="api.custom-email.es"> <LocalizedStrings>
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 06/04/2021 Last updated : 06/02/2021
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## May 2021
+
+### New articles
+
+- [Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy](oauth2-error-technical-profile.md)
+- [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md)
+- [Configure authentication in a sample web application using Azure Active Directory B2C options](enable-authentication-web-application-options.md)
+- [Enable authentication in your own web application using Azure Active Directory B2C](enable-authentication-web-application.md)
+- [Azure Active Directory B2C TLS and cipher suite requirements](https-cipher-tls-requirements.md)
+
+### Updated articles
+
+- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
+- [Mitigate credential attacks in Azure AD B2C](threat-management.md)
+- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
+ ## April 2021
+
+ ### New articles
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/administration-concepts.md
Previously updated : 03/10/2021 Last updated : 06/01/2021
For more information about forest types in Azure AD DS, see [What are resource f
In Azure AD DS, the available performance and features are based on the SKU. You select a SKU when you create the managed domain, and you can switch SKUs as your business requirements change after the managed domain has been deployed. The following table outlines the available SKUs and the differences between them:
-| SKU name | Maximum object count | Backup frequency | Maximum number of outbound forest trusts |
-| --- | --- | --- | --- |
-| Standard | Unlimited | Every 5 days | 0 |
-| Enterprise | Unlimited | Every 3 days | 5 |
-| Premium | Unlimited | Daily | 10 |
+| SKU name | Maximum object count | Backup frequency |
+| --- | --- | --- |
+| Standard | Unlimited | Every 5 days |
+| Enterprise | Unlimited | Every 3 days |
+| Premium | Unlimited | Daily |
Before these Azure AD DS SKUs, a billing model based on the number of objects (user and computer accounts) in the managed domain was used. There is no longer variable pricing based on the number of objects in the managed domain.
The backup frequency determines how often a snapshot of the managed domain is ta
As the SKU level increases, the frequency of those backup snapshots increases. Review your business requirements and recovery point objective (RPO) to determine the required backup frequency for your managed domain. If your business or application requirements change and you need more frequent backups, you can switch to a different SKU.
-### Outbound forest trusts
-
-The previous section detailed one-way outbound forest trusts from a managed domain to an on-premises AD DS environment. The SKU determines the maximum number of forest trusts you can create for a managed domain. Review your business and application requirements to determine how many trusts you actually need, and pick the appropriate Azure AD DS SKU. Again, if your business requirements change and you need to create additional forest trusts, you can switch to a different SKU.
- ## Next steps To get started, [create an Azure AD DS managed domain][create-instance].
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
Previously updated : 07/06/2020 Last updated : 06/01/2021 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
Complete the fields in the *Basics* window of the Azure portal to create a manag
> > There's nothing for you to configure for Azure AD DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones]
-1. The **SKU** determines the performance, backup frequency, and maximum number of forest trusts you can create. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Azure AD DS SKU concepts][concepts-sku].
+1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Azure AD DS SKU concepts][concepts-sku].
For this tutorial, select the *Standard* SKU. 1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains. By default, a managed domain is created as a *User* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment.
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 07/06/2020 Last updated : 06/01/2021 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
Complete the fields in the *Basics* window of the Azure portal to create a manag
> > There's nothing for you to configure for Azure AD DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones]
-1. The **SKU** determines the performance, backup frequency, and maximum number of forest trusts you can create. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Azure AD DS SKU concepts][concepts-sku].
+1. The **SKU** determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see [Azure AD DS SKU concepts][concepts-sku].
For this tutorial, select the *Standard* SKU. 1. A *forest* is a logical construct used by Active Directory Domain Services to group one or more domains. By default, a managed domain is created as a *User* forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment.
active-directory Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/expression-builder.md
+
+ Title: Understand how expression builder works with Application Provisioning in Azure Active Directory
+description: Understand how expression builder works with Application Provisioning in Azure Active Directory.
+ Last updated : 06/02/2021
+# Understand how expression builder in Application Provisioning works
+
+You can use expressions to map attributes. Previously, you had to create these expressions manually and enter them into the expression box. Expression builder is a tool that helps you create expressions.
++
+For reference on building expressions, see [Reference for writing expressions for attribute mappings](functions-for-customizing-application-data.md).
+
+## Finding expression builder
+
+In application provisioning, you use expressions for attribute mappings. To access expression builder, on the attribute-mapping page select **Show advanced options**, and then select **Expression builder**.
++
+## Using expression builder
+
+To use expression builder, select a function and an attribute, and then enter a suffix if needed. Then select **Add expression** to add the expression to the code box. To learn more about the functions available and how to use them, see [Reference for writing expressions for attribute mappings](functions-for-customizing-application-data.md).
+
+Test the expression by providing values and selecting **Test expression**. The output of the expression test will appear in the **View expression output** box.
+
+When you're satisfied with the expression, move it to an attribute mapping. Copy and paste it into the expression box for the attribute mapping you're working on.
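For example (a hypothetical mapping; `userPrincipalName` is a standard Azure AD source attribute, and `ToLower` is one of the documented mapping functions), an expression built this way that lower-cases the user principal name looks like:

```
ToLower([userPrincipalName])
```

After testing it, you would paste this into the expression box of the attribute mapping.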
+
+## Next steps
+
+[Reference for writing expressions for attribute mappings](functions-for-customizing-application-data.md)
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/how-provisioning-works.md
Title: Understand how Application Provisioning works in Azure Active Directory
-description: Understand how Application Provisioning works in Azure Active Directory .
+description: Understand how Application Provisioning works in Azure Active Directory.
-+
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/known-issues.md
Previously updated : 05/11/2021 Last updated : 05/28/2021
When a group is in scope and a member is out of scope, the group will be provisi
If a user and their manager are both in scope for provisioning, the service provisions the user and then updates the manager. However, if on day one the user is in scope and the manager is out of scope, the service provisions the user without the manager reference. When the manager comes into scope, the manager reference isn't updated until you restart provisioning and cause the service to reevaluate all the users again.
+## On-premises application provisioning
+The following is a current list of known limitations with the Azure AD ECMA Connector Host and on-premises application provisioning.
+
+### Applications and directories
+The following applications and directories aren't yet supported.
+
**AD DS (user/group writeback from Azure AD, using the on-premises provisioning preview)**
 - When a user is managed by Azure AD Connect, the source of authority is on-premises Active Directory, so user attributes can't be changed in Azure AD. This preview doesn't change the source of authority for users managed by Azure AD Connect.
 - Attempting to use Azure AD Connect and the on-premises provisioning preview to provision groups or users into AD DS can create a loop, where Azure AD Connect overwrites a change that was made by the provisioning service in the cloud. Microsoft is working on a dedicated capability for group and user writeback. Upvote the UserVoice feedback [here](https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/16887037-enable-user-writeback-to-on-premise-ad-from-azure) to track the status of the preview. Alternatively, you can use [Microsoft Identity Manager](https://docs.microsoft.com/microsoft-identity-manager/microsoft-identity-manager-2016) for user or group writeback from Azure AD to AD.
+
+**Connectors other than SQL**
 - The Azure AD ECMA Connector Host is officially supported only for the generic SQL (GSQL) connector. While it's possible to use other connectors, such as the web services connector or custom ECMA connectors, they are **not yet supported**.
+
+**Azure Active Directory**
 - On-premises provisioning allows you to take a user already in Azure AD and provision them into a third-party application. **It does not allow you to bring a user into the directory from a third-party application.** Customers need to rely on our native HR integrations, Azure AD Connect, MIM, or Microsoft Graph to bring users into the directory.
+
+### Attributes and objects
+The following attributes and objects are not supported:
+ - Multi-valued attributes
+ - Reference attributes (for example, manager).
+ - Groups
+ - Complex anchors (for example, ObjectTypeName+UserName).
+ - On-premises applications are sometimes not federated with Azure AD and require local passwords. The on-premises provisioning preview **does not support provisioning one-time passwords or synchronizing passwords** between Azure AD and third-party applications.
+ - export_password' virtual attribute, SetPassword, and ChangePassword operations are not supported
+
+#### SSL certificates
 - The Azure AD ECMA Connector Host currently requires either that the SSL certificate be trusted by Azure or that the provisioning agent be used. The certificate subject must match the host name on which the Azure AD ECMA Connector Host is installed.
+
+#### Anchor attributes
 - The Azure AD ECMA Connector Host currently doesn't support anchor attribute changes (renames) or target systems that require multiple attributes to form an anchor.
+
+#### Attribute discovery and mapping
+ - The attributes that the target application supports are discovered and surfaced in the Azure portal in Attribute Mappings. Newly added attributes will continue to be discovered. However, if an attribute type has changed (for example, string to boolean), and the attribute is part of the mappings, the type will not change automatically in the Azure portal. Customers will need to go into advanced settings in mappings and manually update the attribute type.
## Next steps

- [How provisioning works](how-provisioning-works.md)
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
+
+ Title: 'Azure AD on-premises application provisioning architecture | Microsoft Docs'
+description: Describes overview of on-premises application provisioning architecture.
+ Last updated : 05/28/2021
+# Azure AD on-premises application provisioning architecture
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+## Overview
+
+The following diagram shows an overview of how on-premises application provisioning works.
+
+![Architecture](./media/on-premises-application-provisioning-architecture/arch-3.png)
+
+There are three primary components to provisioning users into an on-premises application.
+
+- The Provisioning agent provides connectivity between Azure AD and your on-premises environment.
+- The ECMA host converts provisioning requests from Azure AD into requests made to your target application. It serves as a gateway between Azure AD and your application, and lets you import existing ECMA2 connectors used with Microsoft Identity Manager. The ECMA host isn't required if you have built a SCIM application or SCIM gateway.
+- The Azure AD provisioning service serves as the synchronization engine.
+
+>[!NOTE]
+> MIM Sync is not required. However, you can use MIM sync to build and test your ECMA connector before importing it into the ECMA host.
++
+### Firewall requirements
+
+You do not need to open inbound connections to the corporate network. The provisioning agents only use outbound connections to the provisioning service, which means that there is no need to open firewall ports for incoming connections. You also do not need a perimeter (DMZ) network because all connections are outbound and take place over a secure channel.
+
+## Agent best practices
+- Ensure the Azure AD Connect Provisioning Agent Auto Update service is running. It's enabled by default when you install the agent, and auto update is required for Microsoft to support your deployment.
+- Avoid all forms of inline inspection on outbound TLS communications between agents and Azure. This type of inline inspection causes degradation to the communication flow.
+- The agent has to communicate with both Azure and your application, so the placement of the agent affects the latency of those two connections. You can minimize the latency of the end-to-end traffic by optimizing each network connection. Each connection can be optimized by:
+  - Reducing the distance between the two ends of the hop.
+  - Choosing the right network to traverse. For example, traversing a private network rather than the public internet might be faster because of dedicated links.
+
+## Provisioning Agent questions
+**What is the GA version of the Provisioning Agent?**
+
+Refer to [Azure AD Connect Provisioning Agent: Version release history](provisioning-agent-release-version-history.md) for the latest GA version of the Provisioning Agent.
+
+**How do I know the version of my Provisioning Agent?**
+
 1. Sign in to the Windows server where the Provisioning Agent is installed.
 2. Go to **Control Panel** > **Uninstall or Change a Program**.
 3. Look for the version corresponding to the entry **Microsoft Azure AD Connect Provisioning Agent**.
+
+**Does Microsoft automatically push Provisioning Agent updates?**
+
+Yes, Microsoft automatically updates the provisioning agent if the Windows service Microsoft Azure AD Connect Agent Updater is up and running. Ensuring that your agent is up to date is required for support to troubleshoot issues.
+
+**Can I install the Provisioning Agent on the same server running Azure AD Connect or Microsoft Identity Manager (MIM)?**
+
+Yes, you can install the Provisioning Agent on the same server that runs Azure AD Connect or MIM, but neither is required.
+
+**How do I configure the Provisioning Agent to use a proxy server for outbound HTTP communication?**
+
+The Provisioning Agent supports use of an outbound proxy. You can configure it by editing the agent configuration file **C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config**. Add the following lines toward the end of the file, just before the closing `</configuration>` tag. Replace the variables [proxy-server] and [proxy-port] with your proxy server name and port values.
+```xml
+ <system.net>
+ <defaultProxy enabled="true" useDefaultCredentials="true">
+ <proxy
+ usesystemdefault="true"
+ proxyaddress="http://[proxy-server]:[proxy-port]"
+ bypassonlocal="true"
+ />
+ </defaultProxy>
+ </system.net>
+```
+**How do I ensure that the Provisioning Agent can communicate with the Azure AD tenant, and that no firewalls are blocking ports required by the agent?**
+
+Check whether all of the required ports are open.
+
+**How do I uninstall the Provisioning Agent?**
+1. Sign in to the Windows server where the Provisioning Agent is installed.
+2. Go to **Control Panel** > **Uninstall or Change a Program**.
+3. Uninstall the following programs:
+ - Microsoft Azure AD Connect Provisioning Agent
+ - Microsoft Azure AD Connect Agent Updater
+ - Microsoft Azure AD Connect Provisioning Agent Package
++
+## Next Steps
+
+- [App provisioning](user-provisioning.md)
+- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
active-directory On Premises Ecma Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-configure.md
+
+ Title: 'Azure AD ECMA Connector Host configuration'
+description: This article describes how to configure the Azure AD ECMA Connector Host.
+ Last updated : 05/28/2021
+# Configure the Azure AD ECMA Connector Host and the provisioning agent
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+This article provides guidance on how to configure the Azure AD ECMA Connector Host and the provisioning agent once you have successfully installed them.
+
+Installing and configuring the Azure AD ECMA Connector Host is a multi-step process. Use the flow below to guide you through it.
+
+ ![Installation flow](./media/on-premises-ecma-configure/flow-1.png)
+
+For more installation and configuration information, see:
+ - [Prerequisites for the Azure AD ECMA Connector Host](on-premises-ecma-prerequisites.md)
+ - [Installation of the Azure AD ECMA Connector Host](on-premises-ecma-install.md)
+ - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
+## Configure the Azure AD ECMA Connector Host
+Configuring the Azure AD ECMA Connector Host occurs in two parts.
+
+ - **Configure the settings** - configure the port and certificate for the Azure AD ECMA Connector Host to use. This is only done the first time the ECMA Connector Host is started.
+ - **Create a connector** - create a connector (for example, SQL or LDAP) to allow the Azure AD ECMA Connector Host to export or import data to a data source.
+
+### Configure the settings
+When you first start the Azure AD ECMA Connector Host, you'll see a port number field that is pre-filled with the default of 8585.
+
+ ![Configure your settings](.\media\on-premises-ecma-configure\configure-1.png)
+
+For the preview, you will need to generate a new self-signed certificate.
+
 >[!NOTE]
 >This preview uses a time-sensitive certificate. The auto-generated certificate is self-signed, is part of the trusted root, and has a SAN that matches the host name.
++
+### Create a connector
+Now you must create a connector for the Azure AD ECMA Connector Host to use. This connector allows the ECMA Connector Host to export (and, if desired, import) data to the data source for the connector you create.
+
+The configuration steps for each of the individual connectors are longer and are provided in their own documents.
+
+Use one of the links below to create and configure a connector.
+
+- [Generic SQL connector](on-premises-sql-connector-configure.md) - a connector that will work with SQL databases such as Microsoft SQL or MySQL.
++
+## Establish connectivity between Azure AD and the Azure AD ECMA Connector Host
+The following sections guide you through establishing connectivity between the on-premises Azure AD ECMA Connector Host and Azure AD.
+
+#### Ensure the ECMA2Host service is running
+1. On the server running the Azure AD ECMA Connector Host, select **Start**.
+2. Type **run**, and enter **services.msc** in the box.
+3. In the list of services, ensure that **Microsoft ECMA2Host** is present and running. If not, select **Start**.
+ ![Service is running](.\media\on-premises-ecma-configure\configure-2.png)
+
+#### Add an enterprise application
+1. Sign in to the Azure portal as an application administrator.
+2. In the portal, go to **Azure Active Directory** > **Enterprise applications**.
+3. Select **New application**.
+ ![Add new application](.\media\on-premises-ecma-configure\configure-4.png)
+4. Locate your application, and select **Create**.
+
+### Configure the application and test
+ 1. Once the application has been created, select the **Provisioning** page.
+ 2. Select **Get started**.
+ ![get started](.\media\on-premises-ecma-configure\configure-6.png)
+ 3. On the **Provisioning** page, change the mode to **Automatic**.
+ ![Change mode](.\media\on-premises-ecma-configure\configure-7.png)
+ 4. In the **On-premises connectivity** section, select the agent that you just deployed, and then select **Assign agent(s)**.
+ ![Assign an agent](.\media\on-premises-ecma-configure\configure-8.png)</br>
+
+ >[!NOTE]
+ >After adding the agent, you need to wait 10-20 minutes for the registration to complete. The connectivity test will not work until the registration completes.
+ >
 >Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server: on the server, search for **services** in the Windows search bar, find the **Azure AD Connect Provisioning Agent Service**, right-click the service, and restart it.
+
+
+ 5. After 10 minutes, under the **Admin credentials** section, enter the following URL, replacing the "connectorName" portion with the name of the connector on the ECMA Host.
+
+ |Property|Value|
+ |--|--|
+ |Tenant URL|https://localhost:8585/ecma2host_connectorName/scim|
+
+ 6. Enter the secret token value that you defined when creating the connector.
+ 7. Click **Test Connection** and wait one minute.
+ ![Test the connection](.\media\on-premises-ecma-configure\configure-5.png)
+
+ >[!NOTE]
+ >Be sure to wait 10-20 minutes after assigning the agent to test the connection. The connection will fail if registration has not completed.
+ 8. Once the connection test succeeds, click **Save**.</br>
+ ![Successful test](.\media\on-premises-ecma-configure\configure-9.png)
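The tenant URL format used in step 5 can also be built programmatically when scripting the setup. A minimal sketch; the connector name `MySQLConnector` is an illustrative assumption, not a value from your environment:

```python
# Build the ECMA Connector Host tenant URL for a given connector.
# The host, port, and connector name below are illustrative; substitute
# the values from your own ECMA Host configuration.
def build_tenant_url(connector_name: str, host: str = "localhost", port: int = 8585) -> str:
    return f"https://{host}:{port}/ecma2host_{connector_name}/scim"

print(build_tenant_url("MySQLConnector"))
# → https://localhost:8585/ecma2host_MySQLConnector/scim
```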
+
+## Configure who is in scope for provisioning
+Now that the Azure AD ECMA Connector Host is communicating with Azure AD, you can move on to configuring who is in scope for provisioning. The sections below provide information on how to scope your users.
+
+### Assign users to your application
+Azure AD allows you to scope who should be provisioned based on assignment to an application and/or by filtering on a particular attribute. Determine who should be in scope for provisioning and define your scoping rules as necessary. For more information, see [Manage user assignment for an app in Azure Active Directory](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+
+### Configure your attribute mappings
+You will need to map the user attributes in Azure AD to the attributes in the target application. The Azure AD provisioning service relies on the SCIM standard for provisioning, and as a result the attributes surfaced have the SCIM namespace. The example below shows how you can map the mail and objectId attributes in Azure AD to the Email and InternalGUID attributes in an application.
+
+>[!NOTE]
+>The default mapping contains userPrincipalName to an attribute named PLACEHOLDER. You will need to change the PLACEHOLDER attribute to one that is found in your application. For more information, see [Matching users in the source and target systems](customize-application-attributes.md#matching-users-in-the-source-and-target--systems).
+
+|Attribute name in Azure AD|Attribute name in SCIM|Attribute name in target application|
+|--|--|--|
+|mail|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:Email|Email|
+|objectId|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:InternalGUID|InternalGUID|
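The table above can be represented as a simple lookup when generating or reviewing a mapping in a script. A minimal sketch; the attribute names come from the table, while the dictionary itself is a hypothetical helper:

```python
# Azure AD attribute -> (SCIM attribute in the ECMA2Host namespace, target app attribute)
SCIM_NS = "urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User"

ATTRIBUTE_MAP = {
    "mail": (f"{SCIM_NS}:Email", "Email"),
    "objectId": (f"{SCIM_NS}:InternalGUID", "InternalGUID"),
}

scim_attr, target_attr = ATTRIBUTE_MAP["mail"]
print(scim_attr)    # → urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:Email
print(target_attr)  # → Email
```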
+
+#### Configure attribute mapping
+ 1. In the Azure AD portal, under **Enterprise applications**, click the **Provisioning** page.
+ 2. Click **Get started**.
+ 3. Expand **Mappings** and click **Provision Azure Active Directory Users**.
+ ![provision a user](.\media\on-premises-ecma-configure\configure-10.png)
+ 4. Click **Add new mapping**.
+ ![Add a mapping](.\media\on-premises-ecma-configure\configure-11.png)
+ 5. Specify the source and target attributes and click **OK**.</br>
+ ![Edit attributes](.\media\on-premises-ecma-configure\configure-12.png)
++
+For more information on mapping user attributes from applications to Azure AD, see [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md).
+
+### Test your configuration by provisioning users on demand
+To test your configuration, you can use on-demand provisioning of a user. For information on provisioning users on demand, see [On-demand provisioning](provision-on-demand.md).
+
+ 1. Navigate to the single sign-on blade and then back to the provisioning blade. From the new provisioning overview blade, click **on-demand**.
+ 2. Test provisioning a few users on-demand as described [here](provision-on-demand.md).
+ ![Test provisioning](.\media\on-premises-ecma-configure\configure-13.png)
+
+### Start provisioning users
+ 1. Once on-demand provisioning is successful, change back to the provisioning configuration page. Ensure that the scope is set to only assigned users and groups, turn provisioning **On**, and click **Save**.
+ ![Start provisioning](.\media\on-premises-ecma-configure\configure-14.png)
+ 2. Wait several minutes for provisioning to start (it may take up to 40 minutes). You can learn more about the provisioning service performance here. After the provisioning job has been completed, as described in the next section, you can change the provisioning status to **Off** and click **Save**. This stops the provisioning service from running in the future.
+
+### Verify users have been successfully provisioned
+After waiting, check your data source to see if new users are being provisioned.
+ ![Verify users are provisioned](.\media\on-premises-ecma-configure\configure-15.png)
+
+## Monitor your deployment
+
+1. Use the provisioning logs to determine which users have been provisioned successfully or unsuccessfully.
+2. Build custom alerts, dashboards, and queries using the Azure Monitor integration.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states here.
+
+## Next Steps
+
+- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Generic SQL Connector](on-premises-sql-connector-configure.md)
active-directory On Premises Ecma Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-install.md
+
+ Title: 'Azure AD ECMA Connector Host installation'
+description: This article describes how to install the Azure AD ECMA Connector Host.
++++++ Last updated : 05/28/2021+++++
+# Installation of the Azure AD ECMA Connector Host
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+The Azure AD ECMA Connector Host is included as part of the Azure AD Connect Provisioning Agent Package. The provisioning agent and Azure AD ECMA Connector Host are two separate Windows services that are installed using one installer, deployed on the same machine.
+
+Installing and configuring the Azure AD ECMA Connector Host is a multi-step process. Use the flow below to guide you through it.
+
+ ![Installation flow](./media/on-premises-ecma-install/flow-1.png)
+
+For more installation and configuration information see:
+ - [Prerequisites for the Azure AD ECMA Connector Host](on-premises-ecma-prerequisites.md)
+ - [Configure the Azure AD ECMA Connector Host and the provisioning agent](on-premises-ecma-configure.md)
+ - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
++
+## Download and install the Azure AD Connect Provisioning Agent Package
+
+ 1. Sign in to the Azure portal.
+ 2. Navigate to **Enterprise applications** > **Add a new application**.
+ 3. Search for the "On-premises provisioning" application and add it to your tenant.
+ 4. Navigate to the provisioning blade.
+ 5. Click **On-premises connectivity**.
+ 6. Download the agent installer.
+ 7. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
+ 8. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms and select **Install**.
+ ![Microsoft Azure AD Connect Provisioning Agent Package screen](media/on-premises-ecma-install/install-1.png)</br>
+ 9. After this operation finishes, the configuration wizard starts. Click **Next**.
+ ![Welcome screen](media/on-premises-ecma-install/install-2.png)</br>
+ 10. On the **Select Extension** screen, select **On-premises application provisioning (Azure AD to application)** and click **Next**.
+ ![Select extension](media/on-premises-ecma-install/install-3.png)</br>
+ 11. Use your global administrator account and sign in to Azure AD.
+ ![Azure signin](media/on-premises-ecma-install/install-4.png)</br>
+ 12. On the **Agent Configuration** screen, click **Confirm**.
+ ![Confirm installation](media/on-premises-ecma-install/install-5.png)</br>
+ 13. Once the installation is complete, you should see a message at the bottom of the wizard. Click **Finish**.
+ ![Click finish](media/on-premises-ecma-install/install-6.png)</br>
+ 14. Click **Close**.
+
+Now that the agent package has been successfully installed, you will need to configure the Azure AD ECMA Connector Host and create or import connectors.
+## Next Steps
++
+- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
+- [Generic SQL Connector](on-premises-sql-connector-configure.md)
active-directory On Premises Ecma Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-prerequisites.md
+
+ Title: 'Prerequisites for Azure AD ECMA Connector Host'
+description: This article describes the prerequisites and hardware requirements you need for using the Azure AD ECMA Connector Host.
++++++ Last updated : 05/28/2021+++++
+# Prerequisites for the Azure AD ECMA Connector Host
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+This article provides guidance on the prerequisites that are needed for using the Azure AD ECMA Connector Host.
+
+Installing and configuring the Azure AD ECMA Connector Host is a multi-step process. Use the flow below to guide you through it.
+
+ ![Installation flow](./media/on-premises-ecma-prerequisites/flow-1.png)
+
+For more installation and configuration information, see:
+ - [Installation of the Azure AD ECMA Connector Host](on-premises-ecma-install.md)
+ - [Configure the Azure AD ECMA Connector Host and the provisioning agent](on-premises-ecma-configure.md)
+ - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
+
+## On-premises prerequisites
+ - A target system, such as a SQL database, in which users can be created, updated, and deleted.
+ - An ECMA 2.0 or later connector for that target system, which supports export, schema retrieval, and optionally full import or delta import operations. If you do not have an ECMA Connector ready during configuration, then you can still validate the end-to-end flow if you have a SQL Server in your environment and use the Generic SQL Connector.
+ - A Windows Server 2016 or later computer with an Internet-accessible TCP/IP address, connectivity to the target system, and with outbound connectivity to login.microsoftonline.com (for example, a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy). The server should have at least 3 GB of RAM.
+ - A computer with the .NET Framework 4.7.1 installed.
+
+## Cloud requirements
+
+ - An Azure AD tenant with Azure AD Premium P1 or Premium P2 (or EMS E3 or E5).
+ [!INCLUDE [active-directory-p1-license.md](../../../includes/active-directory-p1-license.md)]
+
+ - Hybrid Administrator role for configuring the provisioning agent and the Application Administrator or Cloud Administrator roles for configuring provisioning in the Azure portal.
++
+## Next Steps
+
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
+- [Generic SQL Connector](on-premises-sql-connector-configure.md)
+- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
+
+ Title: 'Troubleshooting issues with the ECMA Connector Host and Azure AD'
+description: Describes how to troubleshoot various issues you may encounter when installing and using the ECMA Connector Host.
++++++ Last updated : 05/28/2021+++++
+# Troubleshooting ECMA Connector Host issues
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
++
+## Troubleshoot test connection issues
+After configuring the ECMA Host and provisioning agent, it's time to test connectivity from the Azure AD provisioning service to the provisioning agent > ECMA Host > application. This end-to-end test can be performed by clicking **Test Connection** in the application in the Azure portal. When the test connection fails, try the following troubleshooting steps:
+
+ 1. Verify that the agent and ECMA host are running:
+ 1. On the server with the agent installed, open **Services** by going to **Start** > **Run** > **Services.msc**.
+ 2. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater**, **Microsoft Azure AD Connect Provisioning Agent**, and **Microsoft ECMA2Host** services are present and their status is *Running*.
+![ECMA service running](./media/on-premises-ecma-troubleshoot/tshoot-1.png)
+
+ 2. Navigate to the folder where the ECMA Host was installed > **Troubleshooting** > **Scripts** > **TestECMA2HostConnection**, and run the script. This script sends a SCIM GET or POST request in order to validate that the ECMA Connector Host is operating and responding to requests.
+ It should be run on the same computer as the ECMA Connector Host service itself.
+ 3. Ensure that the agent is active by navigating to your application in the Azure portal, clicking **admin connectivity**, clicking the agent dropdown, and confirming your agent is active.
+ 4. Check whether the secret token provided is the same as the secret token on-premises (you will need to go on-premises and provide the secret token again, and then copy it into the Azure portal).
+ 5. Ensure that you have assigned one or more agents to the application in the Azure portal.
+ 6. After assigning an agent, you need to wait 10-20 minutes for the registration to complete. The connectivity test will not work until the registration completes.
+ 7. Ensure that you are using a valid certificate. Navigating to the settings tab of the ECMA host allows you to generate a new certificate.
+ 8. Restart the provisioning agent: on your server, search for the Microsoft Azure AD Connect Provisioning Agent service, right-click it, and select **Stop** and then **Start**.
+ 9. When providing the tenant URL in the Azure portal, ensure that it follows this pattern. You can replace localhost with your hostname, but it is not required. Replace "connectorName" with the name of the connector you specified in the ECMA host.
+ ```
+ https://localhost:8585/ecma2host_connectorName/scim
+ ```
+
+## Unable to configure ECMA host, view logs in event viewer, or start ECMA host service
+
+#### The following issues can be resolved by running the ECMA host as an administrator:
+
+* I get an error when opening the ECMA host wizard
+ ![ECMA wizard error](./media/on-premises-ecma-troubleshoot/tshoot-2.png)
+
+* I've been able to configure the ECMA host wizard, but am not able to see the ECMA host logs. In this case, you will need to open the host as an admin and set up a connector end to end. This can be simplified by exporting an existing connector and importing it again.
+
+ ![Host logs](./media/on-premises-ecma-troubleshoot/tshoot-3.png)
+
+* I've been able to configure the ECMA host wizard, but am not able to start the ECMA host service
+ ![Host service](./media/on-premises-ecma-troubleshoot/tshoot-4.png)
++
+## Turning on verbose logging
+
+By default, the switchValue for the ECMA Connector Host is set to Error. This means it will only log events that are errors. To enable verbose logging for the ECMA host service and/or wizard, set the "switchValue" to Verbose in both locations, as shown below.
+
+File location for verbose service logging: c:\program files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config
+ ```
+ <?xml version="1.0" encoding="utf-8"?>
+ <configuration>
+ <startup>
+ <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" />
+ </startup>
+ <appSettings>
+ <add key="Debug" value="true" />
+ </appSettings>
+ <system.diagnostics>
+ <sources>
+ <source name="ConnectorsLog" switchValue="Verbose">
+ <listeners>
+ <add initializeData="ConnectorsLog" type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ConnectorsLog" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack">
+ <filter type=""/>
+ </add>
+ </listeners>
+ </source>
+ <!-- Choose one of the following switchTrace: Off, Error, Warning, Information, Verbose -->
+ <source name="ECMA2Host" switchValue="Verbose">
+ <listeners>
+                <add initializeData="ECMA2Host" type="System.Diagnostics.EventLogTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ECMA2HostListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
+              </listeners>
+            </source>
+          </sources>
+        </system.diagnostics>
+      </configuration>
+ ```
+
+File location for verbose wizard logging: C:\Program Files\Microsoft ECMA2Host\Wizard\Microsoft.ECMA2Host.ConfigWizard.exe.config
+ ```
+ <source name="ConnectorsLog" switchValue="Verbose">
+ <listeners>
+ <add initializeData="ConnectorsLog" type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ConnectorsLog" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack">
+ <filter type=""/>
+ </add>
+ </listeners>
+ </source>
+ <!-- Choose one of the following switchTrace: Off, Error, Warning, Information, Verbose -->
+ <source name="ECMA2Host" switchValue="Verbose">
+ <listeners>
+ <add initializeData="ECMA2Host" type="System.Diagnostics.EventLogTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ECMA2HostListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
+ ```
+
+## Target attribute missing
+The provisioning service automatically discovers attributes in your target application. If you see that a target attribute is missing in the target attribute list in the Azure portal, perform the following troubleshooting steps:
+
+ 1. Review the "Select Attributes" page of your ECMA host configuration to verify that the attribute has been selected to be exposed to the Azure portal.
+ 2. Ensure that the ECMA host service is turned on.
+ 3. Review the ECMA host logs to verify that a /schemas request was made and review the attributes in the response. This information will be valuable for support to troubleshoot the issue.
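When reviewing the /schemas response in the logs, it can help to pull out the attribute names that were actually returned. A minimal sketch, assuming the response follows the standard SCIM schema representation; the sample document is invented for illustration:

```python
import json

def attribute_names(schema_json: str) -> list:
    """Collect attribute names from a SCIM /Schemas-style resource."""
    schema = json.loads(schema_json)
    return [attr["name"] for attr in schema.get("attributes", [])]

# Invented example of a SCIM schema resource, trimmed to the relevant parts.
sample = json.dumps({
    "id": "urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User",
    "attributes": [
        {"name": "Email", "type": "string"},
        {"name": "InternalGUID", "type": "string"},
    ],
})
print(attribute_names(sample))  # → ['Email', 'InternalGUID']
```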
+
+## Collect logs from event viewer as a zip file
+Navigate to the folder where the ECMA Host was installed > Troubleshooting > Scripts. Run the `CollectTroubleshootingInfo` script as an admin. It allows you to capture the logs in a zip file and export them.
+
+## Reviewing events in the event viewer
+
+Once the ECMA Connector Host schema mapping has been configured, start the service so that it listens for incoming connections. Then, monitor for incoming requests as follows:
+
+ 1. Click the Start menu, type **event viewer**, and select **Event Viewer**.
+ 2. In **Event Viewer**, expand **Applications and Services** Logs, and select **Microsoft ECMA2Host Logs**.
+ 3. As changes are received by the connector host, events will be written to the application log.
+++
+## Understanding incoming SCIM requests
+
+Requests made by Azure AD to the provisioning agent and connector host use the SCIM protocol. Requests made from the host to apps use the protocol the app supports, and the requests from the host through the agent to Azure AD rely on SCIM. You can learn more about our SCIM implementation [here](use-scim-to-provision-users-and-groups.md).
+
+Be aware that at the beginning of each provisioning cycle, before performing on-demand provisioning, and when doing the test connection, the Azure AD provisioning service generally makes a GET user call for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) to ensure the target endpoint is available and returning SCIM-compliant responses.
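For reference, that availability check is an ordinary SCIM GET with a `filter` query parameter on the matching attribute. The sketch below shows how such a request URL is encoded; the attribute, value, and connector name are illustrative, not the exact ones the service uses:

```python
from urllib.parse import quote

def scim_user_query(base_url: str, attribute: str, value: str) -> str:
    # SCIM filter syntax: <attribute> eq "<value>"
    flt = f'{attribute} eq "{value}"'
    return f"{base_url}/Users?filter={quote(flt)}"

url = scim_user_query("https://localhost:8585/ecma2host_connectorName/scim",
                      "userName", "non-existent-user")
print(url)
# → https://localhost:8585/ecma2host_connectorName/scim/Users?filter=userName%20eq%20%22non-existent-user%22
```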
++
+## How do I troubleshoot the provisioning agent?
+### Agent failed to start
+
+You might receive an error message that states:
+
+**Service 'Microsoft Azure AD Connect Provisioning Agent' failed to start. Verify that you have sufficient privileges to start the system services.**
+
+This problem is typically caused by a group policy that prevented permissions from being applied to the local NT Service log-on account created by the installer (NT SERVICE\AADConnectProvisioningAgent). These permissions are required to start the service.
+
+To resolve this problem, follow these steps.
+
+1. Sign in to the server with an administrator account.
+1. Open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
+1. Under **Services**, double-click **Microsoft Azure AD Connect Provisioning Agent**.
+1. On the **Log On** tab, change **This account** to a domain admin. Then restart the service.
+
+This test verifies that your agents can communicate with Azure over port 443. Open a browser, and go to the previous URL from the server where the agent is installed.
+
+### Agent times out or certificate is invalid
+
+You might get the following error message when you attempt to register the agent.
+
+![Agent times out](./media/on-premises-ecma-troubleshoot/tshoot-5.png)
+
+This problem is usually caused by the agent being unable to connect to the Hybrid Identity Service and requires you to configure an HTTP proxy. To resolve this problem, configure an outbound proxy.
+
+The provisioning agent supports use of an outbound proxy. You can configure it by editing the agent config file *C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config*.
+Add the following lines into it, toward the end of the file just before the closing `</configuration>` tag.
+Replace the variables `[proxy-server]` and `[proxy-port]` with your proxy server name and port values.
+
+```xml
+ <system.net>
+ <defaultProxy enabled="true" useDefaultCredentials="true">
+ <proxy
+ usesystemdefault="true"
+ proxyaddress="http://[proxy-server]:[proxy-port]"
+ bypassonlocal="true"
+ />
+ </defaultProxy>
+ </system.net>
+```
+### Agent registration fails with security error
+
+You might get an error message when you install the cloud provisioning agent.
+
+This problem is typically caused by the agent being unable to execute the PowerShell registration scripts due to local PowerShell execution policies.
+
+To resolve this problem, change the PowerShell execution policies on the server. You need to have Machine and User policies set as *Undefined* or *RemoteSigned*. If they're set as *Unrestricted*, you'll see this error. For more information, see [PowerShell execution policies](/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-6).
+
+### Log files
+
+By default, the agent emits minimal error messages and stack trace information. You can find these trace logs in the folder **C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace**.
+
+To gather additional details for troubleshooting agent-related problems, follow these steps.
+
+1. Install the AADCloudSyncTools PowerShell module as described [here](../../active-directory/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
+2. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. You can use the following switches to fine-tune your data collection.
+ - SkipVerboseTrace to only export current logs without capturing verbose logs (default = false)
+ - TracingDurationMins to specify a different capture duration (default = 3 mins)
+ - OutputPath to specify a different output path (default = User's Documents)
+++
+Azure AD allows you to monitor the provisioning service in the cloud as well as collect logs on-premises. The provisioning service emits logs for each user that was evaluated as part of the synchronization process. Those logs can be consumed through the [Azure portal UI, APIs, and log analytics](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs#ways-of-interacting-with-the-provisioning-logs). In addition, the ECMA host generates logs on-premises, showing each provisioning request received and the response sent to Azure AD.
+
+### Agent installation fails
+* The error `System.ComponentModel.Win32Exception: The specified service already exists` indicates that the previous ECMA Host was not successfully uninstalled. Uninstall the host application. Navigate to Program Files and remove the ECMA Host folder. You may want to keep a backup of the configuration file.
+* The following error indicates a prerequisite has not been fulfilled. Ensure that you have .NET 4.7.1 installed.
+
+ ```
+ Method Name : <>c__DisplayClass0_1 :
+ RegisterNotLoadedAssemblies Error during load assembly: System.Management.Automation.resources.dll
+ Outer Exception Data
+ Message: Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\System.Management.Automation.resources.dll' or one of its dependencies. The system cannot find the file specified.
+
+ ```
++
+## Next Steps
+
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
+- [Generic SQL Connector](on-premises-sql-connector-configure.md)
+- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
+
+ Title: 'Export a Microsoft Identity Manager connector for use with Azure AD ECMA Connector Host'
+description: Describes how to create and export a connector from MIM Sync to be used with Azure AD ECMA Connector Host.
++++++ Last updated : 06/01/2021++++++
+# Export a Microsoft Identity Manager connector for use with Azure AD ECMA Connector Host
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+You can import a configuration for a specific connector from a FIM Sync or MIM Sync installation into the Azure AD ECMA Connector Host. Note that the MIM Sync installation is only used for configuration, not for the ongoing synchronization from Azure AD.
+
+>[!IMPORTANT]
+>Currently, only the Generic SQL (GSQL) connector is supported for use with the Azure AD ECMA Connector Host.
++
+## Creating and exporting a connector configuration in MIM Sync
+If you already have MIM Sync with your ECMA connector configured, skip to step 10.
+
+ 1. Prepare a Windows Server 2016 server, which is distinct from the server that will be used for running the Azure AD ECMA Connector Host. This host server should either have a SQL Server 2016 database co-located, or have network connectivity to a SQL Server 2016 database. One way to set up this server is by deploying an Azure Virtual Machine with the image **SQL Server 2016 SP1 Standard on Windows Server 2016**. Note that this server does not need Internet connectivity, other than remote desktop access for setup purposes.
+ 2. Create an account for use during the MIM Sync installation. This can be a local account on that Windows Server. To create a local account, launch control panel, open user accounts, and add a user account **mimsync**.
+ 3. Add the account created in the previous step to the local Administrators group.
+ 4. Give the account created earlier the ability to run a service. Launch Local Security Policy, click on Local Policies, User Rights Assignment, and **Log on as a service**. Add the account mentioned earlier.
+ 5. Install MIM Sync on this host. If you do not have MIM Sync binaries, then you can install an evaluation by downloading the ZIP file from [https://www.microsoft.com/en-us/download/details.aspx?id=48244](https://www.microsoft.com/en-us/download/details.aspx?id=48244), mounting the ISO image, and copying the folder **Synchronization Service** to the Windows Server host. Then run the setup program contained in that folder. Note that evaluation software is time-limited and will expire, and is not intended for production use.
+ 6. Once the installation of MIM Sync is complete, log out and log back in.
+ 7. Install your connector on that same server as MIM Sync. (For illustration purposes, this test lab guide will illustrate using one of the Microsoft-supplied connectors for download from [https://www.microsoft.com/en-us/download/details.aspx?id=51495](https://www.microsoft.com/en-us/download/details.aspx?id=51495) ).
+ 8. Launch the Synchronization Service UI. Click on **Management Agents**. Click **Create**, and specify the connector management agent. Be sure to select a connector management agent that is ECMA-based.
+ 9. Give the connector a name, and configure the parameters needed to import and export data to the connector. Be sure to configure that the connector can import and export single-valued string attributes of a user or person object type.
+ 10. On the MIM Sync server computer, launch the Synchronization Service UI, if not already running. Click on **Management Agents**.
+ 11. Select the connector, and click **Export Management Agent**. Save the XML file, as well as the DLL and related software for your connector, to the Windows Server which will be holding the ECMA Connector host.
+
+At this point, the MIM Sync server is no longer needed.
+
+ 1. Sign into the Windows Server as the account which the Azure AD ECMA Connector Host will run as.
+ 2. Change to the directory c:\program files\Microsoft ECMA2host\Service\ECMA and ensure there are one or more DLLs already present in that directory. (Those DLLs correspond to Microsoft-delivered connectors).
+ 3. Copy the MA DLL for your connector, and any of its prerequisite DLLs, to that same ECMA subdirectory of the Service directory.
+ 4. Change to the directory C:\program files\Microsoft ECMA2Host\Wizard and run the program Microsoft.ECMA2Host.ConfigWizard.exe to set up the ECMA Connector Host configuration.
+ 5. A new window will appear with a list of connectors. By default, no connectors will be present. Click **New connector**.
+ 6. Specify the management agent xml file that was exported from MIM earlier. Continue with the configuration and schema mapping instructions from the section Configuring a connector above.
+++
+## Next Steps
++
+- [App provisioning](user-provisioning.md)
+- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
+- [Generic SQL Connector](on-premises-sql-connector-configure.md)
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
+
+ Title: Azure AD on-premises app provisioning to SCIM-enabled apps
+description: This article describes how to provision users into on-premises SCIM-enabled apps.
+++++++ Last updated : 05/28/2021++++
+# Azure AD on-premises application provisioning to SCIM-enabled apps
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+The Azure AD provisioning service supports a [SCIM 2.0](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) client that can be used to automatically provision users into cloud or on-premises applications. This document outlines how you can use the Azure AD provisioning service to provision users into an on-premises application that is SCIM enabled. If you're looking to provision users into non-SCIM on-premises applications, such as a non-AD LDAP directory or SQL DB, see here (link to new doc that we will need to create). If you're looking to provision users into cloud apps such as Dropbox and Atlassian, review the app-specific [tutorials](../../active-directory/saas-apps/tutorial-list.md).
+
+![architecture](./media/on-premises-scim-provisioning/scim-4.png)
++
+## Prerequisites
+- An Azure AD tenant with Azure AD Premium P1 or Premium P2 (or EMS E3 or E5).
+ [!INCLUDE [active-directory-p1-license.md](../../../includes/active-directory-p1-license.md)]
+- Administrator role for installing the agent. This is a one-time effort, and the account should be either a hybrid identity administrator or a global administrator.
+- Administrator role for configuring the application in the cloud (Application administrator, Cloud application administrator, Global administrator, or a custom role with the required permissions)
+
+## Steps for on-premises app provisioning to SCIM-enabled apps
+Use the steps below to provision to SCIM-enabled apps.
+
+ 1. Add the "Agent-based SCIM provisioning" app from the [gallery](../../active-directory/manage-apps/add-application-portal.md).
+ 2. Navigate to your app > Provisioning > Download the provisioning agent.
+ 3. Click **On-premises connectivity**, and download the provisioning agent.
+ 4. Copy the agent onto the virtual machine or server that your SCIM endpoint is hosted on.
+ 5. Open the provisioning agent installer, agree to the terms of service, and click **Install**.
+ 6. Open the provisioning agent wizard and select on-premises provisioning when prompted for the extension that you would like to enable.
+ 7. Provide credentials for an Azure AD administrator when prompted to authorize (a Hybrid identity administrator or Global administrator is required).
+ 8. Click **Confirm** to confirm that the installation was successful.
+ 9. Navigate back to your application > on-premises connectivity.
+ 10. Select the agent that you installed from the dropdown list, and click **Assign agent**.
+ 11. Wait 10 minutes, or restart the Azure AD Connect Provisioning Agent service on your server or VM.
+ 12. Provide the URL for your SCIM endpoint in the tenant URL field (for example, https://localhost:8585/scim).
+ ![assign agent](./media/on-premises-scim-provisioning/scim-2.png)
+ 13. Click **Test connection**, and save the credentials.
+ 14. Configure any [attribute mappings](customize-application-attributes.md) or [scoping](define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application.
+ 15. Add users to scope by [assigning users and groups](../../active-directory/manage-apps/add-application-portal-assign-users.md) to the application.
+ 16. Test provisioning a few users [on-demand](provision-on-demand.md).
+ 17. Add additional users into scope by assigning them to your application.
+ 18. Navigate to the provisioning blade, and click **Start provisioning**.
+ 19. Monitor using the [provisioning logs](../../active-directory/reports-monitoring/concept-provisioning-logs.md).
+
+
+## Things to be aware of
+* Ensure your [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) implementation meets the [Azure AD SCIM requirements](use-scim-to-provision-users-and-groups.md).
+ * Azure AD offers open-source [reference code](https://github.com/AzureAD/SCIMReferenceCode/wiki) that developers can use to bootstrap their SCIM implementation (the code is provided as-is).
+* Support the /schemaDiscovery endpoint to reduce configuration required in the Azure portal.
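A quick way to sanity-check the first requirement is to confirm your endpoint accepts the minimal SCIM 2.0 User document that a SCIM client sends. A hedged sketch of such a payload (the attribute values are illustrative; consult the Azure AD SCIM requirements doc for the full attribute set):

```python
import json

def make_scim_user(user_name: str, given_name: str, family_name: str) -> str:
    """Build a minimal SCIM 2.0 User document (RFC 7643 core schema)."""
    payload = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "active": True,
        "name": {"givenName": given_name, "familyName": family_name},
    }
    return json.dumps(payload, indent=2)

if __name__ == "__main__":
    print(make_scim_user("bjohnson@contoso.com", "Barbara", "Johnson"))
```

A compliant SCIM service should accept a document like this on its /Users endpoint and return the created resource.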
+
+## Next steps
+
+- [App provisioning](user-provisioning.md)
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
+- [Generic SQL Connector](on-premises-sql-connector-configure.md)
+- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
+
+ Title: Azure AD ECMA Connector Host generic SQL connector configuration
+description: This document describes how to configure the Azure AD ECMA Connector Host generic SQL connector.
+ Last updated : 05/28/2021
+# Azure AD ECMA Connector Host generic SQL connector configuration
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
++
+This document describes how to create and configure a new SQL connector with the Azure AD ECMA Connector Host. You need to do this after you have successfully installed the Azure AD ECMA Connector Host.
+
+>[!NOTE]
+> This document covers only the configuration of the generic SQL connector. For a step-by-step example of setting up the generic SQL connector, see [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md).
+
+Installing and configuring the Azure AD ECMA Connector Host is a multi-step process. Use the flow below to guide you through it.
+
+ ![Installation flow](./media/on-premises-sql-connector-configure/flow-1.png)
+
+For more installation and configuration information, see:
+ - [Prerequisites for the Azure AD ECMA Connector Host](on-premises-ecma-prerequisites.md)
+ - [Installation of the Azure AD ECMA Connector Host](on-premises-ecma-install.md)
+ - [Configure the Azure AD ECMA Connector Host and the provisioning agent](on-premises-ecma-configure.md)
+
+Depending on the options you select, some of the wizard screens might not be available, and the information might be slightly different. For purposes of this configuration, the user object type is used. Use the information below to guide you through your configuration.
++
+## Create a generic SQL connector
+
+To create a generic SQL connector, use the following steps:
+
+ 1. Click on the ECMA Connector Host shortcut on the desktop.
+ 2. Select **New Connector**.
+ ![Choose new connector](.\media\on-premises-sql-connector-configure\sql-1.png)
+
+ 3. On the **Properties** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter properties](.\media\on-premises-sql-connector-configure\sql-2.png)
+
+ |Property|Description|
+ |--|--|
+ |Name|The name for this connector|
+ |Autosync timer (minutes)|Minimum allowed is 120 minutes.|
+ |Secret Token|A secret of your choosing, for example 123456. This must be a string of 10-20 ASCII letters and/or digits.|
+ |Description|The description of the connector|
+ |Extension DLL|For a generic SQL connector, select **Microsoft.IAM.Connector.GenericSql.dll**.|
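The secret token rule in the table above (a string of 10-20 ASCII letters and/or digits) is easy to check up front. A minimal sketch of that validation; the regex simply encodes the rule as stated in the table, it is not an official validator:

```python
import re

# 10-20 ASCII letters and/or digits, per the Secret Token guidance above.
TOKEN_PATTERN = re.compile(r"^[A-Za-z0-9]{10,20}$")

def is_valid_secret_token(token: str) -> bool:
    """Return True if the token satisfies the stated constraint."""
    return bool(TOKEN_PATTERN.fullmatch(token))

if __name__ == "__main__":
    print(is_valid_secret_token("abc123XYZ0"))  # 10 alphanumeric chars -> True
    print(is_valid_secret_token("123456"))      # only 6 chars -> False
```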
+ 4. On the **Connectivity** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter connectivity](.\media\on-premises-sql-connector-configure\sql-3.png)
+
+ |Property|Description|
+ |--|--|
+ |DSN File|The Data Source Name file used to connect to the SQL server|
+ |User Name|The username of an individual with rights to the SQL server. This must be in the form of hostname\sqladminaccount for standalone servers, or domain\sqladminaccount for domain member servers.|
+ |Password|The password of the username provided above.|
+ |DN is Anchor|Unless your environment is known to require these settings, leave **DN is Anchor** and **Export Type:Object Replace** deselected.|
+ |Export Type:Object Replace||
+ 5. On the **Schema 1** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter schema 1](.\media\on-premises-sql-connector-configure\sql-4.png)
+
+ |Property|Description|
+ |--|--|
+ |Object type detection method|The method used to detect the object type the connector will be provisioning.|
+ |Fixed value list/Table/View/SP|This should contain User.|
+ |Column Name for Table/View/SP||
+ |Stored Procedure Parameters||
+ |Provide SQL query for detecting object types||
+ 6. On the **Schema 2** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes. This schema screen may be slightly different or have additional information depending on the object types that were selected in the previous step.
+ ![Enter schema 2](.\media\on-premises-sql-connector-configure\sql-5.png)
+
+ |Property|Description|
+ |--|--|
+ |User:Attribute Detection|This should be set to Table.|
+ |User:Table/View/SP|This should contain Employees.|
+ |User:Name of Multi-Values Table/Views||
+ |User:Stored Procedure Parameters||
+ |User:Provide SQL query for detecting object types||
+ 7. On the **Schema 3** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes. The attributes that you see will depend on the information provided in the previous step.
+ ![Enter schema 3](.\media\on-premises-sql-connector-configure\sql-6.png)
+
+ |Property|Description|
+ |--|--|
+ |Select DN attribute for User||
+ 8. On the **Schema 4** page, review each attribute's data type and the direction of flow for the connector. You can adjust them if needed, and click **Next**.
+ ![Enter schema 4](.\media\on-premises-sql-connector-configure\sql-7.png)
+ 9. On the **Global** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter global information](.\media\on-premises-sql-connector-configure\sql-8.png)
+
+ |Property|Description|
+ |--|--|
+ |Water Mark Query||
+ |Data Source Time Zone|Select the time zone that the data source is located in.|
+ |Data Source Date Time Format|Specify the format for the data source.|
+ |Use named parameters to execute a stored procedure||
+ |Operation Methods||
+ |Extension Name||
+ |Set Password SP Name||
+ |Set Password SP Parameters||
+ 10. On the **Select partition** page, ensure that the correct partitions are selected and click Next.
+ ![Enter partition information](.\media\on-premises-sql-connector-configure\sql-9.png)
+
+ 11. On the **Run Profiles** page, select the run profiles that you wish to use and click Next.
+ ![Enter run profiles](.\media\on-premises-sql-connector-configure\sql-10.png)
+
+ |Property|Description|
+ |--|--|
+ |Export|Run profile that will export data to SQL. This run profile is required.|
+ |Full import|Run profile that will import all data from SQL sources specified earlier.|
+ |Delta import|Run profile that will import only changes from SQL since the last full or delta import.|
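The delta import run profile described above relies on a watermark: the connector re-queries only rows changed since the last run. A hypothetical sketch of how such a query could be parameterized (the table and column names here are assumptions for illustration, not values the wizard prescribes):

```python
def build_delta_query(table: str, watermark_column: str) -> str:
    """Build a parameterized watermark query; the ODBC layer supplies the
    last-run timestamp for the ? placeholder at execution time."""
    return f"SELECT * FROM [{table}] WHERE [{watermark_column}] > ?"

if __name__ == "__main__":
    # Example: only Employees rows modified since the previous import.
    print(build_delta_query("Employees", "LastModified"))
```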
+
+ 12. On the **Run Profiles** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter Export information](.\media\on-premises-sql-connector-configure\sql-11.png)
+
+ |Property|Description|
+ |--|--|
+ |Operation Method||
+ |Table/View/SP||
+ |Start Index Parameter Name||
+ |End Index Parameter Name||
+ |Stored Procedure Parameters||
+
+ 13. On the **Object Types** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter object types](.\media\on-premises-sql-connector-configure\sql-12.png)
+
+ |Property|Description|
+ |--|--|
+ |Target Object|The object that you are configuring.|
+ |Anchor|The attribute that will be used as the object's anchor. This attribute should be unique in the target system. The Azure AD provisioning service will query the ECMA host using this attribute after the initial cycle. This anchor value should be the same as the anchor value in schema 3.|
+ |Query attribute|Used by the ECMA host to query the in-memory cache. This attribute should be unique.|
+ |DN|The attribute that is used for the target object's distinguished name. The autogenerate option should be selected in most cases. If deselected, ensure that the DN attribute is mapped to an attribute in Azure AD that stores the DN in this format: CN = anchorValue, Object = objectType|
+
+ 14. The ECMA host discovers the attributes supported by the target system. You can choose which of those attributes you would like to expose to Azure AD. These attributes can then be configured in the Azure portal for provisioning. On the **Select Attributes** page, select attributes from the drop-down to add.
+ ![Enter attributes](.\media\on-premises-sql-connector-configure\sql-13.png)
+
+15. On the **Deprovisioning** page, review the deprovisioning information and make adjustments as necessary. Click **Finish**.
+ ![Enter deprovisioning information](.\media\on-premises-sql-connector-configure\sql-14.png)
+++
+## Next steps
+
+- [App provisioning](user-provisioning.md)
+- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
+- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
+- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 05/11/2021 Last updated : 06/01/2021
We recommend the following production configuration:
|:-|:-|
|Number of Azure AD Connect provisioning agents to deploy|Two (for high availability and failover)|
|Number of provisioning connector apps to configure|One app per child domain|
-|Server host for Azure AD Connect provisioning agent|Windows 2012 R2+ with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
+|Server host for Azure AD Connect provisioning agent|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
![Flow to on-premises agents](media/plan-cloud-hr-provision/plan-cloudhr-provisioning-img4.png)
We recommend the following production configuration:
|:-|:-|
|Number of Azure AD Connect provisioning agents to deploy on-premises|Two per disjoint Active Directory forest|
|Number of provisioning connector apps to configure|One app per child domain|
-|Server host for Azure AD Connect provisioning agent|Windows 2012 R2+ with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
+|Server host for Azure AD Connect provisioning agent|Windows Server 2016 with line of sight to geolocated Active Directory domain controllers</br>Can coexist with Azure AD Connect service|
![Single cloud HR app tenant disjoint Active Directory forest](media/plan-cloud-hr-provision/plan-cloudhr-provisioning-img5.png)
### Azure AD Connect provisioning agent requirements
-The cloud HR app to Active Directory user provisioning solution requires that you deploy one or more Azure AD Connect provisioning agents on servers that run Windows 2012 R2 or greater. The servers must have a minimum of 4-GB RAM and .NET 4.7.1+ runtime. Ensure that the host server has network access to the target Active Directory domain.
+The cloud HR app to Active Directory user provisioning solution requires that you deploy one or more Azure AD Connect provisioning agents on servers that run Windows Server 2016 or greater. The servers must have a minimum of 4-GB RAM and .NET 4.7.1+ runtime. Ensure that the host server has network access to the target Active Directory domain.
To prepare the on-premises environment, the Azure AD Connect provisioning agent configuration wizard registers the agent with your Azure AD tenant, [opens ports](../app-proxy/application-proxy-add-on-premises-application.md#open-ports), [allows access to URLs](../app-proxy/application-proxy-add-on-premises-application.md#allow-access-to-urls), and supports [outbound HTTPS proxy configuration](../saas-apps/workday-inbound-tutorial.md#how-do-i-configure-the-provisioning-agent-to-use-a-proxy-server-for-outbound-http-communication).
-The provisioning agent uses a service account to communicate with the Active Directory domains. Before you install the agent, create a service account in Active Directory Users and Computers that meets the following requirements:
-
-- A password that doesn't expire
-- Delegated control permissions to read, create, delete, and manage user accounts
+The provisioning agent configures a [group managed service account (gMSA)](../cloud-sync/how-to-prerequisites.md#group-managed-service-accounts)
+to communicate with the Active Directory domains. If you want to use a non-gMSA service account for provisioning, you can [skip gMSA configuration](../cloud-sync/how-to-manage-registry-options.md#skip-gmsa-configuration) and specify your service account during configuration.
You can select domain controllers that should handle provisioning requests. If you have several geographically distributed domain controllers, install the provisioning agent in the same site as your preferred domain controllers. This positioning improves the reliability and performance of the end-to-end solution. For high availability, you can deploy more than one Azure AD Connect provisioning agent. Register the agent to handle the same set of on-premises Active Directory domains.
+## Design HR provisioning app deployment topology
+
+Depending on the number of Active Directory domains involved in the inbound user provisioning configuration, you may consider one of the following deployment topologies.
+
+### Deployment topology 1: Single app to provision all users from Cloud HR to single on-premises Active Directory domain
+
+This is the most common deployment topology. Use this topology if you need to provision all users from Cloud HR to a single AD domain and the same provisioning rules apply to all users.
++
+**Salient configuration aspects**
+* Set up two provisioning agent nodes for high availability and failover.
+* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register your AD domain with your Azure AD tenant.
+* When configuring the provisioning app, select the AD domain from the dropdown of registered domains.
+* If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+
+### Deployment topology 2: Separate apps to provision distinct user sets from Cloud HR to single on-premises Active Directory domain
+
+This topology supports business requirements where attribute mapping and provisioning logic differ based on user type (employee/contractor), user location, or the user's business unit. You can also use this topology to delegate the administration and maintenance of inbound user provisioning on a division or country basis.
++
+**Salient configuration aspects**
+* Set up two provisioning agent nodes for high availability and failover.
+* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
+* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+
+> [!NOTE]
+> If you do not have a test AD domain and use a TEST OU container in AD, then you may use this topology to create two separate apps *HR2AD (Prod)* and *HR2AD (Test)*. Use the *HR2AD (Test)* app to test your attribute mapping changes before promoting it to the *HR2AD (Prod)* app.
+
+### Deployment topology 3: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (no cross-domain visibility)
+
+Use this topology to manage multiple independent child AD domains belonging to the same forest. It also offers the flexibility of delegating the administration of each provisioning job by domain boundary. For example: In the diagram below, *EMEA administrators* can independently manage the provisioning configuration of users belonging to the EMEA region.
++
+**Salient configuration aspects**
+* Set up two provisioning agent nodes for high availability and failover.
+* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register all child AD domains with your Azure AD tenant.
+* When configuring the provisioning app, select the respective child AD domain from the dropdown of available AD domains.
+* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
+* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
++
+### Deployment topology 4: Separate apps to provision distinct user sets from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility)
+
+Use this topology to manage multiple child AD domains with cross-domain visibility for resolving cross-domain manager references and checking for forest-wide uniqueness when generating values for attributes like *userPrincipalName*, *samAccountName* and *mail*.
++
+**Salient configuration aspects**
+* Set up two provisioning agent nodes for high availability and failover.
+* Configure [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent.
+* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register the parent AD domain and all child AD domains with your Azure AD tenant.
+* When configuring each provisioning app, select the parent AD domain from the dropdown of available AD domains.
+* Use *parentDistinguishedName* with expression mapping to dynamically create users in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment).
+* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
+* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
+
+### Deployment topology 5: Single app to provision all users from Cloud HR to multiple on-premises Active Directory domains (with cross-domain visibility)
+
+Use this topology if you want to use a single provisioning app to manage users belonging to all your child AD domains. This topology is recommended if provisioning rules are consistent across all domains and there is no requirement for delegated administration of provisioning jobs. This topology supports resolving cross-domain manager references and can perform forest-wide uniqueness checks.
++
+**Salient configuration aspects**
+* Set up two provisioning agent nodes for high availability and failover.
+* Configure [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent.
+* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register the parent AD domain and all child AD domains with your Azure AD tenant.
+* When configuring the provisioning app, select the parent AD domain from the dropdown of available AD domains.
+* Use *parentDistinguishedName* with expression mapping to dynamically create users in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment).
+* If you are using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
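The *parentDistinguishedName* expression mapping above can be modeled as a simple lookup from a user attribute to the target container. A hypothetical sketch, where the region attribute, domain names, and OU paths are all invented for illustration:

```python
# Hypothetical mapping from a user's region to the parent DN
# (child domain + OU container) that the account should land in.
PARENT_DN_BY_REGION = {
    "EMEA": "OU=Users,DC=emea,DC=contoso,DC=com",
    "AMER": "OU=Users,DC=amer,DC=contoso,DC=com",
}

DEFAULT_PARENT_DN = "OU=Users,DC=contoso,DC=com"

def parent_dn_for(region: str) -> str:
    """Return the parent DN for a user's region, falling back to the
    parent domain's default container when the region is unmapped."""
    return PARENT_DN_BY_REGION.get(region, DEFAULT_PARENT_DN)

if __name__ == "__main__":
    print(parent_dn_for("EMEA"))
    print(parent_dn_for("APAC"))  # unmapped region -> default container
```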
+
+### Deployment topology 6: Separate apps to provision distinct users from Cloud HR to disconnected on-premises Active Directory forests
+
+Use this topology if your IT infrastructure has disconnected/disjoint AD forests and you need to provision users to different forests based on business affiliation. For example: Users working for subsidiary *Contoso* need to be provisioned into the *contoso.com* domain, while users working for subsidiary *Fabrikam* need to be provisioned into the *fabrikam.com* domain.
++
+**Salient configuration aspects**
+* Set up two different sets of provisioning agents for high availability and failover, one for each forest.
+* When configuring each provisioning app, select the appropriate parent AD domain from the dropdown of available AD domain names.
+* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
++
## Plan scoping filters and attribute mapping
When you enable provisioning from the cloud HR app to Active Directory or Azure AD, the Azure portal controls the attribute values through attribute mapping.
When you initiate the Joiners process, you might need to generate unique attribu
The Azure AD function [SelectUniqueValues](../app-provisioning/functions-for-customizing-application-data.md#selectuniquevalue) evaluates each rule and then checks the value generated for uniqueness in the target system. For an example, see [Generate unique value for the userPrincipalName (UPN) attribute](../app-provisioning/functions-for-customizing-application-data.md#generate-unique-value-for-userprincipalname-upn-attribute). > [!NOTE]
-> This function is currently only supported for Workday to Active Directory user provisioning. It can't be used with other provisioning apps.
+> This function is currently only supported for Workday to Active Directory and SAP SuccessFactors to Active Directory user provisioning. It can't be used with other provisioning apps.
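The behavior of `SelectUniqueValue` described above can be modeled as: evaluate the candidate rules in order and return the first generated value that does not already exist in the target system. This is a simplified sketch of that idea, not the function's actual implementation:

```python
def select_unique_value(candidate_rules, exists):
    """Return the first candidate value not already present in the
    target system; rules are evaluated in the order provided."""
    for make_value in candidate_rules:
        value = make_value()
        if not exists(value):
            return value
    raise ValueError("No rule produced a unique value")

if __name__ == "__main__":
    taken = {"john.smith@contoso.com"}
    rules = [
        lambda: "john.smith@contoso.com",   # preferred rule collides
        lambda: "john.smith1@contoso.com",  # fallback rule succeeds
    ]
    print(select_unique_value(rules, taken.__contains__))
```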
### Configure Active Directory OU container assignment
With this expression, if the Municipality value is Dallas, Austin, Seattle, or L
When you initiate the Joiners process, you need to set and deliver a temporary password of new user accounts. With cloud HR to Azure AD user provisioning, you can roll out the Azure AD [self-service password reset](../authentication/tutorial-enable-sspr.md) (SSPR) capability for the user on day one.
-SSPR is a simple means for IT administrators to enable users to reset their passwords or unlock their accounts. You can provision the **Mobile Number** attribute from the cloud HR app to Active Directory and sync it with Azure AD. After the **Mobile Number** attribute is in Azure AD, you can enable SSPR for the user's account. Then on day one, the new user can use the registered and verified mobile number for authentication.
+SSPR is a simple means for IT administrators to enable users to reset their passwords or unlock their accounts. You can provision the **Mobile Number** attribute from the cloud HR app to Active Directory and sync it with Azure AD. After the **Mobile Number** attribute is in Azure AD, you can enable SSPR for the user's account. Then on day one, the new user can use the registered and verified mobile number for authentication. Refer to the [SSPR documentation](../authentication/howto-sspr-authenticationdata.md) for details on how to pre-populate authentication contact information.
## Plan for initial cycle
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
+
+ Title: Azure AD ECMA Connector Host Generic SQL Connector tutorial
+description: This tutorial describes how to use the on-premises application provisioning generic SQL connector.
+ Last updated : 03/17/2021
+# Azure AD ECMA Connector Host Generic SQL Connector tutorial
+
+>[!IMPORTANT]
+> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+
+This tutorial describes the steps you need to perform to automatically provision and deprovision users from Azure AD into a SQL DB. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+This tutorial covers how to set up and use the generic SQL connector with the Azure AD ECMA Connector Host. Your test environment should mirror the environment presented below before you attempt this tutorial.
+
+![Architecture](.\media\tutorial-ecma-sql-connector\sql-1.png)
+
+- This tutorial uses two virtual machines. One is the domain controller (DC1.contoso.com), and the second is an application server (APP1.contoso.com).
+- SQL Server 2019 and SQL Server Management Studio are installed on APP1.
+- Both VMs have connectivity to the internet.
+- SQL Server Agent has been started.
+- You have an Azure AD tenant to test with. This tutorial uses ecmabmcontoso.onmicrosoft.com. Substitute your own tenant for this one.
+- You have 3 or 4 users created in your tenant for testing.
+
+For additional information on setting up this environment, see [Tutorial: Basic Active Directory environment](../../active-directory/cloud-sync/tutorial-basic-ad-azure.md).
+
+## Step 1 - Prepare the sample database
+On a server running SQL Server, run the SQL script found in [Appendix A](#appendix-a). This script creates a sample database with the name CONTOSO. This is the database that we will be provisioning users into.
++
+## Step 2 - Create the DSN connection file
+The generic SQL connector uses a DSN file to connect to the SQL server. First, we need to create a file with the ODBC connection information.
+
+1. Start the ODBC management utility on your server:
+ ![ODBC management](./media/tutorial-ecma-sql-connector/odbc.png)
+2. Select the tab **File DSN**. Click **Add...**.
+ ![Add file dsn](./media/tutorial-ecma-sql-connector/dsn-2.png)
+3. Select SQL Server Native Client 11.0 and click **Next**.
+ ![Choose native client](./media/tutorial-ecma-sql-connector/dsn-3.png)
+4. Give the file a name, such as **GenericSQL** and click **Next**.
+ ![Name the connector](./media/tutorial-ecma-sql-connector/dsn-4.png)
+5. Click **Finish**.
+ ![Finish](./media/tutorial-ecma-sql-connector/dsn-5.png)
+6. Now configure the connection. Enter **APP1** for the name of the server and click **Next**.
+ ![Enter server name](./media/tutorial-ecma-sql-connector/dsn-6.png)
+7. Keep Windows Authentication and click **Next**.
+ ![Windows authentication](./media/tutorial-ecma-sql-connector/dsn-7.png)
+8. Provide the name of the sample database, **CONTOSO**.
+ ![Enter database name](./media/tutorial-ecma-sql-connector/dsn-8.png)
+9. Keep everything default on this screen. Click **Finish**.
+ ![Click finish](./media/tutorial-ecma-sql-connector/dsn-9.png)
+10. To verify everything is working as expected, click **Test Data Source**.
+ ![Test data source](./media/tutorial-ecma-sql-connector/dsn-10.png)
+11. Make sure the test is successful.
+ ![Success](./media/tutorial-ecma-sql-connector/dsn-11.png)
+12. Click **OK**. Click **OK**. Close ODBC Data Source Administrator.
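A File DSN is just a small text file of ODBC keywords, so it can also be generated programmatically. This sketch writes a DSN with the values chosen in the wizard above (APP1, CONTOSO, Windows authentication); the exact set of keywords the wizard emits may differ:

```python
def write_file_dsn(path: str, server: str, database: str,
                   driver: str = "SQL Server Native Client 11.0") -> str:
    """Write a minimal ODBC File DSN mirroring the wizard's choices."""
    lines = [
        "[ODBC]",
        f"DRIVER={driver}",
        f"SERVER={server}",
        f"DATABASE={database}",
        "Trusted_Connection=Yes",  # Windows authentication
    ]
    with open(path, "w", encoding="ascii") as f:
        f.write("\n".join(lines) + "\n")
    return path

if __name__ == "__main__":
    dsn = write_file_dsn("GenericSQL.dsn", server="APP1", database="CONTOSO")
    with open(dsn) as f:
        print(f.read())
```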
+
+## Step 3 - Download and install the Azure AD Connect Provisioning Agent Package
+
+ 1. Sign in to the server you'll use with enterprise admin permissions.
+ 2. Sign in to the Azure portal, and then go to **Azure Active Directory**.
+ 3. In the left menu, select **Azure AD Connect**.
+ 4. Select **Manage cloud sync** > **Review all agents**.
+ 5. Download the Azure AD Connect provisioning agent package from the Azure portal.
 6. Accept the terms and click **Download**.
+ 7. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
+ 8. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms and select **Install**.
+ ![Microsoft Azure AD Connect Provisioning Agent Package screen](media/on-premises-ecma-install/install-1.png)</br>
+ 9. After this operation finishes, the configuration wizard starts. Click **Next**.
+ ![Welcome screen](media/on-premises-ecma-install/install-2.png)</br>
+ 10. On the **Select Extension** screen, select **On-premises application provisioning (Azure AD to application)** and click **Next**.
+ ![Select extension](media/on-premises-ecma-install/install-3.png)</br>
 11. Use your global administrator account and sign in to Azure AD.
 ![Azure signin](media/on-premises-ecma-install/install-4.png)</br>
 12. On the **Agent Configuration** screen, click **Confirm**.
 ![Confirm installation](media/on-premises-ecma-install/install-5.png)</br>
 13. Once the installation is complete, you should see a message at the bottom of the wizard. Click **Finish**.
 ![Finish button](media/on-premises-ecma-install/install-6.png)</br>
 14. Click **Close**.
+
+## Step 4 - Configure the Azure AD ECMA Connector Host
+1. On the desktop, click the ECMA shortcut.
+2. Once the ECMA Connector Host Configuration starts, leave the default port 8585 and click **Generate** to generate a certificate. The auto-generated certificate is self-signed, part of the trusted root, and its SAN matches the hostname.
+ ![Configure your settings](.\media\on-premises-ecma-configure\configure-1.png)
+3. Click **Save**.
+
+## Step 5 - Create a generic SQL connector
+ 1. Click on the ECMA Connector Host shortcut on the desktop.
+ 2. Select **New Connector**.
+ ![Choose new connector](.\media\on-premises-sql-connector-configure\sql-1.png)
+
+ 3. On the **Properties** page, fill in the boxes with the values specified in the table below and click **Next**.
+ ![Enter properties](.\media\tutorial-ecma-sql-connector\conn-1.png)
+
+ |Property|Value|
+ |--|--|
+ |Name|SQL|
+ |Autosync timer (minutes)|120|
+ |Secret Token|Enter your own key here. It should be 12 characters minimum.|
 |Extension DLL|For a generic SQL connector, select **Microsoft.IAM.Connector.GenericSql.dll**.|
+ 4. On the **Connectivity** page, fill in the boxes with the values specified in the table below and click **Next**.
+ ![Enter connectivity](.\media\tutorial-ecma-sql-connector\conn-2.png)
+
+ |Property|Value|
+ |--|--|
+ |DSN File|Navigate to the file created at the beginning of the tutorial in Step 2.|
+ |User Name|contoso\administrator|
 |Password|The administrator's password.|
+ 5. On the **Schema 1** page, fill in the boxes with the values specified in the table below and click **Next**.
+ ![Enter schema 1](.\media\tutorial-ecma-sql-connector\conn-3.png)
+
+ |Property|Value|
+ |--|--|
+ |Object type detection method|Fixed Value|
+ |Fixed value list/Table/View/SP|User|
 6. On the **Schema 2** page, fill in the boxes with the values specified in the table below and click **Next**.
+ ![Enter schema 2](.\media\tutorial-ecma-sql-connector\conn-4.png)
+
+ |Property|Value|
+ |--|--|
+ |User:Attribute Detection|Table|
+ |User:Table/View/SP|Employees|
+ 7. On the **Schema 3** page, fill in the boxes with the values specified in the table below and click **Next**.
+ ![Enter schema 3](.\media\tutorial-ecma-sql-connector\conn-5.png)
+
 |Property|Value|
+ |--|--|
+ |Select Anchor for :User|User:ContosoLogin|
+ |Select DN attribute for User|AzureID|
+ 8. On the **Schema 4** page, leave the defaults and click **Next**.
+ ![Enter schema 4](.\media\tutorial-ecma-sql-connector\conn-6.png)
 9. On the **Global** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter global information](.\media\tutorial-ecma-sql-connector\conn-7.png)
+
 |Property|Value|
+ |--|--|
+ |Data Source Date Time Format|yyyy-MM-dd HH:mm:ss|
+ 10. On the **Select partition** page, click **Next**.
+ ![Enter partition information](.\media\tutorial-ecma-sql-connector\conn-8.png)
+
+ 11. On the **Run Profiles** page, keep **Export** and add **Full Import**. Click **Next**.
+ ![Enter run profiles](.\media\tutorial-ecma-sql-connector\conn-9.png)
+
 12. On the **Export** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter Export information](.\media\tutorial-ecma-sql-connector\conn-10.png)
+
 |Property|Value|
+ |--|--|
+ |Operation Method|Table|
+ |Table/View/SP|Employees|
+
 13. On the **Full Import** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+ ![Enter Full import information](.\media\tutorial-ecma-sql-connector\conn-11.png)
+
 |Property|Value|
+ |--|--|
+ |Operation Method|Table|
+ |Table/View/SP|Employees|
+
 14. On the **Object Types** page, fill in the boxes and click **Next**. Use the table below the image for guidance on the individual boxes.
+
+ **Anchor** - this attribute should be unique in the target system. The Azure AD provisioning service will query the ECMA host using this attribute after the initial cycle. This anchor value should be the same as the anchor value in schema 3.
+
+ **Query attribute** - used by the ECMA host to query the in-memory cache. This attribute should be unique.
+
+ **DN** - The autogenerate option should be selected in most cases. If deselected, ensure that the DN attribute is mapped to an attribute in Azure AD that stores the DN in this format: CN = anchorValue, Object = objectType
+
+ ![Enter object types](.\media\tutorial-ecma-sql-connector\conn-12.png)
+
 |Property|Value|
+ |--|--|
+ |Target Object|User|
+ |Anchor|ContosoLogin|
+ |Query attribute|AzureID|
+ |DN|AzureID|
+ |Autogenerated|Checked|
+
+
 15. On the **Select Attributes** page, add all of the attributes in the drop-down and click **Next**.
+ ![Enter attributes](.\media\tutorial-ecma-sql-connector\conn-13.png)
+
 The set attribute dropdown will show any attribute that has been discovered in the target system and has **not been** chosen on the previous **Select Attributes** page.
 16. On the **Deprovisioning** page, under **Disable flow**, select **Delete**. Click **Finish**.
+ ![Enter deprovisioning information](.\media\tutorial-ecma-sql-connector\conn-14.png)
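The DN shape described on the **Object Types** page above ("CN = anchorValue, Object = objectType") can be illustrated with a tiny helper. The function name `build_dn` is ours, used only to show what the value looks like for a given anchor; it is not part of any Microsoft API:

```python
def build_dn(anchor_value, object_type="User"):
    """Build a DN in the documented shape: CN = anchorValue, Object = objectType."""
    return f"CN = {anchor_value}, Object = {object_type}"

# For a user whose anchor (ContosoLogin) is alice@contoso.com:
print(build_dn("alice@contoso.com"))  # CN = alice@contoso.com, Object = User
```

If **Autogenerated** is checked, the ECMA host produces this value for you, so the mapped Azure AD attribute only needs to hold the anchor itself.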
+
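The **Data Source Date Time Format** entered on the **Global** page, `yyyy-MM-dd HH:mm:ss`, is a .NET-style format string. As an illustration of what a matching timestamp looks like, here is the equivalent pattern expressed in Python's `strftime` notation (a stand-in for the .NET pattern, not part of the connector itself):

```python
from datetime import datetime

# yyyy-MM-dd HH:mm:ss in .NET terms corresponds to %Y-%m-%d %H:%M:%S in strftime terms.
stamp = datetime(2020, 1, 6, 19, 18, 19).strftime("%Y-%m-%d %H:%M:%S")
print(stamp)  # 2020-01-06 19:18:19
```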
+## Step 6 - Ensure ECMA2Host service is running
+1. On the server running the Azure AD ECMA Connector Host, click **Start**.
+2. Type **run**, and then enter **services.msc** in the box.
+3. In the list of services, ensure that **Microsoft ECMA2Host** is present and running. If not, click **Start**.
+ ![Service is running](.\media\on-premises-ecma-configure\configure-2.png)
+
+## Step 7 - Add Enterprise application
+1. Sign in to the Azure portal as an application administrator.
+2. In the portal, navigate to **Azure Active Directory** > **Enterprise Applications**.
+3. Click on **New Application**.
+ ![Add new application](.\media\on-premises-ecma-configure\configure-4.png)
+4. Search the gallery for the test application **on-premises provisioning** and click **Create**.
+ ![Create new application](.\media\tutorial-ecma-sql-connector\app-1.png)
+
+## Step 8 - Configure the application and test
+1. Once the application has been created, click the **Provisioning** page.
+2. Click **get started**.
+ ![get started](.\media\on-premises-ecma-configure\configure-6.png)
+3. On the **Provisioning** page, change the mode to **Automatic**.
+ ![Mode to automatic](.\media\on-premises-ecma-configure\configure-7.png)
+4. In the on-premises connectivity section, select the agent that you just deployed and click **assign agent(s)**.
+ >[!NOTE]
+ >After adding the agent, you need to wait 10 minutes for the registration to complete. The connectivity test will not work until the registration completes.
+ >
 >Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server: on the server, search for **services** in the Windows search bar, locate the **Azure AD Connect Provisioning Agent Service**, then right-click the service and select **Restart**.
+
+ ![Restart an agent](.\media\on-premises-ecma-configure\configure-8.png)
+5. After 10 minutes, under the **Admin credentials** section, enter the following URL, replacing the "connectorName" portion with the name of the connector on the ECMA Host.
+
+ |Property|Value|
+ |--|--|
+ |Tenant URL|https://localhost:8585/ecma2host_SQL/scim|
+
+6. Enter the secret token value that you defined when creating the connector.
+7. Click **Test Connection**, and wait one minute.
+ ![Assign an agent](.\media\on-premises-ecma-configure\configure-5.png)
+8. Once the connection test is successful, click **Save**.</br>
+ ![Test an agent](.\media\on-premises-ecma-configure\configure-9.png)
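The tenant URL above follows a fixed pattern: the ECMA Connector Host listens on the port chosen in Step 4 (8585 by default) and exposes each connector at a path derived from the connector's name. A small sketch of that pattern (the helper name is ours, not part of any Microsoft API):

```python
def ecma_tenant_url(connector_name, host="localhost", port=8585):
    """Build the ECMA Connector Host tenant URL for a named connector."""
    return f"https://{host}:{port}/ecma2host_{connector_name}/scim"

# The connector in this tutorial was named "SQL" in Step 5.
print(ecma_tenant_url("SQL"))  # https://localhost:8585/ecma2host_SQL/scim
```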
+
+## Step 9 - Assign users to application
+Now that the Azure AD ECMA Connector Host is communicating with Azure AD, you can move on to configuring who is in scope for provisioning.
+
+1. In the Azure portal, select **Enterprise Applications**.
+2. Click the **on-premises provisioning** application.
+3. On the left, under **Manage**, click **Users and groups**.
+4. Click **Add user/group**.
+ ![Add user](.\media\tutorial-ecma-sql-connector\app-2.png)
+5. Under **Users**, click **None selected**.
+ ![None selected](.\media\tutorial-ecma-sql-connector\app-3.png)
+6. Select users from the right and click **Select**.</br>
+ ![Select users](.\media\tutorial-ecma-sql-connector\app-4.png)
+7. Now click **Assign**.
+ ![Assign users](.\media\tutorial-ecma-sql-connector\app-5.png)
+
+## Step 10 - Configure attribute mappings
+Now we need to map attributes between the on-premises application and our SQL server.
+
+#### Configure attribute mapping
 1. In the Azure AD portal, under **Enterprise applications**, click the **Provisioning** page.
 2. Click **get started**.
 3. Expand **Mappings** and click **Provision Azure Active Directory Users**.
 ![provision a user](.\media\on-premises-ecma-configure\configure-10.png)
 4. Click **Add new mapping**.
 ![Add a mapping](.\media\on-premises-ecma-configure\configure-11.png)
 5. Specify the source and target attributes, and add all of the mappings in the table below.

 |Mapping Type|Source attribute|Target attribute|
 |--|--|--|
 |Direct|userPrincipalName|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:ContosoLogin|
 |Direct|objectID|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:AzureID|
 |Direct|mail|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:Email|
 |Direct|givenName|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:FirstName|
 |Direct|surName|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:LastName|
 |Direct|mailNickname|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:textID|

 6. Click **Save**.
 ![Save the mapping](.\media\tutorial-ecma-sql-connector\app-6.png)
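Conceptually, each direct mapping in the table copies one Azure AD attribute into one target attribute on the ECMA host's SCIM extension schema. The sketch below hard-codes the mappings from the table to show that transformation; it is illustrative only — the real work is done by the Azure AD provisioning service, not by code you write:

```python
SCIM_EXT = "urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User"

# Source (Azure AD) attribute -> target attribute, from the mapping table above.
DIRECT_MAPPINGS = {
    "userPrincipalName": f"{SCIM_EXT}:ContosoLogin",
    "objectID": f"{SCIM_EXT}:AzureID",
    "mail": f"{SCIM_EXT}:Email",
    "givenName": f"{SCIM_EXT}:FirstName",
    "surName": f"{SCIM_EXT}:LastName",
    "mailNickname": f"{SCIM_EXT}:textID",
}

def apply_mappings(azure_ad_user):
    """Copy each mapped source attribute to its target attribute name."""
    return {target: azure_ad_user[source]
            for source, target in DIRECT_MAPPINGS.items()
            if source in azure_ad_user}

user = {"userPrincipalName": "alice@contoso.com", "givenName": "Alice", "surName": "Smith"}
mapped = apply_mappings(user)
print(mapped[f"{SCIM_EXT}:FirstName"])  # Alice
```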
+
+## Step 11 - Test provisioning
+Now that our attributes are mapped, we can test on-demand provisioning with one of our users.
+
 1. In the Azure portal, select **Enterprise Applications**.
 2. Click the **on-premises provisioning** application.
 3. On the left, click **Provisioning**.
 4. Click **Provision on-demand**.
 5. Search for one of your test users and click **Provision**.
+ ![Test provisioning](.\media\on-premises-ecma-configure\configure-13.png)
+
+## Step 12 - Start provisioning users
 1. Once on-demand provisioning is successful, change back to the provisioning configuration page. Ensure that the scope is set to only assigned users and groups, turn provisioning **On**, and click **Save**.
+ ![Start provisioning](.\media\on-premises-ecma-configure\configure-14.png)
+ 2. Wait several minutes for provisioning to start (it may take up to 40 minutes). You can learn more about the provisioning service performance here. After the provisioning job has been completed, as described in the next section, you can change the provisioning status to Off, and click Save. This will stop the provisioning service from running in the future.
+
+## Step 13 - Verify users have been successfully provisioned
+After waiting, check the SQL database to ensure users are being provisioned.
+ ![Verify users are provisioned](.\media\on-premises-ecma-configure\configure-15.png)
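If you prefer a query to the screenshot above, the check amounts to selecting the newly written rows from the Employees table. The runnable sketch below uses SQLite as a stand-in for SQL Server (types simplified, one fake provisioned row inserted by hand); on the real server you would run the equivalent `SELECT` in SQL Server Management Studio:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Employees (
        ContosoLogin TEXT,
        FirstName    TEXT NOT NULL,
        LastName     TEXT NOT NULL,
        Email        TEXT,
        AzureID      TEXT,
        textID       TEXT
    )
""")
# A row shaped like the ones the provisioning service writes on export.
conn.execute(
    "INSERT INTO Employees (ContosoLogin, FirstName, LastName, Email) "
    "VALUES (?, ?, ?, ?)",
    ("alice@contoso.com", "Alice", "Smith", "alice@contoso.com"),
)
rows = conn.execute(
    "SELECT ContosoLogin, FirstName, LastName FROM Employees"
).fetchall()
print(rows)
```

An empty result set here (on the real CONTOSO database) means provisioning has not run yet; check the provisioning logs in the portal.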
+
+## Appendix A
+**SQL script to create the sample database**
+
+```SQL
+-- Create the database
+Create Database CONTOSO
+Go
+-- Use the database
+Use [CONTOSO]
+Go
+
+/****** Object: Table [dbo].[Employees] Script Date: 1/6/2020 7:18:19 PM ******/
+SET ANSI_NULLS ON
+GO
+
+SET QUOTED_IDENTIFIER ON
+GO
+
+CREATE TABLE [dbo].[Employees](
+ [ContosoLogin] [nvarchar](128) NULL,
+ [FirstName] [nvarchar](50) NOT NULL,
+ [LastName] [nvarchar](50) NOT NULL,
+ [Email] [nvarchar](128) NULL,
+ [InternalGUID] [uniqueidentifier] NULL,
+ [AzureID] [uniqueidentifier] NULL,
+ [textID] [nvarchar](128) NULL
+) ON [PRIMARY]
+GO
+
+ALTER TABLE [dbo].[Employees] ADD CONSTRAINT [DF_Employees_InternalGUID] DEFAULT (newid()) FOR [InternalGUID]
+GO
+
+```
++++
+## Next Steps
+
+- [App provisioning](user-provisioning.md)
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 05/11/2021 Last updated : 05/28/2021
-# What is automated SaaS app user provisioning in Azure Active Directory?
+# What is app provisioning in Azure Active Directory?
-In Azure Active Directory (Azure AD), the term **app provisioning** refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), and more.
+In Azure Active Directory (Azure AD), the term **app provisioning** refers to automatically creating user identities and roles for applications.
+
+![architecture](./media/user-provisioning/arch-1.png)
-Just getting started with app management and single sign-on (SSO) in Azure AD? Check out the [Quickstart Series](../manage-apps/view-applications-portal.md).
+In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
-To learn more about SCIM and join the Tech Community conversation, see [Provisioning with SCIM Tech Community](https://aka.ms/scimoverview).
+Azure AD to SaaS application provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to.
-![Provisioning overview diagram](./media/user-provisioning/provisioning-overview.png)
+Azure AD supports provisioning users into SaaS applications as well as applications hosted on-premises or in an IaaS solution such as a virtual machine. You may have a legacy application that relies on an LDAP user store or a SQL database. The Azure AD provisioning service allows you to create, update, and delete users in on-premises applications without having to open firewalls or deal with TCP ports.
-This feature lets you:
+Using lightweight agents, you can provision users into on-premises applications and govern access. When used in conjunction with the application proxy, Azure AD allows you to manage access to your on-premises applications, providing automatic user provisioning (with the provisioning service) as well as single sign-on (with app proxy).
+
+App provisioning lets you:
- **Automate provisioning**: Automatically create new accounts in the right systems for new people when they join your team or organization.
- **Automate deprovisioning:** Automatically deactivate accounts in the right systems when people leave the team or organization.
- **Use rich customization:** Take advantage of customizable attribute mappings that define what user data should flow from the source system to the target system.
- **Get alerts for critical events:** The provisioning service provides alerts for critical events, and allows for Log Analytics integration where you can define custom alerts to suit your business needs.
+## What is System for Cross-domain Identity Management (SCIM)?
+
+To help automate provisioning and deprovisioning, apps expose proprietary user and group APIs. However, anyone who's tried to manage users in more than one app will tell you that every app tries to perform the same simple actions, such as creating or updating users, adding users to groups, or deprovisioning users. Yet, all these simple actions are implemented just a little bit differently, using different endpoint paths, different methods to specify user information, and a different schema to represent each element of information.
+
+To address these challenges, the SCIM specification provides a common user schema to help users move into, out of, and around apps. SCIM is becoming the de facto standard for provisioning and, when used in conjunction with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
+
+For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md). For pre-integrated applications in the gallery (Slack, Azure Databricks, Snowflake, etc.), you can skip the developer documentation and use the tutorials provided [here](../../active-directory/saas-apps/tutorial-list.md).
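As a concrete illustration of the common schema SCIM provides, a minimal SCIM 2.0 user resource (per RFC 7643's core User schema) looks like the payload below; any SCIM-compliant app can accept a resource of this shape regardless of its internal user model. The user values are made-up sample data:

```python
import json

# Minimal SCIM 2.0 "User" resource using the core schema URN from RFC 7643.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "active": True,
}
print(json.dumps(scim_user, indent=2))
```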
+
+## Manual vs. automatic provisioning
+
+Applications in the Azure AD gallery support one of two provisioning modes:
+
+* **Manual** provisioning means there is no automatic Azure AD provisioning connector for the app yet. User accounts must be created manually, for example by adding users directly into the app's administrative portal, or uploading a spreadsheet with user account detail. Consult the documentation provided by the app, or contact the app developer to determine what mechanisms are available.
+
+* **Automatic** means that an Azure AD provisioning connector has been developed for this application. You should follow the setup tutorial specific to setting up provisioning for the application. App tutorials can be found at [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](../../active-directory/saas-apps/tutorial-list.md).
+
+The provisioning mode supported by an application is also visible on the **Provisioning** tab once you've added the application to your **Enterprise apps**.
## Benefits of automatic provisioning

As the number of applications used in modern organizations continues to grow, IT admins are tasked with access management at scale. Standards such as Security Assertions Markup Language (SAML) or Open ID Connect (OIDC) allow admins to quickly set up single sign-on (SSO), but access also requires users to be provisioned into the app. To many admins, provisioning means manually creating every user account or uploading CSV files each week, but these processes are time-consuming, expensive, and error-prone. Solutions such as SAML just-in-time (JIT) have been adopted to automate provisioning, but enterprises also need a solution to deprovision users when they leave the organization or no longer require access to certain apps based on role change.
Azure AD features pre-integrated support for many popular SaaS apps and human re
* **Applications that support SCIM 2.0**. For information on how to generically connect applications that implement SCIM 2.0-based user management APIs, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
-## What is System for Cross-domain Identity Management (SCIM)?
-
-To help automate provisioning and deprovisioning, apps expose proprietary user and group APIs. However, anyone whoΓÇÖs tried to manage users in more than one app will tell you that every app tries to perform the same simple actions, such as creating or updating users, adding users to groups, or deprovisioning users. Yet, all these simple actions are implemented just a little bit differently, using different endpoint paths, different methods to specify user information, and a different schema to represent each element of information.
-
-To address these challenges, the SCIM specification provides a common user schema to help users move into, out of, and around apps. SCIM is becoming the de facto standard for provisioning and, when used in conjunction with federation standards like SAML or OpenID Connect, provides administrators an end-to-end standards-based solution for access management.
-
-For detailed guidance on developing a SCIM endpoint to automate the provisioning and deprovisioning of users and groups to an application, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md). For pre-integrated applications in the gallery (Slack, Azure Databricks, Snowflake, etc.), you can skip the developer documentation and use the tutorials provided [here](../saas-apps/tutorial-list.md).
-
-## Manual vs. automatic provisioning
-
-Applications in the Azure AD gallery support one of two provisioning modes:
-
-* **Manual** provisioning means there is no automatic Azure AD provisioning connector for the app yet. User accounts must be created manually, for example by adding users directly into the app's administrative portal, or uploading a spreadsheet with user account detail. Consult the documentation provided by the app, or contact the app developer to determine what mechanisms are available.
-
-* **Automatic** means that an Azure AD provisioning connector has been developed for this application. You should follow the setup tutorial specific to setting up provisioning for the application. App tutorials can be found at [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](../saas-apps/tutorial-list.md).
-
-In the Azure AD gallery, applications that support automatic provisioning are designated by a **Provisioning** icon. Switch to the new gallery preview experience to see these icons (in the banner at the top of the **Add an application page**, select the link that says **Click here to try out the new and improved app gallery**).
-
-![Provisioning icon in the application gallery](./media/user-provisioning/browse-gallery.png)
-
-The provisioning mode supported by an application is also visible on the **Provisioning** tab once you've added the application to your **Enterprise apps**.
## How do I set up automatic provisioning to an application?

For pre-integrated applications listed in the gallery, step-by-step guidance is available for setting up automatic provisioning. See the [list of tutorials for integrated gallery apps](../saas-apps/tutorial-list.md). The following video demonstrates how to set up automatic user provisioning for SalesForce.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/whats-new-docs.md
Title: "What's new in Azure Active Directory application provisioning" description: "New and updated documentation for the Azure Active Directory application provisioning." Previously updated : 05/04/2021 Last updated : 06/02/2021
Welcome to what's new in Azure Active Directory application provisioning documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the provisioning service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## May 2021
+
+### Updated articles
+
+- [Tutorial: Develop a sample SCIM endpoint in Azure Active Directory](use-scim-to-build-users-and-groups-endpoints.md)
+- [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md)
+- [Syncing extension attributes for Azure Active Directory Application Provisioning](user-provisioning-sync-attributes-for-mapping.md)
+- [What is automated SaaS app user provisioning in Azure Active Directory?](user-provisioning.md)
+- [Workday attribute reference for Azure Active Directory](workday-attribute-reference.md)
+- [How Azure Active Directory provisioning integrates with Workday](workday-integration-reference.md)
+- [Enable automatic user provisioning for your multi-tenant application in Azure Active Directory](isv-automatic-provisioning-multi-tenant-apps.md)
+- [Known issues for Application Provisioning in Azure Active Directory](known-issues.md)
+- [Plan an automatic user provisioning deployment in Azure Active Directory](plan-auto-user-provisioning.md)
+- [Plan cloud HR application to Azure Active Directory user provisioning](plan-cloud-hr-provision.md)
+- [On-demand provisioning in Azure Active Directory](provision-on-demand.md)
+- [Azure Active Directory Connect Provisioning Agent: Version release history](provisioning-agent-release-version-history.md)
+- [SAP SuccessFactors attribute reference for Azure Active Directory](sap-successfactors-attribute-reference.md)
+- [How Azure Active Directory provisioning integrates with SAP SuccessFactors](sap-successfactors-integration-reference.md)
+- [Using SCIM and Microsoft Graph together to provision users and enrich your application with the data it needs](scim-graph-scenarios.md)
+- [Skip deletion of user accounts that go out of scope in Azure Active Directory](skip-out-of-scope-deletions.md)
+- [No users are being provisioned](application-provisioning-config-problem-no-users-provisioned.md)
+- [Known issues and resolutions with SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md)
+- [Problem configuring user provisioning to an Azure AD Gallery application](application-provisioning-config-problem.md)
+- [Understand how provisioning integrates with Azure Monitor logs](application-provisioning-log-analytics.md)
+- [Application provisioning in quarantine status](application-provisioning-quarantine-status.md)
+- [Check the status of user provisioning](application-provisioning-when-will-provisioning-finish-specific-user.md)
+- [Tutorial: Reporting on automatic user account provisioning](check-status-user-account-provisioning.md)
+- [Managing user account provisioning for enterprise apps in the Azure portal](configure-automatic-user-provisioning-portal.md)
+- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)
+- [Attribute-based application provisioning with scoping filters](define-conditional-rules-for-provisioning-user-accounts.md)
+- [How-to: Export provisioning configuration and roll back to a known good state](export-import-provisioning-configuration.md)
+- [Reference for writing expressions for attribute mappings in Azure Active Directory](functions-for-customizing-application-data.md)
+- [How Application Provisioning works in Azure Active Directory](how-provisioning-works.md)
++
## April 2021

### Updated articles
Welcome to what's new in Azure Active Directory application provisioning documen
- [Managing user account provisioning for enterprise apps in the Azure portal](configure-automatic-user-provisioning-portal.md) - [Reference for writing expressions for attribute mappings in Azure AD](functions-for-customizing-application-data.md) - [Tutorial: Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)--
-## February 2021
-
-### Updated articles
-- [How Azure Active Directory provisioning integrates with Workday](workday-integration-reference.md)
-- [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md)
-- [What is automated SaaS app user provisioning in Azure AD?](user-provisioning.md)
-- [Tutorial: Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)
-- [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
-- [How provisioning works](how-provisioning-works.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/whats-new-docs.md
Title: "What's new in Azure Active Directory application proxy" description: "New and updated documentation for the Azure Active Directory application proxy." Previously updated : 04/27/2021 Last updated : 06/02/2021
Welcome to what's new in Azure Active Directory application proxy documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## May 2021
+
+### Updated articles
+
+- [Secure access to on-premises APIs with Azure Active Directory Application Proxy](application-proxy-secure-api-access.md)
+- [Integrate Azure Active Directory Application Proxy with SharePoint (SAML)](application-proxy-integrate-with-sharepoint-server-saml.md)
+- [Enable remote access to SharePoint with Azure Active Directory Application Proxy](application-proxy-integrate-with-sharepoint-server.md)
++ ## April 2021 Application proxy content has moved out of the [application management content set](/azure/active-directory/manage-apps/) and into its own content set.
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
A security key **MUST** implement the following features and extensions from the
The following providers offer FIDO2 security keys of different form factors that are known to be compatible with the passwordless experience. We encourage you to evaluate the security properties of these keys by contacting the vendor as well as FIDO Alliance.
-| Provider | Contact |
-| | |
-| Yubico | [https://www.yubico.com/solutions/passwordless/](https://www.yubico.com/solutions/passwordless/) |
-| Feitian | [https://ftsafe.us/pages/microsoft](https://ftsafe.us/pages/microsoft) |
-| HID | [https://www.hidglobal.com/contact-us](https://www.hidglobal.com/contact-us) |
-| Ensurity | [https://www.ensurity.com/contact](https://www.ensurity.com/contact) |
-| TrustKey Solutions | [https://www.trustkeysolutions.com/security-keys/](https://www.trustkeysolutions.com/security-keys/) |
-| AuthenTrend | [https://authentrend.com/about-us/#pg-35-3](https://authentrend.com/about-us/#pg-35-3) |
-| Gemalto (Thales Group) | [https://safenet.gemalto.com/multi-factor-authentication/authenticators/passwordless-authentication/](https://safenet.gemalto.com/multi-factor-authentication/authenticators/passwordless-authentication/) |
-| OneSpan Inc. | [https://www.onespan.com/products/fido](https://www.onespan.com/products/fido) |
-| IDmelon Technologies Inc. | [https://www.idmelon.com/#idmelon](https://www.idmelon.com/#idmelon) |
-| Hypersecu | [https://www.hypersecu.com/hyperfido](https://www.hypersecu.com/hyperfido) |
-| VinCSS | [https://passwordless.vincss.net](https://passwordless.vincss.net) |
-| KONA I | [https://konai.com/business/security/fido](https://konai.com/business/security/fido) |
-| Excelsecu | [https://www.excelsecu.com/productdetail/esecufido2secu.html](https://www.excelsecu.com/productdetail/esecufido2secu.html) |
-| Token2 Switzerland | [https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key](https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key) |
-| GoTrustID Inc. | [https://www.gotrustid.com/idem-key](https://www.gotrustid.com/idem-key) |
-| Kensington | [https://www.kensington.com/solutions/product-category/why-biometrics/](https://www.kensington.com/solutions/product-category/why-biometrics/) |
-| Nymi | [https://www.nymi.com/product](https://www.nymi.com/product) |
+| Provider | Biometric | USB | NFC | BLE | FIPS Certified | Contact |
+|--|:--:|:--:|:--:|:--:|:--:|--|
+| AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 |
+| Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact |
+| Excelsecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
+| Feitian | ![y] | ![y]| ![y]| ![y]| ![n] | https://ftsafe.us/pages/microsoft |
+| Gemalto (Thales Group) | ![n] | ![y]| ![y]| ![n]| ![n] | https://safenet.gemalto.com/access-management/authenticators/fido-devices |
+| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key |
+| HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us |
+| Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido |
+| IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon |
+| Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ |
+| KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
+| Nymi | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.nymi.com/product |
+| OneSpan Inc. | ![y] | ![n]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
+| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
+| TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ |
+| VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
+| Yubico | ![n] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ |
++
+<!--Image references-->
+[y]: ./media/fido2-compatibility/yes.png
+[n]: ./media/fido2-compatibility/no.png
> [!NOTE] > If you purchase and plan to use NFC-based security keys, you need a supported NFC reader for the security key. The NFC reader isn't an Azure requirement or limitation. Check with the vendor for your NFC-based security key for a list of supported NFC readers.
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-licensing.md
Previously updated : 06/15/2020 Last updated : 06/02/2021
Azure AD Multi-Factor Authentication can be used, and licensed, in a few differe
| If you're a user of | Capabilities and use cases | | | |
-| Microsoft 365 Business Premium and EMS or Microsoft 365 E3 and E5 | EMS E3, Microsoft 365 E3, and Microsoft 365 Business Premium includes Azure AD Premium P1. EMS E5 or Microsoft 365 E5 includes Azure AD Premium P2. You can use the same Conditional Access features noted in the following sections to provide multi-factor authentication to users. |
-| Azure AD Premium P1 | You can use [Azure AD Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements. |
-| Azure AD Premium P2 | Provides the strongest security position and improved user experience. Adds [risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md) to the Azure AD Premium P1 features that adapts to user's patterns and minimizes multi-factor authentication prompts. |
-| All Microsoft 365 plans | Azure AD Multi-Factor Authentication can be [enabled on a per-user basis](howto-mfa-userstates.md), or enabled or disabled for all users using [security defaults](../fundamentals/concept-fundamentals-security-defaults.md). Management of Azure AD Multi-Factor Authentication is through the Microsoft 365 portal. For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see [secure Microsoft 365 resources with multi-factor authentication](/microsoft-365/admin/security-and-compliance/set-up-multi-factor-authentication). |
-| Azure AD free | You can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to enable multi-factor authentication for all users but you cannot enable Multi-Factor Authentication on per-user basis. You don't have granular control of enabled users or scenarios, but it does provide that additional security step.<br /> Even when security defaults aren't used to enable multi-factor authentication for everyone, users assigned the *Azure AD Global Administrator* role can be configured to use multi-factor authentication. This feature of the free tier makes sure the critical administrator accounts are protected by multi-factor authentication. |
+| [Microsoft 365 Business Premium](https://www.microsoft.com/microsoft-365/business) and [EMS](https://www.microsoft.com/security/business/enterprise-mobility-security) or [Microsoft 365 E3 and E5](https://www.microsoft.com/microsoft-365/enterprise/compare-office-365-plans) | EMS E3, Microsoft 365 E3, and Microsoft 365 Business Premium include Azure AD Premium P1. EMS E5 or Microsoft 365 E5 includes Azure AD Premium P2. You can use the same Conditional Access features noted in the following sections to provide multi-factor authentication to users. |
+| [Azure AD Premium P1](../fundamentals/active-directory-get-started-premium.md) | You can use [Azure AD Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements. |
+| [Azure AD Premium P2](../fundamentals/active-directory-get-started-premium.md) | Provides the strongest security posture and improved user experience. Adds [risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md) to the Azure AD Premium P1 features that adapts to users' patterns and minimizes multi-factor authentication prompts. |
+| [All Microsoft 365 plans](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans) | Azure AD Multi-Factor Authentication can be enabled for all users using [security defaults](../fundamentals/concept-fundamentals-security-defaults.md). Management of Azure AD Multi-Factor Authentication is through the Microsoft 365 portal. For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see [secure Microsoft 365 resources with multi-factor authentication](/microsoft-365/admin/security-and-compliance/set-up-multi-factor-authentication). MFA can also be [enabled on a per-user basis](howto-mfa-userstates.md). |
+| [Azure AD free](../verifiable-credentials/how-to-create-a-free-developer-account.md) | You can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to enable multi-factor authentication for all users, but you cannot enable Multi-Factor Authentication on a per-user basis. You don't have granular control of enabled users or scenarios, but it does provide that additional security step.<br /> Even when security defaults aren't used to enable multi-factor authentication for everyone, users assigned the *Azure AD Global Administrator* role can be configured to use multi-factor authentication. This feature of the free tier makes sure the critical administrator accounts are protected by multi-factor authentication. |
## Feature comparison of versions
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
# Combined security information registration for Azure Active Directory overview
-Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for Multi-Factor Authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Multi-Factor Authentication and SSPR.
+Before combined registration, users registered authentication methods for Azure AD Multi-Factor Authentication and self-service password reset (SSPR) separately. People were confused that similar methods were used for Multi-Factor Authentication and SSPR but they had to register for both features. Now, with combined registration, users can register once and get the benefits of both Multi-Factor Authentication and SSPR. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ).
> [!NOTE] > Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
active-directory Concept Sspr Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-licensing.md
Previously updated : 03/08/2021 Last updated : 06/03/2021
This article details the different ways that self-service password reset can be
## Compare editions and features
-SSPR requires a license only for the tenant.
- The following table outlines the different SSPR scenarios for password change, reset, or on-premises writeback, and which SKUs provide the feature. | Feature | Azure AD Free | Microsoft 365 Business Standard | Microsoft 365 Business Premium | Azure AD Premium P1 or P2 |
The following table outlines the different SSPR scenarios for password change, r
For additional licensing information, including costs, see the following pages: +
+* [Microsoft 365 licensing guidance for security & compliance](https://docs.microsoft.com/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance)
* [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/) * [Azure Active Directory features and capabilities](https://www.microsoft.com/cloud-platform/azure-active-directory-features) * [Enterprise Mobility + Security](https://www.microsoft.com/cloud-platform/enterprise-mobility-security)
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Previously updated : 05/04/2021 Last updated : 06/03/2021
This document focuses on enabling security key based passwordless authentication
To use security keys for logging in to web apps and services, you must have a browser that supports the WebAuthN protocol. These include Microsoft Edge, Chrome, Firefox, and Safari. - ## Prepare devices For Azure AD joined devices the best experience is on Windows 10 version 1903 or higher.
There are some optional settings for managing security keys per tenant.
- **Enforce key restrictions** should be set to **Yes** only if your organization wants to only allow or disallow certain FIDO security keys, which are identified by their AAGuids. You can work with your security key provider to determine the AAGuids of their devices. If the key is already registered, AAGUID can also be found by viewing the authentication method details of the key per user. +
+## Disable a key
+
+To remove a FIDO2 key associated with a user account, delete the key from the user's authentication methods.
+
+1. Sign in to the Azure AD portal and search for the user account from which the FIDO key is to be removed.
+1. Select **Authentication methods** > right-click **FIDO2 security key** and click **Delete**.
+
+ ![View Authentication Method details](media/howto-authentication-passwordless-deployment/security-key-view-details.png)
+
+## Security key Authenticator Attestation GUID (AAGUID)
+
+The FIDO2 specification requires each security key provider to provide an Authenticator Attestation GUID (AAGUID) during attestation. An AAGUID is a 128-bit identifier indicating the key type, such as the make and model.
+
+>[!NOTE]
+>The manufacturer must ensure that the AAGUID is identical across all substantially identical keys made by that manufacturer, and different (with high probability) from the AAGUIDs of all other types of keys. To ensure this, the AAGUID for a given type of security key should be randomly generated. For more information, see [Web Authentication: An API for accessing Public Key Credentials - Level 2 (w3.org)](https://w3c.github.io/webauthn/).
+
+There are two ways to get your AAGUID. You can either ask your security key provider or view the authentication method details of the key per user.
+
+![View AAGUID for security key](media/howto-authentication-passwordless-deployment/security-key-aaguid-details.png)
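+As an aside, an AAGUID is conventionally displayed in canonical UUID form. The following Python sketch (illustration only, not part of the product) shows how a raw 128-bit AAGUID maps to that string form:

```python
import uuid

def format_aaguid(raw: bytes) -> str:
    """Render a raw 128-bit AAGUID in the canonical UUID string form."""
    if len(raw) != 16:
        raise ValueError("AAGUID must be exactly 16 bytes (128 bits)")
    # uuid.UUID(bytes=...) interprets the bytes big-endian, the usual display order.
    return str(uuid.UUID(bytes=raw))

print(format_aaguid(bytes(range(16))))  # 00010203-0405-0607-0809-0a0b0c0d0e0f
```

Comparing this rendered value against an allow or deny list is how key restrictions by AAGUID work in practice.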
+ ## User registration and management of FIDO2 security keys 1. Browse to [https://myprofile.microsoft.com](https://myprofile.microsoft.com).
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
# Tutorial: Enable Azure Active Directory self-service password reset writeback to an on-premises environment
-With Azure Active Directory (Azure AD) self-service password reset (SSPR), users can update their password or unlock their account using a web browser. In a hybrid environment where Azure AD is connected to an on-premises Active Directory Domain Services (AD DS) environment, this scenario can cause passwords to be different between the two directories.
+With Azure Active Directory (Azure AD) self-service password reset (SSPR), users can update their password or unlock their account using a web browser. In a hybrid environment where Azure AD is connected to an on-premises Active Directory Domain Services (AD DS) environment, this scenario can cause passwords to be different between the two directories. We recommend this video on [How to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ).
Password writeback can be used to synchronize password changes in Azure AD back to your on-premises AD DS environment. Azure AD Connect provides a secure mechanism to send these password changes back to an existing on-premises directory from Azure AD.
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
Previously updated : 05/19/2021 Last updated : 06/03/2021
There are multiple scenarios that organizations can now enable using filters for
Filters for devices are an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API. > [!IMPORTANT]
-> Device state and filters for devices cannot be used together in Conditional Access policy. Filters for devices provides more granular targeting including support for targeting device state information through the `trustType` and `isCompliant` property.
+> Device state and filters for devices cannot be used together in Conditional Access policy.
The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios).
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
From this page, you can optionally limit the users and groups that will be subje
> [!WARNING] > To disable continuous access evaluation, select **Enable preview**, then **Disable preview**, and select **Save**.
+> [!NOTE]
+>You can query the Microsoft Graph via [**continuousAccessEvaluationPolicy**](/graph/api/continuousaccessevaluationpolicy-get?view=graph-rest-beta&tabs=http#request-body) to verify the configuration of CAE in your tenant. An HTTP 200 response and associated response body indicate whether CAE is enabled or disabled in your tenant. CAE is not configured if Microsoft Graph returns an HTTP 404 response.
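+The interpretation described in the note can be sketched as a small helper. This is an illustration only; the `isEnabled` property name is an assumption based on the beta resource shape, so verify it against the Graph reference:

```python
from typing import Optional

# Beta endpoint named in the note above; call it with a bearer token in practice.
GRAPH_CAE_URL = "https://graph.microsoft.com/beta/identity/continuousAccessEvaluationPolicy"

def interpret_cae_response(status_code: int, body: Optional[dict]) -> str:
    """Map a GET response for the CAE policy resource to a configuration state."""
    if status_code == 404:
        # Per the note: 404 means CAE is not configured in the tenant.
        return "not configured"
    if status_code == 200 and body is not None:
        # 'isEnabled' is an assumed property name; check the beta schema.
        return "enabled" if body.get("isEnabled") else "disabled"
    return "unknown"

print(interpret_cae_response(404, None))                 # not configured
print(interpret_cae_response(200, {"isEnabled": True}))  # enabled
```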
+ ![Enabling the CAE preview in the Azure portal](./media/concept-continuous-access-evaluation/enable-cae-preview.png) ## Troubleshooting
Sign-in Frequency will be honored with or without CAE.
## Next steps
-[Announcing continuous access evaluation](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/moving-towards-real-time-policy-and-security-enforcement/ba-p/1276933)
+- [Announcing continuous access evaluation](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/moving-towards-real-time-policy-and-security-enforcement/ba-p/1276933)
+- [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md)
+- [Claims challenges, claims requests, and client capabilities](../develop/claims-challenge.md)
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an Azure AD feature that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation-preview) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens will be proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
-This article shows you how to use CAE-enabled APIs in your applications.
+This article shows you how to use CAE-enabled APIs in your applications. Applications not using MSAL can add support for [claims challenges, claims requests, and client capabilities](claims-challenge.md) to use CAE.
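+For illustration, a CAE claims challenge typically reaches a non-MSAL application as a `claims` parameter in the `WWW-Authenticate` header of a 401 response. A minimal sketch of extracting and decoding it (the sample header shape is assumed, not taken from the article):

```python
import base64
import json
import re
from typing import Optional

def extract_claims_challenge(www_authenticate: str) -> Optional[dict]:
    """Decode the base64-encoded claims challenge from a 401 WWW-Authenticate header."""
    match = re.search(r'claims="([^"]+)"', www_authenticate)
    if match is None:
        return None  # not a claims challenge
    value = match.group(1)
    value += "=" * (-len(value) % 4)  # restore any stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(value))

# Hypothetical header for illustration:
header = ('Bearer authorization_uri="https://login.microsoftonline.com/common/oauth2/authorize", '
          'error="insufficient_claims", '
          'claims="eyJhY2Nlc3NfdG9rZW4iOnsibmJmIjp7ImVzc2VudGlhbCI6dHJ1ZX19fQ=="')
print(extract_claims_challenge(header))  # {'access_token': {'nbf': {'essential': True}}}
```

The decoded JSON would then be passed back to the token request so the identity provider can issue a fresh token.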
## Implementation considerations
You can test your application by signing in a user to the application then using
## Next steps
-To learn more, see [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md).
+- [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) conceptual overview
+- [Claims challenges, claims requests, and client capabilities](claims-challenge.md)
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 04/16/2021 Last updated : 06/03/2021
There are certain sets of claims that define how and when they're used in tokens
| Claim set | Description | ||| | Core claim set | Are present in every token regardless of the policy. These claims are also considered restricted, and can't be modified. |
-| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can omit or modify basic claims by using the claims mapping policies. |
+| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can [omit or modify basic claims](active-directory-claims-mapping.md#omit-the-basic-claims-from-tokens) by using the claims mapping policies. |
| Restricted claim set | Can't be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. | ### Table 1: JSON Web Token (JWT) restricted claim set
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
We also have a [tutorial for Blazor Server](tutorial-blazor-server.md).
Every app that uses Azure Active Directory (Azure AD) for authentication must be registered with Azure AD. Follow the instructions in [Register an application](quickstart-register-app.md) with these specifications: - For **Supported account types**, select **Accounts in this organizational directory only**.-- Leave the **Redirect URI** drop down set to **Web** and enter `https://localhost:5001/authentication/login-callback`. The default port for an app running on Kestrel is 5001. If the app is available on a different port, specify that port number instead of `5001`.
+- Set the **Redirect URI** drop down to **Single-page application (SPA)** and enter `https://localhost:5001/authentication/login-callback`. The default port for an app running on Kestrel is 5001. If the app is available on a different port, specify that port number instead of `5001`.
Once registered, under **Manage**, select **Authentication** > **Implicit grant and hybrid flows**. Select **Access tokens** and **ID tokens**, and then select **Save**.
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-overview.md
Learn how core authentication and Azure AD concepts apply to the Microsoft ident
## Next steps
-If you have an Azure account you already have access to an Azure Active Directory tenant, but most the Microsoft identity platform developers need their own Azure AD tenant for use while developing applications, a "dev tenant."
+If you have an Azure account, you already have access to an Azure Active Directory tenant, but most Microsoft identity platform developers need their own Azure AD tenant for use while developing applications, a "dev tenant."
Learn how to create your own tenant for use while building your applications:
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/manage-stale-devices.md
Previously updated : 04/30/2021 Last updated : 06/02/2021
Get-AzureADDevice -All:$true | Where {$_.ApproximateLastLogonTimeStamp -le $dt}
Using the same commands we can pipe the output to the set command to disable the devices over a certain age. ```powershell
-$dt = [datetime]'2017/01/01'
-Get-AzureADDevice -All:$true | Where {$_.ApproximateLastLogonTimeStamp -le $dt} | Set-AzureADDevice
+$dt = (Get-Date).AddDays(-90)
+Get-AzureADDevice -All:$true | Where {$_.ApproximateLastLogonTimeStamp -le $dt} | Set-AzureADDevice -AccountEnabled $false
``` ## What you should know
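The cutoff-and-filter logic in the PowerShell above can be sketched in Python over illustrative device records (the field name mirrors the cmdlet output; this is not an Azure AD API call):

```python
from datetime import datetime, timedelta

def find_stale_devices(devices, days=90, now=None):
    """Return devices whose last logon is at or before the cutoff date."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)  # same as (Get-Date).AddDays(-90)
    return [d for d in devices if d["ApproximateLastLogonTimeStamp"] <= cutoff]

# Hypothetical records for illustration:
devices = [
    {"DisplayName": "LAPTOP-01", "ApproximateLastLogonTimeStamp": datetime(2021, 1, 15)},
    {"DisplayName": "LAPTOP-02", "ApproximateLastLogonTimeStamp": datetime(2021, 5, 30)},
]
stale = find_stale_devices(devices, days=90, now=datetime(2021, 6, 3))
print([d["DisplayName"] for d in stale])  # ['LAPTOP-01']
```

Each returned device would then be disabled, as `Set-AzureADDevice -AccountEnabled $false` does in the pipeline above.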
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Here are the settings defined in the Group.Unified SettingsTemplate. Unless othe
| <ul><li>DefaultClassification<li>Type: String<li>Default: "" | The classification that is to be used as the default classification for a group if none was specified.<br>This setting does not apply when EnableMIPLabels == True.| | <ul><li>PrefixSuffixNamingRequirement<li>Type: String<li>Default: "" | String of a maximum length of 64 characters that defines the naming convention configured for Microsoft 365 groups. For more information, see [Enforce a naming policy for Microsoft 365 groups](groups-naming-policy.md). | | <ul><li>CustomBlockedWordsList<li>Type: String<li>Default: "" | Comma-separated string of phrases that users will not be permitted to use in group names or aliases. For more information, see [Enforce a naming policy for Microsoft 365 groups](groups-naming-policy.md). |
-| <ul><li>EnableMSStandardBlockedWords<li>Type: Boolean<li>Default: "False" | Do not use
+| <ul><li>EnableMSStandardBlockedWords<li>Type: Boolean<li>Default: "False" | Deprecated. Do not use.
| <ul><li>AllowGuestsToBeGroupOwner<li>Type: Boolean<li>Default: False | Boolean indicating whether or not a guest user can be an owner of groups. | | <ul><li>AllowGuestsToAccessGroups<li>Type: Boolean<li>Default: True | Boolean indicating whether or not a guest user can have access to Microsoft 365 groups content. This setting does not require an Azure Active Directory Premium P1 license.| | <ul><li>GuestUsageGuidelinesUrl<li>Type: String<li>Default: "" | The url of a link to the guest usage guidelines. |
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information is accurate as of June 2021.
+>This information was last updated on June 3, 2021.
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | | | | | | |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| DYNAMICS 365 FOR SALES AND CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR SALES ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 
2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR TEAM MEMBERS ENTERPRISE EDITION | DYN365_ENTERPRISE_TEAM_MEMBERS | 8e7a3d30-d97d-43ab-837c-d7701cef83dc | DYN365_Enterprise_Talent_Attract_TeamMember (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_Enterprise_Talent_Onboard_TeamMember (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYN365_ENTERPRISE_TEAM_MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>Dynamics_365_for_Retail_Team_members (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>Dynamics_365_for_Talent_Team_members (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TEAM MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
-| DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>Dynamics 365 for Talent - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) |
| ENTERPRISE MOBILITY + SECURITY E3 | EMS | efccb6f7-5641-4e0e-bd10-b4976e1bf68e | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) |
| ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>AZURE ADVANCED THREAT PROTECTION (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
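Besides the portal, the GUIDs in these tables can be used programmatically, for example when mapping the `skuId` values returned by the Microsoft Graph `subscribedSkus` endpoint back to product names. A minimal sketch, assuming you maintain a small lookup table built from rows in this reference (the GUIDs and names below come straight from the tables; the `SKU_REFERENCE` dict and `friendly_name` function are illustrative, not part of any Microsoft SDK):

```python
# Map SKU GUIDs (e.g. the skuId field from Microsoft Graph /subscribedSkus)
# to the string ID and product name listed in this reference. The three
# entries below are copied from the tables in this article.
SKU_REFERENCE = {
    "efccb6f7-5641-4e0e-bd10-b4976e1bf68e": ("EMS", "ENTERPRISE MOBILITY + SECURITY E3"),
    "b05e124f-c7cc-45a0-a6aa-8cf78c946968": ("EMSPREMIUM", "ENTERPRISE MOBILITY + SECURITY E5"),
    "f30db892-07e9-47e9-837c-80727f46fd3d": ("FLOW_FREE", "MICROSOFT FLOW FREE"),
}

def friendly_name(sku_id: str) -> str:
    """Return the product name for a SKU GUID, or the GUID itself if unknown."""
    # GUIDs are compared case-insensitively; Graph may return either casing.
    _, product_name = SKU_REFERENCE.get(sku_id.lower(), (None, None))
    return product_name if product_name else sku_id

print(friendly_name("efccb6f7-5641-4e0e-bd10-b4976e1bf68e"))
```

Unknown GUIDs fall through unchanged, which keeps reports readable even when this lookup table lags behind newly released SKUs.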
| Microsoft 365 F1 | M365_F1 | 44575883-256e-4a79-9da4-ebe9acabe2b2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SharePoint Online Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 F3 | SPE_F1 | 66b55226-6b4f-492c-910c-a3b7a3c9d993 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>WIN10_ENT_LOC_F1 (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Flow for Office 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>PowerApps for Office 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>SharePoint Online Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Windows 10 Enterprise E3 (local only) (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| MICROSOFT FLOW FREE | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) |
-| MICROSOFT 365 GCC G3 | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>CONTENT EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
+| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
+| MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
| MICROSOFT 365 PHONE SYSTEM | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM FOR DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM FOR FACULTY | MCOEV_FACULTY | d979703c-028d-4de5-acbf-7955566b69b9 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| MS IMAGINE ACADEMY | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) |
-| MICROSOFT INTUNE DEVICE for GOVERNMENT | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| MICROSOFT INTUNE DEVICE FOR GOVERNMENT | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| MICROSOFT POWER APPS PLAN 2 TRIAL | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | COMMON DATA SERVICE – VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW P2 VIRAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS TRIAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
| MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| MICROSOFT POWER APPS PLAN 2 TRIAL | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW P2 VIRAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS TRIAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
+| MICROSOFT STREAM | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) |
| MICROSOFT TEAMS (FREE) | TEAMS_FREE | 16ddbbfc-09ea-4de2-b1d7-312db6112d70 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCOFREE (617d9209-3b90-4879-96e6-838c42b2701d)<br/>TEAMS_FREE (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS_FREE_SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MCO FREE FOR MICROSOFT TEAMS (FREE) (617d9209-3b90-4879-96e6-838c42b2701d)<br/>MICROSOFT TEAMS (FREE) (4fa4026d-ce74-4962-a151-8e96d57ea8e4)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>TEAMS FREE SERVICE (bd6f2ac2-991a-49f9-b23c-18c96a02c228)<br/>WHITEBOARD (FIRSTLINE) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
-| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT ONLINE (PLAN 1) (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Office 365 A5 for faculty| ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A5 for students | ENTERPRISEPREMIUM_STUDENT | ee656612-49fa-43e5-b67e-cb1fdf7699df | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE
(8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations 
(46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | 
Office 365 Advanced Compliance | EQUIVIO_ANALYTICS | 1b1b1f7a-8355-43b6-829f-336cfccb744c | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| OFFICE 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_EXCHANGE (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT 
(65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE RIGHTS MANAGEMENT (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>COMMON DATA SERVICE - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>COMMON DATA SERVICE FOR TEAMS_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>CUSTOMER LOCKBOX (9f431833-0334-42de-a7dc-70aa40db46db)<br/>DATA CLASSIFICATION IN MICROSOFT 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH CONNECTORS SEARCH WITH INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM 
(d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – PREMIUM (efb0351d-3b08-4503-993d-383af8de41e3)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 COMMUNICATION COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MICROSOFT 365 ADVANCED AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT 365 APPS FOR ENTERPRISE (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT 365 AUDIO CONFERENCING (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT 365 DEFENDER (bf28f719-7844-4079-9c78-c1307898e192)<br/>MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT COMMUNICATIONS DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>MICROSOFT CUSTOMER KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>MICROSOFT DATA INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>MICROSOFT DEFENDER FOR OFFICE 365 (PLAN 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>MICROSOFT DEFENDER FOR OFFICE 365 (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFT EXCEL ADVANCED ANALYTICS (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>MICROSOFT INFORMATION GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>MICROSOFT KAIZALA (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>MICROSOFT MYANALYTICS (FULL) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT RECORDS MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>MICROSOFT TEAMS 
(57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>OFFICE 365 ADVANCED SECURITY MANAGEMENT (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>OFFICE 365 PRIVILEGED ACCESS MANAGEMENT (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWER AUTOMATE FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS FOR OFFICE 365 PLAN 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM ENCRYPTION IN OFFICE 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT FOR OFFICE (PLAN E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>RETIRED - MICROSOFT COMMUNICATIONS COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD (PLAN 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 E5 WITHOUT AUDIO CONFERENCING | ENTERPRISEPREMIUM_NOPSTNCONF | 26d45bd9-adf1-46cd-a9e1-51e9a5524128 | ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE 
(9f431833-0334-42de-a7dc-70aa40db46db)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | OFFICE 365 CLOUD APP SECURITY (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 
E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>OFFICE 365 ADVANCED THREAT PROTECTION (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 F3 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_FIRSTLINE_1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>COMMON DATA SERVICE FOR TEAMS_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE ONLINE KIOSK (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>FLOW 
FOR OFFICE 365 K1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>MICROSOFT AZURE RIGHTS MANAGEMENT SERVICE (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>MICROSOFT FORMS (PLAN F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>MICROSOFT KAIZALA PRO PLAN 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>POWERAPPS FOR OFFICE 365 K1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>PROJECT FOR OFFICE (PLAN F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINT KIOSK (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (FIRSTLINE) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| OFFICE 365 GCC G3 | ENTERPRISEPACK_GOV | 535a3a29-c5f0-42fe-8215-d3b9e1f38c4a | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>CONTENT EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE 365 PLANNER FOR GOVERNMENT 
(5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
+| OFFICE 365 G3 GCC | ENTERPRISEPACK_GOV | 535a3a29-c5f0-42fe-8215-d3b9e1f38c4a | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY 
MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
| OFFICE 365 MIDSIZE BUSINESS | MIDSIZEPACK | 04a7fb0d-32e0-4241-b4f5-3f7618cd1162 | EXCHANGE_S_STANDARD_MIDMARKET (fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>MCOSTANDARD_MIDMARKET (b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTENTERPRISE_MIDMARKET (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | EXCHANGE ONLINE PLAN 1 (fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR MIDSIZE (b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINT PLAN 1 (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | OFFICE 365 SMALL BUSINESS | LITEPACK | bd09678e-b83c-4d3f-aaba-3dad4abd128b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1) (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | OFFICE 365 SMALL BUSINESS PREMIUM | LITEPACK_P2 | fc14ec4a-4169-49a4-a51e-2c852931814b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE_PRO_PLUS_SUBSCRIPTION_SMBIZ (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1) (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) 
(70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE 365 SMALL BUSINESS SUBSCRIPTION (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) | | VISIO ONLINE PLAN 1 | VISIOONLINE_PLAN1 | 4b244418-9658-4451-a2b8-b5e2b364e9bd | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO ONLINE PLAN 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO DESKTOP APP (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
-| VISIO PLAN 2 FOR GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR GOVERNMENT (4ae99959-6b0f-43b0-b1ce-68146001bdba)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) |
+| VISIO PLAN 2 FOR GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR GOVERNMENT (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) |
| WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) |
-| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>VIRTUALIZATION RIGHTS FOR WINDOWS 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118) |
-| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)
+| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>VIRTUALIZATION RIGHTS FOR WINDOWS 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) |
+| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118) |
| WINDOWS STORE FOR BUSINESS | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) |

## Service plans that cannot be assigned at the same time
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
There are some cases where the invitation email is recommended over a direct link:
- Sometimes the invited user object may not have an email address because of a conflict with a contact object (for example, an Outlook contact object). In this case, the user must click the redemption URL in the invitation email.
- The user may sign in with an alias of the email address that was invited. (An alias is an additional email address associated with an email account.) In this case, the user must click the redemption URL in the invitation email.
+### Just-in-time redemption limitation with a conflicting Contact object
+Sometimes the invited external guest user's email may conflict with an existing [Contact object](https://docs.microsoft.com/en-us/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from signing in or redeeming an invitation through a direct link using [SAML/WS-Fed IdP](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/direct-federation), [Microsoft Accounts](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/microsoft-account), [Google Federation](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/google-federation), or [Email One-Time Passcode](https://docs.microsoft.com/en-us/azure/active-directory/external-identities/one-time-passcode) accounts.
+
+To unblock users who can't redeem an invitation due to a conflicting [Contact object](https://docs.microsoft.com/en-us/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true), follow these steps:
+1. Delete the conflicting Contact object.
+2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
+3. Re-invite the guest user.
+4. Wait for the user to redeem the invitation.
+5. Add the user's contact email back into Exchange and any distribution lists (DLs) they should be a part of.
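Before re-inviting, it can help to confirm that the guest user object really lacks a proxyAddress for the invited email and that a contact object holds it. A minimal sketch of that check, assuming the user and contact objects have already been fetched (for example, via Microsoft Graph) into plain dictionaries — the helper names and data shapes are illustrative, not part of any API:

```python
def has_proxy_address(user: dict, email: str) -> bool:
    """Check whether a directory object carries a proxyAddress
    (e.g. 'SMTP:jane@contoso.com') matching the invited email."""
    target = email.lower()
    for addr in user.get("proxyAddresses", []):
        # proxyAddresses carry an 'SMTP:'/'smtp:' prefix; compare the address part.
        if addr.split(":", 1)[-1].lower() == target:
            return True
    return False

def find_conflicting_contacts(contacts: list, email: str) -> list:
    """Return contact objects whose email collides with the invited guest's email."""
    return [c for c in contacts if has_proxy_address(c, email)]

# Example: a guest created without a proxyAddress because a contact holds the email.
guest = {"userPrincipalName": "jane_contoso.com#EXT#@fabrikam.onmicrosoft.com",
         "proxyAddresses": []}
contacts = [{"displayName": "Jane (contact)",
             "proxyAddresses": ["SMTP:jane@contoso.com"]}]

missing_proxy = not has_proxy_address(guest, "jane@contoso.com")
conflicts = find_conflicting_contacts(contacts, "jane@contoso.com")
```

If `missing_proxy` is true and `conflicts` is non-empty, the guest is in the state described above and the delete/re-invite steps apply.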
+
## Redemption through the invitation email

When you add a guest user to your directory by [using the Azure portal](./b2b-quickstart-add-guest-users-portal.md), an invitation email is sent to the guest in the process. You can also choose to send invitation emails when you're [using PowerShell](./b2b-quickstart-invite-powershell.md) to add guest users to your directory. Here's a description of the guest's experience when they redeem the link in the email.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities" description: "New and updated documentation for the Azure Active Directory external identities." Previously updated : 05/04/2021 Last updated : 06/02/2021
Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## May 2021
+
+### New articles
+
+- [Azure Active Directory B2B collaboration FAQs](faq.yml)
+
+### Updated articles
+
+- [Azure Active Directory B2B collaboration FAQs](faq.yml)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)
+- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)
+- [Billing model for Azure AD External Identities](external-identities-pricing.md)
+- [Example: Configure SAML/WS-Fed IdP federation with Active Directory Federation Services (AD FS) (preview)](direct-federation-adfs.md)
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
+- [Add Google as an identity provider for B2B guest users](google-federation.md)
+- [Identity Providers for External Identities](identity-providers.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
+- [Azure Active Directory external identities: What's new](whats-new-docs.md)
+- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)
+- [Invite internal users to B2B collaboration](invite-internal-users.md)
++

## April 2021

### Updated articles
active-directory Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/identity-secure-score.md
Previously updated : 03/23/2021 Last updated : 06/02/2021
Each recommendation is measured based on your Azure AD configuration. If you are using a third-party product to address an improvement action, you can indicate this in the action's status.
![Ignore or mark action as covered by third party](./media/identity-secure-score/identity-secure-score-ignore-or-third-party-reccomendations.png)
+- **To address** - You recognize that the improvement action is necessary and plan to address it at some point in the future. This state also applies to actions that are detected as partially, but not fully completed.
+- **Planned** - There are concrete plans in place to complete the improvement action.
+- **Risk accepted** - Security should always be balanced with usability, and not every recommendation will work for your environment. When that is the case, you can choose to accept the risk, or the remaining risk, and not enact the improvement action. You won't be given any points, but the action will no longer be visible in the list of improvement actions. You can view this action in history or undo it at any time.
+- **Resolved through third party** and **Resolved through alternate mitigation** - The improvement action has already been addressed by a third-party application or software, or an internal tool. You'll gain the points that the action is worth, so your score better reflects your overall security posture. If a third party or internal tool no longer covers the control, you can choose another status. Keep in mind, Microsoft will have no visibility into the completeness of implementation if the improvement action is marked as either of these statuses.
+ ## How does it help me? The secure score helps you to:
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
For user accounts that are used as service accounts, apply the following settings:
* **LogonWorkstations**: Restrict permissions where the service account can sign in. If it runs locally on a machine and accesses only resources on that machine, restrict it from signing in anywhere else.
-* [**Cannot change password**](/powershell/module/addsadministration/set-aduser): Prevent the service account from changing its own password by setting the parameter to false.
+* [**Cannot change password**](/powershell/module/activedirectory/set-aduser): Prevent the service account from changing its own password by setting the `-CannotChangePassword` parameter to `$true`.
## Build a lifecycle management process
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Previously updated : 4/30/2021 Last updated : 5/31/2021
The What's new in Azure Active Directory? release notes provide information about the latest releases, known issues, bug fixes, deprecated functionality, and plans for changes.
+## November 2020
+
+### Azure Active Directory TLS 1.0, TLS 1.1, and 3DES deprecation
+
+**Type:** Plan for change
+**Service category:** All Azure AD applications
+**Product capability:** Standards
+
+Azure Active Directory will deprecate the following protocols in Azure Active Directory worldwide regions starting June 30, 2021:
+
+- TLS 1.0
+- TLS 1.1
+- 3DES cipher suite (TLS_RSA_WITH_3DES_EDE_CBC_SHA)
+
+Affected environments are:
+- Azure Commercial Cloud
+- Office 365 GCC and WW
+
+For guidance on removing dependencies on these deprecated protocols, see [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
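One client-side way to check readiness is to pin the minimum TLS version in your own code, so connections to Azure AD can never negotiate TLS 1.0 or 1.1. A minimal sketch using Python's standard `ssl` module (this enforces the floor on the client side only; it does not change what Azure AD offers):

```python
import ssl

# Build a client context that refuses TLS 1.0/1.1.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1

# Exclude the 3DES cipher suite explicitly; modern defaults usually omit it already.
context.set_ciphers("DEFAULT:!3DES")
```

Any connection made through `context` (for example with `http.client.HTTPSConnection(..., context=context)`) will now fail fast against an endpoint that only supports the deprecated protocols.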
+++
+### New Federated Apps available in Azure AD Application gallery - November 2020
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In November 2020, we added the following 52 new applications to our App gallery with Federation support:
+
+[Travel & Expense Management](https://app.expenseonce.com/Account/Login), [Tribeloo](../saas-apps/tribeloo-tutorial.md), [Itslearning File Picker](https://pmteam.itslearning.com/), [Crises Control](../saas-apps/crises-control-tutorial.md), [CourtAlert](https://www.courtalert.com/), [StealthMail](https://stealthmail.com/), [Edmentum - Study Island](https://app.studyisland.com/cfw/login/), [Virtual Risk Manager](../saas-apps/virtual-risk-manager-tutorial.md), [TIMU](../saas-apps/timu-tutorial.md), [Looker Analytics Platform](../saas-apps/looker-analytics-platform-tutorial.md), [Talview - Recruit](https://recruit.talview.com/login), Real Time Translator, [Klaxoon](https://access.klaxoon.com/login), [Podbean](../saas-apps/podbean-tutorial.md), [zcal](https://zcal.co/signup), [expensemanager](https://api.expense-manager.com/), [Netsparker Enterprise](../saas-apps/netsparker-enterprise-tutorial.md), [En-trak Tenant Experience Platform](https://portal.en-trak.app/), [Appian](../saas-apps/appian-tutorial.md), [Panorays](../saas-apps/panorays-tutorial.md), [Builterra](https://portal.builterra.com/), [EVA Check-in](https://my.evacheckin.com/organization), [HowNow WebApp SSO](../saas-apps/hownow-webapp-sso-tutorial.md), [Coupa Risk Assess](../saas-apps/coupa-risk-assess-tutorial.md), [Lucid (All Products)](../saas-apps/lucid-tutorial.md), [GoBright](https://portal.brightbooking.eu/), [SailPoint IdentityNow](../saas-apps/sailpoint-identitynow-tutorial.md),[Resource Central](../saas-apps/resource-central-tutorial.md), [UiPathStudioO365App](https://www.uipath.com/product/platform), [Jedox](../saas-apps/jedox-tutorial.md), [Cequence Application Security](../saas-apps/cequence-application-security-tutorial.md), [PerimeterX](../saas-apps/perimeterx-tutorial.md), [TrendMiner](../saas-apps/trendminer-tutorial.md), [Lexion](../saas-apps/lexion-tutorial.md), [WorkWare](../saas-apps/workware-tutorial.md), [ProdPad](../saas-apps/prodpad-tutorial.md), [AWS 
ClientVPN](../saas-apps/aws-clientvpn-tutorial.md), [AppSec Flow SSO](../saas-apps/appsec-flow-sso-tutorial.md), [Luum](../saas-apps/luum-tutorial.md), [Freight Measure](https://www.gpcsl.com/freight.html), [Terraform Cloud](../saas-apps/terraform-cloud-tutorial.md), [Nature Research](../saas-apps/nature-research-tutorial.md), [Play Digital Signage](https://login.playsignage.com/login), [RemotePC](../saas-apps/remotepc-tutorial.md), [Prolorus](../saas-apps/prolorus-tutorial.md), [Hirebridge ATS](../saas-apps/hirebridge-ats-tutorial.md), [Teamgage](https://www.teamgage.com/Account/ExternalLoginAzure), [Roadmunk](../saas-apps/roadmunk-tutorial.md), [Sunrise Software Relations CRM](https://cloud.relations-crm.com/), [Procaire](../saas-apps/procaire-tutorial.md), [Mentor® by eDriving: Business](https://www.edriving.com/), [Gradle Enterprise](https://gradle.com/)
+
+You can also find the documentation for all these applications here: https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### Public preview - Custom roles for enterprise apps
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+ [Custom RBAC roles for delegated enterprise application management](../roles/custom-available-permissions.md) is now in public preview. These new permissions build on the custom roles for app registration management and allow fine-grained control over what access your admins have. Over time, additional permissions to delegate management of Azure AD will be released.
+
+Some common delegation scenarios:
+- Assigning users and groups that can access SAML-based single sign-on applications
+- Creating Azure AD Gallery applications
+- Updating and reading basic SAML configurations for SAML-based single sign-on applications
+- Managing signing certificates for SAML-based single sign-on applications
+- Updating expiring sign-in certificate notification email addresses for SAML-based single sign-on applications
+- Updating the SAML token signature and sign-in algorithm for SAML-based single sign-on applications
+- Creating, deleting, and updating user attributes and claims for SAML-based single sign-on applications
+- Turning provisioning jobs on and off and restarting them
+- Updating attribute mappings
+- Reading provisioning settings associated with the object
+- Reading provisioning settings associated with your service principal
+- Authorizing application access for provisioning
+++
+### Public preview - Azure AD Application Proxy natively supports single sign-on access to applications that use headers for authentication
+
+**Type:** New feature
+**Service category:** App Proxy
+**Product capability:** Access Control
+
+Azure Active Directory (Azure AD) Application Proxy natively supports single sign-on access to applications that use headers for authentication. You can configure header values required by your application in Azure AD. The header values will be sent down to the application via Application Proxy. To learn more, see [Header-based single sign-on for on-premises apps with Azure AD App Proxy](../manage-apps/application-proxy-configure-single-sign-on-with-headers.md)
+
++
+### General Availability - Azure AD B2C Phone Sign-up and Sign-in using Custom Policy
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+With phone number sign-up and sign-in, developers and enterprises can allow their customers to sign up and sign in using a one-time password sent to the user's phone number via SMS. This feature also lets the customer change their phone number if they lose access to their phone. With the power of custom policies, allow developers and enterprises to communicate their brand through page customization. Find out how to [set up phone sign-up and sign-in with custom policies in Azure AD B2C](../../active-directory-b2c/phone-authentication-user-flows.md).
+
++
+### New provisioning connectors in the Azure AD Application Gallery - November 2020
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Adobe Identity Management](../saas-apps/adobe-identity-management-provisioning-tutorial.md)
+- [Blogin](../saas-apps/blogin-provisioning-tutorial.md)
+- [Clarizen One](../saas-apps/clarizen-one-provisioning-tutorial.md)
+- [Contentful](../saas-apps/contentful-provisioning-tutorial.md)
+- [GitHub AE](../saas-apps/github-ae-provisioning-tutorial.md)
+- [Playvox](../saas-apps/playvox-provisioning-tutorial.md)
+- [PrinterLogic SaaS](../saas-apps/printer-logic-saas-provisioning-tutorial.md)
+- [Tic - Tac Mobile](../saas-apps/tic-tac-mobile-provisioning-tutorial.md)
+- [Visibly](../saas-apps/visibly-provisioning-tutorial.md)
+
+For more information, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+### Public Preview - Email Sign-In with ProxyAddresses now deployable via Staged Rollout
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+Tenant administrators can now use Staged Rollout to deploy Email Sign-In with ProxyAddresses to specific Azure AD groups. This can help while trying out the feature before deploying it to the entire tenant via the Home Realm Discovery policy. Instructions for deploying Email Sign-In with ProxyAddresses via Staged Rollout are in the [documentation](../authentication/howto-authentication-use-email-signin.md).
+
++
+### Limited Preview - Sign-in Diagnostic
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+With the initial preview release of the Sign-in Diagnostic, admins can now review user sign-ins. Admins can receive contextual, specific, and relevant details and guidance on what happened during a sign-in and how to fix problems. The diagnostic is available in both the Azure AD and the Conditional Access Diagnose and Solve blades. The diagnostic scenarios covered in this release are Conditional Access, Multi-Factor Authentication, and successful sign-in.
+
+For more information, see [What is sign-in diagnostic in Azure AD?](../reports-monitoring/overview-sign-in-diagnostics.md).
+
++
+### Improved Unfamiliar Sign-in Properties
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+The unfamiliar sign-in properties detection has been updated. Customers may notice more high-risk unfamiliar sign-in properties detections. For more information, see [What is risk?](../identity-protection/concept-identity-protection-risks.md)
+
++
+### Public Preview refresh of Cloud Provisioning agent now available (Version: 1.1.281.0)
+
+**Type:** Changed feature
+**Service category:** Azure AD Cloud Provisioning
+**Product capability:** Identity Lifecycle Management
+
+The cloud provisioning agent has been released in public preview and is now available through the portal. This release contains several improvements, including support for group managed service accounts (gMSA) for your domains, which provides better security, improved initial sync cycles, and support for large groups. Check out the release version [history](../app-provisioning/provisioning-agent-release-version-history.md) for more details.
+
++
+### BitLocker recovery key API endpoint now under /informationProtection
+
+**Type:** Changed feature
+**Service category:** Device Access Management
+**Product capability:** Device Lifecycle Management
+
+Previously, you could recover BitLocker keys via the /bitlocker endpoint. We'll eventually be deprecating this endpoint, and customers should begin consuming the API that now falls under /informationProtection.
+
+See [BitLocker recovery API](/graph/api/resources/bitlockerrecoverykey?view=graph-rest-beta&preserve-view=true) for updates to the documentation to reflect these changes.
+++
+### General Availability of Application Proxy support for Remote Desktop Services HTML5 Web Client
+
+**Type:** Changed feature
+**Service category:** App Proxy
+**Product capability:** Access Control
+
+Azure AD Application Proxy support for Remote Desktop Services (RDS) Web Client is now in General Availability. The RDS web client allows users to access Remote Desktop infrastructure through any HTML5-capable browser such as Microsoft Edge, Internet Explorer 11, Google Chrome, and so on. Users can interact with remote apps or desktops like they would with a local device from anywhere.
+
+By using Azure AD Application Proxy, you can increase the security of your RDS deployment by enforcing pre-authentication and Conditional Access policies for all types of rich client apps. To learn more, see [Publish Remote Desktop with Azure AD Application Proxy](../manage-apps/application-proxy-integrate-with-remote-desktop-services.md)
+
++
+### New enhanced Dynamic Group service is in Public Preview
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** Collaboration
+
+Enhanced dynamic group service is now in Public Preview. New customers that create dynamic groups in their tenants will use the new service. The time required to create a dynamic group will be proportional to the size of the group being created rather than the size of the tenant, which significantly improves performance for large tenants creating smaller groups.
+
+The new service also aims to complete member additions and removals caused by attribute changes within a few minutes. Also, single processing failures won't block tenant processing. To learn more about creating dynamic groups, see our [documentation](../enterprise-users/groups-create-rule.md).
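Dynamic membership is driven by rules evaluated against user attributes. A toy evaluator for a single-clause rule such as `(user.department -eq "Sales")` illustrates how an attribute change triggers an add or a remove — note this is a simplified stand-in for the real Azure AD rule syntax, which supports many more operators and clauses:

```python
import re

def evaluate_rule(rule: str, user: dict) -> bool:
    """Evaluate a single-clause rule like '(user.department -eq "Sales")'.
    Supports only -eq / -ne; a simplified stand-in for the Azure AD syntax."""
    m = re.match(r'\(user\.(\w+) -(eq|ne) "([^"]*)"\)', rule)
    if not m:
        raise ValueError(f"unsupported rule: {rule}")
    attr, op, value = m.groups()
    actual = user.get(attr)
    return (actual == value) if op == "eq" else (actual != value)

rule = '(user.department -eq "Sales")'
user = {"displayName": "Avery", "department": "Marketing"}

before = evaluate_rule(rule, user)  # not a member yet
user["department"] = "Sales"        # attribute change arrives
after = evaluate_rule(rule, user)   # membership should now be added
```

The service's job is to detect the `before`/`after` transition for every affected user and apply the corresponding group add or remove within a few minutes.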
+
++

## October 2020

### Azure AD On-Premises Hybrid Agents Impacted by Azure TLS Certificate Changes
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
Previously updated : 4/30/2021 Last updated : 5/31/2021
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, see the [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## May 2021
+
+### Public preview - Azure AD verifiable credentials
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** User Authentication
+
+Azure AD customers can now easily design and issue verifiable credentials to represent proof of employment, education, or any other claim while respecting privacy. Digitally validate any piece of information about anyone and any business. [Learn more](https://docs.microsoft.com/azure/active-directory/verifiable-credentials).
+
++
+### Public Preview - build and test expressions for user provisioning
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** Identity Lifecycle Management
+
+The expression builder allows you to create and test expressions, without having to wait for the full sync cycle. [Learn more](../app-provisioning/functions-for-customizing-application-data.md).
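Provisioning attribute mappings are built from expressions such as `Join` and `Mid`; small Python equivalents make the behavior easy to reason about outside a sync cycle. These are approximate, illustrative re-implementations of the documented semantics, not the service's own code:

```python
def join(separator: str, *values: str) -> str:
    """Approximate the provisioning Join(): concatenate non-empty
    values with the separator (empty inputs are skipped)."""
    return separator.join(v for v in values if v)

def mid(source: str, start: int, length: int) -> str:
    """Approximate the provisioning Mid(): 1-based start index,
    clamped to the end of the string."""
    return source[start - 1 : start - 1 + length]

sample = join(".", "John", "", "Smith")   # -> "John.Smith"
domain = mid("jsmith@contoso.com", 8, 7)  # -> "contoso"
```

Testing mappings this way mirrors what the expression builder does interactively: feed in sample attribute values and inspect the output before committing the mapping.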
+++
+### Public preview - enhanced audit logs for Conditional Access policy changes
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+An important aspect of managing Conditional Access is understanding changes to your policies over time. Policy changes may cause disruptions for your end users, so maintaining a log of changes and enabling admins to revert to previous policy versions is critical.
+
+In addition to showing who made a policy change and when, the audit logs will now also contain a modified properties value so that admins have greater visibility into what assignments, conditions, or controls changed. If you want to revert to a previous version of a policy, you can copy the JSON representation of the old version and use the Conditional Access APIs to quickly change the policy back to its previous state. [Learn more](../conditional-access/concept-conditional-access-policies.md).
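The modified properties value is essentially an old/new comparison of the policy's JSON. A small dictionary diff over two policy snapshots shows the kind of comparison involved (the snapshot shape here is illustrative, not the exact audit-log schema):

```python
def diff_policy(old: dict, new: dict) -> dict:
    """Return {property: (old_value, new_value)} for every key that changed."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k))
            for k in keys if old.get(k) != new.get(k)}

old = {"state": "enabled", "grantControls": ["mfa"], "includedUsers": "all"}
new = {"state": "disabled", "grantControls": ["mfa"], "includedUsers": "all"}

changes = diff_policy(old, new)
```

Here `changes` contains only the `state` property with its old and new values, which is exactly the signal an admin needs before deciding whether to revert via the Conditional Access APIs.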
+++
+### Public preview - Sign-in logs include authentication methods used during sign-in
+
+**Type:** New feature
+**Service category:** MFA
+**Product capability:** Monitoring & Reporting
+
+
+Admins can now see the sequential steps users took to sign in, including which authentication methods were used during sign-in.
+
+To access these details, go to the Azure AD sign-in logs, select a sign-in, and then navigate to the Authentication Method Details tab. Here we have included information such as which method was used, details about the method (for example, phone number or phone name), the authentication requirement satisfied, and result details. [Learn more](../reports-monitoring/concept-sign-ins.md).
+++
+### Public preview - PIM adds support for ABAC conditions in Azure Storage roles
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Along with the public preview of attribute-based access control (ABAC) for specific Azure RBAC roles, you can also add ABAC conditions inside Privileged Identity Management for your eligible assignments. [Learn more](../../role-based-access-control/conditions-overview.md#conditions-and-privileged-identity-management-pim).
+++
+### General availability - Conditional Access and Identity Protection Reports in B2C
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+B2C now supports Conditional Access and Identity Protection for business-to-consumer (B2C) apps and users. This enables customers to protect their users with granular risk- and location-based access controls. With these features, customers can now examine the signals and create a policy that provides more security and access to their customers. [Learn more](https://docs.microsoft.com/azure/active-directory-b2c/conditional-access-identity-protection-overview).
+++
+### General availability - KMSI and Password reset now in next generation of user flows
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+The next generation of B2C user flows now supports [keep me signed in (KMSI)](https://docs.microsoft.com/azure/active-directory-b2c/session-behavior?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) and password reset. The KMSI functionality allows customers to extend the session lifetime for the users of their web and native applications by using a persistent cookie. This feature keeps the session active even when the user closes and reopens the browser, and is revoked when the user signs out. Password reset allows users to reset their password from the "Forgot your password?" link. It also allows the admin to force a reset of the user's expired password in the Azure AD B2C directory. [Learn more](https://docs.microsoft.com/azure/active-directory-b2c/add-password-reset-policy?pivots=b2c-user-flow).
+
++
+### General availability - New Log Analytics workbook Application role assignment activity
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+A new workbook has been added for surfacing audit events for application role assignment changes. [Learn more](../governance/entitlement-management-logs-and-reporting.md).
+++
+### General availability - Next generation Azure AD B2C user flows
+
+**Type:** New feature
+**Service category:** B2C - Consumer Identity Management
+**Product capability:** B2B/B2C
+
+The new simplified user flow experience offers feature parity with preview features and is the home for all new features. Users will be able to enable new features within the same user flow, reducing the need to create multiple versions with every new feature release. The new, user-friendly UX also simplifies the selection and creation of user flows. Refer to [Create user flows in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-user-flows?pivots=b2c-user-flow) for guidance on using this feature. [Learn more](../../active-directory-b2c/user-flow-versions.md).
+++
+### General availability - Azure Active Directory threat intelligence for sign-in risk
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+This new detection serves as an ad-hoc method that allows our security teams to notify you and protect your users by raising their session risk to High when we observe an attack in progress, and by marking the associated sign-ins as risky. This detection follows the existing Azure Active Directory threat intelligence for user risk detection to provide complete coverage of the various attacks observed by Microsoft security teams. [Learn more](../identity-protection/concept-identity-protection-risks.md#user-risk).
+
++
+### General availability - Conditional Access named locations improvements
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+IPv6 support in named locations is now generally available. Updates include:
+
+- Added the capability to define IPv6 address ranges
+- Increased limit of named locations from 90 to 195
+- Increased limit of IP ranges per named location from 1200 to 2000
+- Added capabilities to search and sort named locations and filter by location type and trust type
+- Added the named locations a sign-in belonged to in the sign-in logs
+
+Additionally, to prevent admins from defining problematic named locations, additional checks have been added to reduce the chance of misconfiguration. [Learn more](../conditional-access/location-condition.md).
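Matching a sign-in's source address against a named location's ranges is standard CIDR containment, which now works the same way for IPv6. A sketch with Python's standard `ipaddress` module — the location shape is illustrative, not the Conditional Access API schema:

```python
import ipaddress

def in_named_location(ip: str, ranges: list) -> bool:
    """True if the IP (v4 or v6) falls inside any of the location's CIDR ranges.
    A v4 address never matches a v6 network (and vice versa): the `in`
    check simply returns False for mismatched versions."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(r) for r in ranges)

# A named location mixing IPv4 and IPv6 ranges (example addresses).
office = ["203.0.113.0/24", "2001:db8:85a3::/48"]

hit_v6 = in_named_location("2001:db8:85a3::8a2e:370:7334", office)  # inside the /48
miss_v4 = in_named_location("198.51.100.7", office)                 # outside both
```

The same containment test underlies both the location condition at sign-in time and the named-location entry shown in the sign-in logs.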
+++
+### General Availability - Restricted guest access permissions in Azure AD
+
+**Type:** New feature
+**Service category:** User Management
+**Product capability:** Directory
+
+Directory level permissions for guest users have been updated. These permissions allow administrators to require additional restrictions and controls on external guest user access.
+
+Admins can now add additional restrictions for external guests' access to user and groups' profile and membership information. Also, customers can manage external user access at scale by hiding group memberships, including restricting guest users from seeing memberships of the group(s) they are in. To learn more, see [Restrict guest access permissions in Azure Active Directory](../enterprise-users/users-restrict-guest-permissions.md).
+
++
+### New provisioning connectors in the Azure AD Application Gallery - May 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [AuditBoard](../saas-apps/auditboard-provisioning-tutorial.md)
+- [Cisco Umbrella User Management](../saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md)
+- [Insite LMS](../saas-apps/insite-lms-provisioning-tutorial.md)
+- [kpifire](../saas-apps/kpifire-provisioning-tutorial.md)
+- [UNIFI](../saas-apps/unifi-provisioning-tutorial.md)
+
+For more information about how to better secure your organization using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+++
+### New Federated Apps available in Azure AD Application gallery - May 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In May 2021, we have added the following 29 new applications in our App gallery with Federation support:
+
+[InviteDesk](https://app.invitedesk.com/login), [Webrecruit ATS](https://id-test.webrecruit.co.uk/), [Workshop](../saas-apps/workshop-tutorial.md), [Gravity Sketch](https://landingpad.me/), [JustLogin](../saas-apps/justlogin-tutorial.md), [Custellence](https://custellence.com/sso/), [WEVO](https://hello.wevoconversion.com/login), [AppTec360 MDM](https://www.apptec360.com/ms/autopilot.html), [Filemail](https://www.filemail.com/login),[Ardoq](../saas-apps/ardoq-tutorial.md), [Leadfamly](../saas-apps/leadfamly-tutorial.md), [Documo](../saas-apps/documo-tutorial.md), [Autodesk SSO](../saas-apps/autodesk-sso-tutorial.md), [Check Point Harmony Connect](../saas-apps/check-point-harmony-connect-tutorial.md), [BrightHire](https://app.brighthire.ai/), [Rescana](../saas-apps/rescana-tutorial.md), [Bluewhale](https://cloud.bluewhale.dk/), [AlacrityLaw](../saas-apps/alacritylaw-tutorial.md), [Equisolve](../saas-apps/equisolve-tutorial.md), [Zip](../saas-apps/zip-tutorial.md), [Cognician](../saas-apps/cognician-tutorial.md), [Acra](https://www.acrasuite.com/), [VaultMe](https://app.vaultme.com/#/signIn), [TAP App Security](../saas-apps/tap-app-security-tutorial.md), [Cavelo Office365 Cloud Connector](https://dashboard.prod.cavelodata.com/), [Clebex](../saas-apps/clebex-tutorial.md), [Banyan Command Center](../saas-apps/banyan-command-center-tutorial.md), [Check Point Remote Access VPN](../saas-apps/check-point-remote-access-vpn-tutorial.md), [LogMeIn](../saas-apps/logmein-tutorial.md)
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### Improved Conditional Access Messaging for Android and iOS
+
+**Type:** Changed feature
+**Service category:** Device Registration and Management
+**Product capability:** End User Experiences
+
+
+We have updated the wording on the Conditional Access screen shown to users when they are blocked from accessing corporate resources until they enroll their device in Mobile Device Management. These improvements apply to the Android and iOS/iPadOS platforms. The following have been changed:
+
+- “Help us keep your device secure” has changed to “Set up your device to get access”
+- “Your sign-in was successful but your admin requires your device to be managed by Microsoft to access this resource.” to “[Organization’s name] requires you to secure this device before you can access [organization’s name] email, files, and data.”
+- “Enroll Now” to “Continue”
+
+Note that the information in [Enroll your Android enterprise device](https://support.microsoft.com/topic/enroll-your-android-enterprise-device-d661c82d-fa28-5dfd-b711-6dff41ae83bb) is out of date.
+++
+### Azure Information Protection service will begin asking for consent
+
+**Type:** Changed feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+The Azure Information Protection service signs users into the tenant that encrypted the document as part of providing access to the document. Starting in June, Azure AD will begin prompting the user for consent when this access is performed across organizations. This ensures that the user understands that the organization which owns the document will collect some information about the user as part of the document access. [Learn more](https://docs.microsoft.com/azure/information-protection/known-issues#sharing-external-doc-types-across-tenants).
+
++
+### Provisioning logs schema change impacting Graph API and Azure Monitor integration
+
+**Type:** Changed feature
+**Service category:** App Provisioning
+**Product capability:** Monitoring & Reporting
+
+
+The attributes "Action" and "statusInfo" will be changed to "provisioningAction" and "provisioningStatusInfo." Please update any scripts that you have created using the [provisioning logs Graph API](/graph/api/resources/provisioningobjectsummary?view=graph-rest-beta&preserve-view=true) or [Azure Monitor integrations](../app-provisioning/application-provisioning-log-analytics.md).
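Scripts that consume these logs can bridge the rename by mapping the old attribute names to the new ones before processing. A minimal sketch, assuming an illustrative record shape; only the two renamed attribute names come from this announcement:

```python
# Map the old provisioning-log attribute names to the renamed ones.
# Only the two renames are from the announcement; the sample record
# and its remaining fields are illustrative.
RENAMED_ATTRIBUTES = {
    "action": "provisioningAction",
    "statusInfo": "provisioningStatusInfo",
}

def upgrade_record(record: dict) -> dict:
    """Return a copy of a log record that uses the new attribute names."""
    return {RENAMED_ATTRIBUTES.get(key, key): value
            for key, value in record.items()}

old_record = {
    "id": "0a0a0a0a",                      # illustrative value
    "action": "Create",
    "statusInfo": {"status": "success"},
}

new_record = upgrade_record(old_record)
print(new_record["provisioningAction"])  # Create
```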
+
+++
+### New ARM API to manage PIM for Azure Resources and Azure AD roles
+
+**Type:** Changed feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+An updated version of PIM's API for Azure resource roles and Azure AD roles has been released. The PIM API for Azure resource roles is now released under the ARM API standard, which aligns with the role management API for regular Azure role assignment. Similarly, the PIM API for Azure AD roles is released under the Graph API, aligned with the unifiedRoleManagement APIs. Some of the benefits of this change include:
+
+- Alignment of the PIM API with objects in ARM and Graph for role management
+- Reducing the need to call PIM to onboard new Azure resources
+- All Azure resources automatically work with new PIM API.
+- Reducing the need to call PIM for role definition or keeping a PIM resource ID
+- Supporting app-only API permissions in PIM for both Azure AD and Azure Resource roles
+
+The previous version of PIM's API under /privilegedaccess will continue to function, but we recommend that you move to this new API going forward. [Learn more](../privileged-identity-management/pim-apis.md).
+
++
+### Revision of roles in Azure AD entitlement management
+
+**Type:** Changed feature
+**Service category:** Roles
+**Product capability:** Entitlement Management
+
+A new role, Identity Governance Administrator, has recently been introduced. This role will be the replacement for the User Administrator role in managing catalogs and access packages in Azure AD entitlement management. If you have assigned administrators to the User Administrator role or have them activate this role to manage access packages in Azure AD entitlement management, please switch to the Identity Governance Administrator role instead. The User Administrator role will no longer provide administrative rights to catalogs or access packages. [Learn more](../governance/identity-governance-overview.md#appendixleast-privileged-roles-for-managing-in-identity-governance-features).
+++

## April 2021

### Bug fixed - Azure AD will no longer double-encode the state parameter in responses
For more information, go to [Change approval settings for an access package in A
-## November 2020
-
-### Azure Active Directory TLS 1.0, TLS 1.1, and 3DES deprecation
-
-**Type:** Plan for change
-**Service category:** All Azure AD applications
-**Product capability:** Standards
-
-Azure Active Directory will deprecate the following protocols in Azure Active Directory worldwide regions starting June 30, 2021:
-
-- TLS 1.0
-- TLS 1.1
-- 3DES cipher suite (TLS_RSA_WITH_3DES_EDE_CBC_SHA)
-
-Affected environments are:
-- Azure Commercial Cloud
-- Office 365 GCC and WW
-
-For guidance to remove deprecating protocols dependencies, please refer to [Enable support for TLS 1.2 in your environment for Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
---
-### New Federated Apps available in Azure AD Application gallery - November 2020
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In November 2020 we have added following 52 new applications in our App gallery with Federation support:
-
-[Travel & Expense Management](https://app.expenseonce.com/Account/Login), [Tribeloo](../saas-apps/tribeloo-tutorial.md), [Itslearning File Picker](https://pmteam.itslearning.com/), [Crises Control](../saas-apps/crises-control-tutorial.md), [CourtAlert](https://www.courtalert.com/), [StealthMail](https://stealthmail.com/), [Edmentum - Study Island](https://app.studyisland.com/cfw/login/), [Virtual Risk Manager](../saas-apps/virtual-risk-manager-tutorial.md), [TIMU](../saas-apps/timu-tutorial.md), [Looker Analytics Platform](../saas-apps/looker-analytics-platform-tutorial.md), [Talview - Recruit](https://recruit.talview.com/login), Real Time Translator, [Klaxoon](https://access.klaxoon.com/login), [Podbean](../saas-apps/podbean-tutorial.md), [zcal](https://zcal.co/signup), [expensemanager](https://api.expense-manager.com/), [Netsparker Enterprise](../saas-apps/netsparker-enterprise-tutorial.md), [En-trak Tenant Experience Platform](https://portal.en-trak.app/), [Appian](../saas-apps/appian-tutorial.md), [Panorays](../saas-apps/panorays-tutorial.md), [Builterra](https://portal.builterra.com/), [EVA Check-in](https://my.evacheckin.com/organization), [HowNow WebApp SSO](../saas-apps/hownow-webapp-sso-tutorial.md), [Coupa Risk Assess](../saas-apps/coupa-risk-assess-tutorial.md), [Lucid (All Products)](../saas-apps/lucid-tutorial.md), [GoBright](https://portal.brightbooking.eu/), [SailPoint IdentityNow](../saas-apps/sailpoint-identitynow-tutorial.md),[Resource Central](../saas-apps/resource-central-tutorial.md), [UiPathStudioO365App](https://www.uipath.com/product/platform), [Jedox](../saas-apps/jedox-tutorial.md), [Cequence Application Security](../saas-apps/cequence-application-security-tutorial.md), [PerimeterX](../saas-apps/perimeterx-tutorial.md), [TrendMiner](../saas-apps/trendminer-tutorial.md), [Lexion](../saas-apps/lexion-tutorial.md), [WorkWare](../saas-apps/workware-tutorial.md), [ProdPad](../saas-apps/prodpad-tutorial.md), [AWS 
ClientVPN](../saas-apps/aws-clientvpn-tutorial.md), [AppSec Flow SSO](../saas-apps/appsec-flow-sso-tutorial.md), [Luum](../saas-apps/luum-tutorial.md), [Freight Measure](https://www.gpcsl.com/freight.html), [Terraform Cloud](../saas-apps/terraform-cloud-tutorial.md), [Nature Research](../saas-apps/nature-research-tutorial.md), [Play Digital Signage](https://login.playsignage.com/login), [RemotePC](../saas-apps/remotepc-tutorial.md), [Prolorus](../saas-apps/prolorus-tutorial.md), [Hirebridge ATS](../saas-apps/hirebridge-ats-tutorial.md), [Teamgage](https://www.teamgage.com/Account/ExternalLoginAzure), [Roadmunk](../saas-apps/roadmunk-tutorial.md), [Sunrise Software Relations CRM](https://cloud.relations-crm.com/), [Procaire](../saas-apps/procaire-tutorial.md), [Mentor® by eDriving: Business](https://www.edriving.com/), [Gradle Enterprise](https://gradle.com/)
-
-You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial
-
-For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest
---
-### Public preview - Custom roles for enterprise apps
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
- [Custom RBAC roles for delegated enterprise application management](../roles/custom-available-permissions.md) is now in public preview. These new permissions build on the custom roles for app registration management, which allows fine-grained control over what access your admins have. Over time, additional permissions to delegate management of Azure AD will be released.
-
-Some common delegation scenarios:
-- assignment of user and groups that can access SAML based single sign-on applications
-- the creation of Azure AD Gallery applications
-- update and read of basic SAML Configurations for SAML based single sign-on applications
-- management of signing certificates for SAML based single sign-on applications
-- update of expiring sign in certificates notification email addresses for SAML based single sign-on applications
-- update of the SAML token signature and sign-in algorithm for SAML based single sign-on applications
-- create, delete, and update of user attributes and claims for SAML-based single sign-on applications
-- ability to turn on, off, and restart provisioning jobs
-- updates to attribute mapping
-- ability to read provisioning settings associated with the object
-- ability to read provisioning settings associated with your service principal
-- ability to authorize application access for provisioning
-
--
-### Public preview - Azure AD Application Proxy natively supports single sign-on access to applications that use headers for authentication
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-Azure Active Directory (Azure AD) Application Proxy natively supports single sign-on access to applications that use headers for authentication. You can configure header values required by your application in Azure AD. The header values will be sent down to the application via Application Proxy. To learn more, see [Header-based single sign-on for on-premises apps with Azure AD App Proxy](../manage-apps/application-proxy-configure-single-sign-on-with-headers.md)
-
--
-### General Availability - Azure AD B2C Phone Sign-up and Sign-in using Custom Policy
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-With phone number sign-up and sign-in, developers and enterprises can allow their customers to sign up and sign in using a one-time password sent to the user's phone number via SMS. This feature also lets the customer change their phone number if they lose access to their phone. With the power of custom policies, allow developers and enterprises to communicate their brand through page customization. Find out how to [set up phone sign-up and sign-in with custom policies in Azure AD B2C](../../active-directory-b2c/phone-authentication-user-flows.md).
-
--
-### New provisioning connectors in the Azure AD Application Gallery - November 2020
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Adobe Identity Management](../saas-apps/adobe-identity-management-provisioning-tutorial.md)
-- [Blogin](../saas-apps/blogin-provisioning-tutorial.md)
-- [Clarizen One](../saas-apps/clarizen-one-provisioning-tutorial.md)
-- [Contentful](../saas-apps/contentful-provisioning-tutorial.md)
-- [GitHub AE](../saas-apps/github-ae-provisioning-tutorial.md)
-- [Playvox](../saas-apps/playvox-provisioning-tutorial.md)
-- [PrinterLogic SaaS](../saas-apps/printer-logic-saas-provisioning-tutorial.md)
-- [Tic - Tac Mobile](../saas-apps/tic-tac-mobile-provisioning-tutorial.md)
-- [Visibly](../saas-apps/visibly-provisioning-tutorial.md)
-
-For more information, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
-
--
-### Public Preview - Email Sign-In with ProxyAddresses now deployable via Staged Rollout
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Tenant administrators can now use Staged Rollout to deploy Email Sign-In with ProxyAddresses to specific Azure AD groups. This can help while trying out the feature before deploying it to the entire tenant via the Home Realm Discovery policy. Instructions for deploying Email Sign-In with ProxyAddresses via Staged Rollout are in the [documentation](../authentication/howto-authentication-use-email-signin.md).
-
--
-### Limited Preview - Sign-in Diagnostic
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-With the initial preview release of the Sign-in Diagnostic, admins can now review user sign-ins. Admins can receive contextual, specific, and relevant details and guidance on what happened during a sign-in and how to fix problems. The diagnostic is available in both the Azure AD level, and Conditional Access Diagnose and Solve blades. The diagnostic scenarios covered in this release are Conditional Access, Multi-Factor Authentication, and successful sign-in.
-
-For more information, see [What is sign-in diagnostic in Azure AD?](../reports-monitoring/overview-sign-in-diagnostics.md).
-
--
-### Improved Unfamiliar Sign-in Properties
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
- Unfamiliar sign-in properties detections has been updated. Customers may notice more high-risk unfamiliar sign-in properties detections. For more information, see [What is risk?](../identity-protection/concept-identity-protection-risks.md)
-
--
-### Public Preview refresh of Cloud Provisioning agent now available (Version: 1.1.281.0)
-
-**Type:** Changed feature
-**Service category:** Azure AD Cloud Provisioning
-**Product capability:** Identity Lifecycle Management
-
-Cloud provisioning agent has been released in public preview and is now available through the portal. This release contains several improvements including, support for GMSA for your domains, which provides better security, improved initial sync cycles, and support for large groups. Check out the release version [history](../app-provisioning/provisioning-agent-release-version-history.md) for more details.
-
--
-### BitLocker recovery key API endpoint now under /informationProtection
-
-**Type:** Changed feature
-**Service category:** Device Access Management
-**Product capability:** Device Lifecycle Management
-
-Previously, you could recover BitLocker keys via the /bitlocker endpoint. We'll eventually be deprecating this endpoint, and customers should begin consuming the API that now falls under /informationProtection.
-
-See [BitLocker recovery API](/graph/api/resources/bitlockerrecoverykey?view=graph-rest-beta&preserve-view=true) for updates to the documentation to reflect these changes.
---
-### General Availability of Application Proxy support for Remote Desktop Services HTML5 Web Client
-
-**Type:** Changed feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-Azure AD Application Proxy support for Remote Desktop Services (RDS) Web Client is now in General Availability. The RDS web client allows users to access Remote Desktop infrastructure through any HTLM5-capable browser such as Microsoft Edge, Internet Explorer 11, Google Chrome, and so on. Users can interact with remote apps or desktops like they would with a local device from anywhere.
-
-By using Azure AD Application Proxy, you can increase the security of your RDS deployment by enforcing pre-authentication and Conditional Access policies for all types of rich client apps. To learn more, see [Publish Remote Desktop with Azure AD Application Proxy](../manage-apps/application-proxy-integrate-with-remote-desktop-services.md)
-
--
-### New enhanced Dynamic Group service is in Public Preview
-
-**Type:** Changed feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-
-Enhanced dynamic group service is now in Public Preview. New customers that create dynamic groups in their tenants will be using the new service. The time required to create a dynamic group will be proportional to the size of the group that is being created instead of the size of the tenant. This update will improve performance for large tenants significantly when customers create smaller groups.
-
-The new service also aims to complete member addition and removal because of attribute changes within a few minutes. Also, single processing failures won't block tenant processing. To learn more about creating dynamic groups, see our [documentation](../enterprise-users/groups-create-rule.md).
-
-
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
documentationcenter: ''
editor: ''- ms.assetid: b93e595b-354a-479d-85ec-a95553dd9cc2 na ms.devlang: na Previously updated : 05/03/2021 Last updated : 06/02/2021
Legend:
- Non-bold - Supported option
- Local account - Local user account on the server
- Domain account - Domain user account
-- sMSA - [standalone Managed Service account](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10))
-- gMSA - [group Managed Service account](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11))
+- sMSA - [standalone Managed Service account](../../active-directory/fundamentals/service-accounts-on-premises.md)
+- gMSA - [group Managed Service account](https://docs.microsoft.com/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview)
| | LocalDB</br>Express | LocalDB/LocalSQL</br>Custom | Remote SQL</br>Custom |
| --- | --- | --- | --- |
The VSA is intended to be used with scenarios where the sync engine and SQL are
This feature requires Windows Server 2008 R2 or later. If you install Azure AD Connect on Windows Server 2008, then the installation falls back to using a [user account](#user-account) instead.

#### Group managed service account
-If you use a remote SQL server, then we recommend to using a **group managed service account**. For more information on how to prepare your Active Directory for Group Managed Service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
+If you use a remote SQL server, then we recommend using a **group managed service account**. For more information on how to prepare your Active Directory for a group managed service account, see [Group Managed Service Accounts Overview](https://docs.microsoft.com/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).
To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and select **Managed Service Account**.

![VSA](./media/reference-connect-accounts-permissions/serviceaccount.png)
-It is also supported to use a [standalone managed service account](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10)). However, these can only be used on the local machine and there is no benefit to use them over the default virtual service account.
+It is also supported to use a [standalone managed service account](../../active-directory/fundamentals/service-accounts-on-premises.md). However, these accounts can only be used on the local machine, and there is no benefit to using them over the default virtual service account.
This feature requires Windows Server 2012 or later. If you need to use an older operating system and use remote SQL, then you must use a [user account](#user-account).
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
Previously updated : 4/6/2021 Last updated : 6/2/2021
With tenant restrictions, organizations can specify the list of tenants that the
This article focuses on tenant restrictions for Microsoft 365, but the feature protects all apps that send the user to Azure AD for single sign-on. If you use SaaS apps with a different Azure AD tenant from the tenant used by your Microsoft 365, make sure that all required tenants are permitted (e.g. in B2B collaboration scenarios). For more information about SaaS cloud apps, see the [Active Directory Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps).
-Additionally, the tenant restrictions feature now supports [blocking the use of all Microsoft consumer applications](#blocking-consumer-applications-public-preview) (MSA apps) such as OneDrive, Hotmail, and Xbox.com. This uses a separate header to the `login.live.com` endpoint, and is detailed at the end of the document.
+Additionally, the tenant restrictions feature now supports [blocking the use of all Microsoft consumer applications](#blocking-consumer-applications) (MSA apps) such as OneDrive, Hotmail, and Xbox.com. This uses a separate header to the `login.live.com` endpoint, and is detailed at the end of the document.
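The per-request header injection behind this feature can be sketched as follows. This is a hedged illustration, not proxy configuration: the header names (`Restrict-Access-To-Tenants`, `Restrict-Access-Context`, and `sec-Restrict-Tenant-Access-Policy: restrict-msa` for consumer blocking) follow the tenant restrictions documentation, while the tenant list and directory ID below are placeholders.

```python
# Sketch of the headers a TLS-inspecting proxy injects on traffic to the
# Microsoft login endpoints. Header names follow the tenant restrictions
# documentation; the tenant values are placeholders.
PERMITTED_TENANTS = "contoso.onmicrosoft.com,fabrikam.onmicrosoft.com"
DIRECTORY_ID = "00000000-0000-0000-0000-000000000000"  # placeholder tenant ID

AAD_LOGIN_HOSTS = ("login.microsoftonline.com",
                   "login.microsoft.com",
                   "login.windows.net")

def headers_for(host: str) -> dict:
    """Return the tenant-restrictions headers to add for a login host."""
    headers = {}
    if host in AAD_LOGIN_HOSTS:
        headers["Restrict-Access-To-Tenants"] = PERMITTED_TENANTS
        headers["Restrict-Access-Context"] = DIRECTORY_ID
    elif host == "login.live.com":
        # The separate consumer-endpoint header mentioned above.
        headers["sec-Restrict-Tenant-Access-Policy"] = "restrict-msa"
    return headers

print(headers_for("login.live.com"))
```

Traffic to any other host gets no extra headers, which mirrors the fact that the policy is enforced only at the Azure AD and consumer login endpoints.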
## How it works
Depending on the capabilities of your proxy infrastructure, you may be able to s
For specific details, refer to your proxy server documentation.
-## Blocking consumer applications (public preview)
+## Blocking consumer applications
Applications from Microsoft that support both consumer accounts and organizational accounts, like [OneDrive](https://onedrive.live.com/) or [Microsoft Learn](/learn/), can sometimes be hosted on the same URL. This means that users that must access that URL for work purposes also have access to it for personal use, which may not be permitted under your operating guidelines.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 05/04/2021 Last updated : 06/02/2021
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## May 2021
+
+### Updated articles
+
+- [Azure Active Directory application management: What's new](whats-new-docs.md)
++

## April 2021

### New articles
active-directory Quickstart Analyze Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md
+
+ Title: Analyze sign-ins with the Azure AD sign-ins log
+description: In this quickstart, you learn how you can use the sign-ins log to determine the reason for a failed sign-in to Azure AD.
+++++ Last updated : 06/03/2021++++++
+# Customer intent: As an IT admin, you need to know how to use the sign-ins log so that you can fix sign-in issues.
+++
+# Quickstart: Analyze sign-ins with the Azure AD sign-ins log
+
+With the information in the Azure AD sign-ins log, you can figure out what happened if a user's sign-in failed. This quickstart shows how you can locate a failed sign-in using the sign-ins log.
++
+## Prerequisites
+
+To complete the scenario in this quickstart, you need:
+
+- **Access to an Azure AD tenant** - If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- **A test account called Isabella Simonsen** - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users-azure-active-directory.md#add-a-new-user).
+
+## Perform a failed sign-in
+
+The goal of this step is to create a record of a failed sign-in in the Azure AD sign-ins log.
+
+**To complete this step:**
+
+1. Sign in to your [Azure portal](https://portal.azure.com/) as Isabella Simonsen using an incorrect password.
+
+2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](reference-reports-latencies.md#activity-reports).
+++
+## Find the failed sign-in
+
+This section provides you with the steps to analyze a failed sign-in:
+
+- **Filter sign-ins**: Remove all records that are not relevant to your analysis. For example, set a filter to display only the records of a specific user.
+- **Look up additional error information**: In addition to the information you can find in the sign-ins log, you can also look up the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error.
++
+**To review the failed sign-in:**
+
+1. Navigate to the [sign-ins log](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/SignIns).
+
+2. To list only records for Isabella Simonsen:
+
+ a. In the toolbar, click **Add filters**.
+
+ ![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
+
+ b. In the **Pick a field** list, select **User**, and then click **Apply**.
+
+ c. In the **Username** textbox, type **Isabella Simonsen**, and then click **Apply**.
+
+ d. In the toolbar, click **Refresh**.
+
+3. To analyze the issue, click **Troubleshooting and support**.
+
+ ![Add filter](./media/quickstart-analyze-sign-in/troubleshooting-and-support.png)
+
+4. Copy the **Sign-in error code**.
+
+ ![Sign-in error code](./media/quickstart-analyze-sign-in/sign-in-error-code.png)
++
+5. Paste the error code into the textbox of the [sign-in error lookup tool](https://login.microsoftonline.com/error), and then click **Submit**.
+
+Review the outcome of the tool and determine whether it provides you with additional information.
+
+![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
++
+## Additional tests
+
+Now that you know how to find an entry in the sign-in log by name, you should also try to find the record using the following filters:
+
+- **Date** - Try to find Isabella using a **Start** and an **End**.
+
+ ![Date filter](./media/quickstart-analyze-sign-in/start-and-end-filter.png)
+
+- **Status** - Try to find Isabella using **Status: Failure**.
+
+ ![Status failure](./media/quickstart-analyze-sign-in/status-failure.png)
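The three filters used in this quickstart can also be mimicked over exported records. A minimal sketch, assuming illustrative sign-in entries; the field names are simplified stand-ins, not the exact sign-ins log schema:

```python
from datetime import datetime

# Illustrative sign-in records, loosely shaped like exported log entries.
# Field names and values are assumptions for the sketch.
sign_ins = [
    {"user": "Isabella Simonsen", "time": datetime(2021, 6, 3, 9, 15), "status": "Failure"},
    {"user": "Isabella Simonsen", "time": datetime(2021, 6, 3, 9, 20), "status": "Success"},
    {"user": "Alex Wilber",       "time": datetime(2021, 6, 3, 9, 30), "status": "Failure"},
]

def filter_sign_ins(records, user=None, start=None, end=None, status=None):
    """Apply the portal-style filters: user, date window, and status."""
    result = []
    for record in records:
        if user and record["user"] != user:
            continue
        if start and record["time"] < start:
            continue
        if end and record["time"] > end:
            continue
        if status and record["status"] != status:
            continue
        result.append(record)
    return result

failed = filter_sign_ins(sign_ins, user="Isabella Simonsen", status="Failure")
print(len(failed))  # 1
```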
++++
+## Clean up resources
+
+When no longer needed, delete the test user. If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users-azure-active-directory.md#delete-a-user).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What are Azure Active Directory reports?](overview-reports.md)
active-directory Concept Understand Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/concept-understand-roles.md
The following table is offered as an aid to understanding these role categories.
Category | Role
- | -
-Azure AD-specific roles | Application Administrator<br>Application Developer<br>Authentication Administrator<br>B2C IEF Keyset Administrator<br>B2C IEF Policy Administrator<br>Cloud Application Administrator<br>Cloud Device Administrator<br>Conditional Access Administrator<br>Device Administrators<br>Directory Readers<br>Directory Synchronization Accounts<br>Directory Writers<br>External ID User Flow Administrator<br>External ID User Flow Attribute Administrator<br>External Identity Provider Administrator<br>Groups Administrator<br>Guest Inviter<br>Helpdesk Administrator<br>Hybrid Identity Administrator<br>License Administrator<br>Partner Tier1 Support<br>Partner Tier2 Support<br>Password Administrator<br>Privileged Authentication Administrator<br>Privileged Role Administrator<br>Reports Reader<br>User Account Administrator
+Azure AD-specific roles | Application Administrator<br>Application Developer<br>Authentication Administrator<br>B2C IEF Keyset Administrator<br>B2C IEF Policy Administrator<br>Cloud Application Administrator<br>Cloud Device Administrator<br>Conditional Access Administrator<br>Device Administrators<br>Directory Readers<br>Directory Synchronization Accounts<br>Directory Writers<br>External ID User Flow Administrator<br>External ID User Flow Attribute Administrator<br>External Identity Provider Administrator<br>Groups Administrator<br>Guest Inviter<br>Helpdesk Administrator<br>Hybrid Identity Administrator<br>License Administrator<br>Partner Tier1 Support<br>Partner Tier2 Support<br>Password Administrator<br>Privileged Authentication Administrator<br>Privileged Role Administrator<br>Reports Reader<br>User Administrator
Cross-service roles | Global Administrator<br>Compliance Administrator<br>Compliance Data Administrator<br>Global Reader<br>Security Administrator<br>Security Operator<br>Security Reader<br>Service Support Administrator
Service-specific roles | Azure DevOps Administrator<br>Azure Information Protection Administrator<br>Billing Administrator<br>CRM Service Administrator<br>Customer LockBox Access Approver<br>Desktop Analytics Administrator<br>Exchange Service Administrator<br>Insights Administrator<br>Insights Business Leader<br>Intune Service Administrator<br>Kaizala Administrator<br>Lync Service Administrator<br>Message Center Privacy Reader<br>Message Center Reader<br>Modern Commerce User<br>Network Administrator<br>Office Apps Administrator<br>Power BI Service Administrator<br>Power Platform Administrator<br>Printer Administrator<br>Printer Technician<br>Search Administrator<br>Search Editor<br>SharePoint Service Administrator<br>Teams Communications Administrator<br>Teams Communications Support Engineer<br>Teams Communications Support Specialist<br>Teams Devices Administrator<br>Teams Administrator
active-directory Custom Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-create.md
$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope $resourceScope -Rol
## Assign a custom role scoped to a resource
-Like built-in roles, custom roles are assigned by default at the default organization-wide scope to grant access permissions over all app registrations in your organization. But unlike built-in roles, custom roles can also be assigned at the scope of a single Azure AD resource. This allows you to give the user the permission to update credentials and basic properties of a single app without having to create a second custom role.
+Like built-in roles, custom roles are assigned by default at the default organization-wide scope to grant access permissions over all app registrations in your organization. Additionally, custom roles and some relevant built-in roles (depending on the type of Azure AD resource) can also be assigned at the scope of a single Azure AD resource. This allows you to give the user the permission to update credentials and basic properties of a single app without having to create a second custom role.
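The scoped assignment described above can also be made programmatically. A minimal sketch in Python that only builds the request bodies for the Microsoft Graph `roleManagement/directory/roleAssignments` endpoint, where `directoryScopeId` is `/` for an organization-wide assignment or `/<objectId>` for a single app registration; all GUIDs below are placeholders, not real IDs:

```python
# Placeholder GUIDs for illustration only.
ROLE_DEFINITION_ID = "00000000-0000-0000-0000-000000000001"
PRINCIPAL_ID = "00000000-0000-0000-0000-000000000002"
APP_REGISTRATION_OBJECT_ID = "00000000-0000-0000-0000-000000000003"

def role_assignment_body(role_definition_id: str, principal_id: str,
                         scope: str = "/") -> dict:
    """Body for POST /v1.0/roleManagement/directory/roleAssignments."""
    return {
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "directoryScopeId": scope,  # "/" means organization-wide
    }

# Organization-wide assignment (the default scope).
tenant_wide = role_assignment_body(ROLE_DEFINITION_ID, PRINCIPAL_ID)

# Assignment scoped to one app registration's directory object.
single_app = role_assignment_body(
    ROLE_DEFINITION_ID, PRINCIPAL_ID,
    scope=f"/{APP_REGISTRATION_OBJECT_ID}")
```

The only difference between the two assignments is the `directoryScopeId`, which is what avoids creating a second custom role for the single-app case.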
1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with Application Developer permissions.
1. Select **App registrations**.
Like built-in roles, custom roles are assigned by default at the default organiz
- Feel free to share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032). - For more about role permissions, see [Azure AD built-in roles](permissions-reference.md).-- For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2froles%2fcontext%2fugr-context).
+- For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2froles%2fcontext%2fugr-context).
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-by-task.md
In this article, you can find the information needed to restrict a user's admini
> | - | | - |
> | Configure application proxy app | Application Administrator | |
> | Configure connector group properties | Application Administrator | |
-> | Create application registration when ability is disabled for all users | Application Developer | Cloud Application Administrator, Application Administrator |
+> | Create application registration when ability is disabled for all users | Application Developer | Cloud Application Administrator<br/>Application Administrator |
> | Create connector group | Application Administrator | |
> | Delete connector group | Application Administrator | |
> | Disable application proxy | Application Administrator | |
In this article, you can find the information needed to restrict a user's admini
> | Configure notifications | Contributor ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Owner |
> | Configure settings | Owner ([see documentation](../hybrid/how-to-connect-health-operations.md)) | |
> | Configure sync notifications | Contributor ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Owner |
-> | Read ADFS security reports | Security Reader | Contributor, Owner
-> | Read all configuration | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor, Owner |
-> | Read sync errors | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor, Owner |
-> | Read sync services | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor, Owner |
-> | View metrics and alerts | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor, Owner |
-> | View metrics and alerts | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor, Owner |
-> | View sync service metrics and alerts | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor, Owner |
+> | Read ADFS security reports | Security Reader | Contributor<br/>Owner
+> | Read all configuration | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor<br/>Owner |
+> | Read sync errors | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor<br/>Owner |
+> | Read sync services | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor<br/>Owner |
+> | View metrics and alerts | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor<br/>Owner |
+> | View metrics and alerts | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor<br/>Owner |
+> | View sync service metrics and alerts | Reader ([see documentation](../fundamentals/users-default-permissions.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context)) | Contributor<br/>Owner |
## Custom domain names
In this article, you can find the information needed to restrict a user's admini
> | Disable device | Cloud Device Administrator | |
> | Enable device | Cloud Device Administrator | |
> | Read basic configuration | Default user role ([see documentation](../fundamentals/users-default-permissions.md)) | |
-> | Read BitLocker keys | Security Reader | Password Administrator, Security Administrator |
+> | Read BitLocker keys | Security Reader | Password Administrator<br/>Security Administrator |
## Enterprise applications
In this article, you can find the information needed to restrict a user's admini
> | Create enterprise application | Cloud Application Administrator | Application Administrator |
> | Manage Application Proxy | Application Administrator | |
> | Manage user settings | Global Administrator | |
-> | Read access review of a group or of an app | Security Reader | Security Administrator, User Administrator |
+> | Read access review of a group or of an app | Security Reader | Security Administrator<br/>User Administrator |
> | Read all configuration | Default user role ([see documentation](../fundamentals/users-default-permissions.md)) | |
-> | Update enterprise application assignments | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator, Application Administrator |
-> | Update enterprise application owners | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator, Application Administrator |
-> | Update enterprise application properties | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator, Application Administrator |
-> | Update enterprise application provisioning | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator, Application Administrator |
-> | Update enterprise application self-service | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator, Application Administrator |
-> | Update single sign-on properties | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator, Application Administrator |
+> | Update enterprise application assignments | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator<br/>Application Administrator |
+> | Update enterprise application owners | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator<br/>Application Administrator |
+> | Update enterprise application properties | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator<br/>Application Administrator |
+> | Update enterprise application provisioning | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator<br/>Application Administrator |
+> | Update enterprise application self-service | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator<br/>Application Administrator |
+> | Update single sign-on properties | Enterprise application owner ([see documentation](../fundamentals/users-default-permissions.md)) | Cloud Application Administrator<br/>Application Administrator |
## Entitlement management
In this article, you can find the information needed to restrict a user's admini
> | Manage group expiration | User Administrator | |
> | Manage group settings | Groups Administrator | User Administrator |
> | Read all configuration (except hidden membership) | Directory readers | Default user role ([see documentation](../fundamentals/users-default-permissions.md)) |
-> | Read hidden membership | Group member | Group owner, Password Administrator, Exchange Administrator, SharePoint Administrator, Teams Administrator, User Administrator |
-> | Read membership of groups with hidden membership | Helpdesk Administrator | User Administrator, Teams Administrator |
+> | Read hidden membership | Group member | Group owner<br/>Password Administrator<br/>Exchange Administrator<br/>SharePoint Administrator<br/>Teams Administrator<br/>User Administrator |
+> | Read membership of groups with hidden membership | Helpdesk Administrator | User Administrator<br/>Teams Administrator |
> | Revoke license | License Administrator | User Administrator |
> | Update group membership | Group owner ([see documentation](../fundamentals/users-default-permissions.md)) | User Administrator |
> | Update group owners | Group owner ([see documentation](../fundamentals/users-default-permissions.md)) | User Administrator |
In this article, you can find the information needed to restrict a user's admini
> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | | - |
-> | Read audit logs | Reports Reader | Security Reader, Security Administrator |
+> | Read audit logs | Reports Reader | Security Reader<br/>Security Administrator |
## Monitoring - Sign-ins

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | | - |
-> | Read sign-in logs | Reports Reader | Security Reader, Security Administrator |
+> | Read sign-in logs | Reports Reader | Security Reader<br/>Security Administrator |
## Multi-factor authentication
In this article, you can find the information needed to restrict a user's admini
> | Task | Least privileged role | Additional roles |
> | - | | - |
> | Manage role assignments | Privileged Role Administrator | |
-> | Read access review of an Azure AD role | Security Reader | Security Administrator, Privileged Role Administrator |
+> | Read access review of an Azure AD role | Security Reader | Security Administrator<br/>Privileged Role Administrator |
> | Read all configuration | Default user role ([see documentation](../fundamentals/users-default-permissions.md)) | |

## Security - Authentication methods
In this article, you can find the information needed to restrict a user's admini
> | Manage named locations | Conditional Access Administrator | Security Administrator |
> | Manage terms of use | Conditional Access Administrator | Security Administrator |
> | Read all configuration | Security Reader | Security Administrator |
-> | Read named locations | Security Reader | Conditional Access Administrator, Security Administrator |
+> | Read named locations | Security Reader | Conditional Access Administrator<br/>Security Administrator |
## Security - Identity security score
In this article, you can find the information needed to restrict a user's admini
> | Update User Principal Name for limited admins (see documentation) | User Administrator | |
> | Update User Principal Name property on privileged admins (see documentation) | Global Administrator | |
> | Update user settings | Global Administrator | |
-> | Update Authentication methods | Authentication Administrator | Privileged Authentication Administrator, Global Administrator |
+> | Update Authentication methods | Authentication Administrator | Privileged Authentication Administrator<br/>Global Administrator |
## Support

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | | - |
-> | Submit support ticket | Service Administrator | Application Administrator, Azure Information Protection Administrator, Billing Administrator, Cloud Application Administrator, Compliance Administrator, Dynamics 365 Administrator, Desktop Analytics Administrator, Exchange Administrator, Password Administrator, Intune Administrator, Skype for Business Administrator, Power BI Administrator, Privileged Authentication Administrator, SharePoint Administrator, Teams Communications Administrator, Teams Administrator, User Administrator, Workplace Analytics Administrator |
+> | Submit support ticket | Service Support Administrator | Application Administrator<br/>Azure Information Protection Administrator<br/>Billing Administrator<br/>Cloud Application Administrator<br/>Compliance Administrator<br/>Dynamics 365 Administrator<br/>Desktop Analytics Administrator<br/>Exchange Administrator<br/>Intune Administrator<br/>Password Administrator<br/>Power BI Administrator<br/>Privileged Authentication Administrator<br/>SharePoint Administrator<br/>Skype for Business Administrator<br/>Teams Administrator<br/>Teams Communications Administrator<br/>User Administrator<br/>Workplace Analytics Administrator |
## Next steps
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
If you do not want members of the group to have standing access to the role, you
## Why we enforce creation of a special group for assigning it to a role
-If a group is assigned a role, any IT admin who can manage group membership could also indirectly manage the membership of that role. For example, assume that a group Contoso_User_Administrators is assigned to User account admin role. An Exchange Administrator who can modify group membership could add themselves to the Contoso_User_Administrators group and in that way become a User account admin. As you can see, an admin could elevate their privilege in a way you did not intend.
+If a group is assigned a role, any IT admin who can manage group membership could also indirectly manage the membership of that role. For example, assume that a group Contoso_User_Administrators is assigned to User Administrator role. An Exchange Administrator who can modify group membership could add themselves to the Contoso_User_Administrators group and in that way become a User Administrator. As you can see, an admin could elevate their privilege in a way you did not intend.
Azure AD allows you to protect a group assigned to a role by using a new property called isAssignableToRole for groups. Only cloud groups that had the isAssignableToRole property set to 'true' at creation time can be assigned to a role. This property is immutable; once a group is created with this property set to 'true', it can't be changed. You can't set the property on an existing group. We designed how groups are assigned to roles to help prevent potential breaches from happening:
- Only Global Administrators and Privileged Role Administrators can create a role-assignable group (with the "isAssignableToRole" property enabled).
- It can't be an Azure AD dynamic group; that is, it must have a membership type of "Assigned." Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role.
- By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners.
-- To prevent elevation of privilege, the credentials of members and owners of a role-assignable group can be changed only by a Privileged Authentication Administrator or a Global Administrator.
+- To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA for members and owners of a role-assignable group.
- No nesting. A group can't be added as a member of a role-assignable group.

## Limitations
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 05/26/2021 Last updated : 06/03/2021
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [B2C IEF Keyset Administrator](#b2c-ief-keyset-administrator) | Can manage secrets for federation and encryption in the Identity Experience Framework (IEF). | aaf43236-0c0d-4d5f-883a-6955382ac081 |
> | [B2C IEF Policy Administrator](#b2c-ief-policy-administrator) | Can create and manage trust framework policies in the Identity Experience Framework (IEF). | 3edaf663-341e-4475-9f94-5c398ef6c070 |
> | [Billing Administrator](#billing-administrator) | Can perform common billing related tasks like updating payment information. | b0f54661-2d74-4c50-afa3-1ec803f12efe |
+> | [Cloud App Security Administrator](#cloud-app-security-administrator) | Can manage all aspects of the Cloud App Security product. | 892c5842-a9a6-463a-8041-72aa08ca3cf6 |
> | [Cloud Application Administrator](#cloud-application-administrator) | Can create and manage all aspects of app registrations and enterprise apps except App Proxy. | 158c047a-c907-4556-b7ef-446551a6b5f7 |
> | [Cloud Device Administrator](#cloud-device-administrator) | Limited access to manage devices in Azure AD. | 7698a772-787b-4ac8-901f-60d6b08affd2 |
> | [Compliance Administrator](#compliance-administrator) | Can read and manage compliance configuration and reports in Azure AD and Microsoft 365. | 17315797-102d-40b4-93e0-432062caca18 |
Makes purchases, manages subscriptions, manages support tickets, and monitors se
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Cloud App Security Administrator
+
+Users with this role have full permissions in Cloud App Security. They can add administrators, add Microsoft Cloud App Security (MCAS) policies and settings, upload logs, and perform governance actions.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+
## Cloud Application Administrator

Users in this role have the same permissions as the Application Administrator role, excluding the ability to manage application proxy. This role grants the ability to create and manage all aspects of enterprise applications and application registrations. Users assigned to this role are not added as owners when creating new application registrations or enterprise applications.
Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Privileged Authentication Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-User (no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+User<br/>(no admin role, but member of a role-assignable group) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
active-directory Issue Verify Verifiable Credentials Your Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/issue-verify-verifiable-credentials-your-tenant.md
Register an application called 'VC Wallet App' in Azure AD and obtain a client I
![issuer endpoints](media/issue-verify-verifable-credentials-your-tenant/application-endpoints.png)
-## Set up your node app with access to Azure Key Vault
-
-To authenticate a user's credential issuance request, the issuer website uses your cryptographic keys in Azure Key Vault. To access Azure Key Vault, your website needs a client ID and client secret that can be used to authenticate to Azure Key Vault.
-
-1. While viewing the VC wallet app overview page select **Certificates & secrets**.
- ![certificates and secrets](media/issue-verify-verifable-credentials-your-tenant/vc-wallet-app-certs-secrets.png)
-1. In the **Client secrets** section choose **New client secret**
- 1. Add a description like "Node VC client secret"
- 1. Expires: in one year.
- ![Application secret with a one year expiration](media/issue-verify-verifable-credentials-your-tenant/add-client-secret.png)
-1. Copy down the SECRET. You need this information to update your sample node app.
-
->[!WARNING]
-> You have one chance to copy down the secret. The secret is one way hashed after this. Do not copy the ID.
-
-After creating your application and client secret in Azure AD, you need to grant the application the necessary permissions to perform operations on your Key Vault. Making these permission changes is required to enable the website to access and use the private keys stored there.
-
-1. Go to Key Vault.
-2. Select the key vault we are using for these tutorials.
-3. Choose **Access Policies** on left nav
-4. Choose **+Add Access Policy**.
-5. In the **Key permissions** section choose **Get**, and **Sign**.
-6. Select **Principal** and use the application ID to search for the application we registered earlier. Select it.
-7. Select **Add**.
-8. Choose **SAVE**.
-
-For more information about Key Vault permissions and access control read the [key vault RBAC guide](../../key-vault/general/rbac-guide.md)
-
-![assign key vault permissions](media/issue-verify-verifable-credentials-your-tenant/key-vault-permissions.png)
-## Make changes to match your environment
So far, we have been working with our sample app. The app uses [Azure Active Directory B2C](../../active-directory-b2c/overview.md), and we are now switching to Azure AD, so we need to make some changes, not just to match your environment but also to support additional claims that were not used before.
Now when a user is presented with the "sign in" to get issued your verifiable cr
1. From the verifiable credentials page, create a new credential called **modifiedCredentialExpert** using the old display file and the new rules file (**modified-credentialExpert.json**).
1. After the credential creation process completes, copy the **Issue Credential URL** from the **Overview** page and save it because we need it in the next section.
-## Before we continue
+## Set up your node app with access to Azure Key Vault
+
+To authenticate a user's credential issuance request, the issuer website uses your cryptographic keys in Azure Key Vault. To access Azure Key Vault, your website needs a client ID and client secret that can be used to authenticate to Azure Key Vault.
+
+First we need to register another application. This registration is for the website. The earlier registration for the wallet app only allows users to sign in to the directory with the wallet app. In our case both registrations happen to be in the same directory, but the wallet app registration could have been done in a different directory as well. A good practice is to separate app registrations when the applications have different responsibilities. In this case, we need our website to get access to Key Vault.
+
+1. Follow the instructions for registering an application with [Azure AD](../develop/quickstart-register-app.md). When registering, use the values below.
+
+ - Name: "VC Website"
+ - Supported account types: Accounts in this organizational directory only
+
+ :::image type="content" source="media/issue-verify-verifable-credentials-your-tenant/vc-website-app-app-registration.png" alt-text="Screenshot that shows how to register an application.":::
+
+1. After you register the application, write down the Application (client) ID. You need this value later.
+
+ :::image type="content" source="media/issue-verify-verifable-credentials-your-tenant/vc-website-app-app-details.png" alt-text="Screenshot that shows the application client ID.":::
+
+1. While viewing the VC website app overview page select **Certificates & secrets**.
+
+ :::image type="content" source="media/issue-verify-verifable-credentials-your-tenant/vc-website-app-certificates-secrets.png" alt-text="Screenshot that shows the Certificates and Secrets pane.":::
+
+1. In the **Client secrets** section, choose **New client secret**.
+ 1. Add a description like "Node VC client secret"
+ 1. Expires: in one year.
+
+ ![Application secret with a one year expiration](media/issue-verify-verifable-credentials-your-tenant/add-client-secret.png)
+
+1. Copy down the SECRET. You need this information to update your sample node app.
+
+>[!WARNING]
+> You have one chance to copy down the secret. The secret is one-way hashed after this. Do not copy the ID.
+
+After creating your application and client secret in Azure AD, you need to grant the application the necessary permissions to perform operations on your Key Vault. Making these permission changes is required to enable the website to access and use the private keys stored there.
+
+1. Go to Key Vault.
+2. Select the key vault we are using for these tutorials.
+3. Choose **Access Policies** in the left nav.
+4. Choose **+Add Access Policy**.
+5. In the **Key permissions** section choose **Get**, and **Sign**.
+6. Select **Principal** and use the application ID to search for the application we registered earlier. Select it.
+7. Select **Add**.
+8. Choose **SAVE**.
++
+For more information about Key Vault permissions and access control read the [key vault RBAC guide](../../key-vault/general/rbac-guide.md).
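The portal steps above also have a one-line Azure CLI equivalent. The sketch below only assembles the `az keyvault set-policy` command as a string; the vault name and client ID are hypothetical placeholders, so substitute your own values before running the command:

```python
# Hypothetical values; replace with your vault name and the website app's client ID.
vault_name = "contoso-vc-vault"
client_id = "00000000-0000-0000-0000-000000000000"

# Grants the app's service principal the Get and Sign key permissions,
# matching the portal steps above.
argv = [
    "az", "keyvault", "set-policy",
    "--name", vault_name,
    "--spn", client_id,
    "--key-permissions", "get", "sign",
]
command = " ".join(argv)
```

Running `command` in a shell performs the same change as steps 1 through 8 in the portal.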
+
+## Make changes to the sample app
We need to put a few values together before we can make the necessary code changes. We use these values in the next section to make the sample code use your own keys stored in your vault. So far we should have the following values ready.
There are a few other values we need to get before we can make the changes one t
2. Paste your DID in the search bar.
4. From the formatted response find the section called **verificationMethod**
-5. Under "verificationMethod" copy the id and label it as the kvSigningKeyId
+5. Under "verificationMethod" copy the `id` and label it as the kvSigningKeyId
```json
 "verificationMethod": [
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|--|--|--|--|
-| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | 1.21 GA |
+| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | *1.21 GA |
| 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA |
| 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
| 1.21 | Apr-08-21 | May 2021 | Jun 2021 | 1.24 GA |
+| 1.22 | Aug-04-21 | Sept 2021 | Oct 2021 | 1.25 GA |
+| 1.23 | Dec 2021 | Jan 2022 | Feb 2022 | 1.26 GA |
-
+>[!NOTE]
+>AKS version 1.18 will continue to be available until July 31st, 2021. After this date, AKS will return to its regular three-version support window. Note that support from June 30th to July 31st, 2021 will be limited in scope; users will be limited to the following:
+> - Creation of new clusters and nodepools on 1.18.
+> - CRUD operations on 1.18 clusters.
+> - Azure support of non-Kubernetes-related platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for Kubernetes patching and troubleshooting will be asked to upgrade to a supported version.
## FAQ
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
<!-- LINKS - Internal -->
[aks-upgrade]: upgrade-cluster.md
[az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions
-[preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/websocket-api.md
Previously updated : 05/25/2021 Last updated : 06/02/2021
In this article, you will:
> [!div class="checklist"]
> * Understand WebSocket passthrough flow.
> * Add a WebSocket API to your API Management instance.
+> * Test your WebSocket API.
+> * View the metrics and logs for your WebSocket API.
> * Learn the limitations of WebSocket API.

## Prerequisites
Per the [WebSocket protocol](https://tools.ietf.org/html/rfc6455), when a client
1. Click **Create**.
+## Test your WebSocket API
+
+1. Navigate to your WebSocket API.
+1. Within your WebSocket API, select the onHandshake operation.
+1. Select the **Test** tab to access the Test console.
+1. Optionally, provide query string parameters required for the WebSocket handshake.
+
+ :::image type="content" source="./media/websocket-api/test-websocket-api.png" alt-text="test API example":::
+
+1. Click **Connect**.
+1. View connection status in **Output**.
+1. Enter a value in **Payload**.
+1. Click **Send**.
+1. View received messages in **Output**.
+1. Repeat the preceding steps to test different payloads.
+1. When testing is complete, select **Disconnect**.
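The test console's **Connect** step performs the WebSocket handshake for you. If you ever want to sanity-check a handshake yourself, the `Sec-WebSocket-Accept` value a server must return can be computed in shell per RFC 6455 (shown here with the RFC's own sample nonce, not a value from API Management):

```shell
# RFC 6455: accept = base64( SHA-1( client_key + fixed_GUID ) )
key="dGhlIHNhbXBsZSBub25jZQ=="               # sample nonce from the RFC
guid="258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from the RFC
accept=$(printf '%s' "${key}${guid}" | openssl dgst -sha1 -binary | openssl base64)
echo "$accept"   # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A server that echoes back anything other than this value for that key has failed the handshake, and the connection is not upgraded.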
+
## Limitations

WebSocket APIs are available and supported in public preview through Azure portal, Management API, and Azure Resource Manager. Below are the current restrictions of WebSocket support in API Management:
app-service App Service Hybrid Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-hybrid-connections.md
The Hybrid Connections feature requires a relay agent in the network that hosts
This tool runs on Windows Server 2012 and later. The HCM runs as a service and connects outbound to Azure Relay on port 443.
-> [!NOTE]
-> Hybrid Connection Manager cannot coexist with Biztalk Hybrid Connection Manager or Service Bus for Windows Server. Hence when installing HCM, any versions of these packages should be removed first.
->
-
After installing HCM, you can run HybridConnectionManagerUi.exe to use the UI for the tool. This file is in the Hybrid Connection Manager installation directory. In Windows 10, you can also just search for *Hybrid Connection Manager UI* in your search box.

:::image type="content" source="media/app-service-hybrid-connections/hybridconn-hcm.png" alt-text="Screenshot of Hybrid Connection Manager":::
The status of "Connected" means that at least one HCM is configured with that Hy
* Does your host have outbound access to Azure on port 443? You can test from your HCM host using the PowerShell command *Test-NetConnection Destination -P Port*
* Is your HCM potentially in a bad state? Try restarting the 'Azure Hybrid Connection Manager Service' local service.
+* Do you have conflicting software installed? Hybrid Connection Manager cannot coexist with BizTalk Hybrid Connection Manager or Service Bus for Windows Server. Remove any versions of these packages before installing HCM.
+
If your status says **Connected** but your app cannot reach your endpoint then:

* make sure you are using a DNS name in your Hybrid Connection. If you use an IP address then the required client DNS lookup may not happen. If the client running in your web app does not do a DNS lookup, then the Hybrid Connection will not work
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-troubleshooting-502.md
Validate that the Custom Health Probe is configured correctly as the preceding t
### Cause
-When a user request is received, the application gateway applies the configured rules to the request and routes it to a back-end pool instance. It waits for a configurable interval of time for a response from the back-end instance. By default, this interval is **20** seconds. If the application gateway does not receive a response from back-end application in this interval, the user request gets a 502 error.
+When a user request is received, the application gateway applies the configured rules to the request and routes it to a back-end pool instance. It waits for a configurable interval of time for a response from the back-end instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway does not receive a response from the back-end application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway does not receive a response from the back-end application in this interval, the request is tried against a second back-end pool member. If the second request fails, the user request gets a 502 error.
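The v2 retry behavior described above can be sketched as plain shell logic (a simulation with hypothetical per-member outcomes, not an Application Gateway API):

```shell
# Simulate Application Gateway v2 routing: try up to two back-end pool
# members; each argument is one attempt's outcome ("ok" = responded
# within the 20-second interval).
route_request() {
  for attempt in "$@"; do
    if [ "$attempt" = "ok" ]; then echo 200; return; fi
  done
  echo 502   # every tried member failed to respond in time
}

route_request fail ok    # prints 200: the retry against a second member succeeded
route_request fail fail  # prints 502: both attempts failed, client sees the error
```

In v1 there is no second attempt, so the first `fail` alone would already surface as a 502 to the client.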
### Solution
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/ssl-overview.md
Previously updated : 08/21/2020 Last updated : 06/03/2021
Authentication Certificates have been deprecated and replaced by Trusted Root Ce
>
> In order for a TLS/SSL certificate to be trusted, that certificate of the backend server must have been issued by a CA that is well-known. If the certificate was not issued by a trusted CA, the application gateway will then check to see if the certificate of the issuing CA was issued by a trusted CA, and so on until either a trusted CA is found (at which point a trusted, secure connection will be established) or no trusted CA can be found (at which point the application gateway will mark the backend unhealthy). Therefore, it is recommended the backend server certificate contain both the root and intermediate CAs.

-- If the certificate is self-signed, or signed by unknown intermediaries, then to enable end-to-end TLS in the v2 SKU a trusted root certificate must be defined. Application Gateway only communicates with backends whose server certificate's root certificate matches one of the list of trusted root certificates in the backend http setting associated with the pool.
+- If the backend server certificate is self-signed, or signed by unknown CA/intermediaries, then to enable end-to-end TLS in Application Gateway v2 a trusted root certificate must be uploaded. Application Gateway will only communicate with backends whose server certificate's root certificate matches one of the list of trusted root certificates in the backend http setting associated with the pool.
- In addition to the root certificate match, Application Gateway v2 also validates if the Host setting specified in the backend http setting matches that of the common name (CN) presented by the backend server's TLS/SSL certificate. When trying to establish a TLS connection to the backend, Application Gateway v2 sets the Server Name Indication (SNI) extension to the Host specified in the backend http setting.
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/enable-from-template.md
If you're new to Azure Automation and Azure Monitor, it's important that you und
}, "_artifactsLocation": { "type": "string",
- "defaultValue": "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-automation/",
+ "defaultValue": "[deployment().properties.templateLink.uri]",
"metadata": { "description": "URI to artifacts location" }
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
To achieve comprehensive business continuity on Azure, build your application ar
| Virtual Machines: [Ev4-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | :large_blue_diamond: |
| Virtual Machines: [Fsv2-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | :large_blue_diamond: |
| Virtual Machines: [M-Series](../virtual-machines/windows/create-powershell-availability-zone.md) | :large_blue_diamond: |
-| [Virtual WAN](../virtual-wan/virtual-wan-about.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | :large_blue_diamond: |
-| Virtual WAN: [ExpressRoute](../virtual-wan/virtual-wan-about.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | :large_blue_diamond: |
+| [Virtual WAN](../virtual-wan/virtual-wan-faq.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | :large_blue_diamond: |
+| Virtual WAN: [ExpressRoute](../virtual-wan/virtual-wan-faq.md#how-are-availability-zones-and-resiliency-handled-in-virtual-wan) | :large_blue_diamond: |
| Virtual WAN: [Point-to-Site VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | :large_blue_diamond: |
| Virtual WAN: [Site-to-Site VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md) | :large_blue_diamond: |
azure-app-configuration Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-azure-functions-csharp.md
Previously updated : 09/28/2020 Last updated : 06/02/2021 #Customer intent: As an Azure Functions developer, I want to manage all my app settings in one place using Azure App Configuration.
This project will use [dependency injection in .NET Azure Functions](../azure-fu
} ```
- The `Function1` class and the `Run` method should not be static. Remove the `static` modifier if it was autogenerated.
+ > [!NOTE]
+ > The `Function1` class and the `Run` method should not be static. Remove the `static` modifier if it was autogenerated.
## Test the function locally
This project will use [dependency injection in .NET Azure Functions](../azure-fu
In this quickstart, you created a new App Configuration store and used it with an Azure Functions app via the [App Configuration provider](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration). To learn how to update your Azure Functions app to dynamically refresh configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Enable dynamic configuration in Azure Functions](./enable-dynamic-configuration-azure-functions-csharp.md)
+> [Enable dynamic configuration in Azure Functions](./enable-dynamic-configuration-azure-functions-csharp.md)
azure-app-configuration Reload Key Vault Secrets Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/reload-key-vault-secrets-dotnet.md
+
+ Title: Reload secrets and certificates automatically
+
+description: Learn how to set up your application to automatically reload secrets and certificates from Key Vault.
++
+ms.assetid:
+
+ms.devlang: csharp
+ Last updated : 05/25/2021+++
+#Customer intent: I want my app to reload secrets or certificates from Key Vault without restarting my app.
++
+# Reload secrets and certificates from Key Vault automatically
+
+App Configuration and Key Vault are complementary services used side by side in many applications. App Configuration helps you use the services together by creating keys in your App Config store that reference secrets or certificates stored in Key Vault. Since Key Vault stores the public and private key pair of a certificate as a secret, your application can retrieve any certificate as a secret from Key Vault.
+
+As a good security practice, [secrets](../key-vault/secrets/tutorial-rotation.md) and [certificates](../key-vault/certificates/tutorial-rotate-certificates.md) should be rotated periodically. Once they have been rotated in Key Vault, you would want your application to pick up the latest secret and certificate values. There are two ways to achieve this without restarting your application:
+- Update a sentinel key-value to trigger the refresh of your entire configuration, thereby reloading all Key Vault secrets and certificates. For more information, see how to [use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md).
+- Periodically reload some or all secrets and certificates from Key Vault.
+
+In the first option, you will have to update the sentinel key-value in App Configuration whenever you rotate secrets and certificates in Key Vault. This approach works well when you want to force an immediate reload of secrets and certificates in your application. However, when secrets and certificates are rotated automatically in Key Vault, your application may experience errors if you don't update the sentinel key-value in time. The second option allows you to completely automate this process. You can configure your application to reload secrets and certificates from Key Vault within your acceptable delay from the time of rotation. This tutorial will walk you through the second option.
++
+## Prerequisites
+
+- This tutorial shows you how to set up your application to automatically reload secrets and certificates from Key Vault. It builds on the tutorial for implementing Key Vault references in your code. Before you continue, finish [Tutorial: Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md).
+
+- [Microsoft.Azure.AppConfiguration.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.AppConfiguration.AspNetCore) package v4.4.0 or later.
++
+## Add an auto-rotating certificate to Key Vault
+
+ Follow the [Tutorial: Configure certificate auto-rotation in Key Vault](../key-vault/certificates/tutorial-rotate-certificates.md) to add an auto-rotating certificate called **ExampleCertificate** to the Key Vault created in the previous tutorial.
++
+## Add a reference to the Key Vault certificate in App Configuration
+
+1. In the Azure portal, select **All resources**, and then select the App Configuration store instance that you created in the previous tutorial.
+
+1. Select **Configuration Explorer**.
+
+1. Select **+ Create** > **Key vault reference**, and then specify the following values:
+ - **Key**: Select **TestApp:Settings:KeyVaultCertificate**.
+ - **Label**: Leave this value blank.
+ - **Subscription**, **Resource group**, and **Key vault**: Enter the values corresponding to the Key Vault you created in the previous tutorial.
+ - **Secret**: Select the secret named **ExampleCertificate** that you created in the previous section.
+ - **Secret Version**: **Latest version**.
+
+> [!Note]
+> If you reference a specific version, reloading the secret or certificate from Key Vault will always return the same value.
++
+## Update code to reload Key Vault secrets and certificates
+
+In your *Program.cs* file, update the `AddAzureAppConfiguration` method to set up a refresh interval for your Key Vault certificate using the `SetSecretRefreshInterval` method. With this change, your application will reload the public-private key pair for **ExampleCertificate** every 12 hours.
+
+```csharp
+config.AddAzureAppConfiguration(options =>
+{
+ options.Connect(settings["ConnectionStrings:AppConfig"])
+ .ConfigureKeyVault(kv =>
+ {
+ kv.SetCredential(new DefaultAzureCredential());
+ kv.SetSecretRefreshInterval("TestApp:Settings:KeyVaultCertificate", TimeSpan.FromHours(12));
+ });
+});
+```
+
+The first argument of the `SetSecretRefreshInterval` method is the key of the Key Vault reference in App Configuration. This argument is optional. If the key parameter is omitted, the refresh interval applies to all secrets and certificates that don't have individual refresh intervals.
+
+Refresh interval defines the frequency at which your secrets and certificates will be reloaded from Key Vault, regardless of any changes to their values in Key Vault or App Configuration. If you want to reload secrets and certificates when their value changes in App Configuration, you can monitor them using the `ConfigureRefresh` method. For more information, see how to [use dynamic configuration in an ASP.NET Core app](./enable-dynamic-configuration-aspnet-core.md).
+
+Choose the refresh interval according to your acceptable delay after your secrets and certificates have been updated in Key Vault. It's also important to consider the [Key Vault service limits](../key-vault/general/service-limits.md) to avoid being throttled.
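As a rough aid for choosing an interval, the request volume is easy to estimate (the helper below is illustrative arithmetic only, not part of the App Configuration provider): each monitored secret or certificate costs one Key Vault request per refresh interval.

```shell
# Estimate daily Key Vault GET volume: one request per monitored item per
# refresh interval. Arguments: item count, interval in hours.
requests_per_day() {
  items=$1; interval_hours=$2
  echo $(( items * 24 / interval_hours ))
}

requests_per_day 1 12   # one certificate at a 12-hour interval: 2 requests/day
requests_per_day 50 1   # 50 secrets refreshed hourly: 1200 requests/day
```

Comparing such an estimate against the Key Vault throughput limits gives a quick sense of how short an interval your vault can comfortably absorb.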
++
+## Clean up resources
+++
+## Next steps
+
+In this tutorial, you learned how to set up your application to automatically reload secrets and certificates from Key Vault. To learn how to use Managed Identity to streamline access to App Configuration and Key Vault, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
To add a secret to the vault, you need to take just a few additional steps. In t
## Next steps
-In this tutorial, you created an App Configuration key that references a value stored in Key Vault. To learn how to add an Azure-managed service identity that streamlines access to App Configuration and Key Vault, continue to the next tutorial.
+In this tutorial, you created a key in App Configuration that references a secret stored in Key Vault.
+To learn how to automatically reload secrets and certificates from Key Vault, continue to the next tutorial:
> [!div class="nextstepaction"]
-> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
+> [Reload secrets and certificates from Key Vault automatically](./reload-key-vault-secrets-dotnet.md)
+
+To learn how to use Managed Identity to streamline access to App Configuration and Key Vault, refer to the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-arc Backup Restore Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/backup-restore-postgresql-hyperscale.md
Previously updated : 12/09/2020 Last updated : 06/02/2021
The Timestamp column indicates the point in time UTC at which the backup was tak
## Restore a backup

In this section, we show you how to do a full restore or a point-in-time restore. When you restore a full backup, you restore the entire content of the backup. When you do a point-in-time restore, you restore up to the point in time you indicate. Any transaction done later than this point in time is not restored.
+> [!CAUTION]
+> You can only restore a backup to a server group that has the same number of worker nodes the group had when the backup was taken. If you changed the number of worker nodes since the backup was taken, you need to increase or reduce the number of worker nodes, or create a new server group, to match the content of the backup before you restore. The restore fails when the number of worker nodes does not match.
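A minimal pre-restore guard for this worker-count requirement might look like the following in shell (the `can_restore` helper and the counts are hypothetical; the CLI does not ship such a check):

```shell
# Restores only succeed when the target server group has exactly as many
# worker nodes as the group had at backup time.
can_restore() {
  backup_workers=$1; current_workers=$2
  [ "$backup_workers" -eq "$current_workers" ]
}

can_restore 3 3 && echo "restore allowed"
can_restore 3 2 || echo "scale the server group to 3 workers before restoring"
```

Checking this yourself before launching a restore avoids waiting on a restore that is guaranteed to fail.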
+
### Restore a full backup

To restore the entire content of a backup run the command:

```console
azdata arc postgres backup delete --help
```

## Next steps
-- Read about [scaling out (adding worker nodes)](scale-out-postgresql-hyperscale-server-group.md) your server group
+- Read about [scaling out (adding worker nodes)](scale-out-in-postgresql-hyperscale-server-group.md) your server group
- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-hyperscale-server-group-using-cli.md) your server group
azure-arc Concepts Distributed Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/concepts-distributed-postgres-hyperscale.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
See details at [Table colocation](../../postgresql/concepts-hyperscale-colocatio
## Next steps
- [Read about creating Azure Arc enabled PostgreSQL Hyperscale](create-postgresql-hyperscale-server-group.md)
-- [Read about scaling out Azure Arc enabled PostgreSQL Hyperscale server groups created in your Arc Data Controller](scale-out-postgresql-hyperscale-server-group.md)
+- [Read about scaling out Azure Arc enabled PostgreSQL Hyperscale server groups created in your Arc Data Controller](scale-out-in-postgresql-hyperscale-server-group.md)
- [Read about Azure Arc enabled Data Services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services)
- [Read about Azure Arc](https://aka.ms/azurearc)
azure-arc Configure Security Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-security-postgres-hyperscale.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
If the AZDATA_PASSWORD **session** environment variable exists but has no value
For audit scenarios, configure your server group to use the `pgaudit` extension of Postgres. For more details about `pgaudit`, see the [`pgAudit` GitHub project](https://github.com/pgaudit/pgaudit/blob/master/README.md). To enable the `pgaudit` extension in your server group, read [Use PostgreSQL extensions](using-extensions-in-postgresql-hyperscale-server-group.md).
+
## Next steps
- See [`pgcrypto` extension](https://www.postgresql.org/docs/current/pgcrypto.html)
- See [Use PostgreSQL extensions](using-extensions-in-postgresql-hyperscale-server-group.md)
azure-arc Configure Server Parameters Postgresql Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/configure-server-parameters-postgresql-hyperscale.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
azdata arc postgres server edit -n postgres01 -e 'search_path = "$user"'
```

## Next steps
-- Read about [scaling out (adding worker nodes)](scale-out-postgresql-hyperscale-server-group.md) your server group
+- Read about [scaling out (adding worker nodes)](scale-out-in-postgresql-hyperscale-server-group.md) your server group
- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-hyperscale-server-group-using-cli.md) your server group
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Previously updated : 03/02/2021 Last updated : 06/02/2021
If you installed Azure Arc data controller in the past, on the same cluster and
# Cleanup azure arc data service artifacts
kubectl delete crd datacontrollers.arcdata.microsoft.com
kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com
-kubectl delete crd postgresql-11s.arcdata.microsoft.com
-kubectl delete crd postgresql-12s.arcdata.microsoft.com
+kubectl delete crd postgresqls.arcdata.microsoft.com
```

## Overview
The bootstrapper.yaml template file defaults to pulling the bootstrapper contain
- Add an image pull secret to the bootstrapper container. See example below.
- Change the image location for the bootstrapper image. See example below.
-The example below assumes that you created a image pull secret name `regcred` as indicated in the Kubernetes documentation.
+The example below assumes that you created an image pull secret named `arc-private-registry`.
```yaml
-#just showing only the relevant part of the bootstrapper.yaml template file here
-containers:
- - env:
- - name: ACCEPT_EULA
- value: "Y"
- #image: mcr.microsoft.com/arcdata/arc-bootstrapper:public-preview-dec-2020 <-- template value to change
- image: <your registry DNS name or IP address>/<your repo>/arc-bootstrapper:<your tag>
- imagePullPolicy: IfNotPresent
- name: bootstrapper
- resources: {}
- securityContext:
- runAsUser: 21006
- terminationMessagePath: /dev/termination-log
- terminationMessagePolicy: File
- dnsPolicy: ClusterFirst
+#Just showing only the relevant part of the bootstrapper.yaml template file here
+ spec:
+ serviceAccountName: sa-bootstrapper
+ nodeSelector:
+ kubernetes.io/os: linux
imagePullSecrets:
- - name: regcred
- restartPolicy: Always
- schedulerName: default-scheduler
- securityContext: {}
- serviceAccount: sa-mssql-controller
- serviceAccountName: sa-mssql-controller
- terminationGracePeriodSeconds: 30
-
+ - name: arc-private-registry #Create this image pull secret if you are using a private container registry
+ containers:
+ - name: bootstrapper
+ image: mcr.microsoft.com/arcdata/arc-bootstrapper:latest #Change this registry location if you are using a private container registry.
+ imagePullPolicy: Always
```

## Create a secret for the data controller administrator
Edit the following as needed:
The following example shows a completed data controller yaml file. Update the example for your environment, based on your requirements, and the information above.
-```yaml
+```yml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: sa-mssql-controller
+---
apiVersion: arcdata.microsoft.com/v1alpha1
kind: datacontroller
metadata:
metadata:
spec: credentials: controllerAdmin: controller-login-secret
- #dockerRegistry: arc-private-registry - optional if you are using a private container registry that requires authentication using an image pull secret
+ dockerRegistry: arc-private-registry #Create a registry secret named 'arc-private-registry' if you are going to pull from a private registry instead of MCR.
serviceAccount: sa-mssql-controller docker: imagePullPolicy: Always
- imageTag: public-preview-dec-2020
+ imageTag: latest
registry: mcr.microsoft.com repository: arcdata security:
spec:
- name: controller port: 30080
- serviceType: LoadBalancer
+ serviceType: LoadBalancer # Modify serviceType based on your Kubernetes environment
- name: serviceProxy port: 30777
- serviceType: LoadBalancer
+ serviceType: LoadBalancer # Modify serviceType based on your Kubernetes environment
settings: ElasticSearch: vm.max_map_count: "-1" azure:
- connectionMode: Indirect
- location: eastus
- resourceGroup: myresourcegroup
- subscription: c82c901a-129a-435d-86e4-cc6b294590ae
+ connectionMode: indirect
+ location: eastus # Choose a different Azure location if you want
+ resourceGroup: <your resource group>
+ subscription: <your subscription GUID>
controller: displayName: arc enableBilling: "True"
spec:
storage: data: accessMode: ReadWriteOnce
- className: default
+ className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
size: 15Gi logs: accessMode: ReadWriteOnce
- className: default
+ className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
size: 10Gi
```
kubectl get pods --namespace arc
You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues. ```console
-kubectl describe po/<pod name> --namespace arc
+kubectl describe pod/<pod name> --namespace arc
#Example:
-#kubectl describe po/control-2g7bl --namespace arc
+#kubectl describe pod/control-2g7bl --namespace arc
```

The Azure Arc extension for Azure Data Studio provides a notebook that walks you through setting up Azure Arc enabled Kubernetes and configuring it to monitor a git repository that contains a sample SQL Managed Instance yaml file. When everything is connected, a new SQL Managed Instance will be deployed to your Kubernetes cluster.
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
You may now implement the next step.
1. Accept the Privacy and license terms and click **Select** at the bottom
1. In the Deploy PostgreSQL Hyperscale server group - Azure Arc blade, enter the following information:
   - Enter a name for the server group
+ - The number of worker nodes
   - Enter and confirm a password for the _postgres_ administrator user of the server group
   - Select the storage class as appropriate for data
   - Select the storage class as appropriate for logs
This starts the creation of the Azure Arc enabled PostgreSQL Hyperscale server g
In a few minutes, your creation should successfully complete.
+### Important parameters you should consider:
+
+- **the number of worker nodes** you want to deploy to scale out and potentially achieve better performance. Before proceeding, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and the form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This creates three pods: one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
+++
+|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
+|||||
+|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+| | | | |
+
+While indicating 1 worker works, we do not recommend it. This deployment does not provide much value: you get two instances of Postgres, 1 coordinator and 1 worker, but because there is only a single worker the data is not actually scaled out, so you will not see increased performance and scalability. We will remove support for this deployment in a future release.
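The pod math from the table can be stated simply (the `pods_for` helper below is illustrative only, not a CLI command): a server group with n workers runs n + 1 pods.

```shell
# One pod for the coordinator instance plus one pod per worker instance.
pods_for() {
  workers=$1
  echo $(( workers + 1 ))
}

pods_for 2   # 3 pods: 1 coordinator + 2 workers
pods_for 4   # 5 pods: 1 coordinator + 4 workers
```

The single-instance deployments in the table (0 workers) are the degenerate case of the same formula: one pod total.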
+
+- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.
+ - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
+ - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
+    - to set the storage class for the backups: in this preview of Azure Arc enabled PostgreSQL Hyperscale, there are two ways to set storage classes, depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and a volume type (plus optional metadata depending on the volume type), separated by a colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
+      - if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class.
+ - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
++
## Next steps
- [Manage your server group using Azure Data Studio](manage-postgresql-hyperscale-server-group-with-azure-data-studio.md)
- [Monitor your server group](monitor-grafana-kibana.md)
In a few minutes, your creation should successfully complete.
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.

-- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
Previously updated : 04/28/2021 Last updated : 06/02/2021
To deploy and operate an Azure Arc enabled Postgres Hyperscale server group from
> [!IMPORTANT] > You can not operate an Azure Arc enabled PostgreSQL Hyperscale server group from the Azure portal if you deployed it to an Azure Arc data controller configured to use the *Indirect* connectivity mode.
-After you deployed an Arc data controller enabled for Direct connectivity mode:
-1. Open a browser to following URL [https://portal.azure.com](https://portal.azure.com)
+After you have deployed an Arc data controller enabled for Direct connectivity mode, choose one of the following three options to deploy an Azure Arc enabled Postgres Hyperscale server group:
+
+### Option 1: Deploy from the Azure Marketplace
+1. Open a browser to the following URL [https://portal.azure.com](https://portal.azure.com)
2. In the search window at the top of the page, search for "*azure arc postgres*" in the Azure Marketplace and select **Azure Database for PostgreSQL server groups - Azure Arc**. 3. In the page that opens, click **+ Create** at the top left corner. 4. Fill in the form as you would for any other Azure resource.
+### Option 2: Deploy from the Azure Database for PostgreSQL deployment option page
+1. Open a browser to the following URL https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer.
+2. Click the tile at the bottom right titled *Azure Arc enabled PostgreSQL Hyperscale (Preview)*.
+3. Fill in the form as you would for any other Azure resource.
+
+### Option 3: Deploy from the Azure Arc center
+1. Open a browser to the following URL https://ms.portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview
+1. From the center of the page, click [Deploy] under the tile titled *Deploy Azure services* and then click [Deploy] in the tile titled *PostgreSQL Hyperscale (Preview)*.
+2. Alternatively, from the navigation pane on the left of the page, in the Services section, click [PostgreSQL Hyperscale (Preview)] and then click [+ Create] at the top left of the pane.
++
+#### Important parameters you should consider:
+
+- **the number of worker nodes** you want to deploy to scale out and potentially reach better performance. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This creates three pods: one for the coordinator node/instance and two for the worker nodes/instances (one for each worker).
+++
+|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
+|||||
+|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantics of coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes, and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+| | | | |
+
+While indicating 1 worker works, we do not recommend it. This deployment provides little value: you get two instances of Postgres, one coordinator and one worker, but with a single worker you do not actually scale out the data, so you will not see increased performance and scalability. We will remove the support of this deployment in a future release.
-### Important parameters you should consider are:
+- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.
+ - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
+ - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
+ - to set the storage class for the backups: in this Preview of the Azure Arc enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
+ - if you plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class.
+ - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `-vcm` followed by the name of a volume claim and a volume type.
-- **The number of worker nodes** you want to deploy to scale out and potentially reach better performance. Before proceeding, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). For example, if you deploy a server group with two worker nodes, the deployment creates three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers). ## Next steps
After you deployed an Arc data controller enabled for Direct connectivity mode:
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale. -- [Scale out your Azure Arc enabled for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Arc enabled PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Create Postgresql Hyperscale Server Group Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-kubernetes-native-tools.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
metadata:
type: Opaque
---
apiVersion: arcdata.microsoft.com/v1alpha1
-kind: postgresql-12
+kind: postgresql
metadata:
- generation: 1
  name: pg1
spec:
  engine:
+ version: 12
  extensions:
  - name: citus
  scale:
- shards: 3
+ workers: 3
  scheduling:
    default:
      resources:
spec:
        requests:
          cpu: "1"
          memory: 2Gi
- service:
- type: LoadBalancer
+
+ primary:
+ type: LoadBalancer # Modify service type based on your Kubernetes environment
storage: backups:
- className: default
- size: 5Gi
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
data:
- className: default
- size: 5Gi
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
logs:
- className: default
- size: 1Gi
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
```

### Customizing the login and password
Creating the PostgreSQL Hyperscale server group will take a few minutes to compl
> The example commands below assume that you created a PostgreSQL Hyperscale server group named 'pg1' and Kubernetes namespace with the name 'arc'. If you used a different namespace/PostgreSQL Hyperscale server group name, you can replace 'arc' and 'pg1' with your names.

```console
-kubectl get postgresql-12/pg1 --namespace arc
+kubectl get postgresqls/pg1 --namespace arc
```

```console
kubectl get pods --namespace arc
You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.

```console
-kubectl describe po/<pod name> --namespace arc
+kubectl describe pod/<pod name> --namespace arc
#Example:
-#kubectl describe po/pg1-0 --namespace arc
+#kubectl describe pod/pg1-0 --namespace arc
```

## Troubleshooting creation problems
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
Previously updated : 02/11/2021 Last updated : 06/02/2021
The main parameters you should consider are:
- **the version of the PostgreSQL engine** you want to deploy: by default it is version 12. To deploy version 12, you can either omit this parameter or pass one of the following parameters: `--engine-version 12` or `-ev 12`. To deploy version 11, indicate `--engine-version 11` or `-ev 11`. -- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). To indicate the number of worker nodes to deploy, use the parameter `--workers` or `-w` followed by an integer greater or equal to 2. For example, if you want to deploy a server group with 2 worker nodes, indicate `--workers 2` or `-w 2`. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
+- **the number of worker nodes** you want to deploy to scale out and potentially reach better performance. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). To indicate the number of worker nodes to deploy, use the parameter `--workers` or `-w` followed by an integer. The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate `--workers 2` or `-w 2`. This creates three pods: one for the coordinator node/instance and two for the worker nodes/instances (one for each worker).
+++
+|You need |Shape of the server group you will deploy |-w parameter to use |Note |
+|||||
+|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |Use -w n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |Use -w 0 and load the Citus extension. Use the following parameters if deploying from command line: -w 0 --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantics of coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes, and distribute the data. |Use -w 0 or do not specify -w. |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+| | | | |
+
+While using -w 1 works, we do not recommend it. This deployment provides little value: you get two instances of Postgres, one coordinator and one worker, but with a single worker you do not actually scale out the data, so you will not see increased performance and scalability. We will remove the support of this deployment in a future release.
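The second row of the table above (a basic, single-node form of Postgres Hyperscale for functional validation) corresponds to a command along the lines of the following sketch; the server group name `pg1` is a placeholder:

```console
# Single Postgres instance acting as both coordinator and worker,
# with the Citus extension loaded. Suitable for functional validation only.
azdata arc postgres server create -n pg1 -w 0 --extensions Citus
```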
- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used. - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale. -- [Scale out your Azure Arc enabled for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Arc enabled for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Create Sql Managed Instance Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
Previously updated : 02/11/2021 Last updated : 06/02/2021
This is an example yaml file:
apiVersion: v1
data:
  password: <your base64 encoded password>
- username: <your base64 encoded user name. 'sa' is not allowed>
+ username: <your base64 encoded username>
kind: Secret
metadata:
  name: sql1-login-secret
---
apiVersion: sql.arcdata.microsoft.com/v1alpha1
kind: sqlmanagedinstance
metadata:
  name: sql1
+ annotations:
+ exampleannotation1: exampleannotationvalue1
+ exampleannotation2: exampleannotationvalue2
+ labels:
+ examplelabel1: examplelabelvalue1
+ examplelabel2: examplelabelvalue2
spec:
- limits:
- memory: 4Gi
- vcores: "4"
- requests:
- memory: 2Gi
- vcores: "1"
- service:
- type: LoadBalancer
+ scheduling:
+ default:
+ resources:
+ limits:
+ cpu: "2"
+ memory: 4Gi
+ requests:
+ cpu: "1"
+ memory: 2Gi
+
+ primary:
+ type: LoadBalancer
storage:
+ backups:
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
data:
- className: default
- size: 5Gi
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
+ datalogs:
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
logs:
- className: default
- size: 1Gi
+ volumes:
+ - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment
+ size: 5Gi
```

### Customizing the login and password
PowerShell
#Example
#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example'))
```

Linux/macOS
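On Linux/macOS, the same base64 encoding can be produced with the `base64` utility. A minimal sketch, where `arcadmin` is an example placeholder value:

```shell
# Base64-encode an example username for the Kubernetes secret.
# 'arcadmin' is a placeholder; encode your real username and password the same way.
echo -n 'arcadmin' | base64
# → YXJjYWRtaW4=
```

Note the `-n` flag: without it, `echo` appends a newline that would become part of the encoded value.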
kubectl create -n <your target namespace> -f <path to your yaml file>
#kubectl create -n arc -f C:\arc-data-services\sqlmi.yaml
```

## Monitoring the creation status

Creating the SQL managed instance will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
kubectl get pods --namespace arc
You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.

```console
-kubectl describe po/<pod name> --namespace arc
+kubectl describe pod/<pod name> --namespace arc
#Example:
-#kubectl describe po/sql1-0 --namespace arc
+#kubectl describe pod/sql1-0 --namespace arc
```

## Troubleshooting creation problems
azure-arc Delete Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/delete-azure-resources.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
This article describes how to delete resources from Azure.
> [!WARNING] > When you delete resources as described in this article, these actions are irreversible.
+## Before you delete
+
+Before you delete a resource such as Azure Arc SQL managed instance or Azure Arc data controller, you need to export and upload the usage information to Azure for accurate billing calculation by following the instructions described in [Upload billing data to Azure](view-billing-data-in-azure.md#upload-billing-data-to-azure).
+
+## Direct connectivity mode
+
+When a cluster is connected to Azure with direct connectivity mode, use the Azure portal to manage the resources. Use the portal for all create, read, update, and delete (CRUD) operations for the data controller, Managed Instance, and PostgreSQL Hyperscale server groups.
+
+See [Manage Azure resources by using the Azure portal](../../azure-resource-manager/management/manage-resources-portal.md).
+
+## Indirect connectivity mode
 In indirect connect mode, deleting an instance from Kubernetes will not remove it from Azure, and deleting an instance from Azure will not remove it from Kubernetes. For indirect connect mode, deleting a resource is a two-step process; this will be improved in the future. Kubernetes will be the source of truth and the portal will be updated to reflect it. In some cases, you may need to manually delete Azure Arc enabled data services resources in Azure. You can delete these resources using any of the following options.
az resource delete --name <data controller name> --resource-type Microsoft.Azure
### Delete a resource group using the Azure CLI
-You can also use the Azure CLI to [delete a resource group](../../azure-resource-manager/management/delete-resource-group.md).
+You can also use the Azure CLI to [delete a resource group](../../azure-resource-manager/management/delete-resource-group.md).
azure-arc Deploy Data Controller Direct Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/deploy-data-controller-direct-mode.md
az k8s-extension create -c "$ENV:resourceName" -g "$ENV:resourceGroup" --name "$
az k8s-extension show -g "$ENV:resourceGroup" -c "$ENV:resourceName" --name "$ENV:ADSExtensionName" --cluster-type connectedclusters ```
+#### Deploy Azure Arc data services extension using private container registry and credentials
+
+Use the following command if you are deploying from your private registry:
+
+```
+az k8s-extension create -c "<connected cluster name>" -g "<resource group>" --name "<extension name>" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --scope cluster --release-namespace "<namespace>" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=<registry info> --config imageCredentials.username=<username> --config systemDefaultValues.image=<registry/repo/arc-bootstrapper:<imagetag>> --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
+```
+
For example:
+```
+az k8s-extension create -c "my-connected-cluster" -g "my-resource-group" --name "arc-data-services" --cluster-type connectedClusters --extension-type microsoft.arcdataservices --scope cluster --release-namespace "arc" --config Microsoft.CustomLocation.ServiceAccount=sa-bootstrapper --config imageCredentials.registry=mcr.microsoft.com --config imageCredentials.username=arcuser --config systemDefaultValues.image=mcr.microsoft.com/arcdata/arc-bootstrapper:latest --config-protected imageCredentials.password=$ENV:DOCKER_PASSWORD --debug
+```
++ > [!NOTE] > The Arc data services extension install can take a couple of minutes to finish.
azure-arc Get Connection Endpoints And Connection Strings Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/get-connection-endpoints-and-connection-strings-postgres-hyperscale.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
postgres=#
> When this happens, you need to reconnect with azdata as explained above.

## From CLI with kubectl

-- If your server group is of Postgres version 12 (default), then the following command:

```console
-kubectl get postgresql-12/<server group name> -n <namespace name>
-```
-- If your server group is of Postgres version 11, then the following command:
-```console
-kubectl get postgresql-11/<server group name> -n <namespace name>
+kubectl get postgresqls/<server group name> -n <namespace name>
```

Those commands will produce output like the one below. You can use that information to form your connection strings:
host=192.168.1.121; dbname=postgres user=postgres password={your_password_here}
```

## Next steps

-- Read about [scaling out (adding worker nodes)](scale-out-postgresql-hyperscale-server-group.md) your server group
+- Read about [scaling out (adding worker nodes)](scale-out-in-postgresql-hyperscale-server-group.md) your server group
- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-hyperscale-server-group-using-cli.md) your server group
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for Postgres - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale. -- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Postgresql Hyperscale Server Group Placement On Kubernetes Cluster Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/postgresql-hyperscale-server-group-placement-on-kubernetes-cluster-nodes.md
Previously updated : 02/11/2021 Last updated : 06/02/2021
You can achieve this in several ways:
## Next steps
-[Scale out your Azure Arc enabled PostgreSQL Hyperscale server group by adding more worker nodes](scale-out-postgresql-hyperscale-server-group.md)
+[Scale out your Azure Arc enabled PostgreSQL Hyperscale server group by adding more worker nodes](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 05/04/2021 Last updated : 06/02/2021 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc enabled data services.
+## May 2021
-## April 2021
+This preview release is published on June 2, 2021.
-This preview release is published on April 29, 2021.
+As a preview feature, the technology presented in this article is subject to [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-### What's new
+### Breaking change
-This section describes the new features introduced or enabled for this release.
+- Kubernetes native deployment templates have been modified. Update your .yml templates.
+ - Updated templates for data controller, bootstrapper, & SQL Managed instance: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
+ - Updated templates for PostgreSQL Hyperscale: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
+
+### What's new
#### Platform -- Direct connected clusters automatically upload telemetry information automatically Azure.
+- Create and delete the data controller, SQL managed instance, and PostgreSQL Hyperscale server groups from the Azure portal.
+- Validate portal actions when deleting Azure Arc data services. For instance, the portal alerts when you attempt to delete the data controller when there are SQL Managed Instances deployed using the data controller.
+- Create custom configuration profiles to support custom settings when you deploy Arc enabled data controller using the Azure portal.
+- Optionally, automatically upload your logs to an Azure Log Analytics workspace in the directly connected mode.
#### Azure Arc enabled PostgreSQL Hyperscale -- Azure Arc enabled PostgreSQL Hyperscale is now supported in Direct connect mode. You now can deploy Azure Arc enabled PostgreSQL Hyperscale from the Azure Market Place in the Azure portal. -- Azure Arc enabled PostgreSQL Hyperscale ships with the Citus 10.0 extension which features columnar table storage-- Azure Arc enabled PostgreSQL Hyperscale now supports full user/role management.-- Azure Arc enabled PostgreSQL Hyperscale now supports additional extensions with `Tdigest` and `pg_partman`.-- Azure Arc enabled PostgreSQL Hyperscale now supports configuring vCore and memory settings per role of the PostgreSQL instance in the server group.-- Azure Arc enabled PostgreSQL Hyperscale now supports configuring database engine/server settings per role of the PostgreSQL instance in the server group.
+This release introduces the following features or capabilities:
+
+- Delete an Azure Arc PostgreSQL Hyperscale from the Azure portal when its Data Controller was configured for Direct connectivity mode.
+- Deploy Azure Arc enabled PostgreSQL Hyperscale from the Azure Database for PostgreSQL deployment page in the Azure portal. See [Select Azure Database for PostgreSQL deployment option - Microsoft Azure](https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer).
+- Specify storage classes and Postgres extensions when deploying Azure Arc enabled PostgreSQL Hyperscale from the Azure portal.
+- Reduce the number of worker nodes in your Azure Arc enabled PostgreSQL Hyperscale server group. You can do this operation (known as scale-in, as opposed to scale-out when you increase the number of worker nodes) from the `azdata` command line.
#### Azure Arc enabled SQL Managed Instance -- Restore a database to SQL Managed Instance with three replicas and it will be automatically added to the availability group. -- Connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
+- New [Azure CLI extension](/cli/azure/azure-cli-extensions-overview) for Arc enabled SQL Managed Instance has the same commands as `azdata arc sql mi <command>`. All Arc enabled SQL Managed Instance commands are located at `az sql mi-arc`. All Arc related `azdata` commands will be deprecated and moved to Azure CLI in a future release.
+
+ To add the extension:
+
+ ```azurecli
+ az extension add --source https://azurearcdatacli.blob.core.windows.net/cli-extensions/arcdata-0.0.1-py2.py3-none-any.whl -y
+ az sql mi-arc --help
+ ```
+
+- Manually trigger a failover using Transact-SQL. Run the following commands in order:
+
+ 1. On the primary replica endpoint connection:
+
+ ```sql
+ ALTER AVAILABILITY GROUP current SET (ROLE = SECONDARY);
+ ```
+
+ 1. On the secondary replica endpoint connection:
+
+ ```sql
+ ALTER AVAILABILITY GROUP current SET (ROLE = PRIMARY);
+ ```
+
+- The Transact-SQL `BACKUP` command is blocked unless the `COPY_ONLY` setting is used. This supports the point-in-time restore capability.
### Known issues -- You can create a data controller in direct connect mode with the Azure portal. Deployment with other Azure Arc enabled data services tools are not supported. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
+#### Platform
+
+- You can create a data controller, SQL managed instance, or PostgreSQL Hyperscale server group on a connected cluster with the Azure portal. Deployment with other Azure Arc enabled data services tools is not supported. Specifically, you can't deploy a data controller in direct connect mode with any of the following tools during this release.
- Azure Data Studio - Azure Data CLI (`azdata`) - Kubernetes native tools (`kubectl`) [Deploy Azure Arc data controller | Direct connect mode](deploy-data-controller-direct-mode.md) explains how to create the data controller in the portal.
+- You can still use `kubectl` to create resources directly on a Kubernetes cluster, however they will not be reflected in the Azure portal.
+ - In direct connected mode, upload of usage, metrics, and logs using `azdata arc dc upload` is currently blocked. Usage is automatically uploaded. Upload for data controller created in indirect connected mode should continue to work. - Automatic upload of usage data in direct connectivity mode will not succeed if using proxy via `--proxy-cert <path-to-cert-file>`. - Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale are not GB18030 certified. - Currently, only one Azure Arc data controller in direct connected mode per Kubernetes cluster is supported.
-#### Azure Arc enabled SQL Managed Instance
--- Deployment of Azure Arc enabled SQL Managed Instance in direct mode can only be done from the Azure portal, and is not available from tools such as azdata, Azure Data Studio, or kubectl.

#### Azure Arc enabled PostgreSQL Hyperscale

- Point in time restore is not supported for now on NFS storage.
This section describes the new features introduced or enabled for this release.
- Passing an invalid value to the `--extensions` parameter when editing the configuration of a server group to enable additional extensions incorrectly resets the list of enabled extensions to what it was at the time the server group was created, and prevents the user from enabling additional extensions. The only workaround available when that happens is to delete the server group and redeploy it.
+## April 2021
+
+This preview release is published on April 29, 2021.
+
+### What's new
+
+This section describes the new features introduced or enabled for this release.
+
+#### Platform
+
+- Direct connected clusters automatically upload telemetry information to Azure.
+
+#### Azure Arc enabled PostgreSQL Hyperscale
+
+- Azure Arc enabled PostgreSQL Hyperscale is now supported in Direct connect mode. You can now deploy Azure Arc enabled PostgreSQL Hyperscale from the Azure Marketplace in the Azure portal.
+- Azure Arc enabled PostgreSQL Hyperscale ships with the Citus 10.0 extension, which features columnar table storage.
+- Azure Arc enabled PostgreSQL Hyperscale now supports full user/role management.
+- Azure Arc enabled PostgreSQL Hyperscale now supports the additional extensions `tdigest` and `pg_partman`.
+- Azure Arc enabled PostgreSQL Hyperscale now supports configuring vCore and memory settings per role of the PostgreSQL instance in the server group.
+- Azure Arc enabled PostgreSQL Hyperscale now supports configuring database engine/server settings per role of the PostgreSQL instance in the server group.
+
+#### Azure Arc enabled SQL Managed Instance
+
+- Restore a database to a SQL Managed Instance with three replicas, and it will be automatically added to the availability group.
+- Connect to a secondary read-only endpoint on SQL Managed Instances deployed with three replicas. Use `azdata arc sql endpoint list` to see the secondary read-only connection endpoint.
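For example, for a hypothetical instance named `sqlmi1` (the `-n` parameter mirrors the PostgreSQL endpoint command and is an assumption here):

```console
azdata arc sql endpoint list -n sqlmi1
```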
+ ## March 2021 The March 2021 release was initially introduced on April 5th 2021, and the final stages of release were completed April 9th 2021.
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale. -- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
+
+ Title: Scale out and in your Azure Database for PostgreSQL Hyperscale server group
+description: Scale out and in your Azure Database for PostgreSQL Hyperscale server group
++++++ Last updated : 06/02/2021+++
+# Scale out and in your Azure Arc enabled PostgreSQL Hyperscale server group by adding more worker nodes
+This document explains how to scale out and scale in an Azure Arc enabled PostgreSQL Hyperscale server group. It does so by taking you through a scenario. **If you do not want to run through the scenario and just want to read about how to scale out, jump to the paragraph [Scale out](#scale-out)** or [Scale in](#scale-in).
+
+You scale out when you add Postgres instances (Postgres Hyperscale worker nodes) to your Azure Arc enabled PostgreSQL Hyperscale server group.
+
+You scale in when you remove Postgres instances (Postgres Hyperscale worker nodes) from your Azure Arc enabled PostgreSQL Hyperscale server group.
+++
+## Get started
+If you are already familiar with the scaling model of Azure Arc enabled PostgreSQL Hyperscale or Azure Database for PostgreSQL Hyperscale (Citus), you may skip this paragraph. If you are not, it is recommended you start by reading about this scaling model in the documentation of Azure Database for PostgreSQL Hyperscale (Citus). Azure Database for PostgreSQL Hyperscale (Citus) is the same technology, hosted as a service in Azure (platform as a service, also known as PaaS) instead of being offered as part of Azure Arc enabled Data Services:
+- [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
+- [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
+- [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
+- [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
+- [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
+- [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
+- [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+
+> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+
+## Scenario
+This scenario refers to the PostgreSQL Hyperscale server group that was created as an example in the [Create an Azure Arc enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md) documentation.
+
+### Load test data
+The scenario uses a sample of publicly available GitHub data, available from the [Citus Data website](https://www.citusdata.com/) (Citus Data is part of Microsoft).
+
+#### Connect to your Azure Arc enabled PostgreSQL Hyperscale server group
+
+##### List the connection information
+Connect to your Azure Arc enabled PostgreSQL Hyperscale server group by first getting the connection information:
+The general format of this command is:
+```console
+azdata arc postgres endpoint list -n <server name>
+```
+For example:
+```console
+azdata arc postgres endpoint list -n postgres01
+```
+
+Example output:
+
+```console
+[
+ {
+ "Description": "PostgreSQL Instance",
+ "Endpoint": "postgresql://postgres:<replace with password>@12.345.123.456:1234"
+ },
+ {
+ "Description": "Log Search Dashboard",
+ "Endpoint": "https://12.345.123.456:12345/kibana/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:\"postgres01\"'))"
+ },
+ {
+ "Description": "Metrics Dashboard",
+ "Endpoint": "https://12.345.123.456:12345/grafana/d/postgres-metrics?var-Namespace=arc3&var-Name=postgres01"
+ }
+]
+```
+
+##### Connect with the client tool of your choice.
+
+Run the following query to verify that you currently have two (or more) Hyperscale worker nodes, each corresponding to a Kubernetes pod:
+
+```sql
+SELECT * FROM pg_dist_node;
+```
+
+```console
+ nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
+--------+---------+-----------------------------------------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
+ 1 | 1 | pg1-1.pg1-svc.default.svc.cluster.local | 5432 | default | f | t | primary | default | f | t
+ 2 | 2 | pg1-2.pg1-svc.default.svc.cluster.local | 5432 | default | f | t | primary | default | f | t
+(2 rows)
+```
+
+#### Create a sample schema
+Create two tables by running the following query:
+
+```sql
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ user_id bigint,
+ org jsonb,
+ created_at timestamp
+);
+
+CREATE TABLE github_users
+(
+ user_id bigint,
+ url text,
+ login text,
+ avatar_url text,
+ gravatar_id text,
+ display_login text
+);
+```
+
+JSONB is the JSON datatype in binary form in PostgreSQL. It stores a flexible schema in a single column. PostgreSQL can index it with a GIN index, which indexes every key and value within the document. With a GIN index, it becomes fast and easy to query with various conditions directly on that payload. So we'll go ahead and create a couple of indexes before we load our data:
+
+```sql
+CREATE INDEX event_type_index ON github_events (event_type);
+CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
+```
+
+To shard standard tables, run a query for each table. Specify the table we want to shard, and the key we want to shard it on. We'll shard both the events and users tables on `user_id`:
+
+```sql
+SELECT create_distributed_table('github_events', 'user_id');
+SELECT create_distributed_table('github_users', 'user_id');
+```
+
+#### Load sample data
+Load the data with `COPY ... FROM PROGRAM`:
+
+```sql
+COPY github_users FROM PROGRAM 'curl "https://examples.citusdata.com/users.csv"' WITH ( FORMAT CSV );
+COPY github_events FROM PROGRAM 'curl "https://examples.citusdata.com/events.csv"' WITH ( FORMAT CSV );
+```
+
+#### Query the data
+Now measure how long a simple query takes with two nodes:
+
+```sql
+SELECT COUNT(*) FROM github_events;
+```
+Make a note of the query execution time.
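If you connect with `psql`, one convenient way to capture timings is the `psql` meta-command `\timing` (a sketch; `\timing` is a client feature, not part of SQL itself):

```sql
-- psql meta-command: print the elapsed time after every query
\timing on
SELECT COUNT(*) FROM github_events;
```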
++
+## Scale out
+The general format of the scale-out command is:
+```console
+azdata arc postgres server edit -n <server group name> -w <target number of worker nodes>
+```
++
+In this example, we increase the number of worker nodes from 2 to 4, by running the following command:
+
+```console
+azdata arc postgres server edit -n postgres01 -w 4
+```
+
+While the nodes are being added, you'll see a Pending state for the server group. For example:
+```console
+azdata arc postgres server list
+```
+
+```console
+Name        State        Workers
+----------  -----------  -------
+postgres01  Pending 4/5  4
+```
+
+Once the nodes are available, the Hyperscale Shard Rebalancer runs automatically, and redistributes the data to the new nodes. The scale-out operation is an online operation. While the nodes are added and the data is redistributed across the nodes, the data remains available for queries.
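If you want to observe the redistribution, one option is to query the Citus metadata; the following sketch assumes the standard `pg_dist_shard_placement` table is available in your server group:

```sql
-- Count shard placements per worker node; once rebalancing completes,
-- shards should be spread across all worker nodes.
SELECT nodename, count(*) AS shard_count
FROM pg_dist_shard_placement
GROUP BY nodename
ORDER BY nodename;
```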
+
+### Verify the new shape of the server group (optional)
+Use any of the methods below to verify that the server group is now using the additional worker nodes you added.
+
+#### With azdata:
+Run the command:
+```console
+azdata arc postgres server list
+```
+
+It returns the list of server groups created in your namespace and indicates their number of worker nodes. For example:
+```console
+Name        State  Workers
+----------  -----  -------
+postgres01  Ready  4
+```
+
+#### With kubectl:
+Run the command:
+```console
+kubectl get postgresqls
+```
+
+It returns the list of server groups created in your namespace and indicates their number of worker nodes. For example:
+```console
+NAME STATE READY-PODS EXTERNAL-ENDPOINT AGE
+postgres01 Ready 4/4 10.0.0.4:31066 4d20h
+```
+
+#### With a SQL query:
+Connect to your server group with the client tool of your choice and run the following query:
+
+```sql
+SELECT * FROM pg_dist_node;
+```
+
+```console
+ nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
+--------+---------+-----------------------------------------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
+ 1 | 1 | pg1-1.pg1-svc.default.svc.cluster.local | 5432 | default | f | t | primary | default | f | t
+ 2 | 2 | pg1-2.pg1-svc.default.svc.cluster.local | 5432 | default | f | t | primary | default | f | t
+ 3 | 3 | pg1-3.pg1-svc.default.svc.cluster.local | 5432 | default | f | t | primary | default | f | t
+ 4 | 4 | pg1-4.pg1-svc.default.svc.cluster.local | 5432 | default | f | t | primary | default | f | t
+(4 rows)
+```
+
+## Return to the scenario
+
+If you would like to compare the execution time against the two-node baseline, run the same count query on the sample data set. It now executes across the four worker nodes, without any change to the SQL statement.
+
+```sql
+SELECT COUNT(*) FROM github_events;
+```
+Note the execution time.
++
+> [!NOTE]
+> Depending on your environment - for example if you have deployed your test server group with `kubeadm` on a single node VM - you may see a modest improvement in the execution time. To get a better idea of the type of performance improvement you could reach with Azure Arc enabled PostgreSQL Hyperscale, watch the following short videos:
+>* [High performance HTAP with Azure PostgreSQL Hyperscale (Citus)](https://www.youtube.com/watch?v=W_3e07nGFxY)
+>* [Building HTAP applications with Python & Azure PostgreSQL Hyperscale (Citus)](https://www.youtube.com/watch?v=YDT8_riLLs0)
+
+## Scale in
+To scale in (reduce the number of worker nodes in your server group), use the same command as for scaling out, but indicate a smaller number of worker nodes. The worker nodes that are removed are the ones most recently added to the server group. When you run this command, the system moves the data out of the nodes that are removed and redistributes (rebalances) it automatically across the remaining nodes.
+
+The general format of the scale-in command is:
+```console
+azdata arc postgres server edit -n <server group name> -w <target number of worker nodes>
+```
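For example, to bring the server group from the scenario back from four worker nodes to two:

```console
azdata arc postgres server edit -n postgres01 -w 2
```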
++
+The scale-in operation is an online operation. Your applications continue to access the data with no downtime while the nodes are removed and the data is redistributed across the remaining nodes.
+
+## Next steps
+
+- Read about how to [scale up and down (memory, vCores) your Azure Arc enabled PostgreSQL Hyperscale server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md)
+- Read about how to set server parameters in your Azure Arc enabled PostgreSQL Hyperscale server group
+- Read the concepts and how-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale:
+ * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
+ * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
+ * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
+ * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+
+ > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc enabled PostgreSQL Hyperscale.
+
+- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)
+- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Scale Up Down Postgresql Hyperscale Server Group Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-up-down-postgresql-hyperscale-server-group-using-cli.md
Previously updated : 09/22/2020 Last updated : 06/02/2021 # Scale up and down an Azure Database for PostgreSQL Hyperscale server group using CLI (azdata or kubectl)
azdata arc postgres server edit -n postgres01 --cores-request coordinator='',wor
## Next steps -- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md) - [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Show Configuration Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/show-configuration-postgresql-hyperscale-server-group.md
Previously updated : 09/22/2020 Last updated : 06/02/2021
Returns the below output in a format and content very similar to the one returned
## Next steps - [Read about the concepts of Azure Arc enabled PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md)-- [Read about how to scale out (add worker nodes) a server group](scale-out-postgresql-hyperscale-server-group.md)
+- [Read about how to scale out (add worker nodes) a server group](scale-out-in-postgresql-hyperscale-server-group.md)
- [Read about how to scale up/down (increase or reduce memory and/or vCores) a server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md) - [Read about storage configuration](storage-configuration.md) - [Read how to monitor a database instance](monitor-grafana-kibana.md)
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
After installing the Connected Machine agent for Linux, the following system-wid
|Service name |Display name |Process name |Description | |-|-|-|| |himdsd.service |Azure Connected Machine Agent Service |himds |This service implements the Azure Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |gcad.servce |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
+ |gcad.service |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
|extd.service |Extension Service |gc_linux_service | Installs the required extensions targeting the machine.| * There are several log files available for troubleshooting. They are described in the following table.
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
Title: Best practices for Azure Cache for Redis description: Learn how to use your Azure Cache for Redis effectively by following these best practices.-+ Last updated 01/06/2020-+ # Best practices for Azure Cache for Redis
By following these best practices, you can help maximize the performance and cos
* **Use TLS encryption** - Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible. If your client library or tool doesn't support TLS, then enabling unencrypted connections can be done [through the Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In such cases where encrypted connections aren't possible, placing your cache and client application into a virtual network would be recommended. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
-* **Idle Timeout** - Azure Redis currently has 10 minute idle timeout for connections, so your setting should be to less than 10 minutes.
+* **Idle Timeout** - Azure Cache for Redis currently has a 10-minute idle timeout for connections, so your setting should be less than 10 minutes. Most common client libraries have a keep-alive configuration that pings Azure Cache for Redis automatically. However, in clients that don't have a keep-alive setting, customer applications are responsible for keeping the connection alive.
## Memory management
If you would like to test how your code works under error conditions, consider u
## Performance testing
-* **Start by using `redis-benchmark.exe`** to get a feel for possible throughput/latency before writing your own perf tests. Redis-benchmark documentation can be [found here](https://redis.io/topics/benchmarks). Note that `redis-benchmark.exe` doesn't support TLS. You'll have to [enable the Non-TLS port through the Portal](cache-configure.md#access-ports) before you run the test. A windows compatible version of redis-benchmark.exe can be found [here](https://github.com/MSOpenTech/redis/releases).
+* **Start by using `redis-benchmark.exe`** to get a feel for possible throughput/latency before writing your own perf tests. Redis-benchmark documentation can be [found here](https://redis.io/topics/benchmarks). `redis-benchmark.exe` doesn't support TLS. You'll have to [enable the Non-TLS port through the Portal](cache-configure.md#access-ports) before you run the test. A Windows-compatible version of redis-benchmark.exe can be found [here](https://github.com/MSOpenTech/redis/releases).
* The client VM used for testing should be **in the same region** as your Redis cache instance.
* **We recommend using Dv2 VM Series** for your client as they have better hardware and will give the best results.
* Make sure the client VM you use has **at least as much compute and bandwidth** as the cache being tested.
-* **Test under failover conditions** on your cache. It's important to ensure that you don't performance test your cache only under steady state conditions. Test under failover conditions, too, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see how your application behaves in terms of throughput and latency during failover conditions. Failover can happen during updates and during an unplanned event. Ideally you don't want to see CPU/Server Load peak to more than say 80% even during a failover as that can affect performance.
+* **Test under failover conditions** on your cache. It's important to ensure that you don't test the performance of your cache only under steady state conditions. Test under failover conditions, too, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see how your application behaves in terms of throughput and latency during failover conditions. Failover can happen during updates and during an unplanned event. Ideally you don't want to see CPU/Server Load peak to more than say 80% even during a failover as that can affect performance.
* **Some cache sizes** are hosted on VMs with four or more cores. Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. [See here for details around VM sizes and cores](cache-planning-faq.md#azure-cache-for-redis-performance) * **Enable VRSS** on the client machine if you are on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)). Example PowerShell script: >PowerShell -ExecutionPolicy Unrestricted Enable-NetAdapterRSS -Name ( Get-NetAdapter).Name
azure-cache-for-redis Cache Web App Arm With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-arm-with-redis-cache-provision.md
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-In this topic, you will learn how to create an Azure Resource Manager template that deploys an Azure Web App with Azure Cache for Redis. You will learn how to define which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this template for your own deployments, or customize it to meet your requirements.
+In this article, you learn how to create an Azure Resource Manager template that deploys an Azure Web App with Azure Cache for Redis.
+You learn the following deployment details:
+
+- how to define which resources are deployed
+- how to define parameters that are specified when the deployment is executed
+
+You can use this template for your own deployments, or customize it to meet your requirements.
For more information about creating templates, see [Authoring Azure Resource Manager Templates](../azure-resource-manager/templates/template-syntax.md). To learn about the JSON syntax and properties for cache resource types, see [Microsoft.Cache resource types](/azure/templates/microsoft.cache/allversions). For the complete template, see [Web App with Azure Cache for Redis template](https://github.com/Azure/azure-quickstart-templates/blob/master/201-web-app-with-redis-cache/azuredeploy.json). ## What you will deploy
-In this template, you will deploy:
+In this template, you deploy:
* Azure Web App * Azure Cache for Redis
-To run the deployment automatically, click the following button:
+To run the deployment automatically, select the following button:
[![Deploy to Azure](./media/cache-web-app-arm-with-redis-cache-provision/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-web-app-with-redis-cache%2Fazuredeploy.json)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-whats-new.md
## Azure TLS Certificate Change
-Microsoft is updating Azure services to use TLS server certificates from a different set of Certificate Authorities (CAs). This change is rolled out in phases from August 13, 2020 to October 26, 2020 (estimated). Azure is making this change because [the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements](https://bugzilla.mozilla.org/show_bug.cgi?id=1649951). The problem was reported on July 1, 2020 and applies to multiple popular Public Key Infrastructure (PKI) providers worldwide. Most TLS certificates used by Azure services today come from the *Baltimore CyberTrust Root* PKI. The Azure Cache for Redis service will continue to be chained to the Baltimore CyberTrust Root. Its TLS server certificates, however, will be issued by new Intermediate Certificate Authorities (ICAs) starting on October 12, 2020.
+Microsoft is updating Azure services to use TLS server certificates from a different set of Certificate Authorities (CAs). This change is rolled out in phases from August 13, 2020 to October 26, 2020 (estimated). Azure is making this change because [the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements](https://bugzilla.mozilla.org/show_bug.cgi?id=1649951). The problem was reported on July 1, 2020 and applies to multiple popular Public Key Infrastructure (PKI) providers worldwide. Most TLS certificates used by Azure services today come from the *Baltimore CyberTrust Root* PKI. The Azure Cache for Redis service will continue to be chained to the Baltimore CyberTrust Root. Its TLS server certificates, however, will be issued by new Intermediate Certificate Authorities (ICAs) starting on October 12, 2020.
> [!NOTE] > This change is limited to services in public [Azure regions](https://azure.microsoft.com/global-infrastructure/geographies/). It excludes sovereign (e.g., China) or government clouds.
The following table provides information about the certificates that are being r
### What actions should I take?
-If your application uses the operating system certificate store or pins the Baltimore root among others, no action is needed. On the other hand, if your application pins any intermediate or leaf TLS certificate, we recommend that you pin the following roots:
+If your application uses the operating system certificate store or pins the Baltimore root among others, no action is needed.
+
+If your application pins any intermediate or leaf TLS certificate, we recommend you pin the following roots:
| Certificate | Thumbprint | | -- | -- |
If your application uses the operating system certificate store or pins the Balt
> >
-To continue to pin intermediate certificates, add the following to the pinned intermediate certificates list, which includes few additional ones to minimize future changes:
+To continue to pin intermediate certificates, add the following to the pinned intermediate certificates list, which includes a few more to minimize future changes:
| Common name of the CA | Thumbprint | | -- | -- |
To continue to pin intermediate certificates, add the following to the pinned in
| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 6c3af02e7f269aa73afd0eff2a88a4a1f04ed1e5 | | [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 30e01761ab97e59a06b41ef20af6f2de7ef4f7b0 |
-If your application validates certificate in code, you will need to modify it to recognize the properties (e.g., Issuers, Thumbprint) of the newly pinned certificates. This extra verification should cover all pinned certificates to be more future-proof.
+If your application validates certificates in code, you need to modify it to recognize the properties (for example, Issuers, Thumbprint) of the newly pinned certificates. This extra verification should cover all pinned certificates to be more future-proof.
## Next steps
-If you have additional questions, contact us through [support](https://azure.microsoft.com/support/options/).
+If you have more questions, contact us through [support](https://azure.microsoft.com/support/options/).
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-networking-options.md
To learn how to control the outbound IP using a virtual network, see [Tutorial:
The following APIs let you programmatically manage regional virtual network integrations: + **Azure CLI**: Use the [`az functionapp vnet-integration`](/cli/azure/functionapp/vnet-integration) commands to add, list, or remove a regional virtual network integration.
-+ **ARM templates**: Regional virtual network integration can be enabled by using an Azure Resource Manager template. For a full example, see [this Functions quickstart template](https://azure.microsoft.com/resources/templates/101-function-premium-vnet-integration/).
++ **ARM templates**: Regional virtual network integration can be enabled by using an Azure Resource Manager template. For a full example, see [this Functions quickstart template](https://azure.microsoft.com/resources/templates/function-premium-vnet-integration/). ## Troubleshooting
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
Some connections in Azure Functions are configured to use an identity instead of
Identity-based connections are supported by the following trigger and binding extensions in all plans:
+> [!NOTE]
+> Identity-based connections are not supported with Durable Functions.
+
| Extension name | Extension version |
|-|-|
| Azure Blob | [Version 5.0.0-beta1 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) |
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-facility-ontology.md
zone_pivot_groups: facility-ontology-schema
Facility ontology defines how Azure Maps Creator internally stores facility data in a Creator dataset. In addition to defining internal facility data structure, facility ontology is also exposed externally through the WFS API. When WFS API is used to query facility data in a dataset, the response format is defined by the ontology supplied to that dataset.
-At a high level, facility ontology divides the dataset into feature classes. All feature classes share a common set of properties, such as `ID` and `Geometry`. In addition to the common property set, each feature class defines a set of properties. Each property is defined by its data type and constraints. Some feature classes have properties that are dependant on other feature classes. Dependant properties evaluate to the `ID` of another feature class.
+At a high level, facility ontology divides the dataset into feature classes. All feature classes share a common set of properties, such as `ID` and `Geometry`. In addition to the common property set, each feature class defines a set of properties. Each property is defined by its data type and constraints. Some feature classes have properties that are dependent on other feature classes. Dependent properties evaluate to the `ID` of another feature class.
## Changes and Revisions
The `unit` feature class defines a physical and non-overlapping area that can be
|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. |
|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
-|`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature." |
+|`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature. |
|`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
The `zone` feature class defines a virtual area, like a WiFi zone or emergency a
## level
-The `level` class feature defines aAn area of a building at a set elevation. For example, the floor of a building, which contains a set of features, such as [`units`](#unit).
+The `level` class feature defines an area of a building at a set elevation. For example, the floor of a building, which contains a set of features, such as [`units`](#unit).
**Geometry Type**: MultiPolygon
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-services-concepts.md
Some of the Weather service (Preview) APIs allow user to specify if the data is
|20 |percent |
|21 |float |
|22 |integer |
-
+|31 |MicrogramsPerCubicMeterOfAir |
## Weather icons
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 04/08/2021 Last updated : 06/03/2021

# Supported resources for metric alerts in Azure Monitor
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Automation/automationAccounts | Yes| No | [Automation Accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) |
|Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md#microsoftavsprivateclouds) |
|Microsoft.Batch/batchAccounts | Yes | No | [Batch Accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) |
+|Microsoft.Bing/accounts | Yes | No | [Bing Accounts](../essentials/metrics-supported.md#microsoftbingaccounts) |
|Microsoft.BotService/botServices | Yes | No | [Bot Services](../essentials/metrics-supported.md#microsoftbotservicebotservices) |
|Microsoft.Cache/redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) |
|microsoft.Cdn/profiles | Yes | No | [CDN Profiles](../essentials/metrics-supported.md#microsoftcdnprofiles) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Compute/cloudServices/roles | Yes | No | [Cloud Service Roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) |
|Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
|Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Virtual Machine Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) |
+|Microsoft.ConnectedVehicle/platformAccounts | Yes | No |[Connected Vehicle Platform Accounts](../essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) |
|Microsoft.ContainerInstance/containerGroups | Yes| No | [Container Groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
|Microsoft.ContainerRegistry/registries | No | No | [Container Registries](../essentials/metrics-supported.md#microsoftcontainerregistryregistries) |
|Microsoft.ContainerService/managedClusters | Yes | No | [Managed Clusters](../essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) |
azure-monitor Alerts Metric Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-overview.md
# Understand how metric alerts work in Azure Monitor
-Metric alerts in Azure Monitor work on top of multi-dimensional metrics. These metrics could be [platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported), [custom metrics](../essentials/metrics-custom-overview.md), [popular logs from Azure Monitor converted to metrics](./alerts-metric-logs.md) and Application Insights metrics. Metric alerts evaluate at regular intervals to check if conditions on one or more metric time-series are true and notify you when the evaluations are met. Metric alerts are stateful, that is, they only send out notifications when the state changes.
+Metric alerts in Azure Monitor work on top of multi-dimensional metrics. These metrics could be [platform metrics](alerts-metric-near-real-time.md#metrics-and-dimensions-supported), [custom metrics](../essentials/metrics-custom-overview.md), [popular logs from Azure Monitor converted to metrics](./alerts-metric-logs.md) and Application Insights metrics. Metric alerts evaluate at regular intervals to check if conditions on one or more metric time-series are true and notify you when the evaluations are met. Metric alerts are stateful by default, that is, they only send out notifications when the state changes (fired, resolved). If you want to make them stateless, see [make metric alerts occur every time my condition is met](alerts-troubleshoot-metric.md#make-metric-alerts-occur-every-time-my-condition-is-met).
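The stateful behavior described above can be sketched in a few lines. This is an illustrative model only, not Azure Monitor's implementation; the `evaluate` helper and its simple threshold rule are hypothetical:

```python
# Stateful alerting: notify only on state transitions (Fired/Resolved),
# not on every evaluation where the condition holds.
def evaluate(series: list[float], threshold: float) -> list[str]:
    notifications = []
    fired = False
    for value in series:
        breached = value > threshold
        if breached and not fired:
            notifications.append("Fired")   # condition newly met
            fired = True
        elif not breached and fired:
            notifications.append("Resolved")  # condition no longer met
            fired = False
    return notifications
```

Even though the threshold is breached on three consecutive evaluations, a stateful rule notifies once on the transition into the fired state and once on resolution.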
## How do metric alerts work?
Increasing look-back periods and number of violations can also allow filtering a
> - Metric alert rule that monitors multiple resources – When a new resource is added to the scope
> - Metric alert rule that monitors a metric that isn't emitted continuously (sparse metric) – When the metric is emitted after a period longer than 24 hours in which it wasn't emitted

## Monitoring at scale using metric alerts in Azure Monitor

So far, you have seen how a single metric alert could be used to monitor one or many metric time-series related to a single Azure resource. Many times, you might want the same alert rule applied to many resources. Azure Monitor also supports monitoring multiple resources (of the same type) with one metric alert rule, for resources that exist in the same Azure region.
For metric alerts, typically you will get notified in under 5 minutes if you set
You can find the full list of supported resource types in this [article](./alerts-metric-near-real-time.md#metrics-and-dimensions-supported).

## Next steps

- [Learn how to create, view, and manage metric alerts in Azure](../alerts/alerts-metric.md)
-- [Learn how to create alerts within Azure Montior Metrics Explorer](../essentials/metrics-charts.md#alert-rules)
+- [Learn how to create alerts within Azure Monitor Metrics Explorer](../essentials/metrics-charts.md#alert-rules)
- [Learn how to deploy metric alerts using Azure Resource Manager templates](./alerts-metric-create-templates.md)
- [Learn more about action groups](./action-groups.md)
- [Learn more about Dynamic Thresholds condition type](../alerts/alerts-dynamic-thresholds.md)
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 06/02/2021 Last updated : 06/03/2021

# Troubleshooting problems in Azure Monitor metric alerts
Consider the following restrictions for metric alert rule names:
- Metric alert rule names must be unique within a resource group
- Metric alert rule names can't contain the following characters: * # & + : < > ? @ % { } \ /
- Metric alert rule names can't end with a space or a period
-- The combined resource group name and alert rule name can't exceed 253 characters
+- The combined resource group name and alert rule name can't exceed 252 characters
> [!NOTE]
> If the alert rule name contains characters that aren't alphabetic or numeric (for example: spaces, punctuation marks or symbols), these characters may be URL-encoded when retrieved by certain clients.
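For illustration, these restrictions can be checked before creating a rule. A minimal sketch, using the updated 252-character combined limit; the helper name and return format are hypothetical, and uniqueness within the resource group must be checked separately:

```python
# Characters that metric alert rule names can't contain.
FORBIDDEN_CHARS = set('*#&+:<>?@%{}\\/')

def validate_alert_rule_name(resource_group: str, rule_name: str) -> list[str]:
    """Return the list of naming restrictions the rule name violates."""
    problems = []
    if any(ch in FORBIDDEN_CHARS for ch in rule_name):
        problems.append("contains a forbidden character")
    if rule_name.endswith((" ", ".")):
        problems.append("ends with a space or a period")
    if len(resource_group) + len(rule_name) > 252:
        problems.append("combined length exceeds 252 characters")
    return problems
```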
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
If you are using Azure Network Security Groups, simply add an **inbound port rul
Open ports 80 (http) and 443 (https) for incoming traffic from these addresses (IP addresses are grouped by location):
-### Addresses grouped by location
+### IP Addresses
+
+If you're looking for the actual IP addresses so you can add them to the list of allowed IPs in your firewall, download the JSON files describing Azure IP ranges. These files contain the most up-to-date information.
+
+After downloading the appropriate file, open it using your favorite text editor and search for "ApplicationInsightsAvailability" to go straight to the section of the file describing the service tag for availability tests.
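Instead of searching the file by hand, the relevant prefixes can also be pulled out programmatically. A minimal sketch, assuming the downloaded file follows the usual service-tags layout (a top-level `values` array whose entries carry `name` and `properties.addressPrefixes`); `availability_test_ranges` is a hypothetical helper:

```python
import json

def availability_test_ranges(service_tags_json: str) -> list[str]:
    """Pull the address prefixes for the ApplicationInsightsAvailability
    service tag out of a downloaded Azure IP-ranges file."""
    doc = json.loads(service_tags_json)
    for tag in doc.get("values", []):
        if tag.get("name") == "ApplicationInsightsAvailability":
            return tag["properties"]["addressPrefixes"]
    return []

# Minimal stand-in for the real file, which is much larger.
sample = json.dumps({
    "values": [
        {"name": "ActionGroup",
         "properties": {"addressPrefixes": ["13.66.60.119/32"]}},
        {"name": "ApplicationInsightsAvailability",
         "properties": {"addressPrefixes": ["13.86.97.224/28", "20.37.195.0/24"]}},
    ]
})
```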
> [!NOTE]
> These addresses are listed using Classless Inter-Domain Routing (CIDR) notation. This means that an entry like `51.144.56.112/28` is equivalent to 16 IPs starting at `51.144.56.112` and ending at `51.144.56.127`.
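The arithmetic in the note can be checked with Python's standard `ipaddress` module:

```python
import ipaddress

# A /28 leaves 32 - 28 = 4 host bits, so it covers 2**4 = 16 addresses.
net = ipaddress.ip_network("51.144.56.112/28")
print(net.num_addresses)  # 16
print(net[0], net[-1])    # 51.144.56.112 51.144.56.127
```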
-```
-Australia East
-20.40.124.176/28
-20.40.124.240/28
-20.40.125.80/28
-
-Brazil South
-191.233.26.176/28
-191.233.26.128/28
-191.233.26.64/28
-
-France Central (Formerly France South)
-20.40.129.96/28
-20.40.129.112/28
-20.40.129.128/28
-20.40.129.144/28
-
-France Central
-20.40.129.32/28
-20.40.129.48/28
-20.40.129.64/28
-20.40.129.80/28
-
-East Asia
-52.229.216.48/28
-52.229.216.64/28
-52.229.216.80/28
-
-North Europe
-52.158.28.64/28
-52.158.28.80/28
-52.158.28.96/28
-52.158.28.112/28
-
-Japan East
-52.140.232.160/28
-52.140.232.176/28
-52.140.232.192/28
-
-West Europe
-51.144.56.96/28
-51.144.56.112/28
-51.144.56.128/28
-51.144.56.144/28
-51.144.56.160/28
-51.144.56.176/28
-
-UK South
-51.105.9.128/28
-51.105.9.144/28
-51.105.9.160/28
-
-UK West
-20.40.104.96/28
-20.40.104.112/28
-20.40.104.128/28
-20.40.104.144/28
-
-Southeast Asia
-52.139.250.96/28
-52.139.250.112/28
-52.139.250.128/28
-52.139.250.144/28
-
-West US
-40.91.82.48/28
-40.91.82.64/28
-40.91.82.80/28
-40.91.82.96/28
-40.91.82.112/28
-40.91.82.128/28
-
-Central US
-13.86.97.224/28
-13.86.97.240/28
-13.86.98.48/28
-13.86.98.0/28
-13.86.98.16/28
-13.86.98.64/28
-
-North Central US
-23.100.224.16/28
-23.100.224.32/28
-23.100.224.48/28
-23.100.224.64/28
-23.100.224.80/28
-23.100.224.96/28
-23.100.224.112/28
-23.100.225.0/28
-
-South Central US
-20.45.5.160/28
-20.45.5.176/28
-20.45.5.192/28
-20.45.5.208/28
-20.45.5.224/28
-20.45.5.240/28
-
-East US
-20.42.35.32/28
-20.42.35.64/28
-20.42.35.80/28
-20.42.35.96/28
-20.42.35.112/28
-20.42.35.128/28
-
-```
-
-#### Azure Government
-
-Not needed if you are an Azure Public cloud customer.
-
-```
-USGov Virginia
-52.227.229.80/31
--
-USGov Arizona
-52.244.35.112/31
--
-USGov Texas
-52.243.157.80/31
--
-USDoD Central
-52.182.23.96/31
--
-USDoD East
-52.181.33.96/31
-
-```
+#### Azure Public Cloud
+Download [Public Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+
+#### Azure US Government Cloud
+Download [Government Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57063).
+
+#### Azure China Cloud
+Download [China Cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57062).
+
+### Discovery API
+You may also want to [programmatically retrieve](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api-public-preview) the current list of service tags together with IP address range details.
## Application Insights & Log Analytics APIs
Managing changes to Source IP addresses can be quite time consuming. Using **Ser
| | | | |
| Agent | agent.azureserviceprofiler.net<br/>*.agent.azureserviceprofiler.net | 20.190.60.38<br/>20.190.60.32<br/>52.173.196.230<br/>52.173.196.209<br/>23.102.44.211<br/>23.102.45.216<br/>13.69.51.218<br/>13.69.51.175<br/>138.91.32.98<br/>138.91.37.93<br/>40.121.61.208<br/>40.121.57.2<br/>51.140.60.235<br/>51.140.180.52<br/>52.138.31.112<br/>52.138.31.127<br/>104.211.90.234<br/>104.211.91.254<br/>13.70.124.27<br/>13.75.195.15<br/>52.185.132.101<br/>52.185.132.170<br/>20.188.36.28<br/>40.89.153.171<br/>52.141.22.239<br/>52.141.22.149<br/>102.133.162.233<br/>102.133.161.73<br/>191.232.214.6<br/>191.232.213.239 | 443
| Portal | gateway.azureserviceprofiler.net | dynamic | 443
-| Storage | *.core.windows.net | dynamic | 443
+| Storage | *.core.windows.net | dynamic | 443
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-web-app-availability.md
The following population tags can be used for the geo-location attribute when de
| USDoD East | usgov-ddeast-azr |
| USDoD Central | usgov-ddcentral-azr |
+### Azure China
+
+| Display Name | Population Name |
+|-||
+| China East | mc-cne-azr |
+| China East 2 | mc-cne2-azr |
+| China North | mc-cnn-azr |
+| China North 2 | mc-cnn2-azr |
+
#### Azure

| Display Name | Population Name |
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 05/27/2021 Last updated : 06/02/2021
The default pricing for Log Analytics is a **Pay-As-You-Go** model based on data
- Number of VMs monitored
- Type of data collected from each monitored VM
-In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers** which enable you to save as much as 25% compared to the Pay-As-You-Go price. The commitment tier pricing enables you to make a commitment to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) will be billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period. During the commitment period, you can change to a higher commitment tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower commitment tier until after the commitment period is finished. Billing for the commitment tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Commitment Tier pricing.
+In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers** which enable you to save as much as 30% compared to the Pay-As-You-Go price. The commitment tier pricing enables you to make a commitment to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) will be billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period. During the commitment period, you can change to a higher commitment tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower commitment tier until after the commitment period is finished. Billing for the commitment tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Commitment Tier pricing.
> [!NOTE]
-> Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Additionally, three new larger commitment tiers have been added at 1000, 2000 and 5000 GB/day.
+> Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Additionally, three new larger commitment tiers have been added at 1000, 2000 and 5000 GB/day.
In all pricing tiers, an event's data size is calculated from a string representation of the properties which are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
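As a rough illustration of these rules: the exact string serialization Log Analytics uses is internal, so this sketch only mirrors the stated exclusions and free data types, and `billed_size_bytes` is a hypothetical helper:

```python
# Standard properties excluded from the event-size calculation.
EXCLUDED = {"_ResourceId", "_SubscriptionId", "_ItemId",
            "_IsBillable", "_BilledSize", "Type"}
# Data types free from data ingestion charges.
FREE_TYPES = {"AzureActivity", "Heartbeat", "Usage"}

def billed_size_bytes(event: dict) -> int:
    """Approximate an event's billable size: the string representation of all
    properties except the excluded standard ones; free types cost nothing."""
    if event.get("Type") in FREE_TYPES or not event.get("_IsBillable", True):
        return 0
    payload = "".join(f"{k}{v}" for k, v in event.items() if k not in EXCLUDED)
    return len(payload.encode("utf-8"))
```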
To use this template via PowerShell, after [installing the Azure Az PowerShell m
New-AzResourceGroupDeployment -ResourceGroupName "YourResourceGroupName" -TemplateFile "template.json"
```
-To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018` for the sku), omit the `capacityReservationLevel` property. Learn more about [creating ARM templates](/azure/azure-resource-manager/templates/template-tutorial-create-first-template?tabs=azure-powershell), [/azure/azure-resource-manager/templates/template-tutorial-create-first-template?tabs=azure-powershell](adding a resource to your template), and [applying templates](https://docs.microsoft.com/azure/azure-monitor/resource-manager-samples).
+To set the pricing tier to other values such as Pay-As-You-Go (called `pergb2018` for the sku), omit the `capacityReservationLevel` property. Learn more about [creating ARM templates](../../azure-resource-manager/templates/template-tutorial-create-first-template.md), [adding a resource to your template](../../azure-resource-manager/templates/template-tutorial-add-resource.md), and [applying templates](../resource-manager-samples.md).
## Legacy pricing tiers
There are some additional Log Analytics limits, some of which depend on the Log
- Change [performance counter configuration](../agents/data-sources-performance-counters.md).
- To modify your event collection settings, review [event log configuration](../agents/data-sources-windows-events.md).
- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
-- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
+- To modify your syslog collection settings, review [syslog configuration](../agents/data-sources-syslog.md).
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/monitor-workspace.md
Last updated 10/20/2020
# Monitor health of Log Analytics workspace in Azure Monitor
-To maintain the performance and availability of your Log Analytics workspace in Azure Monitor, you need to be able to proactively detect any issues that arise. This article describes how to monitor the health of your Log Analytics workspace using data in the [Operation](/azure/azure-monitor/reference/tables/operation) table. This table is included in every Log Analytics workspace and contains error and warnings that occur in your workspace. You should regularly review this data and create alerts to be proactively notified when there are any important incidents in your workspace.
+To maintain the performance and availability of your Log Analytics workspace in Azure Monitor, you need to be able to proactively detect any issues that arise. This article describes how to monitor the health of your Log Analytics workspace using data in the [Operation](/azure/azure-monitor/reference/tables/operation) table. This table is included in every Log Analytics workspace and contains the errors and warnings that occur in your workspace. We recommend that you create alerts for issues at the "Warning" and "Error" levels.
## _LogOperation function
The **_LogOperation** function returns the columns in the following table.
|:|:|
| TimeGenerated | Time that the incident occurred in UTC. |
| Category | Operation category group. Can be used to filter on types of operations and help create more precise system auditing and alerts. See the section below for a list of categories. |
-| Operation | Description of the operation type. This can indicate one of the Log Analytics limits, type of operation, or part of a process. |
-| Level | Severity level of the issue:<br>- Info: No specific attention needed.<br>- Warning: Process was not completed as expected, and attention is needed.<br>- Error: Process failed and urgent attention is needed.
-| Detail | Detailed description of the operation include specific error message if it exists. |
+| Operation | Description of the operation type. The operation can indicate that a Log Analytics limit was reached, a backend process issue, or any other service message. |
+| Level | Severity level of the issue:<br>- Info: No specific attention needed.<br>- Warning: Process was not completed as expected, and attention is needed.<br>- Error: Process failed, attention needed.
+| Detail | Detailed description of the operation, including the specific error message if one exists. |
| _ResourceId | Resource ID of the Azure resource related to the operation. |
| Computer | Computer name if the operation is related to an Azure Monitor agent. |
| CorrelationId | Used to group consecutive related operations. |
The following table describes the categories from the _LogOperation function.
| Category | Description |
|:|:|
-| Ingestion | Operations that are part of the data ingestion process. See below for more details. |
+| Ingestion | Operations that are part of the data ingestion process. |
| Agent | Indicates an issue with agent installation. |
| Data collection | Operations related to data collection processes. |
| Solution targeting | Operation of type *ConfigurationScope* was processed. |
The following table describes the categories from the _LogOperation function.
### Ingestion
-Ingestion operations are issues that occurred during data ingestion including notification about reaching the Azure Log Analytics workspace limits. Error conditions in this category might suggest data loss, so they are particularly important to monitor. The table below provides details on these operations. See [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) for service limits for Log Analytics workspaces.
--
-| Operation | Level | Detail | Related article |
-|:|:|:|:|
-| Custom log | Error | Custom fields column limit reached. | [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) |
-| Custom log | Error | Custom logs ingestion failed. | |
-| Metadata. | Error | Configuration error detected. | |
-| Data collection | Error | Data was dropped because the request was created earlier than the number of set days. | [Manage usage and costs with Azure Monitor Logs](./manage-cost-storage.md#alert-when-daily-cap-reached)
-| Data collection | Info | Collection machine configuration is detected.| |
-| Data collection | Info | Data collection started due to new day. | [Manage usage and costs with Azure Monitor Logs](./manage-cost-storage.md#alert-when-daily-cap-reached) |
-| Data collection | Warning | Data collection stopped due to daily limit reached.| [Manage usage and costs with Azure Monitor Logs](./manage-cost-storage.md#alert-when-daily-cap-reached) |
-| Data processing | Error | Invalid JSON format. | [Send log data to Azure Monitor with the HTTP Data Collector API (public preview)](../logs/data-collector-api.md#request-body) |
-| Data processing | Warning | Value has been trimmed to the max allowed size. | [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) |
-| Data processing | Warning | Field value trimmed as size limit reached. | [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) |
-| Ingestion rate | Info | Ingestion rate limit approaching 70%. | [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) |
-| Ingestion rate | Warning | Ingestion rate limit approaching the limit. | [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) |
-| Ingestion rate | Error | Rate limit reached. | [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) |
-| Storage | Error | Cannot access the storage account as credentials used are invalid. |
--
+Ingestion operations are issues that occurred during data ingestion, including notifications about reaching the Azure Log Analytics workspace limits. Error conditions in this category might suggest data loss, so they are important to monitor. The sections below provide details on these operations. See [Azure Monitor service limits](../service-limits.md#log-analytics-workspaces) for service limits for Log Analytics workspaces.
+
+
+#### Operation: Data collection stopped
+Data collection stopped due to reaching the daily limit.
+
+In the past 7 days, log collection reached the daily set limit. The limit applies either because the workspace is on the free tier or because a daily collection limit was configured for this workspace.
+Note that after the limit is reached, data collection automatically stops for the day and resumes only during the next collection day.
+
+Recommended Actions:
+* Check the _LogOperation table for collection stopped and collection resumed events.</br>
+`_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Data collection"`
+* [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-reached) on the "Data collection stopped" operation event; this alert notifies you when the collection limit is reached.
+* Data collected after the daily collection limit is reached is lost. Use the 'workspace insights' blade to review usage rates from each source.
+Alternatively, you can [manage your maximum daily data volume](./manage-cost-storage.md#manage-your-maximum-daily-data-volume) or [change the pricing tier](./manage-cost-storage.md#changing-pricing-tier) to one that suits your collection rate pattern.
+* The data collection rate is calculated per day and resets at the start of the next day. You can also monitor the collection resume event by [creating an alert](./manage-cost-storage.md#alert-when-daily-cap-reached) on the "Data collection resumed" operation event.
+
+#### Operation: Ingestion rate
+The ingestion rate is approaching or has passed the limit.
+
+Your ingestion rate has passed 80% of the limit; at this point there is no issue yet. Note that data collected beyond the threshold will be dropped. </br>
+
+Recommended Actions:
+* Check the _LogOperation table for ingestion rate events.
+`_LogOperation | where TimeGenerated >= ago(7d) | where Category == "Ingestion" | where Operation has "Ingestion rate"`
+ Note: the event is logged to the Operation table in the workspace every 6 hours while the threshold continues to be exceeded.
+* [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-reached) on the "Data collection stopped" operation event; this alert notifies you when the limit is reached.
+* Data collected while the ingestion rate is at 100% is dropped and lost.
+
+Use the 'workspace insights' blade to review your usage patterns and try to reduce them.</br>
+For more information: </br>
+[Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate) </br>
+[Manage usage and costs for Azure Monitor Logs](./manage-cost-storage.md#alert-when-daily-cap-reached)
+
+
+#### Operation: Maximum table column count
+The custom fields count has reached the limit.
+
+Recommended Actions:
+For custom tables, you can move to [parsing the data](./parse-text.md) in queries instead of adding more custom fields.
+
+#### Operation: Field content validation
+One of the fields of the data being ingested had more than 32 Kb in size, so it got truncated.
+
+Log Analytics limits ingested fields size to 32 Kb, larger size fields will be trimmed to 32 Kb. We donΓÇÖt recommend sending fields larger than 32 Kb as the trim process might remove important information.
+
+Recommended Actions:
+Check the source of the affected data type:
+* If the data is being sent through the HTTP Data Collector API, you will need to change your code or script to split the data before it's ingested.
+* For custom logs collected by the Log Analytics agent, change the logging settings of the application or tool.
+* For any other data type, raise a support case.
+</br>Read more: [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate)
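As a rough illustration of the split-before-ingest approach for the HTTP Data Collector API, the following Python sketch breaks an oversized string field into records that each fit within the 32-KB field limit. The field and record names here are hypothetical, and the actual API call is omitted:

```python
# Sketch: split an oversized string field into sub-32 KB chunks before
# sending the records for ingestion. "Message" and "ChunkIndex" are
# hypothetical names chosen for this example.
MAX_FIELD_BYTES = 32 * 1024

def split_field(record, field, limit=MAX_FIELD_BYTES):
    """Yield copies of `record` whose `field` value fits within `limit` bytes."""
    value = record[field].encode("utf-8")
    if len(value) <= limit:
        yield record
        return
    for i in range(0, len(value), limit):
        chunk = dict(record)
        chunk[field] = value[i:i + limit].decode("utf-8", errors="ignore")
        chunk["ChunkIndex"] = i // limit
        yield chunk

# A 100,000-byte field is split into four records, none exceeding 32 KB.
records = list(split_field({"Message": "x" * 100_000}, "Message"))
```

Each resulting record can then be sent through your existing ingestion path instead of the single oversized record.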
+
+### Data collection
+#### Operation: Azure Activity Log collection
+Description: In some situations, like moving a subscription to a different tenant, the Azure Activity logs might stop flowing into the workspace. In those situations, reconnect the subscription by following the process described in this article.
+
+Recommended Actions:
+* If the subscription mentioned in the warning message no longer exists, navigate to the 'Azure Activity log' blade under 'Workspace Data Sources', select the relevant subscription, and then select the 'Disconnect' button.
+* If you no longer have access to the subscription mentioned in the warning message:
+  * Follow step 1 to disconnect the subscription.
+  * To continue collecting logs from this subscription, contact the subscription owner to fix the permissions and re-enable activity log collection.
+* [Create a diagnostic setting](../essentials/activity-log.md#send-to-log-analytics-workspace) to send the Activity log to a Log Analytics workspace.
+
+### Agent
+#### Operation: Linux Agent
+Configuration settings in the portal have changed.
+
+Recommended Actions:
+This event is raised when the agent fails to retrieve the new configuration settings.
+To mitigate this issue, you will need to reinstall the agent.
+Check the _LogOperation table for the agent event.</br>
+
+ `_LogOperation | where TimeGenerated >= ago(6h) | where Category == "Agent" | where Operation == "Linux Agent" | distinct _ResourceId`
+
+The query lists the resource IDs where the agent has the wrong configuration.
+To mitigate the issue, reinstall the agent on the resources listed.
## Alert rules
-Use [log query alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. You should use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription is charged for each alert rule with a cost depending on the frequency that it's evaluated.
+Use [log query alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs).
A recommended strategy is to start with two alert rules based on the level of the issue. Use a short frequency such as every 5 minutes for Errors and a longer frequency such as 24 hours for Warnings. Since Errors indicate potential data loss, you want to respond to them quickly to minimize any loss. Warnings typically indicate an issue that does not require immediate attention, so you can review them daily.
The following example creates a warning alert when the data collection has reached the daily limit:
- Frequency: 5 (minutes)
- Alert rule name: Daily data limit reached
- Severity: Warning (Sev 1)
+
## Next steps

- Learn more about [log alerts](../alerts/alerts-log.md).
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Welcome to what's new in the Azure Monitor docs for April 2021. This article lis
- [Java codeless application monitoring Azure Monitor Application Insights](app/java-in-process-agent.md) - [Enable Snapshot Debugger for .NET apps in Azure App Service](app/snapshot-debugger-appservice.md) - [Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](app/snapshot-debugger-function-app.md)-- [<a id=troubleshooting></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](app/snapshot-debugger-troubleshoot.md)
+- [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](app/snapshot-debugger-troubleshoot.md)
- [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md) - [Set up Azure Monitor for your Python application](app/opencensus-python.md) - [Upgrading from Application Insights Java 2.x SDK](app/java-standalone-upgrade-from-2x.md)
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
+
+ Title: Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries | Microsoft Docs
+description: Describes best practices about session slots and slot table entries for Azure NetApp Files NFS protocol.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+Last updated: 06/01/2021
+# Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries
+
+This article helps you understand concurrency best practices for session slots and slot table entries for the Azure NetApp Files NFS protocol.
+
+## NFSv3
+
+NFSv3 does not have a mechanism to negotiate concurrency between the client and the server. The client and the server each define their own limits without consulting the other. For the best performance, you should align the maximum number of client-side `sunrpc` slot table entries with the number supported without pushback by the server. When a client overwhelms the server network stack's ability to process a workload, the server responds by decreasing the window size for the connection, which is not an ideal performance scenario.
+
+By default, modern Linux kernels define the per-connection `sunrpc` slot table entry size `sunrpc.max_tcp_slot_table_entries` as supporting 65,536 outstanding operations, as shown in the following table.
+
+| Azure NetApp Files NFSv3 server <br> Maximum execution contexts per connection | Linux client <br> Default maximum `sunrpc` slot table entries per connection |
+|-|-|
+| 128 | 65,536 |
+
+These slot table entries define the limits of concurrency. Values this high are unnecessary. For example, using the queueing-theory formula known as *Little's Law*, you will find that the I/O rate is determined by concurrency (that is, outstanding I/O) and latency. As such, the formula shows that 65,536 slots are orders of magnitude higher than what is needed to drive even extremely demanding workloads.
+
+`Little's Law: concurrency = operation rate × latency in seconds`
+
+A concurrency level as low as 155 is sufficient to achieve 155,000 Oracle DB NFS operations per second using Oracle Direct NFS, which is a technology similar in concept to the `nconnect` mount option:
+
+* Considering a latency of 0.5 ms, a concurrency of 55 is needed to achieve 110,000 IOPS.
+* Considering a latency of 1 ms, a concurrency of 155 is needed to achieve 155,000 IOPS.
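The two bullet points above can be checked with a minimal sketch of Little's Law; latency is expressed in milliseconds here to keep the arithmetic exact:

```python
# Little's Law: concurrency = operation rate (ops/s) × latency (s).
# Latency is taken in milliseconds and converted, purely for exactness.
def required_concurrency(ops_per_second, latency_ms):
    return ops_per_second * latency_ms / 1000

print(required_concurrency(110_000, 0.5))  # 55.0 outstanding operations
print(required_concurrency(155_000, 1))    # 155.0 outstanding operations
```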
+
+![Oracle DNFS latency curve](../media/azure-netapp-files/performance-oracle-dnfs-latency-curve.png)
+
+See [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) for details.
+
+The `sunrpc.max_tcp_slot_table_entries` tunable is a connection-level tuning parameter. *As a best practice, set this value to 128 or less per connection, not surpassing 3,000 slots environment wide.*
+
+### Examples of slot count based on concurrency recommendation
+
+Examples in this section demonstrate the slot count based on concurrency recommendation.
+
+#### Example 1 – One NFS client, 65,536 `sunrpc.max_tcp_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128 based on the server-side limit of 128
+
+Example 1 is based on a single client workload with the default `sunrpc.max_tcp_slot_table_entry` value of 65,536 and a single network connection, that is, no `nconnect`. In this case, a concurrency of 128 is achievable.
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP)`
+ * The client in theory can issue no more than 65,536 requests in flight to the server per connection.
+ * The server will accept no more than 128 requests in flight from this single connection.
+
+#### Example 2 – One NFS client, 128 `sunrpc.max_tcp_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128
+
+Example 2 is based on a single client workload with a `sunrpc.max_tcp_slot_table_entry` value of 128, but without the `nconnect` mount option. With this setting, a concurrency of 128 is achievable from a single network connection.
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP) `
+ * The client will issue no more than 128 requests in flight to the server per connection.
+ * The server will accept no more than 128 requests in flight from this single connection.
+
+#### Example 3 – One NFS client, 100 `sunrpc.max_tcp_slot_table_entries`, and `nconnect=8` for a maximum concurrency of 800
+
+Example 3 is based on a single client workload, but with a lower `sunrpc.max_tcp_slot_table_entry` value of 100. This time, the `nconnect=8` mount option is used, spreading the workload across 8 connections. With this setting, a concurrency of 800 is achievable, spread across the 8 connections. This amount is the concurrency needed to achieve 400,000 IOPS.
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection 1 (10.10.10.10:2049, 10.10.10.11:6543,TCP), Connection 2 (10.10.10.10:2049, 10.10.10.11:6454,TCP)… Connection 8 (10.10.10.10:2049, 10.10.10.11:7321,TCP)`
+ * Connection 1
+ * The client will issue no more than 100 requests in flight to the server from this connection.
+ * The server is expected to accept no more than 128 requests in flight from the client for this connection.
+ * Connection 2
+ * The client will issue no more than 100 requests in flight to the server from this connection.
+ * The server is expected to accept no more than 128 requests in flight from the client for this connection.
+ * `…`
+ * `…`
+ * Connection 8
+ * The client will issue no more than 100 requests in flight to the server from this connection.
+ * The server is expected to accept no more than 128 requests in flight from the client for this connection.
+
+#### Example 4 – 250 NFS clients, 8 `sunrpc.max_tcp_slot_table_entries`, and no `nconnect` for a maximum concurrency of 2000
+
+Example 4 uses the reduced per-client `sunrpc.max_tcp_slot_table_entry` value of 8 for a 250-machine EDA environment. In this scenario, a concurrency of 2000 is reached environment wide, a value more than sufficient to drive 4,000 MiB/s of a backend EDA workload.
+
+* `NFS_Server=10.10.10.10, NFS_Client1=10.10.10.11`
+ * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP)`
+ * The client will issue no more than 8 requests in flight to the server per connection.
+ * The server will accept no more than 128 requests in flight from this single connection.
+* `NFS_Server=10.10.10.10, NFS_Client2=10.10.10.12`
+ * `Connection (10.10.10.10:2049, 10.10.10.12:7820,TCP) `
+ * The client will issue no more than 8 requests in flight to the server per connection.
+ * The server will accept no more than 128 requests in flight from this single connection.
+* `…`
+* `…`
+* `NFS_Server=10.10.10.10, NFS_Client250=10.10.11.13`
+ * `Connection (10.10.10.10:2049, 10.10.11.13:4320,TCP) `
+ * The client will issue no more than 8 requests in flight to the server per connection.
+ * The server will accept no more than 128 requests in flight from this single connection.
+
+When using NFSv3, *you should collectively keep the storage endpoint slot count to 2,000 or less*. It is best to set the per-connection value for `sunrpc.max_tcp_slot_table_entries` to less than 128 when an application scales out across many network connections (as with `nconnect`, and with HPC in general and EDA in particular).
+
+### How to calculate the best `sunrpc.max_tcp_slot_table_entries`
+
+Using *Little's Law*, you can calculate the total required slot table entry count. In general, consider the following factors:
+
+* Scale-out workloads are often dominated by large sequential I/O.
+* Database workloads, especially OLTP, are often random in nature.
+
+The following table shows a sample study of concurrency with arbitrary latencies provided:
+
+| I/O size | Latency | I/O or throughput | Concurrency |
+|-|-|-|-|
+| 8 KiB | 0.5 ms | 110,000 IOPS \| 859 MiB/s | 55 |
+| 8 KiB | 2 ms | 400,000 IOPS \| 3,125 MiB/s | 800 |
+| 256 KiB | 2 ms | 16,000 IOPS \| 4,000 MiB/s | 32 |
+| 256 KiB | 4 ms | 32,000 IOPS \| 8,000 MiB/s | 128 |
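The rows of the sample study can be reproduced with a small sketch relating I/O size, latency, throughput, and concurrency:

```python
# Reproduce the sample study: throughput follows from IOPS × I/O size,
# and concurrency from Little's Law (latency given in milliseconds).
def throughput_mibps(iops, io_kib):
    return iops * io_kib / 1024

def concurrency(iops, latency_ms):
    return iops * latency_ms / 1000

rows = [(8, 0.5, 110_000), (8, 2, 400_000), (256, 2, 16_000), (256, 4, 32_000)]
for io_kib, latency_ms, iops in rows:
    print(f"{io_kib} KiB @ {latency_ms} ms: "
          f"{round(throughput_mibps(iops, io_kib)):,} MiB/s, "
          f"concurrency {round(concurrency(iops, latency_ms))}")
```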
+
+### How to calculate concurrency settings by connection count
+
+For example, if the workload is an EDA farm and 200 clients all drive workload to the same storage endpoint (a storage endpoint being a storage IP address), you can calculate the required I/O rate and divide the concurrency across the farm.
+
+Assume that the workload is 4,000 MiB/s using a 256-KiB average operation size and an average latency of 10 ms. To calculate concurrency, use the following formula:
+
+`(concurrency = operation rate × latency in seconds)`
+
+The calculation translates to a concurrency of 160:
+
+`(160 = 16,000 × 0.010)`
+
+Given the need for 200 clients, you could safely set `sunrpc.max_tcp_slot_table_entries` to 2 per client to reach the 4,000 MiB/s. However, you might decide to build in extra headroom by setting the number per client to 4 or even 8, staying under the recommended 2,000-slot ceiling.
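The farm-wide arithmetic above can be sketched as follows; the values come from the example, and rounding up the per-client share is an assumption made for headroom:

```python
import math

# 4,000 MiB/s at a 256-KiB average op size and 10 ms average latency,
# spread across 200 clients driving the same storage endpoint.
target_mibps, io_kib, latency_ms, clients = 4000, 256, 10, 200

iops = target_mibps * 1024 // io_kib          # required operation rate (ops/s)
farm_concurrency = iops * latency_ms / 1000   # Little's Law: outstanding ops
slots_per_client = math.ceil(farm_concurrency / clients)  # at least 1 each

print(iops, farm_concurrency, slots_per_client)
```

This yields 16,000 ops/s and a farm-wide concurrency of 160, which is why a per-client value as small as 2 (or 4 to 8 with headroom) is sufficient.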
+
+### How to set `sunrpc.max_tcp_slot_table_entries` on the client
+
+1. Add `sunrpc.max_tcp_slot_table_entries=<n>` to the `/etc/sysctl.conf` configuration file.
+ During tuning, if a value lower than 128 is found optimal, replace 128 with the appropriate number.
+2. Run the following command:
+ `$ sysctl -p`
+3. Mount (or remount) all NFS file systems, as the tunable applies only to mounts made after the tunable has been set.
+
+## NFSv4.1
+
+In NFSv4.1, sessions define the relationship between the client and the server. Whether the mounted NFS file systems sit atop one connection or many (as is the case with `nconnect`), the rules for the session apply. At session setup, the client and server negotiate the maximum requests for the session, settling on the lower of the two supported values. Azure NetApp Files supports 180 outstanding requests, and Linux clients default to 64. The following table shows the session limits:
+
+| Azure NetApp Files NFSv4.1 server <br> Max commands per session | Linux client <br> Default max commands per session | Negotiated max commands for the session |
+|-|-|-|
+| 180 | 64 | 64 |
+
+Although Linux clients default to 64 maximum requests per session, the value of `max_session_slots` is tunable. A reboot is required for changes to take effect. Use the `systool -v -m nfs` command to see the current maximum in use by the client. For the command to work, at least one NFSv4.1 mount must be in place:
+
+```
+$ systool -v -m nfs
+{
+Module = "nfs"
+…
+ Parameters:
+…
+ max_session_slots = "64"
+…
+}
+```
+
+To tune `max_session_slots`, create a configuration file under `/etc/modprobe.d` as follows. Make sure that no quotes are present for the line in the file. Otherwise, the option will not take effect.
+
+`$ echo "options nfs max_session_slots=180" > /etc/modprobe.d/nfsclient.conf`
+`$ reboot`
+
+Azure NetApp Files limits each session to 180 max commands. As such, consider 180 the maximum value currently configurable. The client will be unable to achieve a concurrency greater than 128 unless the session is divided across more than one connection as Azure NetApp Files restricts each connection to 128 max NFS commands. To get more than one connection, the `nconnect` mount option is recommended, and a value of two or greater is required.
+
+### Examples of expected concurrency maximums
+
+Examples in this section demonstrate the expected concurrency maximums.
+
+#### Example 1 – 64 `max_session_slots` and no `nconnect`
+
+Example 1 is based on default setting of 64 `max_session_slots` and no `nconnect`. With this setting, a concurrency of 64 is achievable, all from a single network connection.
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP)`
+ * The client will issue no more than 64 requests in flight to the server for the session.
+ * The server will accept no more than 64 requests in flight from the client for the session. (64 is the negotiated value.)
+
+#### Example 2 – 64 `max_session_slots` and `nconnect=2`
+
+Example 2 is based on a `max_session_slots` value of 64 but with the added mount option of `nconnect=2`. A concurrency of 64 is achievable but divided across two connections. Although multiple connections bring no greater concurrency in this scenario, the decreased queue depth per connection has a positive impact on latency.
+
+With `max_session_slots` still at 64 but `nconnect=2`, notice that the maximum number of requests is divided across the connections.
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection 1 (10.10.10.10:2049, 10.10.10.11:6543,TCP) && Connection 2 (10.10.10.10:2049, 10.10.10.11:6454,TCP)`
+ * Connection 1
+ * The client will issue no more than 32 requests in flight to the server from this connection.
+ * The server is expected to accept no more than 32 requests in flight from the client for this connection.
+ * Connection 2
+ * The client will issue no more than 32 requests in flight to the server from this connection.
+ * The server is expected to accept no more than 32 requests in flight from the client for this connection.
+
+#### Example 3 – 180 `max_session_slots` and no `nconnect`
+
+Example 3 drops the `nconnect` mount option and sets the `max_session_slots` value to 180, matching the server's maximum NFSv4.1 session concurrency. In this scenario, with only one connection and given the Azure NetApp Files maximum of 128 outstanding operations per NFS connection, the session is limited to 128 operations in flight.
+
+Although `max_session_slots` has been set to 180, the single network connection is limited to 128 maximum requests as such:
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP) `
+ * The client will issue no more than 180 requests in flight to the server for the session.
+ * The server will accept no more than 180 requests in flight from the client for the session.
+ * *The server will accept no more than 128 requests in flight for the single connection.*
+
+#### Example 4 – 180 `max_session_slots` and `nconnect=2`
+
+Example 4 adds the `nconnect=2` mount option and reuses the 180 `max_session_slots` value. Because the overall workload is divided across two connections, 180 outstanding operations is achievable.
+
+With two connections in play, the session supports the full allotment of 180 outstanding requests.
+
+* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11`
+ * `Connection 1 (10.10.10.10:2049, 10.10.10.11:6543,TCP) && Connection 2 (10.10.10.10:2049, 10.10.10.11:6454,TCP)`
+ * Connection 1
+ * The client is expected to maintain no more than 90 requests in flight to the server from connection one.
+ * *The server is expected to maintain no more than 90 requests in flight from the client for this connection within the session.*
+ * Connection 2
+ * The client is expected to maintain no more than 90 requests in flight to the server from connection two.
+ * *The server is expected to maintain no more than 90 requests in flight from the client for this connection within the session.*
+
+> [!NOTE]
+> For maximum concurrency, set `max_session_slots` equal to 180, which is the maximum session-level concurrency supported by Azure NetApp Files currently.
+
+### How to check for the maximum requests outstanding for the session
+
+To see the `session_slot` sizes supported by the client and server, capture the mount command in a packet trace. Look for the `CREATE_SESSION` call and `CREATE_SESSION` reply as shown in the following example. The call originated from the client, and the reply originated from the server.
+
+Use the following `tcpdump` command to capture the mount command:
+
+`$ tcpdump -i eth0 -s 900 -w /tmp/write.trc port 2049`
+
+Using Wireshark, the packets of interest are as follows:
+
+![Screenshot that shows packets of interest.](../media/azure-netapp-files/performance-packets-interest.png)
+
+Within these two packets, look at the `max_reqs` field within the middle section of the trace file.
+
+* Network File System
+ * Operations
+ * `Opcode`
+ * `csa_fore_channel_attrs`
+ * `max reqs`
+
+Packet 12 (client maximum requests) shows that the client had a `max_session_slots` value of 64. In the next section, notice that the server supports a concurrency of 180 for the session. The session ends up negotiating the lower of the two provided values.
+
+![Screenshot that shows max session slots for Packet 12.](../media/azure-netapp-files/performance-max-session-packet-12.png)
+
+The following example shows Packet 14 (server maximum requests):
+
+![Screenshot that shows max session slots for Packet 14.](../media/azure-netapp-files/performance-max-session-packet-14.png)
+
+## Next steps
+
+* [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-mount-options.md
+
+ Title: Linux NFS mount options best practices for Azure NetApp Files | Microsoft Docs
+description: Describes mount options and the best practices about using them with Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+Last updated: 06/01/2021
+# Linux NFS mount options best practices for Azure NetApp Files
+
+This article helps you understand mount options and the best practices for using them with Azure NetApp Files.
+
+## `Nconnect`
+
+Using the `nconnect` mount option allows you to specify the number of connections (network flows) that should be established between the NFS client and NFS endpoint up to a limit of 16. Traditionally, an NFS client uses a single connection between itself and the endpoint. By increasing the number of network flows, the upper limits of I/O and throughput are increased significantly. Testing has found `nconnect=8` to be the most performant.
+
+When preparing a multi-node SAS GRID environment for production, you might notice a repeatable 30% reduction in run time going from 8 hours to 5.5 hours:
+
+| Mount option | Job run times |
+|-|-|
+| No `nconnect` | 8 hours |
+| `nconnect=8` | 5.5 hours |
+
+Both sets of tests used the same E32-8_v4 virtual machine and RHEL8.3, with readahead set to 15 MiB.
+
+When you use `nconnect`, keep the following rules in mind:
+
+* `nconnect` is supported by Azure NetApp Files on all major Linux distributions but only on newer releases:
+
+ | Linux release | NFSv3 (minimum release) | NFSv4.1 (minimum release) |
+ |-|-|-|
+ | Redhat Enterprise Linux | RHEL8.3 | RHEL8.3 |
+ | SUSE | SLES12SP4 or SLES15SP1 | SLES15SP2 |
+ | Ubuntu | Ubuntu18.04 | |
+
+ > [!NOTE]
+ > SLES15SP2 is the minimum SUSE release in which `nconnect` is supported by Azure NetApp Files for NFSv4.1. All other releases as specified are the first releases that introduced the `nconnect` feature.
+
+* All mounts from a single endpoint will inherit the `nconnect` setting of the first export mounted, as shown in the following scenarios:
+
+ Scenario 1: `nconnect` is used by the first mount. Therefore, all mounts against the same endpoint use `nconnect=8`.
+
+ * `mount 10.10.10.10:/volume1 /mnt/volume1 -o nconnect=8`
+ * `mount 10.10.10.10:/volume2 /mnt/volume2`
+ * `mount 10.10.10.10:/volume3 /mnt/volume3`
+
+ Scenario 2: `nconnect` is not used by the first mount. Therefore, no mounts against the same endpoint use `nconnect` even though `nconnect` may be specified thereon.
+
+ * `mount 10.10.10.10:/volume1 /mnt/volume1`
+ * `mount 10.10.10.10:/volume2 /mnt/volume2 -o nconnect=8`
+ * `mount 10.10.10.10:/volume3 /mnt/volume3 -o nconnect=8`
+
+ Scenario 3: `nconnect` settings are not propagated across separate storage endpoints. `nconnect` is used by the mount coming from `10.10.10.10` but not by the mount coming from `10.12.12.12`.
+
+ * `mount 10.10.10.10:/volume1 /mnt/volume1 -o nconnect=8`
+ * `mount 10.12.12.12:/volume2 /mnt/volume2`
+
+* `nconnect` may be used to increase storage concurrency from any given client.
+
+For details, see [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md).
+
+## `Rsize` and `Wsize`
+
+The `rsize` and `wsize` flags set the maximum transfer size of an NFS operation. If `rsize` or `wsize` is not specified on mount, the client and server negotiate the largest size supported by the two. Currently, both Azure NetApp Files and modern Linux distributions support read and write sizes as large as 1,048,576 bytes (1 MiB). However, for best overall throughput and latency, Azure NetApp Files recommends setting both `rsize` and `wsize` no larger than 262,144 bytes (256 KiB). You might observe both increased latency and decreased throughput when using `rsize` and `wsize` larger than 256 KiB.
+
+For example, [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md#mount-the-azure-netapp-files-volumes) shows the 256-KiB `rsize` and `wsize` maximum as follows:
+
+```
+sudo vi /etc/fstab
+# Add the following entries
+10.23.1.5:/HN1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.6:/HN1-data-mnt00002 /hana/data/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.4:/HN1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.6:/HN1-log-mnt00002 /hana/log/HN1/mnt00002 nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+10.23.1.4:/HN1-shared/shared /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0
+```
+
+As another example, SAS Viya recommends 256-KiB read and write sizes, and [SAS GRID](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/606973/highlight/true#M17740) limits the `r/wsize` to 64 KiB while augmenting read performance with increased readahead for the NFS mounts.
+
+The following considerations apply to the use of `rsize` and `wsize`:
+
+* Random I/O operation sizes are often smaller than the `rsize` and `wsize` mount options. As such, in effect, they are not constrained by them.
+* When using the filesystem cache, sequential I/O occurs at the size dictated by the `rsize` and `wsize` mount options, unless the file size is smaller than `rsize` and `wsize`.
+* Operations bypassing the filesystem cache, although still constrained by the `rsize` and `wsize` mount options, will not necessarily issue operations as large as the maximum specified by `rsize` or `wsize`. This consideration is important when you use workload generators that have the `directio` option.
+
+*As a best practice with Azure NetApp Files, for best overall throughput and latency, set `rsize` and `wsize` no larger than 262,144 bytes.*
+
+## Close-to-open consistency and cache attribute timers
+
+NFS uses a loose consistency model. The consistency is loose because the application does not have to go to shared storage and fetch data every time it's used, a scenario that would have a tremendous impact on application performance. Two mechanisms manage this process: cache attribute timers and close-to-open consistency.
+
+*If the client has complete ownership of data, that is, the data is not shared between multiple nodes or systems, there is guaranteed consistency.* In that case, you can reduce the `getattr` access operations to storage and speed up the application by turning off close-to-open (`cto`) consistency (`nocto` as a mount option) and by turning up the timeouts for the attribute cache management (`actimeo=600` as a mount option changes the timer to 10 minutes versus the defaults `acregmin=3,acregmax=30,acdirmin=30,acdirmax=60`). In some testing, `nocto` reduces approximately 65-70% of the `getattr` access calls, and adjusting `actimeo` reduces these calls another 20-25%.
+
+### How attribute cache timers work
+
+The attributes `acregmin`, `acregmax`, `acdirmin`, and `acdirmax` control the coherency of the cache. The former two attributes control how long the attributes of files are trusted. The latter two attributes control how long the attributes of the directory file itself are trusted (directory size, directory ownership, directory permissions). The `min` and `max` attributes define the minimum and maximum durations over which the attributes of a directory, the attributes of a file, and the cache content of a file are deemed trustworthy. Between `min` and `max`, an algorithm is used to define the amount of time over which a cached entry is trusted.
+
+For example, consider the default `acregmin` and `acregmax` values, 3 and 30 seconds, respectively. Assume the attributes are repeatedly evaluated for the files in a directory. After 3 seconds, the NFS service is queried for freshness. If the attributes are deemed valid, the client doubles the trusted time to 6 seconds, then 12 seconds, then 24 seconds, and then, because the maximum is set to 30, 30 seconds. From that point on, until the cached attributes are deemed out of date (at which point the cycle starts over), trustworthiness is defined as the 30 seconds specified by `acregmax`.
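The doubling behavior described above can be sketched as a simple generator. This is a model of the revalidation backoff for illustration, not the kernel's actual implementation:

```python
# Model of the attribute-cache trust interval: starting at acregmin,
# the interval doubles on each successful revalidation, capped at acregmax.
def trust_intervals(acregmin=3, acregmax=30):
    t = acregmin
    while True:
        yield min(t, acregmax)
        if t >= acregmax:
            return
        t *= 2

print(list(trust_intervals()))  # [3, 6, 12, 24, 30]
```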
+
+There are other cases that can benefit from a similar set of mount options, even when there is no complete ownership by the clients, for example, if the clients use the data as read only and data update is managed through another path. For applications that use grids of clients like EDA, web hosting and movie rendering and have relatively static data sets (EDA tools or libraries, web content, texture data), the typical behavior is that the data set is largely cached on the clients. There are very few reads and no writes. There will be many `getattr`/access calls coming back to storage. These data sets are typically updated through another client mounting the file systems and periodically pushing content updates.
+
+In these cases, there is a known lag in picking up new content, and the application still works with potentially out-of-date data. In these cases, `nocto` and `actimeo` can be used to control the period where out-of-date data can be tolerated. For example, in EDA tools and libraries, `actimeo=600` works well because this data is typically updated infrequently. For small web hosting where clients need to see their data updates in a timely manner as they edit their sites, `actimeo=10` might be acceptable. For large-scale web sites where content is pushed to multiple file systems, `actimeo=60` might be acceptable.
+
+Using these mount options significantly reduces the workload to storage in these cases. (For example, a recent EDA experience reduced IOPs to the tool volume from >150 K to ~6 K.) Applications can run significantly faster because they can trust the data in memory. (Memory access time is nanoseconds vs. hundreds of microseconds for `getattr`/access on a fast network.)
+
+### Close-to-open consistency
+
+Close-to-open consistency (the `cto` mount option) ensures that no matter the state of the cache, on open the most recent data for a file is always presented to the application.
+
+* When a directory is crawled (`ls`, `ls -l` for example), a certain set of RPC calls is issued.
+ The NFS server shares its view of the filesystem. As long as `cto` is used by all NFS clients accessing a given NFS export, all clients will see the same list of files and directories therein. The freshness of the attributes of the files in the directory is controlled by the [attribute cache timers](#how-attribute-cache-timers-work). In other words, as long as `cto` is used, files appear to remote clients as soon as the file is created and the file lands on the storage.
+* When a file is opened, the content of the file is guaranteed fresh from the perspective of the NFS server.
+  If there is a race condition where the content has not finished flushing from Machine 1 when a file is opened on Machine 2, Machine 2 will only receive the data present on the server at the time of the open. In this case, Machine 2 will not retrieve more data from the file until the `acreg` timer is reached and Machine 2 checks its cache coherency with the server again. This scenario can be observed by using `tail -f` from Machine 2 while the file is still being written to from Machine 1.
+
+### No close-to-open consistency
+
+When no close-to-open consistency (`nocto`) is used, the client will trust the freshness of its current view of the file and directory until the cache attribute timers have been breached.
+
+* When a directory is crawled (`ls`, `ls -l` for example), a certain set of RPC calls is issued.
+ The client will only issue a call to the server for a current listing of files when the `acdir` cache timer value has been breached. In this case, recently created files and directories will not appear and recently removed files and directories will still appear.
+
+* When a file is opened, as long as the file is still in the cache, its cached content (if any) is returned without validating consistency with the NFS server.
+
+## Next steps
+
+* [Linux concurrency best practices for Azure NetApp Files](performance-linux-concurrency-session-slots.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-percept Connect Over Cellular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/connect-over-cellular.md
+
+ Title: Connecting Azure Percept Over Cellular Networks
+description: This article explains how to connect the Azure Percept DK over cellular networks.
++++ Last updated : 05/20/2021+++
+# Connect the Azure Percept DK over cellular networks
+
+The benefits of connecting Edge AI devices over cellular (LTE and 5G) networks are many. Scenarios where Edge AI is most effective are in places where Wi-Fi and LAN connectivity are limited, such as smart cities, autonomous vehicles, and agriculture. Additionally, cellular networks provide better security than Wi-Fi. Lastly, using IoT devices that run AI at the Edge provides a way to optimize the bandwidth on cellular networks: only the necessary information is sent to the cloud, while most of the data is processed on the device. Today, the Azure Percept DK can't connect directly to cellular networks. However, it can connect to cellular gateways by using its built-in Ethernet and Wi-Fi capabilities. This article covers how this works.
+
+## Options for connecting the Azure Percept DK over cellular networks
+With additional hardware, you can connect the Azure Percept DK using cellular connectivity like LTE or 5G. There are two primary options supported today:
+- **Cellular Wi-Fi hotspot device** - where the dev kit is connected to the Wi-Fi network that the Wi-Fi hotspot provides. In this case, the dev kit connects to the network like any other Wi-Fi network. For more instructions, follow the [Azure Percept DK Setup Guide](./quickstart-percept-dk-set-up.md) and select the cellular Wi-Fi network broadcast from the hotspot.
+- **Cellular Ethernet gateway device** - here the dev kit is connected to the cellular gateway over Ethernet, which takes advantage of the improved security compared to Wi-Fi connections. The rest of this article goes into more detail on how a network like this is configured.
+
+## Cellular gateway topology
+
+In the above diagram, you can see how a cellular gateway can be easily paired with the Azure Percept DK.
+
+## Considerations when connecting to a cellular gateway
+Here are some important points to consider when connecting the Azure Percept DK to a cellular gateway.
+- Set up the gateway first and then validate that it's receiving a connection via the SIM. It will then be easier to troubleshoot any issues found while connecting the Azure Percept DK.
+- Ensure both ends of the Ethernet cable are firmly connected to the gateway and Azure Percept DK.
+- Follow the [default instructions](./how-to-connect-over-ethernet.md) for connecting the Azure Percept DK over Ethernet.
+- If your cellular plan has a quota, it's recommended that you optimize how much data your Azure Percept DK models send to the cloud.
+- Ensure you have a [properly configured firewall](./concept-security-configuration.md) that blocks externally originated inbound traffic.
+
+## SSH over a cellular network
+To SSH into the dev kit via a cellular Ethernet gateway, you have these options:
+- **Using the dev kit's Wi-Fi access point**. If you have Wi-Fi disabled, you can re-enable it by rebooting your dev kit. From there, you can connect to the dev kit's Wi-Fi access point and follow [these SSH procedures](./how-to-ssh-into-percept-dk.md).
+- **Using an Ethernet connection to a local network (LAN)**. With this option, you'll unplug your dev kit from the cellular gateway and plug it into a LAN router. For more information, see [How to Connect over Ethernet](./how-to-connect-over-ethernet.md).
+- **Using the gateway's remote access features**. Many cellular gateways include remote access managers that can be used to connect to devices on the network via SSH. Check with the manufacturer of your cellular gateway to see if it has this feature. Here's an example of a remote access manager for [Cradlepoint cellular gateways](https://customer.cradlepoint.com/s/article/NCM-Remote-Connect-LAN-Manager).
+- **Using the dev kit's serial port**. The Azure Percept DK includes a serial connection port that can be used to connect directly to the device. See [Connect your Azure Percept DK over serial](./how-to-connect-to-percept-dk-over-serial.md) for detailed instructions.
+
+## Considerations when selecting a cellular gateway device
+Cellular gateways support different technologies that impact the maximum data rate for downloads and uploads. The advertised data rates provide guidance for decision making but are rarely reached in practice. Here's some guidance for selecting the right gateway for your needs.
+
+- **LTE CAT-1** provides up to 10 Mbps down and 5 Mbps up. It's enough for default Azure Percept DK features such as object detection and creating a voice assistant. However, it may not be enough for solutions that require streaming video data up to the cloud.
+- **LTE CAT-3 and 4** provide up to 100 Mbps down and 50 Mbps up, which is enough for streaming video to the cloud. However, it's not enough to stream full-HD-quality video.
+- **LTE CAT-5 and higher** provide data rates high enough for streaming HD video for a single device. If you need to connect multiple devices to a single gateway, consider 5G.
+- **5G** gateways will best position your scenarios for the future. They have the data rates and bandwidth to support high data throughput for multiple devices at a time. They also provide lower latency for data transfer.
++
+## Next steps
+If you have a cellular gateway and would like to connect your Azure Percept DK to it, follow these next steps.
+- [How to Connect your Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md)
azure-percept How To Connect Over Ethernet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-connect-over-ethernet.md
+
+ Title: How to launch the Azure Percept DK setup experience over Ethernet
+description: This guide shows users how to connect to the Azure Percept DK setup experience when connected over an Ethernet connection.
++++ Last updated : 06/01/2021+++
+# How to launch the Azure Percept DK setup experience over Ethernet
+
+In this how-to guide you'll learn how to launch the Azure Percept DK setup experience over an Ethernet connection. It's a companion to the [Quick Start: Set up your Azure Percept DK and deploy your first AI model](./quickstart-percept-dk-set-up.md) guide. See each option outlined below and choose which one is most appropriate for your environment.
+
+## Prerequisites
+
+- An Azure Percept DK ([Get one here](https://go.microsoft.com/fwlink/?linkid=2155270))
+- A Windows, Linux, or OS X based host computer with Wi-Fi or Ethernet capability and a web browser
+- Network cable
+
+## Identify your dev kit's IP address
+
+The key to running the Azure Percept DK setup experience over an Ethernet connection is finding your dev kit's IP address. This article covers three options:
+1. From your network router
+1. Via SSH
+1. Via the Nmap tool
+
+### From your network router
+The fastest way to identify your dev kit's IP address is to look it up on your network router.
+1. Plug the Ethernet cable into the dev kit and the other end into the router.
+1. Power on your Azure Percept DK.
+1. Look for a sticker on the network router specifying access instructions
+
+ **Here are examples of router stickers**
+
+ :::image type="content" source="media/how-to-connect-over-ethernet/router-sticker-01.png" alt-text="example sticker from a network router":::
+
+ :::image type="content" source="media/how-to-connect-over-ethernet/router-sticker-02.png" alt-text="another example sticker from a network router":::
+
+1. On your computer that is connected to Ethernet or Wi-Fi, open a web browser.
+1. Type the browser address for the router as found on the sticker.
+1. When prompted, enter the name and password for the router as found on the sticker.
+1. Once in the router interface, select My Devices (or something similar, depending on your router).
+1. Find the Azure Percept dev kit in the list of devices
+1. Copy the IP address of the Azure Percept dev kit
+
+### Via SSH
+It's possible to find your dev kit's IP address by connecting to the dev kit over SSH.
+
+> [!NOTE]
+> Using the SSH method of identifying your dev kit's IP address requires that you are able to connect to your dev kit's Wi-Fi access point. If this is not possible for you, please use one of the other methods.
+
+1. Plug the ethernet cable into the dev kit and the other end into the router
+1. Power on your Azure Percept dev kit
+1. Connect to your dev kit over SSH. See [Connect to your Azure Percept DK over SSH](./how-to-ssh-into-percept-dk.md) for detailed instruction on how to connect to your dev kit over SSH.
+1. To list the Ethernet local network IP address, type the following command in your SSH terminal window:
+
+ ```bash
+ ip a | grep eth1
+ ```
+
+ :::image type="content" source="media/how-to-connect-over-ethernet/ssh-local-network-address.png" alt-text="example of identifying local network IP in SSH terminal":::
++
+1. The dev kit's IP address is displayed after `inet`. Copy the IP address.
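If you'd rather script this step, the address can be extracted from the `ip` output directly. This sketch assumes the dev kit's Ethernet interface is `eth1`, as above, and runs the extraction against a sample output line; on the dev kit you would pipe `ip -4 addr show eth1` into the same `awk` command:

```shell
# Pull just the IPv4 address (the value after "inet") from an
# `ip addr` output line for eth1. The sample line is illustrative.
sample='    inet 192.168.1.45/24 brd 192.168.1.255 scope global eth1'
echo "$sample" | awk '/inet /{split($2, a, "/"); print a[1]}'
# prints: 192.168.1.45
```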
+
+### Via the Nmap tool
+You can also use free tools found on the Web to identify your dev kit's IP address. In these instructions, we cover a tool called Nmap.
+1. Plug the ethernet cable into the dev kit and the other end into the router.
+1. Power on your Azure Percept dev kit.
+1. On your host computer, download and install the [Free Nmap Security Scanner](https://nmap.org/download.html) that is needed for your platform (Windows/Mac/Linux).
+1. Obtain your computer's "Default Gateway" - [How to Find Your Default Gateway](https://www.noip.com/support/knowledgebase/finding-your-default-gateway/)
+1. Open the Nmap application
+1. Enter your Default Gateway into the *Target* box and append **/24** to the end. Change *Profile* to **Quick scan** and select the **Scan** button.
+
+ :::image type="content" source="media/how-to-connect-over-ethernet/nmap-tool.png" alt-text="example of the Nmap tool input":::
+
+1. In the results, find the Azure Percept dev kit in the list of devices - similar to **apd-xxxxxxxx**
+1. Copy the IP address of the Azure Percept dev kit
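Building the *Target* value from your default gateway amounts to replacing the gateway's last octet with `0` and appending `/24`. A small sketch (the gateway address here is only an example):

```shell
# Build the Nmap scan target from a default gateway address.
gw="192.168.0.1"            # example gateway; substitute your own
target="${gw%.*}.0/24"      # strip the last octet, scan the whole /24
echo "$target"              # prints: 192.168.0.0/24
```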
+
+## Launch the Azure Percept DK setup experience
+1. Plug the ethernet cable into the dev kit and the other end into the router.
+1. Power on your Azure Percept dev kit.
+1. Open a web browser and paste the dev kit's IP address. The setup experience should launch in the browser.
+
+## Next steps
+- [Complete the set up experience](./quickstart-percept-dk-set-up.md)
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
To ensure you apply the correct update package to your dev kit, you must first d
> Applying the incorrect update package could result in your dev kit becoming inoperable. It is important that you follow these steps to ensure you apply the correct update package. Option 1:
-1. Log in to the [Azure Percept Studio](https://docs.microsoft.com/en-us/azure/azure-percept/overview-azure-percept-studio).
+1. Log in to the [Azure Percept Studio](/azure/azure-percept/overview-azure-percept-studio).
2. In **Devices**, choose your devkit device. 3. In the **General** tab, look for the **Model** and **SW Version** information.
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-troubleshoot-setup.md
description: Get troubleshooting tips for some of the more common issues found d
-+ Last updated 03/25/2021-+ # Azure Percept DK setup experience troubleshooting guide
Refer to the table below for workarounds to common issues found during the [Azur
|Issue|Reason|Workaround| |:--|:|:-|
-|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If this is not the account you intended to use, it may result in an experience that is inconsistent with the documentation.|This is usually because of a setting in the browser to "remember" an account you have previously used.|From the Azure page, click on your account name in the upper right corner and select **sign out**. You will then be able to sign in with the correct account.|
-|The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) does not appear in the list of available Wi-Fi networks.|This is usually a temporary issue that resolves within 15 minutes.|Wait for the network to appear. If it does not appear after more than 15 minutes, reboot the device.|
-|The connection to the Azure Percept DK Wi-Fi access point frequently disconnects.|This can be due to a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they are running on the host computer.|
-|The host computer shows a security warning about the connection to the Azure Percept DK access point.|This is a known issue that will be fixed in a later update.|It is safe to proceed through the setup experience.|
-|The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) appears in the network list but fails to connect.|This could be due to a temporary corruption of the dev kit's Wi-Fi access point.|Reboot the dev kit and try again.|
-|Unable to connect to a Wi-Fi network during the setup experience.|The Wi-Fi network must currently have internet connectivity to communicate with Azure. EAP[PEAP/MSCHAP], captive portals, and enterprise EAP-TLS connectivity is currently not supported.|Ensure your Wi-Fi network type is supported and has internet connectivity.|
+|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If you don't sign in with the correct account, it may result in an experience that is inconsistent with the documentation.|The result of a browser setting to "remember" an account you have previously used.|From the Azure page, select on your account name in the upper right corner and select **sign out**. You can then sign in with the correct account.|
+|The Azure Percept DK Wi-Fi access point (apd-xxxx) doesn't appear in the list of available Wi-Fi networks.|It's usually a temporary issue that resolves within 15 minutes.|Wait for the network to appear. If it doesn't appear after more than 15 minutes, reboot the device.|
+|The connection to the Azure Percept DK Wi-Fi access point frequently disconnects.|It's usually because of a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they're running on the host computer.|
+|The host computer shows a security warning about the connection to the Azure Percept DK access point.|It's a known issue that will be fixed in a later update.|It's safe to continue through the setup experience.|
+|The Azure Percept DK Wi-Fi access point (scz-xxxx or apd-xxxx) appears in the network list but fails to connect.|It could be because of a temporary corruption of the dev kit's Wi-Fi access point.|Reboot the dev kit and try again.|
+|Unable to connect to a Wi-Fi network during the setup experience.|The Wi-Fi network must currently have internet connectivity to communicate with Azure. EAP[PEAP/MSCHAP], captive portals, and enterprise EAP-TLS connectivity is currently not supported.|Ensure your Wi-Fi network type is supported and has internet connectivity.|
+|After using the Device Code and signing into Azure, you're presented with an error about policy permissions or compliance issues and will be unable to continue. Here are some of the errors you may see:<br>**BlockedByConditionalAccessOnSecurityPolicy** The tenant admin has configured a security policy that blocks this request. Check the security policies defined at the tenant level to determine if your request meets the policy. <br>**DevicePolicyError** The user tried to sign into a device from a platform that's currently not supported through Conditional Access policy.<br>**DeviceNotCompliant** - Conditional Access policy requires a compliant device, and the device isn't compliant. The user must enroll their device with an approved MDM provider like Intune<br>**BlockedByConditionalAccess** Access has been blocked by Conditional Access policies. The access policy doesn't allow token issuance. |Some Azure tenants may block the usage of "Device Codes" for manipulating Azure resources as a security precaution. It's usually the result of your organization's IT policies. As a result, the Azure Percept Setup experience can't create any Azure resources for you. |Workaround |
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/modules.md
description: Describes how to define and consume a module, and how to use module
Previously updated : 06/01/2021 Last updated : 06/03/2021 # Use Bicep modules
output storageEndpoint object = stgModule.outputs.storageEndpoint
- **module**: Keyword. - **symbolic name** (stgModule): Identifier for the module.-- **module file**: The path to the module in this example is specified using a relative path (./storageAccount.bicep). All paths in Bicep must be specified using the forward slash (/) directory separator to ensure consistent compilation cross-platform. The Windows backslash (\\) character is unsupported.
+- **module file**: Module files must be referenced by using relative paths. All paths in Bicep must be specified using the forward slash (/) directory separator to ensure consistent compilation cross-platform. The Windows backslash (\\) character is unsupported. Paths can contain spaces.
- The **_name_** property (storageDeploy) is required when consuming a module. When Bicep generates the template IL, this field is used as the name of the nested deployment resource, which is generated for the module: ```json
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 06/01/2021 Last updated : 06/03/2021 # What is Bicep?
-Bicep is a language for declaratively deploying Azure resources. We believe Bicep offers the best authoring experience for your infrastructure as code solutions. It provides concise syntax, reliable type safety, and support for code reuse. Bicep is a domain-specific language (DSL), which means it's designed for a particular scenario or domain. It isn't intended as a general programming language for writing applications.
+Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your Azure infrastructure as code solutions.
-You can use Bicep instead of JSON for developing your Azure Resource Manager templates (ARM templates). The JSON syntax for creating a JSON template can be verbose and require complicated expression. Bicep improves that experience without losing any of the capabilities of a JSON template. It's a transparent abstraction over the JSON for ARM templates. Each Bicep file compiles to a standard ARM template.
+You can use Bicep instead of JSON to develop your Azure Resource Manager templates (ARM templates). The JSON syntax to create an ARM template can be verbose and require complicated expressions. Bicep syntax reduces that complexity and improves the development experience. Bicep is a transparent abstraction over ARM template JSON and doesn't lose any of the JSON template capabilities. During deployment, Bicep CLI transpiles a Bicep file into ARM template JSON.
+
+Bicep isn't intended as a general programming language to write applications. A Bicep file declares Azure resources and resource properties, without writing a sequence of programming commands to create resources.
Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file.
To learn about Bicep, see the following video.
To start with Bicep, [install the tools](./install.md).
-After installing the tools, try the [quickstart](./quickstart-create-bicep-use-visual-studio-code.md). The tutorial series walks you through the structure and capabilities of Bicep.
+After installing the tools, try the [quickstart](./quickstart-create-bicep-use-visual-studio-code.md), and the [Microsoft Learn Bicep modules](./learn-bicep.md).
To view equivalent JSON and Bicep files side by side, see the [Bicep Playground](https://aka.ms/bicepdemo). If you have an existing ARM template that you would like to decompile to Bicep, see [Decompile ARM templates to Bicep](./decompile.md).
+Additional Bicep examples can be found in the [Bicep GitHub repo](https://github.com/Azure/bicep/tree/main/docs/examples).
+ ## Benefits of Bicep versus other tools Bicep provides the following advantages over other options:
-* **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resources types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services.
-* **Simple syntax**: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages.
-* **Authoring experience**: When you use VS Code to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation.
-* **Modularity**: You can break your Bicep code into manageable parts by using [modules](./modules.md). The module deploys a set of related resources. Modules enable you to reuse code and simplify development. Add the module to a Bicep file anytime you need to deploy those resources.
-* **Integration with Azure services**: Bicep is integrated with Azure services such as Azure Policy, template specs, and Blueprints.
-* **No state or state files to manage**: All state is stored in Azure. Users can collaborate and have confidence their updates are handled as expected. Use the [what-if operation](./deploy-what-if.md) to preview changes before deploying your template.
-* **No cost and open source**: Bicep is completely free. You don't have to pay for premium capabilities. It's also supported by Microsoft support.
+- **Support for all resource types and API versions**: Bicep immediately supports all preview and GA versions for Azure services. As soon as a resource provider introduces new resources types and API versions, you can use them in your Bicep file. You don't have to wait for tools to be updated before using the new services.
+- **Simple syntax**: When compared to the equivalent JSON template, Bicep files are more concise and easier to read. Bicep requires no previous knowledge of programming languages. Bicep syntax is declarative and specifies which resources and resource properties you want to deploy.
+- **Authoring experience**: When you use VS Code to create your Bicep files, you get a first-class authoring experience. The editor provides rich type-safety, intellisense, and syntax validation.
+- **Modularity**: You can break your Bicep code into manageable parts by using [modules](./modules.md). The module deploys a set of related resources. Modules enable you to reuse code and simplify development. Add the module to a Bicep file anytime you need to deploy those resources.
+- **Integration with Azure services**: Bicep is integrated with Azure services such as Azure Policy, template specs, and Blueprints.
+- **No state or state files to manage**: All state is stored in Azure. Users can collaborate and have confidence their updates are handled as expected. Use the [what-if operation](./deploy-what-if.md) to preview changes before deploying your template.
+- **No cost and open source**: Bicep is completely free. You don't have to pay for premium capabilities. It's also supported by Microsoft support.
## Bicep improvements
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription. Previously updated : 05/28/2021 Last updated : 06/02/2021
For illustration purposes, we have only one dependent resource.
* Step 2: Move the resource and dependent resources together from the source subscription to the target subscription. * Step 3: Optionally, redistribute the dependent resources to different resource groups within the target subscription.
-## Validate move
-
-The [validate move operation](/rest/api/resources/resources/moveresources) lets you test your move scenario without actually moving the resources. Use this operation to check if the move will succeed. Validation is automatically called when you send a move request. Use this operation only when you need to predetermine the results. To run this operation, you need the:
-
-* name of the source resource group
-* resource ID of the target resource group
-* resource ID of each resource to move
-* the [access token](/rest/api/azure/#acquire-an-access-token) for your account
-
-Send the following request:
-
-```HTTP
-POST https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<source-group>/validateMoveResources?api-version=2019-05-10
-Authorization: Bearer <access-token>
-Content-type: application/json
-```
-
-With a request body:
-
-```json
-{
- "resources": ["<resource-id-1>", "<resource-id-2>"],
- "targetResourceGroup": "/subscriptions/<subscription-id>/resourceGroups/<target-group>"
-}
-```
-
-If the request is formatted correctly, the operation returns:
-
-```HTTP
-Response Code: 202
-cache-control: no-cache
-pragma: no-cache
-expires: -1
-location: https://management.azure.com/subscriptions/<subscription-id>/operationresults/<operation-id>?api-version=2018-02-01
-retry-after: 15
-...
-```
-
-The 202 status code indicates the validation request was accepted, but it hasn't yet determined if the move operation will succeed. The `location` value contains a URL that you use to check the status of the long-running operation.
-
-To check the status, send the following request:
-
-```HTTP
-GET <location-url>
-Authorization: Bearer <access-token>
-```
-
-While the operation is still running, you continue to receive the 202 status code. Wait the number of seconds indicated in the `retry-after` value before trying again. If the move operation validates successfully, you receive the 204 status code. If the move validation fails, you receive an error message, such as:
-
-```json
-{"error":{"code":"ResourceMoveProviderValidationFailed","message":"<message>"...}}
-```
- ## Use the portal To move resources, select the resource group that contains those resources.
When it has completed, you're notified of the result.
## Use Azure PowerShell
+### Validate
+
+To test your move scenario without actually moving the resources, use the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) command. Use this command only when you need to predetermine the results. To run this operation, you need the:
+
+* resource ID of the source resource group
+* resource ID of the target resource group
+* resource ID of each resource to move
+
+```azurepowershell
+Invoke-AzResourceAction -Action validateMoveResources `
+-ResourceId "/subscriptions/{subscription-id}/resourceGroups/{source-rg}" `
+-Parameters @{ resources= @("/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}", "/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}", "/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}");targetResourceGroup = '/subscriptions/{subscription-id}/resourceGroups/{destination-rg}' }
+```
+
+If validation passes, you see no output.
+
+If validation fails, you see an error message describing why the resources can't be moved.
+
+### Move
+ To move existing resources to another resource group or subscription, use the [Move-AzResource](/powershell/module/az.resources/move-azresource) command. The following example shows how to move several resources to a new resource group. ```azurepowershell-interactive
To move to a new subscription, include a value for the `DestinationSubscriptionI
## Use Azure CLI
+### Validate
+
+To test your move scenario without actually moving the resources, use the [az resource invoke-action](/cli/azure/resource#az_resource_invoke_action) command. Use this command only when you need to predetermine the results. To run this operation, you need the:
+
+* resource ID of the source resource group
+* resource ID of the target resource group
+* resource ID of each resource to move
+
+In the request body, use `\"` to escape double quotes.
+
+```azurecli
+az resource invoke-action --action validateMoveResources \
+ --ids "/subscriptions/{subscription-id}/resourceGroups/{source-rg}" \
+ --request-body "{ \"resources\": [\"/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}\", \"/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}\", \"/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}\"],\"targetResourceGroup\":\"/subscriptions/{subscription-id}/resourceGroups/{destination-rg}\" }"
+```
+
+If validation passes, you see:
+
+```azurecli
+{} Finished ..
+```
+
+If validation fails, you see an error message describing why the resources can't be moved.
+
+### Move
+ To move existing resources to another resource group or subscription, use the [az resource move](/cli/azure/resource#az_resource_move) command. Provide the resource IDs of the resources to move. The following example shows how to move several resources to a new resource group. In the `--ids` parameter, provide a space-separated list of the resource IDs to move. ```azurecli
To move to a new subscription, provide the `--destination-subscription-id` param
## Use REST API
+### Validate
+
+The [validate move operation](/rest/api/resources/resources/moveresources) lets you test your move scenario without actually moving the resources. Validation is automatically called when you send a move request, so use this operation separately only when you need to predetermine the results. To run this operation, you need the:
+
+* name of the source resource group
+* resource ID of the target resource group
+* resource ID of each resource to move
+* [access token](/rest/api/azure/#acquire-an-access-token) for your account
+
+Send the following request:
+
+```HTTP
+POST https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<source-group>/validateMoveResources?api-version=2019-05-10
+Authorization: Bearer <access-token>
+Content-type: application/json
+```
+
+With a request body:
+
+```json
+{
+ "resources": ["<resource-id-1>", "<resource-id-2>"],
+ "targetResourceGroup": "/subscriptions/<subscription-id>/resourceGroups/<target-group>"
+}
+```
+
+If the request is formatted correctly, the operation returns:
+
+```HTTP
+Response Code: 202
+cache-control: no-cache
+pragma: no-cache
+expires: -1
+location: https://management.azure.com/subscriptions/<subscription-id>/operationresults/<operation-id>?api-version=2018-02-01
+retry-after: 15
+...
+```
+
+The 202 status code indicates the validation request was accepted, but the service hasn't yet determined whether the move operation will succeed. The `location` value contains a URL that you use to check the status of the long-running operation.
+
+To check the status, send the following request:
+
+```HTTP
+GET <location-url>
+Authorization: Bearer <access-token>
+```
+
+While the operation is still running, you continue to receive the 202 status code. Wait the number of seconds indicated in the `retry-after` value before trying again. If the move operation validates successfully, you receive the 204 status code. If the move validation fails, you receive an error message, such as:
+
+```json
+{"error":{"code":"ResourceMoveProviderValidationFailed","message":"<message>"...}}
+```
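The polling outcomes above (202 = keep waiting for `retry-after` seconds, 204 = validation succeeded, anything else carries an error payload) can be sketched as a small helper. This is a hedged illustration; the function name is ours, not part of any Azure SDK:

```python
def interpret_validation_poll(status_code):
    """Map the status codes described above to a next action."""
    if status_code == 202:
        # Still running: wait `retry-after` seconds, then poll again.
        return "pending"
    if status_code == 204:
        # Validation passed: the move is expected to succeed.
        return "succeeded"
    # Any other code: the response body holds an error, such as
    # ResourceMoveProviderValidationFailed.
    return "failed"
```

A polling loop would call this after each GET of the `location` URL and stop on anything other than `"pending"`.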
+
+### Move
+ To move existing resources to another resource group or subscription, use the [Move resources](/rest/api/resources/resources/moveresources) operation. ```HTTP
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/overview.md
To meet these challenges, you can automate deployments and use the practice of i
To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
-We've introduced a new language for developing ARM templates. The language is named Bicep, and is currently in preview. Bicep and JSON templates offer the same capabilities. You can convert template between the two languages. Bicep provides a syntax that is easier to use for creating templates. For more information, see [What is Bicep (Preview)?](../bicep/overview.md).
+We've introduced a new language named Bicep that's used to develop ARM template JSON. Bicep files and JSON templates offer the same capabilities. You can convert templates between the two languages. Bicep provides a syntax that's easier to use for creating templates. For more information, see [What is Bicep?](../bicep/overview.md).
To learn about how you can get started with ARM templates, see the following video.
To learn about how you can get started with ARM templates, see the following vid
If you're trying to decide between using ARM templates and one of the other infrastructure as code services, consider the following advantages of using templates:
-* **Declarative syntax**: ARM templates allow you to create and deploy an entire Azure infrastructure declaratively. For example, you can deploy not only virtual machines, but also the network infrastructure, storage systems and any other resources you may need.
+* **Declarative syntax**: ARM templates allow you to create and deploy an entire Azure infrastructure declaratively. For example, you can deploy not only virtual machines, but also the network infrastructure, storage systems, and any other resources you may need.
* **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state. You can develop one template that represents the desired state, rather than developing lots of separate templates to represent updates.
azure-sql Always Encrypted Enclaves Enable Sgx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-enable-sgx.md
Title: "Enable Intel SGX for your Azure SQL Database"
+ Title: "Enable Intel SGX for Always Encrypted"
description: "Learn how to enable Intel SGX for Always Encrypted with secure enclaves in Azure SQL Database by selecting an SGX-enabled hardware generation."
-keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
ms.reviewer: vanto Last updated 01/15/2021
-# Enable Intel SGX for your Azure SQL Database
+# Enable Intel SGX for Always Encrypted for your Azure SQL Database
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]

> [!NOTE]
> Always Encrypted with secure enclaves for Azure SQL Database is currently in **public preview**.
-[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-vcore.md#dc-series) hardware generation.
+[Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves) in Azure SQL Database uses [Intel Software Guard Extensions (Intel SGX)](https://itpeernetwork.intel.com/microsoft-azure-confidential-computing/) enclaves. For Intel SGX to be available, the database must use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-sql-database-vcore.md#dc-series) hardware generation.
Configuring the DC-series hardware generation to enable Intel SGX enclaves is the responsibility of the Azure SQL Database administrator. See [Roles and responsibilities when configuring SGX enclaves and attestation](always-encrypted-enclaves-plan.md#roles-and-responsibilities-when-configuring-sgx-enclaves-and-attestation).
Configuring the DC-series hardware generation to enable Intel SGX enclaves is th
> Intel SGX is not available in hardware generations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).

> [!IMPORTANT]
-> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For more information, see [DC-series](service-tiers-vcore.md#dc-series).
+> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
-For detailed instructions for how to configure a new or existing database to use a specific hardware generation, see [Selecting a hardware generation](service-tiers-vcore.md#selecting-a-hardware-generation).
+For detailed instructions for how to configure a new or existing database to use a specific hardware generation, see [Selecting a hardware generation](service-tiers-sql-database-vcore.md#selecting-a-hardware-generation).
## Next steps
azure-sql Always Encrypted Enclaves Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-getting-started.md
Title: "Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database"
+ Title: "Tutorial: Getting started with Always Encrypted with secure enclaves"
description: This tutorial teaches you how to create a basic environment for Always Encrypted with secure enclaves in Azure SQL Database and how to encrypt data in-place, and issue rich confidential queries against encrypted columns using SQL Server Management Studio (SSMS).
-keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
To continue to interact with the PowerShell Gallery, run the following command b
## Step 1: Create and configure a server and a DC-series database
-In this step, you will create a new Azure SQL Database logical server and a new database using the DC-series hardware generation, required for Always Encrypted with secure enclaves. For more information see [DC-series](service-tiers-vcore.md#dc-series).
+In this step, you will create a new Azure SQL Database logical server and a new database using the DC-series hardware generation, required for Always Encrypted with secure enclaves. For more information, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
# [Portal](#tab/azure-portal)
In this step, you will create a new Azure SQL Database logical server and a new
- **Password**: Enter a password that meets requirements, and enter it again in the **Confirm password** field.
- **Location**: Select a location from the dropdown list.

> [!IMPORTANT]
- > You need to select a location (an Azure region) that supports both the DC-series hardware generation and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-vcore.md#dc-series-1). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
+ > You need to select a location (an Azure region) that supports both the DC-series hardware generation and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
Select **OK**.

1. Leave **Want to use SQL elastic pool** set to **No**.
In this step, you will create a new Azure SQL Database logical server and a new
1. Create a new resource group.

> [!IMPORTANT]
- > You need to create your resource group in a region (location) that supports both the DC-series hardware generation and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-vcore.md#dc-series-1). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
+ > You need to create your resource group in a region (location) that supports both the DC-series hardware generation and Microsoft Azure Attestation. For the list of regions supporting DC-series, see [DC-series availability](service-tiers-sql-database-vcore.md#dc-series). [Here](https://azure.microsoft.com/global-infrastructure/services/?products=azure-attestation) is the regional availability of Microsoft Azure Attestation.
```powershell $resourceGroupName = "<your new resource group name>"
After completing this tutorial, you can go to one of the following tutorials:
- [Tutorial: Develop a .NET Framework application using Always Encrypted with secure enclaves](/sql/relational-databases/security/tutorial-always-encrypted-enclaves-develop-net-framework-apps)
- [Tutorial: Creating and using indexes on enclave-enabled columns using randomized encryption](/sql/relational-databases/security/tutorial-creating-using-indexes-on-enclave-enabled-columns-using-randomized-encryption)
-## See Also
+## See also
- [Configure and use Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/configure-always-encrypted-enclaves)
azure-sql Always Encrypted Enclaves Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-plan.md
Title: "Plan for Intel SGX enclaves and attestation in Azure SQL Database"
description: "Plan the deployment of Always Encrypted with secure enclaves in Azure SQL Database."
-keywords: encrypt data, sql encryption, database encryption, sensitive data, Always Encrypted, secure enclaves, SGX, attestation
Last updated 01/15/2021
## Plan for Intel SGX in Azure SQL Database
-Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases that use the [vCore model](service-tiers-vcore.md) and the [DC-series](service-tiers-vcore.md?#dc-series) hardware generation. Therefore, to ensure you can use Always Encrypted with secure enclaves in your database, you need to either select the DC-series hardware generation when you create the database, or you can update your existing database to use the DC-series hardware generation.
+Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases that use the [vCore model](service-tiers-sql-database-vcore.md) and the [DC-series](service-tiers-sql-database-vcore.md?#dc-series) hardware generation. Therefore, to ensure you can use Always Encrypted with secure enclaves in your database, you need to either select the DC-series hardware generation when you create the database, or you can update your existing database to use the DC-series hardware generation.
> [!NOTE]
> Intel SGX is not available in hardware generations other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and it is not available for databases using the [DTU model](service-tiers-dtu.md).

> [!IMPORTANT]
-> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For details, see [DC-series](service-tiers-vcore.md#dc-series).
+> Before you configure the DC-series hardware generation for your database, check the regional availability of DC-series and make sure you understand its performance limitations. For details, see [DC-series](service-tiers-sql-database-vcore.md#dc-series).
## Plan for attestation in Azure SQL Database
azure-sql Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md
Title: Plan and manage costs for Azure SQL Database
+ Title: Plan and manage costs
description: Learn how to plan for and manage costs for Azure SQL Database by using cost analysis in the Azure portal.
Last updated 01/15/2021

# Plan and manage costs for Azure SQL Database

This article describes how you plan for and manage costs for Azure SQL Database. First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and resources used in your Azure subscription, including any third-party services.
Azure SQL Database supports two purchasing models: vCore and DTU. The way you ge
### Provisioned or serverless
-In the vCore purchasing model, Azure SQL Database also supports two types of compute tiers: provisioned throughput and serverless. The way you get charged for each compute tier varies so it's important to understand what works best for your workload when planning and considering costs. For details, see [vCore model overview - compute tiers](service-tiers-vcore.md#compute-tiers).
+In the vCore purchasing model, Azure SQL Database also supports two types of compute tiers: provisioned throughput and serverless. The way you get charged for each compute tier varies so it's important to understand what works best for your workload when planning and considering costs. For details, see [vCore model overview - compute tiers](service-tiers-sql-database-vcore.md#compute-tiers).
In the provisioned compute tier of the vCore-based purchasing model, you can exchange your existing licenses for discounted rates. For details, see [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md).
azure-sql Job Automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/job-automation-overview.md
Consider the following job scheduling technologies on different platforms:
Elastic Jobs can target [Azure SQL Databases](sql-database-paas-overview.md), [Azure SQL Database elastic pools](elastic-pool-overview.md), and Azure SQL Databases in [shard maps](elastic-scale-shard-map-management.md).
-For T-SQL script job automation in SQL Server and Azure SQL Managed Instance, consider [SQL Agent](job-automation-managed-instances.md).
+- For T-SQL script job automation in SQL Server and Azure SQL Managed Instance, consider [SQL Agent](../managed-instance/job-automation-managed-instance.md).
-For T-SQL script job automation in Azure Synapse Analytics, consider [pipelines with recurring triggers](../../synapse-analytics/data-integration/concepts-data-factory-differences.md), which are [based on Azure Data Factory](../../synapse-analytics/data-integration/concepts-data-factory-differences.md).
+- For T-SQL script job automation in Azure Synapse Analytics, consider [pipelines with recurring triggers](../../synapse-analytics/data-integration/concepts-data-factory-differences.md), which are [based on Azure Data Factory](../../synapse-analytics/data-integration/concepts-data-factory-differences.md).
It is worth noting the differences between SQL Agent (available in SQL Server and as part of SQL Managed Instance) and the Database Elastic Job agent (which can execute T-SQL on Azure SQL Databases, or on databases in SQL Server, Azure SQL Managed Instance, and Azure Synapse Analytics).
The outcome of a job's steps on each target database are recorded in detail, and
#### Job history
-View Elastic Job execution history in the *Job database* by [querying the table jobs.job_executions](elastic-jobs-tsql-create-manage.md#monitor-job-execution-status). A system cleanup job purges execution history that is older than 45 days. To remove history less than 45 days old, call the **sp_purge_jobhistory** stored procedure in the *Job database*.
+View Elastic Job execution history in the *Job database* by [querying the table jobs.job_executions](elastic-jobs-tsql-create-manage.md#monitor-job-execution-status). A system cleanup job purges execution history that is older than 45 days. To remove history less than 45 days old, call the `sp_purge_jobhistory` stored procedure in the *Job database*.
#### Job status
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Previously updated : 05/25/2021 Last updated : 05/02/2021

# Maintenance window (Preview)
Choosing a maintenance window other than the default is currently available in t
- East US
- East US2
- East Asia
+- Germany West Central
- Japan East
- NorthCentral US
- North Europe
azure-sql Migrate Dtu To Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/migrate-dtu-to-vcore.md
Besides the number of vCores (logical CPUs) and the hardware generation, several
- For the same hardware generation and the same number of vCores, IOPS and transaction log throughput resource limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be possible to lower the number of vCores in the vCore model to achieve the same level of performance. Resource limits for DTU and vCore databases in absolute values are exposed in the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) view. Comparing these values between the DTU database to be migrated and a vCore database using an approximately matching service objective will help you select the vCore service objective more precisely.
- The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be migrated, and for each hardware generation in the vCore model. Ensuring similar or higher total memory after migration to vCore is important for workloads that require a large memory data cache to achieve sufficient performance, or workloads that require large memory grants for query processing. For such workloads, depending on actual performance, it may be necessary to increase the number of vCores to get sufficient total memory.
- The [historical resource utilization](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) of the DTU database should be considered when choosing the vCore service objective. DTU databases with consistently under-utilized CPU resources may need fewer vCores than the number returned by the mapping query. Conversely, DTU databases where consistently high CPU utilization causes inadequate workload performance may require more vCores than returned by the query.
-- If migrating databases with intermittent or unpredictable usage patterns, consider the use of [Serverless](serverless-tier-overview.md) compute tier. 
Note that the max number of concurrent workers (requests) in serverless is 75% the limit in provisioned compute for the same number of max vcores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vcores configured; for example, max memory is 120 GB when 40 max vcores are configured.
+- If migrating databases with intermittent or unpredictable usage patterns, consider the use of [Serverless](serverless-tier-overview.md) compute tier. Note that the max number of concurrent workers (requests) in serverless is 75% the limit in provisioned compute for the same number of max vCores configured. Also, the max memory available in serverless is 3 GB times the maximum number of vCores configured; for example, max memory is 120 GB when 40 max vCores are configured.
- In the vCore model, the supported maximum database size may differ depending on hardware generation. For large databases, check supported maximum sizes in the vCore model for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).
- For elastic pools, the [DTU](resource-limits-dtu-elastic-pools.md) and [vCore](resource-limits-vcore-elastic-pools.md) models have differences in the maximum supported number of databases per pool. This should be considered when migrating elastic pools with many databases.
-- Some hardware generations may not be available in every region. Check availability under [Hardware Generations](service-tiers-vcore.md#hardware-generations).
+- Some hardware generations may not be available in every region. Check availability under [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
> [!IMPORTANT]
> The DTU to vCore sizing guidelines above are provided to help in the initial estimation of the target database service objective.
>
-> The optimal configuration of the target database is workload-dependent. Thus, achieving the optimal price/performance ratio after migration may require leveraging the flexibility of the vCore model to adjust the number of vCores, the [hardware generation](service-tiers-vcore.md#hardware-generations), the [service](service-tiers-vcore.md#service-tiers) and [compute](service-tiers-vcore.md#compute-tiers) tiers, as well as tuning of other database configuration parameters, such as [maximum degree of parallelism](/sql/relational-databases/query-processing-architecture-guide#parallel-query-processing).
+> The optimal configuration of the target database is workload-dependent. Thus, achieving the optimal price/performance ratio after migration may require leveraging the flexibility of the vCore model to adjust the number of vCores, the hardware generation, the service and compute tiers, as well as tuning of other database configuration parameters, such as [maximum degree of parallelism](/sql/relational-databases/query-processing-architecture-guide#parallel-query-processing).
> ### DTU to vCore migration examples
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/purchasing-models.md
Title: Purchasing models
description: Learn about the purchasing models that are available for Azure SQL Database and Azure SQL Managed Instance.
However, across the wide spectrum of customer workloads running in Azure SQL Dat
For example, an application that is sensitive to network latency can see better performance on Gen5 hardware vs. Gen4 due to the use of Accelerated Networking in Gen5, but an application using intensive read IO can see better performance on Gen4 hardware versus Gen5 due to a higher memory per core ratio on Gen4.
-Customers with workloads that are sensitive to hardware changes or customers who wish to control the choice of hardware generation for their database can use the [vCore](service-tiers-vcore.md) model to choose their preferred hardware generation during database creation and scaling. In the vCore model, resource limits of each service objective on each hardware generation are documented, for both [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). For more information about hardware generations in the vCore model, see [Hardware generations](./service-tiers-vcore.md#hardware-generations).
+Customers with workloads that are sensitive to hardware changes or customers who wish to control the choice of hardware generation for their database can use the [vCore](service-tiers-vcore.md) model to choose their preferred hardware generation during database creation and scaling. In the vCore model, resource limits of each service objective on each hardware generation are documented, for both [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). For more information about hardware generations in the vCore model, see [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
## Frequently asked questions (FAQs)
azure-sql Quota Increase Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/quota-increase-request.md
If your subscription needs access in a particular region, select the **Region ac
### Request enabling specific hardware in a region
-If a [hardware generation](service-tiers-vcore.md#hardware-generations) you want to use is not available in your region (see [Hardware availability](service-tiers-vcore.md#hardware-availability)), you may request it using the following steps.
+If a hardware generation you want to use is not available in your region, you may request it using the following steps. For more information on hardware generations and regional availability, see [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
1. Select the **Other quota request** quota type.
azure-sql Reserved Capacity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/reserved-capacity-overview.md
If you have questions or need help, [create a support request](https://portal.az
The vCore reservation discount is applied automatically to the number of databases or managed instances that match the capacity reservation scope and attributes. You can update the scope of the capacity reservation through the [Azure portal](https://portal.azure.com), PowerShell, Azure CLI, or the API.
+- For information on Azure SQL Database service tiers for the vCore model, see [vCore model overview - Azure SQL Database](service-tiers-sql-database-vcore.md).
+- For information on Azure SQL Managed Instance service tiers for the vCore model, see [vCore model overview - Azure SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md).
+ To learn how to manage the capacity reservation, see [manage reserved capacity](../../cost-management-billing/reservations/manage-reserved-vm-instance.md). To learn more about Azure Reservations, see the following articles:
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-sql-database-vcore.md
+
+ Title: vCore purchase model
+description: The vCore purchasing model lets you independently scale compute and storage resources, match on-premises performance, and optimize price for Azure SQL Database
+ Last updated : 06/02/2021
+# vCore purchase model overview - Azure SQL Database
+
+This article reviews the vCore purchase model for [Azure SQL Database](sql-database-paas-overview.md). For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
+
+The virtual core (vCore) purchase model used by Azure SQL Database provides several benefits over the DTU purchase model:
+
+- Higher compute, memory, I/O, and storage limits.
+- Control over the hardware generation to better match compute and memory requirements of the workload.
+- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md).
+- Greater transparency in the hardware details that power the compute, which facilitates planning for migrations from on-premises deployments.
+- [Reserved instance pricing](reserved-capacity-overview.md) is only available for the vCore purchase model.
+
+## Service tiers
+
+Service tier options in the vCore purchase model include General Purpose, Business Critical, and Hyperscale. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
+
+|**Use case**|**General Purpose**|**Business Critical**|**Hyperscale**|
+|||||
+|Best for|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.|Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica. |
+|Storage|Uses remote storage.<br/>**SQL Database provisioned compute**:<br/>5 GB – 4 TB<br/>**Serverless compute**:<br/>5 GB - 3 TB|Uses local SSD storage.<br/>**SQL Database provisioned compute**:<br/>5 GB – 4 TB|Flexible autogrow of storage as needed. Supports up to 100 TB of storage. Uses local SSD storage for local buffer-pool cache and local data storage. Uses Azure remote storage as final long-term data store. |
+|IOPS and throughput (approximate)|**SQL Database**: See resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).|See resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).|Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS and throughput will depend on the workload.|
+|Availability|1 replica, no read-scale replicas|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|1 read-write replica, plus 0-4 [read-scale replicas](read-scale-out.md)|
+|Backups|[Read-access geo-redundant storage (RA-GRS)](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|[RA-GRS](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|Snapshot-based backups in Azure remote storage. Restores use these snapshots for fast recovery. Backups are instantaneous and don't impact compute I/O performance. Restores are fast and aren't a size-of-data operation (taking minutes rather than hours or days).|
+|In-memory|Not supported|Supported|Partial support. Memory-optimized table types, table variables, and natively compiled modules are supported.|
+|||
++
+### Choosing a service tier
+
+For information on selecting a service tier for your particular workload, see the following articles:
+
+- [When to choose the General Purpose service tier](service-tier-general-purpose.md#when-to-choose-this-service-tier)
+- [When to choose the Business Critical service tier](service-tier-business-critical.md#when-to-choose-this-service-tier)
+- [When to choose the Hyperscale service tier](service-tier-hyperscale.md#who-should-consider-the-hyperscale-service-tier)
++
+## Compute tiers
+
+Compute tier options in the vCore model include the provisioned and serverless compute tiers.
++
+### Provisioned compute
+
+The provisioned compute tier provides a specific amount of compute resources that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour.
++
+### Serverless compute
+
+The [serverless compute tier](serverless-tier-overview.md) auto-scales compute resources based on workload activity, and bills for the amount of compute used per second.
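The billing difference between the two compute tiers can be made concrete with a back-of-the-envelope comparison. This is a sketch with entirely hypothetical per-vCore rates (real rates vary by region, service tier, and currency; see the Azure SQL Database pricing page), and it ignores the serverless minimum/maximum vCore bounds and auto-pause behavior:

```python
# Hypothetical rates -- not real Azure prices.
PROVISIONED_RATE_PER_VCORE_HOUR = 0.10          # assumed $/vCore-hour, billed busy or idle
SERVERLESS_RATE_PER_VCORE_SECOND = 0.14 / 3600  # assumed $/vCore-second, billed only when used

def provisioned_monthly_cost(vcores, hours=730):
    """Provisioned compute bills every provisioned vCore for every hour."""
    return vcores * PROVISIONED_RATE_PER_VCORE_HOUR * hours

def serverless_monthly_cost(avg_vcores_used, busy_seconds):
    """Serverless bills per second for the compute actually consumed."""
    return avg_vcores_used * SERVERLESS_RATE_PER_VCORE_SECOND * busy_seconds

# A workload busy 8 hours a day, 30 days a month, averaging 4 vCores while busy:
prov = provisioned_monthly_cost(4)
srv = serverless_monthly_cost(4, busy_seconds=8 * 3600 * 30)
print(f"provisioned: ${prov:.2f}/month, serverless: ${srv:.2f}/month")
```

Under these assumed rates the intermittent workload is cheaper on serverless; a workload that runs near-continuously would tip the other way, because the assumed per-second serverless rate is higher than the provisioned hourly equivalent.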
+++
+## Hardware generations
+
+Hardware generation options in the vCore model include Gen4/Gen5, M-series, Fsv2-series, and DC-series. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
+
+### Gen4/Gen5
+
+- Gen4/Gen5 hardware provides balanced compute and memory resources, and is suitable for most database workloads that do not need the higher memory, higher vCore counts, or faster single-vCore performance provided by Fsv2-series or M-series.
+
+For regions where Gen4/Gen5 is available, see [Gen4/Gen5 availability](#gen4gen5-1).
+
+### Fsv2-series
+
+- Fsv2-series is a compute optimized hardware option delivering low CPU latency and high clock speed for the most CPU demanding workloads.
+- Depending on the workload, Fsv2-series can deliver more CPU performance per vCore than Gen5, and the 72 vCore size can provide more CPU performance for less cost than 80 vCores on Gen5.
+- Fsv2 provides less memory and tempdb per vCore than other hardware, so workloads sensitive to those limits may want to consider Gen5 or M-series instead.
+
+Fsv2-series is only supported in the General Purpose tier. For regions where Fsv2-series is available, see [Fsv2-series availability](#fsv2-series-1).
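To see how the "72 Fsv2 vCores vs. 80 Gen5 vCores" comparison works out, the sketch below combines vCore counts with a per-vCore rate and a per-vCore performance ratio. Both numbers are hypothetical placeholders, not real Azure prices or benchmark results; substitute actual rates from the pricing page and a performance ratio measured on your own workload:

```python
# Hypothetical inputs -- not real Azure prices or benchmarks.
GEN5_RATE_PER_VCORE_HOUR = 0.10   # assumed Gen5 General Purpose $/vCore-hour
FSV2_RATE_PER_VCORE_HOUR = 0.10   # assumed Fsv2 $/vCore-hour (often comparable per vCore)
FSV2_PERF_PER_VCORE = 1.2         # assumed CPU performance relative to one Gen5 vCore

fsv2_cost = 72 * FSV2_RATE_PER_VCORE_HOUR   # hourly cost of the 72-vCore Fsv2 size
gen5_cost = 80 * GEN5_RATE_PER_VCORE_HOUR   # hourly cost of the 80-vCore Gen5 size
fsv2_perf = 72 * FSV2_PERF_PER_VCORE        # throughput in "Gen5 vCore equivalents"
gen5_perf = 80 * 1.0

print(f"Fsv2-72: ${fsv2_cost:.2f}/h for {fsv2_perf:.1f} Gen5-equivalent vCores")
print(f"Gen5-80: ${gen5_cost:.2f}/h for {gen5_perf:.1f} Gen5-equivalent vCores")
```

With these placeholder values, the 72-vCore Fsv2 size delivers more CPU throughput at a lower hourly cost, which is the scenario the bullet above describes; whether that holds in practice depends on the actual rates and how CPU-bound the workload is.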
+
+### M-series
+
+- M-series is a memory optimized hardware option for workloads demanding more memory and higher compute limits than provided by Gen5.
+- M-series provides 29 GB per vCore and up to 128 vCores, which increases the memory limit relative to Gen5 by 8x to nearly 4 TB.
+
+M-series is only supported in the Business Critical tier and does not support zone redundancy. For regions where M-series is available, see [M-series availability](#m-series-1).
+
+#### Azure offer types supported by M-series
+
+To access M-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by M-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
+
+<!--
+To enable M-series hardware for a subscription and region, a support request must be opened. The subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). If the support request is approved, then the selection and provisioning experience of M-series follows the same pattern as for other hardware generations. For regions where M-series is available, see [M-series availability](#m-series).
+-->
+
+### DC-series
+
+> [!NOTE]
+> DC-series is currently in **public preview**.
+
+- DC-series hardware uses Intel processors with Software Guard Extensions (Intel SGX) technology.
+- DC-series is required for [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves), which is not supported with other hardware configurations.
+- DC-series is designed for workloads that process sensitive data and demand confidential query processing capabilities, provided by Always Encrypted with secure enclaves.
+- DC-series hardware provides balanced compute and memory resources.
+
+DC-series is only supported in the provisioned compute tier (serverless compute is not supported), and it does not support zone redundancy. For regions where DC-series is available, see [DC-series availability](#dc-series-1).
+
+#### Azure offer types supported by DC-series
+
+To access DC-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
+
+### Compute and memory specifications
++
+|Hardware generation |Compute |Memory |
+|:|:|:|
+|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
+|Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz and Intel&reg; SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
+|Fsv2-series |- Intel&reg; 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
+|M-series |- Intel&reg; E7-8890 v3 2.5 GHz and Intel&reg; 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
+|DC-series | - Intel XEON E-2288G processors<br>- Featuring Intel Software Guard Extensions (Intel SGX)<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
+
+\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, while hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7. Resource limits for all Gen5 databases are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
+
+For more information on resource limits, see [Resource limits for single databases (vCore)](resource-limits-vcore-single-databases.md), or [Resource limits for elastic pools (vCore)](resource-limits-vcore-elastic-pools.md).
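As a quick sanity check, the per-vCore memory figures and the maximum provisioned memory in the table above are mutually consistent (the Fsv2 row rounds 136.8 GB down to 136 GB):

```python
# Values copied from the compute and memory specifications table above.
hardware = {
    "Gen4":        {"gb_per_vcore": 7.0,  "max_vcores": 24,  "documented_max_gb": 168},
    "Gen5":        {"gb_per_vcore": 5.1,  "max_vcores": 80,  "documented_max_gb": 408},
    "Fsv2-series": {"gb_per_vcore": 1.9,  "max_vcores": 72,  "documented_max_gb": 136},
    "M-series":    {"gb_per_vcore": 29.0, "max_vcores": 128, "documented_max_gb": 3712},  # ~3.7 TB
}

for name, spec in hardware.items():
    computed = spec["gb_per_vcore"] * spec["max_vcores"]
    # Allow < 1 GB slack for rows the table rounds down (Fsv2: 136.8 -> 136).
    assert abs(computed - spec["documented_max_gb"]) < 1, (name, computed)
    print(f"{name}: {computed:.1f} GB computed, {spec['documented_max_gb']} GB documented")
```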
+
+### Selecting a hardware generation
+
+In the Azure portal, you can select the hardware generation for a database or pool in SQL Database at the time of creation, or you can change the hardware generation of an existing database or pool.
+
+**To select a hardware generation when creating a SQL Database or pool**
+
+For detailed information, see [Create a SQL Database](single-database-create-quickstart.md).
+
+On the **Basics** tab, select the **Configure database** link in the **Compute + storage** section, and then select the **Change configuration** link:
++
+Select the desired hardware generation:
++
+**To change the hardware generation of an existing SQL Database or pool**
+
+For a database, on the Overview page, select the **Pricing tier** link:
++
+For a pool, on the Overview page, select **Configure**.
+
+Follow the steps to change configuration, and select the hardware generation as described in the previous steps.
+
+### Hardware availability
+
+#### <a id="gen4gen5-1"></a> Gen4/Gen5
+
+Gen4 hardware is [being phased out](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new databases must be deployed on Gen5 hardware.
+
+Gen5 is available in all public regions worldwide.
+
+#### Fsv2-series
+
+Fsv2-series is available in the following regions:
+Australia Central, Australia Central 2, Australia East, Australia Southeast, Brazil South, Canada Central, East Asia, East US, France Central, India Central, Korea Central, Korea South, North Europe, South Africa North, Southeast Asia, UK South, UK West, West Europe, West US 2.
+
+#### M-series
+
+M-series is available in the following regions:
+East US, North Europe, West Europe, West US 2.
+<!--
+M-series may also have limited availability in additional regions. You can request a different region than listed here, but fulfillment in a different region may not be possible.
+
+To enable M-series availability in a subscription, access must be requested by [filing a new support request](#create-a-support-request-to-enable-m-series).
++
+##### Create a support request to enable M-series:
+
+1. Select **Help + support** in the portal.
+2. Select **New support request**.
+
+On the **Basics** page, provide the following:
+
+1. For **Issue type**, select **Service and subscription limits (quotas)**.
+2. For **Subscription** = select the subscription to enable M-series.
+3. For **Quota type**, select **SQL database**.
+4. Select **Next** to go to the **Details** page.
+
+On the **Details** page, provide the following:
+
+1. In the **PROBLEM DETAILS** section select the **Provide details** link.
+2. For **SQL Database quota type** select **M-series**.
+3. For **Region**, select the region to enable M-series.
+ For regions where M-series is available, see [M-series availability](#m-series).
+
+Approved support requests are typically fulfilled within 5 business days.
+-->
+
+#### DC-series
+
+> [!NOTE]
+> DC-series is currently in **public preview**.
+
+DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
+
+If you need DC-series in a currently unsupported region, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) following the instructions in [Request quota increases for Azure SQL Database and SQL Managed Instance](quota-increase-request.md).
+
+## Next steps
+
+- To get started, see [Creating a SQL Database using the Azure portal](single-database-create-quickstart.md)
+- For pricing details, see the [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/)
+- For details about the specific compute and storage sizes available, see:
+ - [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md)
+ - [vCore-based resource limits for pooled Azure SQL Database](resource-limits-vcore-elastic-pools.md)
azure-sql Service Tiers Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-vcore.md
Title: vCore purchasing model overview-
+ Title: vCore purchase model
+ description: The vCore purchasing model lets you independently scale compute and storage resources, match on-premises performance, and optimize price for Azure SQL Database and Azure SQL Managed Instance.
Previously updated : 05/01/2021 Last updated : 05/18/2021
+
+# vCore model overview - Azure SQL Database and Azure SQL Managed Instance
The virtual core (vCore) purchasing model used by Azure SQL Database and Azure SQL Managed Instance provides several benefits: -- Higher compute, memory, I/O, and storage limits. - Control over the hardware generation to better match compute and memory requirements of the workload. - Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](reserved-capacity-overview.md). - Greater transparency in the hardware details that power the compute, that facilitates planning for migrations from on-premises deployments.
+- In the case of Azure SQL Database, the vCore purchasing model provides higher compute, memory, I/O, and storage limits than the DTU model.
-## Service tiers
-
-Service tier options in the vCore model include General Purpose, Business Critical, and Hyperscale. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
-
-|-|**General Purpose**|**Business Critical**|**Hyperscale**|
-|||||
-|Best for|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.|Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica. |
-|Storage|Uses remote storage.<br/>**SQL Database provisioned compute**:<br/>5 GB – 4 TB<br/>**Serverless compute**:<br/>5 GB - 3 TB<br/>**SQL Managed Instance**: 32 GB - 8 TB |Uses local SSD storage.<br/>**SQL Database provisioned compute**:<br/>5 GB – 4 TB<br/>**SQL Managed Instance**:<br/>32 GB - 4 TB |Flexible autogrow of storage as needed. Supports up to 100 TB of storage. Uses local SSD storage for local buffer-pool cache and local data storage. Uses Azure remote storage as final long-term data store. |
-|IOPS and throughput (approximate)|**SQL Database**: See resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).<br/>**SQL Managed Instance**: See [Overview Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#service-tier-characteristics).|See resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).|Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS and throughput will depend on the workload.|
-|Availability|1 replica, no read-scale replicas|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|1 read-write replica, plus 0-4 [read-scale replicas](read-scale-out.md)|
-|Backups|[Read-access geo-redundant storage (RA-GRS)](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|[RA-GRS](../..//storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|Snapshot-based backups in Azure remote storage. Restores use these snapshots for fast recovery. Backups are instantaneous and don't impact compute I/O performance. Restores are fast and aren't a size-of-data operation (taking minutes rather than hours or days).|
-|In-memory|Not supported|Supported|Not supported|
-|||
--
-### Choosing a service tier
-
-For information on selecting a service tier for your particular workload, see the following articles:
--- [When to choose the General Purpose service tier](service-tier-general-purpose.md#when-to-choose-this-service-tier)-- [When to choose the Business Critical service tier](service-tier-business-critical.md#when-to-choose-this-service-tier)-- [When to choose the Hyperscale service tier](service-tier-hyperscale.md#who-should-consider-the-hyperscale-service-tier)--
-## Compute tiers
-
-Compute tier options in the vCore model include the provisioned and serverless compute tiers.
--
-### Provisioned compute
-
-The provisioned compute tier provides a specific amount of compute resources that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour.
--
-### Serverless compute
-
-The [serverless compute tier](serverless-tier-overview.md) auto-scales compute resources based on workload activity, and bills for the amount of compute used per second.
---
-## Hardware generations
-
-Hardware generation options in the vCore model include Gen 4/5, M-series, Fsv2-series, and DC-series. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
-
-### Gen4/Gen5
--- Gen4/Gen5 hardware provides balanced compute and memory resources, and is suitable for most database workloads that do not have higher memory, higher vCore, or faster single vCore requirements as provided by Fsv2-series or M-series.-
-For regions where Gen4/Gen5 is available, see [Gen4/Gen5 availability](#gen4gen5-1).
-
-### Fsv2-series
--- Fsv2-series is a compute optimized hardware option delivering low CPU latency and high clock speed for the most CPU demanding workloads.-- Depending on the workload, Fsv2-series can deliver more CPU performance per vCore than Gen5, and the 72 vCore size can provide more CPU performance for less cost than 80 vCores on Gen5. -- Fsv2 provides less memory and tempdb per vCore than other hardware so workloads sensitive to those limits may want to consider Gen5 or M-series instead.  -
-Fsv2-series in only supported in the General Purpose tier. For regions where Fsv2-series is available, see [Fsv2-series availability](#fsv2-series-1).
-
-### M-series
--- M-series is a memory optimized hardware option for workloads demanding more memory and higher compute limits than provided by Gen5.-- M-series provides 29 GB per vCore and up to 128 vCores, which increases the memory limit relative to Gen5 by 8x to nearly 4 TB.-
-M-series is only supported in the Business Critical tier and does not support zone redundancy. For regions where M-series is available, see [M-series availability](#m-series-1).
-
-#### Azure offer types supported by M-series
-
-To access M-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by M-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
-
-<!--
-To enable M-series hardware for a subscription and region, a support request must be opened. The subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). If the support request is approved, then the selection and provisioning experience of M-series follows the same pattern as for other hardware generations. For regions where M-series is available, see [M-series availability](#m-series).
>-
-### DC-series
-
-> [!NOTE]
-> DC-series is currently in **public preview**.
--- DC-series hardware uses Intel processors with Software Guard Extensions (Intel SGX) technology.-- DC-series is required for [Always Encrypted with secure enclaves](/sql/relational-databases/security/encryption/always-encrypted-enclaves), which is not supported with other hardware configurations.-- DC-series is designed for workloads that process sensitive data and demand confidential query processing capabilities, provided by Always Encrypted with secure enclaves.-- DC-series hardware provides balanced compute and memory resources.-
-DC-series is only supported for the Provisioned compute (Serverless is not supported) and it does not support zone redundancy. For regions where DC-series is available, see [DC-series availability](#dc-series-1).
-
-### Compute and memory specifications
-
-|Hardware generation |Compute |Memory |
-|:|:|:|
-|Gen4 |- Intel® E5-2673 v3 (Haswell) 2.4 GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
-|Gen5 |**Provisioned compute**<br>- Intel® E5-2673 v4 (Broadwell) 2.3-GHz, Intel® SP-8160 (Skylake)\*, and Intel® 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel® E5-2673 v4 (Broadwell) 2.3-GHz and Intel® SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
-|Fsv2-series |- Intel® 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
-|M-series |- Intel® E7-8890 v3 2.5 GHz and Intel® 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
-|DC-series | - Intel XEON E-2288G processors<br>- Featuring Intel Software Guard Extension (Intel SGX))<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
-
-\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel® SP-8160 (Skylake) processors appears as Gen6, while hardware generation for databases using Intel® 8272CL (Cascade Lake) appears as Gen7. Resource limits for all Gen5 databases are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
-
-For more information on resource limits, see [Resource limits for single databases (vCore)](resource-limits-vcore-single-databases.md), or [Resource limits for elastic pools (vCore)](resource-limits-vcore-elastic-pools.md).
-
-### Selecting a hardware generation
-
-In the Azure portal, you can select the hardware generation for a database or pool in SQL Database at the time of creation, or you can change the hardware generation of an existing database or pool.
-
-**To select a hardware generation when creating a SQL Database or pool**
-
-For detailed information, see [Create a SQL Database](single-database-create-quickstart.md).
-
-On the **Basics** tab, select the **Configure database** link in the **Compute + storage** section, and then select the **Change configuration** link:
-
- ![configure database](./media/service-tiers-vcore/configure-sql-database.png)
-
-Select the desired hardware generation:
+For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
- ![select hardware](./media/service-tiers-vcore/select-hardware.png)
--
-**To change the hardware generation of an existing SQL Database or pool**
-
-For a database, on the Overview page, select the **Pricing tier** link:
-
- ![change hardware](./media/service-tiers-vcore/change-hardware.png)
-
-For a pool, on the Overview page, select **Configure**.
-
-Follow the steps to change configuration, and select the hardware generation as described in the previous steps.
-
-**To select a hardware generation when creating a SQL Managed Instance**
-
-For detailed information, see [Create a SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
-
-On the **Basics** tab, select the **Configure database** link in the **Compute + storage** section, and then select desired hardware generation:
-
- ![configure SQL Managed Instance](./media/service-tiers-vcore/configure-managed-instance.png)
-
-**To change the hardware generation of an existing SQL Managed Instance**
-
-# [The Azure portal](#tab/azure-portal)
-
-From the SQL Managed Instance page, select **Pricing tier** link placed under the Settings section
-
-![change SQL Managed Instance hardware](./media/service-tiers-vcore/change-managed-instance-hardware.png)
-
-On the Pricing tier page, you will be able to change hardware generation as described in the previous steps.
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the following PowerShell script:
-
-```powershell-interactive
-Set-AzSqlInstance -Name "managedinstance1" -ResourceGroupName "ResourceGroup01" -ComputeGeneration Gen5
-```
-
-For more details, check [Set-AzSqlInstance](/powershell/module/az.sql/set-azsqlinstance) command.
-
-# [The Azure CLI](#tab/azure-cli)
-
-Use the following CLI command:
-
-```azurecli-interactive
-az sql mi update -g mygroup -n myinstance --family Gen5
-```
-
-For more details, check [az sql mi update](/cli/azure/sql/mi#az_sql_mi_update) command.
---
-### Hardware availability
-
-#### <a name="gen4gen5-1"></a> Gen4/Gen5
-
-Gen4 hardware is [being phased out](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new databases must be deployed on Gen5 hardware.
-
-Gen5 is available in all public regions worldwide.
-
-#### Fsv2-series
-
-Fsv2-series is available in the following regions:
-Australia Central, Australia Central 2, Australia East, Australia Southeast, Brazil South, Canada Central, East Asia, East Us, France Central, India Central, Korea Central, Korea South, North Europe, South Africa North, Southeast Asia, UK South, UK West, West Europe, West Us 2.
--
-#### M-series
-
-M-series is available in the following regions:
-East US, North Europe, West Europe, West US 2.
-<!--
-M-series may also have limited availability in additional regions. You can request a different region than listed here, but fulfillment in a different region may not be possible.
-
-To enable M-series availability in a subscription, access must be requested by [filing a new support request](#create-a-support-request-to-enable-m-series).
--
-##### Create a support request to enable M-series:
-
-1. Select **Help + support** in the portal.
-2. Select **New support request**.
-
-On the **Basics** page, provide the following:
-
-1. For **Issue type**, select **Service and subscription limits (quotas)**.
-2. For **Subscription** = select the subscription to enable M-series.
-3. For **Quota type**, select **SQL database**.
-4. Select **Next** to go to the **Details** page.
-
-On the **Details** page, provide the following:
-
-1. In the **PROBLEM DETAILS** section select the **Provide details** link.
-2. For **SQL Database quota type** select **M-series**.
-3. For **Region**, select the region to enable M-series.
- For regions where M-series is available, see [M-series availability](#m-series).
-
-Approved support requests are typically fulfilled within 5 business days.
>-
-#### DC-series
-
-> [!NOTE]
-> DC-series is currently in **public preview**.
+## Service tiers
-DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
+The following articles provide specific information on the vCore purchase model in each product.
-If you need DC-series in a currently unsupported region, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) following the instructions in [Request quota increases for Azure SQL Database and SQL Managed Instance](quota-increase-request.md).
+- For information on Azure SQL Database service tiers for the vCore model, see [vCore model overview - Azure SQL Database](service-tiers-sql-database-vcore.md).
+- For information on Azure SQL Managed Instance service tiers for the vCore model, see [vCore model overview - Azure SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md).
## Next steps
To get started, see:
- [Creating a SQL Database using the Azure portal](single-database-create-quickstart.md) - [Creating a SQL Managed Instance using the Azure portal](../managed-instance/instance-create-quickstart.md)
-For pricing details, see the [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/).
-
+- For pricing details, see
+ - [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/single/)
+ - [Azure SQL Managed Instance single instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/single/)
+ - [Azure SQL Managed Instance pools pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/pools/)
+
For details about the specific compute and storage sizes available in the general purpose and business critical service tiers, see: - [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md).
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
Use the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvau
Set-AzKeyVaultAccessPolicy -VaultName <KeyVaultName> ` -ObjectId $server.Identity.PrincipalId -PermissionsToKeys get, wrapKey, unwrapKey ```
-For adding permissions to your server on a Managed HSM, add the 'Managed HSM Crypto Service Encryption' local RBAC role to the server. This will enable the server to perform get, wrap key, unwrap key operations on the keys in the Managed HSM.
+To add permissions to your server on a Managed HSM, add the 'Managed HSM Crypto Service Encryption User' local RBAC role to the server. This enables the server to perform get, wrapKey, and unwrapKey operations on the keys in the Managed HSM.
[Instructions for provisioning server access on Managed HSM](../../key-vault/managed-hsm/role-management.md) ## Add the Key Vault key to the server and set the TDE Protector
azure-sql Troubleshoot Common Connectivity Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-connectivity-issues.md
public bool IsTransient(Exception ex)
[step-4-connect-resiliently-to-sql-with-ado-net-a78n]: /sql/connect/ado-net/step-4-connect-resiliently-sql-ado-net
-[step-4-connect-resiliently-to-sql-with-php-p42h]: /sql/connect/php/step-4-connect-resiliently-to-sql-with-php
+[step-4-connect-resiliently-to-sql-with-php-p42h]: /sql/connect/php/step-4-connect-resiliently-to-sql-with-php
+
+## See also
+
+- [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md)
+- [Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-transaction-log-errors-issues.md)
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
The following steps can either help you work around the problem or provide you w
If you repeatedly encounter this error, try to resolve the issue by following these steps:
-1. Check the sys.dm_exec_requests view to see any open sessions that have a high value for the total_elapsed_time column. Perform this check by running the following SQL script:
+1. Check the `sys.dm_exec_requests` view to see any open sessions that have a high value for the `total_elapsed_time` column. Perform this check by running the following SQL script:
```sql
SELECT * FROM sys.dm_exec_requests;
```
-2. Determine the **input buffer** for the head blocker using the [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management function, and the session_id of the offending query, for example:
+2. Determine the input buffer for the head blocker using the [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) dynamic management function, and the `session_id` of the offending query, for example:
```sql
SELECT * FROM sys.dm_exec_input_buffer (100,0);
If you repeatedly encounter this error, try to resolve the issue by following th
3. Tune the query.
- > [!Note]
+ > [!NOTE]
> For more information on troubleshooting blocking in Azure SQL Database, see [Understand and resolve Azure SQL Database blocking problems](understand-resolve-blocking.md). Also consider batching your queries. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).
Try to reduce the number of rows that are operated on immediately by implementin
> [!NOTE] > For an index rebuild, the average size of the field that's updated should be substituted by the average index size.
+ > [!NOTE]
+ > For more information on troubleshooting a full transaction log in Azure SQL Database and Azure SQL Managed Instance, see [Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-transaction-log-errors-issues.md).
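+
+ As a hedged sketch of the batching approach described above (the table name and predicate are hypothetical):
+
+ ```sql
+ -- Delete in small batches so each transaction stays short and the
+ -- log can truncate between batches (hypothetical table and filter)
+ DECLARE @rows int = 1;
+ WHILE @rows > 0
+ BEGIN
+     DELETE TOP (1000) FROM dbo.StagingRows
+     WHERE ProcessedDate < DATEADD(day, -30, SYSDATETIME());
+     SET @rows = @@ROWCOUNT;
+ END;
+ ```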
++ ### Error 40553: The session has been terminated because of excessive memory usage ``40553 : The session has been terminated because of excessive memory usage. Try modifying your query to process fewer rows.``
For more information about how to enable logging, see [Enable diagnostics loggin
## Next steps - [Azure SQL Database connectivity architecture](./connectivity-architecture.md)-- [Azure SQL Database and Azure Synapse Analytics network access controls](./network-access-controls-overview.md)
+- [Azure SQL Database and Azure Synapse Analytics network access controls](./network-access-controls-overview.md)
+
+## See also
+
+- [Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-transaction-log-errors-issues.md)
+- [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
azure-sql Troubleshoot Transaction Log Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-transaction-log-errors-issues.md
+
+ Title: Troubleshoot transaction log issues
+
+description: Provides steps to troubleshoot transaction log issues in Azure SQL Database or Azure SQL Managed Instance
+ Last updated : 06/02/2021
+# Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance
+
+You may see errors 9002 or 40552 when the transaction log is full and cannot accept new transactions. These errors occur when the database transaction log, managed by Azure SQL Database or Azure SQL Managed Instance, exceeds thresholds for space and cannot continue to accept transactions.
+
+These errors are similar to issues with a full transaction log in SQL Server, but have different resolutions in Azure SQL Database or Azure SQL Managed Instance.
+
+> [!NOTE]
+> **This article is focused on Azure SQL Database and Azure SQL Managed Instance.** Azure SQL Database and Azure SQL Managed Instance are based on the latest stable version of the Microsoft SQL Server database engine, so much of the content is similar, though troubleshooting options and tools may differ. For more on full transaction log errors in SQL Server, see [Troubleshoot a Full Transaction Log (SQL Server Error 9002)](/sql/relational-databases/logs/troubleshoot-a-full-transaction-log-sql-server-error-9002).
+
+## Automated backups and the transaction log
+
+There are some key differences between Azure SQL Database and Azure SQL Managed Instance with regard to database file space management.
+
+- In Azure SQL Database or Azure SQL Managed Instance, transaction log backups are taken automatically. For frequency, retention, and more information, see [Automated backups - Azure SQL Database & SQL Managed Instance](automated-backups-overview.md).
+- In Azure SQL Database, free disk space, database file growth, and file location are also managed, so the typical causes and resolutions of transaction log issues are different from SQL Server.
+- In Azure SQL Managed Instance, the location and name of database files cannot be changed, but administrators can manage database file size and autogrowth settings. The typical causes and resolutions of transaction log issues are similar to SQL Server.
+
+Similar to SQL Server, the transaction log for each database is truncated whenever a log backup is taken. Truncation leaves empty space in the log file, which can then accept new transactions. When the log file cannot be truncated by log backups, the log file grows to accommodate new transactions. If the log file grows to its maximum limit in Azure SQL Database or Azure SQL Managed Instance, new transactions cannot be accepted. This is a very unusual scenario.
+
+## Prevented transaction log truncation
+
+To discover what is preventing log truncation in a given case, refer to `log_reuse_wait_desc` in `sys.databases`. The log reuse wait informs you of what condition or cause is preventing the transaction log from being truncated by a regular log backup. For more information, see [sys.databases &#40;Transact-SQL&#41;](/sql/relational-databases/system-catalog-views/sys-databases-transact-sql).
+
+```sql
+SELECT [name], log_reuse_wait_desc FROM sys.databases;
+```
+
+The following values of `log_reuse_wait_desc` in `sys.databases` may indicate the reason why the database's transaction log truncation is being prevented:
+
+| log_reuse_wait_desc | Diagnosis | Response required |
+|--|--|--|
+| **Nothing** | Typical state. There is nothing blocking the log from truncating. | No. |
+| **Checkpoint** | A checkpoint is needed for log truncation. Rare. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
+| **Log Backup** | A log backup is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
+| **Active backup or restore** | A database backup is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
+| **Active transaction** | An ongoing transaction is preventing log truncation. | The log file cannot be truncated due to active or uncommitted transactions. See the next section. |
+| **AVAILABILITY_REPLICA** | Synchronization to the secondary replica is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
++
+### Log truncation prevented by an active transaction
+
+The most common scenario for a transaction log that cannot accept new transactions is a long-running or blocked transaction.
+
+Run this sample query to find uncommitted or active transactions and their properties.
+
+- Returns information about transaction properties, from [sys.dm_tran_active_transactions](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-active-transactions-transact-sql).
+- Returns session connection information, from [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql).
+- Returns request information (for active requests), from [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql). This query can also be used to identify sessions being blocked; look for the `request_blocked_by` column. For more information on blocking, see [Gather blocking information](understand-resolve-blocking.md#gather-blocking-information).
+- Returns the current request's text or input buffer text, using the [sys.dm_exec_sql_text](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sql-text-transact-sql) or [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) DMVs. If the data returned by the `text` field of `sys.dm_exec_sql_text` is NULL, the request is not active but has an outstanding transaction. In that case, the `event_info` field of `sys.dm_exec_input_buffer` will contain the last command string passed to the database engine.
+
+```sql
+SELECT [database_name] = db_name(s.database_id)
+, tat.transaction_id, tat.transaction_begin_time, tst.session_id
+, session_open_transaction_count = tst.open_transaction_count --open transactions not yet committed or rolled back.
+, transaction_duration_s = datediff(s, tat.transaction_begin_time, sysdatetime())
+, input_buffer = ib.event_info
+, request_text = CASE WHEN r.statement_start_offset = 0 and r.statement_end_offset= 0 THEN left(est.text, 4000)
+ ELSE SUBSTRING ( est.[text], r.statement_start_offset/2 + 1,
+ CASE WHEN r.statement_end_offset = -1 THEN LEN (CONVERT(nvarchar(max), est.[text]))
+ ELSE r.statement_end_offset/2 - r.statement_start_offset/2 + 1
+ END ) END
+, request_status = r.status
+, request_blocked_by = r.blocking_session_id
+, transaction_state = CASE tat.transaction_state
+ WHEN 0 THEN 'The transaction has not been completely initialized yet.'
+ WHEN 1 THEN 'The transaction has been initialized but has not started.'
+ WHEN 2 THEN 'The transaction is active - has not been committed or rolled back.'
+ WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
+ WHEN 4 THEN 'The commit process has been initiated on the distributed transaction. This is for distributed transactions only. The distributed transaction is still active but further processing cannot take place.'
+ WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
+ WHEN 6 THEN 'The transaction has been committed.'
+ WHEN 7 THEN 'The transaction is being rolled back.'
+ WHEN 8 THEN 'The transaction has been rolled back.' END
+, transaction_name = tat.name
+, azure_dtc_state --Applies to: Azure SQL Database only
+ = CASE tat.dtc_state
+ WHEN 1 THEN 'ACTIVE'
+ WHEN 2 THEN 'PREPARED'
+ WHEN 3 THEN 'COMMITTED'
+ WHEN 4 THEN 'ABORTED'
+ WHEN 5 THEN 'RECOVERED' END
+, transaction_type = CASE tat.transaction_type WHEN 1 THEN 'Read/write transaction'
+ WHEN 2 THEN 'Read-only transaction'
+ WHEN 3 THEN 'System transaction'
+ WHEN 4 THEN 'Distributed transaction' END
+, tst.is_user_transaction
+, local_or_distributed = CASE tst.is_local WHEN 1 THEN 'Local transaction, not distributed' WHEN 0 THEN 'Distributed transaction or an enlisted bound session transaction.' END
+, transaction_uow --for distributed transactions.
+, s.login_time, s.host_name, s.program_name, s.client_interface_name, s.login_name, s.is_user_process
+, session_cpu_time = s.cpu_time, session_logical_reads = s.logical_reads, session_reads = s.reads, session_writes = s.writes
+, observed = sysdatetimeoffset()
+FROM sys.dm_tran_active_transactions AS tat
+INNER JOIN sys.dm_tran_session_transactions AS tst on tat.transaction_id = tst.transaction_id
+INNER JOIN sys.dm_exec_sessions AS s on s.session_id = tst.session_id
+LEFT OUTER JOIN sys.dm_exec_requests AS r on r.session_id = s.session_id
+CROSS APPLY sys.dm_exec_input_buffer(s.session_id, null) AS ib
+OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) AS est;
+```
++
+### File management to free more space
+
+If the transaction log is prevented from truncating, freeing more space in the allocation of database files may be part of the solution. However, resolving the root condition blocking transaction log truncation is key.
+
+In some cases, temporarily creating more disk space will allow a long-running transaction to complete, removing the condition blocking the transaction log file from truncating with a normal transaction log backup. However, freeing up allocated space may provide only temporary relief until the transaction log grows again.
+
+For more information on managing the file space of databases and elastic pools, see [Manage file space for databases in Azure SQL Database](file-space-manage.md).
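+
+To see how much of the current database's transaction log is actually in use, you can query `sys.dm_db_log_space_usage` (a minimal monitoring sketch):
+
+```sql
+-- Current transaction log size and utilization for the current database
+SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb
+, used_log_space_in_bytes / 1048576.0 AS used_log_space_mb
+, used_log_space_in_percent
+FROM sys.dm_db_log_space_usage;
+```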
++
+### Error 40552: The session has been terminated because of excessive transaction log space usage
+
+``40552: The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction.``
+
+To resolve this issue, try the following methods:
+
+1. The issue can occur because of insert, update, or delete operations. Review the transaction to avoid unnecessary writes. Try to reduce the number of rows that are operated on immediately by implementing batching or splitting into multiple smaller transactions. For more information, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).
+2. The issue can occur because of index rebuild operations. To avoid this issue, ensure the following formula is true: (number of rows affected in the table) multiplied by (the average size of the updated field in bytes + 80) < 2 gigabytes (GB). For large tables, consider creating partitions and performing index maintenance only on some partitions of the table. For more information, see [Create Partitioned Tables and Indexes](/sql/relational-databases/partitions/create-partitioned-tables-and-indexes?view=azuresqldb-current&preserve-view=true).
+3. If you perform bulk inserts using the `bcp.exe` utility or the `System.Data.SqlClient.SqlBulkCopy` class, try using the `-b batchsize` or `BatchSize` options to limit the number of rows copied to the server in each transaction. For more information, see [bcp Utility](/sql/tools/bcp-utility).
+4. If you are rebuilding an index with the `ALTER INDEX` statement, use the `SORT_IN_TEMPDB = ON` and `ONLINE = ON` options. For more information, see [ALTER INDEX (Transact-SQL)](/sql/t-sql/statements/alter-index-transact-sql).
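+
+As a minimal sketch of method 4 above, an online rebuild that sorts in `tempdb` might look like the following (the index and table names are hypothetical):
+
+```sql
+-- Hypothetical index and table; SORT_IN_TEMPDB moves the sort work
+-- out of the user database, and ONLINE keeps the table available
+ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
+REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);
+```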
+
+> [!NOTE]
+> For more information on other resource governor errors, see [Resource governance errors](troubleshoot-common-errors-issues.md#resource-governance-errors).
+
+## Next steps
+
+- [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md)
+- [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
++
+For information on transaction log sizes, see:
+- [vCore resource limits for single databases](resource-limits-vcore-single-databases.md)
+- [vCore resource limits for elastic pools](resource-limits-vcore-elastic-pools.md)
+- [DTU resource limits for single databases](resource-limits-dtu-single-databases.md)
+- [DTU resource limits for elastic pools](resource-limits-dtu-elastic-pools.md)
+- [Resource limits for SQL Managed Instance](../managed-instance/resource-limits.md)
+
azure-sql Understand Resolve Blocking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/understand-resolve-blocking.md
Referencing DMVs to troubleshoot blocking has the goal of identifying the SPID (
Remember to run each of these scripts in the target Azure SQL database.
-* The sp_who and sp_who2 commands are older commands to show all current sessions. The DMV sys.dm_exec_sessions returns more data in a result set that is easier to query and filter. You will find sys.dm_exec_sessions at the core of other queries.
+* The `sp_who` and `sp_who2` commands are older commands to show all current sessions. The DMV `sys.dm_exec_sessions` returns more data in a result set that is easier to query and filter. You will find `sys.dm_exec_sessions` at the core of other queries.
-* If you already have a particular session identified, you can use `DBCC INPUTBUFFER(<session_id>)` to find the last statement that was submitted by a session. Similar results can be returned with the sys.dm_exec_input_buffer dynamic management function (DMF), in a result set that is easier to query and filter, providing the session_id and the request_id. For example, to return the most recent query submitted by session_id 66 and request_id 0:
+* If you already have a particular session identified, you can use `DBCC INPUTBUFFER(<session_id>)` to find the last statement that was submitted by a session. Similar results can be returned with the `sys.dm_exec_input_buffer` dynamic management function (DMF), in a result set that is easier to query and filter, providing the `session_id` and the `request_id`. For example, to return the most recent query submitted by `session_id` 66 and `request_id` 0:
```sql
SELECT * FROM sys.dm_exec_input_buffer (66,0);
```
-* Refer to the sys.dm_exec_requests and reference the blocking_session_id column. When blocking_session_id = 0, a session is not being blocked. While sys.dm_exec_requests lists only requests currently executing, any connection (active or not) will be listed in sys.dm_exec_sessions. Build on this common join between sys.dm_exec_requests and sys.dm_exec_sessions in the next query.
+* Refer to the `blocking_session_id` column in `sys.dm_exec_requests`. When `blocking_session_id` = 0, a session is not being blocked. While `sys.dm_exec_requests` lists only requests currently executing, any connection (active or not) will be listed in `sys.dm_exec_sessions`. Build on this common join between `sys.dm_exec_requests` and `sys.dm_exec_sessions` in the next query.
-* Run this sample query to find the actively executing queries and their current SQL batch text or input buffer text, using the [sys.dm_exec_sql_text](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sql-text-transact-sql) or [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) DMVs. If the data returned by the `text` field of sys.dm_exec_sql_text is NULL, the query is not currently executing. In that case, the `event_info` field of sys.dm_exec_input_buffer will contain the last command string passed to the SQL engine. This query can also be used to identify sessions blocking other sessions, including a list of session_ids blocked per session_id.
+* Run this sample query to find the actively executing queries and their current SQL batch text or input buffer text, using the [sys.dm_exec_sql_text](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sql-text-transact-sql) or [sys.dm_exec_input_buffer](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-input-buffer-transact-sql) DMVs. If the data returned by the `text` field of `sys.dm_exec_sql_text` is NULL, the query is not currently executing. In that case, the `event_info` field of `sys.dm_exec_input_buffer` will contain the last command string passed to the SQL engine. This query can also be used to identify sessions blocking other sessions, including a list of session_ids blocked per session_id.
```sql WITH cteBL (session_id, blocking_these) AS
INNER JOIN sys.dm_exec_connections [s_ec] ON [s_ec].[session_id] = [s_tst].[sess
CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est]; ```
-* Reference [sys.dm_os_waiting_tasks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-waiting-tasks-transact-sql) that is at the thread/task layer of SQL. This returns information about what SQL wait type the request is currently experiencing. Like sys.dm_exec_requests, only active requests are returned by sys.dm_os_waiting_tasks.
+* Reference [sys.dm_os_waiting_tasks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-waiting-tasks-transact-sql) that is at the thread/task layer of SQL. This returns information about what SQL wait type the request is currently experiencing. Like `sys.dm_exec_requests`, only active requests are returned by `sys.dm_os_waiting_tasks`.
> [!NOTE] > For much more on wait types including aggregated wait stats over time, see the DMV [sys.dm_db_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-wait-stats-azure-sql-database). This DMV returns aggregate wait stats for the current database only. * Use the [sys.dm_tran_locks](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql) DMV for more granular information on what locks have been placed by queries. This DMV can return large amounts of data on a production SQL Server, and is useful for diagnosing what locks are currently held.
-Due to the INNER JOIN on sys.dm_os_waiting_tasks, the following query restricts the output from sys.dm_tran_locks only to currently blocked requests, their wait status, and their locks:
+Due to the INNER JOIN on `sys.dm_os_waiting_tasks`, the following query restricts the output from `sys.dm_tran_locks` only to currently blocked requests, their wait status, and their locks:
```sql SELECT table_name = schema_name(o.schema_id) + '.' + o.name
By examining the previous information, you can determine the cause of most block
## Analyze blocking data
-* Examine the output of the DMVs sys.dm_exec_requests and sys.dm_exec_sessions to determine the heads of the blocking chains, using blocking_these and session_id. This will most clearly identify which requests are blocked and which are blocking. Look further into the sessions that are blocked and blocking. Is there a common or root to the blocking chain? They likely share a common table, and one or more of the sessions involved in a blocking chain is performing a write operation.
+* Examine the output of the DMVs `sys.dm_exec_requests` and `sys.dm_exec_sessions` to determine the heads of the blocking chains, using `blocking_these` and `session_id`. This will most clearly identify which requests are blocked and which are blocking. Look further into the sessions that are blocked and blocking. Is there a common root to the blocking chain? They likely share a common table, and one or more of the sessions involved in a blocking chain is performing a write operation.
-* Examine the output of the DMVs sys.dm_exec_requests and sys.dm_exec_sessions for information on the SPIDs at the head of the blocking chain. Look for the following fields:
+* Examine the output of the DMVs `sys.dm_exec_requests` and `sys.dm_exec_sessions` for information on the SPIDs at the head of the blocking chain. Look for the following fields:
- `sys.dm_exec_requests.status` This column shows the status of a particular request. Typically, a sleeping status indicates that the SPID has completed execution and is waiting for the application to submit another query or batch. A runnable or running status indicates that the SPID is currently processing a query. The following table gives brief explanations of the various status values.
By examining the previous information, you can determine the cause of most block
Similarly, this field tells you the number of open transactions in this request. If this value is greater than 0, the SPID is within an open transaction and may be holding locks acquired by any statement within the transaction. - `sys.dm_exec_requests.wait_type`, `wait_time`, and `last_wait_type`
- If the `sys.dm_exec_requests.wait_type` is NULL, the request is not currently waiting for anything and the `last_wait_type` value indicates the last `wait_type` that the request encountered. For more information about `sys.dm_os_wait_stats` and a description of the most common wait types, see [sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql). The `wait_time` value can be used to determine if the request is making progress. When a query against the sys.dm_exec_requests table returns a value in the `wait_time` column that is less than the `wait_time` value from a previous query of sys.dm_exec_requests, this indicates that the prior lock was acquired and released and is now waiting on a new lock (assuming non-zero `wait_time`). This can be verified by comparing the `wait_resource` between sys.dm_exec_requests output, which displays the resource for which the request is waiting.
+ If the `sys.dm_exec_requests.wait_type` is NULL, the request is not currently waiting for anything and the `last_wait_type` value indicates the last `wait_type` that the request encountered. For more information about `sys.dm_os_wait_stats` and a description of the most common wait types, see [sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql). The `wait_time` value can be used to determine if the request is making progress. When a query against the `sys.dm_exec_requests` table returns a value in the `wait_time` column that is less than the `wait_time` value from a previous query of `sys.dm_exec_requests`, this indicates that the prior lock was acquired and released and is now waiting on a new lock (assuming non-zero `wait_time`). This can be verified by comparing the `wait_resource` between `sys.dm_exec_requests` output, which displays the resource for which the request is waiting.
- `sys.dm_exec_requests.wait_resource` This field indicates the resource that a blocked request is waiting on. The following table lists common `wait_resource` formats and their meaning:
By examining the previous information, you can determine the cause of most block
|:-|:-|:-|:-| | Table | DatabaseID:ObjectID:IndexID | TAB: 5:261575970:1 | In this case, database ID 5 is the pubs sample database and object ID 261575970 is the titles table and 1 is the clustered index. | | Page | DatabaseID:FileID:PageID | PAGE: 5:1:104 | In this case, database ID 5 is pubs, file ID 1 is the primary data file, and page 104 is a page belonging to the titles table. To identify the object_id the page belongs to, use the dynamic management function [sys.dm_db_page_info](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-page-info-transact-sql), passing in the DatabaseID, FileId, PageId from the `wait_resource`. |
- | Key | DatabaseID:Hobt_id (Hash value for index key) | KEY: 5:72057594044284928 (3300a4f361aa) | In this case, database ID 5 is Pubs, Hobt_ID 72057594044284928 corresponds to index_id 2 for object_id 261575970 (titles table). Use the sys.partitions catalog view to associate the hobt_id to a particular index_id and object_id. There is no way to unhash the index key hash to a specific key value. |
+ | Key | DatabaseID:Hobt_id (Hash value for index key) | KEY: 5:72057594044284928 (3300a4f361aa) | In this case, database ID 5 is Pubs, Hobt_ID 72057594044284928 corresponds to index_id 2 for object_id 261575970 (titles table). Use the `sys.partitions` catalog view to associate the hobt_id to a particular `index_id` and `object_id`. There is no way to unhash the index key hash to a specific key value. |
| Row | DatabaseID:FileID:PageID:Slot(row) | RID: 5:1:104:3 | In this case, database ID 5 is pubs, file ID 1 is the primary data file, page 104 is a page belonging to the titles table, and slot 3 indicates the row's position on the page. | | Compile | DatabaseID:FileID:PageID:Slot(row) | RID: 5:1:104:3 | In this case, database ID 5 is pubs, file ID 1 is the primary data file, page 104 is a page belonging to the titles table, and slot 3 indicates the row's position on the page. |
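
The `wait_resource` lookups described in the table can be sketched as follows (the IDs are the illustrative values from the table, not real objects):

```sql
-- PAGE wait_resource 5:1:104: identify the owning object
SELECT object_id, OBJECT_NAME(object_id) AS table_name
FROM sys.dm_db_page_info(5, 1, 104, 'LIMITED');

-- KEY wait_resource: map the hobt_id to its object and index
SELECT object_id, index_id
FROM sys.partitions
WHERE hobt_id = 72057594044284928;
```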
By examining the previous information, you can determine the cause of most block
, s.host_name, s.program_name, s.client_interface_name, s.login_name, s.is_user_process FROM sys.dm_tran_active_transactions tat INNER JOIN sys.dm_tran_session_transactions tst on tat.transaction_id = tst.transaction_id
- INNER JOIN Sys.dm_exec_sessions s on s.session_id = tst.session_id
+ INNER JOIN sys.dm_exec_sessions s on s.session_id = tst.session_id
LEFT OUTER JOIN sys.dm_exec_requests r on r.session_id = s.session_id CROSS APPLY sys.dm_exec_input_buffer(s.session_id, null) AS ib; ``` - Other columns
- The remaining columns in [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql) and [sys.dm_exec_request](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) can provide insight into the root of a problem as well. Their usefulness varies depending on the circumstances of the problem. For example, you can determine if the problem happens only from certain clients (hostname), on certain network libraries (net_library), when the last batch submitted by a SPID was `last_request_start_time` in sys.dm_exec_sessions, how long a request had been running using `start_time` in sys.dm_exec_requests, and so on.
+ The remaining columns in [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql) and [sys.dm_exec_request](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) can provide insight into the root of a problem as well. Their usefulness varies depending on the circumstances of the problem. For example, you can determine if the problem happens only from certain clients (hostname), on certain network libraries (net_library), when the last batch submitted by a SPID was `last_request_start_time` in `sys.dm_exec_sessions`, how long a request had been running using `start_time` in `sys.dm_exec_requests`, and so on.
## Common blocking scenarios The table below maps common symptoms to their probable causes.
-The `wait_type`, `open_transaction_count`, and `status` columns refer to information returned by [sys.dm_exec_request](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql), other columns may be returned by [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql). The "Resolves?" column indicates whether or not the blocking will resolve on its own, or whether the session should be killed via the `KILL` command. For more information, see [KILL (Transact-SQL)](/sql/t-sql/language-elements/kill-transact-sql).
+The Waittype, Open_Tran, and Status columns refer to information returned by [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql); other columns may be returned by [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql). The "Resolves?" column indicates whether or not the blocking will resolve on its own, or whether the session should be killed via the `KILL` command. For more information, see [KILL (Transact-SQL)](/sql/t-sql/language-elements/kill-transact-sql).
| Scenario | Waittype | Open_Tran | Status | Resolves? | Other Symptoms |
|:-|:-|:-|:-|:-|:-|
-| 1 | NOT NULL | >= 0 | runnable | Yes, when query finishes. | In sys.dm_exec_sessions, **reads**, **cpu_time**, and/or **memory_usage** columns will increase over time. Duration for the query will be high when completed. |
+| 1 | NOT NULL | >= 0 | runnable | Yes, when query finishes. | In `sys.dm_exec_sessions`, `reads`, `cpu_time`, and/or `memory_usage` columns will increase over time. Duration for the query will be high when completed. |
| 2 | NULL | \>0 | sleeping | No, but SPID can be killed. | An attention signal may be seen in the Extended Event session for this SPID, indicating a query time-out or cancel has occurred. |
| 3 | NULL | \>= 0 | runnable | No. Will not resolve until client fetches all rows or closes connection. SPID can be killed, but it may take up to 30 seconds. | If open_transaction_count = 0, and the SPID holds locks while the transaction isolation level is default (READ COMMITTED), this is a likely cause. |
-| 4 | Varies | \>= 0 | runnable | No. Will not resolve until client cancels queries or closes connections. SPIDs can be killed, but may take up to 30 seconds. | The **hostname** column in sys.dm_exec_sessions for the SPID at the head of a blocking chain will be the same as one of the SPID it is blocking. |
+| 4 | Varies | \>= 0 | runnable | No. Will not resolve until client cancels queries or closes connections. SPIDs can be killed, but may take up to 30 seconds. | The `hostname` column in `sys.dm_exec_sessions` for the SPID at the head of a blocking chain will be the same as one of the SPID it is blocking. |
| 5 | NULL | \>0 | rollback | Yes. | An attention signal may be seen in the Extended Events session for this SPID, indicating a query time-out or cancel has occurred, or simply a rollback statement has been issued. |
-| 6 | NULL | \>0 | sleeping | Eventually. When Windows NT determines the session is no longer active, the Azure SQL Database connection will be broken. | The `last_request_start_time` value in sys.dm_exec_sessions is much earlier than the current time. |
+| 6 | NULL | \>0 | sleeping | Eventually. When Windows NT determines the session is no longer active, the Azure SQL Database connection will be broken. | The `last_request_start_time` value in `sys.dm_exec_sessions` is much earlier than the current time. |
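To map a live incident onto the scenarios above, a quick way to find the sessions involved in a blocking chain is to query `sys.dm_exec_requests` for blocked sessions; a minimal sketch using only the DMVs referenced in this article:

```sql
-- Blocked sessions and their blockers; the head blocker appears in
-- blocking_session_id but has no blocking_session_id of its own.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.status,
       s.host_name,
       s.login_name
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
WHERE r.blocking_session_id <> 0;
```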
## Detailed blocking scenarios
The `wait_type`, `open_transaction_count`, and `status` columns refer to informa
1. Blocking caused by a sleeping SPID that has an uncommitted transaction
- This type of blocking can often be identified by a SPID that is sleeping or awaiting a command, yet whose transaction nesting level (`@@TRANCOUNT`, `open_transaction_count` from sys.dm_exec_requests) is greater than zero. This can occur if the application experiences a query time-out, or issues a cancel without also issuing the required number of
+ This type of blocking can often be identified by a SPID that is sleeping or awaiting a command, yet whose transaction nesting level (`@@TRANCOUNT`, `open_transaction_count` from `sys.dm_exec_requests`) is greater than zero. This can occur if the application experiences a query time-out, or issues a cancel without also issuing the required number of
ROLLBACK and/or COMMIT statements. When a SPID receives a query time-out or a cancel, it will terminate the current query and batch, but does not automatically roll back or commit the transaction. The application is responsible for this, as Azure SQL Database cannot assume that an entire transaction must be rolled back due to a single query being canceled. The query time-out or cancel will appear as an ATTENTION signal event for the SPID in the Extended Event session. To demonstrate an uncommitted explicit transaction, issue the following query:
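A minimal sketch of such a demonstration, using a temporary table as an illustrative stand-in for a real one:

```sql
-- First query: open an explicit transaction and leave it uncommitted.
CREATE TABLE #demo (id INT);
BEGIN TRANSACTION;
INSERT INTO #demo (id) VALUES (1);
-- No COMMIT or ROLLBACK is issued, so the transaction stays open.

-- Second query (same session): check the transaction nesting level.
SELECT @@TRANCOUNT AS transaction_nesting_level;
```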
The `wait_type`, `open_transaction_count`, and `status` columns refer to informa
The output of the second query indicates that the transaction nesting level is one. All the locks acquired in the transaction are still held until the transaction is committed or rolled back. If applications explicitly open and commit transactions, a communication or other error could leave the session and its transaction in an open state.
- Use the script earlier in this article based on sys.dm_tran_active_transactions to identify currently uncommitted transactions across the instance.
+ Use the script earlier in this article based on `sys.dm_tran_active_transactions` to identify currently uncommitted transactions across the instance.
**Resolutions**:
The `wait_type`, `open_transaction_count`, and `status` columns refer to informa
1. Blocking caused by a session in a rollback state
- A data modification query that is KILLed, or canceled outside of a user-defined transaction, will be rolled back. This can also occur as a side effect of the client network session disconnecting, or when a request is selected as the deadlock victim. This can often be identified by observing the output of sys.dm_exec_requests, which may indicate the ROLLBACK **command**, and the **percent_complete column** may show progress.
+ A data modification query that is KILLed, or canceled outside of a user-defined transaction, will be rolled back. This can also occur as a side effect of the client network session disconnecting, or when a request is selected as the deadlock victim. This can often be identified by observing the output of `sys.dm_exec_requests`, which may indicate the ROLLBACK command, and the `percent_complete` column may show progress.
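A hedged sketch of such an observation (column names are from `sys.dm_exec_requests`; the exact `command` value shown during a rollback can vary):

```sql
-- Sessions currently rolling back; percent_complete shows rollback progress.
SELECT session_id,
       command,
       status,
       percent_complete,
       estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE '%ROLLBACK%';
```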
Thanks to the [Accelerated Database Recovery feature](../accelerated-database-recovery.md) introduced in 2019, lengthy rollbacks should be rare.
azure-sql Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/glossary-terms.md
Title: Glossary of terms -+ description: A glossary of terms for working with Azure SQL Database, Azure SQL Managed Instance, and SQL on Azure VM.
Previously updated : 12/09/2020 Last updated : 5/18/2021 # Azure SQL Database glossary of terms ## Azure SQL Database
Last updated 12/09/2020
|:|:|:| |Azure service|Azure SQL Database or SQL Database|[Azure SQL Database](database/sql-database-paas-overview.md)| |Purchasing model|DTU-based purchasing model|[DTU-based purchasing model](database/service-tiers-dtu.md)|
-||vCore-based purchasing model|[vCore-based purchasing model](database/service-tiers-vcore.md)|
+||vCore-based purchasing model|[vCore-based purchasing model](database/service-tiers-sql-database-vcore.md)|
|Deployment option |Single database|[Single databases](database/single-database-overview.md)| ||Elastic pool|[Elastic pool](database/elastic-pool-overview.md)|
-|Service tier|Basic, Standard, Premium, General Purpose, Hyperscale, Business Critical|For service tiers in the vCore model, see [SQL Database service tiers](database/service-tiers-vcore.md#service-tiers). For service tiers in the DTU model, see [DTU model](database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers).|
-|Compute tier|Serverless compute|[Serverless compute](database/service-tiers-vcore.md#compute-tiers)
-||Provisioned compute|[Provisioned compute](database/service-tiers-vcore.md#compute-tiers)
-|Compute generation|Gen5, M-series, Fsv2-series, DC-series|[Hardware generations](database/service-tiers-vcore.md#hardware-generations)
+|Service tier|Basic, Standard, Premium, General Purpose, Hyperscale, Business Critical|For service tiers in the vCore model, see [SQL Database service tiers](database/service-tiers-sql-database-vcore.md#service-tiers). For service tiers in the DTU model, see [DTU model](database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers).|
+|Compute tier|Serverless compute|[Serverless compute](database/service-tiers-sql-database-vcore.md#compute-tiers)
+||Provisioned compute|[Provisioned compute](database/service-tiers-sql-database-vcore.md#compute-tiers)
+|Compute generation|Gen5, M-series, Fsv2-series, DC-series|[Hardware generations](database/service-tiers-sql-database-vcore.md#hardware-generations)
|Server entity| Server |[Logical SQL servers](database/logical-servers.md)| |Resource type|vCore|A CPU core provided to the compute resource for a single database or elastic pool. | ||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources available for a single database or elastic pool. Storage size is the maximum amount of storage available for a single database or elastic pool. For sizing options in the vCore model, see [vCore single databases](database/resource-limits-vcore-single-databases.md) and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md). For sizing options in the DTU model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
Last updated 12/09/2020
|Context|Term|More information| |:|:|:| |Azure service|Azure SQL Managed Instance|[SQL Managed Instance](managed-instance/sql-managed-instance-paas-overview.md)|
-|Purchasing model|vCore-based purchasing model|[vCore-based purchasing model](database/service-tiers-vcore.md)|
+|Purchasing model|vCore-based purchasing model|[vCore-based purchasing model](managed-instance/service-tiers-managed-instance-vcore.md)|
|Deployment option |Single Instance|[Single Instance](managed-instance/sql-managed-instance-paas-overview.md)| ||Instance pool (preview)|[Instance pools](managed-instance/instance-pools-overview.md)| |Service tier|General Purpose, Business Critical|[SQL Managed Instance service tiers](managed-instance/sql-managed-instance-paas-overview.md#service-tiers)|
-|Compute tier|Provisioned compute|[Provisioned compute](database/service-tiers-vcore.md#compute-tiers)|
-|Compute generation|Gen5|[Hardware generations](database/service-tiers-vcore.md#hardware-generations)
+|Compute tier|Provisioned compute|[Provisioned compute](managed-instance/service-tiers-managed-instance-vcore.md#compute-tiers)|
+|Compute generation|Gen5|[Hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations)
|Server entity|Managed instance or instance| N/A as the SQL Managed Instance is in itself the server | |Resource type|vCore|A CPU core provided to the compute resource for SQL Managed Instance.| ||Compute size and storage amount|Compute size is the maximum amount of CPU, memory and other non-storage related resources for SQL Managed Instance. Storage size is the maximum amount of storage available for a SQL Managed Instance. For sizing options, [SQL Managed Instances](managed-instance/resource-limits.md). |
azure-sql Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/frequently-asked-questions-faq.md
Title: Frequently asked questions (FAQ)- description: Azure SQL Managed Instance frequently asked questions (FAQ)
ms.devlang: --++ Last updated 09/21/2020
This is a current limitation on underlying component that verifies subnet name a
**How can I scale my managed instance?**
-You can scale your managed instance from [Azure portal](../database/service-tiers-vcore.md?tabs=azure-portal#selecting-a-hardware-generation), [PowerShell](/archive/blogs/sqlserverstorageengine/change-size-azure-sql-managed-instance-using-powershell), [Azure CLI](/cli/azure/sql/mi#az_sql_mi_update) or [ARM templates](/archive/blogs/sqlserverstorageengine/updating-azure-sql-managed-instance-properties-using-arm-templates).
+You can scale your managed instance from [Azure portal](../managed-instance/service-tiers-managed-instance-vcore.md?tabs=azure-portal#selecting-a-hardware-generation), [PowerShell](/archive/blogs/sqlserverstorageengine/change-size-azure-sql-managed-instance-using-powershell), [Azure CLI](/cli/azure/sql/mi#az_sql_mi_update) or [ARM templates](/archive/blogs/sqlserverstorageengine/updating-azure-sql-managed-instance-properties-using-arm-templates).
**Can I move my Managed Instance from one region to another?**
azure-sql Job Automation Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/job-automation-managed-instance.md
+
+ Title: Job automation with SQL Agent jobs
+
+description: 'Automation options to run Transact-SQL (T-SQL) scripts in Azure SQL Managed Instance'
++++
+dev_langs:
+ - TSQL
++++ Last updated : 06/03/2021+
+# Automate management tasks using SQL Agent jobs in Azure SQL Managed Instance
+
+Using [SQL Server Agent](/sql/ssms/agent/sql-server-agent) in SQL Server and [SQL Managed Instance](sql-managed-instance-paas-overview.md), you can create and schedule jobs that run periodically against one or more databases to execute Transact-SQL (T-SQL) queries and perform maintenance tasks. This article covers the use of SQL Agent for SQL Managed Instance.
+
+> [!NOTE]
+> SQL Agent is not available in Azure SQL Database or Azure Synapse Analytics. Instead, we recommend [Job automation with Elastic Jobs](../database/job-automation-overview.md).
+
+## SQL Agent job limitations in SQL Managed Instance
+
+There are differences between the SQL Agent features available in SQL Server and those available in SQL Managed Instance. For more on the supported feature differences between SQL Server and SQL Managed Instance, see [Azure SQL Managed Instance T-SQL differences from SQL Server](../../azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md#sql-server-agent).
+
+Some of the SQL Agent features that are available in SQL Server are not supported in SQL Managed Instance:
+
+- SQL Agent settings are read only.
+ - The system stored procedure `sp_set_agent_properties` is not supported.
+- Enabling/disabling SQL Agent is currently not supported. SQL Agent is always running.
+- Notifications are partially supported:
+ - Pager is not supported.
+ - NetSend is not supported.
+ - Alerts are not supported.
+- Proxies are not supported.
+- Eventlog is not supported.
+- Job schedule trigger based on an idle CPU is not supported.
+
+## When to use SQL Agent jobs
+
+There are several scenarios when you could use SQL Agent jobs:
+
+- Automate management tasks and schedule them to run every weekday, after hours, etc.
+ - Deploy schema changes, credentials management, performance data collection or tenant (customer) telemetry collection.
+ - Update reference data (information common across all databases), load data from Azure Blob storage. Microsoft recommends using [SHARED ACCESS SIGNATURE authentication to authenticate to Azure Blob storage](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage).
+ - Common maintenance tasks including `DBCC CHECKDB` to ensure data integrity or index maintenance to improve query performance. Configure jobs to execute across a collection of databases on a recurring basis, such as during off-peak hours.
+ - Collect query results from a set of databases into a central table on an on-going basis. Performance queries can be continually executed and configured to trigger additional tasks to be executed.
+- Collect data for reporting
+ - Aggregate data from a collection of databases into a single destination table.
+ - Execute longer running data processing queries across a large set of databases, for example the collection of customer telemetry. Results are collected into a single destination table for further analysis.
+- Data movements
+ - Create jobs that replicate changes made in your databases to other databases or collect updates made in remote databases and apply changes in the database.
+ - Create jobs that load data from or to your databases using SQL Server Integration Services (SSIS).
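+As a minimal, hedged sketch of the first scenario, the following T-SQL creates and schedules an index maintenance job with the msdb job stored procedures (the job, database, table, and schedule names are illustrative):
+
+```sql
+USE msdb;
+GO
+-- Create the job and a single T-SQL job step.
+EXEC dbo.sp_add_job
+    @job_name = N'Nightly index maintenance';
+EXEC dbo.sp_add_jobstep
+    @job_name = N'Nightly index maintenance',
+    @step_name = N'Rebuild indexes',
+    @subsystem = N'TSQL',
+    @database_name = N'MyDatabase',
+    @command = N'ALTER INDEX ALL ON dbo.MyTable REBUILD;';
+-- Run every day at 02:00.
+EXEC dbo.sp_add_jobschedule
+    @job_name = N'Nightly index maintenance',
+    @name = N'Nightly at 2 AM',
+    @freq_type = 4,              -- daily
+    @freq_interval = 1,          -- every 1 day
+    @active_start_time = 020000;
+-- Target the local server so SQL Agent picks the job up.
+EXEC dbo.sp_add_jobserver
+    @job_name = N'Nightly index maintenance';
+```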
+
+## SQL Agent jobs in SQL Managed Instance
+
+SQL Agent Jobs are executed by the SQL Agent service that continues to be used for task automation in SQL Server and SQL Managed Instance.
+
+SQL Agent Jobs are a specified series of T-SQL scripts against your database. Use jobs to define an administrative task that can be run one or more times and monitored for success or failure.
+
+A job can run on one local server or on multiple remote servers. SQL Agent Jobs are an internal Database Engine component that is executed within the SQL Managed Instance service.
+
+There are several key concepts in SQL Agent Jobs:
+
+- **Job steps** are the set of one or more actions that should be executed within the job. For every job step, you can define a retry strategy and the action that should happen if the job step succeeds or fails.
+- **Schedules** define when the job should be executed.
+- **Notifications** enable you to define rules that will be used to notify operators via email once the job completes.
+
+### SQL Agent job steps
+
+SQL Agent job steps are sequences of actions that SQL Agent should execute. Every step defines the step that should be executed next if the step succeeds or fails, and the number of retries in case of failure.
+
+SQL Agent enables you to create different types of job steps, such as Transact-SQL job steps that execute a single Transact-SQL batch against the database, or OS command/PowerShell steps that can execute custom OS script, [SSIS job steps](../../data-factory/how-to-invoke-ssis-package-managed-instance-agent.md) that enable you to load data using SSIS runtime, or [replication](../managed-instance/replication-transactional-overview.md) steps that can publish changes from your database to other databases.
+
+> [!NOTE]
+> For more information on leveraging the Azure SSIS Integration Runtime with SSISDB hosted by SQL Managed Instance, see [Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory](../../data-factory/how-to-use-sql-managed-instance-with-ir.md).
+
+[Transactional replication](../managed-instance/replication-transactional-overview.md) can replicate the changes from your tables into other databases in SQL Managed Instance, Azure SQL Database, or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](../../azure-sql/managed-instance/replication-between-two-instances-configure-tutorial.md).
+
+Other types of job steps are not currently supported in SQL Managed Instance, including:
+
+- Merge replication job step is not supported.
+- Queue Reader is not supported.
+- Analysis Services job steps are not supported.
+
+### SQL Agent job schedules
+
+A schedule specifies when a job runs. More than one job can run on the same schedule, and more than one schedule can apply to the same job.
+
+A schedule can define the following conditions for the time when a job runs:
+
+- Whenever SQL Server Agent starts. The job is activated after every failover.
+- One time, at a specific date and time, which is useful for the delayed execution of a job.
+- On a recurring schedule.
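+Because more than one job can share a schedule, a schedule can be created once and attached to several jobs. A hedged sketch using the msdb scheduling procedures (the job and schedule names are illustrative):
+
+```sql
+USE msdb;
+GO
+-- Create a shared weekday schedule. For weekly schedules, freq_interval is a
+-- bitmask of days: Mon(2)+Tue(4)+Wed(8)+Thu(16)+Fri(32) = 62.
+EXEC dbo.sp_add_schedule
+    @schedule_name = N'Weekdays at 6 PM',
+    @freq_type = 8,              -- weekly
+    @freq_interval = 62,         -- Monday through Friday
+    @freq_recurrence_factor = 1, -- every week
+    @active_start_time = 180000;
+-- Attach the same schedule to two existing jobs.
+EXEC dbo.sp_attach_schedule
+    @job_name = N'Collect telemetry',
+    @schedule_name = N'Weekdays at 6 PM';
+EXEC dbo.sp_attach_schedule
+    @job_name = N'Refresh reference data',
+    @schedule_name = N'Weekdays at 6 PM';
+```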
+
+> [!NOTE]
+> SQL Managed Instance currently does not enable you to start a job when the CPU is idle.
+
+### SQL Agent job notifications
+
+SQL Agent Jobs enable you to get notifications when the job finishes successfully or fails. You can receive notifications via email.
+
+If it isn't already enabled, first configure [the Database Mail feature](/sql/relational-databases/database-mail/database-mail) on SQL Managed Instance:
+
+```sql
+EXEC sp_configure 'show advanced options', 1;
+GO
+RECONFIGURE;
+GO
+EXEC sp_configure 'Database Mail XPs', 1;
+GO
+RECONFIGURE
+```
+
+As an example exercise, set up the email account that will be used to send the email notifications, and assign it to an email profile named `AzureManagedInstance_dbmail_profile`. To send email from SQL Agent jobs in SQL Managed Instance, the profile must be named `AzureManagedInstance_dbmail_profile`; otherwise, SQL Managed Instance will be unable to send emails via SQL Agent. See the following sample:
+
+```sql
+-- Create a Database Mail account
+EXECUTE msdb.dbo.sysmail_add_account_sp
+ @account_name = 'SQL Agent Account',
+ @description = 'Mail account for Azure SQL Managed Instance SQL Agent system.',
+ @email_address = '$(loginEmail)',
+ @display_name = 'SQL Agent Account',
+ @mailserver_name = '$(mailserver)' ,
+ @username = '$(loginEmail)' ,
+ @password = '$(password)';
+
+-- Create a Database Mail profile
+EXECUTE msdb.dbo.sysmail_add_profile_sp
+ @profile_name = 'AzureManagedInstance_dbmail_profile',
+ @description = 'E-mail profile used for messages sent by Managed Instance SQL Agent.';
+
+-- Add the account to the profile
+EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
+ @profile_name = 'AzureManagedInstance_dbmail_profile',
+ @account_name = 'SQL Agent Account',
+ @sequence_number = 1;
+```
+
+Test the Database Mail configuration via T-SQL using the [sp_send_db_mail](/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) system stored procedure:
+
+```sql
+DECLARE @body VARCHAR(4000) = 'The email is sent from ' + @@SERVERNAME;
+EXEC msdb.dbo.sp_send_dbmail
+ @profile_name = 'AzureManagedInstance_dbmail_profile',
+ @recipients = 'ADD YOUR EMAIL HERE',
+ @body = @body,
+ @subject = 'Azure SQL Instance - test email';
+```
+
+You can notify the operator that something happened with your SQL Agent jobs. An operator defines contact information for an individual responsible for the maintenance of one or more instances in SQL Managed Instance. Sometimes, operator responsibilities are assigned to one individual.
+
+In systems with multiple instances in SQL Managed Instance or SQL Server, many individuals can share operator responsibilities. An operator does not contain security information, and does not define a security principal. Ideally, an operator is not an individual whose responsibilities may change, but an email distribution group.
+
+You can [create operators](/sql/relational-databases/system-stored-procedures/sp-add-operator-transact-sql) using SQL Server Management Studio (SSMS) or the Transact-SQL script shown in the following example:
+
+```sql
+EXEC msdb.dbo.sp_add_operator
+ @name=N'AzureSQLTeam',
+ @enabled=1,
+ @email_address=N'AzureSQLTeam@contoso.com';
+```
+
+Confirm the email's success or failure via the [Database Mail Log](/sql/relational-databases/database-mail/database-mail-log-and-audits) in SSMS.
+
+You can then [modify any SQL Agent job](/sql/relational-databases/system-stored-procedures/sp-update-job-transact-sql) and assign operators that will be notified via email if the job completes, fails, or succeeds using SSMS or the following Transact-SQL script:
+
+```sql
+EXEC msdb.dbo.sp_update_job @job_name=N'Load data using SSIS',
+ @notify_level_email=3, -- Options are: 1 on succeed, 2 on failure, 3 on complete
+ @notify_email_operator_name=N'AzureSQLTeam';
+```
+
+### SQL Agent job history
+
+SQL Managed Instance currently doesn't allow you to change any SQL Agent properties because they are stored in the underlying registry values. This means the Agent retention policy for job history records is fixed at the default of 1,000 total records, with a maximum of 100 history records per job.
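+Within those limits, job history can still be inspected, for example with the msdb system stored procedure [sp_help_jobhistory](/sql/relational-databases/system-stored-procedures/sp-help-jobhistory-transact-sql) (the job name is illustrative):
+
+```sql
+-- Outcome of recent runs of one job; omit @job_name to list all jobs.
+EXEC msdb.dbo.sp_help_jobhistory
+    @job_name = N'Nightly index maintenance',
+    @mode = N'FULL';
+```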
+
+### SQL Agent fixed database role membership
+
+If users linked to non-sysadmin logins are added to any of the three SQL Agent fixed database roles in the msdb system database, explicit EXECUTE permissions need to be granted to three system stored procedures in the master database. Otherwise, the error message `The EXECUTE permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)` will be shown.
+
+Once you add users to a SQL Agent fixed database role (SQLAgentUserRole, SQLAgentReaderRole, or SQLAgentOperatorRole) in msdb, for each of the user's logins added to these roles, execute the below T-SQL script to explicitly grant EXECUTE permissions to the system stored procedures listed. This example assumes that the user name and login name are the same:
+
+```sql
+USE [master]
+GO
+CREATE USER [login_name] FOR LOGIN [login_name];
+GO
+GRANT EXECUTE ON master.dbo.xp_sqlagent_enum_jobs TO [login_name];
+GRANT EXECUTE ON master.dbo.xp_sqlagent_is_starting TO [login_name];
+GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name];
+```
+
+## Learn more
+
+- [What is Azure SQL Managed Instance?](../managed-instance/sql-managed-instance-paas-overview.md)
+- [What's new in Azure SQL Database & SQL Managed Instance?](../../azure-sql/database/doc-changes-updates-release-notes.md?tabs=managed-instance)
+- [Azure SQL Managed Instance T-SQL differences from SQL Server](../../azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md#sql-server-agent)
+- [Features comparison: Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/features-comparison.md)
azure-sql Service Tiers Managed Instance Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/service-tiers-managed-instance-vcore.md
+
+ Title: vCore purchase model
+description: The vCore purchasing model lets you independently scale compute and storage resources, match on-premises performance, and optimize price for Azure SQL Managed Instance.
+++++++ Last updated : 05/18/2021+
+# Azure SQL Managed Instance - Compute Hardware in the vCore Service Tier
+
+This article reviews the vCore purchase model for [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md). For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](../database/purchasing-models.md).
+
+The virtual core (vCore) purchase model used by Azure SQL Managed Instance has the following characteristics:
+
+- Control over the hardware generation to better match compute and memory requirements of the workload.
+- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md).
+- Greater transparency in the hardware details that power the compute, which facilitates planning for migrations from on-premises deployments.
+- [Reserved instance pricing](../database/reserved-capacity-overview.md) is only available for the vCore purchase model.
+
+## <a id="compute-tiers"></a>Service tiers
+
+Service tier options in the vCore purchase model include General Purpose and Business Critical. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
+
+|**Use case**|**General Purpose**|**Business Critical**|
+||||
+|Best for|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance.|
+|Storage|Uses remote storage. 32 GB - 8 TB |Uses local SSD storage. 32 GB - 4 TB |
+|IOPS and throughput (approximate)|See [Overview Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#service-tier-characteristics).|See [Overview Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#service-tier-characteristics).|
+|Availability|1 replica, no read-scale replicas|3 replicas, 1 [read-scale replica](../database/read-scale-out.md),<br/>zone-redundant high availability (HA)|
+|Backups|[Read-access geo-redundant storage (RA-GRS)](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|[RA-GRS](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|
+|In-memory|Not supported|Supported|
+||||
+
+### Choosing a service tier
+
+For information on selecting a service tier for your particular workload, see the following articles:
+
+- [When to choose the General Purpose service tier](../database/service-tier-general-purpose.md#when-to-choose-this-service-tier)
+- [When to choose the Business Critical service tier](../database/service-tier-business-critical.md#when-to-choose-this-service-tier)
+
+## Compute
+
+SQL Managed Instance compute provides a specific amount of compute resources that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour.
+
+## Hardware generations
+
+Hardware generation options in the vCore model include the Gen5 hardware series. The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload.
+
+### Compute and memory specifications
+
+|Hardware generation |Compute |Memory |
+|:|:|:|
+|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
+|Gen5 |- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5-GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)|- 5.1 GB per vCore<br>- Provision up to 408 GB|
+
+\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for instances using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, while hardware generation for instances using Intel&reg; 8272CL (Cascade Lake) appears as Gen7. Resource limits for all Gen5 instances are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
+
+### Selecting a hardware generation
+
+In the Azure portal, you can select the hardware generation at the time of creation, or you can change the hardware generation of an existing SQL Managed Instance.
+
+**To select a hardware generation when creating a SQL Managed Instance**
+
+For detailed information, see [Create a SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
+
+On the **Basics** tab, select the **Configure database** link in the **Compute + storage** section, and then select the desired hardware generation:
+
+
+**To change the hardware generation of an existing SQL Managed Instance**
+
+#### [The Azure portal](#tab/azure-portal)
+
+From the SQL Managed Instance page, select the **Pricing tier** link under the **Settings** section.
++
+On the Pricing tier page, you can change the hardware generation as described in the previous steps.
+
+#### [PowerShell](#tab/azure-powershell)
+
+Use the following PowerShell script:
+
+```powershell-interactive
+Set-AzSqlInstance -Name "managedinstance1" -ResourceGroupName "ResourceGroup01" -ComputeGeneration Gen5
+```
+
+For more details, see the [Set-AzSqlInstance](/powershell/module/az.sql/set-azsqlinstance) command.
+
+#### [The Azure CLI](#tab/azure-cli)
+
+Use the following CLI command:
+
+```azurecli-interactive
+az sql mi update -g mygroup -n myinstance --family Gen5
+```
+
+For more details, see the [az sql mi update](/cli/azure/sql/mi#az_sql_mi_update) command.
+++
+### Hardware availability
+
+#### <a id="gen4gen5-1"></a> Gen4/Gen5
+
+Gen4 hardware is [being phased out](https://azure.microsoft.com/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/) and is no longer available for new deployments. All new instances must be deployed on Gen5 hardware.
+
+Gen5 is available in all public regions worldwide.
+
+## Next steps
+
+- To get started, see [Creating a SQL Managed Instance using the Azure portal](instance-create-quickstart.md)
+- For pricing details, see
+ - [Azure SQL Managed Instance single instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/single/)
+ - [Azure SQL Managed Instance pools pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/pools/)
+- For details about the specific compute and storage sizes available in the general purpose and business critical service tiers, see [vCore-based resource limits for Azure SQL Managed Instance](resource-limits.md).
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
Operations:
- The `OPENDATASOURCE` function can be used to execute queries only on SQL Server instances. They can be either managed, on-premises, or in virtual machines. Only the `SQLNCLI`, `SQLNCLI11`, and `SQLOLEDB` values are supported as a provider. An example is `SELECT * FROM OPENDATASOURCE('SQLNCLI', '...').AdventureWorks2012.HumanResources.Employee`. See [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql). - Linked servers cannot be used to read files (Excel, CSV) from the network shares. Try to use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#e-importing-data-from-a-csv-file), [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#g-accessing-data-from-a-csv-file-with-a-format-file) that reads CSV files from Azure Blob Storage, or a [linked server that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/). Track this request on the [SQL Managed Instance Feedback item](https://feedback.azure.com/forums/915676-sql-managed-instance/suggestions/35657887-linked-server-to-non-sql-sources)|
-Linkeds servers on Azure SQL Managed Instance support only SQL authentication. AAD authentication is not supported yet.
+Linked servers on Azure SQL Managed Instance support only SQL authentication. AAD authentication is not supported yet.
### PolyBase
azure-video-analyzer Configure Signal Gate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/configure-signal-gate.md
Correlation IDs are set for every event. These IDs are set from the initial even
* **activationEvaluationWindow**: 0 seconds to 10 seconds * **activationSignalOffset**: -1 minute to 1 minute
-* **minimumActivationTime**: 1 second to 1 hour
-* **maximumActivationTime**: 1 second to 1 hour
+* **minimumActivationTime**: 10 seconds to 1 hour
+* **maximumActivationTime**: 10 seconds to 1 hour
In the use case, you would set the parameters as follows:
azure-video-analyzer Use Intel Grpc Video Analytics Serving Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-intel-grpc-video-analytics-serving-tutorial.md
If you open the [pipeline topology](https://raw.githubusercontent.com/Azure/vide
1. Edit the *operations.json* file: * Change the link to the live pipeline topology:
- `"topologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/grpcExtensionOpenVINO/topology.json"`
+ `"pipelineTopologyUrl" : "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/grpcExtensionOpenVINO/topology.json"`
* Under `pipelineTopologySet`, edit the name of the live pipeline topology to match the value in the preceding link:
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
If you want to use NSX-T to host your DHCP server, you'll create a DHCP server a
1. Select **DHCP** for the **Server Type**, provide the server name and IP address, and then select **Save**.
- :::image type="content" source="./media/manage-dhcp/dhcp-server-settings.png" alt-text="add DHCP server" border="true":::
+ :::image type="content" source="./media/manage-dhcp/dhcp-server-settings.png" alt-text="Screenshot showing how to add a DHCP server in NSX-T Manager." border="true":::
1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
- :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="select the gateway to use" border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway.png" alt-text="Screenshot showing how to edit the Tier-1 Gateway for using a DHCP server." border="true":::
1. Select **No IP Allocation Set** to add a subnet.
- :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="add a subnet" border="true":::
+ :::image type="content" source="./media/manage-dhcp/add-subnet.png" alt-text="Screenshot showing how to add a subnet to the Tier-1 Gateway for using a DHCP server." border="true":::
1. For **Type**, select **DHCP Local Server**.
When you create a relay to a DHCP server, you'll also specify the DHCP IP addres
1. Select **Set Subnets** to specify the DHCP IP address for the subnet.
- :::image type="content" source="./media/manage-dhcp/network-segments.png" alt-text="network segments" border="true":::
+ :::image type="content" source="./media/manage-dhcp/network-segments.png" alt-text="Screenshot showing how to set the subnets to specify the DHCP IP address for using a DHCP server." border="true":::
1. Modify the gateway IP address if needed, and enter the DHCP range IP.
- :::image type="content" source="./media/manage-dhcp/edit-subnet.png" alt-text="edit subnets" border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-subnet.png" alt-text="Screenshot showing the gateway IP address and DHCP ranges for using a DHCP server." border="true":::
1. Select **Apply**, and then **Save**. The segment is assigned a DHCP server pool.
- :::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="DHCP server pool assigned to segment" border="true":::
+ :::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="Screenshot showing that the DHCP server pool assigned to segment for using a DHCP server." border="true":::
## Use a third-party external DHCP server
Use a DHCP relay for any non-NSX based DHCP service. For example, a VM running D
1. Select **DHCP Relay** for the **Server Type**, provide the server name and IP address, and then select **Save**.
- :::image type="content" source="./media/manage-dhcp/create-dhcp-relay.png" alt-text="create dhcp relay service" border="true":::
+ :::image type="content" source="./media/manage-dhcp/create-dhcp-relay.png" alt-text="Screenshot showing how to create a DHCP relay service in NSX-T Manager." border="true":::
1. Select **Tier 1 Gateways**, select the vertical ellipsis on the Tier-1 gateway, and then select **Edit**.
- :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway-relay.png" alt-text="edit tier 1 gateway" border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-tier-1-gateway-relay.png" alt-text="Screenshot showing how to edit the Tier-1 Gateway." border="true":::
1. Select **No IP Allocation Set** to define the IP address allocation.
- :::image type="content" source="./media/manage-dhcp/edit-ip-address-allocation.png" alt-text="edit ip address allocation" border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-ip-address-allocation.png" alt-text="Screenshot showing how to add a subnet to the Tier-1 Gateway." border="true":::
1. For **Type**, select **DHCP Server**.
When you create a relay to a DHCP server, you'll also specify the DHCP IP addres
1. Select **Set Subnets** to specify the DHCP IP address for the subnet.
- :::image type="content" source="./media/manage-dhcp/network-segments.png" alt-text="network segments" border="true":::
+ :::image type="content" source="./media/manage-dhcp/network-segments.png" alt-text="Screenshot showing how to set the subnets to specify the DHCP IP address." border="true":::
1. Modify the gateway IP address if needed, and enter the DHCP range IP.
- :::image type="content" source="./media/manage-dhcp/edit-subnet.png" alt-text="edit subnets" border="true":::
+ :::image type="content" source="./media/manage-dhcp/edit-subnet.png" alt-text="Screenshot showing the gateway IP address and DHCP ranges." border="true":::
1. Select **Apply**, and then **Save**. The segment is assigned a DHCP server pool.
- :::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="DHCP server pool assigned to segment" border="true":::
-
+ :::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="Screenshot showing that the DHCP server pool assigned to segment." border="true":::
## Next steps
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
You'll need an Azure account in an Azure subscription. The Azure subscription mu
## Request host quota for EA customers
-1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information:
- **Issue type:** Technical - **Subscription:** Select your subscription - **Service:** All services > Azure VMware Solution
You'll need an Azure account in an Azure subscription. The Azure subscription mu
- **Problem type:** Capacity Management Issues - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
-1. In the **Description** of the support ticket, on the **Details** tab, provide:
+1. In the **Description** of the support ticket, on the **Details** tab, provide information for:
- POC or Production - Region Name
You'll need an Azure account in an Azure subscription. The Azure subscription mu
- Any other details >[!NOTE]
- >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud and for redundancy N+1 hosts.
+ >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
1. Select **Review + Create** to submit the request.
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
1. Expand customer details and select **Microsoft Azure Management Portal**.
- 1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
+ 1. In Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information:
- **Issue type:** Technical - **Subscription:** Select your subscription - **Service:** All services > Azure VMware Solution
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
- **Problem type:** Capacity Management Issues - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
- 1. In the **Description** of the support ticket, on the **Details** tab, provide:
+ 1. In the **Description** of the support ticket, on the **Details** tab, provide information for:
- POC or Production - Region Name
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
- Is intended to host multiple customers? >[!NOTE]
- >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud and for redundancy N+1 hosts.
+ >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
1. Select **Review + Create** to submit the request.
azure-web-pubsub Concept Billing Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/concept-billing-model.md
For billing, only the outbound traffic is counted.
For example, imagine you have an application with Azure Web PubSub service and Azure Functions. One user broadcasts 4 KB of data to 10 connections in a group. That results in 4 KB of upstream traffic from the service to the function and 40 KB of traffic from the service broadcasting to the 10 connections.
-> Outbound traffic for billing = 4 KB + 40 KB = 44 KB
+> Outbound traffic for billing = 4 KB (upstream traffic) + 4 KB * 10 (service broadcasting to clients traffic) = 44 KB
> Equivalent message count = 44 KB / 2 KB = 22
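The arithmetic above can be sketched in Python as a sanity check; the 2-KB divisor for the equivalent message count is the one stated in the example:

```python
# Sketch of the outbound-traffic billing arithmetic from the example above.
message_kb = 4          # data broadcast by one user, in KB
connections = 10        # connections in the group

upstream_kb = message_kb                  # service -> function (upstream)
broadcast_kb = message_kb * connections   # service -> each client connection
outbound_kb = upstream_kb + broadcast_kb  # only outbound traffic is billed

equivalent_messages = outbound_kb // 2    # counted in 2-KB message units
```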
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Title: Archive Tier support (Preview) description: Learn about Archive Tier Support for Azure Backup Previously updated : 05/24/2021 Last updated : 06/03/2021
Supported clients:
`$bckItm = $BackupItemList | Where-Object {$_.Name -match '<dbName>' -and $_.ContainerName -match '<vmName>'}`
-1. Add the date range for which you want to view the recovery points. For example, if you want to view the recovery points from the last 60 days to last 30 days, use the following command:
+1. Add the date range for which you want to view the recovery points. For example, if you want to view the recovery points from the last 124 days to last 95 days, use the following command:
```azurepowershell
- $startDate = (Get-Date).AddDays(-59)
- $endDate = (Get-Date).AddDays(-30)
+ $startDate = (Get-Date).AddDays(-124)
+ $endDate = (Get-Date).AddDays(-95)
``` >[!NOTE]
- >The span of the start date and the end date should not be more than 30 days.
+ >The span of the start date and the end date should not be more than 30 days.<br><br>To view recovery points for a different time range, modify the start and the end date accordingly.
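The 30-day window rule can be illustrated with a small hypothetical helper; the function name and shape are for illustration only and are not part of the Az.RecoveryServices cmdlets:

```python
from datetime import date, timedelta

def lookback_window(days_back_start: int, days_back_end: int,
                    max_span_days: int = 30) -> tuple:
    """Return (start, end) dates for a lookback window, enforcing the
    service's rule that the span must not exceed max_span_days."""
    start = date.today() - timedelta(days=days_back_start)
    end = date.today() - timedelta(days=days_back_end)
    span = (end - start).days
    if span < 0 or span > max_span_days:
        raise ValueError(f"span of {span} days violates the {max_span_days}-day limit")
    return start, end

# Matches the PowerShell example: last 124 days to last 95 days (29-day span).
start, end = lookback_window(124, 95)
```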
## Use PowerShell ### Check archivable recovery points
$RecommendedRecoveryPointList = Get-AzRecoveryServicesBackupRecommendedArchivabl
### Move to archive ```azurepowershell
-Move-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -RecoveryPoint $rp[2] -SourceTier VaultStandard -DestinationTier VaultArchive
+Move-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -RecoveryPoint $rp[0] -SourceTier VaultStandard -DestinationTier VaultArchive
```
+Here, `$rp[0]` is the first recovery point in the list. To move other recovery points, use `$rp[1]`, `$rp[2]`, and so on.
+ This command moves an archivable recovery point to archive. It returns a job that can be used to track the move operation both from portal and with PowerShell. ### View archived recovery points
This command moves an archivable recovery point to archive. It returns a job tha
This command returns all the archived recovery points. ```azurepowershell
-$rp = Get-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -Item $bckItm -Tier VaultArchive -StartDate $startdate.ToUniversalTime() -EndDate $enddate.ToUniversalTime
+$rp = Get-AzRecoveryServicesBackupRecoveryPoint -VaultId $vault.ID -Item $bckItm -Tier VaultArchive -StartDate $startdate.ToUniversalTime() -EndDate $enddate.ToUniversalTime()
``` ### Restore with PowerShell
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-restore-files-from-vm.md
If you run the script on a computer with restricted access, ensure there's acces
> [!NOTE] >
-> The script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) will have the **geo-name** in the name of the file. Use that **geo-name** to fill in the URL. The downloaded script name will begin with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
-> So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be:<br> <https://pod01-rec2.wcus.backup.windowsazure.com>
+> If the backed-up VM is Windows, the **geo-name** will be included in the generated password.<br><br>
+> For example, if the generated password is *ContosoVM_wcus_GUID*, the **geo-name** is *wcus* and the URL would be: <https://pod01-rec2.wcus.backup.windowsazure.com><br><br>
>
+>
+> If the backed-up VM is Linux, the script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) will have the **geo-name** in the name of the file. Use that **geo-name** to fill in the URL. The downloaded script name will begin with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
+> So for example, if the script filename is *ContosoVM_wcus_12345678*, the **geo-name** is *wcus* and the URL would be: <https://pod01-rec2.wcus.backup.windowsazure.com><br><br>
+>
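The `'VMname'_'geoname'_'GUID'` naming convention can be parsed mechanically. The sketch below is illustrative only, not an official tool, and the `pod01-rec2` host prefix is taken from the example URL above (yours may differ):

```python
def recovery_url_from_script_name(script_name: str) -> str:
    """Extract the geo-name from a 'VMname_geoname_GUID' script file name
    and build the recovery-service URL shown in the note above."""
    parts = script_name.split("_")
    if len(parts) < 3:
        raise ValueError("expected a 'VMname_geoname_GUID' file name")
    geo = parts[-2]   # geo-name sits between the VM name and the GUID
    return f"https://pod01-rec2.{geo}.backup.windowsazure.com"
```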
+ For Linux, the script requires 'open-iscsi' and 'lshw' components to connect to the recovery point. If the components don't exist on the computer where the script is run, the script asks for permission to install the components. Provide consent to install the necessary components.
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-automation.md
$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -Workl
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.ID ````
+### Resume backup
+
+If protection was stopped and the backup data retained, you can resume protection. You have to assign a policy for the renewed protection. The cmdlet is the same as the one used to [change the policy of backup items](#change-policy-for-backup-items).
+
+````powershell
+$TargetPol1 = Get-AzRecoveryServicesBackupProtectionPolicy -Name <PolicyName> -VaultId $targetVault.ID
+$anotherBkpItem = Get-AzRecoveryServicesBackupItem -WorkloadType AzureVM -BackupManagementType AzureVM -Name "<BackupItemName>" -VaultId $targetVault.ID
+Enable-AzRecoveryServicesBackupProtection -Item $anotherBkpItem -Policy $TargetPol1 -VaultId $targetVault.ID
+````
#### Delete backup data

To completely remove the stored backup data in the vault, add the '-RemoveRecoveryPoints' flag/switch to the ['disable' protection command](#retain-data).

````powershell
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.ID -RemoveRecoveryPoints
````

## Restore an Azure VM
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 08/30/2019 Last updated : 06/02/2021 # Troubleshooting backup failures on Azure virtual machines
This section covers backup operation failure of Azure Virtual machine.
* Here is an example of an Event Viewer error 517 where Azure Backup was working fine but "Windows Server Backup" was failing: ![Windows Server Backup failing](media/backup-azure-vms-troubleshoot/windows-server-backup-failing.png) * If Azure Backup is failing, then look for the corresponding Error Code in the section Common VM backup errors in this article.
+ * If the **Azure Backup** option is greyed out on an Azure VM, hover over the disabled menu to find the reason. The reason could be "Not available with EphemeralDisk" or "Not available with Ultra Disk".
+ ![Reasons for the disablement of Azure Backup option](media/backup-azure-vms-troubleshoot/azure-backup-disable-reasons.png)
## Common issues
backup Backup During Vm Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-during-vm-creation.md
Title: Enable backup when you create an Azure VM description: Describes how to enable backup when you create an Azure VM with Azure Backup. Previously updated : 06/13/2019 Last updated : 06/03/2021 # Enable backup when you create an Azure VM
The Backup service creates a separate resource group (RG), different than the re
Points to note:
-1. You can either use the default name of the RG, or edit it according to your company requirements.
-2. You provide the RG name pattern as input during VM backup policy creation. The RG name should be of the following format:
+1. You can either use the default name of the RG, or edit it according to your company requirements.<br>If you haven't created an RG, follow these steps to specify an RG for the restore point collection:
+   1. Create an RG for the restore point collection. For example, "rpcrg".
+   1. Specify the name of the RG in the VM backup policy.
+   >[!NOTE]
+   >This will create an RG with a numeric suffix appended and will use it for the restore point collection.
+1. You provide the RG name pattern as input during VM backup policy creation. The RG name should be of the following format:
`<alpha-numeric string>* n <alpha-numeric string>`. 'n' is replaced with an integer (starting from 1) and is used for scaling out if the first RG is full. One RG can have a maximum of 600 RPCs today. ![Choose name when creating policy](./media/backup-during-vm-creation/create-policy.png)
-3. The pattern should follow the RG naming rules below and the total length shouldn't exceed the maximum allowed RG name length.
+1. The pattern should follow the RG naming rules below and the total length shouldn't exceed the maximum allowed RG name length.
 1. Resource group names only allow alphanumeric characters, periods, underscores, hyphens, and parentheses. They can't end in a period. 2. Resource group names can contain up to 74 characters, including the name of the RG and the suffix.
-4. The first `<alpha-numeric-string>` is mandatory while the second one after 'n' is optional. This applies only if you give a customized name. If you don't enter anything in either of the textboxes, the default name is used.
-5. You can edit the name of the RG by modifying the policy if and when required. If the name pattern is changed, new RPs will be created in the new RG. However, the old RPs will still reside in the old RG and won't be moved, as RP Collection doesn't support resource move. Eventually the RPs will get garbage collected as the points expire.
+1. The first `<alpha-numeric-string>` is mandatory while the second one after 'n' is optional. This applies only if you give a customized name. If you don't enter anything in either of the textboxes, the default name is used.
+1. You can edit the name of the RG by modifying the policy if and when required. If the name pattern is changed, new RPs will be created in the new RG. However, the old RPs will still reside in the old RG and won't be moved, as RP Collection doesn't support resource move. Eventually the RPs will get garbage collected as the points expire.
![Change name when modifying policy](./media/backup-during-vm-creation/modify-policy.png)
-6. It's advised not to lock the resource group created for use by the Backup service.
+1. It's advised not to lock the resource group created for use by the Backup service.
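The naming rules listed above can be sketched as a small validator. This is illustrative only; the regular expression mirrors the allowed characters, the 74-character limit, and the no-trailing-period rule stated above, and `rg_name_for_index` is a hypothetical helper showing the `<prefix>n<suffix>` scale-out pattern:

```python
import re

# Allowed: alphanumerics, periods, underscores, hyphens, parentheses;
# must not end in a period; total length (name + suffix) up to 74 chars.
_RG_NAME = re.compile(r"^[A-Za-z0-9._()\-]+$")

def is_valid_rg_name(name: str) -> bool:
    return (
        0 < len(name) <= 74
        and bool(_RG_NAME.match(name))
        and not name.endswith(".")
    )

def rg_name_for_index(prefix: str, n: int, suffix: str = "") -> str:
    """Build the scale-out RG name '<prefix><n><suffix>', where n starts
    at 1 and increments once an RG is full (max 600 restore point
    collections per RG, per the limits above)."""
    return f"{prefix}{n}{suffix}"
```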
To configure the Azure Backup resource group for Virtual Machines using PowerShell, refer to [Creating Azure Backup resource group during snapshot retention](backup-azure-vms-automation.md#creating-azure-backup-resource-group-during-snapshot-retention).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 05/17/2021 Last updated : 06/02/2021
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | As of November 23, 2020, supported only in the Korea Central (KRC) and South Africa North (SAN) regions for a limited number of subscriptions. For those supported subscriptions, Azure Backup will back up the virtual machines having disks that are Write Accelerated (WA) enabled during backup.<br><br>For the unsupported regions, internet connectivity is required on the VM to take snapshots of Virtual Machines with WA enabled.<br><br> **Important note**: In those unsupported regions, virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
+Disks with Write Accelerator enabled | As of November 23, 2020, supported only in the Korea Central (KRC) and South Africa North (SAN) regions for a limited number of subscriptions (limited preview). For those supported subscriptions, Azure Backup will back up the virtual machines having disks that are Write Accelerated (WA) enabled during backup.<br><br>For the unsupported regions, internet connectivity is required on the VM to take snapshots of Virtual Machines with WA enabled.<br><br> **Important note**: In those unsupported regions, virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Add disk to protected VM | Supported. Resize disk on protected VM | Supported.
Shared storage| Backing up VMs using Cluster Shared Volume (CSV) or Scale-Out Fi
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported. Ultra SSD disks | Not supported. For more information, see these [limitations](selective-disk-backup-restore.md#limitations). [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Temporary disks aren't backed up by Azure Backup.
-NVMe/ephemeral disks | Not supported.
+NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported.
## VM network support
backup Quick Backup Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/quick-backup-vm-powershell.md
Run an on-demand backup job as follows:
2. When the job status is **Completed**, the VM is protected and has a full recovery point stored.
+## Manage VM backups
+
+If you want to perform more actions, such as changing or editing a policy, see the [manage VM backups section](backup-azure-vms-automation.md#manage-azure-vm-backups).
+ ## Clean up the deployment If you no longer need to back up the VM, you can clean it up.
blockchain Hyperledger Fabric Consortium Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/templates/hyperledger-fabric-consortium-azure-kubernetes-service.md
After reading this article, you will:
- Have a working knowledge of Hyperledger Fabric and the components that form the building blocks of a Hyperledger Fabric blockchain network. - Know how to deploy and configure a Hyperledger Fabric consortium network on Azure Kubernetes Service for your production scenarios.
+>[!IMPORTANT]
+>
+>The template supports Azure Kubernetes Service version 1.18.x and below only. Due to the recent [update in Kubernetes](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/) that changes the underlying runtime environment from Docker to containerd, the chaincode containers will not be functional; customers will have to move to running external chaincode as a service, which is possible on HLF 2.2.x only. While AKS v1.18.x is supported by Azure, you can deploy this template by following the steps [here](https://github.com/Azure/Hyperledger-Fabric-on-Azure-Kubernetes-Service).
++ [!INCLUDE [Preview note](./includes/preview.md)] ## Choose an Azure Blockchain solution
cloud-services-extended-support Cses Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/cses-support-help.md
+
+ Title: Azure Cloud Services (extended support) support and help options
+description: How to obtain help and support for questions or problems when you create solutions using Azure Cloud Services (extended support).
++++ Last updated : 4/28/2021+++
+# Support and troubleshooting for Azure Cloud Services (extended support)
+
+Here are suggestions for where you can get help when developing your Azure Cloud Services (extended support) solutions.
+
+## Self help troubleshooting
+<div class='icon is-large'>
+ <img alt='Self help content' src='./media/logos/i-article.svg'>
+</div>
+
+For common issues and workarounds, see [Troubleshoot Azure Cloud Services (extended support) role start failures](role-startup-failure.md) and [Frequently asked questions](faq.md).
+++
+## Post a question on Microsoft Q&A
+
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/logos/microsoft-logo.png'>
+</div>
+
+Get answers to Azure Cloud Services (extended support) questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs), and members of our expert community.
+
+[Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) is Azure's recommended source of community support.
+
+If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Be sure to post your question using the [**azure-cloud-services-extended-support**](https://docs.microsoft.com/answers/topics/azure-cloud-services-extended-support.html) tag. Here are some Microsoft Q&A tips for writing [high-quality questions](https://docs.microsoft.com/answers/articles/24951/how-to-write-a-quality-question.html).
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='./media/logos/logo-azure.svg'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.
++
+## Stay informed of updates and new releases
+
+<div class='icon is-large'>
+ <img alt='Stay informed' src='./media/logos/i-blog.svg'>
+</div>
+
+Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
+
+News and information about Azure Cloud Services (extended support) is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
++
+## Next steps
+
+Learn more about [Azure Cloud Services (extended support)](overview.md).
cognitive-services Luis Concept Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-test.md
- Title: Test your LUIS app-
-description: Testing is the process of providing sample utterances to LUIS and getting a response of LUIS-recognized intents and entities.
------- Previously updated : 10/10/2019---
-# Testing example utterances in LUIS
-
-Testing is the process of providing sample utterances to LUIS and getting a response of LUIS-recognized intents and entities.
-
-You can test LUIS interactively, one utterance at a time, or provide a set of utterances. While testing, you can compare the current active model's prediction response to the published model's prediction response.
-
-<a name="A-test-score"></a>
-<a name="Score-all-intents"></a>
-<a name="E-(exponent)-notation"></a>
-
-## What is a score in testing?
-See [Prediction score](luis-concept-prediction-score.md) concepts to learn more about prediction scores.
-
-## Interactive testing
-Interactive testing is done from the **Test** panel of the LUIS portal. You can enter an utterance to see how intents and entities are identified and scored. If LUIS isn't predicting the intents and entities as you expect on an utterance in the testing panel, copy it to the **Intent** page as a new utterance. Then label the parts of that utterance for entities, and train LUIS.
-
-## Batch testing
-See [batch testing](./luis-how-to-batch-test.md) if you are testing more than one utterance at a time.
-
-## Endpoint testing
-You can test using the [endpoint](luis-glossary.md#endpoint) with a maximum of two versions of your app. With your main or live version of your app set as the **production** endpoint, add a second version to the **staging** endpoint. This approach gives you three versions of an utterance: the current model in the Test pane of the [LUIS](luis-reference-regions.md) website, and the two versions at the two different endpoints.
-
-All endpoint testing counts toward your usage quota.
-
-## Do not log tests
-If you test against an endpoint, and do not want the utterance logged, remember to use the `logging=false` query string configuration.
-
-## Where to find utterances
-LUIS stores all logged utterances in the query log, available for download on the LUIS portal from the **Apps** list page, as well as the LUIS [authoring APIs](https://go.microsoft.com/fwlink/?linkid=2092087).
-
-Any utterances LUIS is unsure of are listed in the **[Review endpoint utterances](luis-how-to-review-endpoint-utterances.md)** page of the [LUIS](luis-reference-regions.md) website.
-
-## Remember to train
-Remember to [train](luis-how-to-train.md) LUIS after you make changes to the model. Changes to the LUIS app are not seen in testing until the app is trained.
-
-## Best practices
-Learn [best practices](luis-concept-best-practices.md).
-
-## Next steps
-
-* Learn more about [testing](luis-interactive-test.md) your utterances.
cognitive-services Luis Interactive Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-interactive-test.md
description: Use Language Understanding (LUIS) to continuously work on your appl
Previously updated : 06/02/2020++
+ms.
Last updated : 06/01/2021 # Test your LUIS app in the LUIS portal
-[Testing](luis-concept-test.md) an app is an iterative process. After training your LUIS app, test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates to the LUIS app, train, and test again.
+
+Testing is the process of providing sample utterances to LUIS and getting a response of LUIS-recognized intents and entities. You can test LUIS interactively, one utterance at a time, or provide a set of utterances. While testing, you can compare the current active model's prediction response to the published model's prediction response.
++
+Testing an app is an iterative process. After training your LUIS app, test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates to the LUIS app, train, and test again.
<!-- anchors for H2 name changes --> <a name="train-your-app"></a>
Last updated 06/02/2020
<a name="access-the-test-page"></a> <a name="luis-interactive-testing"></a>
-## Train before testing
+## Interactive testing
-1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
-1. Open your app by selecting its name on **My Apps** page.
-1. In order to test against the most recent version of the active app, select **Train** from the top menu, before testing.
+Interactive testing is done from the **Test** panel of the LUIS portal. You can enter an utterance to see how intents and entities are identified and scored. If LUIS isn't predicting the intents and entities as you expect on an utterance in the testing panel, copy it to the **Intent** page as a new utterance. Then label the parts of that utterance for entities, and train LUIS.
+
+See [batch testing](./luis-how-to-batch-test.md) if you are testing more than one utterance at a time, and the [Prediction scores](luis-concept-prediction-score.md) article to learn more about prediction scores.
+
+You can test using the [endpoint](luis-glossary.md#endpoint) with a maximum of two versions of your app. With your main or live version of your app set as the **production** endpoint, add a second version to the **staging** endpoint. This approach gives you three versions of an utterance: the current model in the Test pane of the [LUIS](luis-reference-regions.md) portal, and the two versions at the two different endpoints.
+
+All endpoint testing counts toward your usage quota.
+
+## Logging
+
+LUIS stores all logged utterances in the query log, available for download on the LUIS portal from the **Apps** list page, as well as the LUIS [authoring APIs](https://go.microsoft.com/fwlink/?linkid=2092087).
+
+If you test against an endpoint, and do not want the utterance logged, remember to use the `logging=false` query string configuration.
+
+Any utterances LUIS is unsure of are listed in the **[Review endpoint utterances](luis-how-to-review-endpoint-utterances.md)** page of the [LUIS](luis-reference-regions.md) portal.
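The slot and logging behavior described above can be sketched as a small URL builder. This is only an illustration of composing the query string with the `logging=false` setting from this article; the region, app ID, and v3 prediction path shown are placeholder assumptions, so check them against the LUIS prediction API reference before use.

```python
from urllib.parse import urlencode

def build_prediction_url(region, app_id, query, slot="production", log=False):
    """Compose a LUIS prediction URL for the given slot.

    Passing log=False appends the logging=false query string so the
    test utterance is not stored in the query log. The host, path, and
    parameter names here are illustrative placeholders.
    """
    base = (f"https://{region}.api.cognitive.microsoft.com"
            f"/luis/prediction/v3.0/apps/{app_id}/slots/{slot}/predict")
    params = {"query": query}
    if not log:
        params["logging"] = "false"
    return f"{base}?{urlencode(params)}"

# Production vs. staging slots give you two testable versions of the app.
prod_url = build_prediction_url("westus", "<app-id>", "book a flight")
stage_url = build_prediction_url("westus", "<app-id>", "book a flight", slot="staging")
```

Sending the same utterance to both URLs lets you compare the two deployed versions side by side without logging the test traffic.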
## Test an utterance
+> [!NOTE]
+> Remember to [train](luis-how-to-train.md) LUIS after you make changes to the model. Changes to the LUIS app are not seen in testing until the app is trained.
+> 1. Sign in to the LUIS portal, and select your subscription and authoring resource to see the apps assigned to that authoring resource.
+> 2. Open your app by selecting its name on My Apps page.
+> 3. In order to test against the most recent version of the active app, select Train from the top menu, before testing.
+ The test utterance should not be exactly the same as any example utterances in the app. The test utterance should include word choice, phrase length, and entity usage you expect for a user. 1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource.
See batch testing [concepts](./luis-how-to-batch-test.md) and learn [how to](lui
If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's accuracy by labeling more utterances or adding features. * [Label suggested utterances with LUIS](luis-how-to-review-endpoint-utterances.md)
-* [Use features to improve your LUIS app's performance](luis-how-to-add-features.md)
+* [Use features to improve your LUIS app's performance](luis-how-to-add-features.md)
+* [Best practices](luis-concept-best-practices.md)
cognitive-services Create Faq Bot With Azure Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/create-faq-bot-with-azure-bot-service.md
When you make changes to the knowledge base and republish, you don't need to tak
1. Light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).--
- - Click on Channels in the Bot Service resource.
-
- > [!div class="mx-imgBorder"]
- > ![Screenshot of connecting a bot to a channel](../media/qnamaker-tutorial-updates/connect-with-teams.png)
## Integrate the bot with channels
cognitive-services Faq Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-text-to-speech.md
- Title: Text to Speech frequently asked questions-
-description: Get answers to the frequently asked questions about the Text to Speech service.
------ Previously updated : 08/20/2020---
-# Text to Speech frequently asked questions
-
-If you can't find answers to your questions in this FAQ, check out [other support options](../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext%253fcontext%253d%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
-
-## General
-
-**Q: What is the difference between a standard voice model and a custom voice model?**
-
-**A**: The standard voice model (also called a _voice font_) has been trained by using Microsoft-owned data and is already deployed in the cloud. You can use a custom voice model either to adapt an average model and transfer the timbre and expression of the speaker's voice style or train a full, new model based on the training data prepared by the user. Today, more and more customers want a one-of-a-kind, branded voice for their bots. The custom voice-building platform is the right choice for that option.
-
-**Q: Where do I start if I want to use a standard voice model?**
-
-**A**: More than 80 standard voice models in over 45 languages are available through HTTP requests. First, get a [subscription key](./overview.md#try-the-speech-service-for-free). To make REST calls to the predeployed voice models, see the [REST API](./overview.md#reference-docs).
-
-**Q: If I want to use a customized voice model, is the API the same as the one that's used for standard voices?**
-
-**A**: When a custom voice model is created and deployed, you get a unique endpoint for your model. To use the voice to speak in your apps, you must specify the endpoint in your HTTP requests. The same functionality that's available in the REST API for the Text to Speech service is available for your custom endpoint. Learn how to [create and use your custom endpoint](./how-to-custom-voice-create-voice.md#create-and-use-a-custom-neural-voice-endpoint).
-
-**Q: Do I need to prepare the training data to create custom voice models on my own?**
-
-**A**: Yes, you must prepare the training data yourself for a custom voice model.
-
-A collection of speech data is required to create a customized voice model. This collection consists of a set of audio files of speech recordings and a text file of the transcription of each audio file. The result of your digital voice relies heavily on the quality of your training data. To produce a good text-to-speech voice, it's important that the recordings are made in a quiet room with a high-quality, standing microphone. A consistent volume, speaking rate, and speaking pitch, and even consistency in expressive mannerisms of speech are essential for building a great digital voice. We highly recommend recording the voices in a recording studio.
-
-Currently, we don't provide online recording support or have any recording studio recommendations. For the format requirement, see [how to prepare recordings and transcripts](./how-to-custom-voice-create-voice.md).
-
-**Q: What scripts should I use to record the speech data for custom voice training?**
-
-**A**: We don't limit the scripts for voice recording. You can use your own scripts to record the speech. Just ensure that you have sufficient phonetic coverage in your speech data. To train a custom voice, you can start with a small volume of speech data, which might be 50 different sentences (about 3-5 minutes of speech). The more data you provide, the more natural your voice will be. You can start to train a full voice font when you provide recordings of more than 2,000 sentences (about 3-4 hours of speech). To get a high-quality, full voice, you need to prepare recordings of more than 6,000 sentences (about 8-10 hours of speech).
-
-We provide additional services to help you prepare scripts for recording. Contact [Custom Voice customer support](mailto:customvoice@microsoft.com?subject=Inquiries%20about%20scripts%20generation%20for%20Custom%20Voice%20creation) for inquiries.
-
-**Q: What if I need higher concurrency than the default value or what is offered in the portal?**
-
-**A**: You can scale up your model in increments of 20 concurrent requests. Contact [Custom Voice customer support](mailto:customvoice@microsoft.com?subject=Inquiries%20about%20scripts%20generation%20for%20Custom%20Voice%20creation) for inquiries about higher scaling.
-
-**Q: Can I download my model and run it locally?**
-
-**A**: Models can't be downloaded and executed locally.
-
-**Q: Are my requests throttled?**
-
-**A**: See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
-
-## Next steps
--- [Troubleshooting](troubleshooting.md)-- [Release notes](releasenotes.md)
cognitive-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md
Implement automated training, testing, and release management to enable continuous improvement of Custom Speech models as you apply updates to training and testing data. Through effective implementation of CI/CD workflows, you can ensure that the endpoint for the best-performing Custom Speech model is always available.
-[Continuous integration](/azure/devops/learn/what-is-continuous-integration) (CI) is the engineering practice of frequently committing updates in a shared repository, and performing an automated build on it. CI workflows for Custom Speech train a new model from its data sources and perform automated testing on the new model to ensure that it performs better than the previous model.
+[Continuous integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing updates in a shared repository, and performing an automated build on it. CI workflows for Custom Speech train a new model from its data sources and perform automated testing on the new model to ensure that it performs better than the previous model.
-[Continuous delivery](/azure/devops/learn/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved Custom Speech model. CD makes endpoints easily available to be integrated into solutions.
+[Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved Custom Speech model. CD makes endpoints easily available to be integrated into solutions.
Custom CI/CD solutions are possible, but for a robust, pre-built solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.
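The CI gate described above (release a newly trained model only when automated testing shows it beats the previous one) can be sketched as a small promotion check. The word error rate (WER) values and improvement threshold below are hypothetical, not outputs of any real evaluation run:

```python
def should_promote(new_wer: float, baseline_wer: float,
                   min_improvement: float = 0.0) -> bool:
    """Return True when the newly trained model's word error rate (WER)
    improves on the baseline by more than min_improvement points.

    Lower WER is better, so the new model must come in under the baseline
    before the CD stage creates an endpoint for it.
    """
    return (baseline_wer - new_wer) > min_improvement

# Hypothetical results from the automated testing step of a CI run.
assert should_promote(new_wer=8.2, baseline_wer=9.1)      # improved: release
assert not should_promote(new_wer=9.5, baseline_wer=9.1)  # regressed: keep old endpoint
```

In a real workflow (such as the GitHub Actions runs in the Speech DevOps template repository), this decision would gate whether the CD stage deploys a new Custom Speech endpoint.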
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
What happens when a model expires and how to update the model depends on how it
### Batch transcription If a model expires that is used with [batch transcription](batch-transcription.md), transcription requests will fail with a 4xx error. To prevent this, update the `model` parameter in the JSON sent in the **Create Transcription** request body to either point to a more recent base model or a more recent custom model. You can also remove the `model` entry from the JSON to always use the latest base model. ### Custom speech endpoint
-If a model expires that is used by a [custom speech endpoint](how-to-custom-speech-train-model.md), then the service will automatically fall back to using the latest base model for the language you are using. , you are using you can select **Deployment** in the **Custom Speech** menu at the top of the page and then click on the endpoint name to see its details. At the top of the details page, you will see an **Update Model** button that lets you seamlessly update the model used by this endpoint without downtime. You can also make this change programmatically by using the [**Update Model**](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) Rest API.
+If a model expires that is used by a [custom speech endpoint](how-to-custom-speech-train-model.md), then the service will automatically fall back to using the latest base model for the language you are using. To update a model you are using, you can select **Deployment** in the **Custom Speech** menu at the top of the page and then click on the endpoint name to see its details. At the top of the details page, you will see an **Update Model** button that lets you seamlessly update the model used by this endpoint without downtime. You can also make this change programmatically by using the [**Update Model**](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) REST API.
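As a sketch of the programmatic route, the snippet below only builds the request URL and JSON body without sending anything. The PATCH path and body shape are assumptions modeled on the Speech to Text v3.0 API, so verify them against the **Update Model** reference before relying on them:

```python
import json

def build_update_model_request(region: str, endpoint_id: str, model_url: str):
    """Build the URL and JSON body for pointing a custom speech endpoint
    at a newer model. No request is sent; this only assembles the pieces.

    The endpoint path and body shape are assumptions based on the
    Speech to Text v3.0 API surface.
    """
    url = (f"https://{region}.api.cognitive.microsoft.com"
           f"/speechtotext/v3.0/endpoints/{endpoint_id}")
    body = json.dumps({"model": {"self": model_url}})
    return "PATCH", url, body

# Placeholder IDs; substitute your own endpoint and model references.
method, url, body = build_update_model_request(
    "westus", "<endpoint-id>",
    "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/models/<model-id>")
```

The actual call would also need an `Ocp-Apim-Subscription-Key` header with your Speech resource key.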
## Next steps
If a model expires that is used by a [custom speech endpoint](how-to-custom-spee
## Additional resources * [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
-* [Inspect your data](how-to-custom-speech-inspect-data.md)
+* [Inspect your data](how-to-custom-speech-inspect-data.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
To get pronunciation bits:
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronunciation Datasets" -> Click on Import -> Locale: the list of locales there correspond to the supported locales -->
-| Language | Locale (BCP-47) | Customizations | [Language identification](how-to-automatic-language-detection.md) |
-||--||-|
-| Arabic (Bahrain), modern standard | `ar-BH` | Text | |
-| Arabic (Egypt) | `ar-EG` | Text | Yes |
-| Arabic (Iraq) | `ar-IQ` | Text | |
-| Arabic (Israel) | `ar-IL` | Text | |
-| Arabic (Jordan) | `ar-JO` | Text | |
-| Arabic (Kuwait) | `ar-KW` | Text | |
-| Arabic (Lebanon) | `ar-LB` | Text | |
-| Arabic (Oman) | `ar-OM` | Text | |
-| Arabic (Qatar) | `ar-QA` | Text | |
-| Arabic (Saudi Arabia) | `ar-SA` | Text | |
-| Arabic (State of Palestine) | `ar-PS` | Text | |
-| Arabic (Syria) | `ar-SY` | Text | |
-| Arabic (United Arab Emirates) | `ar-AE` | Text | |
-| Bulgarian (Bulgaria) | `bg-BG` | Text | |
-| Catalan (Spain) | `ca-ES` | Text | Yes |
-| Chinese (Cantonese, Traditional) | `zh-HK` | Audio (20201015)<br>Text | Yes |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Audio (20200910)<br>Text | Yes |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | Audio (20190701, 20201015)<br>Text | Yes |
-| Croatian (Croatia) | `hr-HR` | Text | |
-| Czech (Czech Republic) | `cs-CZ` | Text | |
-| Danish (Denmark) | `da-DK` | Text | Yes |
-| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
-| English (Australia) | `en-AU` | Audio (20201019)<br>Text | Yes |
-| English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes |
-| English (Ghana) | `en-GH` | Text | |
-| English (Hong Kong) | `en-HK` | Text | |
-| English (India) | `en-IN` | Audio (20200923)<br>Text | |
-| English (Ireland) | `en-IE` | Text | |
-| English (Kenya) | `en-KE` | Text | |
-| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | |
-| English (Nigeria) | `en-NG` | Text | |
-| English (Philippines) | `en-PH` | Text | |
-| English (Singapore) | `en-SG` | Text | |
-| English (South Africa) | `en-ZA` | Text | |
-| English (Tanzania) | `en-TZ` | Text | |
-| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
-| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Text<br>Pronunciation| Yes |
-| Estonian(Estonia) | `et-EE` | Text | |
-| Filipino (Philippines) | `fil-PH`| Text | |
-| Finnish (Finland) | `fi-FI` | Text | Yes |
-| French (Canada) | `fr-CA` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
-| French (France) | `fr-FR` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
-| French (Switzerland) | `fr-CH` | Text<br>Pronunciation | |
-| German (Austria) | `de-AT` | Text<br>Pronunciation | |
-| German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes |
-| Greek (Greece) | `el-GR` | Text | Yes |
-| Gujarati (Indian) | `gu-IN` | Text | |
-| Hindi (India) | `hi-IN` | Audio (20200701)<br>Text | Yes |
-| Hungarian (Hungary) | `hu-HU` | Text | |
-| Indonesian (Indonesia) | `id-ID` | Text | |
-| Irish(Ireland) | `ga-IE` | Text | |
-| Italian (Italy) | `it-IT` | Audio (20201016)<br>Text<br>Pronunciation| Yes |
-| Japanese (Japan) | `ja-JP` | Text | Yes |
-| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Text | Yes |
-| Latvian (Latvia) | `lv-LV` | Text | |
-| Lithuanian (Lithuania) | `lt-LT` | Text | |
-| Malay (Malaysia) | `ms-MY` | Text | |
-| Maltese (Malta) | `mt-MT` | Text | |
-| Marathi (India) | `mr-IN` | Text | |
-| Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes |
-| Polish (Poland) | `pl-PL` | Text | Yes |
-| Portuguese (Brazil) | `pt-BR` | Audio (20190620, 20201015)<br>Text<br>Pronunciation| Yes |
-| Portuguese (Portugal) | `pt-PT` | Text<br>Pronunciation | Yes |
-| Romanian (Romania) | `ro-RO` | Text | Yes |
-| Russian (Russia) | `ru-RU` | Audio (20200907)<br>Text | Yes |
-| Slovak (Slovakia) | `sk-SK` | Text | |
-| Slovenian (Slovenia) | `sl-SI` | Text | |
-| Spanish (Argentina) | `es-AR` | Text<br>Pronunciation | |
-| Spanish (Bolivia) | `es-BO` | Text<br>Pronunciation | |
-| Spanish (Chile) | `es-CL` | Text<br>Pronunciation | |
-| Spanish (Colombia) | `es-CO` | Text<br>Pronunciation | |
-| Spanish (Costa Rica) | `es-CR` | Text<br>Pronunciation | |
-| Spanish (Cuba) | `es-CU` | Text<br>Pronunciation | |
-| Spanish (Dominican Republic) | `es-DO` | Text<br>Pronunciation | |
-| Spanish (Ecuador) | `es-EC` | Text<br>Pronunciation | |
-| Spanish (El Salvador) | `es-SV` | Text<br>Pronunciation | |
-| Spanish (Equatorial Guinea) | `es-GQ` | Text | |
-| Spanish (Guatemala) | `es-GT` | Text<br>Pronunciation | |
-| Spanish (Honduras) | `es-HN` | Text<br>Pronunciation | |
-| Spanish (Mexico) | `es-MX` | Audio (20200907)<br>Text<br>Pronunciation| Yes |
-| Spanish (Nicaragua) | `es-NI` | Text<br>Pronunciation | |
-| Spanish (Panama) | `es-PA` | Text<br>Pronunciation | |
-| Spanish (Paraguay) | `es-PY` | Text<br>Pronunciation | |
-| Spanish (Peru) | `es-PE` | Text<br>Pronunciation | |
-| Spanish (Puerto Rico) | `es-PR` | Text<br>Pronunciation | |
-| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Text<br>Pronunciation| Yes |
-| Spanish (Uruguay) | `es-UY` | Text<br>Pronunciation | |
-| Spanish (USA) | `es-US` | Text<br>Pronunciation | |
-| Spanish (Venezuela) | `es-VE` | Text<br>Pronunciation | |
-| Swedish (Sweden) | `sv-SE` | Text | Yes |
-| Tamil (India) | `ta-IN` | Text | |
-| Telugu (India) | `te-IN` | Text | |
-| Thai (Thailand) | `th-TH` | Text | Yes |
-| Turkish (Turkey) | `tr-TR` | Text | |
-| Vietnamese (Vietnam) | `vi-VN` | Text | |
+| Language | Locale (BCP-47) | Customizations | [Language identification](how-to-automatic-language-detection.md) | [Pronunciation assessment](how-to-pronunciation-assessment.md) |
+||--||-|--|
+| Arabic (Algeria) | `ar-DZ` | Text | | |
+| Arabic (Bahrain), modern standard | `ar-BH` | Text | | |
+| Arabic (Egypt) | `ar-EG` | Text | Yes | |
+| Arabic (Iraq) | `ar-IQ` | Text | | |
+| Arabic (Israel) | `ar-IL` | Text | | |
+| Arabic (Jordan) | `ar-JO` | Text | | |
+| Arabic (Kuwait) | `ar-KW` | Text | | |
+| Arabic (Lebanon) | `ar-LB` | Text | | |
+| Arabic (Libya) | `ar-LY` | Text | | |
+| Arabic (Morocco) | `ar-MA` | Text | | |
+| Arabic (Oman) | `ar-OM` | Text | | |
+| Arabic (Qatar) | `ar-QA` | Text | | |
+| Arabic (Saudi Arabia) | `ar-SA` | Text | | |
+| Arabic (Palestinian Authority) | `ar-PS` | Text | | |
+| Arabic (Syria) | `ar-SY` | Text | | |
+| Arabic (Tunisia) | `ar-TN` | Text | | |
+| Arabic (United Arab Emirates) | `ar-AE` | Text | | |
+| Arabic (Yemen) | `ar-YE` | Text | | |
+| Bulgarian (Bulgaria) | `bg-BG` | Text | | |
+| Catalan (Spain) | `ca-ES` | Text | Yes | |
+| Chinese (Cantonese, Traditional) | `zh-HK` | Audio (20201015)<br>Text | Yes | |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Audio (20200910)<br>Text | Yes | Yes |
+| Chinese (Taiwanese Mandarin) | `zh-TW` | Audio (20190701, 20201015)<br>Text | Yes | |
+| Croatian (Croatia) | `hr-HR` | Text | | |
+| Czech (Czech Republic) | `cs-CZ` | Text | | |
+| Danish (Denmark) | `da-DK` | Text | Yes | |
+| Dutch (Netherlands) | `nl-NL` | Audio (20201015)<br>Text<br>Pronunciation| Yes | |
+| English (Australia) | `en-AU` | Audio (20201019)<br>Text | Yes | |
+| English (Canada) | `en-CA` | Audio (20201019)<br>Text | Yes | |
+| English (Ghana) | `en-GH` | Text | | |
+| English (Hong Kong) | `en-HK` | Text | | |
+| English (India) | `en-IN` | Audio (20200923)<br>Text | | |
+| English (Ireland) | `en-IE` | Text | | |
+| English (Kenya) | `en-KE` | Text | | |
+| English (New Zealand) | `en-NZ` | Audio (20201019)<br>Text | | |
+| English (Nigeria) | `en-NG` | Text | | |
+| English (Philippines) | `en-PH` | Text | | |
+| English (Singapore) | `en-SG` | Text | | |
+| English (South Africa) | `en-ZA` | Text | | |
+| English (Tanzania) | `en-TZ` | Text | | |
+| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Text<br>Pronunciation| Yes | Yes |
+| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Text<br>Pronunciation| Yes | Yes |
+| Estonian (Estonia) | `et-EE` | Text | | |
+| Filipino (Philippines) | `fil-PH`| Text | | |
+| Finnish (Finland) | `fi-FI` | Text | Yes | |
+| French (Canada) | `fr-CA` | Audio (20201015)<br>Text<br>Pronunciation| Yes | |
+| French (France) | `fr-FR` | Audio (20201015)<br>Text<br>Pronunciation| Yes | |
+| French (Switzerland) | `fr-CH` | Text<br>Pronunciation | | |
+| German (Austria) | `de-AT` | Text<br>Pronunciation | | |
+| German (Germany) | `de-DE` | Audio (20190701, 20200619, 20201127)<br>Text<br>Pronunciation| Yes | |
+| Greek (Greece) | `el-GR` | Text | Yes | |
+| Gujarati (India) | `gu-IN` | Text | | |
+| Hebrew (Israel) | `he-IL` | Text | | |
+| Hindi (India) | `hi-IN` | Audio (20200701)<br>Text | Yes | |
+| Hungarian (Hungary) | `hu-HU` | Text | | |
+| Indonesian (Indonesia) | `id-ID` | Text | | |
+| Irish (Ireland) | `ga-IE` | Text | | |
+| Italian (Italy) | `it-IT` | Audio (20201016)<br>Text<br>Pronunciation| Yes | |
+| Japanese (Japan) | `ja-JP` | Text | Yes | |
+| Korean (Korea) | `ko-KR` | Audio (20201015)<br>Text | Yes | |
+| Latvian (Latvia) | `lv-LV` | Text | | |
+| Lithuanian (Lithuania) | `lt-LT` | Text | | |
+| Malay (Malaysia) | `ms-MY` | Text | | |
+| Maltese (Malta) | `mt-MT` | Text | | |
+| Marathi (India) | `mr-IN` | Text | | |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Text | Yes | |
+| Polish (Poland) | `pl-PL` | Text | Yes | |
+| Portuguese (Brazil) | `pt-BR` | Audio (20190620, 20201015)<br>Text<br>Pronunciation| Yes | |
+| Portuguese (Portugal) | `pt-PT` | Text<br>Pronunciation | Yes | |
+| Romanian (Romania) | `ro-RO` | Text | Yes | |
+| Russian (Russia) | `ru-RU` | Audio (20200907)<br>Text | Yes | |
+| Slovak (Slovakia) | `sk-SK` | Text | | |
+| Slovenian (Slovenia) | `sl-SI` | Text | | |
+| Spanish (Argentina) | `es-AR` | Text<br>Pronunciation | | |
+| Spanish (Bolivia) | `es-BO` | Text<br>Pronunciation | | |
+| Spanish (Chile) | `es-CL` | Text<br>Pronunciation | | |
+| Spanish (Colombia) | `es-CO` | Text<br>Pronunciation | | |
+| Spanish (Costa Rica) | `es-CR` | Text<br>Pronunciation | | |
+| Spanish (Cuba) | `es-CU` | Text<br>Pronunciation | | |
+| Spanish (Dominican Republic) | `es-DO` | Text<br>Pronunciation | | |
+| Spanish (Ecuador) | `es-EC` | Text<br>Pronunciation | | |
+| Spanish (El Salvador) | `es-SV` | Text<br>Pronunciation | | |
+| Spanish (Equatorial Guinea) | `es-GQ` | Text | | |
+| Spanish (Guatemala) | `es-GT` | Text<br>Pronunciation | | |
+| Spanish (Honduras) | `es-HN` | Text<br>Pronunciation | | |
+| Spanish (Mexico) | `es-MX` | Audio (20200907)<br>Text<br>Pronunciation| Yes | |
+| Spanish (Nicaragua) | `es-NI` | Text<br>Pronunciation | | |
+| Spanish (Panama) | `es-PA` | Text<br>Pronunciation | | |
+| Spanish (Paraguay) | `es-PY` | Text<br>Pronunciation | | |
+| Spanish (Peru) | `es-PE` | Text<br>Pronunciation | | |
+| Spanish (Puerto Rico) | `es-PR` | Text<br>Pronunciation | | |
+| Spanish (Spain) | `es-ES` | Audio (20201015)<br>Text<br>Pronunciation| Yes | |
+| Spanish (Uruguay) | `es-UY` | Text<br>Pronunciation | | |
+| Spanish (USA) | `es-US` | Text<br>Pronunciation | | |
+| Spanish (Venezuela) | `es-VE` | Text<br>Pronunciation | | |
+| Swedish (Sweden) | `sv-SE` | Text | Yes | |
+| Tamil (India) | `ta-IN` | Text | | |
+| Telugu (India) | `te-IN` | Text | | |
+| Thai (Thailand) | `th-TH` | Text | Yes | |
+| Turkish (Turkey) | `tr-TR` | Text | | |
+| Vietnamese (Vietnam) | `vi-VN` | Text | | |
## Text-to-speech
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
In the table below Parameters without "Adjustable" row are **not** adjustable fo
| **Websocket specific quotas** | | | | Max Audio length produced per turn | 10 min | 10 min | | Max SSML Message size per turn | 64 KB | 64 KB |
-| **REST API limit** | 20 requests per minute | 300 requests per minute |
<sup>3</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
Initiate the increase of Concurrent Request limit for your resource or if necess
- a note that the request is about **Text-to-Speech** quota - Azure resource information you [collected before](#prepare-the-required-information) - Complete entering the required information and select the *Create* button on the *Review + create* tab
- - Note the support request number in Azure portal notifications. You will be contacted shortly for further processing
+ - Note the support request number in Azure portal notifications. You will be contacted shortly for further processing
cognitive-services How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/how-to-migrate.md
- Title: Migrate Microsoft Translator Hub workspace and projects? - Custom Translator-
-description: This article explains how to migrate your Hub workspace and projects to Azure Cognitive Services Custom Translator.
---- Previously updated : 05/26/2020--
-#Customer intent: As a Custom Translator user, I want to understand how to migrate from Microsoft Translator Hub to Custom Translator.
--
-# Migrate Hub workspace and projects to Custom Translator
-
-You can easily migrate your [Microsoft Translator Hub](https://hub.microsofttranslator.com/) workspace and projects to Custom Translator. Migration is initiated from Microsoft Hub by selecting a workspace or project, then selecting a workspace in Custom Translator, and then selecting the trainings you want to transfer. After the migration starts, the selected training settings will be transferred with all relevant documents. Deployed models are trained and can be autodeployed upon completion.
-
-These actions are performed during migration:
-* All documents and project definitions will have their names transferred with the addition of "hub_" prefixed to the name. Auto-generated test and tuning data will be named hub_systemtune_\<modelid> or hub_systemtest_\<modelid>.
-* Any trainings that were in the deployed state when the migration takes place will automatically be trained using the documents of the Hub training. This training will not be charged to your subscription. If auto-deploy was selected for the migration, the trained model will be deployed upon completion. Regular hosting charges will be applied.
-* Any migrated trainings that were not in the deployed state will be put into the migrated draft state. In this state, you will have the option of training a model with the migrated definition, but regular training charges will apply.
-* At any point, the BLEU score migrated from the Hub training can be found in the TrainingDetails page of the model in the "Bleu score in MT Hub" heading.
-
-> [!Note]
-> For a training to succeed, Custom Translator requires a minimum of 10,000 unique extracted sentences. Custom Translator can't conduct a training with fewer than the [suggested minimum](./sentence-alignment.md#suggested-minimum-number-of-sentences).
-
-## Find Custom Translator Workspace ID
-
-To migrate a [Microsoft Translator Hub](https://hub.microsofttranslator.com/) workspace, you need the destination Workspace ID in Custom Translator. The destination workspace in Custom Translator is where all your Hub workspaces and projects will be migrated.
-
-You will find your destination Workspace ID on Custom Translator Settings page:
-
-1. Go to "Setting" page in the Custom Translator portal.
-
-2. You will find the Workspace ID in the Basic Information section.
-
- ![How to find destination workspace ID](media/how-to/how-to-find-destination-ws-id.png)
-
-3. Keep your destination Workspace ID to refer during the migration process.
-
-## Migrate a project
-
-If you want to migrate your projects selectively, Microsoft Translator Hub gives you that ability.
-
-To migrate a project:
-
-1. Sign in to Microsoft Translator Hub.
-
-2. Go to "Projects" page.
-
-3. Click "Migrate" link for appropriate project.
-
- ![Screenshot that highlights the Migrate button for the selected project.](media/how-to/how-to-migrate-from-hub.png)
-
-4. Upon pressing the migrate link you will be presented with a form allowing you to:
- * Specify the workspace you wish to transfer to on Custom Translator
-   * Indicate whether you wish to transfer all successful trainings or just the deployed trainings. By default, all successful trainings will be transferred.
-   * Indicate whether you would like your training auto-deployed when training completes. By default, your training will not be auto-deployed upon completion.
-
-5. Click "Submit Request".
-
-## Migrate a workspace
-
-In addition to migrating a single project, you may also migrate all projects with successful trainings in a workspace. This will cause each project in the workspace to be evaluated as though the migrate link had been pressed. This feature is suitable for users with many projects who want to migrate all of them to Custom Translator with the same settings. A workspace migration can be initiated from the settings page of Translator Hub.
-
-To migrate a workspace:
-
-1. Sign in to Microsoft Translator Hub.
-
-2. Go to "Settings" page.
-
-3. On "Settings" page click "Migrate Workspace data to Custom Translator".
-
- ![Screenshot that highlights the Migrate Workspace data to Custom Translator option.](media/how-to/how-to-migrate-workspace-from-hub.png)
-
-4. On the next page select either of these two options:
-
- a. Deployed Trainings only: Selecting this option will migrate only your deployed systems and related documents.
-
- b. All Successful Trainings: Selecting this option will migrate all your successful trainings and related documents.
-
- c. Enter your destination Workspace ID in Custom Translator.
-
- ![How to migrate from Hub](media/how-to/how-to-migrate-from-hub-screen.png)
-
-5. Click Submit Request.
-
-## Migration History
-
-When you have requested workspace or project migration from Hub, you'll find your migration history on the Custom Translator Settings page.
-
-To view the migration history, follow these steps:
-
-1. Go to "Setting" page in the Custom Translator portal.
-
-2. In the Migration History section of the Settings page, click Migration History.
-
- ![Migration history](media/how-to/how-to-migration-history.png)
-
-The Migration History page displays the following information as a summary for every migration you requested.
-
-1. Migrated By: Name and email of the user who submitted the migration request
-
-2. Migrated On: Date and time stamp of the migration
-
-3. Projects: Number of projects requested for migration vs. number of projects successfully migrated.
-
-4. Trainings: Number of trainings requested for migration vs. number of trainings successfully migrated.
-
-5. Documents: Number of documents requested for migration vs. number of documents successfully migrated.
-
- ![Migration history details](media/how-to/how-to-migration-history-details.png)
-
-If you want a more detailed migration report about your projects, trainings, and documents, you have the option to export the details as a CSV file.
-
-## Implementation Notes
-* Systems with language pairs NOT yet available in Custom Translator will only be available to access data or undeploy through Custom Translator. These projects will be marked as "Unavailable" on the Projects page. As we enable new language pairs with Custom Translator, the projects will become active to train and deploy.
-* Migrating a project from Hub to Custom Translator will not have any impact on your Hub trainings or projects. We do not delete projects or documents from Hub during a migration and we do not undeploy models.
-* You are only permitted to migrate once per project. If you need to repeat a migration on a project, please contact us.
-* Custom Translator supports NMT language pairs to and from English. [View the complete list of supported languages](../language-support.md#customization). Hub does not require baseline models and therefore supports several thousand languages. You can migrate an unsupported language pair, however we will only perform the migration of documents and project definitions. We will not be able to train the new model. Furthermore, these documents and projects will be displayed as inactive in order to indicate that they can't be used at this time. If support is added for these projects and/or documents, they will become active and trainable.
-* Custom Translator does not currently support monolingual training data. Like unsupported language pairs, you can migrate monolingual documents, but they show as inactive until monolingual data is supported.
-* Custom Translator requires 10k parallel sentences in order to train. Microsoft Hub could train on a smaller set of data. If a training is migrated which does not meet this requirement, it will not be trained.
-
-## Custom Translator versus Hub
-
-This table compares the features between Microsoft Translator Hub and Custom Translator.
-
-| Feature | Hub | Custom Translator |
-| - | :-: | :-: |
-| Customization feature status | General Availability | General Availability |
-| Text API version | V2 | V3 |
-| SMT customization | Yes | No |
-| NMT customization | No | Yes |
-| New unified Speech services customization | No | Yes |
-| No Trace | Yes | Yes |
-
-## New languages
-
-If you are a community or organization working on creating a new language system for Translator, reach out to [custommt@microsoft.com](mailto:custommt@microsoft.com) for more information.
-
-## Next steps
-
-- [Train a model](how-to-train-model.md).
-- Start using your deployed custom translation model via [Translator V3](../reference/v3-0-translate.md?tabs=curl).
cognitive-services How To View System Test Results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md
To update deployment settings:
- Start using your deployed custom translation model via [Microsoft Translator Text API V3](../reference/v3-0-translate.md?tabs=curl).
- Learn [how to manage settings](how-to-manage-settings.md) to share your workspace, manage subscription key.
-- Learn [how to migrate your workspace and project](how-to-migrate.md) from [Microsoft Translator Hub](https://hub.microsofttranslator.com)
cognitive-services Unsupported Language Deployments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/custom-translator/unsupported-language-deployments.md
We now have a process that allows you to deploy your unsupported models through
## Prerequisites

In order for your models to be candidates for deployment, they must meet the following criteria:
-* The project containing the model must have been migrated from the Hub to the Custom Translator using the Migration Tool. The process for migrating projects and workspaces can be found [here](how-to-migrate.md).
+* The project containing the model must have been migrated from the Hub to the Custom Translator using the Migration Tool.
* The model must be in the deployed state when the migration happens.
* The language pair of the model must be an unsupported language pair in Custom Translator. Language pairs in which a language is supported to or from English, but the pair itself does not include English, are candidates for unsupported language deployments. For example, a Hub model for a French to German language pair is considered an unsupported language pair even though French to English and English to German are supported language pairs.
Unlike standard Custom Translator models, Hub models will only be available in a
## Next steps
- [Train a model](how-to-train-model.md).
-- Start using your deployed custom translation model via [Microsoft Translator Text API V3](../reference/v3-0-translate.md?tabs=curl).
+- Start using your deployed custom translation model via [Microsoft Translator Text API V3](../reference/v3-0-translate.md?tabs=curl).
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
The following headers are included with each Document Translator API request:
### POST request body properties
+* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches`
* The POST request body is a JSON object named `inputs`.
* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs and can optionally contain a `glossaryURL` container address.
* The `prefix` and `suffix` fields (optional) are used to filter documents in the container, including folders.
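The body described by the bullets above can be sketched as follows. This is a minimal, hypothetical example: the container URLs and SAS tokens are placeholders, and the `sourceUrl`/`targetUrl` field spellings are assumed from the v1.0 batch request shape.

```python
import json

# Hypothetical SAS-signed container URLs -- replace with your own.
source_container = "https://myblob.blob.core.windows.net/source?<sas-token>"
target_container = "https://myblob.blob.core.windows.net/target?<sas-token>"

body = {
    "inputs": [
        {
            "source": {"sourceUrl": source_container},
            "targets": [
                {"targetUrl": target_container, "language": "fr"}
            ],
        }
    ]
}

# This dictionary, serialized to JSON, is the POST payload.
print(json.dumps(body, indent=2))
```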
cognitive-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/get-supported-glossary-formats.md
The following is an example of a successful response.
```JSON
{
- "value": [
- {
- "format": "XLIFF",
- "fileExtensions": [
- ".xlf"
- ],
- "contentTypes": [
- "application/xliff+xml"
- ],
- "defaultVersion": "1.2",
- "versions": [
- "1.0",
- "1.1",
- "1.2"
- ]
- },
- {
- "format": "TMX",
- "fileExtensions": [
- ".tmx"
- ],
- "contentTypes": [],
- "versions": [
- "1.0",
- "1.1",
- "1.2",
- "1.3",
- "1.4"
- ]
- }
- ]
+ "value": [
+ {
+ "format": "XLIFF",
+ "fileExtensions": [
+ ".xlf"
+ ],
+ "contentTypes": [
+ "application/xliff+xml"
+ ],
+ "defaultVersion": "1.2",
+ "versions": [
+ "1.0",
+ "1.1",
+ "1.2"
+ ]
+ },
+ {
+ "format": "TSV",
+ "fileExtensions": [
+ ".tsv",
+ ".tab"
+ ],
+ "contentTypes": [
+ "text/tab-separated-values"
+ ]
+ },
+ {
+ "format": "CSV",
+ "fileExtensions": [
+ ".csv"
+ ],
+ "contentTypes": [
+ "text/csv"
+ ]
+ }
+ ]
}
```

### Example error response
cognitive-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/document-translation/reference/start-translation.md
Definition for the input batch translation request.
|Name|Type|Required|Description|
| | | | |
|source|SourceInput[]|True|inputs.source listed below. Source of the input documents.|
-|storageType|StorageInputType[]|True|inputs.storageType listed below. Storage type of the input documents source string.|
+|storageType|StorageInputType[]|False|inputs.storageType listed below. Storage type of the input documents source string. Required for single document translation only.|
|targets|TargetInput[]|True|inputs.target listed below. Location of the destination for the output.|

**inputs.source**
cognitive-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/data-feeds-from-different-sources.md
The timestamp field must match one of these two formats:
* **Table Name**: Specify a table to query against. This can be found in your Azure Storage Account instance. Click **Tables** in the **Table Service** section.
* **Query**
-You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy-MM-ddTHH:mm:ss format string in script. Tip: Use Azure storage explorer to create a query with specific time range and make sure it runs okay, then do the replacement.
+You can use the `@StartTime` in your query. `@StartTime` is replaced with a yyyy-MM-ddTHH:mm:ss format string in script. Tip: Use Azure Storage Explorer to create a query with specific time range and make sure it runs okay, then do the replacement.
```mssql
date ge datetime'@StartTime' and date lt datetime'@EndTime'
```
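As an illustration of the substitution described above, here is a small sketch (assuming Python) that mimics replacing `@StartTime` and `@EndTime` with timestamps in yyyy-MM-ddTHH:mm:ss format; the dates are arbitrary examples:

```python
from datetime import datetime

query_template = "date ge datetime'@StartTime' and date lt datetime'@EndTime'"

# Metrics Advisor replaces the placeholders with timestamps in
# yyyy-MM-ddTHH:mm:ss format; this mimics that substitution locally.
start = datetime(2021, 6, 1, 0, 0, 0)
end = datetime(2021, 6, 2, 0, 0, 0)

query = (query_template
         .replace("@StartTime", start.strftime("%Y-%m-%dT%H:%M:%S"))
         .replace("@EndTime", end.strftime("%Y-%m-%dT%H:%M:%S")))

print(query)  # date ge datetime'2021-06-01T00:00:00' and date lt datetime'2021-06-02T00:00:00'
```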
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/responsible-use-of-ai-overview.md
+
+ Title: Overview of Responsible use of AI
+
+description: Azure Cognitive Services provides information and guidelines on how to responsibly use our AI services in applications. Below are the links to articles that provide this guidance for the different services within the Cognitive Services suite.
+++++ Last updated : 06/02/2021+++
+# Responsible use of AI with Cognitive Services
+
+Azure Cognitive Services provides information and guidelines on how to responsibly use artificial intelligence in applications. Below are the links to articles that provide this guidance for the different services within the Cognitive Services suite.
+
+## Computer Vision - OCR
+
+* [Transparency note and use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note?context=/azure/cognitive-services/computer-vision/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations?context=/azure/cognitive-services/computer-vision/context/context)
+* [Integration and responsible use](/legal/cognitive-services/computer-vision/ocr-guidance-integration-responsible-use?context=/azure/cognitive-services/computer-vision/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security?context=/azure/cognitive-services/computer-vision/context/context)
+
+## Computer Vision - Spatial Analysis
+
+* [Transparency note and use cases](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/cognitive-services/computer-vision/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/computer-vision/accuracy-and-limitations?context=/azure/cognitive-services/computer-vision/context/context)
+* [Responsible use in AI deployment](/legal/cognitive-services/computer-vision/responsible-use-deployment?context=/azure/cognitive-services/computer-vision/context/context)
+* [Disclosure design guidelines](/legal/cognitive-services/computer-vision/disclosure-design?context=/azure/cognitive-services/computer-vision/context/context)
+* [Research insights](/legal/cognitive-services/computer-vision/research-insights?context=/azure/cognitive-services/computer-vision/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/computer-vision/compliance-privacy-security-2?context=/azure/cognitive-services/computer-vision/context/context)
+
+## QnA Maker
+
+* [Transparency note and use cases](/legal/cognitive-services/qnamaker/transparency-note-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/qnamaker/characteristics-and-limitations-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
+* [Integration and responsible use](/legal/cognitive-services/qnamaker/guidance-integration-responsible-use-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/qnamaker/data-privacy-security-qnamaker?context=/azure/cognitive-services/qnamaker/context/context)
+
+## Text Analytics
+
+* [Transparency note and use cases](/legal/cognitive-services/text-analytics/transparency-note?context=/azure/cognitive-services/text-analytics/context/context)
+* [Integration and responsible use](/legal/cognitive-services/text-analytics/guidance-integration-responsible-use?context=/azure/cognitive-services/text-analytics/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/text-analytics/data-privacy?context=/azure/cognitive-services/text-analytics/context/context)
+
+## Speech - Pronunciation Assessment
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context)
+
+## Speech - Custom Neural Voice
+
+* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Responsible deployment of synthetic speech](./speech-service/concepts-guidelines-responsible-deployment-synthetic.md)
+* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure of design guidelines](./speech-service/concepts-disclosure-guidelines.md)
+* [Disclosure of design patterns](./speech-service/concepts-disclosure-patterns.md)
+* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
cognitive-services Model Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
Use the table below to find which model versions are supported by each hosted en
| `/entities/linking` | `2019-10-01`, `2020-02-01` | `2020-02-01` |
| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2021-01-15` | `2021-01-15` |
| `/entities/recognition/pii` | `2019-10-01`, `2020-02-01`, `2020-04-01`, `2020-07-01`, `2021-01-15` | `2021-01-15` |
-| `/entities/health` | `2021-03-01` | `2021-03-01` |
+| `/entities/health` | `2021-05-15` | `2021-05-15` |
| `/keyphrases` | `2019-10-01`, `2020-07-01` | `2020-07-01` |
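A caller can pin one of the versions listed in the table above rather than accepting the default. This sketch assumes the `model-version` query parameter of the Text Analytics REST API and uses a placeholder resource endpoint:

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- use your own Text Analytics resource.
endpoint = "https://my-resource.cognitiveservices.azure.com"
path = "/text/analytics/v3.1/entities/recognition/general"

# Pin a model version from the table above via the model-version
# query parameter; omitting it selects the endpoint's default version.
query = urlencode({"model-version": "2021-01-15"})
url = f"{endpoint}{path}?{query}"
print(url)
```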
cognitive-services Text Analytics For Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-for-health.md
Text Analytics for Health recognizes relations between different concepts, inclu
**ABBREVIATION**
+**BODY_SITE_OF_CONDITION**
+
+**BODY_SITE_OF_TREATMENT**
+
+**COURSE_OF_CONDITION**
+
+**COURSE_OF_EXAMINATION**
+
+**COURSE_OF_MEDICATION**
+
+**COURSE_OF_TREATMENT**
+
**DIRECTION_OF_BODY_STRUCTURE**

**DIRECTION_OF_CONDITION**
Text Analytics for Health recognizes relations between different concepts, inclu
**DOSAGE_OF_MEDICATION**
+**EXAMINATION_FINDS_CONDITION**
+
+**EXPRESSION_OF_GENE**
+
+**EXPRESSION_OF_VARIANT**
+ **FORM_OF_MEDICATION**
+**FREQUENCY_OF_CONDITION**
+
**FREQUENCY_OF_MEDICATION**

**FREQUENCY_OF_TREATMENT**
+**MUTATION_TYPE_OF_GENE**
+
+**MUTATION_TYPE_OF_VARIANT**
+
**QUALIFIER_OF_CONDITION**

**RELATION_OF_EXAMINATION**

**ROUTE_OF_MEDICATION**
+**SCALE_OF_CONDITION**
+
**TIME_OF_CONDITION**

**TIME_OF_EVENT**
Text Analytics for Health recognizes relations between different concepts, inclu
**VALUE_OF_EXAMINATION**
+**VARIANT_OF_GENE**
+
> [!NOTE]
> * Relations referring to CONDITION may refer to either the DIAGNOSIS entity type or the SYMPTOM_OR_SIGN entity type.
> * Relations referring to MEDICATION may refer to either the MEDICATION_NAME entity type or the MEDICATION_CLASS entity type.
cognitive-services Text Analytics How To Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
Before you use the Text Analytics API, you will need to create an Azure resource
2. Select the region you want to use for your endpoint.
-3. Create the Text Analytics resource and go to the "keys and endpoint blade" in the left of the page. Copy the key to be used later when you call the APIs. You'll add this later as a value for the `Ocp-Apim-Subscription-Key` header.
+3. Create the Text Analytics resource and go to the "Keys and Endpoint" section under Resource Management in the left of the page. Copy the key to be used later when you call the APIs. You'll add this later as a value for the `Ocp-Apim-Subscription-Key` header.
4. To check the number of text records that have been sent using your Text Analytics resource:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## June 2021
+
+### Text Analytics for health updates
+
+* A new model version `2021-05-15` for the `/health` endpoint and on-premises container, which provides:
+ * 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION` and `MUTATION_TYPE`,
+ * 14 new relation types,
+ * Assertion detection expanded for new entity types and
+ * Linking support for ALLERGEN entity type
+
## May 2021

* [Custom question answering](../qnamaker/custom-question-answering.md) (previously QnA Maker) can now be accessed using a Text Analytics resource.
container-registry Authenticate Aks Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/authenticate-aks-cross-tenant.md
+
+ Title: Authenticate from AKS cluster to Azure container registry in different AD tenant
+description: Configure an AKS cluster's service principal with permissions to access your Azure container registry in a different AD tenant
+++ Last updated : 05/21/2021++
+# Pull images from a container registry to an AKS cluster in a different Azure AD tenant
+
+In some cases, you might have your Azure AKS cluster in one Azure Active Directory (Azure AD) tenant and your Azure container registry in a different tenant. This article walks through the steps to enable cross-tenant authentication using the AKS service principal credential to pull from the container registry.
+
+## Scenario overview
+Assumptions for this example:
+
+* The AKS cluster is in **Tenant A** and the Azure container registry is in **Tenant B**.
+* The AKS cluster is configured with service principal authentication in **Tenant A**. Learn more about how to create and use a [service principal for your AKS cluster](../aks/kubernetes-service-principal.md).
+
+You need at least the Contributor role in the AKS cluster's subscription and the Owner role in the container registry's subscription.
+
+You use the following steps to:
+
+* Create a new multitenant app (service principal) in **Tenant A**.
+* Provision the app in **Tenant B**.
+* Configure the service principal to pull from the registry in **Tenant B**
+* Update the AKS cluster in **Tenant A** to authenticate using the new service principal
++
+## Step-by-step instructions
+
+### Step 1: Create multitenant Azure AD application
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) in **Tenant A**.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations > + New registration**.
+1. In **Supported account types**, select **Accounts in any organizational directory**.
+1. In **Redirect URI**, enter *https://www.microsoft.com*.
+1. Select **Register**.
+1. On the **Overview** page, take note of the **Application (client) ID**. It will be used in Step 2 and Step 4.
+
+ :::image type="content" source="media/authenticate-kubernetes-cross-tenant/service-principal-overview.png" alt-text="Service principal application ID":::
+1. In **Certificates & secrets**, under **Client secrets**, select **+ New client secret**.
+1. Enter a **Description** such as *Password* and select **Add**.
+1. In **Client secrets**, take note of the value of the client secret. You use it to update the AKS cluster's service principal in Step 4.
+
+ :::image type="content" source="media/authenticate-kubernetes-cross-tenant/configure-client-secret.png" alt-text="Configure client secret":::
+### Step 2: Provision the service principal in the ACR tenant
+
+1. Open the following link using an admin account in **Tenant B**. Where indicated, insert the **ID of Tenant B** and the **application ID** (client ID) of the multitenant app.
+
+ ```console
+ https://login.microsoftonline.com/<Tenant B ID>/oauth2/authorize?client_id=<Multitenant application ID>&response_type=code&redirect_uri=<redirect url>
+ ```
+1. Select **Consent on behalf of your organization** and then **Accept**.
+
+ :::image type="content" source="media/authenticate-kubernetes-cross-tenant/multitenant-app-consent.png" alt-text="Grant tenant access to application":::
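The consent URL used in step 1 above can also be assembled programmatically. A minimal sketch, where the tenant ID, client ID, and redirect URI are all placeholder values you must substitute:

```python
from urllib.parse import urlencode

# Hypothetical IDs -- substitute the real Tenant B ID and the
# multitenant app's client ID from Step 1.
tenant_b_id = "00000000-0000-0000-0000-000000000000"
client_id = "11111111-1111-1111-1111-111111111111"
redirect_uri = "https://www.microsoft.com"

params = urlencode({
    "client_id": client_id,
    "response_type": "code",
    "redirect_uri": redirect_uri,
})
consent_url = (
    f"https://login.microsoftonline.com/{tenant_b_id}/oauth2/authorize?{params}"
)
print(consent_url)
```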
+
+
+### Step 3: Grant service principal permission to pull from registry
+
+In **Tenant B**, assign the AcrPull role to the service principal, scoped to the target container registry. You can use the [Azure portal](../role-based-access-control/role-assignments-portal.md) or other tools to assign the role. For example steps using the Azure CLI, see [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md#use-an-existing-service-principal).
++
+### Step 4: Update AKS with the Azure AD application secret
+
+Use the multitenant application (client) ID and client secret collected in Step 1 to [update the AKS service principal credential](../aks/update-credentials.md#update-aks-cluster-with-new-service-principal-credentials).
+
+Updating the service principal can take several minutes.
+
+## Next steps
+
+* Learn more about [Azure Container Registry authentication with service principals](container-registry-auth-service-principal.md)
+* Learn more about image pull secrets in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
+- Learn about [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md)
++
container-registry Authenticate Kubernetes Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/authenticate-kubernetes-options.md
+
+ Title: Scenarios to authenticate with Azure Container Registry from Kubernetes
+description: Overview of options and scenarios to authenticate to an Azure container registry from a Kubernetes cluster to pull container images
+++ Last updated : 06/02/2021++
+# Scenarios to authenticate with Azure Container Registry from Kubernetes
++
+You can use an Azure container registry as a source of container images for Kubernetes, including clusters you manage, managed clusters hosted in [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) or other clouds, and "local" Kubernetes configurations such as [minikube](https://minikube.sigs.k8s.io/) and [kind](https://kind.sigs.k8s.io/).
+
+To pull images to your Kubernetes cluster from an Azure container registry, you need to establish an authentication and authorization mechanism. Depending on your cluster environment, choose one of the following methods:
+
+## Scenarios
+
+| Kubernetes cluster |Authentication method | Description | Example |
+||||-|
+| AKS cluster |AKS managed identity | Enable the AKS kubelet [managed identity](../aks/use-managed-identity.md) to pull images from an attached Azure container registry.<br/><br/> Registry can be in the same or a different Azure subscription. | [Authenticate with Azure Container Registry from Azure Kubernetes Service](../aks/cluster-container-registry-integration.md?toc=/azure/container-registry/toc.json&bc=/azure/container-registry/breadcrumb/toc.json)|
+| AKS cluster | AKS service principal | Enable the [AKS service principal](../aks/kubernetes-service-principal.md) with permissions to a target Azure container registry.<br/><br/>Registry can be in the same or a different Azure Active Directory tenant. | [Pull images from an Azure container registry to an AKS cluster in a different AD tenant](authenticate-aks-cross-tenant.md)
+| Kubernetes cluster other than AKS |Pod [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) | Use general Kubernetes mechanism to manage registry credentials for pod deployments.<br/><br/>Configure AD service principal, repository-scoped token, or other [registry credentials](container-registry-authentication.md). | [Pull images from an Azure container registry to a Kubernetes cluster using a pull secret](container-registry-auth-kubernetes.md) |
+++
+## Next steps
+
+* Learn more about how to [authenticate with an Azure container registry](container-registry-authentication.md)
container-registry Container Registry Auth Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-auth-kubernetes.md
Title: Authenticate from Kubernetes cluster
+ Title: Authenticate with an Azure container registry using a Kubernetes pull secret
description: Learn how to provide a Kubernetes cluster with access to images in your Azure container registry by creating a pull secret using a service principal Previously updated : 05/28/2020 Last updated : 06/02/2021
-# Pull images from an Azure container registry to a Kubernetes cluster
+# Pull images from an Azure container registry to a Kubernetes cluster using a pull secret
-You can use an Azure container registry as a source of container images with any Kubernetes cluster, including "local" Kubernetes clusters such as [minikube](https://minikube.sigs.k8s.io/) and [kind](https://kind.sigs.k8s.io/). This article shows how to create a Kubernetes pull secret based on an Azure Active Directory service principal. Then, use the secret to pull images from an Azure container registry in a Kubernetes deployment.
+You can use an Azure container registry as a source of container images with any Kubernetes cluster, including "local" Kubernetes clusters such as [minikube](https://minikube.sigs.k8s.io/) and [kind](https://kind.sigs.k8s.io/). This article shows how to create a Kubernetes pull secret using credentials for an Azure container registry. Then, use the secret to pull images from an Azure container registry in a pod deployment.
-> [!TIP]
-> If you're using the managed [Azure Kubernetes Service](../aks/intro-kubernetes.md), you can also [integrate your cluster](../aks/cluster-container-registry-integration.md?toc=/azure/container-registry/toc.json&bc=/azure/container-registry/breadcrumb/toc.json) with a target Azure container registry for image pulls.
+This example creates a pull secret using Azure Active Directory [service principal credentials](container-registry-auth-service-principal.md). You can also configure a pull secret using other Azure container registry credentials, such as a [repository-scoped access token](container-registry-repository-scoped-permissions.md).
+
+> [!NOTE]
+> While pull secrets are commonly used, they bring additional management overhead. If you're using [Azure Kubernetes Service](../aks/intro-kubernetes.md), we recommend [other options](authenticate-kubernetes-options.md) such as using the cluster's managed identity or service principal to securely pull the image without an additional `imagePullSecrets` setting on each pod.
+
+## Prerequisites
This article assumes you already created a private Azure container registry. You also need to have a Kubernetes cluster running and accessible via the `kubectl` command-line tool.
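For orientation, a `docker-registry` pull secret ultimately stores a base64-encoded `.dockerconfigjson` document. The sketch below builds that payload by hand for a hypothetical registry and service principal (in practice you would let `kubectl create secret docker-registry` generate it); all values are placeholders:

```python
import base64
import json

# Hypothetical values -- replace with your registry login server and
# the service principal's appId/password.
registry = "myregistry.azurecr.io"
username = "<service-principal-app-id>"
password = "<service-principal-password>"

# Docker config auth entry: "username:password" base64-encoded.
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
docker_config = {"auths": {registry: {"username": username,
                                      "password": password,
                                      "auth": auth}}}

# This JSON, base64-encoded, becomes the .dockerconfigjson data of a
# Secret of type kubernetes.io/dockerconfigjson.
secret_data = base64.b64encode(json.dumps(docker_config).encode()).decode()
print(secret_data[:40], "...")
```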
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-auth-service-principal.md
For example, configure your web application to use a service principal that prov
## When to use a service principal
-You should use a service principal to provide registry access in **headless scenarios**. That is, any application, service, or script that must push or pull container images in an automated or otherwise unattended manner. For example: