Updates from: 01/26/2022 02:10:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/force-password-reset.md
Previously updated : 09/16/2021 Last updated : 01/24/2022
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] - ## Overview As an administrator, you can [reset a user's password](manage-users-portal.md#reset-a-users-password) if the user forgets their password, or you can force them to reset it. In this article, you'll learn how to force a password reset in these scenarios.
When an administrator resets a user's password via the Azure portal, the value o
The password reset flow is applicable to local accounts in Azure AD B2C that use an [email address](sign-in-options.md#email-sign-in) or [username](sign-in-options.md#username-sign-in) with a password for sign-in. --
-This feature is currently only available for User Flows. For setup steps, choose **User Flow** above. For custom policies, use the force password reset first logon [GitHub sample](https://github.com/azure-ad-b2c/samples/tree/master/policies/force-password-reset-first-logon) with prerequisites below.
- ## Prerequisites
To enable the **Forced password reset** setting in a sign-up or sign-in user flo
1. Sign in with the user account for which you reset the password.
1. You must now change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
+## Configure your custom policy
+
+Get the force password reset policy example on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/force-password-reset). In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
+
+## Upload and test the policy
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Make sure you're using the directory that contains your Azure AD B2C tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+1. Select **Identity Experience Framework**.
+1. In **Custom Policies**, select **Upload Policy**.
+1. Select the *TrustFrameworkExtensionsCustomForcePasswordReset.xml* file.
+1. Select **Upload**.
+1. Repeat steps 6 through 8 for the relying party file *SignUpOrSigninCustomForcePasswordReset.xml*.
+
+## Run the policy
+
+1. Open the policy that you uploaded, *B2C_1A_TrustFrameworkExtensions_custom_ForcePasswordReset*.
+1. For **Application**, select the application that you registered earlier. To see the token, the **Reply URL** should show `https://jwt.ms`.
+1. Select **Run now**.
+1. Sign in with the user account for which you reset the password.
+1. You must now change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you.
## Force password reset on next login

To force a password reset on next login, update the account's password profile using the MS Graph [Update user](/graph/api/user-update) operation. The following example updates the password profile [forceChangePasswordNextSignIn](user-profile-attributes.md#password-profile-property) attribute to `true`, which forces the user to reset the password on next login.
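A rough sketch of that call over raw HTTP, using `HttpClient` in C# (the helper name, `userObjectId`, and `accessToken` are illustrative placeholders, not values from this article):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ForcePasswordResetSketch
{
    // Illustrative helper: set forceChangePasswordNextSignIn on a user via MS Graph.
    static async Task ForceResetOnNextSignInAsync(
        HttpClient http, string userObjectId, string accessToken)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Patch,
            $"https://graph.microsoft.com/v1.0/users/{userObjectId}");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        request.Content = new StringContent(
            "{\"passwordProfile\":{\"forceChangePasswordNextSignIn\":true}}",
            Encoding.UTF8,
            "application/json");

        HttpResponseMessage response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode(); // Graph returns 204 No Content on success.
    }
}
```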
Once a password expiration policy has been set, you must also configure force pa
The password expiry duration default value is **90** days. The value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure Active Directory Module for Windows PowerShell. This command updates the tenant so that all users' passwords expire after the number of days you configure. - ## Next steps Set up a [self-service password reset](add-password-reset-policy.md).
active-directory-b2c Oauth2 Error Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/oauth2-error-technical-profile.md
Previously updated : 05/26/2021 Last updated : 01/25/2022
https://jwt.ms/#error=access_denied&error_description=AAD_Custom_1234%3a+My+cust
## Protocol
-The **Name** attribute of the **Protocol** element needs to be set to `None`. Set the **OutputTokenFormat** element to `OAuth2Error`.
+The **Name** attribute of the **Protocol** element needs to be set to `OAuth2`. Set the **OutputTokenFormat** element to `OAuth2Error`.
The following example shows a technical profile for `ReturnOAuth2Error`:
<TechnicalProfiles>
  <TechnicalProfile Id="ReturnOAuth2Error">
    <DisplayName>Return OAuth2 error</DisplayName>
- <Protocol Name="None" />
+ <Protocol Name="OAuth2" />
    <OutputTokenFormat>OAuth2Error</OutputTokenFormat>
    <CryptographicKeys>
      <Key Id="issuer_secret" StorageReferenceId="B2C_1A_TokenSigningKeyContainer" />
In the following example:
## Next steps
-Learn about [UserJourneys](userjourneys.md)
+Learn about [UserJourneys](userjourneys.md)
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
In production environments, the app registration redirect URI is ordinarily a pu
## Step 4: Publish your Azure AD B2C app
-Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/develop/v2-howto-app-gallery-listing.md). To add your app to the app gallery, do the following:
+Finally, add the multitenant app to the Azure AD app gallery. Follow the instructions in [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md). To add your app to the app gallery, do the following:
-1. [Create and publish documentation](../active-directory/develop/v2-howto-app-gallery-listing.md#step-5create-and-publish-documentation).
-1. [Submit your app](../active-directory/develop/v2-howto-app-gallery-listing.md#step-6submit-your-app) with the following information:
+1. [Create and publish documentation](../active-directory/manage-apps/v2-howto-app-gallery-listing.md#create-and-publish-documentation).
+1. [Submit your app](../active-directory/manage-apps/v2-howto-app-gallery-listing.md#submit-your-application) with the following information:
|Question |Answer you should provide |
|||
Finally, add the multitenant app to the Azure AD app gallery. Follow the instruc
## Next steps

-- Learn how to [Publish your app to the Azure AD app gallery](../active-directory/develop/v2-howto-app-gallery-listing.md).
+- Learn how to [Publish your app to the Azure AD app gallery](../active-directory/manage-apps/v2-howto-app-gallery-listing.md).
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/how-provisioning-works.md
The **Azure AD Provisioning Service** provisions users to SaaS apps and other sy
The Azure AD provisioning service uses the [SCIM 2.0 protocol](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/bg-p/IdentityStandards) for automatic provisioning. The service connects to the SCIM endpoint for the application, and uses SCIM user object schema and REST APIs to automate the provisioning and de-provisioning of users and groups. A SCIM-based provisioning connector is provided for most applications in the Azure AD gallery. When building apps for Azure AD, developers can use the SCIM 2.0 user management API to build a SCIM endpoint that integrates Azure AD for provisioning. For details, see [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md).
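To illustrate the user object schema involved, here's a minimal sketch of a SCIM 2.0 user resource built in C# (the attribute values are hypothetical; the schema URN and attribute names come from RFC 7643):

```csharp
using System;
using System.Text.Json;

class ScimUserSketch
{
    static void Main()
    {
        // The shape a SCIM-based connector would POST to {scim-endpoint}/Users.
        var scimUser = new
        {
            schemas = new[] { "urn:ietf:params:scim:schemas:core:2.0:User" },
            userName = "alice@contoso.com", // hypothetical value
            active = true,
            name = new { givenName = "Alice", familyName = "Smith" }
        };

        Console.WriteLine(JsonSerializer.Serialize(
            scimUser, new JsonSerializerOptions { WriteIndented = true }));
    }
}
```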
-To request an automatic Azure AD provisioning connector for an app that doesn't currently have one, see [Azure Active Directory Application Request](../develop/v2-howto-app-gallery-listing.md).
+To request an automatic Azure AD provisioning connector for an app that doesn't currently have one, see [Azure Active Directory Application Request](../manage-apps/v2-howto-app-gallery-listing.md).
## Authorization
active-directory Isv Automatic Provisioning Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md
SAML JIT uses the claims information in the SAML token to create and update user
## Next Steps
-* [Enable Single Sign-on for your application](../develop/v2-howto-app-gallery-listing.md)
+* [Enable Single Sign-on for your application](../manage-apps/v2-howto-app-gallery-listing.md)
* [Submit your application listing](https://microsoft.sharepoint.com/teams/apponboarding/Apps/SitePages/Default.aspx) and partner with Microsoft to create documentation on Microsoft's site.
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
The actual steps required to enable and configure automatic provisioning vary de
If not, follow the steps below:
-1. [Create a request](../develop/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team will work with you and the application developer to onboard your application to our platform if it supports SCIM.
+1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team will work with you and the application developer to onboard your application to our platform if it supports SCIM.
1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. This is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector.
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Once the initial cycle has started, you can select **Provisioning logs** in the
## Publish your application to the AAD application gallery
-If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. This will make it easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../develop/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.
+If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. This will make it easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../manage-apps/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.
### Gallery onboarding checklist
Use the checklist to onboard your application quickly and ensure customers have a smooth deployment experience. The information will be gathered from you when you onboard to the gallery.
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
Azure AD features pre-integrated support for many popular SaaS apps and human re
![Image that shows logos for DropBox, Salesforce, and others.](./media/user-provisioning/gallery-app-logos.png)
- If you want to request a new application for provisioning, you can [request that your application be integrated with our app gallery](../develop/v2-howto-app-gallery-listing.md). For a user provisioning request, we require the application to have a SCIM-compliant endpoint. Request that the application vendor follow the SCIM standard so we can onboard the app to our platform quickly.
+ If you want to request a new application for provisioning, you can [request that your application be integrated with our app gallery](../manage-apps/v2-howto-app-gallery-listing.md). For a user provisioning request, we require the application to have a SCIM-compliant endpoint. Request that the application vendor follow the SCIM standard so we can onboard the app to our platform quickly.
* **Applications that support SCIM 2.0**: For information on how to generically connect applications that implement SCIM 2.0-based user management APIs, see [Build a SCIM endpoint and configure user provisioning](use-scim-to-provision-users-and-groups.md).
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-phone-options.md
If you have problems with phone authentication for Azure AD, review the followin
* Have the user change methods or activate SMS on the device.
* Faulty telecom providers, such as no phone input detected, missing DTMF tone issues, blocked caller ID on multiple devices, or blocked SMS across multiple devices.
* Microsoft uses multiple telecom providers to route phone calls and SMS messages for authentication. If you see any of the above issues, have a user attempt to use the method at least five times within 5 minutes and have that user's information available when contacting Microsoft support.
+* Poor signal quality.
+ * Have the user install the Microsoft Authenticator app and attempt to sign in over a wi-fi connection.
+ * Or, use SMS authentication instead of phone (voice) authentication.
## Next steps
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
You must also meet the following system requirements:
- [Windows Server 2016](https://support.microsoft.com/help/4534307/windows-10-update-kb4534307)
- [Windows Server 2019](https://support.microsoft.com/help/4534321/windows-10-update-kb4534321)
+- Have the credentials required to complete the steps in the scenario:
+ - An Active Directory user who is a member of the Domain Admins group for a domain and a member of the Enterprise Admins group for a forest. Referred to as **$domainCred**.
+ - An Azure Active Directory user who is a member of the Global Administrators role. Referred to as **$cloudCred**.
+
### Supported scenarios
The scenario in this article supports SSO in both of the following instances:
Run the following steps in each domain and forest in your organization that cont
$domain = "contoso.corp.com" # Enter an Azure Active Directory global administrator username and password.
- $cloudCred = Get-Credential
+ $cloudCred = Get-Credential -Message 'An Azure Active Directory user who is a member of the Global Administrators role.'
# Enter a domain administrator username and password.
- $domainCred = Get-Credential
+ $domainCred = Get-Credential -Message 'An Active Directory user who is a member of the Domain Admins group.'
# Create the new Azure AD Kerberos Server object in Active Directory
# and then publish it to Azure Active Directory.
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/what-is-cloud-sync.md
Title: 'What is Azure AD Connect cloud sync. | Microsoft Docs'
+ Title: 'What is Azure AD Connect cloud sync? | Microsoft Docs'
description: Describes Azure AD Connect cloud sync.
Previously updated : 10/07/2021 Last updated : 01/25/2022 # What is Azure AD Connect cloud sync?
-Azure AD Connect cloud sync is new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups and contacts to Azure AD. It accomplishes this by using the Azure AD cloud provisioning agent instead of the Azure AD Connect application. However, it can be used alongside Azure AD Connect sync and it provides the following benefits:
+Azure AD Connect cloud sync is a new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. It accomplishes this by using the Azure AD cloud provisioning agent instead of the Azure AD Connect application. However, it can be used alongside Azure AD Connect sync and it provides the following benefits:
- Support for synchronizing to an Azure AD tenant from a multi-forest disconnected Active Directory forest environment: The common scenarios include merger & acquisition (where the acquired company's AD forests are isolated from the parent company's AD forests), and companies that have historically had multiple AD forests.
- Simplified installation with light-weight provisioning agents: The agents act as a bridge from AD to Azure AD, with all the sync configuration managed in the cloud.
- Multiple provisioning agents can be used to simplify high availability deployments, particularly critical for organizations relying upon password hash synchronization from AD to Azure AD.
-- Support for large groups with up to 50K members. It is recommended to use only the OU scoping filter when synchronizing large groups.
+- Support for large groups with up to 50,000 members. It's recommended to use only the OU scoping filter when synchronizing large groups.
![What is Azure AD Connect](media/what-is-cloud-sync/architecture-1.png)
The following table provides a comparison between Azure AD Connect and Azure AD
| Support for password writeback |● |● |
| Support for device writeback|● | |
| Support for group writeback|● | |
+| Support for merging user attributes from multiple domains|● | |
| Azure AD Domain Services support|● | |
| [Exchange hybrid writeback](../hybrid/reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) |● | |
| Unlimited number of objects per AD domain |● | |
| Support for up to 150,000 objects per AD domain |● |● |
| Groups with up to 50,000 members |● |● |
| Large groups with up to 250,000 members |● | |
-| Cross domain references|● | |
+| Cross domain references|● |● |
| On-demand provisioning|● |● |
| Support for US Government|● |● |
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 01/11/2022 Last updated : 01/25/2022
Azure AD Conditional Access supports the following device platforms:
- Android
- iOS
-- Windows Phone
- Windows
- macOS
More information about locations can be found in the article, [What is the locat
## Client apps
-By default, all newly created Conditional Access policies will apply to all client app types even if the client apps condition is not configured.
+By default, all newly created Conditional Access policies will apply to all client app types even if the client apps condition isn't configured.
> [!NOTE] > The behavior of the client apps condition was updated in August 2020. If you have existing Conditional Access policies, they will remain unchanged. However, if you click on an existing policy, the configure toggle has been removed and the client apps the policy applies to are selected.
By default, all newly created Conditional Access policies will apply to all clie
> [!IMPORTANT] > Sign-ins from legacy authentication clients donΓÇÖt support MFA and donΓÇÖt pass device state information to Azure AD, so they will be blocked by Conditional Access grant controls, like requiring MFA or compliant devices. If you have accounts which must use legacy authentication, you must either exclude those accounts from the policy, or configure the policy to only apply to modern authentication clients.
-The **Configure** toggle when set to **Yes** applies to checked items, when set to **No** it applies to all client apps, including modern and legacy authentication clients. This toggle does not appear in policies created before August 2020.
+The **Configure** toggle, when set to **Yes**, applies to checked items; when set to **No**, it applies to all client apps, including modern and legacy authentication clients. This toggle doesn't appear in policies created before August 2020.
- Modern authentication clients
  - Browser
The **Configure** toggle when set to **Yes** applies to checked items, when set
- Legacy authentication clients
  - Exchange ActiveSync clients
    - This selection includes all use of the Exchange ActiveSync (EAS) protocol.
- - When policy blocks the use of Exchange ActiveSync the affected user will receive a single quarantine email. This email with provide information on why they are blocked and include remediation instructions if able.
+ - When policy blocks the use of Exchange ActiveSync, the affected user will receive a single quarantine email. This email will provide information on why they're blocked and include remediation instructions if applicable.
  - Administrators can apply policy only to supported platforms (such as iOS, Android, and Windows) through the Conditional Access Microsoft Graph API (see the sketch after this list).
- Other clients
- - This option includes clients that use basic/legacy authentication protocols that do not support modern authentication.
+ - This option includes clients that use basic/legacy authentication protocols that don't support modern authentication.
  - Authenticated SMTP - Used by POP and IMAP clients to send email messages.
  - Autodiscover - Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.
  - Exchange Online PowerShell - Used to connect to Exchange Online with remote PowerShell. If you block Basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell Module to connect. For instructions, see [Connect to Exchange Online PowerShell using multifactor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).
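As a sketch of what such a Graph API call can look like, the snippet below creates a report-only policy that blocks the legacy client app types (the helper, policy name, and payload values are illustrative; the payload shape follows the `conditionalAccessPolicy` resource):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ClientAppsPolicySketch
{
    // Illustrative helper; the caller's token needs Policy.ReadWrite.ConditionalAccess.
    static async Task CreateBlockLegacyAuthPolicyAsync(HttpClient http, string accessToken)
    {
        const string body = @"{
            ""displayName"": ""Block legacy authentication (sketch)"",
            ""state"": ""enabledForReportingButNotEnforced"",
            ""conditions"": {
                ""users"": { ""includeUsers"": [""All""] },
                ""applications"": { ""includeApplications"": [""All""] },
                ""clientAppTypes"": [""exchangeActiveSync"", ""other""]
            },
            ""grantControls"": { ""operator"": ""OR"", ""builtInControls"": [""block""] }
        }";

        var request = new HttpRequestMessage(
            HttpMethod.Post,
            "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode(); // 201 Created on success.
    }
}
```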
These conditions are commonly used when requiring a managed device, blocking leg
### Supported browsers
-This setting works with all browsers. However, to satisfy a device policy, like a compliant device requirement, the following operating systems and browsers are supported:
+This setting works with all browsers. However, to satisfy a device policy, like a compliant device requirement, the following operating systems and browsers are supported. Operating systems and browsers that have fallen out of mainstream support aren't shown on this list:
-| OS | Browsers |
+| Operating Systems | Browsers |
| :-- | :-- |
-| Windows 10 | Microsoft Edge, Internet Explorer, Chrome, [Firefox 91+](https://support.mozilla.org/kb/windows-sso) |
-| Windows 8 / 8.1 | Internet Explorer, Chrome |
-| Windows 7 | Internet Explorer, Chrome |
-| iOS | Microsoft Edge, Intune Managed Browser, Safari |
-| Android | Microsoft Edge, Intune Managed Browser, Chrome |
-| Windows Phone | Microsoft Edge, Internet Explorer |
-| Windows Server 2019 | Microsoft Edge, Internet Explorer, Chrome |
-| Windows Server 2016 | Internet Explorer |
-| Windows Server 2012 R2 | Internet Explorer |
-| Windows Server 2008 R2 | Internet Explorer |
+| Windows 10 + | Microsoft Edge, [Chrome](#chrome-support), [Firefox 91+](https://support.mozilla.org/kb/windows-sso) |
+| Windows Server 2022 | Microsoft Edge, [Chrome](#chrome-support) |
+| Windows Server 2019 | Microsoft Edge, [Chrome](#chrome-support) |
+| iOS | Microsoft Edge, Safari |
+| Android | Microsoft Edge, Chrome |
| macOS | Microsoft Edge, Chrome, Safari |
-These browsers support device authentication, allowing the device to be identified and validated against a policy. The device check fails if the browser is running in private mode or if cookies are disabled.
+These browsers support device authentication, allowing the device to be identified and validated against a policy. The device check fails if the browser is running in private mode or if cookies are disabled.
> [!NOTE] > Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
This setting has an impact on access attempts made from the following mobile app
| | | |
| Dynamics CRM app | Dynamics CRM | Windows 10, Windows 8.1, iOS, and Android |
| Mail/Calendar/People app, Outlook 2016, Outlook 2013 (with modern authentication)| Exchange Online | Windows 10 |
-| MFA and location policy for apps. Device-based policies are not supported.| Any My Apps app service | Android and iOS |
+| MFA and location policy for apps. Device-based policies aren't supported.| Any My Apps app service | Android and iOS |
| Microsoft Teams Services - this client app controls all services that support Microsoft Teams and all its Client Apps - Windows Desktop, iOS, Android, WP, and web client | Microsoft Teams | Windows 10, Windows 8.1, Windows 7, iOS, Android, and macOS |
| Office 2016 apps, Office 2013 (with modern authentication), [OneDrive sync client](/onedrive/enable-conditional-access) | SharePoint | Windows 8.1, Windows 7 |
| Office 2016 apps, Universal Office apps, Office 2013 (with modern authentication), [OneDrive sync client](/onedrive/enable-conditional-access) | SharePoint Online | Windows 10 |
This setting has an impact on access attempts made from the following mobile app
- When creating a policy assigned to Exchange ActiveSync clients, **Exchange Online** should be the only cloud application assigned to the policy.
- Organizations can narrow the scope of this policy to specific platforms using the **Device platforms** condition.
-If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multi-factor authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication does not support these controls.
+If the access control assigned to the policy uses **Require approved client app**, the user is directed to install and use the Outlook mobile client. In the case that **Multi-factor authentication**, **Terms of use**, or **custom controls** are required, affected users are blocked, because basic authentication doesn't support these controls.
For more information, see the following articles:
By selecting **Other clients**, you can specify a condition that affects apps that use basic authentication with mail protocols like IMAP, MAPI, POP, SMTP, and older Office apps that don't use modern authentication.

## Device state (preview)

> [!CAUTION]
> **This preview feature is being deprecated.** Customers should use the **Filter for devices** condition in Conditional Access to satisfy scenarios previously achieved using the device state (preview) condition.
The above scenario can be configured using *All users* accessing the *Microsoft
## Filter for devices
-There is a new optional condition in Conditional Access called filter for devices. When configuring filter for devices as a condition, organizations can choose to include or exclude devices based on a filter using a rule expression on device properties. The rule expression for filter for devices can be authored using rule builder or rule syntax. This experience is similar to the one used for dynamic membership rules for groups. For more information see the article, [Conditional Access: Filter for devices (preview)](concept-condition-filters-for-devices.md).
+There's a new optional condition in Conditional Access called filter for devices. When configuring filter for devices as a condition, organizations can choose to include or exclude devices based on a filter using a rule expression on device properties. The rule expression for filter for devices can be authored using rule builder or rule syntax. This experience is similar to the one used for dynamic membership rules for groups. For more information, see the article [Conditional Access: Filter for devices (preview)](concept-condition-filters-for-devices.md).
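For example, a rule in that syntax might look like `device.isCompliant -eq True -or device.deviceOwnership -eq "Company"`, which matches only compliant or corporate-owned devices (an illustrative rule, not one from this article).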
## Next steps
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-session.md
Previously updated : 01/10/2022 Last updated : 01/25/2022 -+
Within a Conditional Access policy, an administrator can make use of session con
## Application enforced restrictions
-Organizations can use this control to require Azure AD to pass device information to the selected cloud apps. The device information allows cloud apps to know if a connection is from a compliant or domain-joined device and update the session experience. This control only supports SharePoint Online and Exchange Online as selected cloud apps. When selected, the cloud app uses the device information to provide users with a limited or full experience. Limited when the device isn't managed or compliant and full when the device is managed and compliant.
+Organizations can use this control to require Azure AD to pass device information to the selected cloud apps. The device information allows cloud apps to know if a connection is from a compliant or domain-joined device and update the session experience. This control only supports Office 365, SharePoint Online, and Exchange Online as selected cloud apps. When selected, the cloud app uses the device information to provide users with a limited or full experience. Limited when the device isn't managed or compliant and full when the device is managed and compliant.
For more information on the use and configuration of app-enforced restrictions, see the following articles:
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Previously updated : 01/10/2022 Last updated : 01/25/2022 -+
This process enables the scenario where users lose access to organizational file
> [!NOTE] > Not all client app and resource provider combinations are supported. See table below. The first column of this table refers to web applications launched via web browser (i.e. PowerPoint launched in web browser) while the remaining four columns refer to native applications running on each platform described. Additionally, references to "Office" encompass Word, Excel, and PowerPoint.
+Token lifetimes for Office web apps are reduced to 1 hour when a Conditional Access policy is set.
+
| | Outlook Web | Outlook Win32 | Outlook iOS | Outlook Android | Outlook Mac |
| : | :: | :: | :: | :: | :: |
| **SharePoint Online** | Supported | Supported | Supported | Supported | Supported |
In the following example, a Conditional Access administrator has configured a lo
1. A CAE-capable client presents credentials or a refresh token to Azure AD asking for an access token for some resource.
1. Azure AD evaluates all Conditional Access policies to see whether the user and client meet the conditions.
1. An access token is returned along with other artifacts to the client.
-1. User moves out of an allowed IP range
+1. User moves out of an allowed IP range.
1. The client presents an access token to the resource provider from outside of an allowed IP range.
1. The resource provider evaluates the validity of the token and checks the location policy synced from Azure AD.
1. In this case, the resource provider denies access, and sends a 401+ claim challenge back to the client. The client is challenged because it isn't coming from an allowed IP range.
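A CAE-capable client responds to that challenge by repeating the token request with the returned claims, so Azure AD re-evaluates policy instead of serving a cached token. A minimal MSAL.NET sketch (the helper and its parameters are illustrative; `claimsChallenge` is assumed to be extracted from the 401 response):

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class CaeChallengeSketch
{
    // Illustrative helper: redeem a CAE claims challenge for a fresh token.
    static Task<AuthenticationResult> AcquireWithClaimsAsync(
        IPublicClientApplication app, string[] scopes, string claimsChallenge)
    {
        return app.AcquireTokenInteractive(scopes)
            .WithClaims(claimsChallenge) // forces re-evaluation of Conditional Access
            .ExecuteAsync();
    }
}
```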
The following table describes the migration experience of each customer group ba
| | | |
| New tenants that didn't configure anything in the old experience. | No | Yes | Old CAE setting will be hidden given these customers likely didn't see the experience before general availability. |
| Tenants that explicitly enabled for all users with the old experience. | No | Yes | Old CAE setting will be greyed out. Since these customers explicitly enabled this setting for all users, they don't need to migrate. |
-| Tenants that explicitly enabled some users in their tenants with the old experience.| Yes | No | Old CAE settings will be greyed out. Clicking **Migrate** launches the new conditional access policy wizard, which includes **All users**, while excluding users and groups copied from CAE. It also sets the new **Customize continuous access evaluation** Session control to **Disabled**. |
-| Tenants that explicitly disabled the preview. | Yes | No | Old CAE settings will be greyed out. Clicking **Migrate** launches the new conditional access policy wizard, which includes **All users**, and sets the new **Customize continuous access evaluation** Session control to **Disabled**. |
+| Tenants that explicitly enabled some users in their tenants with the old experience.| Yes | No | Old CAE settings will be greyed out. Clicking **Migrate** launches the new Conditional Access policy wizard, which includes **All users**, while excluding users and groups copied from CAE. It also sets the new **Customize continuous access evaluation** Session control to **Disabled**. |
+| Tenants that explicitly disabled the preview. | Yes | No | Old CAE settings will be greyed out. Clicking **Migrate** launches the new Conditional Access policy wizard, which includes **All users**, and sets the new **Customize continuous access evaluation** Session control to **Disabled**. |
More information about continuous access evaluation as a session control can be found in the section, [Customize continuous access evaluation](concept-conditional-access-session.md#customize-continuous-access-evaluation).
When multiple users are collaborating on a document at the same time, their acce
- Closing the document
- Closing the Office app
-- After a period of 10 hours
+- After 1 hour when a Conditional Access IP policy is set
-To reduce this time a SharePoint Administrator can reduce the maximum lifetime of coauthoring sessions for documents stored in SharePoint Online and OneDrive for Business, by [configuring a network location policy in SharePoint Online](/sharepoint/control-access-based-on-network-location). Once this configuration is changed, the maximum lifetime of coauthoring sessions will be reduced to 15 minutes, and can be adjusted further using the SharePoint Online PowerShell command "[Set-SPOTenant -IPAddressWACTokenLifetime](/powershell/module/sharepoint-online/set-spotenant)".
+To further reduce this time, a SharePoint Administrator can reduce the maximum lifetime of coauthoring sessions for documents stored in SharePoint Online and OneDrive for Business, by [configuring a network location policy in SharePoint Online](/sharepoint/control-access-based-on-network-location). Once this configuration is changed, the maximum lifetime of coauthoring sessions will be reduced to 15 minutes, and can be adjusted further using the SharePoint Online PowerShell command "[Set-SPOTenant -IPAddressWACTokenLifetime](/powershell/module/sharepoint-online/set-spotenant)".
### Enable after a user is disabled
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Previously updated : 09/22/2021 Last updated : 01/25/2022 -+
Administrators will have the opportunity to monitor user sign-ins where CAE is a
1. Browse to **Azure Active Directory** > **Sign-ins**.
1. Apply the **Is CAE Token** filter.
-[ ![Add a filter to the Sitn-ins log to see where CAE is being applied or not](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-sign-ins-log-apply-filter.png#lightbox)
+[ ![Add a filter to the Sign-ins log to see where CAE is being applied or not](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/azure-ad-sign-ins-log-apply-filter.png#lightbox)
From here, admins will be presented with information about their users' sign-in events. Select any sign-in to see details about the session, like which Conditional Access policies were applied and whether CAE is enabled.
The **Continuous access evaluation insights** workbook contains the following ta
![Workbook table 1 showing potential IP address mismatches](./media/howto-continuous-access-evaluation-troubleshoot/continuous-access-evaluation-insights-workbook-table-1.png)
-The potential IP address mismatch between Azure AD & resource provider table allows admins to investigate sessions where the IP address detected by Azure AD doesn't match with the IP address detected by the Resource Provider.
+The potential IP address mismatch between Azure AD & resource provider table allows admins to investigate sessions where the IP address detected by Azure AD doesn't match with the IP address detected by the resource provider.
This workbook table sheds light on these scenarios by displaying the respective IP addresses and whether a CAE token was issued during the session.
active-directory Desktop App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/desktop-app-quickstart.md
zone_pivot_groups: desktop-app-quickstart
::: zone-end ::: zone pivot="devlang-windows-desktop" ::: zone-end ::: zone pivot="devlang-nodejs-electron"
active-directory Msal Net Client Assertions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-client-assertions.md
jti | (a Guid) | The "jti" (JWT ID) claim provides a unique identifier for the J
nbf | 1601519114 | The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing. [RFC 7519, Section 4.1.5](https://tools.ietf.org/html/rfc7519#section-4.1.5). Using the current time is appropriate.
sub | {ClientID} | The "sub" (subject) claim identifies the subject of the JWT, in this case also your application. Use the same value as `iss`.
-Here is an example of how to craft these claims:
+If you use a certificate as a client secret, the certificate must be deployed securely. We recommend that you store the certificate in a secure location supported by the platform, such as in the certificate store on Windows or by using Azure Key Vault.
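For instance, a minimal sketch of loading such a certificate from the Windows certificate store (the thumbprint value is a hypothetical placeholder):

```csharp
using System.Security.Cryptography.X509Certificates;

static class CertificateLoaderSketch
{
    // Illustrative helper: look up a certificate by thumbprint in the user's store.
    static X509Certificate2 LoadCertificate(string thumbprint)
    {
        using var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var matches = store.Certificates.Find(
            X509FindType.FindByThumbprint, thumbprint, validOnly: false);
        return matches.Count > 0 ? matches[0] : null;
    }
}
```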
+
+Here's an example of how to craft these claims:
```csharp
-private static IDictionary<string, string> GetClaims()
+using System.Collections.Generic;
+private static IDictionary<string, object> GetClaims(string tenantId, string clientId)
{
- //aud = https://login.microsoftonline.com/ + Tenant ID + /v2.0
- string aud = $"https://login.microsoftonline.com/{tenantId}/v2.0";
-
- string ConfidentialClientID = "00000000-0000-0000-0000-000000000000" //client id
- const uint JwtToAadLifetimeInSeconds = 60 * 10; // Ten minutes
- DateTime validFrom = DateTime.UtcNow;
- var nbf = ConvertToTimeT(validFrom);
- var exp = ConvertToTimeT(validFrom + TimeSpan.FromSeconds(JwtToAadLifetimeInSeconds));
-
- return new Dictionary<string, string>()
- {
- { "aud", aud },
- { "exp", exp.ToString() },
- { "iss", ConfidentialClientID },
- { "jti", Guid.NewGuid().ToString() },
- { "nbf", nbf.ToString() },
- { "sub", ConfidentialClientID }
- };
+ //aud = https://login.microsoftonline.com/ + Tenant ID + /v2.0
+ string aud = $"https://login.microsoftonline.com/{tenantId}/v2.0";
+
+ string ConfidentialClientID = clientId; //client id 00000000-0000-0000-0000-000000000000
+ const uint JwtToAadLifetimeInSeconds = 60 * 10; // Ten minutes
+ DateTimeOffset validFrom = DateTimeOffset.UtcNow;
+ DateTimeOffset validUntil = validFrom.AddSeconds(JwtToAadLifetimeInSeconds);
+
+ return new Dictionary<string, object>()
+ {
+ { "aud", aud },
+ { "exp", validUntil.ToUnixTimeSeconds() },
+ { "iss", ConfidentialClientID },
+ { "jti", Guid.NewGuid().ToString() },
+ { "nbf", validFrom.ToUnixTimeSeconds() },
+ { "sub", ConfidentialClientID }
+ };
}
```
-Here is how to craft a signed client assertion:
+Here's how to craft a signed client assertion:
```csharp
-string Encode(byte[] arg)
+using System.Collections.Generic;
+using System.Security.Cryptography.X509Certificates;
+using System.Security.Cryptography;
+using System.Text;
+using System.Text.Json;
+...
+static string Base64UrlEncode(byte[] arg)
{
    char Base64PadCharacter = '=';
    char Base64Character62 = '+';
string Encode(byte[] arg)
    return s;
}
-string GetSignedClientAssertion()
+static string GetSignedClientAssertion(X509Certificate2 certificate, string tenantId, string clientId)
{
- //Signing with SHA-256
- string rsaSha256Signature = "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256";
- X509Certificate2 certificate = new X509Certificate2("Certificate.pfx", "Password", X509KeyStorageFlags.EphemeralKeySet);
-
- //Create RSACryptoServiceProvider
- var x509Key = new X509AsymmetricSecurityKey(certificate);
- var privateKeyXmlParams = certificate.PrivateKey.ToXmlString(true);
- var rsa = new RSACryptoServiceProvider();
- rsa.FromXmlString(privateKeyXmlParams);
+ // Get the RSA with the private key, used for signing.
+ var rsa = certificate.GetRSAPrivateKey();
//alg represents the desired signing algorithm, which is SHA-256 in this case
- //kid represents the certificate thumbprint
+ //x5t represents the certificate thumbprint base64 url encoded
var header = new Dictionary<string, string>()
- {
- { "alg", "RS256"},
- { "kid", Encode(certificate.GetCertHash()) }
- };
+ {
+ { "alg", "RS256"},
+ { "typ", "JWT" },
+ { "x5t", Base64UrlEncode(certificate.GetCertHash()) }
+ };
//Please see the previous code snippet on how to craft claims for the GetClaims() method
- string token = Encode(Encoding.UTF8.GetBytes(JObject.FromObject(header).ToString())) + "." + Encode(Encoding.UTF8.GetBytes(JObject.FromObject(GetClaims()).ToString()));
+ var claims = GetClaims(tenantId, clientId);
- string signature = Encode(rsa.SignData(Encoding.UTF8.GetBytes(token), new SHA256Cng()));
+ var headerBytes = JsonSerializer.SerializeToUtf8Bytes(header);
+ var claimsBytes = JsonSerializer.SerializeToUtf8Bytes(claims);
+ string token = Base64UrlEncode(headerBytes) + "." + Base64UrlEncode(claimsBytes);
+
+ string signature = Base64UrlEncode(rsa.SignData(Encoding.UTF8.GetBytes(token), HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1));
    string signedClientAssertion = string.Concat(token, ".", signature);
    return signedClientAssertion;
}
string GetSignedClientAssertion()
You also have the option of using [Microsoft.IdentityModel.JsonWebTokens](https://www.nuget.org/packages/Microsoft.IdentityModel.JsonWebTokens/) to create the assertion for you. The code will be more elegant, as shown in the example below:

```csharp
- string GetSignedClientAssertion()
+ string GetSignedClientAssertionAlt(X509Certificate2 certificate)
{
- var cert = new X509Certificate2("Certificate.pfx", "Password", X509KeyStorageFlags.EphemeralKeySet);
- //aud = https://login.microsoftonline.com/ + Tenant ID + /v2.0 string aud = $"https://login.microsoftonline.com/{tenantID}/v2.0";
You also have the option of using [Microsoft.IdentityModel.JsonWebTokens](https:
var securityTokenDescriptor = new SecurityTokenDescriptor { Claims = claims,
- SigningCredentials = new X509SigningCredentials(cert)
+ SigningCredentials = new X509SigningCredentials(certificate)
}; var handler = new JsonWebTokenHandler();
You also have the option of using [Microsoft.IdentityModel.JsonWebTokens](https:
Once you have your signed client assertion, you can use it with the MSAL APIs as shown below.

```csharp
- string signedClientAssertion = GetSignedClientAssertion();
+ X509Certificate2 certificate = ReadCertificate(config.CertificateName);
+ string signedClientAssertion = GetSignedClientAssertion(certificate, tenantId, ConfidentialClientID);
+ // OR
+ //string signedClientAssertion = GetSignedClientAssertionAlt(certificate);
var confidentialApp = ConfidentialClientApplicationBuilder .Create(ConfidentialClientID)
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/publisher-verification-overview.md
Below are some frequently asked questions regarding the publisher verification p
Developers who are also integrating with Microsoft 365 can receive additional benefits from these programs. For more information, refer to [Microsoft 365 Publisher Attestation](/microsoft-365-app-certification/docs/attestation) and [Microsoft 365 App Certification](/microsoft-365-app-certification/docs/certification). -- **Is this the same thing as the Azure AD Application Gallery?** No- publisher verification is a complementary but separate program to the [Azure Active Directory application gallery](v2-howto-app-gallery-listing.md). Developers who fit the above criteria should complete the publisher verification process independently of participation in that program.
+- **Is this the same thing as the Azure AD Application Gallery?** No. Publisher verification is a complementary but separate program to the [Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). Developers who fit the above criteria should complete the publisher verification process independently of participation in that program.
## Next steps * Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-app-configuration.md
You provide either a `ClientSecret` or a `CertificateName`. These settings are e
Configuration parameters for the [Node.js daemon sample](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console/) are located in an *.env* file:
-```Text
+```JavaScript
# Credentials
TENANT_ID=Enter_the_Tenant_Info_Here
CLIENT_ID=Enter_the_Application_Id_Here
CLIENT_SECRET=Enter_the_Client_Secret_Here

# Endpoints
-AAD_ENDPOINT=Enter_the_Cloud_Instance_Id_Here
-GRAPH_ENDPOINT=Enter_the_Graph_Endpoint_Here
+// the Azure AD endpoint is the authority endpoint for token issuance
+AAD_ENDPOINT=Enter_the_Cloud_Instance_Id_Here // https://login.microsoftonline.com/
+// the graph endpoint is the application ID URI of Microsoft Graph
+GRAPH_ENDPOINT=Enter_the_Graph_Endpoint_Here // https://graph.microsoft.com/
``` # [Python](#tab/python)
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/single-sign-out-saml-protocol.md
# Single Sign-Out SAML Protocol
-Azure Active Directory (Azure AD) supports the SAML 2.0 web browser single sign-out profile. For single sign-out to work correctly, the **LogoutURL** for the application must be explicitly registered with Azure AD during application registration. If the app is [added to the Azure App Gallery](v2-howto-app-gallery-listing.md) then this value can be set by default. Otherwise, the value must be determined and set by the person adding the app to their Azure AD tenant. Azure AD uses the LogoutURL to redirect users after they're signed out.
+Azure Active Directory (Azure AD) supports the SAML 2.0 web browser single sign-out profile. For single sign-out to work correctly, the **LogoutURL** for the application must be explicitly registered with Azure AD during application registration. If the app is [added to the Azure App Gallery](../manage-apps/v2-howto-app-gallery-listing.md) then this value can be set by default. Otherwise, the value must be determined and set by the person adding the app to their Azure AD tenant. Azure AD uses the LogoutURL to redirect users after they're signed out.
Azure AD supports redirect binding (HTTP GET), and not HTTP POST binding.
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-app-gallery-listing.md
- Title: Publish your app to the Azure Active Directory app gallery
-description: Learn how to list an application that supports single sign-on in the Azure Active Directory app gallery. Publishing to the app gallery makes it easier for customers to find and add your app to their tenant.
------- Previously updated : 06/23/2021-----
-# Publish your app to the Azure AD app gallery
-
-You can publish your app in the Azure Active Directory (Azure AD) app gallery. When your app is published, it will show up as an option for customers when they are [adding apps to their tenant](../manage-apps/add-application-portal.md).
-
-The steps to publishing your app in the Azure AD app gallery are:
-1. Prerequisites
-1. Choose the right single sign-on standard for your app.
-1. Implement single sign-on in your app.
-1. Implement SCIM user provisioning in your app (optional)
-1. Create your Azure tenant and test your app.
-1. Create and publish documentation.
-1. Submit your app.
-1. Join the Microsoft partner network.
-
-## What is the Azure AD application gallery?
-
-The [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps?page=1) is a catalog of thousands of apps that make it easy to deploy and configure single sign-on (SSO) and automated user provisioning.
-
-Some of the benefits of adding your app to the Azure AD gallery include:
--- Customers find the best possible single sign-on experience for your app.-- Configuration of the application is simple and minimal.-- A quick search finds your application in the gallery.-- Free, Basic, and Premium Azure AD customers can all use this integration.-- Mutual customers get a step-by-step configuration tutorial.-- Customers who use the System for Cross-domain Identity Management ([SCIM](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/Provisioning-with-SCIM-getting-started/ba-p/880010)) can use provisioning for the same app.-
-In addition, there are many benefits when your customers use Azure AD as an identity provider for your app. Some of these include:
-- Provide single sign-on for your users. With SSO you reduce support costs by making it easier for your customers with single sign-on. If one-click SSO is enabled, your customers' IT Administrators don't have to learn how to configure your application for use in their organization. To learn more about single sign-on, see [What is single sign-on?](../manage-apps/what-is-single-sign-on.md).- Your app can be discoverable in the Microsoft 365 App Gallery, the Microsoft 365 App Launcher, and within Microsoft Search on Office.com. - Integrated app management. To learn more about app management in Azure AD, see [What is application management?](../manage-apps/what-is-application-management.md).- Your app can use the [Graph API](/graph/) to access the data that drives user productivity in the Microsoft ecosystem.- Application-specific documentation co-produced with the Azure AD team for our mutual customers eases adoption.- You provide your customers the ability to completely manage their employee and guest identities' authentication and authorization.- Placing all account management and compliance responsibility with the customer owner of those identities.- Providing ability to enable or disable SSO for specific identity providers, groups, or users to meet their business needs.- You increase your marketability and adoptability. Many large organizations require that (or aspire to) their employees have seamless SSO experiences across all applications. Making SSO easy is important.- You reduce end-user friction, which may increase end-user usage and increase your revenue.- Customers who use the System for Cross-domain Identity Management ([SCIM](https://techcommunity.microsoft.com/t5/Identity-Standards-Blog/Provisioning-with-SCIM-getting-started/ba-p/880010)) can use provisioning for the same app.- Add security and convenience when users sign on to applications by using Azure AD SSO and removing the need for separate credentials.-
-> [!TIP]
-> When you offer your application for use by other companies through a purchase or subscription, you make your application available to customers within their own Azure tenants. This is known as creating a multi-tenant application. For an overview of this concept, see [Tenancy in Azure Active Directory](single-and-multi-tenant-apps.md).
-
-## Prerequisites
-To publish your app in the Azure AD gallery you must first read and agree to specific [terms and conditions](https://azure.microsoft.com/support/legal/active-directory-app-gallery-terms/).
-
-You need a permanent account for testing with at least two users registered.
--- For federated applications (Open ID and SAML/WS-Fed), the application must support the software-as-a-service (SaaS) model for getting listed in the Azure AD app gallery. The enterprise gallery applications must support multiple customer configurations and not any specific customer.-- For Open ID Connect, the application must be multitenanted and the [Azure AD consent framework](../develop/consent-framework.md) must be properly implemented for the application. The user can send the sign-in request to a common endpoint so that any customer can provide consent to the application. You can control user access based on the tenant ID and the user's UPN received in the token.-- For SAML 2.0/WS-Fed, your application must have the capability to do the SAML/WS-Fed SSO integration in SP or IDP mode. Make sure this capability is working correctly before you submit the request.-- For password SSO, make sure that your application supports form authentication so that password vaulting can be done to get single sign-on to work as expected.-- You need a permanent account for testing with at least two users registered.-
-You can get a free test account with all the premium Azure AD features - 90 days free and can get extended as long as you do dev work with it: [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
-
-## Step 1 - Choose the right single sign-on standard for your app
-
-To list an application in the Azure AD app gallery, implement at least one of the supported single sign-on options. To understand the single sign-on options, and how customers will configure them in Azure AD, see [SSO options](../manage-apps/sso-options.md).
-
-The following table compares the main standards: Open Authentication 2.0 (OAuth 2.0) with OpenID Connect (OIDC), Security Assertion Markup Language (SAML), and Web Services Federation (WS-Fed).
-
-| Capability| OAuth / OIDC| SAML / WS-Fed |
-| - |-|-|
-| Web-based single sign-on| √| √ |
-| Web-based single sign-out| √| √ |
-| Mobile-based single sign-on| √| √* |
-| Mobile-based single sign-out| √| √* |
-| Conditional Access policies for mobile applications| √| √* |
-| Seamless MFA experience for mobile applications| √| √* |
-| SCIM Provisioning| √| √ |
-| Access Microsoft Graph| √| X |
-
-*Possible, but Microsoft doesn't provide samples or guidance.
-
-### OAuth 2.0 and OpenID Connect
-OAuth 2.0 is an [industry-standard](https://oauth.net/2/) protocol for authorization. OpenID Connect (OIDC) is an [industry standard](https://openid.net/connect/) identity authentication layer built on top of the OAuth 2.0 protocol.
-
-**Reasons to choose OAuth/OIDC**
-- The authorization inherent in these protocols enables your application to access and integrate with rich user and organizational data through the Microsoft Graph API.- Simplifies your customers' end-user experience when adopting SSO for your application. You can easily define the permission sets necessary, which are then automatically represented to the administrator or end user consenting.- Using these protocols enables your customers to use Conditional Access and Multi-Factor Authentication (MFA) policies to control access to the applications. - Microsoft provides libraries and [code samples across multiple technology platforms](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Samples) to aid your development. -
-**Some things to consider**
-- If you have already implemented SAML-based single sign-on for your application, then you might not want to implement a new standard to get your app in the gallery.
-
-### SAML 2.0 or WS-Fed
-
-SAML is a mature and widely adopted [single sign-on standard](https://www.oasis-open.org/standards#samlv2.0) for web applications. To learn more about how Azure uses SAML, see [How Azure uses the SAML protocol](active-directory-saml-protocol-reference.md).
-
-Web Services Federation (WS-Fed) is an [industry standard](https://docs.oasis-open.org/wsfed/federation/v1.2/ws-federation.html) generally used for web applications that are developed using the .NET platform.
-
-**Reasons to choose SAML**
-- SAML 2.0 is a mature standard, and most technology platforms support open-source libraries for SAML 2.0.
-- You can provide your customers with an administration interface to configure SAML SSO. They can configure SAML SSO for Microsoft Azure AD and any other identity provider that supports SAML.
-
-**Some things to consider**
-- When using the SAML 2.0 or WS-Fed protocols for mobile applications, certain Conditional Access policies, including Multi-Factor Authentication (MFA), will have a degraded experience.
-- If you want to access Microsoft Graph, you will need to implement authorization through OAuth 2.0 to generate the necessary tokens.
-
-### Password-based
-Password-based SSO, also referred to as password vaulting, enables you to manage user access and passwords to web applications that don't support identity federation. It's also useful for scenarios in which several users need to share a single account, such as your organization's social media accounts.
--
-## Step 2 - Implement single sign-on in your app
-Every app in the gallery must implement one of the supported single sign-on options. To learn more about the supported options, see [SSO options](../manage-apps/sso-options.md).
-
-For OAuth and OIDC, see [guidance on authentication patterns](v2-app-types.md) and [Azure Active Directory code samples](sample-v2-code.md).
-
-For SAML and WS-Fed, your application must have the capability to do SSO integration in SP or IDP mode. Make sure this capability is working correctly before you submit the request.
-
-To learn more about authentication, see [What is authentication?](authentication-vs-authorization.md).
-
-> [!IMPORTANT]
-> For federated applications (OpenID and SAML/WS-Fed), the app must support the Software as a Service (SaaS) model. Azure AD gallery applications must support multiple customer configurations and should not be specific to any single customer.
-
-### Implement OAuth 2.0 and OpenID Connect
-
-For OpenID Connect, the application must be multi-tenanted and the [Azure AD consent framework](consent-framework.md) must be properly implemented for the application. The user can send the sign-in request to a common endpoint so that any customer can provide consent to the application. You can control user access based on the tenant ID and the user's UPN received in the token.
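
As an illustration of that access check, here's a minimal sketch of validating the tenant and user after Azure AD has verified the token and handed your app its claims. The claim names `tid` and `preferred_username` come from Microsoft identity platform v2.0 tokens; the allow-list and the domain rule below are hypothetical examples, not part of the gallery requirements.

```python
# Hypothetical allow-list of customer tenant IDs.
ALLOWED_TENANTS = {"11111111-2222-3333-4444-555555555555"}

def is_user_allowed(claims: dict) -> bool:
    """Decide access from the tenant ID and UPN carried in the validated token."""
    tenant_id = claims.get("tid")
    upn = claims.get("preferred_username", "")
    if tenant_id not in ALLOWED_TENANTS:
        return False
    # Example of a per-user rule layered on top of the tenant check.
    return upn.endswith("@contoso.com")
```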
-
-To review specific examples, see the [Microsoft identity platform code samples](sample-v2-code.md).
-
-To review mobile specific examples, see:
-* [Android](quickstart-v2-android.md)
-* [iOS](quickstart-v2-ios.md)
-* [Universal Windows Platform](quickstart-v2-uwp.md)
-
-### Implement SAML 2.0
-
-If your app supports SAML 2.0, you can integrate it directly with an Azure AD tenant. To learn more about SAML configuration with Azure AD, see [Configure SAML-based single sign-on](../manage-apps/configure-saml-single-sign-on.md).
-
-Microsoft doesn't provide or recommend libraries for SAML implementations. There are many open-source libraries available.
-
-### Implement WS-Fed
-To learn more about WS-Fed in ASP.NET Core, see [Authenticate users with WS-Federation in ASP.NET Core](/aspnet/core/security/authentication/ws-federation).
-
-### Implement password vaulting
-
-Create a web application that has an HTML sign-in page. Make sure that your application supports form authentication so that password vaulting can be done to get single sign-on to work as expected.
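
As a rough illustration of the form-based sign-in page that password vaulting expects, here's a minimal sketch assuming Flask; the field names and the credential check are placeholders, not requirements of the gallery process.

```python
# Minimal sketch of a form-authenticated sign-in page, assuming Flask.
from flask import Flask, request

app = Flask(__name__)

LOGIN_FORM = """
<form method="post" action="/login">
  <input type="text" name="username" placeholder="Username">
  <input type="password" name="password" placeholder="Password">
  <button type="submit">Sign in</button>
</form>
"""

@app.get("/login")
def show_login():
    # The HTML form is what the password-vaulting extension fills in.
    return LOGIN_FORM

@app.post("/login")
def do_login():
    # Placeholder check; validate against your own user store in practice.
    if request.form.get("username") and request.form.get("password"):
        return "Signed in", 200
    return "Invalid credentials", 401
```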
-
-## Step 3 - Implement SCIM user provisioning in your app
-Supporting [SCIM](https://aka.ms/scimoverview) provisioning is an optional, but highly recommended, step in building your application. Supporting the SCIM standard is easy to do and allows customers to automatically create and update user accounts in your app, without relying on manual processes such as uploading CSV files. In addition, customers can automate removing users and keeping group memberships in sync, which can't be accomplished with a solution such as SAML JIT.
-
-### Learn about SCIM
-To learn more about the SCIM standards and benefits for your customers, see [provisioning with SCIM - getting started](https://aka.ms/scimoverview).
-
-### Understand the Azure AD SCIM implementation
-To learn more about the Azure AD SCIM implementation, see [build a SCIM endpoint and configure user provisioning with Azure AD](../app-provisioning/use-scim-to-provision-users-and-groups.md).
-
-### Implement SCIM
-Azure AD provides [reference code](https://aka.ms/scimoverview) to help you build a SCIM endpoint. There are also many third-party libraries and references that you can find on GitHub.
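
To make the shape of a SCIM endpoint concrete, here's a minimal sketch of a read-only `/Users` resource, again assuming Flask. A production endpoint also needs authentication, filter support, POST/PATCH/DELETE, and SCIM error responses; the sample user below is hypothetical.

```python
# Minimal sketch of a SCIM 2.0 /Users listing, assuming Flask.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical user record in the SCIM core user schema.
USERS = [{
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "id": "1",
    "userName": "alice@contoso.com",
    "active": True,
}]

@app.get("/scim/v2/Users")
def list_users():
    # Return the SCIM ListResponse envelope around the user records.
    return jsonify({
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(USERS),
        "startIndex": 1,
        "itemsPerPage": len(USERS),
        "Resources": USERS,
    })
```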
-
-## Step 4 - Create your Azure tenant and test your app
-
-You will need an Azure AD tenant in order to test your app. To set up your development environment, see [Quickstart: Set up a tenant](quickstart-create-new-tenant.md).
-
-Alternatively, an Azure AD tenant comes with every Microsoft 365 subscription. To set up a free Microsoft 365 development environment, see [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
-
-Once you have a tenant, test single sign-on and [provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-aad-scim-client).
-
-**For OIDC or OAuth applications**, [register your application](quickstart-register-app.md) as a multi-tenant application. Select the **Accounts in any organizational directory and personal Microsoft accounts** option under **Supported account types**.
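
One way to verify the multi-tenant registration is to sign in through the common endpoint and inspect which tenant the test user came from. The following is a hedged sketch using MSAL for Python; `CLIENT_ID` is a placeholder for your app registration's client ID.

```python
import msal

CLIENT_ID = "<application-client-id>"  # placeholder

app = msal.PublicClientApplication(
    CLIENT_ID,
    # "common" accepts accounts from any organizational directory
    # and personal Microsoft accounts.
    authority="https://login.microsoftonline.com/common",
)
result = app.acquire_token_interactive(scopes=["User.Read"])
claims = result.get("id_token_claims", {})
print(claims.get("tid"), claims.get("preferred_username"))
```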
-
-**For SAML- and WS-Fed-based applications**, you [configure SAML-based single sign-on](../manage-apps/configure-saml-single-sign-on.md) using a generic SAML template in Azure AD.
-
-You can also [convert a single-tenant application to multi-tenant](howto-convert-app-to-be-multi-tenant.md) if necessary.
--
-## Step 5 - Create and publish documentation
-
-### Documentation on your site
-
-Ease of adoption is a significant factor in enterprise software decisions. Clear, easy-to-follow documentation supports your customers in their adoption journey and reduces support costs. From working with thousands of software vendors, Microsoft has seen what works.
-
-We recommend that the documentation on your site include, at a minimum, the following items.
-
-* Introduction to your SSO functionality
- * Protocols supported
- * Version and SKU
- * Supported Identity Providers list with documentation links
-* Licensing information for your application
-* Role-based access control for configuring SSO
-* SSO Configuration Steps
- * UI configuration elements for SAML with expected values from the provider
- * Service provider information to be passed to identity providers
-* If OIDC/OAuth
- * List of permissions required for consent with business justifications
-* Testing steps for pilot users
-* Troubleshooting information, including error codes and messages
-* Support mechanisms for customers
-* Details about your SCIM endpoint, including the resources and attributes supported
-
-### Documentation on the Microsoft Site
-
-When you list your application with the Azure Active Directory Application Gallery, which also publishes your application in the Azure Marketplace, Microsoft generates documentation for our mutual customers that explains the step-by-step process. You can see an example [here](../saas-apps/tutorial-list.md). This documentation is created based on your submission to the gallery, and you can easily update it through your GitHub account if you make changes to your application.
--
-## Step 6 - Submit your app
-
-After you've tested that your application integration works with Azure AD, submit your application request in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps).
-
-The first time you try to sign in to the portal, you will be presented with one of two screens.
-
-If you receive the message "That didn't work", you will need to contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com). Provide the email account that you want to use for submitting the request. A business email address such as `name@yourbusiness.com` is preferred. The Azure AD team will add the account in the Microsoft Application Network portal.
-
-If you see a "Request Access" page, fill in the business justification and select **Request Access**.
-
-After the account is added, you can sign in to the Microsoft Application Network portal and submit the request by selecting the **Submit Request (ISV)** tile on the home page.
-
-![Submit Request (ISV) tile on home page](./media/howto-app-gallery-listing/homepage.png)
-
-### Issues on logging into portal
-
-If you see an error while signing in, here are the details on the issue and how you can fix it.
-
-* If your sign-in was blocked as shown below:
-
- ![Screenshot of a blocked sign-in message](./media/howto-app-gallery-listing/blocked.png)
-
-**What's happening:**
-
-The guest user is federated to a home tenant that is also an Azure AD tenant. The guest user is at High risk. Microsoft doesn't allow High-risk users to access its resources. All High-risk users (employees, guests, or vendors) must remediate or close their risk to access Microsoft resources. For guest users, this user risk comes from the home tenant, and the policy comes from the resource tenant (Microsoft in this case).
-
-**Secure solutions:**
-
-* MFA-registered guest users remediate their own user risk. The guest user does this by performing a secured password change or reset (https://aka.ms/sspr) at their home tenant (this requires MFA and SSPR at the home tenant). The secured password change or reset must be initiated in Azure AD, not on-premises.
-
-* Guest users have their admins remediate their risk. In this case, the admin performs a password reset (temporary password generation). This doesn't require Identity Protection. The guest user's admin can go to https://aka.ms/RiskyUsers and select 'Reset password'.
-
-* Guest users have their admins close or dismiss their risk. Again, this doesn't require Identity Protection. The admin can go to https://aka.ms/RiskyUsers and select 'Dismiss user risk'. However, the admin must do the due diligence to ensure this was a false-positive risk assessment before closing the user risk. Otherwise, they put their own and Microsoft's resources at risk by suppressing a risk assessment without investigation.
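
If you script the remediation instead of using the portal, Microsoft Graph exposes a dismiss action for risky users. The sketch below is illustrative only: `TOKEN` stands in for an access token with the `IdentityRiskyUser.ReadWrite.All` permission, and the user object ID is a placeholder.

```python
import requests

TOKEN = "<access-token>"  # placeholder, not a real token

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"userIds": ["<risky-user-object-id>"]},  # placeholder object ID
    timeout=30,
)
resp.raise_for_status()  # expect 204 No Content on success
```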
-
-> [!NOTE]
-> If you have any issues with access, contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com).
-
-### Implementation-specific options
-If you want to list your application in the gallery by using OpenID Connect, select **OpenID Connect & OAuth 2.0** as shown.
-
-![Listing an OpenID Connect application in the gallery](./media/howto-app-gallery-listing/openid.png)
-
-If you want to list your application in the gallery by using **SAML 2.0** or **WS-Fed**, select **SAML 2.0/WS-Fed** as shown.
-
-![Listing a SAML 2.0 or WS-Fed application in the gallery](./media/howto-app-gallery-listing/saml.png)
-
-If you want to list your application in the gallery by using password SSO, select **Password SSO (UserName & Password)** as shown.
-
-![Listing a password SSO application in the gallery](./media/howto-app-gallery-listing/passwordsso.png)
-
-If you are implementing a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) 2.0 endpoint for user provisioning, select the option as shown. When providing the schema in the onboarding request, follow the directions [here](../app-provisioning/export-import-provisioning-configuration.md) to download your schema. We will use the schema that you configured when testing the non-gallery application to build the gallery application.
-
- ![Request for user provisioning](./media/howto-app-gallery-listing/user-provisioning.png)
-
-### Update or remove an existing listing
-
-You can update or remove an existing gallery app in the [Microsoft Application Network portal](https://microsoft.sharepoint.com/teams/apponboarding/Apps).
-
-![Update or remove an existing listing in the gallery](./media/howto-app-gallery-listing/updateorremove.png)
-
-> [!NOTE]
-> If you have any issues with access, review the previous section on creating your account. If that doesn't work, contact the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com).
-
-### Timelines
-
-The timeline for the process of listing a SAML 2.0 or WS-Fed application in the gallery is 7 to 10 business days.
-
-![Timeline for listing a SAML application in the gallery](./media/howto-app-gallery-listing/timeline.png)
-
-The timeline for the process of listing an OpenID Connect application in the gallery is 2 to 5 business days.
-
-![Timeline for listing an OpenID Connect application in the gallery](./media/howto-app-gallery-listing/timeline2.png)
-
-The timeline for the process of listing a SCIM provisioning application in the gallery is variable and depends on numerous factors.
-
-### Escalations
-
-For any escalations, send email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com), and we'll respond as soon as possible.
--
-## Step 7 - Join the Microsoft partner network
-The Microsoft Partner Network provides instant access to exclusive resources, programs, tools, and connections. To join the network and create your go-to-market plan, see [Reach commercial customers](https://partner.microsoft.com/explore/commercial#gtm).
-
-## Request Apps by sharing ISV App team contact
-
-Customers can request an application by sharing the application and ISV contact information [here](https://microsoft.sharepoint.com/teams/apponboarding/Apps/SitePages/AppRequestsByCustomers.aspx).
-
-![Shows the customer-requested apps tile](./media/howto-app-gallery-listing/customer-submit-request.png)
-
-Here's the flow of customer-requested applications.
-
-![Shows the customer-requested apps flow](./media/howto-app-gallery-listing/customer-request-2.png)
-
-> [!Note]
-> If you have any [issues with access](#issues-on-logging-into-portal), send email to the [Azure AD App Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com).
-
-## Next steps
-
-* [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
Previously updated : 10/14/2021 Last updated : 01/25/2022
From there, you can go to **All devices** to:
- Configure your device identity settings.
- Enable or disable enterprise state roaming.
- Review device-related audit logs.
-- Download devices (preview).
+- Download devices.
[![Screenshot that shows the All devices view in the Azure portal.](./media/device-management-azure-portal/all-devices-azure-portal.png)](./media/device-management-azure-portal/all-devices-azure-portal.png#lightbox)
To enable the preview filtering functionality in the **All devices** view:
You can now add filters to your **All devices** view.
-## Download devices (preview)
+## Download devices
-Cloud Device Administrators, Intune Administrators, and Global Administrators can use the **Download devices (preview)** option to export a CSV file that lists devices. You can apply filters to determine which devices to list. If you don't apply any filters, all devices will be listed. An export task might run for as long as an hour, depending on your selections.
+Global readers, Cloud Device Administrators, Intune Administrators, and Global Administrators can use the **Download devices** option to export a CSV file that lists devices. You can apply filters to determine which devices to list. If you don't apply any filters, all devices will be listed. An export task might run for as long as an hour, depending on your selections. If the export task exceeds 1 hour, it fails, and no file is output.
The exported list includes these device identity attributes:
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Title: Manage external access with Azure Active Directory Conditional Access
-description: How to use Azure Active Directory conditional Access policies to secure external access to resources.
+description: How to use Azure Active Directory Conditional Access policies to secure external access to resources.
Previously updated : 12/18/2020 Last updated : 01/25/2022

# Manage external access with Conditional Access policies

[Conditional Access](../conditional-access/overview.md) is the tool Azure AD uses to bring together signals, enforce policies, and determine whether a user should be allowed access to resources. For detailed information on how to create and use Conditional Access policies, see [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md).

![Diagram of Conditional Access signals and decisions](media/secure-external-access/7-conditional-access-signals.png)
-This article discusses applying Conditional Access policies to external users and assumes you don't have access to [Entitlement Management](../governance/entitlement-management-overview.md) functionality. Conditional Access policies can be and are used alongside Entitlement Management.
+This article discusses applying Conditional Access policies to external users and assumes you don’t have access to [Entitlement Management](../governance/entitlement-management-overview.md) functionality. Conditional Access policies can be and are used alongside Entitlement Management.
Earlier in this document set, you [created a security plan](3-secure-access-plan.md) that outlined:

* Applications and resources that have the same security requirements and can be grouped for access.
* Sign-in requirements for external users.
-You will use that plan to create your Conditional Access policies for external access.
+You’ll use that plan to create your Conditional Access policies for external access.
> [!IMPORTANT]
-> Create a few external user test accounts so that you can test the policies you create before applying them to all external users.
+> Create several internal and external user test accounts so that you can test the policies you create before applying them.
## Conditional Access policies for external access

The following are best practices related to governing external access with Conditional Access policies.
-* If you can't use connected organizations in Entitlement Management, create an Azure AD security group or Microsoft 365 group for each partner organization you work with. Assign all users from that partner to the group. You may then use those groups in Conditional Access policies.
+* If you can’t use connected organizations in Entitlement Management, create an Azure AD security group or Microsoft 365 group for each partner organization you work with. Assign all users from that partner to the group. You may then use those groups in Conditional Access policies.
* Create as few Conditional Access policies as possible. For applications that have the same access needs, add them all to the same policy.
-
+
> [!NOTE]
> Conditional Access policies can apply to a maximum of 250 applications. If more than 250 apps have the same access needs, create duplicate policies. Policy A will apply to apps 1-250, policy B will apply to apps 251-500, and so on.
-* Clearly name policies specific to external access with a naming convention. One naming convention is *ExternalAccess_actiontaken_AppGroup*. For example ExternalAccess_Block_FinanceApps.
+* Clearly name policies specific to external access with a naming convention. One naming convention is *ExternalAccess_actiontaken_AppGroup*. For example, a policy for external access that blocks access to finance apps might be named ExternalAccess_Block_FinanceApps.
## Block all external users from resources
-You can block external users from accessing specific sets of resources with Conditional Access policies. Once you've determined the set of resources to which you want to block access, create a policy.
+You can block external users from accessing specific sets of resources with Conditional Access policies. Once you’ve determined the set of resources to which you want to block access, create a policy.
To create a policy that blocks access for external users to a set of applications:
-1. Access the **Azure portal**, select **Azure Active Directory**, select **Security**, then select **Conditional Access**.
-
-2. Select **New Policy**, and enter a **name**. For example, ExternalAccess_Block_FinanceApps.
-
-3. Select **Users and groups**. On the Include tab, choose **Select users and groups**, then select **All guests and external users**.
-
-4. Select **Exclude** and enter your Administrator group(s) and any emergency access (break-glass) accounts.
-
-5. Select **Cloud apps or actions**, choose **Select Apps**, select all of the apps to which you want to block external access, then choose **Select**.
-
-6. Select **Conditions**, select **Locations**, under Configure select **Yes**, and select **Any location**.
-
-7. Under Access controls, select **Grant**, change the toggle to **Block**, and choose **Select**.
-
-8. Ensure that the Enable policy setting is set to **Report only**, then select **Create**.
-
-## Block external access to all except specific external users
-
-There may be times you want to block external users except a specific group. For example, you may want to block all external users except those working for the finance team from the finance applications. To do this:
-
-1. Create a security group to hold the external users who should access the finance group.
-
-2. Follow steps 1-3 in the preceding procedure for blocking external access to resources.
-
-3. In step 4, add the security group you want to exclude from being blocked from the finance apps.
-
-4. Perform the rest of the steps.
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example, ExternalAccess_Block_FinanceApps.
+1. Under **Assignments**, select **Users and groups**.
+ 1. Under **Include**, select **All guests and external users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
+ 1. Select **Done**.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+ 1. Under **Exclude**, select any applications that shouldn’t be blocked.
+1. Under **Access controls** > **Grant**, select **Block access**, and choose **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
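
If you prefer to script the policy, Conditional Access policies can also be created through Microsoft Graph. The following is a hedged sketch rather than an exact portal equivalent: `TOKEN` is a placeholder for a token with the `Policy.ReadWrite.ConditionalAccess` permission, the excluded ID is your break-glass account's object ID, and the payload mirrors the report-only block-all-guests policy described above.

```python
import requests

TOKEN = "<access-token>"  # placeholder
BREAK_GLASS_ID = "<emergency-access-account-object-id>"  # placeholder

policy = {
    "displayName": "ExternalAccess_Block_AllApps",
    "state": "enabledForReportingButNotEnforced",  # report-only
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {
            "includeUsers": ["GuestsOrExternalUsers"],
            "excludeUsers": [BREAK_GLASS_ID],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy", resp.json()["id"])
```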
+
+### Block external access to all except specific external users
+
+There may be times when you want to block external users except for a specific group. For example, you may want to block all external users except those working for the finance team from the finance applications. To do this, [create a security group](active-directory-groups-create-azure-portal.md) to contain the external users who should access the finance applications, and then create the policy:
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example, ExternalAccess_Block_AllButFinance.
+1. Under **Assignments**, select **Users and groups**.
+ 1. Under **Include**, select **All guests and external users**.
+ 1. Under **Exclude**, select **Users and groups**.
+ 1. Choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
+ 1. Choose the security group of external users you want to exclude from being blocked from specific applications.
+ 1. Select **Done**.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+ 1. Under **Exclude**, select the finance applications that shouldn’t be blocked.
+1. Under **Access controls** > **Grant**, select **Block access**, and choose **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
## Implement Conditional Access
-Many common Conditional Access policies are documented. See the following, which you can adapt for external users.
-
-* [Require Multi-Factor Authentication for all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
-
-* [User risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk-user.md)
-
-* [Require Multi-Factor Authentication for access from untrusted networks](../conditional-access/untrusted-networks.md)
-
-* [Require Terms of Use](../conditional-access/terms-of-use.md)
+Many common Conditional Access policies are documented. See the article [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) for other common policies you may want to adapt for external users.
## Next steps

See the following articles on securing external access to resources. We recommend you take the actions in the listed order.

1. [Determine your desired security posture for external access](1-secure-access-posture.md)
-2. [Discover your current state](2-secure-access-current-state.md)
-
-3. [Create a governance plan](3-secure-access-plan.md)
-
-4. [Use groups for security](4-secure-access-groups.md)
-
-5. [Transition to Azure AD B2B](5-secure-access-b2b.md)
-
-6. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
-
-7. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) (You are here)
-
-8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+1. [Discover your current state](2-secure-access-current-state.md)
+1. [Create a governance plan](3-secure-access-plan.md)
+1. [Use groups for security](4-secure-access-groups.md)
+1. [Transition to Azure AD B2B](5-secure-access-b2b.md)
+1. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
+1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) (You’re here)
+1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
+1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Providing a standardized single sign-on mechanism to the entire enterprise is cr
> [!NOTE] > If you don't have a mechanism to discover unmanaged applications in your organization, we recommend implementing a discovery process using a cloud access security broker solution (CASB) such as [Microsoft Defender for Cloud Apps](https://www.microsoft.com/enterprise-mobility-security/cloud-app-security).
-Finally, if you have an Azure AD app gallery and use applications that support SSO with Azure AD, we recommend [listing the application in the app gallery](../develop/v2-howto-app-gallery-listing.md).
+Finally, if you have an Azure AD app gallery and use applications that support SSO with Azure AD, we recommend [listing the application in the app gallery](../manage-apps/v2-howto-app-gallery-listing.md).
#### Single sign-on recommended reading
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
Azure Active Directory (Azure AD) has a gallery that contains thousands of pre-i
- [AWS](../saas-apps/amazon-web-service-tutorial.md) - [Slack](../saas-apps/slack-tutorial.md)
-In addition you can [integrate applications not in the gallery](../manage-apps/view-applications-portal.md), including any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. You can also [add your app to the gallery](../develop/v2-howto-app-gallery-listing.md) if it is not there.
+In addition you can [integrate applications not in the gallery](../manage-apps/view-applications-portal.md), including any application that already exists in your organization, or any third-party application from a vendor who is not already part of the Azure AD gallery. You can also [add your app to the gallery](../manage-apps/v2-howto-app-gallery-listing.md) if it is not there.
Finally, you can also integrate the apps you develop in-house. This is covered in step five of this guide.
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
You can monitor privileged account changes by using Azure AD Audit logs and Azur
| What to monitor| Risk level| Where| Filter/subfilter| Notes | | - | - | - | - | - |
-| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = ΓÇï<br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failureΓÇï<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
-| Added to eligible privileged role| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = <br>-and-<br>Activity type = Add member to role completed (eligible)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| Any account eligible for a role is now being given privileged access. If the assignment is unexpected or into a role that isn't the responsibility of the account holder, investigate.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/UserAssignedPrivilegedRole.yaml) |
| Roles assigned out of PIM| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role managementΓÇï<br>-and-<br>Activity type = Add member to role (permanent)<br>-and-<br>Status = Success or failure<br>-and-<br>Modified properties = Role.DisplayName| These roles should be closely monitored and alerted. Users shouldn't be assigned roles outside of PIM where possible.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PrivlegedRoleAssignedOutsidePIM.yaml) | | Elevations| Medium| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role management<br>-and-<br>Activity type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success or failure <br>-and-<br>Modified properties = Role.DisplayName| After a privileged account is elevated, it can now make changes that could affect the security of your tenant. All elevations should be logged and, if happening outside of the standard pattern for that user, should be alerted and investigated if not planned. | | Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity type = Request approved or denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations because it could give a clear indication of the timeline for an attack.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/PIMElevationRequestRejected.yaml) |
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
In April 2020, we've added these 31 new apps with Federation support to the app
[SincroPool Apps](https://www.sincropool.com/), [SmartDB](https://hibiki.dreamarts.co.jp/smartdb/trial/), [Float](../saas-apps/float-tutorial.md), [LMS365](https://lms.365.systems/), [IWT Procurement Suite](../saas-apps/iwt-procurement-suite-tutorial.md), [Lunni](https://lunni.fi/), [EasySSO for Jira](../saas-apps/easysso-for-jira-tutorial.md), [Virtual Training Academy](https://vta.c3p.c)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In March 2020, we've added these 51 new apps with Federation support to the app
[Cisco AnyConnect](../saas-apps/cisco-anyconnect.md), [Zoho One China](../saas-apps/zoho-one-china-tutorial.md), [PlusPlus](https://test.plusplus.app/auth/login/azuread-outlook/), [Profit.co SAML App](../saas-apps/profitco-saml-app-tutorial.md), [iPoint Service Provider](../saas-apps/ipoint-service-provider-tutorial.md), [contexxt.ai SPHERE](https://contexxt-sphere.com/login), [Wisdom By Invictus](../saas-apps/wisdom-by-invictus-tutorial.md), [Flare Digital Signage](https://pixelnebula.com/), [Logz.io - Cloud Observability for Engineers](../saas-apps/logzio-cloud-observability-for-engineers-tutorial.md), [SpectrumU](../saas-apps/spectrumu-tutorial.md), [BizzContact](https://www.bizzcontact.app/), [Elqano SSO](../saas-apps/elqano-sso-tutorial.md), [MarketSignShare](http://www.signshare.com/), [CrossKnowledge Learning Suite](../saas-apps/crossknowledge-learning-suite-tutorial.md), [Netvision Compas](../saas-apps/netvision-compas-tutorial.md), [FCM HUB](../saas-apps/fcm-hub-tutorial.md), [RIB )
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In February 2020, we've added these 31 new apps with Federation support to the a
[TeamViewer](../saas-apps/teamviewer-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In January 2020, we've added these 33 new apps with Federation support to the ap
[JOSA](../saas-apps/josa-tutorial.md), [Fastly Edge Cloud](../saas-apps/fastly-edge-cloud-tutorial.md), [Terraform Enterprise](../saas-apps/terraform-enterprise-tutorial.md), [Spintr SSO](../saas-apps/spintr-sso-tutorial.md), [Abibot Netlogistik](https://azuremarketplace.microsoft.com/marketplace/apps/aad.abibotnetlogistik), [SkyKick](https://login.skykick.com/login?state=g6Fo2SBTd3M5Q0xBT0JMd3luS2JUTGlYN3pYTE1remJQZnR1c6N0aWTZIDhCSkwzYVQxX2ZMZjNUaWxNUHhCSXg2OHJzbllTcmYto2NpZNkgM0h6czk3ZlF6aFNJV1VNVWQzMmpHeFFDbDRIMkx5VEc&client=3Hzs97fQzhSIWUMUd32jGxQCl4H2LyTG&protocol=oauth2&audience=https://papi.skykick.com&response_type=code&redirect_uri=https://portal.skykick.com/callback&scope=openid%20profile%20offline_access), [Upshotly](../saas-apps/upshotly-tutorial.md), [LeaveBot](https://appsource.microsoft.com/en-us/product/office/WA200001175), [DataCamp](../saas-apps/datacamp-tutorial.md), [TripActions](../saas-apps/tripactions-tutorial.md), [SmartWork](https://www.intumit.com/teams-smartwork/), [Dotcom-Monitor](../saas-apps/dotcom-monitor-tutorial.md), [SSOGEN - Azure AD SSO Gateway for Oracle E-Business Suite - EBS, PeopleSoft, and JDE](../saas-apps/ssogen-tutorial.md), [Hosted MyCirqa SSO](../saas-apps/hosted-mycirqa-sso-tutorial.md), [Yuhu Property Management Platform](../saas-apps/yuhu-property-management-platform-tutorial.md), [LumApps](https://sites.lumapps.com/login), [Upwork Enterprise](../saas-apps/upwork-enterprise-tutorial.md), [Talentsoft](../saas-apps/talentsoft-tutorial.md), [SmartDB for Microsoft Teams](http://teams.smartdb.jp/login/), [PressPage](../saas-apps/presspage-tutorial.md), [ContractSafe Saml2 SSO](../saas-apps/contractsafe-saml2-sso-tutorial.md), [Maxient Conduct Manager Software](../saas-apps/maxient-conduct-manager-software-tutorial.md), [Helpshift](../saas-apps/helpshift-tutorial.md), [PortalTalk 365](https://www.portaltalk.com/), [CoreView](https://portal.coreview.com/), Squelch Cloud Office365 Connector, [PingFlow Authentication](https://app-staging.pingview.io/), [ PrinterLogic SaaS](../saas-apps/printerlogic-saas-tutorial.md), [Taskize Connect](../saas-apps/taskize-connect-tutorial.md), [Sandwai](https://app.sandwai.com/), [EZRentOut](../saas-apps/ezrentout-tutorial.md), [AssetSonar](../saas-apps/assetsonar-tutorial.md), [Akari Virtual Assistant](https://akari.io/akari-virtual-assistant/)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In November 2019, we've added these 21 new apps with Federation support to the a
[Airtable](../saas-apps/airtable-tutorial.md), [Hootsuite](../saas-apps/hootsuite-tutorial.md), [Blue Access for Members (BAM)](../saas-apps/blue-access-for-members-tutorial.md), [Bitly](../saas-apps/bitly-tutorial.md), [Riva](../saas-apps/riva-tutorial.md), [ResLife Portal](https://app.reslifecloud.com/hub5_signin/microsoft_azuread/?g=44BBB1F90915236A97502FF4BE2952CB&c=5&uid=0&ht=2&ref=), [NegometrixPortal Single Sign On (SSO)](../saas-apps/negometrixportal-tutorial.md), [TeamsChamp](https://login.microsoftonline.com/551f45da-b68e-4498-a7f5-a6e1efaeb41c/adminconsent?client_id=ca9bbfa4-1316-4c0f-a9ee-1248ac27f8ab&redirect_uri=https://admin.teamschamp.com/api/adminconsent&state=6883c143-cb59-42ee-a53a-bdb5faabf279), [Motus](../saas-apps/motus-tutorial.md), [MyAryaka](../saas-apps/myaryaka-tutorial.md), [BlueMail](https://loginself1.bluemail.me/), [Beedle](https://teams-web.beedle.co/#/), [Visma](../saas-apps/visma-tutorial.md), [OneDesk](../saas-apps/onedesk-tutorial.md), [Foko Retail](../saas-apps/foko-retail-tutorial.md), [Qmarkets Idea & Innovation Management](../saas-apps/qmarkets-idea-innovation-management-tutorial.md), [Netskope User Authentication](../saas-apps/netskope-user-authentication-tutorial.md), [uniFLOW Online](../saas-apps/uniflow-online-tutorial.md), [Claromentis](../saas-apps/claromentis-tutorial.md), [Jisc Student Voter Registration](../saas-apps/jisc-student-voter-registration-tutorial.md), [e4enable](https://portal.e4enable.com/)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In October 2019, we've added these 35 new apps with Federation support to the ap
[In Case of Crisis ΓÇô Mobile](../saas-apps/in-case-of-crisis-mobile-tutorial.md), [Juno Journey](../saas-apps/juno-journey-tutorial.md), [ExponentHR](../saas-apps/exponenthr-tutorial.md), [Tact](https://www.tact.ai/products/tact-assistant), [OpusCapita Cash Management](https://appsource.microsoft.com/product/web-apps/opuscapitagroupoy-1036255.opuscapita-cm), [Salestim](https://www.salestim.com/), [Learnster](../saas-apps/learnster-tutorial.md), [Dynatrace](../saas-apps/dynatrace-tutorial.md), [HunchBuzz](https://login.hunchbuzz.com/integrations/azure/process), [Freshworks](../saas-apps/freshworks-tutorial.md), [eCornell](../saas-apps/ecornell-tutorial.md), [ShipHazmat](../saas-apps/shiphazmat-tutorial.md), [Netskope Cloud Security](../saas-apps/netskope-cloud-security-tutorial.md), [Contentful](../saas-apps/contentful-tutorial.md), [Bindtuning](https://bindtuning.com/login), [HireVue Coordinate ΓÇô Europe](https://www.hirevue.com/), [HireVue Coordinate - USOnly](https://www.hirevue.com/), [HireVue Coordinate - US](https://www.hirevue.com/), [WittyParrot Knowledge Box](https://wittyapi.wittyparrot.com/wittyparrot/api/provision/trail/signup), [Cloudmore](../saas-apps/cloudmore-tutorial.md), [Visit.org](../saas-apps/visitorg-tutorial.md), [Cambium Xirrus EasyPass Portal](https://login.xirrus.com/azure-signup), [Paylocity](../saas-apps/paylocity-tutorial.md), [Mail Luck!](../saas-apps/mail-luck-tutorial.md), [Teamie](https://theteamie.com/), [Velocity for Teams](https://velocity.peakup.org/teams/login), [SIGNL4](https://account.signl4.com/manage), [EAB Navigate IMPL](../saas-apps/eab-navigate-impl-tutorial.md), [ScreenMeet](https://console.screenmeet.com/), [Omega Point](https://pi.ompnt.com/), [Speaking Email for Intune (iPhone)](https://speaking.email/FAQ/98/email-access-via-microsoft-intune), [Speaking Email for Office 365 Direct (iPhone/Android)](https://speaking.email/FAQ/126/email-access-via-microsoft-office-365-direct), [ExactCare SSO](../saas-apps/exactcare-sso-tutorial.md), [iHealthHome Care Navigation System](https://ihealthnav.com/account/signin), [Qubie](https://www.qubie.app/)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In September 2019, we've added these 29 new apps with Federation support to the
[ScheduleLook](https://schedulelook.bbsonlineservices.net/), [MS Azure SSO Access for Ethidex Compliance Office&trade; - Single sign-on](../saas-apps/ms-azure-sso-access-for-ethidex-compliance-office-tutorial.md), [iServer Portal](../saas-apps/iserver-portal-tutorial.md), [SKYSITE](../saas-apps/skysite-tutorial.md), [Concur Travel and Expense](../saas-apps/concur-travel-and-expense-tutorial.md), [WorkBoard](../saas-apps/workboard-tutorial.md), `https://apps.yeeflow.com/`, [ARC Facilities](../saas-apps/arc-facilities-tutorial.md), [Luware Stratus Team](https://stratus.emea.luware.cloud/login), [Wide Ideas](https://wideideas.online/wideideas/), [Prisma Cloud](../saas-apps/prisma-cloud-tutorial.md), [JDLT Client Hub](https://clients.jdlt.co.uk/login), [RENRAKU](../saas-apps/renraku-tutorial.md), [SealPath Secure Browser](https://protection.sealpath.com/SealPathInterceptorWopiSaas/Open/InstallSealPathEditorOneDrive), [Prisma Cloud](../saas-apps/prisma-cloud-tutorial.md), `https://app.penneo.com/`, `https://app.testhtm.com/settings/email-integration`, [Cintoo Cloud](https://aec.cintoo.com/login), [Whitesource](../saas-apps/whitesource-tutorial.md), [Hosted Heritage Online SSO](../saas-apps/hosted-heritage-online-sso-tutorial.md), [IDC](../saas-apps/idc-tutorial.md), [CakeHR](../saas-apps/cakehr-tutorial.md), [BIS](../saas-apps/bis-tutorial.md), [Coo Kai Team Build](https://ms-contacts.coo-kai.jp/), [Sonarqube](../saas-apps/sonarqube-tutorial.md), [Adobe Identity Management](../saas-apps/tutorial-list.md), [Discovery Benefits SSO](../saas-apps/discovery-benefits-sso-tutorial.md), [Amelio](https://app.amelio.co/), `https://itask.yipinapp.com/`
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In August 2019, we've added these 26 new apps with Federation support to the app
[Civic Platform](../saas-apps/civic-platform-tutorial.md), [Amazon Business](../saas-apps/amazon-business-tutorial.md), [ProNovos Ops Manager](../saas-apps/pronovos-ops-manager-tutorial.md), [Cognidox](../saas-apps/cognidox-tutorial.md), [Viareport's Inativ Portal (Europe)](../saas-apps/viareports-inativ-portal-europe-tutorial.md), [Azure Databricks](https://azure.microsoft.com/services/databricks), [Robin](../saas-apps/robin-tutorial.md), [Academy Attendance](../saas-apps/academy-attendance-tutorial.md), [Priority Matrix](https://sync.appfluence.com/pmwebng/), [Cousto MySpace](https://cousto.platformers.be/account/login), [Uploadcare](https://uploadcare.com/accounts/signup/), [Carbonite Endpoint Backup](../saas-apps/carbonite-endpoint-backup-tutorial.md), [CPQSync by Cincom](../saas-apps/cpqsync-by-cincom-tutorial.md), [Chargebee](../saas-apps/chargebee-tutorial.md), [deliver.media&trade; Portal](https://portal.deliver.media), [Frontline Education](../saas-apps/frontline-education-tutorial.md), [F5](https://www.f5.com/products/security/access-policy-manager), [stashcat AD connect](https://www.stashcat.com), [Blink](../saas-apps/blink-tutorial.md), [Vocoli](../saas-apps/vocoli-tutorial.md), [ProNovos Analytics](../saas-apps/pronovos-analytics-tutorial.md), [Sigstr](../saas-apps/sigstr-tutorial.md), [Darwinbox](../saas-apps/darwinbox-tutorial.md), [Watch by Colors](../saas-apps/watch-by-colors-tutorial.md), [Harness](../saas-apps/harness-tutorial.md), [EAB Navigate Strategic Care](../saas-apps/eab-navigate-strategic-care-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In July 2019, we've added these 18 new apps with Federation support to the app g
[Ungerboeck Software](../saas-apps/ungerboeck-software-tutorial.md), [Bright Pattern Omnichannel Contact Center](../saas-apps/bright-pattern-omnichannel-contact-center-tutorial.md), [Clever Nelly](../saas-apps/clever-nelly-tutorial.md), [AcquireIO](../saas-apps/acquireio-tutorial.md), [Looop](https://www.looop.co/schedule-a-demo/), [productboard](../saas-apps/productboard-tutorial.md), [MS Azure SSO Access for Ethidex Compliance Office&trade;](../saas-apps/ms-azure-sso-access-for-ethidex-compliance-office-tutorial.md), [Hype](../saas-apps/hype-tutorial.md), [Abstract](../saas-apps/abstract-tutorial.md), [Ascentis](../saas-apps/ascentis-tutorial.md), [Flipsnack](https://www.flipsnack.com/accounts/sign-in-sso.html), [Wandera](../saas-apps/wandera-tutorial.md), [TwineSocial](https://twinesocial.com/), [Kallidus](../saas-apps/kallidus-tutorial.md), [HyperAnna](../saas-apps/hyperanna-tutorial.md), [PharmID WasteWitness](https://pharmid.com/), [i2B Connect](https://www.i2b-online.com/sign-up-to-use-i2b-connect-here-sso-access/), [JFrog Artifactory](../saas-apps/jfrog-artifactory-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In June 2019, we've added these 22 new apps with Federation support to the app g
[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Perceptyx](https://apexdata.azurewebsites.net/docs.microsoft.com/azure/active-directory/saas-apps/perceptyx-tutorial), [Proptimise OS](https://proptimise.co.uk/), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In May 2019, we've added these 21 new apps with Federation support to the app ga
[Freedcamp](../saas-apps/freedcamp-tutorial.md), [Real Links](../saas-apps/real-links-tutorial.md), [Kianda](https://app.kianda.com/sso/OpenID/AzureAD/), [Simple Sign](../saas-apps/simple-sign-tutorial.md), [Braze](../saas-apps/braze-tutorial.md), [Displayr](../saas-apps/displayr-tutorial.md), [Templafy](../saas-apps/templafy-tutorial.md), [Marketo Sales Engage](https://toutapp.com/login), [ACLP](../saas-apps/aclp-tutorial.md), [OutSystems](../saas-apps/outsystems-tutorial.md), [Meta4 Global HR](../saas-apps/meta4-global-hr-tutorial.md), [Quantum Workplace](../saas-apps/quantum-workplace-tutorial.md), [Cobalt](../saas-apps/cobalt-tutorial.md), [webMethods API Cloud](../saas-apps/webmethods-integration-cloud-tutorial.md), [RedFlag](https://pocketstop.com/redflag/), [Whatfix](../saas-apps/whatfix-tutorial.md), [Control](../saas-apps/control-tutorial.md), [JOBHUB](../saas-apps/jobhub-tutorial.md), [NEOGOV](../saas-apps/neogov-tutorial.md), [Foodee](../saas-apps/foodee-tutorial.md), [MyVR](../saas-apps/myvr-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In April 2019, we've added these 21 new apps with Federation support to the app
[SAP Fiori](../saas-apps/sap-fiori-tutorial.md), [HRworks Single Sign-On](../saas-apps/hrworks-single-sign-on-tutorial.md), [Percolate](../saas-apps/percolate-tutorial.md), [MobiControl](../saas-apps/mobicontrol-tutorial.md), [Citrix NetScaler](../saas-apps/citrix-netscaler-tutorial.md), [Shibumi](../saas-apps/shibumi-tutorial.md), [Benchling](../saas-apps/benchling-tutorial.md), [MileIQ](https://mileiq.onelink.me/991934284/7e980085), [PageDNA](../saas-apps/pagedna-tutorial.md), [EduBrite LMS](../saas-apps/edubrite-lms-tutorial.md), [RStudio Connect](../saas-apps/rstudio-connect-tutorial.md), [AMMS](../saas-apps/amms-tutorial.md), [Mitel Connect](../saas-apps/mitel-connect-tutorial.md), [Alibaba Cloud (Role-based SSO)](../saas-apps/alibaba-cloud-service-role-based-sso-tutorial.md), [Certent Equity Management](../saas-apps/certent-equity-management-tutorial.md), [Sectigo Certificate Manager](../saas-apps/sectigo-certificate-manager-tutorial.md), [GreenOrbit](../saas-apps/greenorbit-tutorial.md), [Workgrid](../saas-apps/workgrid-tutorial.md), [monday.com](../saas-apps/mondaycom-tutorial.md), [SurveyMonkey Enterprise](../saas-apps/surveymonkey-enterprise-tutorial.md), [Indiggo](https://indiggolead.com/)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In March 2019, we've added these 14 new apps with Federation support to the app gallery:
[ISEC7 Mobile Exchange Delegate](https://www.isec7.com/english/), [MediusFlow](https://office365.cloudapp.mediusflow.com/), [ePlatform](../saas-apps/eplatform-tutorial.md), [Fulcrum](../saas-apps/fulcrum-tutorial.md), [ExcelityGlobal](../saas-apps/excelityglobal-tutorial.md), [Explanation-Based Auditing System](../saas-apps/explanation-based-auditing-system-tutorial.md), [Lean](../saas-apps/lean-tutorial.md), [Powerschool Performance Matters](../saas-apps/powerschool-performance-matters-tutorial.md), [Cinode](https://cinode.com/), [Iris Intranet](../saas-apps/iris-intranet-tutorial.md), [Empactis](../saas-apps/empactis-tutorial.md), [SmartDraw](../saas-apps/smartdraw-tutorial.md), [Confirmit Horizons](../saas-apps/confirmit-horizons-tutorial.md), [TAS](../saas-apps/tas-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In February 2019, we've added these 27 new apps with Federation support to the app gallery:
[Euromonitor Passport](../saas-apps/euromonitor-passport-tutorial.md), [MindTickle](../saas-apps/mindtickle-tutorial.md), [FAT FINGER](https://seeforgetest-exxon.azurewebsites.net/Account/create?Length=7), [AirStack](../saas-apps/airstack-tutorial.md), [Oracle Fusion ERP](../saas-apps/oracle-fusion-erp-tutorial.md), [IDrive](../saas-apps/idrive-tutorial.md), [Skyward Qmlativ](../saas-apps/skyward-qmlativ-tutorial.md), [Brightidea](../saas-apps/brightidea-tutorial.md), [AlertOps](../saas-apps/alertops-tutorial.md), [Soloinsight-CloudGate SSO](../saas-apps/soloinsight-cloudgate-sso-tutorial.md), Permission Click, [Brandfolder](../saas-apps/brandfolder-tutorial.md), [StoregateSmartFile](../saas-apps/smartfile-tutorial.md), [Pexip](../saas-apps/pexip-tutorial.md), [Stormboard](../saas-apps/stormboard-tutorial.md), [Seismic](../saas-apps/seismic-tutorial.md), [Share A Dream](https://www.shareadream.org/), [Bugsnag](../saas-apps/bugsnag-tutorial.md), [webMethods Integration Cloud](../saas-apps/webmethods-integration-cloud-tutorial.md), [Knowledge Anywhere LMS](../saas-apps/knowledge-anywhere-lms-tutorial.md), [OU Campus](../saas-apps/ou-campus-tutorial.md), [Periscope Data](../saas-apps/periscope-data-tutorial.md), [Netop Portal](../saas-apps/netop-portal-tutorial.md), [smartvid.io](../saas-apps/smartvid.io-tutorial.md), [PureCloud by Genesys](../saas-apps/purecloud-by-genesys-tutorial.md), [ClickUp Productivity Platform](../saas-apps/clickup-productivity-platform-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In January 2019, we've added these 35 new apps with Federation support to the app gallery:
[Firstbird](../saas-apps/firstbird-tutorial.md), [Folloze](../saas-apps/folloze-tutorial.md), [Talent Palette](../saas-apps/talent-palette-tutorial.md), [Infor CloudSuite](../saas-apps/infor-cloud-suite-tutorial.md), [Cisco Umbrella](../saas-apps/cisco-umbrella-tutorial.md), [Zscaler Internet Access Administrator](../saas-apps/zscaler-internet-access-administrator-tutorial.md), [Expiration Reminder](../saas-apps/expiration-reminder-tutorial.md), [InstaVR Viewer](../saas-apps/instavr-viewer-tutorial.md), [CorpTax](../saas-apps/corptax-tutorial.md), [Verb](https://app.verb.net/login), [OpenLattice](https://help.openlattice.com/), [TheOrgWiki](https://www.theorgwiki.com/signup), [Pavaso Digital Close](../saas-apps/pavaso-digital-close-tutorial.md), [GoodPractice Toolkit](../saas-apps/goodpractice-toolkit-tutorial.md), [Cloud Service PICCO](../saas-apps/cloud-service-picco-tutorial.md), [AuditBoard](../saas-apps/auditboard-tutorial.md), [iProva](../saas-apps/iprova-tutorial.md), [Workable](../saas-apps/workable-tutorial.md), [CallPlease](https://webapp.callplease.com/create-account/create-account.html), [GTNexus SSO System](../saas-apps/gtnexus-sso-module-tutorial.md), [CBRE ServiceInsight](../saas-apps/cbre-serviceinsight-tutorial.md), [Deskradar](../saas-apps/deskradar-tutorial.md), [Coralogixv](../saas-apps/coralogix-tutorial.md), [Signagelive](../saas-apps/signagelive-tutorial.md), [ARES for Enterprise](../saas-apps/ares-for-enterprise-tutorial.md), [K2 for Office 365](https://www.k2.com/O365), [Xledger](https://www.xledger.net/), [iDiD Manager](../saas-apps/idid-manager-tutorial.md), [HighGear](../saas-apps/highgear-tutorial.md), [Visitly](../saas-apps/visitly-tutorial.md), [Korn Ferry ALP](../saas-apps/korn-ferry-alp-tutorial.md), [Acadia](../saas-apps/acadia-tutorial.md), [Adoddle cSaas Platform](../saas-apps/adoddle-csaas-platform-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In November 2018, we've added these 26 new apps with Federation support to the app gallery:
[CoreStack](https://cloud.corestack.io/site/login), [HubSpot](../saas-apps/hubspot-tutorial.md), [GetThere](../saas-apps/getthere-tutorial.md), [Gra-Pe](../saas-apps/grape-tutorial.md), [eHour](https://getehour.com/try-now), [Consent2Go](../saas-apps/consent2go-tutorial.md), [Appinux](../saas-apps/appinux-tutorial.md), [DriveDollar](https://azuremarketplace.microsoft.com/marketplace/apps/savitas.drivedollar-azuread?tab=Overview), [Useall](../saas-apps/useall-tutorial.md), [Infinite Campus](../saas-apps/infinitecampus-tutorial.md), [Alaya](https://alayagood.com), [HeyBuddy](../saas-apps/heybuddy-tutorial.md), [Wrike SAML](../saas-apps/wrike-tutorial.md), [Drift](../saas-apps/drift-tutorial.md), [Zenegy for Business Central 365](https://accounting.zenegy.com/), [Everbridge Member Portal](../saas-apps/everbridge-tutorial.md), [IDEO](https://profile.ideo.com/users/sign_up), [Ivanti Service Manager (ISM)](../saas-apps/ivanti-service-manager-tutorial.md), [Peakon](../saas-apps/peakon-tutorial.md), [Allbound SSO](../saas-apps/allbound-sso-tutorial.md), [Plex Apps - Classic Test](https://test.plexonline.com/signon), [Plex Apps – Classic](https://www.plexonline.com/signon), [Plex Apps - UX Test](https://test.cloud.plex.com/sso), [Plex Apps – UX](https://cloud.plex.com/sso), [Plex Apps – IAM](https://accounts.plex.com/), [CRAFTS - Childcare Records, Attendance, & Financial Tracking System](https://getcrafts.ca/craftsregistration)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In October 2018, we've added these 14 new apps with Federation support to the app gallery:
[My Award Points](../saas-apps/myawardpoints-tutorial.md), [Vibe HCM](../saas-apps/vibehcm-tutorial.md), ambyint, [MyWorkDrive](../saas-apps/myworkdrive-tutorial.md), [BorrowBox](../saas-apps/borrowbox-tutorial.md), Dialpad, [ON24 Virtual Environment](../saas-apps/on24-tutorial.md), [RingCentral](../saas-apps/ringcentral-tutorial.md), [Zscaler Three](../saas-apps/zscaler-three-tutorial.md), [Phraseanet](../saas-apps/phraseanet-tutorial.md), [Appraisd](../saas-apps/appraisd-tutorial.md), [Workspot Control](../saas-apps/workspotcontrol-tutorial.md), [Shuccho Navi](../saas-apps/shucchonavi-tutorial.md), [Glassfrog](../saas-apps/glassfrog-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In September 2018, we've added these 16 new apps with Federation support to the app gallery:
[Uberflip](../saas-apps/uberflip-tutorial.md), [Comeet Recruiting Software](../saas-apps/comeetrecruitingsoftware-tutorial.md), [Workteam](../saas-apps/workteam-tutorial.md), [ArcGIS Enterprise](../saas-apps/arcgisenterprise-tutorial.md), [Nuclino](../saas-apps/nuclino-tutorial.md), [JDA Cloud](../saas-apps/jdacloud-tutorial.md), [Snowflake](../saas-apps/snowflake-tutorial.md), NavigoCloud, [Figma](../saas-apps/figma-tutorial.md), join.me, [ZephyrSSO](../saas-apps/zephyrsso-tutorial.md), [Silverback](../saas-apps/silverback-tutorial.md), Riverbed Xirrus EasyPass, [Rackspace SSO](../saas-apps/rackspacesso-tutorial.md), Enlyft SSO for Azure, SurveyMonkey, [Convene](../saas-apps/convene-tutorial.md), [dmarcian](../saas-apps/dmarcian-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In August 2018, we've added these 16 new apps with Federation support to the app gallery:
[Hornbill](../saas-apps/hornbill-tutorial.md), [Bridgeline Unbound](../saas-apps/bridgelineunbound-tutorial.md), [Sauce Labs - Mobile and Web Testing](../saas-apps/saucelabs-mobileandwebtesting-tutorial.md), [Meta Networks Connector](../saas-apps/metanetworksconnector-tutorial.md), [Way We Do](../saas-apps/waywedo-tutorial.md), [Spotinst](../saas-apps/spotinst-tutorial.md), [ProMaster (by Inlogik)](../saas-apps/promaster-tutorial.md), SchoolBooking, [4me](../saas-apps/4me-tutorial.md), [Dossier](../saas-apps/dossier-tutorial.md), [N2F - Expense reports](../saas-apps/n2f-expensereports-tutorial.md), [Comm100 Live Chat](../saas-apps/comm100livechat-tutorial.md), [SafeConnect](../saas-apps/safeconnect-tutorial.md), [ZenQMS](../saas-apps/zenqms-tutorial.md), [eLuminate](../saas-apps/eluminate-tutorial.md), [Dovetale](../saas-apps/dovetale-tutorial.md).
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In July 2018, we've added these 16 new apps with Federation support to the app gallery:
[Innovation Hub](../saas-apps/innovationhub-tutorial.md), [Leapsome](../saas-apps/leapsome-tutorial.md), [Certain Admin SSO](../saas-apps/certainadminsso-tutorial.md), PSUC Staging, [iPass SmartConnect](../saas-apps/ipasssmartconnect-tutorial.md), [Screencast-O-Matic](../saas-apps/screencast-tutorial.md), PowerSchool Unified Classroom, [Eli Onboarding](../saas-apps/elionboarding-tutorial.md), [Bomgar Remote Support](../saas-apps/bomgarremotesupport-tutorial.md), [Nimblex](../saas-apps/nimblex-tutorial.md), [Imagineer WebVision](../saas-apps/imagineerwebvision-tutorial.md), [Insight4GRC](../saas-apps/insight4grc-tutorial.md), [SecureW2 JoinNow Connector](../saas-apps/securejoinnow-tutorial.md), [Kanbanize](../saas-apps/kanbanize-tutorial.md), [SmartLPA](../saas-apps/smartlpa-tutorial.md), [Skills Base](../saas-apps/skillsbase-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In June 2018, we've added these 15 new apps with Federation support to the app gallery:
[Skytap](../saas-apps/skytap-tutorial.md), [Settling music](../saas-apps/settlingmusic-tutorial.md), [SAML 1.1 Token enabled LOB App](../saas-apps/saml-tutorial.md), [Supermood](../saas-apps/supermood-tutorial.md), [Autotask](../saas-apps/autotaskendpointbackup-tutorial.md), [Endpoint Backup](../saas-apps/autotaskendpointbackup-tutorial.md), [Skyhigh Networks](../saas-apps/skyhighnetworks-tutorial.md), Smartway2, [TonicDM](../saas-apps/tonicdm-tutorial.md), [Moconavi](../saas-apps/moconavi-tutorial.md), [Zoho One](../saas-apps/zohoone-tutorial.md), [SharePoint on-premises](../saas-apps/sharepoint-on-premises-tutorial.md), [ForeSee CX Suite](../saas-apps/foreseecxsuite-tutorial.md), [Vidyard](../saas-apps/vidyard-tutorial.md), [ChronicX](../saas-apps/chronicx-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In May 2018, we've added these 18 new apps with Federation support to our app gallery:
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
Criterion HCM, [FiscalNote](../saas-apps/fiscalnote-tutorial.md), [Secret Server
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In March 2018, we've added these 15 new apps with Federation support to our app gallery:
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
In January 2018, the following new apps with federation support were added in the app gallery:
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
CybSafe, [FactSet](../saas-apps/factset-tutorial.md), [IMAGE WORKS](../saas-apps
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
+For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
In November 2021, we have added the following 32 new applications in our App gallery with Federation support:
You can also find the documentation of all the applications [here](../saas-apps/tutorial-list.md).
-For listing your application in the Azure AD app gallery, read the details [here](../develop/v2-howto-app-gallery-listing.md).
+For listing your application in the Azure AD app gallery, read the details [here](../manage-apps/v2-howto-app-gallery-listing.md).
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
The following table lists requirements for using Azure AD Connect Health.
| Firewall ports on the server running the agent are open. |The agent requires the following firewall ports to be open so that it can communicate with the Azure AD Connect Health service endpoints: <br /><li>TCP port 443</li><li>TCP port 5671</li> <br />The latest version of the agent doesn't require port 5671. Upgrade to the latest version so that only port 443 is required. For more information, see [Hybrid identity required ports and protocols](./reference-connect-ports.md). |
| If Internet Explorer enhanced security is enabled, allow specified websites. |If Internet Explorer enhanced security is enabled, then allow the following websites on the server where you install the agent:<br /><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com</li><li>https:\//login.windows.net</li><li>https:\//aadcdn.msftauth.net</li><li>The federation server for your organization that's trusted by Azure AD (for example, https:\//sts.contoso.com)</li> <br />For more information, see [How to configure Internet Explorer](https://support.microsoft.com/help/815141/internet-explorer-enhanced-security-configuration-changes-the-browsing). If you have a proxy in your network, then see the note that appears at the end of this table.|
| PowerShell version 5.0 or newer is installed. | Windows Server 2016 includes PowerShell version 5.0. |
-|FIPS (Federal Information Processing Standard) is disabled.|Azure AD Connect Health agents don't support FIPS.|
+> [!IMPORTANT]
+> Windows Server Core doesn't support installing the Azure AD Connect Health agent.
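As a quick preflight on the agent server, outbound TCP 443 connectivity can be checked with a short script. This is a minimal sketch, not part of the product; the host list is illustrative, drawn from the allowed-websites row in the table above:

```python
# Sketch: verify outbound TCP 443 connectivity from the Connect Health agent server.
# The hosts below are illustrative examples drawn from the table above.
import socket

HOSTS = ["login.microsoftonline.com", "login.windows.net"]

for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable")
    except OSError as err:
        print(f"{host}:443 blocked ({err})")
```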
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites

- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later.
+- The minimum .NET Framework version required is 4.6.2; newer versions of .NET are also supported.
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server Standard or better.
- The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported.
- The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration.
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 10/28/2021 Last updated : 01/24/2022
Real-time detections may not show up in reporting for five to 10 minutes. Offlin
### User-linked detections
-Risky activity can be detected for a user that isn't linked to a specific malicious sign-in but to the user itself.
+Risky activity can be detected for a user that is not linked to a specific malicious sign-in but to the user itself.
These risks are calculated offline using Microsoft's internal and external threat intelligence sources including security researchers, law enforcement professionals, security teams at Microsoft, and other trusted sources.

| Risk detection | Description |
| --- | --- |
-| Leaked credentials | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they're checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |
+| Leaked credentials | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they are checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |
| Azure AD threat intelligence | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |

### Sign-in risk
-A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner.
+A sign-in risk represents the probability that a given authentication request is not authorized by the identity owner.
These risks can be calculated in real-time or calculated offline using Microsoft's internal and external threat intelligence sources including security researchers, law enforcement professionals, security teams at Microsoft, and other trusted sources.
These risks can be calculated in real-time or calculated offline using Microsoft
| Token Issuer Anomaly | Offline |This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns. |
| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.|
| Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. |
-| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that's not already in the list of familiar locations. Newly created users will be in "learning mode" for a while where unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols don't have modern properties such as client ID, there's limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. |
+| Unfamiliar sign-in properties | Real-time | This risk detection type considers past sign-in history (IP, Latitude / Longitude and ASN) to look for anomalous sign-ins. The system stores information about previous locations used by a user, and considers these "familiar" locations. The risk detection is triggered when the sign-in occurs from a location that is not already in the list of familiar locations. Newly created users will be in "learning mode" for a while where unfamiliar sign-in properties risk detections will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols do not have modern properties such as client ID, there is limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. |
| Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |
| Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. |
| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. |
These risks can be calculated in real-time or calculated offline using Microsoft
| Risk detection | Detection type | Description |
| --- | --- | --- |
-| Additional risk detected | Real-time or Offline | This detection indicates that one of the above premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |
+| Additional risk detected | Real-time or Offline | This detection indicates that one of the above premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they are titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |
## Common questions

### Risk levels
-Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised.
+Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure it to trigger upon **No risk** level. No Risk means there is no active indication that the user's identity has been compromised.
-While Microsoft doesn't provide specific details about how risk is calculated, we'll say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
+While Microsoft does not provide specific details about how risk is calculated, we will say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
### Password hash synchronization
Risk detections like leaked credentials require the presence of password hashes
### Why are there risk detections generated for disabled user accounts?
-Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. That's why, Identity Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. If an account is no longer in use and wont be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts.
+Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. That is why Identity Protection generates risk detections for suspicious activities against disabled user accounts, to alert customers about potential account compromise. If an account is no longer in use and won't be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts.
### Leaked credentials
Microsoft finds leaked credentials in various places, including:
- Law enforcement agencies.
- Other groups at Microsoft doing dark web research.
-#### Why aren't I seeing any leaked credentials?
+#### Why am I not seeing any leaked credentials?
-Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs isn't done.
+Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs is not done.
-#### I haven't seen any leaked credential risk events for quite some time?
+#### I have not seen any leaked credential risk events for quite some time?
-If you haven't seen any leaked credential risk events, it's because of the following reasons:
+If you have not seen any leaked credential risk events, it is because of one of the following reasons:
-- You don't have PHS enabled for your tenant.-- Microsoft hasn't found any leaked credential pairs that match your users.
+- You do not have PHS enabled for your tenant.
+- Microsoft has not found any leaked credential pairs that match your users.
#### How often does Microsoft process new credentials?
Location in risk detections is determined by IP address lookup.
## Next steps

- [Policies available to mitigate risks](concept-identity-protection-policies.md)
+- [Investigate risk](howto-identity-protection-investigate-risk.md)
+- [Remediate and unblock users](howto-identity-protection-remediate-unblock.md)
- [Security overview](concept-identity-protection-security-overview.md)
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Previously updated : 10/26/2021 Last updated : 01/24/2022
As we learned in the previous article, [Identity Protection policies](concept-id
- Sign-in risk policy
- User risk policy
-![Security overview page to enable user and sign-in risk policies](./media/howto-identity-protection-configure-risk-policies/identity-protection-security-overview.png)
-
Both policies work to automate the response to risk detections in your environment and allow users to self-remediate when risk is detected.

## Choosing acceptable risk levels
-Organizations must decide the level of risk they're willing to accept balancing user experience and security posture.
+Organizations must decide the level of risk they are willing to accept, balancing user experience and security posture.
Microsoft's recommendation is to set the user risk policy threshold to **High** and the sign-in risk policy to **Medium and above** and allow self-remediation options. Choosing to block access rather than allowing self-remediation options, like password change and multi-factor authentication, will impact your users and administrators. Weigh this choice when configuring your policies.
Organizations can choose to block access when risk is detected. Blocking sometim
- When a user risk policy triggers:
  - Administrators can require a secure password reset, requiring Azure AD MFA be done before the user creates a new password with SSPR, resetting the user risk.
-- When a sign in risk policy triggers:
- - Azure AD MFA can be triggered, allowing to user to prove it's them by using one of their registered authentication methods, resetting the sign in risk.
+- When a sign-in risk policy triggers:
+ - Azure AD MFA can be triggered, allowing the user to prove it is them by using one of their registered authentication methods, resetting the sign-in risk.
> [!WARNING]
> Users must register for Azure AD MFA and SSPR before they face a situation requiring remediation. Users not registered are blocked and require administrator intervention.
Organizations can choose to block access when risk is detected. Blocking sometim
## Exclusions
-Policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md). Organizations may need to exclude other accounts from specific policies based on the way the accounts are used. Exclusions should be reviewed regularly to see if they're still applicable.
+Policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md). Organizations may need to exclude other accounts from specific policies based on the way the accounts are used. Exclusions should be reviewed regularly to see if they are still applicable.
## Enable policies
Before enabling remediation policies, organizations may want to [investigate](ho
1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **User risk**, set **Configure** to **Yes**.
- 1. Under **Configure user risk levels needed for policy to be enforced** select **High**.
+ 1. Under **Configure user risk levels needed for policy to be enforced**, select **High**.
1. Select **Done**.
1. Under **Access controls** > **Grant**.
1. Select **Grant access**, **Require password change**.
Before enabling remediation policies, organizations may want to [investigate](ho
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
-1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**
+1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**.
1. Select **High** and **Medium**.
1. Select **Done**.
1. Under **Access controls** > **Grant**.
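The portal steps above also have a programmatic equivalent: risk-based Conditional Access policies can be created through the Microsoft Graph conditional access API. The following is a minimal sketch, not a production template; it assumes an app granted the Policy.ReadWrite.ConditionalAccess permission, and the token and excluded account ID are placeholders:

```python
# Sketch: create a sign-in risk Conditional Access policy via Microsoft Graph.
# Assumes Policy.ReadWrite.ConditionalAccess; TOKEN and the excluded
# break-glass account ID are placeholders, not real values.
import requests

TOKEN = "<access-token>"

policy = {
    "displayName": "Sign-in risk - require multi-factor authentication",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "signInRiskLevels": ["high", "medium"],
        "applications": {"includeApplications": ["All"]},
        "users": {
            "includeUsers": ["All"],
            # Exclude emergency access accounts, per the guidance above.
            "excludeUsers": ["<emergency-access-account-object-id>"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy", resp.json()["id"])
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) lets you observe the policy's impact before enforcing it.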
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Previously updated : 06/05/2020 Last updated : 01/24/2022
The three reports are found in the **Azure portal** > **Azure Active Directory**
Each report launches with a list of all detections for the period shown at the top of the report. Each report allows for the addition or removal of columns based on administrator preference. Administrators can choose to download the data in .CSV or .JSON format. Reports can be filtered using the filters across the top of the report.
-Selecting individual entries may enable additional entries at the top of the report such as the ability to confirm a sign-in as compromised or safe, confirm a user as compromised, or dismiss user risk.
+Selecting individual entries may enable more entries at the top of the report such as the ability to confirm a sign-in as compromised or safe, confirm a user as compromised, or dismiss user risk.
-Selecting individual entries expands a details window below the detections. The details view allows administrators to investigate and perform actions on each detection.
-
-![Example Identity Protection report showing risky sign-ins and details](./media/howto-identity-protection-investigate-risk/identity-protection-risky-sign-ins-report.png)
+Selecting individual entries expands a details window below the detections. The details view allows administrators to investigate and take action on each detection.
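The detections behind these reports are also exposed through the Microsoft Graph riskDetections API, which can be more convenient than a CSV export for recurring reviews. A minimal sketch, assuming an app granted the IdentityRiskEvent.Read.All permission; the token is a placeholder:

```python
# Sketch: list recent high-risk detections via Microsoft Graph.
# Assumes IdentityRiskEvent.Read.All; TOKEN is a placeholder.
import requests

TOKEN = "<access-token>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskDetections",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": "riskLevel eq 'high'", "$top": "50"},
)
resp.raise_for_status()
for d in resp.json()["value"]:
    print(d["detectedDateTime"], d["userPrincipalName"], d["riskEventType"])
```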
## Risky users
+
With the information provided by the risky users report, administrators can find:
Administrators can then choose to take action on these events. Administrators ca
## Risky sign-ins
-The risky sign-ins report contains filterable data for up to the past 30 days (1 month).
+
+The risky sign-ins report contains filterable data for up to the past 30 days (one month).
With the information provided by the risky sign-ins report, administrators can find:
Administrators can then choose to take action on these events. Administrators ca
## Risk detections
-The risk detections report contains filterable data for up to the past 90 days (3 months).
+
+The risk detections report contains filterable data for up to the past 90 days (three months).
With the information provided by the risk detections report, administrators can find:
Administrators can then choose to return to the user's risk or sign-ins report t
> [!NOTE]
> Our system may detect that the risk event that contributed to the user's risk score was a false positive, or that the user risk was remediated with policy enforcement such as completing an MFA prompt or a secure password change. Therefore our system will dismiss the risk state, a risk detail of "AI confirmed sign-in safe" will surface, and the event will no longer contribute to the user's risk.
+## Investigation framework
+
+Organizations may use the following frameworks to begin their investigation into any suspicious activity. Investigations may require a conversation with the user in question, a review of the [sign-in logs](../reports-monitoring/concept-sign-ins.md), or a review of the [audit logs](../reports-monitoring/concept-audit-logs.md), to name a few sources. A scripted review of the sign-in logs is sketched after the framework below.
+
+1. Check the logs and validate whether the suspicious activity is normal for the given user.
+ 1. Look at the user's past activities including at least the following properties to see if they are normal for the given user.
+ 1. Application
+ 1. Device - Is the device registered or compliant?
+ 1. Location - Is the user traveling to a different location or accessing devices from multiple locations?
+ 1. IP address
+ 1. User agent string
+ 1. If you have access to other security tools like [Microsoft Sentinel](../../sentinel/overview.md), check for corresponding alerts that might indicate a larger issue.
+1. Reach out to the user to confirm if they recognize the sign-in. Methods such as email or Teams may be compromised.
+ 1. Confirm the information you have such as:
+ 1. Application
+ 1. Device
+ 1. Location
+ 1. IP address
+
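One possible way to script the log review in step 1 is the Microsoft Graph sign-in logs endpoint. A minimal sketch, assuming the AuditLog.Read.All permission; the token and user principal name are placeholders:

```python
# Sketch: pull a user's recent sign-ins to compare application, device,
# location, IP address, and user agent against their usual patterns.
# Assumes AuditLog.Read.All; TOKEN and UPN are placeholders.
import requests

TOKEN = "<access-token>"
UPN = "user@contoso.com"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": f"userPrincipalName eq '{UPN}'", "$top": "25"},
)
resp.raise_for_status()
for s in resp.json()["value"]:
    location = s.get("location") or {}
    device = s.get("deviceDetail") or {}
    print(
        s["createdDateTime"],
        s["appDisplayName"],
        s["ipAddress"],
        location.get("city"),
        device.get("browser"),
    )
```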
+### Investigate Azure AD threat intelligence detections
+
+To investigate an Azure AD Threat Intelligence risk detection, follow these steps:
+
+If more information is shown for the detection:
+
+1. Sign-in was from a suspicious IP address:
+ 1. Confirm if the IP address shows suspicious behavior in your environment.
+ 1. Does the IP generate a high number of failures for a user or set of users in your directory?
+ 1. Is the traffic of the IP coming from an unexpected protocol or application, for example, Exchange legacy protocols?
+ 1. If the IP address corresponds to a cloud service provider, confirm that no legitimate enterprise applications are running from the same IP.
+1. This account was attacked by a password spray:
+ 1. Validate that no other users in your directory are targets of the same attack (a scripted check is sketched after this list).
+ 1. Do other users have sign-ins with similar atypical patterns seen in the detected sign-in within the same time frame? Password spray attacks may display unusual patterns in:
+ 1. User agent string
+ 1. Application
+ 1. Protocol
+ 1. Ranges of IPs/ASNs
+ 1. Time and frequency of sign-ins
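As referenced above, one way to validate whether other users are targets of the same attack is to search the sign-in logs for the same suspicious IP address. A minimal sketch, assuming the AuditLog.Read.All permission; the token and IP address are placeholders:

```python
# Sketch: check whether other users signed in from the same suspicious IP.
# Assumes AuditLog.Read.All; TOKEN and SUSPECT_IP are placeholders.
import requests

TOKEN = "<access-token>"
SUSPECT_IP = "203.0.113.5"  # placeholder taken from the detection under review

resp = requests.get(
    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": f"ipAddress eq '{SUSPECT_IP}'", "$top": "50"},
)
resp.raise_for_status()
users = {s["userPrincipalName"] for s in resp.json()["value"]}
print(f"{len(users)} distinct users signed in from {SUSPECT_IP}:", sorted(users))
```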
+ ## Next steps
+- [Remediate and unblock users](howto-identity-protection-remediate-unblock.md)
+
- [Policies available to mitigate risks](concept-identity-protection-policies.md)
- [Enable sign-in and user risk policies](howto-identity-protection-configure-risk-policies.md)
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Previously updated : 01/25/2021 Last updated : 01/24/2022

# Remediate risks and unblock users
-After completing your [investigation](howto-identity-protection-investigate-risk.md), you will want to take action to remediate the risk or unblock users. Organizations also have the option to enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they are presented with in a time period your organization is comfortable with. Microsoft recommends closing events as soon as possible because time matters when working with risk.
+After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they are presented with, in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
## Remediation
Administrators have the following options to remediate:
- Dismiss user risk
- Close individual risk detections manually
+### Remediation framework
+
+1. If the account is confirmed compromised:
+ 1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised".
+ 1. If a risk policy or a Conditional Access policy was not triggered as part of the risk detection, and the risk was not [self-remediated](#self-remediation-with-risk-policy), then:
+ 1. [Request a password reset](#manual-password-reset).
+ 1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
+ 1. Revoke refresh tokens (a scripted example follows below).
+ 1. [Disable any devices](../devices/device-management-azure-portal.md) considered compromised.
+ 1. If using [continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md), revoke all access tokens.
+
+For more information about what happens when confirming compromise, see the section [How should I give risk feedback and what happens under the hood?](howto-identity-protection-risk-feedback.md#how-should-i-give-risk-feedback-and-what-happens-under-the-hood).
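Several of these remediation steps can be scripted through Microsoft Graph. A minimal sketch of confirming compromise and revoking refresh tokens, assuming the IdentityRiskyUser.ReadWrite.All and User.ReadWrite.All permissions; the token and object ID are placeholders:

```python
# Sketch: confirm a user as compromised and revoke their refresh tokens
# via Microsoft Graph. Assumes IdentityRiskyUser.ReadWrite.All and
# User.ReadWrite.All; TOKEN and USER_ID are placeholders.
import requests

TOKEN = "<access-token>"
USER_ID = "<user-object-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
GRAPH = "https://graph.microsoft.com/v1.0"

# Mark the account compromised, raising the user's risk level to high.
requests.post(
    f"{GRAPH}/identityProtection/riskyUsers/confirmCompromised",
    headers=HEADERS,
    json={"userIds": [USER_ID]},
).raise_for_status()

# Revoke refresh tokens so existing sessions must reauthenticate.
requests.post(
    f"{GRAPH}/users/{USER_ID}/revokeSignInSessions",
    headers=HEADERS,
).raise_for_status()
```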
+ ### Self-remediation with risk policy
-If you allow users to self-remediate, with Azure AD Multi-Factor Authentication (MFA) and self-service password reset (SSPR) in your risk policies, they can unblock themselves when risk is detected. These detections are then considered closed. Users must have previously registered for Azure AD MFA and SSPR in order to use when risk is detected.
+If you allow users to self-remediate, with Azure AD Multi-Factor Authentication (MFA) and self-service password reset (SSPR) in your risk policies, they can unblock themselves when risk is detected. These detections are then considered closed. Users must have previously registered for Azure AD MFA and SSPR for use when risk is detected.
-Some detections may not raise risk to the level where a user self-remediation would be required but administrators should still evaluate these detections. Administrators may determine that additional measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
+Some detections may not raise risk to the level where a user self-remediation would be required but administrators should still evaluate these detections. Administrators may determine that extra measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
### Manual password reset
Administrators are given two options when resetting a password for their users:
- **Generate a temporary password** - By generating a temporary password, you can immediately bring an identity back into a safe state. This method requires contacting the affected users because they need to know what the temporary password is. Because the password is temporary, the user is prompted to change the password to something new during the next sign-in.
-- **Require the user to reset password** - Requiring the users to reset passwords enables self-recovery without contacting help desk or an administrator. This method only applies to users that are registered for Azure AD MFA and SSPR. For users that have not been registered, this option isn't available.
+- **Require the user to reset password** - Requiring the users to reset passwords enables self-recovery without contacting help desk or an administrator. This method only applies to users that are registered for Azure AD MFA and SSPR. For users that have not been registered, this option is not available.
### Dismiss user risk

If a password reset is not an option for you because, for example, the user has been deleted, you can choose to dismiss user risk detections.
-When you click **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method doesn't have an impact on the existing password, it doesn't bring the related identity back into a safe state.
+When you click **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method does not have an impact on the existing password, it does not bring the related identity back into a safe state.
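Dismissal is also exposed through Microsoft Graph, which is useful when cleaning up risk for users in bulk. A minimal sketch, assuming the IdentityRiskyUser.ReadWrite.All permission; the token and object IDs are placeholders:

```python
# Sketch: dismiss user risk for a batch of users via Microsoft Graph.
# Assumes IdentityRiskyUser.ReadWrite.All; TOKEN and USER_IDS are placeholders.
import requests

TOKEN = "<access-token>"
USER_IDS = ["<user-object-id-1>", "<user-object-id-2>"]

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"userIds": USER_IDS},
)
resp.raise_for_status()  # returns 204 No Content on success
```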
### Close individual risk detections manually
An administrator may choose to block a sign-in based on their risk policy or inv
### Unblocking based on user risk
-To unblock an account blocked due to user risk, administrators have the following options:
+To unblock an account blocked because of user risk, administrators have the following options:
1. **Reset password** - You can reset the user's password.
1. **Dismiss user risk** - The user risk policy blocks a user if the configured user risk level for blocking access has been reached. You can reduce a user's risk level by dismissing user risk or manually closing reported risk detections.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
Prior BIG-IP experience isn't necessary, but you'll need:
* An existing header-based application or [set up a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
-## Big-IP configuration methods
+## BIG-IP configuration methods
There are many methods to deploy BIG-IP for this scenario, including a template-driven Guided Configuration or an advanced configuration. This tutorial covers the Easy Button templates offered by Guided Configuration 16.1 and upwards.
active-directory Overview Application Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/overview-application-gallery.md
When you select the **Create your own application** link near the top of the bla
## Request new gallery application
-After you successfully integrate an application with Azure AD and thoroughly tested it, you can request to have it added to the gallery. Publishing an application to the gallery from the portal isn't supported but there is a process that you can follow to have it done for you. For more information about publishing to the gallery, select [Request new gallery application](../develop/v2-howto-app-gallery-listing.md).
+After you successfully integrate an application with Azure AD and thoroughly test it, you can request to have it added to the gallery. Publishing an application to the gallery from the portal isn't supported, but there is a process that you can follow to have it done for you. For more information about publishing to the gallery, select [Request new gallery application](../manage-apps/v2-howto-app-gallery-listing.md).
## Next steps
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-an-application-integration.md
With Microsoft Azure AD Application Proxy, you can provide access to application
### Integrating custom applications
-If you want to add your custom application to the Azure Application Gallery, see [Publish your app to the Azure AD app gallery](../develop/v2-howto-app-gallery-listing.md).
+If you want to add your custom application to the Azure Application Gallery, see [Publish your app to the Azure AD app gallery](../manage-apps/v2-howto-app-gallery-listing.md).
## Managing access to applications
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
The rest of this guide explains the technical considerations and our recommendat
## Publishing your application to Azure Marketplace
-You can pre-integrate your application with Azure AD to support SSO and automated provisioning by following the process to [publish it in Azure Marketplace](../develop/v2-howto-app-gallery-listing.md). Azure Marketplace is a trusted source of applications for IT admins. Applications listed there have been validated to be compatible with Azure AD. They support SSO, automate user provisioning, and can easily integrate into customer tenants with automated app registration.
+You can pre-integrate your application with Azure AD to support SSO and automated provisioning by following the process to [publish it in Azure Marketplace](../manage-apps/v2-howto-app-gallery-listing.md). Azure Marketplace is a trusted source of applications for IT admins. Applications listed there have been validated to be compatible with Azure AD. They support SSO, automate user provisioning, and can easily integrate into customer tenants with automated app registration.
In addition, we recommend that you become a [verified publisher](../develop/publisher-verification-overview.md) so that customers know you're the trusted publisher of the app.
active-directory Troubleshoot App Publishing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
This article provides information for resolving a blocked sign-in to the Microso
The user sees this message when trying to sign in to the Microsoft Application Network portal.

## Cause
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
This problem typically happens if the application vendor has changed their sign-
While Microsoft has technologies to automatically detect when integrations break, it might not be possible to find the issues right away, or the issues might take some time to fix. When one of these integrations does not work correctly, open a support case so it can be fixed as quickly as possible.
-**If you are in contact with this application's vendor,** send them our way so Microsoft can work with them to natively integrate their application with Azure Active Directory. You can send the vendor to the [Listing your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md) to get them started.
+**If you are in contact with this application's vendor,** send them our way so Microsoft can work with them to natively integrate their application with Azure Active Directory. You can send the vendor to the [Listing your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md) to get them started.
## Credentials are filled in and submitted, but the page indicates the credentials are incorrect
In case the previous suggestions do not work, it could be the case that a change
While Microsoft has technologies to automatically detect when application integrations break, it might not be possible to find the issues right away, or the issues might take some time to fix. When an integration does not work correctly, you can open a support case to get it fixed as quickly as possible.
-In addition to this, **if you are in contact with this application's vendor,** **send them our way** so we can work with them to natively integrate their application with Azure Active Directory. You can send the vendor to the [Listing your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md) to get them started.
+In addition to this, **if you are in contact with this application's vendor,** **send them our way** so we can work with them to natively integrate their application with Azure Active Directory. You can send the vendor to the [Listing your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md) to get them started.
## Check if the application's login page has changed recently or requires an additional field
If the application's login page has changed drastically, sometimes this causes
While Microsoft has technologies to automatically detect when application integrations break, it might not be possible to find the issues right away, or the issues might take some time to fix. When an integration does not work correctly, you can open a support case to get it fixed as quickly as possible.
-In addition to this, **if you are in contact with this application's vendor,** **send them our way** so we can work with them to natively integrate their application with Azure Active Directory. You can send the vendor to the [Listing your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md) to get them started.
+In addition to this, **if you are in contact with this application's vendor,** **send them our way** so we can work with them to natively integrate their application with Azure Active Directory. You can send the vendor to the [Listing your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md) to get them started.
## Capture sign-in fields for an app
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
You can track application requests by customer name at the Microsoft Application
The timeline for the process of listing a SAML 2.0 or WS-Fed application in the gallery is 7 to 10 business days. The timeline for the process of listing an OpenID Connect application in the gallery is 2 to 5 business days. The timeline for the process of listing a SCIM provisioning application in the gallery is variable and depends on numerous factors.
Not all applications can be onboarded. Per the terms and conditions, the choice
Here's the flow of customer-requested applications. For any escalations, send email to the [Azure AD SSO Integration Team](mailto:SaaSApplicationIntegrations@service.microsoft.com), and a response is sent as soon as possible.
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-application-management.md
Many applications are already pre-integrated (shown as "Cloud applications"
If you develop your own business application, you can register it with Azure AD to take advantage of the security features that the tenant provides. You can register your application in **App Registrations**, or you can register it using the **Create your own application** link when adding a new application in **Enterprise applications**. Consider how [authentication](../develop/authentication-vs-authorization.md) is implemented in your application for integration with Azure AD.
-If you want to make your application available through the gallery, you can [submit a request to have it added](../develop/v2-howto-app-gallery-listing.md).
+If you want to make your application available through the gallery, you can [submit a request to have it added](../manage-apps/v2-howto-app-gallery-listing.md).
### On-premises applications
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 12/17/2021 Last updated : 01/24/2022
The following Azure AD roles can be assigned with administrative unit scope:
| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups and groups settings, such as naming and expiration policies, in the assigned administrative unit only. |
| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators in the assigned administrative unit only. |
| [License Administrator](permissions-reference.md#license-administrator) | Can assign, remove, and update license assignments within the administrative unit only. |
-| [Password Administrator](permissions-reference.md#password-administrator) | Can reset passwords for non-administrators and Password Administrators within the assigned administrative unit only. |
+| [Password Administrator](permissions-reference.md#password-administrator) | Can reset passwords for non-administrators within the assigned administrative unit only. |
| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) * | Can manage all aspects of the SharePoint service. |
| [Teams Administrator](permissions-reference.md#teams-administrator) * | Can manage the Microsoft Teams service. |
| [Teams Devices Administrator](permissions-reference.md#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. |
active-directory Custom Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-create.md
$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $resourceScope -
POST

```http
- https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions
+ https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions
```

Body
$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $resourceScope -
POST

```http
- https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
+ https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments
```

Body
$roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId $resourceScope -
{ "principalId":"<GUID OF USER>", "roleDefinitionId":"<GUID OF ROLE DEFINITION>",
- "resourceScope":"/<GUID OF APPLICATION REGISTRATION>"
+ "directoryScopeId":"/<GUID OF APPLICATION REGISTRATION>"
}
```
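For reference, here is a minimal Python sketch of the same role assignment call (an illustration, not part of the original article). It assumes you already hold an access token with the `RoleManagement.ReadWrite.Directory` permission; the token and GUIDs are placeholders.

```python
# Hypothetical sketch: create the role assignment shown above with the
# Microsoft Graph v1.0 endpoint. The token and GUIDs are placeholders.
import requests

access_token = "<ACCESS_TOKEN>"  # assumed: token with RoleManagement.ReadWrite.Directory

assignment = {
    "principalId": "<GUID OF USER>",
    "roleDefinitionId": "<GUID OF ROLE DEFINITION>",
    # Scopes the assignment to a single app registration, as in the body above.
    "directoryScopeId": "/<GUID OF APPLICATION REGISTRATION>",
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
    headers={"Authorization": f"Bearer {access_token}"},
    json=assignment,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the new role assignment
```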
active-directory Elearnposh Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/elearnposh-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with eLearnPOSH'
+description: Learn how to configure single sign-on between Azure Active Directory and eLearnPOSH.
+ Last updated : 01/21/2022
+# Tutorial: Azure AD SSO integration with eLearnPOSH
+
+In this tutorial, you'll learn how to integrate eLearnPOSH with Azure Active Directory (Azure AD). When you integrate eLearnPOSH with Azure AD, you can:
+
+* Control in Azure AD who has access to eLearnPOSH.
+* Enable your users to be automatically signed-in to eLearnPOSH with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* eLearnPOSH single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* eLearnPOSH supports **IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add eLearnPOSH from the gallery
+
+To configure the integration of eLearnPOSH into Azure AD, you need to add eLearnPOSH from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **eLearnPOSH** in the search box.
+1. Select **eLearnPOSH** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for eLearnPOSH
+
+Configure and test Azure AD SSO with eLearnPOSH using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in eLearnPOSH.
+
+To configure and test Azure AD SSO with eLearnPOSH, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure eLearnPOSH SSO](#configure-elearnposh-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create eLearnPOSH test user](#create-elearnposh-test-user)** - to have a counterpart of B.Simon in eLearnPOSH that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **eLearnPOSH** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. The eLearnPOSH application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the eLearnPOSH application expects a few more attributes to be passed back in the SAML response; they are shown below. These attributes are pre-populated, but you can review them per your requirements. A quick way to verify them is sketched after the table.
+
+ | Name | Source Attribute |
+ | | |
+ | department | user.department |
+ | designation | user.jobtitle |
+ | email | user.userprincipalname |
+ | empid | user.employeeid |
+ | firstname | user.givenname |
+ | lastname | user.surname |
+ | primary-email | user.primaryauthoritativeemail |
+ | username | user.userprincipalname |
+
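To sanity-check the mappings above, the Python sketch below decodes a captured `SAMLResponse` and lists the attribute names Azure AD actually sent. This is a hedged illustration, not part of the tutorial; the expected attribute names are assumptions taken from the table, not confirmed output.

```python
# Hypothetical helper (not part of the tutorial): list the attributes in a
# base64-encoded SAMLResponse so you can confirm the mappings above arrive.
import base64
import xml.etree.ElementTree as ET

ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def extract_attributes(saml_response_b64: str) -> dict:
    """Return {attribute name: [values]} from a base64-encoded SAML response."""
    root = ET.fromstring(base64.b64decode(saml_response_b64))
    attrs = {}
    for attr in root.iter(f"{{{ASSERTION_NS}}}Attribute"):
        values = [v.text for v in attr.findall(f"{{{ASSERTION_NS}}}AttributeValue")]
        attrs[attr.get("Name")] = values
    return attrs

# Usage sketch: "captured" holds the SAMLResponse form field from a test sign-in.
# expected = {"department", "designation", "email", "empid",
#             "firstname", "lastname", "primary-email", "username"}
# print("Missing:", expected - extract_attributes(captured).keys())
```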
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to eLearnPOSH.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **eLearnPOSH**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure eLearnPOSH SSO
+
+To configure single sign-on on the **eLearnPOSH** side, you need to send the **App Federation Metadata Url** to the [eLearnPOSH support team](mailto:contact@succeedtech.com). They use it to set up the SAML SSO connection properly on both sides.
+
+### Create eLearnPOSH test user
+
+In this section, you create a user called Britta Simon in eLearnPOSH. Work with the [eLearnPOSH support team](mailto:contact@succeedtech.com) to add the users in the eLearnPOSH platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the eLearnPOSH for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the eLearnPOSH tile in the My Apps, you should be automatically signed in to the eLearnPOSH for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure eLearnPOSH you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Excelity Hcm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/excelity-hcm-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Excelity HCM you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Excelity HCM you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Hornbill Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/hornbill-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Hornbill | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Hornbill'
description: Learn how to configure single sign-on between Azure Active Directory and Hornbill.
Previously updated : 07/23/2021 Last updated : 01/21/2022
-# Tutorial: Azure Active Directory integration with Hornbill
+# Tutorial: Azure AD SSO integration with Hornbill
In this tutorial, you'll learn how to integrate Hornbill with Azure Active Directory (Azure AD). When you integrate Hornbill with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps:

    a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/lib/saml/auth/simplesaml/module.php/saml/sp/metadata.php/saml`
+ `https://sso.hornbill.com/<INSTANCE_NAME>/<SUBDOMAIN>`
    b. In the **Sign on URL** text box, type a URL using the following pattern:
    `https://<SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. On the Home page, click **System**.
- ![Hornbill system](./media/hornbill-tutorial/system.png "Hornbill system")
+ ![Screenshot shows the Hornbill system.](./media/hornbill-tutorial/system.png "Hornbill system")
3. Navigate to **Security**.
- ![Hornbill security](./media/hornbill-tutorial/security.png "Hornbill security")
+ ![Screenshot shows the Hornbill security.](./media/hornbill-tutorial/security.png "Hornbill security")
4. Click **SSO Profiles**.
- ![Hornbill single](./media/hornbill-tutorial/profile.png "Hornbill single")
+ ![Screenshot shows the Hornbill SSO profiles.](./media/hornbill-tutorial/profile.png "Hornbill SSO profiles")
5. On the right side of the page, click on **Add logo**.
- ![Hornbill add](./media/hornbill-tutorial/add-logo.png "Hornbill add")
+ ![Screenshot shows to add the logo.](./media/hornbill-tutorial/add-logo.png "Hornbill add")
6. On the **Profile Details** bar, click on **Import SAML Meta logo**.
- ![Hornbill logo](./media/hornbill-tutorial/logo.png "Hornbill logo")
+ ![Screenshot shows Hornbill Meta logo.](./media/hornbill-tutorial/logo.png "Hornbill logo")
7. On the pop-up page, in the **URL** text box, paste the **App Federation Metadata Url** that you copied from the Azure portal, and click **Process**. (A sketch for previewing this metadata follows the screenshot.)
- ![Hornbill process](./media/hornbill-tutorial/process.png "Hornbill process")
+ ![Screenshot shows Hornbill process.](./media/hornbill-tutorial/process.png "Hornbill process")
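If you want to preview what will be imported before clicking **Process**, the following Python sketch (an assumption for illustration, not an official step; the URL is a placeholder) fetches the federation metadata and prints the entity ID and sign-on endpoints it declares:

```python
# Hypothetical sketch: fetch the App Federation Metadata Url copied from the
# Azure portal and show the IdP entity ID and sign-on endpoints it declares.
import requests
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"
metadata_url = "<APP FEDERATION METADATA URL>"  # placeholder from the Azure portal

root = ET.fromstring(requests.get(metadata_url, timeout=30).content)
print("Entity ID:", root.get("entityID"))
for sso in root.iter(f"{{{MD_NS}}}SingleSignOnService"):
    print(sso.get("Binding"), "->", sso.get("Location"))
```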
8. After you click **Process**, the values are auto-populated under the **Profile Details** section.
- ![Hornbill page1](./media/hornbill-tutorial/page.png "Hornbill page1")
+ ![Screenshot shows Hornbill profile](./media/hornbill-tutorial/page.png "Hornbill profile")
- ![Hornbill page2](./media/hornbill-tutorial/services.png "Hornbill page2")
+ ![Screenshot shows Hornbill details.](./media/hornbill-tutorial/services.png "Hornbill details")
- ![Hornbill page3](./media/hornbill-tutorial/details.png "Hornbill page3")
+ ![Screenshot shows Hornbill certificate.](./media/hornbill-tutorial/details.png "Hornbill certificate")
9. Click **Save Changes**.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Hornbill you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Hornbill you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Manifestly Checklists Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/manifestly-checklists-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Manifestly Checklists'
+description: Learn how to configure single sign-on between Azure Active Directory and Manifestly Checklists.
+ Last updated : 01/21/2022
+# Tutorial: Azure AD SSO integration with Manifestly Checklists
+
+In this tutorial, you'll learn how to integrate Manifestly Checklists with Azure Active Directory (Azure AD). When you integrate Manifestly Checklists with Azure AD, you can:
+
+* Control in Azure AD who has access to Manifestly Checklists.
+* Enable your users to be automatically signed-in to Manifestly Checklists with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Manifestly Checklists single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Manifestly Checklists supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Manifestly Checklists from the gallery
+
+To configure the integration of Manifestly Checklists into Azure AD, you need to add Manifestly Checklists from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Manifestly Checklists** in the search box.
+1. Select **Manifestly Checklists** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Manifestly Checklists
+
+Configure and test Azure AD SSO with Manifestly Checklists using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Manifestly Checklists.
+
+To configure and test Azure AD SSO with Manifestly Checklists, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Manifestly Checklists SSO](#configure-manifestly-checklists-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Manifestly Checklists test user](#create-manifestly-checklists-test-user)** - to have a counterpart of B.Simon in Manifestly Checklists that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Manifestly Checklists** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://app.manifest.ly/users/saml/metadata`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://app.manifest.ly/users/saml/auth`
+
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ | -|
+ | `https://app.manifest.ly/users/sign_in` |
+ | `https://app.manifest.ly/a/<CustomerName>` |
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign-on URL. Contact [Manifestly Checklists Client support team](mailto:support@manifest.ly) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Manifestly Checklists application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Manifestly Checklists application expects a few more attributes to be passed back in the SAML response; they are shown below. These attributes are pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute |
+ | | |
+ | email | user.mail |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Manifestly Checklists** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Manifestly Checklists.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Manifestly Checklists**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Manifestly Checklists SSO
+
+1. Log in to your Manifestly Checklists company site as an administrator.
+
+1. Go to **Settings** > **SSO** and click the **Set up SAML Sign On** button.
+
+ ![Screenshot shows the SSO Settings.](./media/manifestly-checklists-tutorial/settings.png "SSO Settings")
+
+1. In the **Edit SAML Single Sign** page, perform the following steps:
+
+ ![Screenshot shows the SSO Configuration.](./media/manifestly-checklists-tutorial/certificate.png "SSO Configuration")
+
+    1. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste its content into the **SAML Cert** textbox. (A sketch for inspecting this file first is shown after these steps.)
+
+ 1. In the **SAML Entity** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ 1. In the **SAML URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ 1. Click **Save**.
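Before pasting, you can optionally confirm you downloaded the right token-signing certificate. The Python sketch below is an assumption, not part of the tutorial: the file name is a placeholder, and the download is assumed to be Base64/PEM text.

```python
# Hypothetical check: inspect the downloaded "Certificate (Base64)" file
# before pasting it into the SAML Cert box. Requires: pip install cryptography
from cryptography import x509

with open("Manifestly Checklists.cer", "rb") as f:  # assumed file name
    data = f.read()

# The download is assumed to be Base64 text; add PEM armor if it is missing.
if b"BEGIN CERTIFICATE" not in data:
    data = b"-----BEGIN CERTIFICATE-----\n" + data.strip() + b"\n-----END CERTIFICATE-----\n"

cert = x509.load_pem_x509_certificate(data)
print("Subject:", cert.subject.rfc4514_string())
print("Valid until:", cert.not_valid_after)
```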
+
+### Create Manifestly Checklists test user
+
+1. In a different web browser window, sign into your Manifestly Checklists company site as an administrator.
+
+1. Go to **Teams** > **Users** and click **Add User**.
+
+ ![Screenshot shows the Team Members.](./media/manifestly-checklists-tutorial/user.png "Team Members")
+
+1. Enter the **Name** and **Email** in the textboxes and click **Send Invite**.
+
+ ![Screenshot shows to add users.](./media/manifestly-checklists-tutorial/name.png "Add users")
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Manifestly Checklists Sign-on URL, where you can initiate the login flow.
+
+* Go to the Manifestly Checklists Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Manifestly Checklists for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Manifestly Checklists tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Manifestly Checklists for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Manifestly Checklists you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Maxient Conduct Manager Software Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/maxient-conduct-manager-software-tutorial.md
If a support ticket has not already been opened with a Maxient Implementation/Su
## Next steps
-Once you configure Maxient Conduct Manager Software you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Maxient Conduct Manager Software you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Pendo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/pendo-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, perform the following steps:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://sso.connect.pingidentity.com/<CUSTOM_GUID>`
+ a. In the **Identifier** text box, enter `PingConnect`. (If this identifier is already used by another application, contact the [Pendo support team](mailto:support@pendo.io).)
+
b. In the **Relay State** text box, type a URL using the following pattern:
`https://pingone.com/1.0/<CUSTOM_GUID>`
active-directory Perceptionunitedstates Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/perceptionunitedstates-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Perception United States (Non-UltiPro)'
-description: Learn how to configure single sign-on between Azure Active Directory and Perception United States (Non-UltiPro).
+ Title: 'Tutorial: Azure AD SSO integration with UltiPro Perception'
+description: Learn how to configure single sign-on between Azure Active Directory and UltiPro Perception.
Previously updated : 10/05/2021 Last updated : 01/18/2022
-# Tutorial: Azure AD SSO integration with Perception United States (Non-UltiPro)
+# Tutorial: Azure AD SSO integration with UltiPro Perception
-In this tutorial, you'll learn how to integrate Perception United States (Non-UltiPro) with Azure Active Directory (Azure AD). When you integrate Perception United States (Non-UltiPro) with Azure AD, you can:
+In this tutorial, you'll learn how to integrate UltiPro Perception with Azure Active Directory (Azure AD). When you integrate UltiPro Perception with Azure AD, you can:
-* Control in Azure AD who has access to Perception United States (Non-UltiPro).
-* Enable your users to be automatically signed-in to Perception United States (Non-UltiPro) with their Azure AD accounts.
+* Control in Azure AD who has access to UltiPro Perception.
+* Enable your users to be automatically signed-in to UltiPro Perception with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate Perception United States (Non-Ul
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Perception United States (Non-UltiPro) single sign-on (SSO) enabled subscription.
+* UltiPro Perception single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Perception United States (Non-UltiPro) supports **IDP** initiated SSO.
+* UltiPro Perception supports **IDP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add Perception United States (Non-UltiPro) from the gallery
+## Add UltiPro Perception from the gallery
-To configure the integration of Perception United States (Non-UltiPro) into Azure AD, you need to add Perception United States (Non-UltiPro) from the gallery to your list of managed SaaS apps.
+To configure the integration of UltiPro Perception into Azure AD, you need to add UltiPro Perception from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Perception United States (Non-UltiPro)** in the search box.
-1. Select **Perception United States (Non-UltiPro)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **UltiPro Perception** in the search box.
+1. Select **UltiPro Perception** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Perception United States (Non-UltiPro)
+## Configure and test Azure AD SSO for UltiPro Perception
-Configure and test Azure AD SSO with Perception United States (Non-UltiPro) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Perception United States (Non-UltiPro).
+Configure and test Azure AD SSO with UltiPro Perception using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in UltiPro Perception.
-To configure and test Azure AD SSO with Perception United States (Non-UltiPro), perform the following steps:
+To configure and test Azure AD SSO with UltiPro Perception, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Perception United States (Non-UltiPro) SSO](#configure-perception-united-states-non-ultipro-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Perception United States (Non-UltiPro) test user](#create-perception-united-states-non-ultipro-test-user)** - to have a counterpart of B.Simon in Perception United States (Non-UltiPro) that is linked to the Azure AD representation of user.
+1. **[Configure UltiPro Perception SSO](#configure-ultipro-perception-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create UltiPro Perception test user](#create-ultipro-perception-test-user)** - to have a counterpart of B.Simon in UltiPro Perception that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Perception United States (Non-UltiPro)** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **UltiPro Perception** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** page, perform the following steps:
- a. In the **Identifier** text box, type the URL:
- `https://perception.kanjoya.com/sp`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ a. In the **Reply URL** text box, type a URL using the following pattern:
`https://perception.kanjoya.com/sso?idp=<entity_id>`
- c. The **Perception United States (Non-UltiPro)** application requires the **Azure AD Identifier** value as <entity_id>, which you will get from the **Set up Perception United States (Non-UltiPro)** section, to be URI-encoded. To get the URI-encoded value, use the following link: **http://www.url-encode-decode.com/**.
+ b. The **UltiPro Perception** application requires the **Azure AD Identifier** value as <entity_id>, which you will get from the **Set up UltiPro Perception** section, to be URI-encoded. To get the URI-encoded value, use the following link: **http://www.url-encode-decode.com/**, or use the sketch after these steps.
- d. After getting the URI-encoded value, combine it with the **Reply URL** as mentioned below:
+ c. After getting the URI-encoded value, combine it with the **Reply URL** as mentioned below:
`https://perception.kanjoya.com/sso?idp=<URI encooded entity_id>`
- e. Paste the above value in the **Reply URL** textbox.
+ d. Paste the above value in the **Reply URL** textbox.
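If you prefer not to use a web-based encoder, the Python sketch below (an assumption for illustration, not an official step; the entity ID shown is a placeholder) produces the same URI-encoded Reply URL:

```python
# Hypothetical helper: URI-encode the Azure AD Identifier and build the
# Reply URL described in steps b-d above.
from urllib.parse import quote

# Placeholder: your "Azure AD Identifier" from the Set up UltiPro Perception section.
entity_id = "https://sts.windows.net/<TENANT_ID>/"

reply_url = "https://perception.kanjoya.com/sso?idp=" + quote(entity_id, safe="")
print(reply_url)
# -> https://perception.kanjoya.com/sso?idp=https%3A%2F%2Fsts.windows.net%2F...
```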
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.

    ![The Certificate download link](common/metadataxml.png)
-6. On the **Set up Perception United States (Non-UltiPro)** section, copy the appropriate URL(s) as per your requirement.
+6. On the **Set up UltiPro Perception** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Perception United States (Non-UltiPro).
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to UltiPro Perception.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Perception United States (Non-UltiPro)**.
+1. In the applications list, select **UltiPro Perception**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Perception United States (Non-UltiPro) SSO
+## Configure UltiPro Perception SSO
-1. In another browser window, sign on to your Perception United States (Non-UltiPro) company site as an administrator.
+1. In another browser window, sign on to your UltiPro Perception company site as an administrator.
2. In the main toolbar, click **Account Settings**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
3. On the **Account Settings** page, perform the following steps:
- ![Perception United States (Non-UltiPro) user](./media/perceptionunitedstates-tutorial/account.png)
+ ![UltiPro Perception user](./media/perceptionunitedstates-tutorial/account.png)
a. In the **Company Name** textbox, type the name of the **Company**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
4. On the **SSO Configuration** page, perform the following steps:
- ![Perception United States (Non-UltiPro) SSO Configuration.](./media/perceptionunitedstates-tutorial/configuration.png)
+ ![UltiPro Perception SSO Configuration.](./media/perceptionunitedstates-tutorial/configuration.png)
a. Select **SAML NameID Type** as **EMAIL**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. Click **Update**.
-### Create Perception United States (Non-UltiPro) test user
+### Create UltiPro Perception test user
-In this section, you create a user called Britta Simon in Perception United States (Non-UltiPro). Work with [Perception United States (Non-UltiPro) support team](https://www.ultimatesoftware.com/Contact/ContactUs) to add the users in the Perception United States (Non-UltiPro) platform.
+In this section, you create a user called Britta Simon in UltiPro Perception. Work with [UltiPro Perception support team](https://www.ultimatesoftware.com/Contact/ContactUs) to add the users in the UltiPro Perception platform.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Perception United States (Non-UltiPro) for which you set up the SSO.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the UltiPro Perception for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the Perception United States (Non-UltiPro) tile in the My Apps, you should be automatically signed in to the Perception United States (Non-UltiPro) for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the UltiPro Perception tile in the My Apps, you should be automatically signed in to the UltiPro Perception for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Perception United States (Non-UltiPro) you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure UltiPro Perception you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Tutorial List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tutorial-list.md
To help integrate your cloud-enabled [software as a service (SaaS)](https://azur
For a list of all SaaS apps that have been pre-integrated into Azure AD, see the [Active Directory Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps).
-Use the [application network portal](../develop/v2-howto-app-gallery-listing.md) to request a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) enabled application to be added to the gallery for automatic provisioning or a SAML / OIDC enabled application to be added to the gallery for SSO.
+Use the [application network portal](../manage-apps/v2-howto-app-gallery-listing.md) to request a [SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) enabled application to be added to the gallery for automatic provisioning or a SAML / OIDC enabled application to be added to the gallery for SSO.
## Quick links
active-directory Twic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/twic-tutorial.md
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Twic you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Twic you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Zivver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zivver-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ZIVVER | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and ZIVVER.
+ Title: 'Tutorial: Azure AD SSO integration with Zivver'
+description: Learn how to configure single sign-on between Azure Active Directory and Zivver.
Previously updated : 08/17/2021 Last updated : 01/21/2022
-# Tutorial: Azure Active Directory integration with ZIVVER
+# Tutorial: Azure AD SSO integration with Zivver
-In this tutorial, you'll learn how to integrate ZIVVER with Azure Active Directory (Azure AD). When you integrate ZIVVER with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Zivver with Azure Active Directory (Azure AD). When you integrate Zivver with Azure AD, you can:
-* Control in Azure AD who has access to ZIVVER.
-* Enable your users to be automatically signed-in to ZIVVER with their Azure AD accounts.
+* Control in Azure AD who has access to Zivver.
+* Enable your users to be automatically signed-in to Zivver with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
-To configure Azure AD integration with ZIVVER, you need the following items:
+To configure Azure AD integration with Zivver, you need the following items:
* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
-* ZIVVER single sign-on enabled subscription.
+* Zivver single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ZIVVER supports **IDP** initiated SSO.
+* Zivver supports **IDP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add ZIVVER from the gallery
+## Add Zivver from the gallery
-To configure the integration of ZIVVER into Azure AD, you need to add ZIVVER from the gallery to your list of managed SaaS apps.
+To configure the integration of Zivver into Azure AD, you need to add Zivver from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **ZIVVER** in the search box.
-1. Select **ZIVVER** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Zivver** in the search box.
+1. Select **Zivver** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for ZIVVER
+## Configure and test Azure AD SSO for Zivver
-Configure and test Azure AD SSO with ZIVVER using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ZIVVER.
+Configure and test Azure AD SSO with Zivver using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zivver.
-To configure and test Azure AD SSO with ZIVVER, perform the following steps:
+To configure and test Azure AD SSO with Zivver, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure ZIVVER SSO](#configure-zivver-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create ZIVVER test user](#create-zivver-test-user)** - to have a counterpart of B.Simon in ZIVVER that is linked to the Azure AD representation of user.
+1. **[Configure Zivver SSO](#configure-zivver-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zivver test user](#create-zivver-test-user)** - to have a counterpart of B.Simon in Zivver that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **ZIVVER** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Zivver** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
In the **Identifier** text box, type the URL:
`https://app.zivver.com/SAML/Zivver`
-5. The ZIVVER application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped to **user.userprincipalname**. The ZIVVER application expects **nameidentifier** to be mapped to **user.mail**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the mapping.
+5. The Zivver application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped to **user.userprincipalname**. The Zivver application expects **nameidentifier** to be mapped to **user.mail**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the mapping.
![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
-6. In addition to the above, the ZIVVER application expects a few more attributes to be passed back in the SAML response. In the **User Claims** section on the **User Attributes** dialog, perform the following steps to add the SAML token attributes shown in the table below:
+6. In addition to the above, the Zivver application expects a few more attributes to be passed back in the SAML response. In the **User Claims** section on the **User Attributes** dialog, perform the following steps to add the SAML token attributes shown in the table below:
| Name | Namespace | Source Attribute |
| ---- | --------- | ---------------- |
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate URL download link](./media/zivver-tutorial/metadataxmlurl.png)
-8. On the **Set up ZIVVER** section, copy the appropriate URL(s) as per your requirement.
+8. On the **Set up Zivver** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ZIVVER.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zivver.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **ZIVVER**.
+1. In the applications list, select **Zivver**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure ZIVVER SSO
+## Configure Zivver SSO
-1. In a different web browser window, sign in to your ZIVVER company [site](https://app.zivver.com/login) as an administrator.
+1. In a different web browser window, sign in to your Zivver company [site](https://app.zivver.com/login) as an administrator.
2. Click the **Organization settings** icon at the bottom left of your browser window.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
7. Click **SAVE**.
-### Create ZIVVER test user
+### Create Zivver test user
-In this section, you create a user called Britta Simon in ZIVVER. Work with [ZIVVER support team](https://support.zivver.com/) to add the users in the ZIVVER platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Zivver. Work with the [Zivver support team](https://support.zivver.com/) to add the users in the Zivver platform. Users must be created and activated before you use single sign-on.
## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the ZIVVER for which you set up the SSO.
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Zivver application for which you set up SSO.
-* You can use Microsoft My Apps. When you click the ZIVVER tile in the My Apps, you should be automatically signed in to the ZIVVER for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* You can use Microsoft My Apps. When you click the Zivver tile in My Apps, you should be automatically signed in to the Zivver application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure ZIVVER you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Zivver, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
For Windows Server nodes, Windows Update does not automatically run and apply th
### Are there additional security threats relevant to AKS that customers should be aware of?
-Microsoft provides guidance on additional actions you can take to secure your workloads through services like [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). The following is a list of additional security threats related to AKS and Kubernetes that customers should be aware of:
+Microsoft provides guidance on additional actions you can take to secure your workloads through services like [Microsoft Defender for Containers](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks). The following is a list of additional security threats related to AKS and Kubernetes that customers should be aware of:
* [New large-scale campaign targets Kubeflow](https://techcommunity.microsoft.com/t5/azure-security-center/new-large-scale-campaign-targets-kubeflow/ba-p/2425750) - June 8, 2021
Except for the following two images, AKS images aren't required to run as root:
- *mcr.microsoft.com/oss/kubernetes/coredns*
- *mcr.microsoft.com/azuremonitor/containerinsights/ciprod*
+- *mcr.microsoft.com/oss/calico/node*
+- *mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi*
## What is Azure CNI Transparent Mode vs. Bridge Mode?
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
Title: Use managed identities in Azure Kubernetes Service description: Learn how to use managed identities in Azure Kubernetes Service (AKS) Previously updated : 05/12/2021 Last updated : 01/25/2022 # Use managed identities in Azure Kubernetes Service
az aks update -g <RGName> -n <AKSName> --enable-managed-identity
> > The Azure CLI will ensure your addon's permission is correctly set after migrating, if you're not using the Azure CLI to perform the migrating operation, you will need to handle the addon identity's permission by yourself. Here is one example using [ARM](../role-based-access-control/role-assignments-template.md).
+> [!WARNING]
+> Node pool upgrades will cause downtime for your AKS cluster, because the nodes in the node pools will be cordoned, drained, and then reimaged.
+
+## Obtain and use the system-assigned managed identity for your AKS cluster

Confirm your AKS cluster is using managed identity with the following CLI command:
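A minimal sketch of that check, assuming placeholder resource-group and cluster names (a cluster that uses a managed identity returns an `identity` object with `"type": "SystemAssigned"`):

```azurecli
# Inspect the cluster identity; a managed-identity cluster reports
# an identity object with "type": "SystemAssigned".
az aks show -g <RGName> -n <AKSName> --query "identity"
```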
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-ca-certificates.md
Follow the steps below to upload a new CA certificate. If you have not created a
1. Select **Save**. This operation may take a few minutes. > [!NOTE]
-> You can also upload a CA certificate using the `New-AzApiManagementSystemCertificate` PowerShell command.
+> - The process of assigning the certificate might take 15 minutes or more depending on the size of the deployment. The Developer SKU has downtime during the process. The Basic and higher SKUs don't have downtime during the process.
+> - You can also upload a CA certificate using the `New-AzApiManagementSystemCertificate` PowerShell command.
## <a name="step1a"> </a>Delete a CA certificate
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
Previously updated : 08/11/2021 Last updated : 01/24/2022
Configuring API Management for zone redundancy is currently supported in the fol
* Canada Central
* Central India (*)
* Central US
+* East Asia
* East US
* East US 2
* France Central
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
After providing your application's Health check path, you can monitor the health
If your app is only scaled to one instance and becomes unhealthy, it will not be removed from the load balancer because that would take your application down entirely. Scale out to two or more instances to get the re-routing benefit of Health check. If your app is running on a single instance, you can still use Health check's [monitoring](#monitoring) feature to keep track of your application's health.
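For example, a hedged way to scale out from the command line is to raise the App Service plan's instance count; the resource names here are placeholders:

```azurecli
# Scale the App Service plan to two instances so Health check can
# re-route traffic away from an unhealthy instance.
az appservice plan update --resource-group <resource-group> --name <plan-name> --number-of-workers 2
```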
-### Why are the Health check request not showing in my frontend logs?
+### Why are the Health check requests not showing in my web server logs?
-The Health check request are sent to your site internally, so the request will not show in [the frontend logs](troubleshoot-diagnostic-logs.md#enable-web-server-logging). This also means the request will have an origin of `127.0.0.1` since it the request being sent internally. You can add log statements in your Health check code to keep logs of when your Health check path is pinged.
+The Health check requests are sent to your site internally, so they will not show in [the web server logs](troubleshoot-diagnostic-logs.md#enable-web-server-logging). This also means the requests will have an origin of `127.0.0.1`, since they are sent internally. You can add log statements in your Health check code to keep logs of when your Health check path is pinged.
### Are the Health check requests sent over HTTP or HTTPS?
app-service Network Secure Outbound Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/network-secure-outbound-traffic-azure-firewall.md
Outbound traffic from your app is now routed through the integrated virtual netw
An easy way to verify your configuration is to use the `curl` command from your app's SCM debug console to test the outbound connection.

1. In a browser, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole`.
-1. In the console, run `curl -s <protocol>://<fqdn-address>` with a URL that matches the application rule you configured, To continue example in the previous screenshot, you can use **curl -s https://api.my-ip.io/api**. The following screenshot shows a successful response from the API, showing the public IP address of your App Service app.
+1. In the console, run `curl -s <protocol>://<fqdn-address>` with a URL that matches the application rule you configured. To continue the example in the previous screenshot, you can use `curl -s https://api.my-ip.io/ip`. The following screenshot shows a successful response from the API, showing the public IP address of your App Service app.
:::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/verify-outbound-traffic-fw-allow-rule.png" alt-text="Screenshot of verifying the success outbound traffic by using curl command in SCM debug console.":::
An easy way to verify your configuration is to use the `curl` command from your
## More resources

[Monitor Azure Firewall logs and metrics](../firewall/firewall-diagnostics.md).
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/key-vault-certs.md
Add-AzApplicationGatewaySslCertificate -KeyVaultSecretId $secretId -ApplicationG
Set-AzApplicationGateway -ApplicationGateway $appgw ```
+> [!NOTE]
+> If you require Application Gateway to sync the last version of the certificate with the key vault, provide the versionless `secretId` value (no hash). To do this, in the preceding example, replace the following line:
+>
+> ```
+> $secretId = $secret.Id # https://<keyvaultname>.vault.azure.net/secrets/<hash>
+> ```
+>
+> With this line:
+>
+> ```
+> $secretId = $secret.Id.Replace($secret.Version, "") # https://<keyvaultname>.vault.azure.net/secrets/
+> ```
+ Once the commands have been executed, you can navigate to your Application Gateway in the Azure portal and select the Listeners tab. Click Add Listener (or select an existing) and specify the Protocol to HTTPS. Under *Choose a certificate* select the certificate named in the previous steps. Once selected, select *Add* (if creating) or *Save* (if editing) to apply the referenced Key Vault certificate to the listener.
application-gateway Renew Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/renew-certificates.md
Previously updated : 01/20/2021 Last updated : 01/25/2022
At some point, you'll need to renew your certificates if you configured your application gateway for TLS/SSL encryption.
-You can renew a certificate associated with a listener using either the Azure portal, Azure PowerShell, or Azure CLI:
+There are two locations where certificates may exist: certificates stored in Azure Key Vault, or certificates uploaded to an application gateway.
-## Azure portal
+## Certificates on Azure Key Vault
+
+When Application Gateway is configured to use Key Vault certificates, its instances retrieve the certificate from Key Vault and install it locally for TLS termination. The instances poll Key Vault at four-hour intervals to retrieve a renewed version of the certificate, if one exists. If an updated certificate is found, the TLS/SSL certificate that's currently associated with the HTTPS listener is automatically rotated.
+
+> [!TIP]
+> Any change to Application Gateway will force a check against Key Vault to see if any new versions of certificates are available. This includes, but is not limited to, changes to Frontend IP Configurations, Listeners, Rules, Backend Pools, Resource Tags, and more. If an updated certificate is found, the new certificate will immediately be presented.
+
+Application Gateway uses a secret identifier in Key Vault to reference the certificates. For Azure PowerShell, the Azure CLI, or Azure Resource Manager, we strongly recommend that you use a secret identifier that doesn't specify a version. This way, Application Gateway will automatically rotate the certificate if a newer version is available in your key vault. An example of a secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`.
+
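As an illustrative sketch only (the resource group, gateway, and certificate names are assumptions), referencing a versionless secret from the Azure CLI might look like this:

```azurecli
# Reference the Key Vault secret without a version so the gateway
# rotates to renewed certificate versions automatically.
az network application-gateway ssl-cert create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name MyKeyVaultCert \
  --key-vault-secret-id "https://myvault.vault.azure.net/secrets/mysecret/"
```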
+## Certificates on an application gateway
+
+Application Gateway supports certificate upload without the need to configure Azure Key Vault. To renew the uploaded certificates, use the following steps for the Azure portal, Azure PowerShell, or Azure CLI.
+
+### Azure portal
To renew a listener certificate from the portal, navigate to your application gateway listeners. Select the listener that has a certificate that needs to be renewed, and then select **Renew or edit selected certificate**.
Select the listener that has a certificate that needs to be renewed, and then se
Upload your new PFX certificate, give it a name, type the password, and then select **Save**.
-## Azure PowerShell
+### Azure PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Set-AzApplicationGatewaySslCertificate -Name <oldcertname> `
Set-AzApplicationGateway -ApplicationGateway $appgw ```
-## Azure CLI
+### Azure CLI
```azurecli-interactive az network application-gateway ssl-cert update \
az network application-gateway ssl-cert update \
--cert-password "<password>" ``` ++ ## Next steps
-To learn how to configure TLS Offloading with Azure Application Gateway, see [Configure TLS Offload](./create-ssl-portal.md)
+To learn how to configure TLS Offloading with Azure Application Gateway, see [Configure TLS Offload](./create-ssl-portal.md).
application-gateway Self Signed Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/self-signed-certificates.md
Create your root CA certificate using OpenSSL.
### Create a Root Certificate and self-sign it
-1. Use the following commands to generate the csr and the certificate.
+1. Use the following command to generate the Certificate Signing Request (CSR).
``` openssl req -new -sha256 -key contoso.key -out contoso.csr ```
-
- ```
- openssl x509 -req -sha256 -days 365 -in contoso.csr -signkey contoso.key -out contoso.crt
- ```
- The previous commands create the root certificate. You'll use this to sign your server certificate.
1. When prompted, type the password for the root key, and the organizational information for the custom CA such as Country/Region, State, Org, OU, and the fully qualified domain name (this is the domain of the issuer). ![create root certificate](media/self-signed-certificates/root-cert.png)
+1. Use the following command to generate the Root Certificate.
+
+ ```
+ openssl x509 -req -sha256 -days 365 -in contoso.csr -signkey contoso.key -out contoso.crt
+ ```
+ The previous command creates the root certificate. You'll use it to sign your server certificate.
+ ## Create a server certificate Next, you'll create a server certificate using OpenSSL.
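As a hedged sketch of the same OpenSSL flow for the server certificate (the *fabrikam* file names are illustrative assumptions), you generate a key, create a CSR, and sign it with the root certificate created above:

```
# Generate the server private key (file name is illustrative).
openssl genrsa -out fabrikam.key 2048

# Create the server CSR from that key.
openssl req -new -sha256 -key fabrikam.key -out fabrikam.csr

# Sign the server CSR with the root certificate and key created earlier.
openssl x509 -req -in fabrikam.csr -CA contoso.crt -CAkey contoso.key \
  -CAcreateserial -out fabrikam.crt -days 365 -sha256
```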
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/language-support.md
Language| Locale code |
This technology is currently available for US driver licenses and the biographical page from international passports (excluding visa and other travel documents).
-> [!div class="nextstepaction"]
-> [Try Form Recognizer](https://aka.ms/fott-2.1-ga)
- ## General Document Language| Locale code |
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 01/04/2022 Last updated : 01/24/2022 recommendations: false
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-[Reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/jav)
+[Reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/jav)
Get started with Azure Form Recognizer using the Java programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract and analyze form fields, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
This quickstart uses the Gradle dependency manager. You can find the client libr
mavenCentral() } dependencies {
- implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.2")
+ implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.3")
} ```
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/overview.md
Azure Automation supports management throughout the lifecycle of your infrastruc
* **Dev/test automation scenarios** - Start and stop resources, scale resources, etc.
* **Governance related automation** - Automatically apply or update tags, locks, etc.
* **Azure Site Recovery** - orchestrate pre/post scripts defined in a Site Recovery DR workflow.
-* **Windows Virtual Desktop** - orchestrate scaling of VMs or start/stop VMs based on utilization.
+* **Azure Virtual Desktop** - orchestrate scaling of VMs or start/stop VMs based on utilization.
Depending on your requirements, one or more of the following Azure services integrate with or complement Azure Automation to help fulfill them:
You can review the prices associated with Azure Automation on the [pricing](http
## Next steps > [!div class="nextstepaction"]
-> [Create an Automation account](./quickstarts/create-account-portal.md)
+> [Create an Automation account](./quickstarts/create-account-portal.md)
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
Title: Create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
-description: Create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
+description: You can create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal.
# Create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
-This document describes the steps to create a PostgreSQL Hyperscale server group on Azure Arc from the Azure portal.
+You can create an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal. To do so, follow the steps in this article.
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)] [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Getting started
-If you are already familiar with the topics below, you may skip this paragraph.
-There are important topics you may want read before you proceed with creation:
+## Get started
+
+You might want to read the following important topics before you proceed. (If you're already familiar with them, you can skip ahead.)
+
- [Overview of Azure Arc-enabled data services](overview.md)
- [Connectivity modes and requirements](connectivity.md)
- [Storage configuration and Kubernetes storage concepts](storage-configuration.md)
- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
-If you prefer to try out things without provisioning a full environment yourself, get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
+If you prefer to try things out without provisioning a full environment yourself, get started quickly with [Azure Arc jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/). You can do this on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE), or in an Azure virtual machine (VM).
+
+## Deploy an Azure Arc data controller
+Before you deploy an Azure Arc-enabled PostgreSQL Hyperscale server group that you operate from the Azure portal, you must first deploy an Azure Arc data controller. You must configure the data controller to use the *directly connected* mode.
-## Deploy an Arc data controller configured to use the Direct connectivity mode
+To deploy an Azure Arc data controller, complete the instructions in these articles:
-Requirement: before you deploy an Azure Arc-enabled PostgreSQL Hyperscale server group that you operate from the Azure portal you must first deploy an Azure Arc data controller configured to use the *Direct* connectivity mode.
-To deploy an Arc data controller, complete the instructions in these articles:
-1. [Deploy data controller - direct connect mode (prerequisites)](create-data-controller-direct-prerequisites.md)
-1. [Deploy Azure Arc data controller in Direct connect mode from Azure portal](create-data-controller-direct-azure-portal.md)
+1. [Deploy data controller - directly connected mode (prerequisites)](create-data-controller-direct-prerequisites.md)
+1. [Deploy Azure Arc data controller in directly connected mode from Azure portal](create-data-controller-direct-azure-portal.md)
+## Temporary step for OpenShift users only
-## Preliminary and temporary step for OpenShift users only
-Implement this step before moving to the next step. To deploy PostgreSQL Hyperscale server group onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL Hyperscale server group. The security context constraint (SCC) arc-data-scc is the one you added when you deployed the Azure Arc data controller.
+If you're using Red Hat OpenShift, you must implement this step before moving to the next one. To deploy an Azure Arc-enabled PostgreSQL Hyperscale server group onto Red Hat OpenShift in a project other than the default, run the following command against your cluster. This command updates the security constraints and grants the necessary privileges to the service accounts that will run your Hyperscale server group. The security context constraint (SCC) called `arc-data-scc` is the one you added when you deployed the Azure Arc data controller.
```Console
oc adm policy add-scc-to-user arc-data-scc -z <server-group-name> -n <namespace name>
```
-**Server-group-name is the name of the server group you will create during the next step.**
+`server-group-name` is the name of the server group you will create during the next step.
For more details on SCCs in OpenShift, refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.2/authentication/managing-security-context-constraints.html).
-Proceed to the next step.
- ## Deploy an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal
-To deploy and operate an Azure Arc-enabled Postgres Hyperscale server group from the Azure portal you must deploy it to an Arc data controller configured to use the *Direct* connectivity mode.
+You have now deployed an Azure Arc data controller that uses the directly connected mode, as described earlier in the article. You can't operate an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal if you deployed it to an Azure Arc data controller configured to use the *indirectly connected* mode.
-> [!IMPORTANT]
-> You can not operate an Azure Arc-enabled PostgreSQL Hyperscale server group from the Azure portal if you deployed it to an Azure Arc data controller configured to use the *Indirect* connectivity mode.
+Next, you choose one of the options in the following sections.
-After you deployed an Arc data controller enabled for Direct connectivity mode, you may chose one the following 3 options to deploy a Azure Arc-enabled Postgres Hyperscale server group:
+### Deploy from Azure Marketplace
-### Option 1: Deploy from the Azure Marketplace
-1. Open a browser to the following URL [https://portal.azure.com](https://portal.azure.com)
-2. In the search window at the top of the page search for "*azure arc postgres*" in the Azure Market Place and select **Azure Arc-enabled PostgreSQL Hyperscale server groups**.
-3. In the page that opens, click **+ Create** at the top left corner.
-4. Fill in the form like you deploy an other Azure resource.
+1. Go to [the Azure portal](https://portal.azure.com).
+2. In Azure Marketplace, search for **azure arc postgres**, and select **Azure Arc-enabled PostgreSQL Hyperscale server groups**.
+3. Select **+ Create** in the upper-left corner of the page.
+4. Fill in the form, as you would for any other Azure resource.
-### Option 2: Deploy from the Azure Database for PostgreSQL deployment option page
-1. Open a browser to the following URL https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer.
-2. Click the tile at the bottom right. It is titled: Azure Arc-enabled PostgreSQL Hyperscale (Preview).
-3. Fill in the form like you deploy an other Azure resources.
+### Deploy from Azure Database for PostgreSQL deployment option page
-### Option 3: Deploy from the Azure Arc center
-1. Open a browser to the following URL https://ms.portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview
-1. From the center of the page, click [Deploy] under the tile titled *Deploy Azure services* and then click [Deploy] in tile titled PostgreSQL Hyperscale (Preview).
-2. or, from the navigation pane on the left of the page, in the Services section, click [PostgreSQL Hyperscale (Preview)] and then click [+ Create] at the top left of the pane.
+1. Go to the following URL: `https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer`.
+1. Select **Azure Arc-enabled PostgreSQL Hyperscale (Preview)** in the lower right of the page.
+1. Fill in the form, as you would for any other Azure resource.
+### Deploy from the Azure Arc center
-#### Important parameters you should consider:
+1. Go to the following URL: `https://ms.portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview`.
+1. From the **Deploy Azure services** tile, select **Deploy**. Then, from the **PostgreSQL Hyperscale (Preview)** tile, select **Deploy**. Alternatively, from the left pane, in the **Services** section, select **PostgreSQL Hyperscale (Preview)**. Then select **+ Create** in the upper left of the pane.
-- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
+### Important considerations
+Be aware of the following considerations when you're deploying:
+- **The number of worker nodes you want to deploy to scale out and potentially achieve better performance.** For more information, see [Concepts for distributing data with Azure Arc-enabled PostgreSQL Hyperscale server group](concepts-distributed-postgres-hyperscale.md).
-|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
-|||||
-|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
-| | | | |
+ The following table indicates the range of supported values, and what form of deployment you get with them. For example, if you want to deploy a server group with two worker nodes, indicate *2*. This will create three pods, one for the coordinator node or instance, and two for the worker nodes or instances (one for each of the workers).
-While indicating 1 worker works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get 2 instances of Postgres: 1 coordinator and 1 worker. With this setup you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
+ |You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
+ |||||
+ |A scaled-out form of Azure Arc-enabled PostgreSQL Hyperscale to satisfy the scalability needs of your applications. |Three or more instances of Azure Arc-enabled PostgreSQL Hyperscale. One is the coordinator, and *n* are workers, with *n >=2*. |*n*, with *n>=2*. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A basic form of Azure Arc-enabled PostgreSQL Hyperscale. You want to do functional validation of your application, at minimum cost. You don't need performance and scalability validation. |One instance of Azure Arc-enabled PostgreSQL Hyperscale. The instance serves as both coordinator and worker. |*0*, and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A simple instance of Azure Arc-enabled PostgreSQL Hyperscale that is ready to scale out when you need it. |One instance of Azure Arc-enabled PostgreSQL Hyperscale. It isn't yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes, and distribute the data. |*0*. |The Citus extension that provides the Hyperscale capability is present on your deployment, but isn't yet loaded. |
+ | | | | |
-- **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.
- - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
- - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- - to set the storage class for the backups: in this Preview of the Azure Arc-enabled PostgreSQL Hyperscale there are two ways to set storage classes depending on what types of backup/restore operations you want to do. We are working on simplifying this experience. You will either indicate a storage class or a volume claim mount. A volume claim mount is a pair of an existing persistent volume claim (in the same namespace) and volume type (and optional metadata depending on the volume type) separated by colon. The persistent volume will be mounted in each pod for the PostgreSQL server group.
- - if you want plan to do only full database restores, set the parameter `--storage-class-backups` or `-scb` followed by the name of the storage class.
- - if you plan to do both full database restores and point in time restores, set the parameter `--volume-claim-mounts` or `--volume-claim-mounts` followed by the name of a volume claim and a volume type.
+ Although you can indicate *1* worker, it's not a good idea to do so. This deployment doesn't provide you with much value. With it, you get two instances of Azure Arc-enabled PostgreSQL Hyperscale: one coordinator and one worker. You don't scale out the data because you deploy a single worker. As such, you don't see an increased level of performance and scalability.
+- **The storage classes you want your server group to use.** It's important to set the storage class right at the time you deploy a server group. You can't change this setting after you deploy. If you don't indicate storage classes, you get the storage classes of the data controller by default.
+ - To set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd`, followed by the name of the storage class.
+ - To set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl`, followed by the name of the storage class.
+ - To set the storage class for the backups, you either indicate a storage class or a volume claim mount. A *volume claim mount* is a pair of an existing persistent volume claim (in the same namespace) and a volume type (and optional metadata depending on the volume type), separated by a colon. The persistent volume is mounted in each pod for the Azure Arc-enabled PostgreSQL Hyperscale server group.
+ - If you want to do only full database restores, set the parameter `--storage-class-backups` or `-scb`, followed by the name of the storage class.
+ - If you want to do both full database restores and point-in-time restores, set the parameter `--volume-claim-mounts`, followed by the name of a volume claim and a volume type.
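Putting these parameters together, here's a hedged sketch (assuming the `az postgres arc-server create` command from the `arcdata` CLI extension; the server name, worker count, and storage class names are illustrative, and parameter support can vary by CLI version):

```azurecli
# Illustrative only: create a server group with two workers and
# explicit storage classes for data, logs, and backups.
az postgres arc-server create -n postgres01 --workers 2 \
  --storage-class-data managed-premium \
  --storage-class-logs managed-premium \
  --storage-class-backups managed-premium \
  --k8s-namespace arc --use-k8s
```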
## Next steps
-- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
-- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially:
- * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
- * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
- * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
- * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
- * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
-
- > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
-
+- [Get connection endpoints and connection strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- [Scale out your Azure Arc-enabled for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md) - [Storage configuration and Kubernetes storage concepts](storage-configuration.md)-- [Expanding Persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)
+- [Expanding persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)
- [Kubernetes resource model](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/resources.md#resource-quantities)
azure-arc Limitations Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/limitations-managed-instance.md
At this time, the business critical service tier is public preview. The general
- Transactional replication is currently not supported. - Log shipping is currently blocked.-- Only SQL Server authentication is supported. ## Roles and responsibilities
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
description: "This article provides a conceptual overview of GitOps in Azure for
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 11/22/2021 Last updated : 1/24/2022
# GitOps in Azure
-Azure provides configuration management capability using GitOps in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters. You can easily enable and use GitOps in these clusters.
+Azure provides configuration management capability using GitOps in Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters. You can easily enable and use GitOps in these clusters.
With GitOps, you declare the desired state of your Kubernetes clusters in files in Git repositories. The Git repositories may contain the following files:
GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](h
:::image type="content" source="media/gitops/flux2-extension-install-aks.png" alt-text="Diagram showing the installation of the Flux extension for Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-extension-install-aks.png":::
-GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. You can install the `microsoft.flux` extension manually using the portal or the Azure CLI (*az k8s-extension create --extensionType=microsoft.flux*) or have it installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in the cluster. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created.
+GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` [cluster extension](./conceptual-extensions.md) resource. The `microsoft.flux` extension must be installed in the cluster before one or more `fluxConfigurations` can be created. The extension will be installed automatically when you create the first `Microsoft.KubernetesConfiguration/fluxConfigurations` in a cluster, or you can install it manually using the portal, the Azure CLI (*az k8s-extension create --extensionType=microsoft.flux*), ARM template, or REST API.
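A sketch of the manual CLI route follows; the resource group and cluster names are placeholders, and you'd use `--cluster-type managedClusters` for an AKS cluster:

```azurecli
# Manually install the microsoft.flux cluster extension on an
# Azure Arc-enabled Kubernetes cluster.
az k8s-extension create --resource-group my-rg --cluster-name my-cluster \
  --cluster-type connectedClusters --name flux --extension-type microsoft.flux
```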
The `microsoft.flux` extension installs by default the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed and can optionally install the Flux image-automation and image-reflector controllers, which provide functionality around updating and retrieving Docker images.
The `microsoft.flux` extension installs by default the [Flux controllers](https:
:::image type="content" source="media/gitops/flux2-config-install.png" alt-text="Diagram showing the installation of a Flux configuration in an Azure Arc-enabled Kubernetes or Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-config-install.png":::
-With the `microsoft.flux` extension installed in your cluster, you can then create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
+You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos or Bucket sources. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `microsoft.flux` extension, manage the GitOps configuration process.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `m
* Creates private/public key pair that exists for the lifetime of the `fluxConfigurations`. This key is used for authentication if the URL is SSH based and if the user doesn't provide their own private key during creation of the configuration. * Creates custom authentication secret based on user-provided private-key/http basic-auth/known-hosts/no-auth data. * Sets up RBAC (service account provisioned, role binding created/assigned, role created/assigned).
-* Creates `GitRepository` custom resource and `Kustomization` custom resources from the information in the `FluxConfig` custom resource.
+* Creates `GitRepository` or `Bucket` custom resource and `Kustomization` custom resources from the information in the `FluxConfig` custom resource.
-Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the Git repository and the sync target in the Git repository for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster.
+Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository or Bucket) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
> [!NOTE] > * `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure.
-> * Sensitive customer inputs like private key, known hosts content, HTTPS username, and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, assure that your clusters connect with Azure within 48 hours.
+> * Sensitive customer inputs like private key and token/password are stored for less than 48 hours in the Kubernetes Configuration service. If you update any of these values in Azure, assure that your clusters connect with Azure within 48 hours.
## Next steps
-Advance to the next tutorial to learn how to enable GitOps on your Azure Arc-enabled Kubernetes or AKS clusters
+Advance to the next tutorial to learn how to enable GitOps on your AKS or Azure Arc-enabled Kubernetes clusters.
> [!div class="nextstepaction"] * [Enable GitOps with Flux](./tutorial-use-gitops-flux2.md)
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 12/15/2021 Last updated : 1/24/2022
# Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or AKS clusters (public preview)
-GitOps with Flux v2 can be enabled in Azure Arc-enabled Kubernetes connected clusters or Azure Kubernetes Service (AKS) managed clusters as a cluster extension. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
+GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clusters or Azure Arc-enabled Kubernetes connected clusters as a cluster extension. After the `microsoft.flux` cluster extension is installed, you can create one or more `fluxConfigurations` resources that sync your Git repository sources to the cluster and reconcile the cluster to the desired state. With GitOps, you can use your Git repository as the source of truth for cluster configuration and application deployment.
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
az extension list -o table
Experimental ExtensionType Name Path Preview Version - -- -- -- -- --
-False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.1.7
-False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.2.0
-False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.0.0
+False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.0
+False whl k8s-configuration C:\Users\somename\.azure\cliextensions\k8s-configuration False 1.4.1
+False whl k8s-extension C:\Users\somename\.azure\cliextensions\k8s-extension False 1.0.4
``` ## Apply a Flux configuration by using the Azure CLI
In the following example:
* The resource group that contains the cluster is `flux-demo-rg`.
* The name of the Azure Arc cluster is `flux-demo-arc`.
-* The cluster type is Azure Arc (`connectedClusters`), but this example can also work with AKS (`managedClusters`).
+* The cluster type is Azure Arc (`-t connectedClusters`), but this example also works with AKS (`-t managedClusters`).
* The name of the Flux configuration is `gitops-demo`.
* The namespace for configuration installation is `gitops-demo`.
* The URL for the public Git repository is `https://github.com/fluxcd/flux2-kustomize-helm-example`.
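Assembled from those values, a hedged sketch of the create command might look like the following; the kustomization name and path are illustrative assumptions about the sample repository's layout:

```azurecli
# Create a Flux configuration that syncs the sample repository into
# the gitops-demo namespace (kustomization details are illustrative).
az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -t connectedClusters \
  -n gitops-demo --namespace gitops-demo --scope cluster \
  -u https://github.com/fluxcd/flux2-kustomize-helm-example --branch main \
  --kustomization name=infra path=./infrastructure prune=true
```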
For an Azure Arc-enabled Kubernetes cluster, use this command:
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes ```
-For an AKS cluster, use this command:
-
-```console
-az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t managedClusters --yes
-```
+For an AKS cluster, use the same command, but with `-t managedClusters` replacing `-t connectedClusters`.
### Control which controllers are deployed with the Flux cluster extension
Group
This command group is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Subgroups:
- kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes
- configurations.
+ deployed-object : Commands to see deployed objects associated with Flux v2 Kubernetes
+ configurations.
+ kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes
+ configurations.
Commands:
- create : Create a Kubernetes Flux v2 Configuration.
- delete : Delete a Kubernetes Flux v2 Configuration.
- list : List Kubernetes Flux v2 Configurations.
- show : Show a Kubernetes Flux v2 Configuration.
- update : Update a Kubernetes Flux v2 Configuration.
+ create : Create a Flux v2 Kubernetes configuration.
+ delete : Delete a Flux v2 Kubernetes configuration.
+ list : List all Flux v2 Kubernetes configurations.
+ show : Show a Flux v2 Kubernetes configuration.
+ update : Update a Flux v2 Kubernetes configuration.
``` Here are the parameters for the `k8s-configuration flux create` CLI command:
az k8s-configuration flux create -h
This command is from the following extension: k8s-configuration Command
- az k8s-configuration flux create : Create a Kubernetes Flux v2 Configuration.
+ az k8s-configuration flux create : Create a Flux v2 Kubernetes configuration.
Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Arguments --cluster-name -c [Required] : Name of the Kubernetes cluster.
- --cluster-type -t [Required] : Specify Arc connectedClusters or AKS managedClusters.
+ --cluster-type -t [Required] : Specify Arc connected clusters or AKS managed clusters.
Allowed values: connectedClusters, managedClusters. --name -n [Required] : Name of the flux configuration. --resource-group -g [Required] : Name of resource group. You can configure the default group using `az configure --defaults group=<name>`.
+ --url -u [Required] : URL of the source to reconcile.
+ --bucket-insecure : Communicate with a bucket without TLS. Allowed values: false,
+ true.
+ --bucket-name : Name of the S3 bucket to sync.
--interval --sync-interval : Time between reconciliations of the source on the cluster.
- --kind : Source kind to reconcile. Allowed values: git. Default: git.
+ --kind : Source kind to reconcile. Allowed values: bucket, git.
+ Default: git.
--kustomization -k : Define kustomizations to sync sources with parameters ['name', 'path', 'depends_on', 'timeout', 'sync_interval', 'retry_interval', 'prune', 'force'].
Arguments
associated with this configuration. Allowed values: false, true. --timeout : Maximum time to reconcile the source before timing out.
- --url -u : URL of the git repo source to reconcile.
Auth Arguments
+ --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
+ namespace to use for communication to the source.
+
+Bucket Auth Arguments
+ --bucket-access-key : Access Key ID used to authenticate with the bucket.
+ --bucket-secret-key : Secret Key used to authenticate with the bucket.
+
+Git Auth Arguments
--https-ca-cert : Base64-encoded HTTPS CA certificate for TLS communication with private repository sync. --https-ca-cert-file : File path to HTTPS CA certificate file for TLS communication
Auth Arguments
required to access private Git instances. --known-hosts-file : File path to known_hosts contents containing public SSH keys required to access private Git instances.
- --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
- namespace to use for communication to the source.
--ssh-private-key : Base64-encoded private ssh key for private repository sync. --ssh-private-key-file : File path to private ssh key for private repository sync.
Global Arguments
--verbose : Increase logging verbosity. Use --debug for full debug logs. Examples
- Create a Kubernetes v2 Flux Configuration
+ Create a Flux v2 Kubernetes configuration
az k8s-configuration flux create --resource-group my-resource-group \ --cluster-name mycluster --cluster-type connectedClusters \ --name myconfig --scope cluster --namespace my-namespace \ --kind git --url https://github.com/Azure/arc-k8s-demo \
- --branch main --kustomization name=my-kustomization```
+ --branch main --kustomization name=my-kustomization
+
+ Create a Kubernetes v2 Flux Configuration with Bucket Source Kind
+ az k8s-configuration flux create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters \
+ --name myconfig --scope cluster --namespace my-namespace \
+ --kind bucket --url https://bucket-provider.minio.io \
+ --bucket-name my-bucket --kustomization name=my-kustomization \
+ --bucket-access-key my-access-key --bucket-secret-key my-secret-key
``` ### Configuration general arguments
Examples
| `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`. | `--suspend` | flag | Suspends all source and kustomize reconciliations defined in this Flux configuration. Reconciliations active at the time of suspension will continue. |
-### Git repository arguments
+### Source general arguments
| Parameter | Format | Notes | | - | - | - |
-| `--kind` | String | Source kind to reconcile. Default: `git`. Currently, only `git` is supported. |
-| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to reconcile the source before timing out. Default: `10m`. |
-| `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Frequency of reconciliations of the Git source on the cluster. Default: `10m`. |
+| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`. Default: `git`. |
+| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to attempt to reconcile the source before timing out. Default: `10m`. |
+| `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Time between reconciliations of the source on the cluster. Default: `10m`. |
-### Git repository reference arguments
+### Git repository source reference arguments
| Parameter | Format | Notes | | - | - | - |
Just like private keys, you can provide your `known_hosts` content directly or i
| `--https-ca-cert` | Base64 string | CA certificate for TLS communication. | | `--https-ca-cert-file` | Full path to local file | Provide CA certificate content in a local file. |
-### Local secret for authentication
+### Bucket source arguments
+If you use a `bucket` source instead of a `git` source, here are the bucket-specific command arguments.
| Parameter | Format | Notes | | - | - | - |
-| `--local-auth-ref` | String | Local reference to a Kubernetes secret in the Flux configuration namespace to use for communication with the Git source. |
+| `--url` `-u` | URL String | The URL for the `bucket`. Formats supported: http://, https://, s3://. |
+| `--bucket-name` | String | Name of the `bucket` to sync. |
+| `--bucket-access-key` | String | Access Key ID used to authenticate with the `bucket`. |
+| `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. |
+| `--bucket-insecure` | Boolean | Communicate with the bucket without TLS. Defaults to false; specifying the flag sets it to true. |
+
+### Local secret for authentication with source
+You can use a local Kubernetes secret for authentication with the `git` or `bucket` source.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--local-auth-ref` `--local-ref` | String | Local reference to a Kubernetes secret in the Flux configuration namespace to use for authentication with the source. |
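
For instance, here's a minimal sketch of creating a configuration that authenticates with an existing secret; the secret name `my-secret` is illustrative, and the remaining values reuse the earlier example:

```azurecli
# Reference an existing Kubernetes secret in the configuration namespace
# instead of passing credentials on the command line.
az k8s-configuration flux create --resource-group my-resource-group \
    --cluster-name mycluster --cluster-type connectedClusters \
    --name myconfig --scope cluster --namespace my-namespace \
    --kind git --url https://github.com/Azure/arc-k8s-demo \
    --branch main --kustomization name=my-kustomization \
    --local-auth-ref my-secret
```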
For HTTPS authentication, you create a secret (in the same namespace where the Flux configuration will be) with the username and password/key:
az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -
```

>[!NOTE]
->If you need Flux to access the Git repository through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli#4a-connect-using-an-outbound-proxy-server).
+>If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli#4a-connect-using-an-outbound-proxy-server).
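
As a sketch, the secret referenced above could be created with kubectl before running the command; the secret name, namespace, and key names are illustrative and assume the layout Flux expects for HTTPS credentials:

```console
# Create a generic secret holding HTTPS credentials in the configuration namespace
kubectl create secret generic my-secret \
    --namespace my-namespace \
    --from-literal=username=<username> \
    --from-literal=password=<password-or-key>
```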
### Git implementation
Group
v2 Kubernetes configurations. Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Commands:
- create : Create a Kustomization associated with a Kubernetes Flux v2 Configuration.
- delete : Delete a Kustomization associated with a Kubernetes Flux v2 Configuration.
- list : List Kustomizations associated with a Kubernetes Flux v2 Configuration.
- show : Show a Kustomization associated with a Flux v2 Configuration.
- update : Update a Kustomization associated with a Kubernetes Flux v2 Configuration.
+ create : Create a Kustomization associated with a Flux v2 Kubernetes configuration.
+ delete : Delete a Kustomization associated with a Flux v2 Kubernetes configuration.
+ list : List Kustomizations associated with a Flux v2 Kubernetes configuration.
+ show : Show a Kustomization associated with a Flux v2 Kubernetes configuration.
+ update : Update a Kustomization associated with a Flux v2 Kubernetes configuration.
```

Here are the kustomization creation options:
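
As an illustration, here's a sketch of adding a kustomization to an existing configuration; the argument names are assumed from the `az k8s-configuration flux kustomization` command group, and all values are placeholders:

```azurecli
# Add a kustomization that syncs a path from the source of an existing Flux configuration
az k8s-configuration flux kustomization create --resource-group my-resource-group \
    --cluster-name mycluster --cluster-type connectedClusters \
    --name myconfig --kustomization-name my-kustomization-2 \
    --path ./apps
```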
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
The Action Groups Secure Webhook action enables you to take advantage of Azure A
- Modify the PowerShell script's Connect-AzureAD call to use your Azure AD Tenant ID.
- Modify the PowerShell script's variable $myAzureADApplicationObjectId to use the Object ID of your Azure AD Application.
- Run the modified script.
+
+ > [!NOTE]
+ > The service principal needs to be a member of the **Owner** role of the Azure AD application to be able to create or modify the Secure Webhook action in the action group.
3. Configure the Action Group Secure Webhook action.
   - Copy the value $myApp.ObjectId from the script and enter it in the Application Object ID field in the Webhook action definition.
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-api-switch.md
Title: Upgrade to the current Azure Monitor Log Alerts API
-description: Learn how to switch to the log alerts ScheduledQueryRules API
+ Title: Upgrade legacy rules management to the current Azure Monitor Log Alerts API
+description: Learn how to switch log alerts management to the ScheduledQueryRules API
Previously updated : 09/22/2020 Last updated : 01/25/2022
-# Upgrade to the current Log Alerts API from legacy Log Analytics Alert API
+# Upgrade rule management from the legacy Log Analytics Alert API to the current Log Alerts API
> [!NOTE]
> This article is only relevant to Azure public (**not** to Azure Government or Azure China cloud).

> [!NOTE]
-> Once a user chooses to switch preference to the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules) it is not possible to revert back to the older [legacy Log Analytics Alert API](./api-alerts.md).
+> Once a user chooses to switch management of legacy rules to the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), it is not possible to revert back to the older [legacy Log Analytics Alert API](./api-alerts.md).
-In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to manage log alert rules. Current workspaces use [ScheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules). This article describes the benefits and the process of switching from the legacy API to the current API.
+In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to manage log alert rules. Currently, workspaces use the [ScheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) for new rules. This article describes the benefits of switching and the process of moving legacy log alert rule management from the legacy API to the current API.
## Benefits
+- Manage all log rules in one API.
- Single template for creation of alert rules (previously needed three separate templates).
- Single API for all Azure resources log alerting.
-- Support for stateful and 1-minute log alert previews.
-- [PowerShell cmdlets support](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell).
-- Alignment of severities with all other alert types.
-- Ability to create [cross workspace log alert](../logs/cross-workspace-query.md) that span several external resources like Log Analytics workspaces or Application Insights resources.
-- Users can specify dimensions to split the alerts.
-- Log alerts have extended period of up to two days of data (previously limited to one day).
+- Support for stateful and 1-minute log alert previews for legacy rules.
+- [PowerShell cmdlets](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell) and [Azure CLI](./alerts-log.md#manage-log-alerts-using-cli) support for switched rules.
+- Alignment of severities with all other alert types and newer rules.
+- Ability to create [cross workspace log alerts](../logs/cross-workspace-query.md) that span several external resources, like Log Analytics workspaces or Application Insights resources, for switched rules.
+- Users can specify dimensions to split the alerts for switched rules.
+- Log alerts have an extended period of up to two days of data (previously limited to one day) for switched rules.
## Impact

-- All new rules must be created/edited with the current API. See [sample use via Azure Resource Template](alerts-log-create-templates.md) and [sample use via PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell).
+- All switched rules must be created/edited with the current API. See [sample use via Azure Resource Template](alerts-log-create-templates.md) and [sample use via PowerShell](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell).
- As rules become Azure Resource Manager tracked resources in the current API and must be unique, rules resource ID will change to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names of the alert rule will remain unchanged.

## Process
You can also use the [ARMClient](https://github.com/projectkudu/ARMClient) tool:
armclient GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
```
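
The same check could also be sketched with `az rest`, which isn't shown in the original article; the placeholders match the armclient call above:

```azurecli
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview"
```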
-If the Log Analytics workspace was switched to [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules), the response is:
+If the Log Analytics workspace was switched to [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), the response is:
```json
{
azure-monitor Alerts Log Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-create-templates.md
Last updated 07/12/2021
Log alerts allow users to use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resource logs at a set frequency, and fire an alert based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). [Learn more about functionality and terminology of log alerts](./alerts-unified-log.md).
-This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure [log alerts](./alerts-unified-log.md) in Azure Monitor. Resource Manager templates enable you to programmatically set up alerts in a consistent and reproducible way across your environments. Log alerts are created in the `Microsoft.Insights/scheduledQueryRules` resource provider. See API reference for [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules).
+This article shows how you can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure [log alerts](./alerts-unified-log.md) in Azure Monitor. Resource Manager templates enable you to programmatically set up alerts in a consistent and reproducible way across your environments. Log alerts are created in the `Microsoft.Insights/scheduledQueryRules` resource provider. See API reference for [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules).
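
Once a template is authored, deployment is a single CLI call; a minimal sketch, where the file name `log-alert.json` is illustrative:

```azurecli
# Deploy the log alert rule template to a resource group
az deployment group create \
    --resource-group my-resource-group \
    --template-file log-alert.json
```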
The basic steps are as follows:
## Template for all resource types (from API version 2021-08-01)
-[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules/create-or-update) template for all resource types (sample data set as variables):
+[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update) template for all resource types (sample data set as variables):
```json
{
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
## Simple template (up to API version 2018-04-16)
-[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules/create-or-update) template based on [number of results log alert](./alerts-unified-log.md#count-of-the-results-table-rows) (sample data set as variables):
+[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update) template based on [number of results log alert](./alerts-unified-log.md#count-of-the-results-table-rows) (sample data set as variables):
```json
{
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
## Template with cross-resource query (up to API version 2018-04-16)
-[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules/create-or-update) template based on [metric measurement](./alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) that queries [cross-resources](../logs/cross-workspace-query.md) (sample data set as variables):
+[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update) template based on [metric measurement](./alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) that queries [cross-resources](../logs/cross-workspace-query.md) (sample data set as variables):
```json
{
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
```

> [!IMPORTANT]
-> When using cross-resource query in log alert, the usage of [authorizedResources](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules/create-or-update#source) is mandatory and user must have access to the list of resources stated
+> When using a cross-resource query in a log alert, the usage of [authorizedResources](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update#source) is mandatory, and the user must have access to the list of resources stated.
This JSON can be saved and deployed using [Azure Resource Manager in Azure portal](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template).
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log-query.md
workspace('Contoso-workspace1').Perf
```

>[!NOTE]
-> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, you can learn about switching [here](../alerts/alerts-log-api-switch.md).
+> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, you can learn about switching [here](../alerts/alerts-log-api-switch.md).
## Examples

The following examples include log queries that use `search` and `union` and provide steps you can use to modify these queries for use in alert rules.
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-log.md
description: Use Azure Monitor to create, view, and manage log alert rules
Previously updated : 12/14/2021 Last updated : 01/25/2022

# Create, view, and manage log alerts using Azure Monitor
You can also [create log alert rules using Azure Resource Manager templates](../
> This article describes creating alert rules using the new alert rule wizard. Please note these changes in the new alert rule experience:
> - Search results are not included with the triggered alert and its associated notifications. The alert contains a link to the search results in Logs.
> - The new alert rule wizard does not include the option to customize the triggered alert's email or to include a custom JSON payload.
-> - The new alert rule wizard does not currently support a frequency of 1 minute. 1 minute alert frequency will be supported soon.
1. In the [portal](https://portal.azure.com/), select the relevant resource.
1. In the Resource menu, under **Monitoring**, select **Alerts**.
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
description: Common issues, errors, and resolutions for log alert rules in Azure
Previously updated : 12/08/2021 Last updated : 01/25/2022
Logs are semi-structured data and are inherently more latent than metrics. If yo
To mitigate latency, the system retries the alert evaluation multiple times. After the data arrives, the alert fires; in most cases, the alert time doesn't equal the log record time.
-### Incorrect query time range configured
+### Actions are muted or alert rule is defined to resolve automatically
-Query time range is set in the rule condition definition. For workspaces and Application Insights, this field is called **Period**. For all other resource types, it's called **Override query time range**. Like in log analytics, the time range limits query data to the specified period. Even if the **ago** command is used in the query, the time range will apply.
+Log alerts provide an option to mute fired alert actions for a set amount of time using **Mute actions**, and to fire only once per condition met using **Automatically resolve alerts**.
-For example, a query scans 60 minutes when the time range is 60 minutes, even if the text contains **ago(1d)**. The time range and query time filtering need to match. In the example case, changing the **Period** / **Override query time range** to one day, works as expected.
-
-![Time period](media/alerts-troubleshoot-log/LogAlertTimePeriod.png)
-
-### Actions are muted in the alert rule
-
-Log alerts provide an option to mute fired alert actions for a set amount of time. In workspaces and Application Insights, this field is called **Suppress alerts**. In all other resource types, it's called **Mute actions**.
-
-A common issue is that you think that the alert didn't fire the actions because of a service issue, even though it was muted by the rule configuration.
+A common issue is thinking that the alert didn't fire the actions because of a service issue, when the actions were actually muted or the alert auto-resolved per the rule configuration.
![Suppress alerts](media/alerts-troubleshoot-log/LogAlertSuppress.png)
Log alerts work best when you try to detect data in the logs. It works less well
There are built-in capabilities to prevent false alerts, but they can still occur on very latent data (over ~30 minutes) and data with latency spikes.
-### Query optimization issues
-
-The alerting service changes your query to optimize for lower load and alert latency. The alert flow was built to transform the results that indicate the issue to an alert. For example, in a case of a query like:
-
-``` Kusto
-SecurityEvent
-| where EventID == 4624
-```
-
-If the intent of the user is to alert, when this event type happens, the alerting logic appends `count` to the query. The query that will run is:
-
-``` Kusto
-SecurityEvent
-| where EventID == 4624
-| count
-```
-
-There's no need to add alerting logic to the query, and doing that may even cause issues. In the preceding example, if you include `count` in your query, it always results in the value **1**, because the alert service performs a `count` of `count`.
-
-The log alert service runs the optimized query. You can run the modified query in the Log Analytics [portal](../logs/log-query-overview.md) or [API](/rest/api/loganalytics/).
-
-For workspaces and Application Insights, it's called **Query to be executed** in the Condition pane. In all other resource types, select **See final alert Query** on the **Condition** tab.
-
-![Query to be executed](media/alerts-troubleshoot-log/LogAlertPreview.png)
## Log alert was disabled

The following sections list some reasons why Azure Monitor might disable a log alert rule. After those sections, there's an [example of the activity log that is sent when a rule is disabled](#activity-log-example-when-rule-is-disabled).
If you've reached the quota limit, the following steps might help resolve the is
#### From the Azure portal
-1. On the Alerts screen, select **Manage alert rules**.
+1. On the Alerts screen in Azure Monitor, select **Alert rules**.
1. In the **Subscription** dropdown control, filter to the subscription you want. (Make sure you don't filter to a specific resource group, resource type, or resource.)
1. In the **Signal type** dropdown control, select **Log Search**.
1. Verify that the **Status** dropdown control is set to **Enabled**.
The total number of log search alert rules is displayed above the rules list.
#### From API

- PowerShell - [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule)
-- REST API - [List by subscription](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules/list-by-subscription)
+- CLI: [az monitor scheduled-query list](/cli/azure/monitor/scheduled-query#az-monitor-scheduled-query-list) (see the sketch after this list)
+- REST API - [List by subscription](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/list-by-subscription)
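
For example, a minimal sketch of the CLI option mentioned above; the resource group name is a placeholder:

```azurecli
# List log search alert rules in a resource group
az monitor scheduled-query list \
    --resource-group my-resource-group \
    --output table
```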
## Activity log example when rule is disabled
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
description: Trigger emails, notifications, call websites URLs (webhooks), or au
Previously updated : 09/22/2020 Last updated : 01/25/2022

# Log alerts in Azure Monitor
Log alerts are one of the alert types that are supported in [Azure Alerts](./ale
## Prerequisites
-Log alerts run queries on Log Analytics data. First you should start [collecting log data](../essentials/resource-logs.md) and query the log data for issues. You can use the [alert query examples article](../logs/queries.md) in Log Analytics to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md).
+Log alerts run queries on Log Analytics data. First you should start [collecting log data](../essentials/resource-logs.md) and query the log data for issues. You can use the [alert query examples topic](../logs/queries.md) in Log Analytics to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md).
[Azure Monitoring Contributor](../roles-permissions-security.md) is a common role that is needed for creating, modifying, and updating log alerts. Access & query execution rights for the resource logs are also needed. Partial access to resource logs can fail queries or return partial results. [Learn more about configuring log alerts in Azure](./alerts-log.md).
The [Log Analytics](../logs/log-analytics-tutorial.md) query used to evaluate th
#### Query time range
-Time range is set in the rule condition definition. In workspaces and Application Insights, it's called **Period**. In all other resource types, it's called **Override query time range**.
+Time range is set in the rule condition definition. It's called **Override query time range** in the advanced settings section.
-Like in log analytics, the time range limits query data to the specified range. Even if **ago** command is used in the query, the time range will apply.
+Unlike Log Analytics, the time range in alerts is limited to a maximum of two days of data. Even if a longer range is set in the query with the **ago** command, the two-day time range will apply. For example, a query scans up to two days, even if the text contains **ago(7d)**.
-For example, a query scans 60 minutes, when time range is 60 minutes, even if the text contains **ago(1d)**. The time range and query time filtering need to match. In the example case, changing the **Period** / **Override query time range** to one day, would work as expected.
+If you use the **ago** command in the query, the range is automatically set to two days. You can also change the time range manually when the query requires more data than the alert evaluation, even if there is no **ago** command in the query.
### Measure
Log alerts turn log into numeric values that can be evaluated. You can measure t
#### Count of the results table rows
-Count of results is the default measure. Ideal for working with events such as Windows event logs, syslog, application exceptions. Triggers when log records happen or doesn't happen in the evaluated time window.
+Count of results is the default measure and is used when you set **Measure** to **Table rows**. It's ideal for working with events such as Windows event logs, syslog, and application exceptions, and triggers when log records occur or don't occur in the evaluated time window.
Log alerts work best when you try to detect data in the log. They work less well when you try to detect lack of data in the logs. For example, alerting on virtual machine heartbeat.
-For workspaces and Application Insights, it's called **Based on** with selection **Number of results**. In all other resource types, it's called **Measure** with selection **Table rows**.
> [!NOTE]
> Since logs are semi-structured data, they are inherently more latent than metrics. You may experience misfires when trying to detect lack of data in the logs, and you should consider using [metric alerts](alerts-metric-overview.md). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
Then the alert rule monitors for any requests ending with a 500 error code. The query
#### Calculation of measure based on a numeric column (such as CPU counter value)
-For workspaces and Application Insights, it's called **Based on** with selection **Metric measurement**. In all other resource types, it's called **Measure** with selection of any number column name.
-
+Calculation of a measure based on a numeric column is used when **Measure** is set to any numeric column name.
### Aggregation type
-The calculation that is done on multiple records to aggregate them to one numeric value. For example:
-- **Count** returns the number of records in the query
-- **Average** returns the average of the measure column [**Aggregation granularity**](#aggregation-granularity) defined.
-
-In workspaces and Application Insights, it's supported only in **Metric measurement** measure type. The query result must contain a column called AggregatedValue that provide a numeric value after a user-defined aggregation. In all other resource types, **Aggregation type** is selected from the field of that name.
+The calculation that is done on multiple records to aggregate them to one numeric value using the [**Aggregation granularity**](#aggregation-granularity) defined. For example:
+- **Sum** returns the sum of measure column.
+- **Average** returns the average of the measure column.
### Aggregation granularity

Determines the interval that is used to aggregate multiple records to one numeric value. For example, if you specified **5 minutes**, records would be grouped by 5-minute intervals using the **Aggregation type** specified.
-In workspaces and Application Insights, it's supported only in **Metric measurement** measure type. The query result must contain [bin()](/azure/kusto/query/binfunction) that sets interval in the query results. In all other resource types, the field that controls this setting is called **Aggregation granularity**.
> [!NOTE]
> As [bin()](/azure/kusto/query/binfunction) can result in uneven time intervals, the alert service will automatically convert the [bin()](/azure/kusto/query/binfunction) function to the [bin_at()](/azure/kusto/query/binatfunction) function with an appropriate time at runtime, to ensure results with a fixed point.

### Split by alert dimensions
-Split alerts by number or string columns into separate alerts by grouping into unique combinations. When creating resource-centric alerts at scale (subscription or resource group scope), you can split by Azure resource ID column. Splitting on Azure resource ID column will change the target of the alert to the specified resource.
+Split alerts by number or string columns into separate alerts by grouping into unique combinations. It's configured in the **Split by dimensions** section of the condition (limited to six splits). When creating resource-centric alerts at scale (subscription or resource group scope), you can split by the Azure resource ID column. Splitting on the Azure resource ID column will change the target of the alert to the specified resource.
Splitting by Azure resource ID column is recommended when you want to monitor the same condition on multiple Azure resources. For example, monitoring all virtual machines for CPU usage over 80%. You may also decide not to split when you want a condition on multiple resources in the scope. Such as monitoring that at least five machines in the resource group scope have CPU usage over 80%.
-
-In workspaces and Application Insights, it's supported only in **Metric measurement** measure type. The field is called **Aggregate On**. It's limited to three columns. Having more than three groups by columns in the query could lead to unexpected results. In all other resource types, it's configured in **Split by dimensions** section of the condition (limited to six splits).
-
#### Example of splitting by alert dimensions
- #### Example of splitting by alert dimensions For example, you want to monitor errors for multiple virtual machines running your web site/app in a specific resource group. You can do that using a log alert rule as follows:
For example, you want to monitor errors for multiple virtual machines running yo
or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records
```
- When using workspaces and Application Insights with **Metric measurement** alert logic, this line needs to be added to the query text:
-
- ```Kusto
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
- ```
-
-- **Resource ID Column:** _ResourceId (Splitting by resource ID column in alert rules is only available for subscriptions and resource groups currently)
-- **Dimensions / Aggregated on:**
+- **Resource ID Column:** _ResourceId
+- **Dimensions:**
   - Computer = VM1, VM2 (Filtering values in alert rules definition isn't available currently for workspaces and Application Insights. Filter in the query text.)
-- **Time period / Aggregation granularity:** 15 minutes
+- **Aggregation granularity:** 15 minutes
- **Alert frequency:** 15 minutes
- **Threshold value:** Greater than 0
The query results are transformed into a number that is compared against the thr
### Frequency
-> [!NOTE]
-> There are currently no additional charges for 1-minute frequency log alerts preview. Pricing for features that are in preview will be announced in the future and a notice provided prior to start of billing. Should you choose to continue using 1-minute frequency log alerts after the notice period, you will be billed at the applicable rate.
-
-The interval in which the query is run. Can be set from a minute to a day. Must be equal to or less than the [query time range](#query-time-range) to not miss log records.
-
-For example, if you set the time period to 30 minutes and frequency to 1 hour. If the query is run at 00:00, it returns records between 23:30 and 00:00. The next time the query would run is 01:00 that would return records between 00:30 and 01:00. Any records created between 00:00 and 00:30 would never be evaluated.
+The interval in which the query is run. Can be set from a minute to a day.
### Number of violations to trigger alert
For example, if your rule [**Aggregation granularity**](#aggregation-granularity
Log alerts can either be stateless or stateful (currently in preview).
-Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired. In Log Analytics Workspaces and Application Insights, it's called **Suppress Alerts**. In all other resource types, it's called **Mute Actions**.
+Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired using the **Mute Actions** option in the alert details section.
See this alert stateless evaluation example:
Stateful alerts feature is currently in preview in the Azure public cloud. You c
## Location selection in log alerts
-Log alerts allow you to set a location for alert rules. In Log Analytics Workspaces, the rule location must match the workspace location. In all other resources, you can select any of the supported locations, which align to [Log Analytics supported region list](https://azure.microsoft.com/global-infrastructure/services/?products=monitor).
+Log alerts allow you to set a location for alert rules. You can select any of the supported locations, which align to [Log Analytics supported region list](https://azure.microsoft.com/global-infrastructure/services/?products=monitor).
Location affects which region the alert rule is evaluated in. Queries are executed on the log data in the selected region; that said, the alert service is global end to end, meaning alert rule definition, fired alerts, notifications, and actions aren't bound to the location in the alert rule. Data is transferred from the set region since the Azure Monitor alerts service is a [non-regional service](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=non-regional).
Pricing information is located in the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Log Alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:

- Log Alerts on Application Insights shown with exact resource name along with resource group and alert properties.
-- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules).
+- Log Alerts on Log Analytics shown with exact resource name along with resource group and alert properties; when created using [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules).
- Log alerts created from [legacy Log Analytics API](./api-alerts.md) aren't tracked as [Azure Resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have this resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log Alerts on legacy API are shown with the above hidden resource name along with resource group and alert properties.

> [!NOTE]
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/api-alerts.md
Last updated 09/22/2020
# Create and manage alert rules in Log Analytics with REST API

> [!IMPORTANT]
-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), log analytics workspace(s) created after *June 1, 2019* manage alert rules using the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-02-01-preview/scheduled-query-rules). Customers are encouraged to [switch to the current API](./alerts-log-api-switch.md) in older workspaces to leverage Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). This article describes management of alert rules using the legacy API.
+> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), log analytics workspace(s) created after *June 1, 2019* manage alert rules using the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). Customers are encouraged to [switch to the current API](./alerts-log-api-switch.md) in older workspaces to leverage Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). This article describes management of alert rules using the legacy API.
The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details of the API and several examples for performing different operations.
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-manage-agent.md
Perform the following steps to upgrade the agent on a Kubernetes cluster running
If the Log Analytics workspace is in commercial Azure, run the following command:

```console
-$ helm upgrade --name myrelease-1 \
--set omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<my_prod_cluster> incubator/azuremonitor-containers
+$ helm upgrade myrelease-1 incubator/azuremonitor-containers --set omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<my_prod_cluster>
```

If the Log Analytics workspace is in Azure China 21Vianet, run the following command:

```console
-$ helm upgrade --name myrelease-1 \
--set omsagent.domain=opinsights.azure.cn,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
+$ helm upgrade myrelease-1 incubator/azuremonitor-containers --set omsagent.domain=opinsights.azure.cn,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name>
```

If the Log Analytics workspace is in Azure US Government, run the following command:

```console
-$ helm upgrade --name myrelease-1 \
--set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
+$ helm upgrade myrelease-1 incubator/azuremonitor-containers --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name>
```

### Upgrade agent on Azure Red Hat OpenShift v4
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data from a Log Analyti
## Limitations

-- All tables will be supported in export, but support is currently limited to those specified in the [supported tables](#supported-tables) section below.
-- The current custom log tables won't be supported in export. A new version of custom log preview available February 2022, will be supported in export.
+- All tables will be supported in export, but export is currently limited to those specified in the [supported tables](#supported-tables) section below.
+- The legacy custom log won't be supported in export. The next generation of custom log available in preview early 2022 can be exported.
- You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled. - Destinations must be in the same region as the Log Analytics workspace.
+- Storage account must be unique across rules in workspace.
- Table names can be no longer than 60 characters when exporting to storage account and 47 characters to event hub. Tables with longer names will not be exported.
- Data export isn't supported in these regions currently:
  - Korea South
If you have configured your storage account to allow access from selected networ
- Use 'Premium' or 'Dedicated' tiers for higher throughput

### Create or update data export rule
-Data export rule defines the destination and tables for which data is exported. You can create 10 rules in 'enable' state in your workspace, more rules are allowed in 'disable' state. You can use the same storage account and event hub namespace in multiple rules in the same workspace. When event hub names are provided in rules, they must be unique in workspace.
+A data export rule defines the destination and tables for which data is exported. You can create 10 rules in the 'enable' state in your workspace; more rules are allowed in the 'disable' state. The storage account must be unique across rules in a workspace. Multiple rules can use the same event hub namespace when sending to separate event hubs.
> [!NOTE]
> - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported.
-> - The current custom log tables won't be supported in export. The next generation of custom log available early 2022 in preview is supported.
+> - The legacy custom log won't be supported in export. The next generation of custom log available in preview early 2022 can be exported.
> - Export to storage account - a separate container is created in storage account for each table.
> - Export to event hub - if event hub name isn't provided, a separate event hub is created for each table. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
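
As an illustration, a data export rule could be created with the Azure CLI; this sketch isn't part of the original article, and all names, tables, and the destination resource ID are placeholders:

```azurecli
# Create an enabled export rule that sends two tables to a storage account
az monitor log-analytics workspace data-export create \
    --resource-group my-resource-group \
    --workspace-name my-workspace \
    --name my-export-rule \
    --tables SecurityEvent Heartbeat \
    --destination <storage-account-resource-id> \
    --enable true
```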
azure-monitor Move Workspace Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/move-workspace-region.md
Title: Move a Log Analytics workspace to another Azure region by using the Azure portal description: Use an Azure Resource Manager template to move a Log Analytics workspace from one Azure region to another by using the Azure portal.-+ Last updated 08/17/2021
azure-percept Azure Percept Devkit Container Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-devkit-container-release-notes.md
To download the container updates, go to [Azure Percept Studio](https://ms.porta
## December (2112) Release

- Removed lines in the image frames using automatic image capture in Azure Percept Studio. This issue was introduced in the 2108 module release.
-- Security fixes for docker services running as root in azureeyemodule, azureearspeechclientmodule, and webstreammodule.
+- Security fixes for docker services running as root in azureeyemodule (mcr.microsoft.com/azureedgedevices/azureeyemodule:2112-1), azureearspeechclientmodule, and webstreammodule.
## August (2108) Release
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Move Azure VMs to new subscription or resource group description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 01/21/2022 Last updated : 01/24/2022
This article describes the scenarios that aren't currently supported and the ste
The following scenarios aren't yet supported:

* Virtual Machine Scale Sets with Standard SKU Load Balancer or Standard SKU Public IP can't be moved.
-* Virtual machines in an existing virtual network can't be moved to a new subscription when you aren't moving all resources in the virtual network.
+* Virtual machines in an existing virtual network can be moved to a new subscription only when the virtual network and all of its dependent resources are also moved.
* Virtual machines created from Marketplace resources with plans attached can't be moved across subscriptions. For a potential workaround, see [Virtual machines with Marketplace plans](#virtual-machines-with-marketplace-plans).
* Low-priority virtual machines and low-priority virtual machine scale sets can't be moved across resource groups or subscriptions.
* Virtual machines in an availability set can't be moved individually.
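
For the scenarios that are supported, a move is a single CLI call; a minimal sketch, with placeholder resource IDs for the VM and its dependent resources:

```azurecli
# Move a VM and its dependent resources to another resource group
az resource move \
    --destination-group target-resource-group \
    --ids <vm-resource-id> <nic-resource-id> <disk-resource-id>
```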
azure-resource-manager Error Reserved Resource Name https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/troubleshooting/error-reserved-resource-name.md
Title: Reserved resource name errors description: Describes how to resolve errors when providing a resource name that includes a reserved word. Previously updated : 12/13/2021 Last updated : 01/24/2021

# Resolve reserved resource name errors
This article describes the error you get when deploying a resource that includes
## Symptom
-When deploying a resource that is available through a public endpoint, you may receive the following error:
+When deploying a resource, you may receive the following error:
```
Code=ReservedResourceName;
Message=The resource name <resource-name> or a part of the name is a trademarked
## Cause
-Resources that have a public endpoint can't use reserved words or trademarks in the name.
+Resources that have an accessible endpoint, such as a fully qualified domain name, can't use reserved words or trademarks in the name. The name is checked when the resource is created, even if the endpoint isn't currently enabled.
The following words are reserved:
azure-sql-edge Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/backup-restore.md
-+ Last updated 05/19/2020
azure-sql-edge Configure Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/configure-replication.md
-+ Last updated 05/19/2020
azure-sql-edge Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/configure.md
-+ Last updated 09/22/2020
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/connect.md
-+ Last updated 07/25/2020
azure-sql-edge Create External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/create-external-stream-transact-sql.md
-+ Last updated 07/27/2020
azure-sql-edge Create Stream Analytics Job https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/create-stream-analytics-job.md
-+ Last updated 07/27/2020
azure-sql-edge Data Retention Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-cleanup.md
-+ Last updated 09/04/2020
azure-sql-edge Data Retention Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-enable-disable.md
-+ Last updated 09/04/2020
azure-sql-edge Data Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/data-retention-overview.md
-+ Last updated 09/04/2020
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/date-bucket-tsql.md
-+ Last updated 09/03/2020
azure-sql-edge Deploy Dacpac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-dacpac.md
-+ Last updated 09/03/2020
azure-sql-edge Deploy Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-kubernetes.md
-+ Last updated 09/22/2020
azure-sql-edge Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/deploy-portal.md
-+ Last updated 09/22/2020
azure-sql-edge Disconnected Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/disconnected-deployment.md
-+ Last updated 09/22/2020
azure-sql-edge Drop External Stream Transact Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/drop-external-stream-transact-sql.md
-+ Last updated 05/19/2020
azure-sql-edge Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/features.md
-+ Last updated 09/03/2020
azure-sql-edge High Availability Sql Edge Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/high-availability-sql-edge-containers.md
-+ Last updated 09/22/2020
azure-sql-edge Imputing Missing Values https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/imputing-missing-values.md
-+ Last updated 09/22/2020
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/overview.md
-+ Last updated 05/19/2020
azure-sql-edge Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/performance-best-practices.md
-+ Last updated 09/22/2020

# Performance best practices and configuration guidelines
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/release-notes.md
---+++ Last updated 11/24/2020

# Azure SQL Edge release notes
azure-sql-edge Resources Partners Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/resources-partners-security.md
---+++ Last updated 10/09/2020

# Azure SQL Edge security partners
azure-sql-edge Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/security-overview.md
-+ Last updated 09/22/2020
azure-sql-edge Stream Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/stream-data.md
-+ Last updated 05/19/2020
azure-sql-edge Streaming Catalog Views https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/streaming-catalog-views.md
-+ Last updated 05/19/2019
azure-sql-edge Sys External Job Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-external-job-streams.md
-+ Last updated 05/19/2019
azure-sql-edge Sys External Streaming Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-external-streaming-jobs.md
-+ Last updated 05/19/2019
azure-sql-edge Sys External Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-external-streams.md
-+ Last updated 05/19/2019
azure-sql-edge Sys Sp Cleanup Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/sys-sp-cleanup-data-retention.md
-+ Last updated 09/22/2020
azure-sql-edge Track Data Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/track-data-changes.md
-+ Last updated 05/19/2020
azure-sql-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/troubleshoot.md
-+ Last updated 09/22/2020
azure-sql-edge Tutorial Deploy Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-deploy-azure-resources.md
keywords:
---+++ Last updated 05/19/2020

# Install software and set up resources for the tutorial
azure-sql-edge Tutorial Renewable Energy Demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-renewable-energy-demo.md
keywords:
---+++ Last updated 12/18/2020

# Using Azure SQL Edge to build smarter renewable resources
azure-sql-edge Tutorial Run Ml Model On Sql Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-run-ml-model-on-sql-edge.md
keywords:
---+++ Last updated 05/19/2020
azure-sql-edge Tutorial Set Up Iot Edge Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-set-up-iot-edge-modules.md
keywords:
---+++ Last updated 09/22/2020
azure-sql-edge Tutorial Sync Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-sync-data-factory.md
-+ Last updated 05/19/2020
azure-sql-edge Tutorial Sync Data Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-sync-data-sync.md
-+ Last updated 05/19/2020
azure-sql-edge Usage And Diagnostics Data Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/usage-and-diagnostics-data-configuration.md
-+ Last updated 08/04/2020
azure-sql Accelerated Database Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/accelerated-database-recovery.md
ms.devlang:
-+ Last updated 05/19/2020

# Accelerated Database Recovery in Azure SQL
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
--++ Last updated 01/10/2022
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
ms.devlang: --++ Last updated 10/29/2020
azure-sql Elastic Pool Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-resource-management.md
ms.devlang:
- Previously updated : 10/13/2021
+ Last updated : 1/24/2022

# Resource management in dense elastic pools

[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Azure SQL Database [elastic pools](./elastic-pool-overview.md) is a cost-effective solution for managing many databases with varying resource usage. All databases in an elastic pool share the same allocation of resources, such as CPU, memory, worker threads, storage space, tempdb, on the assumption that **only a subset of databases in the pool will use compute resources at any given time**. This assumption allows elastic pools to be cost-effective. Instead of paying for all resources each individual database could potentially need, customers pay for a much smaller set of resources, shared among all databases in the pool.
+Azure SQL Database [elastic pools](./elastic-pool-overview.md) is a cost-effective solution for managing many databases with varying resource usage. All databases in an elastic pool share the same allocation of resources, such as CPU, memory, worker threads, storage space, `tempdb`, on the assumption that **only a subset of databases in the pool will use compute resources at any given time**. This assumption allows elastic pools to be cost-effective. Instead of paying for all resources each individual database could potentially need, customers pay for a much smaller set of resources, shared among all databases in the pool.
## Resource governance
-Resource sharing requires the system to carefully control resource usage to minimize the "noisy neighbor" effect, where a database with high resource consumption affects other databases in the same elastic pool. At the same time, the system must provide sufficient resources for features such as high availability and disaster recovery (HADR), backup and restore, monitoring, Query Store, Automatic tuning, etc. to function reliably.
-
-Azure SQL Database achieves these goals by using multiple resource governance mechanisms, including Windows [Job Objects](/windows/win32/procthread/job-objects) for process level resource governance, Windows [File Server Resource Manager (FSRM)](/windows-server/storage/fsrm/fsrm-overview) for storage quota management, and a modified and extended version of SQL Server [Resource Governor](/sql/relational-databases/resource-governor/resource-governor) to implement resource governance within SQL Database.
+Resource sharing requires the system to carefully control resource usage to minimize the "noisy neighbor" effect, where a database with high resource consumption affects other databases in the same elastic pool. Azure SQL Database achieves these goals by implementing [resource governance](resource-limits-logical-server.md#resource-governance). At the same time, the system must provide sufficient resources for features such as high availability and disaster recovery (HADR), backup and restore, monitoring, Query Store, Automatic tuning, etc. to function reliably.
The primary design goal of elastic pools is to be cost-effective. For this reason, the system intentionally allows customers to create _dense_ pools, that is pools with the number of databases approaching or at the maximum allowed, but with a moderate allocation of compute resources. For the same reason, the system doesn't reserve all potentially needed resources for its internal processes, but allows resource sharing between internal processes and user workloads.
To avoid performance degradation due to resource contention, customers using den
Azure SQL Database provides several metrics that are relevant for this type of monitoring. Exceeding the recommended average value for each metric indicates resource contention in the pool, and should be addressed using one of the actions mentioned earlier.
+To send an alert when pool resource utilization (CPU, data IO, log IO, workers, etc.) exceeds a threshold, consider creating alerts via the [Azure portal](alerts-insights-configure-portal.md) or the [Add-AzMetricAlertRulev2](/powershell/module/az.monitor/add-azmetricalertrulev2) PowerShell cmdlet. When monitoring elastic pools, consider also creating alerts for individual databases in the pool if needed in your scenario. For a sample scenario of monitoring elastic pools, see [Monitor and manage performance of Azure SQL Database in a multi-tenant SaaS app](saas-dbpertenant-performance-monitoring.md).
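
As an illustration (not from the original article), such an alert could also be created with the Azure CLI; the resource group, server, pool, and alert names are placeholders, and the `cpu_percent` metric with a 70% threshold mirrors the guidance in the table below:

```azurecli
# Look up the elastic pool's resource ID
poolId=$(az sql elastic-pool show --resource-group my-resource-group \
    --server my-server --name my-pool --query id --output tsv)

# Alert when average CPU utilization exceeds the recommended 70% threshold
az monitor metrics alert create --name pool-cpu-alert \
    --resource-group my-resource-group \
    --scopes $poolId \
    --condition "avg cpu_percent > 70" \
    --window-size 15m --evaluation-frequency 5m
```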
+
|Metric name|Description|Recommended average value|
|-|-|-|
|`avg_instance_cpu_percent`|CPU utilization of the SQL process associated with an elastic pool, as measured by the underlying operating system. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `sqlserver_process_core_percent`, and can be viewed in Azure portal. This value is the same for every database in the same elastic pool.|Below 70%. Occasional short spikes up to 90% may be acceptable.|
|`max_worker_percent`|[Worker thread](/sql/relational-databases/thread-and-task-architecture-guide) utilization. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of worker threads at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `workers_percent`, and can be viewed in Azure portal.|Below 80%. Spikes up to 100% will cause connection attempts and queries to fail.|
|`avg_data_io_percent`|IOPS utilization for read and write physical IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of IOPS at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `physical_data_read_percent`, and can be viewed in Azure portal.|Below 80%. Occasional short spikes up to 100% may be acceptable.|
|`avg_log_write_percent`|Throughput utilization for transaction log write IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the log throughput at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `log_write_percent`, and can be viewed in Azure portal. When this metric is close to 100%, all database modifications (INSERT, UPDATE, DELETE, MERGE statements, SELECT … INTO, BULK INSERT, etc.) will be slower.|Below 90%. Occasional short spikes up to 100% may be acceptable.|
-|`oom_per_second`|The rate of out-of-memory (OOM) errors in an elastic pool, which is an indicator of memory pressure. Available in the [sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database) view. See [Examples](#examples) for a sample query to calculate this metric.|0|
+|`oom_per_second`|The rate of out-of-memory (OOM) errors in an elastic pool, which is an indicator of memory pressure. Available in the [sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database) view. See [Examples](#examples) for a sample query to calculate this metric. For more information, see resource limits for [elastic pools using DTUs](resource-limits-dtu-elastic-pools.md) or [elastic pools using vCores](resource-limits-vcore-elastic-pools.md), and [Troubleshoot out of memory errors with Azure SQL Database](troubleshoot-memory-errors-issues.md).|0|
|`avg_storage_percent`|Total storage space used by data in all databases within an elastic pool. Does not include empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `storage_percent`, and can be viewed in Azure portal.|Below 80%. Can approach 100% for pools with no data growth.|
|`avg_allocated_storage_percent`|Total storage space used by database files in storage in all databases within an elastic pool. Includes empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `allocated_data_storage_percent`, and can be viewed in Azure portal.|Below 90%. Can approach 100% for pools with no data growth.|
|`tempdb_log_used_percent`|Transaction log space utilization in the `tempdb` database. Even though temporary objects created in one database are not visible in other databases in the same elastic pool, `tempdb` is a shared resource for all databases in the same pool. A long running or orphaned transaction in `tempdb` started from one database in the pool can consume a large portion of transaction log, and cause failures for queries in other databases in the same pool. Derived from [sys.dm_db_log_space_usage](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-space-usage-transact-sql) and [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) views. This metric is also emitted to Azure Monitor, and can be viewed in Azure portal. See [Examples](#examples) for a sample query to return the current value of this metric.|Below 50%. Occasional spikes up to 80% are acceptable.|
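To spot-check the pool-level metrics in this table against the recommended values, a minimal sketch such as the following can read the most recent snapshots (run in the `master` database; `<elastic pool name>` is a placeholder):

```sql
-- Recent pool-level utilization, for comparison against the recommended averages above.
SELECT TOP (10)
       start_time,
       avg_instance_cpu_percent,
       max_worker_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_storage_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = '<elastic pool name>'
ORDER BY start_time DESC;
```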
In addition to these metrics, Azure SQL Database provides a view that returns ac
|[sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database)|Returns actual configuration and capacity settings used by resource governance mechanisms in the current database or elastic pool.|
|[sys.dm_resource_governor_resource_pools](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-transact-sql)|Returns information about the current resource pool state, the current configuration of resource pools, and cumulative resource pool statistics.|
|[sys.dm_resource_governor_workload_groups](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-transact-sql)|Returns cumulative workload group statistics and the current configuration of the workload group. This view can be joined with `sys.dm_resource_governor_resource_pools` on the `pool_id` column to get resource pool information.|
-|[sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database)|Returns resource pool utilization statistics for the last 32 minutes. Each row represents a 20-second interval. The `delta_` columns return the change in each statistic during the interval.|
-|[sys.dm_resource_governor_workload_groups_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-history-ex-azure-sql-database)|Returns workload group utilization statistics for the last 32 minutes. Each row represents a 20-second interval. The `delta_` columns return the change in each statistic during the interval.|
+|[sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database)|Returns resource pool utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the `duration_ms` column. The `delta_` columns return the change in each statistic during the interval.|
+|[sys.dm_resource_governor_workload_groups_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-history-ex-azure-sql-database)|Returns workload group utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the `duration_ms` column. The `delta_` columns return the change in each statistic during the interval.|
|||

> [!TIP]
In addition to monitoring current resource utilization, customers using dense po
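As an illustration of the join called out in the table above (a sketch; `total_request_count` is a documented column of the workload groups DMV), the two views can be combined as follows:

```sql
-- Pair each workload group with its resource pool and show cumulative request counts.
SELECT rp.name AS pool_name,
       wg.name AS group_name,
       wg.total_request_count
FROM sys.dm_resource_governor_workload_groups AS wg
INNER JOIN sys.dm_resource_governor_resource_pools AS rp
    ON wg.pool_id = rp.pool_id
ORDER BY rp.name, wg.name;
```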
## Operational recommendations
-**Leave sufficient resource headroom**. If resource contention and performance degradation occurs, mitigation may involve moving some databases out of the affected elastic pool, or scaling up the pool, as noted earlier. However, these actions require additional compute resources to complete. In particular, for Premium and Business Critical pools, these actions require transferring all data for the databases being moved, or for all databases in the elastic pool if the pool is scaled up. Data transfer is a long running and resource-intensive operation. If the pool is already under high resource pressure, the mitigating operation itself will degrade performance even further. In extreme cases, it may not be possible to solve resource contention via database move or pool scale-up because the required resources are not available. In this case, temporarily reducing query workload on the affected elastic pool may be the only solution.
+**Leave sufficient resource headroom**. If resource contention and performance degradation occur, mitigation may involve moving some databases out of the affected elastic pool, or scaling up the pool, as noted earlier. However, these actions require additional compute resources to complete. In particular, for Premium and Business Critical pools, these actions require transferring all data for the databases being moved, or for all databases in the elastic pool if the pool is scaled up. Data transfer is a long running and resource-intensive operation. If the pool is already under high resource pressure, the mitigating operation itself will degrade performance even further. In extreme cases, it may not be possible to solve resource contention via database move or pool scale-up because the required resources are not available. In this case, temporarily reducing query workload on the affected elastic pool may be the only solution.
Customers using dense pools should closely monitor resource utilization trends as described earlier, and take mitigating action while metrics remain within the recommended ranges and there are still sufficient resources in the elastic pool. Resource utilization depends on multiple factors that change over time for each database and each elastic pool. Achieving an optimal price/performance ratio in dense pools requires continuous monitoring and rebalancing; that is, moving databases from more utilized pools to less utilized pools, and creating new pools as necessary to accommodate increased workload.
+> [!NOTE]
+> For DTU elastic pools, the eDTU metric at the pool level is not a MAX or a SUM of individual database utilization. It is derived from the utilization of various pool-level metrics. Pool-level resource limits may be higher than individual database-level limits, so it is possible for an individual database to reach a specific resource limit (CPU, data IO, log IO, etc.) even when the eDTU reporting for the pool indicates that no limit has been reached.
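As a quick cross-check of this behavior, a sketch like the following (run in the `master` database, using documented `sys.resource_stats` columns) surfaces databases whose recent peaks approach their own limits even when the pool-level metric shows headroom:

```sql
-- Recent per-database peaks; a database near 100% on any dimension may be
-- hitting its own limit even if the pool eDTU metric still has headroom.
SELECT database_name,
       MAX(avg_cpu_percent) AS peak_avg_cpu_percent,
       MAX(avg_data_io_percent) AS peak_avg_data_io_percent,
       MAX(avg_log_write_percent) AS peak_avg_log_write_percent
FROM sys.resource_stats
WHERE start_time > DATEADD(day, -1, SYSUTCDATETIME())
GROUP BY database_name
ORDER BY peak_avg_cpu_percent DESC;
```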
+
**Do not move "hot" databases**. If resource contention at the pool level is primarily caused by a small number of highly utilized databases, it may be tempting to move these databases to a less utilized pool, or to make them standalone databases. However, doing this while a database remains highly utilized is not recommended, because the move operation will further degrade performance, both for the database being moved and for the entire pool. Instead, either wait until high utilization subsides, or move less utilized databases to relieve resource pressure at the pool level. Moving databases with very low utilization provides no benefit in this case, because it does not materially reduce resource utilization at the pool level.

**Create new databases in a "quarantine" pool**. In scenarios where new databases are created frequently, such as applications using the tenant-per-database model, there is a risk that a new database placed into an existing elastic pool will unexpectedly consume significant resources and affect other databases and internal processes in the pool. To mitigate this risk, create a separate "quarantine" pool with an ample allocation of resources. Use this pool for new databases with yet unknown resource consumption patterns. Once a database has stayed in this pool for a business cycle, such as a week or a month, and its resource consumption is known, it can be moved to a pool with sufficient capacity to accommodate this additional resource usage.
If used pool space (total size of data in all databases in a pool, not including
**Avoid overly dense servers**. Azure SQL Database [supports](./resource-limits-logical-server.md) up to 5000 databases per server. Customers using elastic pools with thousands of databases may consider placing multiple elastic pools on a single server, with the total number of databases up to the supported limit. However, servers with many thousands of databases create operational challenges. Operations that require enumerating all databases on a server, for example viewing databases in the portal, will be slower. Operational errors, such as incorrect modification of server level logins or firewall rules, will affect a larger number of databases. Accidental deletion of the server will require assistance from Microsoft Support to recover databases on the deleted server, and will cause a prolonged outage for all affected databases.
-It is recommended to limit the number of databases per server to a lower number than the maximum supported. In many scenarios, using up to 1000-2000 databases per server is optimal. To reduce the likelihood of accidental server deletion, place a [delete lock](../../azure-resource-manager/management/lock-resources.md) on the server or its resource group.
+Limit the number of databases per server to a lower number than the maximum supported. In many scenarios, using up to 1000-2000 databases per server is optimal. To reduce the likelihood of accidental server deletion, place a [delete lock](../../azure-resource-manager/management/lock-resources.md) on the server or its resource group.
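As a quick check of how close a server is to these recommended limits, a minimal sketch (run in the `master` database of the logical server) counts the databases it currently hosts:

```sql
-- Count databases on the logical server, excluding master (database_id 1).
SELECT COUNT(*) AS database_count
FROM sys.databases
WHERE database_id > 1;
```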
## Examples
+### View individual database capacity settings
+
+Use the `sys.dm_user_db_resource_governance` dynamic management view to view the actual configuration and capacity settings used by resource governance in the current database or elastic pool. For more information, see [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database).
+
+Run this query in any database in an elastic pool. All databases in the pool have the same resource governance settings.
+
+```sql
+SELECT * FROM sys.dm_user_db_resource_governance AS rg
+WHERE database_id = DB_ID();
+```
+
+### Monitoring overall elastic pool resource consumption
+
+Use the `sys.elastic_pool_resource_stats` system catalog view to monitor the resource consumption of the entire pool. For more information, see [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database).
+
+This sample query returns the last 10 minutes of data. Run it in the `master` database of the logical server that contains the desired elastic pool.
+
+```sql
+SELECT * FROM sys.elastic_pool_resource_stats AS rs
+WHERE rs.start_time > DATEADD(mi, -10, SYSUTCDATETIME())
+AND rs.elastic_pool_name = '<elastic pool name>';
+```
+
+### Monitoring individual database resource consumption
+
+Use the `sys.dm_db_resource_stats` dynamic management view to monitor the resource consumption of individual databases. For more information, see [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database). One row exists for every 15 seconds, even if there is no activity. Historical data is maintained for approximately one hour.
+
+This sample query returns the last 10 minutes of data. Run it in the desired database.
+
+```sql
+SELECT * FROM sys.dm_db_resource_stats AS rs
+WHERE rs.end_time > DATEADD(mi, -10, SYSUTCDATETIME());
+```
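If a summary is more convenient than raw 15-second rows, the same view can be aggregated; here is a minimal sketch over the last 10 minutes (column names as documented for `sys.dm_db_resource_stats`):

```sql
-- Average and peak utilization over the last 10 minutes of 15-second snapshots.
SELECT AVG(avg_cpu_percent) AS avg_cpu_percent,
       MAX(avg_cpu_percent) AS peak_cpu_percent,
       AVG(avg_data_io_percent) AS avg_data_io_percent,
       MAX(max_worker_percent) AS peak_worker_percent
FROM sys.dm_db_resource_stats
WHERE end_time > DATEADD(mi, -10, SYSUTCDATETIME());
```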
+
+For longer retention at a lower sampling frequency, consider the following query on `sys.resource_stats`, run in the `master` database of the Azure SQL logical server. For more information, see [sys.resource_stats (Azure SQL Database)](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database). One row exists for every five minutes, and historical data is maintained for two weeks.
+
+```sql
+SELECT * FROM sys.resource_stats
+WHERE [database_name] = 'sample'
+ORDER BY [start_time] DESC;
+```
+
### Monitoring memory utilization
-This query calculates the `oom_per_second` metric for each resource pool, over the last 32 minutes. This query can be executed in any database in an elastic pool.
+This query calculates the `oom_per_second` metric for each resource pool over recent history, based on the number of snapshots available. It helps identify the recent average rate of failed memory allocations in the pool, and can be run in any database in an elastic pool.
```sql
SELECT pool_id,
       -- Failed memory allocations per second, averaged over the available history.
       SUM(CAST(delta_out_of_memory_count AS decimal)) / (SUM(duration_ms) / 1000.) AS oom_per_second
FROM sys.dm_resource_governor_resource_pools_history_ex
GROUP BY pool_id
ORDER BY pool_id;
```
### Monitoring `tempdb` log space utilization
-This query returns the current value of the `tempdb_log_used_percent` metric, showing the relative utilization of the `tempdb` transaction log relative to its maximum allowed size. This query can be executed in any database in an elastic pool.
+This query returns the current value of the `tempdb_log_used_percent` metric, showing the utilization of the `tempdb` transaction log relative to its maximum allowed size. This query can be run in any database in an elastic pool.
```sql
SELECT (lsu.used_log_space_in_bytes / df.log_max_size_bytes) * 100 AS tempdb_log_space_used_percent
FROM tempdb.sys.dm_db_log_space_usage AS lsu
CROSS JOIN (
           -- max_size is in 8-KB pages; convert to bytes.
           SELECT SUM(CAST(max_size AS bigint)) * 8 * 1024. AS log_max_size_bytes
           FROM tempdb.sys.database_files
           WHERE type_desc = N'LOG'
           ) AS df;
```
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Previously updated : 01/17/2022 Last updated : 01/24/2022 # Tutorial: Add an Azure SQL Database elastic pool to a failover group [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Configure a failover group for an Azure SQL Database elastic pool and test failover using the Azure portal. In this tutorial, you will learn how to:
+Configure a failover group for an Azure SQL Database elastic pool and test failover using the Azure portal. In this tutorial, you'll learn how to:
> [!div class="checklist"] >
To complete the tutorial, make sure you have the following items:
# [Azure CLI](#tab/azure-cli)
-To complete the tutorial, make sure you have the following items:
-- An Azure subscription. [Create a free account](https://azure.microsoft.com/free/) if you don't already have one.
-- The latest version of [the Azure CLI](/cli/azure/install-azure-cli).
+
## 1 - Create a single database
+In this step, you create a resource group, server, single database, and server-level IP firewall rule for access to the server.
+
[!INCLUDE [sql-database-create-single-database](../includes/sql-database-create-single-database.md)]

## 2 - Add the database to an elastic pool
-In this step, you will create an elastic pool and add your database to it.
+In this step, you'll create an elastic pool and add your database to it.
# [Azure portal](#tab/azure-portal)

Create your elastic pool using the Azure portal.
-1. Select **Azure SQL** in the left-hand menu of the Azure portal. If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select **Azure SQL** in the left-hand menu of the Azure portal. If **Azure SQL** isn't in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
1. Select **+ Add** to open the **Select SQL deployment option** page. You can view additional information about the different databases by selecting Show details on the Databases tile.
1. Select **Elastic pool** from the **Resource type** drop-down in the **SQL Databases** tile. Select **Create** to create your elastic pool.
In this step, you create your elastic pool and add your database to the elastic
### Set additional parameter values to create elastic pool
-Set these additional parameter values for use in creating the an elastic pool.
+Set these additional parameter values for use in creating the elastic pool.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="12-13"::: ### Create elastic pool on primary server
-Use this script to create an elastic pool with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command.
+Use the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command to create an elastic pool.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="29-31"::: ### Add database to elastic pool
-Use this script to add a database to an elastic pool with the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command.
+Use the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command to add a database to an elastic pool.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="32-34":::
This portion of the tutorial uses the following Azure CLI cmdlets:
## 3 - Create the failover group
-In this step, you will create a [failover group](auto-failover-group-overview.md) between an existing server and a new server in another region. Then add the elastic pool to the failover group.
+In this step, you'll create a [failover group](auto-failover-group-overview.md) between an existing server and a new server in another region. Then add the elastic pool to the failover group.
# [Azure portal](#tab/azure-portal) Create your failover group using the Azure portal.
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** isn't in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
1. Select the elastic pool created in the previous section, such as `myElasticPool`.
1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
Create your failover group using the Azure portal.
- **Server name**: Type in a unique name for the secondary server, such as `mysqlsecondary`.
- **Server admin login**: Type `azureuser`.
- **Password**: Type a complex password that meets password requirements.
- - **Location**: Choose a location from the drop-down, such as `East US`. This location cannot be the same location as your primary server.
+ - **Location**: Choose a location from the drop-down, such as `East US`. This location can't be the same location as your primary server.
> [!NOTE]
> The server login and firewall settings must match those of your primary server.
Create your failover group using the Azure portal.
1. Select **Databases within the group**, and then select the elastic pool you created in section 2. A warning should appear, prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
- ![Add elastic pool to failover group](./media/failover-group-add-elastic-pool-tutorial/add-elastic-pool-to-failover-group.png)
+ ![Add elastic pool to the failover group](./media/failover-group-add-elastic-pool-tutorial/add-elastic-pool-to-failover-group.png)
1. Select **Select** to apply your elastic pool settings to the failover group, and then select **Create** to create your failover group. Adding the elastic pool to the failover group will automatically start the geo-replication process.
This portion of the tutorial uses the following PowerShell cmdlets:
# [Azure CLI](#tab/azure-cli)
-In this step, you create your secondary server, failover group, elastic pool, and add a database to failover group using the Azure CLI.
+In this step, you use the Azure CLI to create your secondary server, failover group, and elastic pool, and to add a database to the failover group.
### Set additional parameter values to create failover group
-Set these additional parameter values for use in creating the failover group, in addition to the values defined in the preceding script that created the primary resource group and server.
+Set these additional parameter values for use in creating the failover group.
Change the failover location as appropriate for your environment.
### Create secondary server
-Use this script to create a secondary server with the [az sql server create](/cli/azure/sql/server#az_sql_server_create) command.
+Use the [az sql server create](/cli/azure/sql/server#az_sql_server_create) command to create a secondary server.
> [!NOTE]
> The server login and firewall settings must match those of your primary server.
Use this script to create a secondary server with the [az sql server create](/cl
### Create elastic pool on secondary server
-Use this script to create an elastic pool on the secondary server with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command.
+Use the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command to create an elastic pool on the secondary server.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="38-40"::: ### Create failover group
-Use this script to create a failover group with the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command.
+Use the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command to create a failover group.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="41-43":::
-### Add database to failover group
+### Add database to the failover group
-Use this script to add a database to the failover group with the command.
+Use the [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) command to add a database to the failover group.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="44-48":::
This portion of the tutorial uses the following Azure CLI cmdlets:
## 4 - Test failover
-In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure portal.
+In this step, you'll fail your failover group over to the secondary server, and then fail back using the Azure portal.
# [Azure portal](#tab/azure-portal)

Test failover of your failover group using the Azure portal.
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** isn't in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
1. Select the elastic pool created in the previous section, such as `myElasticPool`.
1. Select the name of the server under **Server name** to open the settings for the server.
Test failover using the Azure CLI.
### Verify the roles of each server
-Use this script to confirm the roles of each server with the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command.
+Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to confirm the roles of each server in the failover group.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="49-51"::: ### Fail over to the secondary server
-Use this script to failover to the secondary server and verify a successful failover with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) and [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) commands.
+Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail over to the secondary server. Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to verify a successful failover.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="52-57"::: ### Revert failover group back to the primary server
-Use this script to fail back to the primary server with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command.
+Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail back to the primary server.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="58-60":::
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Previously updated : 01/17/2022 Last updated : 01/24/2022 # Tutorial: Add an Azure SQL Database to an autofailover group
To complete the tutorial, make sure you have the following items:
# [Azure CLI](#tab/azure-cli)
-To complete the tutorial, make sure you have the following items:
-- An Azure subscription. [Create a free account](https://azure.microsoft.com/free/) if you don't already have one.
-- The latest version of [the Azure CLI](/cli/azure/install-azure-cli).
+
## 1 - Create a database
+In this step, you create a resource group, server, single database, and server-level IP firewall rule for access to the server.
+
[!INCLUDE [sql-database-create-single-database](../includes/sql-database-create-single-database.md)]

## 2 - Create the failover group
Change the failover location as appropriate for your environment.
### Create the secondary server
-Use this script to create a secondary server with the [az sql server create](/cli/azure/sql/server#az_sql_server_create) command.
+Use the [az sql server create](/cli/azure/sql/server#az_sql_server_create) command to create a secondary server.
> [!NOTE]
> The server login and firewall settings must match those of your primary server.
Use this script to create a secondary server with the [az sql server create](/cl
### Create the failover group
-Use this script to create a failover group with the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command.
+Use the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command to create a failover group.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh" range="30-32":::
Test failover using the Azure CLI.
### Verify the roles of each server
-Use this script to confirm the roles of each server with the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command.
+Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to confirm the roles of each server.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh" range="33-35"::: ### Fail over to the secondary server
-Use this script to failover to the secondary server and verify a successful failover with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) and [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) commands.
+Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail over to the secondary server. Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to verify a successful failover.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh" range="36-41"::: ### Revert failover group back to the primary server
-Use this script to fail back to the primary server with the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command.
+Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail back to the primary server.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-single-db-to-failover-group-az-cli.sh" range="42-44":::
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
The following table lists the major features of SQL Server and provides informat
| [Distributed transactions - MS DTC](/sql/relational-databases/native-client-ole-db-transactions/supporting-distributed-transactions) | No - see [Elastic transactions](elastic-transactions-overview.md) | No - see [Elastic transactions](elastic-transactions-overview.md) |
| [DML triggers](/sql/relational-databases/triggers/create-dml-triggers) | Most - see individual statements | Yes |
| [DMVs](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views) | Most - see individual DMVs | Yes - see [T-SQL differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) |
-| [Elastic query](elastic-query-overview.md) (in public preview) | Yes, with required RDBMS type. | No |
+| [Elastic query](elastic-query-overview.md) (in public preview) | Yes, with required RDBMS type. | No, use native cross-DB queries and Linked Server instead |
| [Event notifications](/sql/relational-databases/service-broker/event-notifications) | No - see [Alerts](alerts-insights-configure-portal.md) | No |
| [Expressions](/sql/t-sql/language-elements/expressions-transact-sql) | Yes | Yes |
| [Extended events (XEvent)](/sql/relational-databases/extended-events/extended-events) | Some - see [Extended events in SQL Database](xevent-db-diff-from-svr.md) | Yes - see [Extended events differences](../managed-instance/transact-sql-tsql-differences-sql-server.md#extended-events) |
The Azure platform provides a number of PaaS capabilities that are added as an a
| [SQL Server Reporting Services (SSRS)](/sql/reporting-services/create-deploy-and-manage-mobile-and-paginated-reports) | No - [see Power BI](/power-bi/) | No - use [Power BI paginated reports](/power-bi/paginated-reports/paginated-reports-report-builder-power-bi) instead or host SSRS on an Azure VM. While SQL Managed Instance cannot run SSRS as a service, it can host [SSRS catalog databases](/sql/reporting-services/install-windows/ssrs-report-server-create-a-report-server-database#database-server-version-requirements) for a reporting server installed on Azure Virtual Machine, using SQL Server authentication. |
| [Query Performance Insights (QPI)](query-performance-insight-use.md) | Yes | No. Use built-in reports in SQL Server Management Studio and Azure Data Studio. |
| [VNet](../../virtual-network/virtual-networks-overview.md) | Partial; it enables restricted access using [VNet Endpoints](vnet-service-endpoint-rule-overview.md) | Yes, SQL Managed Instance is injected in a customer's VNet. See [subnet](../managed-instance/transact-sql-tsql-differences-sql-server.md#subnet) and [VNet](../managed-instance/transact-sql-tsql-differences-sql-server.md#vnet) |
-| VNet Service endpoint | [Yes](vnet-service-endpoint-rule-overview.md) | No |
+| VNet Service endpoint | [Yes](vnet-service-endpoint-rule-overview.md) | Yes |
| VNet Global peering | Yes, using [Private IP and service endpoints](vnet-service-endpoint-rule-overview.md) | Yes, using [Virtual network peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913). |
| [Private connectivity](../../private-link/private-link-overview.md) | Yes, using [Private Link](../../private-link/private-endpoint-overview.md) | Yes, using VNet. |
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-backup-retention-configure.md
ms.devlang: --++ Last updated 12/16/2020
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-retention-overview.md
ms.devlang: --++ Last updated 07/13/2021
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/recovery-using-backups.md
ms.devlang: --++ Last updated 01/10/2022
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/backup-database-cli.md
ms.devlang: azurecli --++ Last updated 01/17/2022
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
ms.devlang: azurecli --++ Last updated 01/18/2022
azure-sql Import From Bacpac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-powershell.md
ms.devlang: PowerShell --++ Last updated 05/24/2019
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-cli.md
ms.devlang: azurecli ---+++ Last updated 01/18/2022
azure-sql Restore Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-powershell.md
ms.devlang: PowerShell --++ Last updated 03/27/2019
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Previously updated : 01/17/2022 Last updated : 01/24/2022 # Quickstart: Create an Azure SQL Database single database
To create a single database in the Azure portal, this quickstart starts at the A
# [Azure CLI](#tab/azure-cli)
-You can create an Azure resource group, server, and single database using the Azure command-line interface (Azure CLI).
+The Azure CLI code blocks in this section create a resource group, server, single database, and server-level IP firewall rule for access to the server. Make sure to record the generated resource group and server names, so you can manage these resources later.
++ [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
az sql db create \
# [Azure CLI (sql up)](#tab/azure-cli-sql-up)
-You can create an Azure resource group, server, and single database using the Azure command-line interface (Azure CLI). If you don't want to use the Azure Cloud Shell, [install Azure CLI](/cli/azure/install-azure-cli) on your computer.
+The Azure CLI code blocks in this section use the [az sql up](/cli/azure/sql#az_sql_up) command to simplify the database creation process. With it, you can create a database and all of its associated resources with a single command. This includes the resource group, server name, server location, database name, and login information. The database is created with a default pricing tier of General Purpose, Provisioned, Gen5, 2 vCores.
-The following Azure CLI code blocks create a resource group, server, single database, and server-level IP firewall rule for access to the server. Make sure to record the generated resource group and server names, so you can manage these resources later.
+ [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
Change the location as appropriate for your environment. Replace `0.0.0.0` with
### Create a database and resources
-The [az sql up](/cli/azure/sql#az_sql_up) command simplifies the database creation process. With it, you can create a database and all of its associated resources with a single command. This includes the resource group, server name, server location, database name, and login information. The database is created with a default pricing tier of General Purpose, Provisioned, Gen5, 2 vCores.
-
-This command creates and configures a [logical server](logical-servers.md) for Azure SQL Database for immediate use. For more granular resource control during database creation, use the standard Azure CLI commands in this article.
+Use the [az sql up](/cli/azure/sql#az_sql_up) command to create and configure a [logical server](logical-servers.md) for Azure SQL Database for immediate use. Make sure to record the generated resource group and server names, so you can manage these resources later.
> [!NOTE]
> When running the `az sql up` command for the first time, Azure CLI prompts you to install the `db-up` extension. This extension is currently in preview. Accept the installation to continue. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
azure-sql Backup Activity Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/backup-activity-monitor.md
ms.devlang: -+ -+ Last updated 12/14/2018 # Monitor backup activity for Azure SQL Managed Instance
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
ms.devlang: --++ Last updated 09/12/2021
azure-sql Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/point-in-time-restore.md
ms.devlang: -+ -+ Last updated 08/25/2019 # Restore a database in Azure SQL Managed Instance to a previous point in time
azure-sql Restore Sample Database Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/restore-sample-database-quickstart.md
ms.devlang: -+ -+ Last updated 09/13/2021 # Quickstart: Restore a database to Azure SQL Managed Instance with SSMS
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
ms.devlang: azurecli --++ Last updated 01/18/2022
azure-sql Restore Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup.md
ms.devlang: PowerShell
-+ Last updated 07/03/2019 # Use PowerShell to restore an Azure SQL Managed Instance database to another geo-region
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
The following table types aren't supported:
- [FILESTREAM](/sql/relational-databases/blob/filestream-sql-server)
- [FILETABLE](/sql/relational-databases/blob/filetables-sql-server)
-- [EXTERNAL TABLE](/sql/t-sql/statements/create-external-table-transact-sql) (Polybase)
+- [EXTERNAL TABLE](/sql/t-sql/statements/create-external-table-transact-sql) (except Polybase, in preview)
- [MEMORY_OPTIMIZED](/sql/relational-databases/in-memory-oltp/introduction-to-memory-optimized-tables) (not supported in the General Purpose tier; supported in other tiers)

For information about how to create and alter tables, see [CREATE TABLE](/sql/t-sql/statements/create-table-transact-sql) and [ALTER TABLE](/sql/t-sql/statements/alter-table-transact-sql).
SQL Managed Instance places verbose information in error logs. There are many in
- For a features and comparison list, see [Azure SQL Managed Instance feature comparison](../database/features-comparison.md).
- For release updates, see [What's new?](doc-changes-updates-release-notes-whats-new.md).
- For issues, workarounds, and resolutions, see [Known issues](doc-changes-updates-known-issues.md).
-- For a quickstart that shows you how to create a new SQL Managed Instance, see [Create a SQL Managed Instance](instance-create-quickstart.md).
+- For a quickstart that shows you how to create a new SQL Managed Instance, see [Create a SQL Managed Instance](instance-create-quickstart.md).
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
| **Host** | vSphere Replication<br />&#160;&#160;&#160;&#160;Manage replication |
| **Network** | Assign network |
| **Permissions** | Modify permissions<br />Modify role |
-| **Profile** | Profile driven storage view |
+| **Profile Driven Storage** | Profile driven storage view |
| **Resource** | Apply recommendation<br />Assign vApp to resource pool<br />Assign virtual machine to resource pool<br />Create resource pool<br />Migrate powered off virtual machine<br />Migrate powered on virtual machine<br />Modify resource pool<br />Move resource pool<br />Query vMotion<br />Remove resource pool<br />Rename resource pool |
| **Scheduled task** | Create task<br />Modify task<br />Remove task<br />Run task |
| **Sessions** | Message<br />Validate session |
| **Storage view** | View |
| **vApp** | Add virtual machine<br />Assign resource pool<br />Assign vApp<br />Clone<br />Create<br />Delete<br />Export<br />Import<br />Move<br />Power off<br />Power on<br />Rename<br />Suspend<br />Unregister<br />View OVF environment<br />vApp application configuration<br />vApp instance configuration<br />vApp managedBy configuration<br />vApp resource configuration |
-| **Virtual machine** | Change Configuration<br />&#160;&#160;&#160;&#160;Acquire disk lease<br />&#160;&#160;&#160;&#160;Add existing disk<br />&#160;&#160;&#160;&#160;Add new disk<br />&#160;&#160;&#160;&#160;Add or remove device<br />&#160;&#160;&#160;&#160;Advanced configuration<br />&#160;&#160;&#160;&#160;Change CPU count<br />&#160;&#160;&#160;&#160;Change memory<br />&#160;&#160;&#160;&#160;Change settings<br />&#160;&#160;&#160;&#160;Change swapfile placement<br />&#160;&#160;&#160;&#160;Change resource<br />&#160;&#160;&#160;&#160;Configure host USB device<br />&#160;&#160;&#160;&#160;Configure raw device<br />&#160;&#160;&#160;&#160;Configure managedBy<br />&#160;&#160;&#160;&#160;Display connection settings<br />&#160;&#160;&#160;&#160;Extend virtual disk<br />&#160;&#160;&#160;&#160;Modify device settings<br />&#160;&#160;&#160;&#160;Query fault tolerance compatibility<br />&#160;&#160;&#160;&#160;Query unowned files<br />&#160;&#160;&#160;&#160;Reload from paths<br />&#160;&#160;&#160;&#160;Remove disk<br />&#160;&#160;&#160;&#160;Rename<br />&#160;&#160;&#160;&#160;Reset guest information<br />&#160;&#160;&#160;&#160;Set annotation<br />&#160;&#160;&#160;&#160;Toggle disk change tracking<br />&#160;&#160;&#160;&#160;Toggle fork parent<br />&#160;&#160;&#160;&#160;Upgrade virtual machine compatibility<br />Edit inventory<br />&#160;&#160;&#160;&#160;Create from existing<br />&#160;&#160;&#160;&#160;Create new<br />&#160;&#160;&#160;&#160;Move<br />&#160;&#160;&#160;&#160;Register<br />&#160;&#160;&#160;&#160;Remove<br />&#160;&#160;&#160;&#160;Unregister<br />Guest operations<br />&#160;&#160;&#160;&#160;Guest operation alias modification<br />&#160;&#160;&#160;&#160;Guest operation alias query<br />&#160;&#160;&#160;&#160;Guest operation modifications<br />&#160;&#160;&#160;&#160;Guest operation program execution<br />&#160;&#160;&#160;&#160;Guest operation queries<br />Interaction<br />&#160;&#160;&#160;&#160;Answer question<br />&#160;&#160;&#160;&#160;Back up operation on virtual machine<br />&#160;&#160;&#160;&#160;Configure CD media<br />&#160;&#160;&#160;&#160;Configure floppy media<br />&#160;&#160;&#160;&#160;Connect devices<br />&#160;&#160;&#160;&#160;Console interaction<br />&#160;&#160;&#160;&#160;Create screenshot<br />&#160;&#160;&#160;&#160;Defragment all disks<br />&#160;&#160;&#160;&#160;Drag and drop<br />&#160;&#160;&#160;&#160;Guest operating system management by VIX API<br />&#160;&#160;&#160;&#160;Inject USB HID scan codes<br />&#160;&#160;&#160;&#160;Install VMware tools<br />&#160;&#160;&#160;&#160;Pause or Unpause<br />&#160;&#160;&#160;&#160;Wipe or shrink operations<br />&#160;&#160;&#160;&#160;Power off<br />&#160;&#160;&#160;&#160;Power on<br />&#160;&#160;&#160;&#160;Record session on virtual machine<br />&#160;&#160;&#160;&#160;Replay session on virtual machine<br />&#160;&#160;&#160;&#160;Suspend<br />&#160;&#160;&#160;&#160;Suspend fault tolerance<br />&#160;&#160;&#160;&#160;Test failover<br />&#160;&#160;&#160;&#160;Test restart secondary VM<br />&#160;&#160;&#160;&#160;Turn off fault tolerance<br />&#160;&#160;&#160;&#160;Turn on fault tolerance<br />Provisioning<br />&#160;&#160;&#160;&#160;Allow disk access<br />&#160;&#160;&#160;&#160;Allow file access<br />&#160;&#160;&#160;&#160;Allow read-only disk access<br />&#160;&#160;&#160;&#160;Allow virtual machine download<br />&#160;&#160;&#160;&#160;Clone template<br />&#160;&#160;&#160;&#160;Clone virtual machine<br />&#160;&#160;&#160;&#160;Create template from virtual machine<br />&#160;&#160;&#160;&#160;Customize guest<br />&#160;&#160;&#160;&#160;Deploy template<br />&#160;&#160;&#160;&#160;Mark as template<br />&#160;&#160;&#160;&#160;Modify customization specification<br />&#160;&#160;&#160;&#160;Promote disks<br />&#160;&#160;&#160;&#160;Read customization specifications<br />Service configuration<br />&#160;&#160;&#160;&#160;Allow notifications<br />&#160;&#160;&#160;&#160;Allow polling of global event notifications<br />&#160;&#160;&#160;&#160;Manage service configuration<br />&#160;&#160;&#160;&#160;Modify service configuration<br />&#160;&#160;&#160;&#160;Query service configurations<br />&#160;&#160;&#160;&#160;Read service configuration<br />Snapshot management<br />&#160;&#160;&#160;&#160;Create snapshot<br />&#160;&#160;&#160;&#160;Remove snapshot<br />&#160;&#160;&#160;&#160;Rename snapshot<br />&#160;&#160;&#160;&#160;Revert snapshot<br />vSphere Replication<br />&#160;&#160;&#160;&#160;Configure replication<br />&#160;&#160;&#160;&#160;Manage replication<br />&#160;&#160;&#160;&#160;Monitor replication |
+| **Virtual machine** | Change Configuration<br />&#160;&#160;&#160;&#160;Acquire disk lease<br />&#160;&#160;&#160;&#160;Add existing disk<br />&#160;&#160;&#160;&#160;Add new disk<br />&#160;&#160;&#160;&#160;Add or remove device<br />&#160;&#160;&#160;&#160;Advanced configuration<br />&#160;&#160;&#160;&#160;Change CPU count<br />&#160;&#160;&#160;&#160;Change memory<br />&#160;&#160;&#160;&#160;Change settings<br />&#160;&#160;&#160;&#160;Change swapfile placement<br />&#160;&#160;&#160;&#160;Change resource<br />&#160;&#160;&#160;&#160;Configure host USB device<br />&#160;&#160;&#160;&#160;Configure raw device<br />&#160;&#160;&#160;&#160;Configure managedBy<br />&#160;&#160;&#160;&#160;Display connection settings<br />&#160;&#160;&#160;&#160;Extend virtual disk<br />&#160;&#160;&#160;&#160;Modify device settings<br />&#160;&#160;&#160;&#160;Query fault tolerance compatibility<br />&#160;&#160;&#160;&#160;Query unowned files<br />&#160;&#160;&#160;&#160;Reload from paths<br />&#160;&#160;&#160;&#160;Remove disk<br />&#160;&#160;&#160;&#160;Rename<br />&#160;&#160;&#160;&#160;Reset guest information<br />&#160;&#160;&#160;&#160;Set annotation<br />&#160;&#160;&#160;&#160;Toggle disk change tracking<br />&#160;&#160;&#160;&#160;Toggle fork parent<br />&#160;&#160;&#160;&#160;Upgrade virtual machine compatibility<br />Edit inventory<br />&#160;&#160;&#160;&#160;Create from existing<br />&#160;&#160;&#160;&#160;Create new<br />&#160;&#160;&#160;&#160;Move<br />&#160;&#160;&#160;&#160;Register<br />&#160;&#160;&#160;&#160;Remove<br />&#160;&#160;&#160;&#160;Unregister<br />Guest operations<br />&#160;&#160;&#160;&#160;Guest operation alias modification<br />&#160;&#160;&#160;&#160;Guest operation alias query<br />&#160;&#160;&#160;&#160;Guest operation modifications<br />&#160;&#160;&#160;&#160;Guest operation program execution<br />&#160;&#160;&#160;&#160;Guest operation queries<br />Interaction<br />&#160;&#160;&#160;&#160;Answer question<br />&#160;&#160;&#160;&#160;Back up operation on virtual machine<br />&#160;&#160;&#160;&#160;Configure CD media<br />&#160;&#160;&#160;&#160;Configure floppy media<br />&#160;&#160;&#160;&#160;Connect devices<br />&#160;&#160;&#160;&#160;Console interaction<br />&#160;&#160;&#160;&#160;Create screenshot<br />&#160;&#160;&#160;&#160;Defragment all disks<br />&#160;&#160;&#160;&#160;Drag and drop<br />&#160;&#160;&#160;&#160;Guest operating system management by VIX API<br />&#160;&#160;&#160;&#160;Inject USB HID scan codes<br />&#160;&#160;&#160;&#160;Install VMware tools<br />&#160;&#160;&#160;&#160;Pause or Unpause<br />&#160;&#160;&#160;&#160;Wipe or shrink operations<br />&#160;&#160;&#160;&#160;Power off<br />&#160;&#160;&#160;&#160;Power on<br />&#160;&#160;&#160;&#160;Record session on virtual machine<br />&#160;&#160;&#160;&#160;Replay session on virtual machine<br />&#160;&#160;&#160;&#160;Reset<br />&#160;&#160;&#160;&#160;Resume Fault Tolerance<br />&#160;&#160;&#160;&#160;Suspend<br />&#160;&#160;&#160;&#160;Suspend fault tolerance<br />&#160;&#160;&#160;&#160;Test failover<br />&#160;&#160;&#160;&#160;Test restart secondary VM<br />&#160;&#160;&#160;&#160;Turn off fault tolerance<br />&#160;&#160;&#160;&#160;Turn on fault tolerance<br />Provisioning<br />&#160;&#160;&#160;&#160;Allow disk access<br />&#160;&#160;&#160;&#160;Allow file access<br />&#160;&#160;&#160;&#160;Allow read-only disk access<br />&#160;&#160;&#160;&#160;Allow virtual machine download<br />&#160;&#160;&#160;&#160;Clone template<br />&#160;&#160;&#160;&#160;Clone virtual machine<br />&#160;&#160;&#160;&#160;Create template from virtual machine<br />&#160;&#160;&#160;&#160;Customize guest<br />&#160;&#160;&#160;&#160;Deploy template<br />&#160;&#160;&#160;&#160;Mark as template<br />&#160;&#160;&#160;&#160;Modify customization specification<br />&#160;&#160;&#160;&#160;Promote disks<br />&#160;&#160;&#160;&#160;Read customization specifications<br />Service configuration<br />&#160;&#160;&#160;&#160;Allow notifications<br />&#160;&#160;&#160;&#160;Allow polling of global event notifications<br />&#160;&#160;&#160;&#160;Manage service configuration<br />&#160;&#160;&#160;&#160;Modify service configuration<br />&#160;&#160;&#160;&#160;Query service configurations<br />&#160;&#160;&#160;&#160;Read service configuration<br />Snapshot management<br />&#160;&#160;&#160;&#160;Create snapshot<br />&#160;&#160;&#160;&#160;Remove snapshot<br />&#160;&#160;&#160;&#160;Rename snapshot<br />&#160;&#160;&#160;&#160;Revert snapshot<br />vSphere Replication<br />&#160;&#160;&#160;&#160;Configure replication<br />&#160;&#160;&#160;&#160;Manage replication<br />&#160;&#160;&#160;&#160;Monitor replication |
| **vService** | Create dependency<br />Destroy dependency<br />Reconfigure dependency configuration<br />Update dependency |
| **vSphere tagging** | Assign and unassign vSphere tag<br />Create vSphere tag<br />Create vSphere tag category<br />Delete vSphere tag<br />Delete vSphere tag category<br />Edit vSphere tag<br />Edit vSphere tag category<br />Modify UsedBy field for category<br />Modify UsedBy field for tag |
azure-vmware Ecosystem Disaster Recovery Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-disaster-recovery-vms.md
Following our principle of giving customers the choice to apply their investment
You can find more information about their solutions in the links below:

- [Jetstream](https://www.jetstreamsoft.com/2020/09/28/solution-brief-disaster-recovery-for-avs/)
-- [Zerto](https://www.zerto.com/solutions/use-cases/disaster-recovery/)
+- [Zerto](https://www.zerto.com/solutions/use-cases/disaster-recovery/)
+- [RiverMeadow](https://www.rivermeadow.com/disaster-recovery-azure-blob)
azure-web-pubsub Concept Service Internals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/concept-service-internals.md
A PubSub WebSocket client can:
You may have noticed that for a [simple WebSocket client](#simple_client), the *server* is a MUST HAVE role to handle the events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to it directly. It can also route messages to different upstreams (event handlers) by customizing the *event* a message belongs to.

#### Scenarios:
-Such clients can be used when clients want to talk to each other. Messages are sent from `client1` to the service and the service delivers the message directly to `client2` if the clients are authorized to do so.
+Such clients can be used when clients want to talk to each other. Messages are sent from `client2` to the service and the service delivers the message directly to `client1` if the clients are authorized to do so.
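As an illustrative sketch of the wire format (message shapes follow the `json.webpubsub.azure.v1` subprotocol; the group name `group1` and the payload text are placeholders), the exchange could look like the following.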
Client1:
You may have noticed that the *event handler role* handles communication from th
## Next steps
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-edge-secured-core.md
Overview content
::: zone pivot="platform-windows" ## Windows IoT OS Support
-Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 1903
+Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 1903 or greater:
+* [Windows 10 IoT Enterprise Lifecycle](https://docs.microsoft.com/lifecycle/products/windows-10-iot-enterprise)
> [!Note] > The Windows secured-core tests require you to download and run the following package (https://aka.ms/Scforwiniot) from an Administrator Command Prompt on the IoT device being validated.
Edge Secured-core for Windows IoT requires Windows 10 IoT Enterprise version 190
|Validation|Device to be validated through [Edge Secured-core Agent](https://aka.ms/Scforwiniot) toolset to ensure that firmware and kernel signatures are validated every time the device boots. <ul><li>UEFI: Secure boot is enabled</li></ul>| |Resources|| +
+</br>
+
+|Name|SecuredCore.Firmware.Attestation|
+|:|:|
+|Status|Required|
+|Description|The purpose of the test is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
+|Target Availability|2022|
+|Requirements dependency|Azure Attestation Service|
+|Validation Type|Manual/Tools|
+|Validation|Device to be validated through toolset to ensure that platform boot logs and measurements of boot activity can be collected and remotely attested to the Microsoft Azure Attestation service.|
+|Resources| [Microsoft Azure Attestation](../attestation/index.yml) |
+ ## Windows IoT configuration requirements
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Update|
-|:|:|
-|Status|Required|
-|Description|The purpose of the test is to validate the device can receive and update its firmware and software.|
-|Target Availability|2022|
-|Requirements dependency||
-|Validation Type|Manual/Tools|
-|Validation|Partner confirmation that they were able to send an update to the device through Windows update and other approved services.|
-|Resources|[Device Update for IoT Hub](../iot-hub-device-update/index.yml)|
--
-</br>
-
-|Name|SecuredCore.Hardware.Attestation|
-|:|:|
-|Status|Required|
-|Description|The purpose of the test is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
-|Target Availability|2022|
-|Requirements dependency|Azure Attestation Service|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that platform boot logs and measurements of boot activity can be collected and remotely attested to the Microsoft Azure Attestation service.|
-|Resources| [Microsoft Azure Attestation](../attestation/index.yml) |
--
-</br>
- |Name|SecuredCore.Protection.Baselines| |:|:| |Status|Coming Soon June 2022|
Edge Secured-core validation on Linux based devices is executed through a contai
</br>
-|Name|SecuredCore.Protection.SignedUpdates|
+|Name|SecuredCore.Firmware.Attestation|
|:|:| |Status|Required|
-|Description|The purpose of the test is to validate that updates must be signed.|
+|Description|The purpose of the test is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
|Target Availability|2022| |Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that updates to the operating system, drivers, application software, libraries, packages and firmware will not be applied unless properly signed and validated.
-|Resources||
+|Validation|Device to be validated through toolset to ensure that platform boot logs and measurements of boot activity can be collected and remotely attested to the Microsoft Azure Attestation service.|
+|Resources| [Microsoft Azure Attestation](../attestation/index.yml) |
++
+</br>
+
+|Name|SecuredCore.Hardware.SecureEnclave|
+|:|:|
+|Status|Optional|
+|Description|The purpose of the test is to validate the existence of a secure enclave and that the enclave is accessible from a secure agent.|
+|Target Availability|2022|
+|Validation Type|Manual/Tools|
+|Validation|Device to be validated through toolset to ensure the Azure Security Agent can communicate with the secure enclave|
+|Resources|https://github.com/openenclave/openenclave/blob/master/samples/BuildSamplesLinux.md|
## Linux Configuration Requirements
Validation|Device to be validated through toolset to ensure the device supports
|Validation|Device to be validated through toolset to ensure that services accepting network connections are not running with SYSTEM or root privileges.| |Resources|| -
-</br>
-
-|Name|SecuredCore.Hardware.SecureEnclave|
-|:|:|
-|Status|Optional|
-|Description|The purpose of the test to validate the existence of a secure enclave and that the enclave is accessible from a secure agent.|
-|Target Availability|2022|
-|Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure the Azure Security Agent can communicate with the secure enclave|
-|Resources|https://github.com/openenclave/openenclave/blob/master/samples/BuildSamplesLinux.md|
## Linux Software/Service Requirements
Validation|Device to be validated through toolset to ensure the device supports
</br>
-|Name|SecuredCore.Firmware.Attestation|
+|Name|SecuredCore.Protection.SignedUpdates|
|:|:| |Status|Required|
-|Description|The purpose of the test is to ensure the device can remotely attest to the Microsoft Azure Attestation service.|
+|Description|The purpose of the test is to validate that updates must be signed.|
|Target Availability|2022| |Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that platform boot logs and measurements of boot activity can be collected and remotely attested to the Microsoft Azure Attestation service.|
-|Resources| [Microsoft Azure Attestation](../attestation/index.yml) |
+|Validation|Device to be validated through toolset to ensure that updates to the operating system, drivers, application software, libraries, packages and firmware will not be applied unless properly signed and validated.|
+|Resources||
++ ## Linux Policy Requirements
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge comp
In this article, you will download and install the following software packages. The host computer must be able to run the following (see below for instructions):
-* [NVIDIA graphics drivers](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/https://docsupdatetracker.net/index.html) and [NVIDIA CUDA Toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/https://docsupdatetracker.net/index.html)
+* [NVIDIA graphics drivers](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/https://docsupdatetracker.net/index.html) and [NVIDIA CUDA Toolkit](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/https://docsupdatetracker.net/index.html). The minimum GPU driver version is 460 with CUDA 11.1.
* Configurations for [NVIDIA MPS](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf) (Multi-Process Service). * [Docker CE](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-enginecommunity-1) and [NVIDIA-Docker2](https://github.com/NVIDIA/nvidia-docker) * [Azure IoT Edge](../../iot-edge/how-to-provision-single-device-linux-symmetric.md) runtime.
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
These are the parameters required by each of these Spatial Analysis operations.
| CALIBRATION_CONFIG | JSON indicating parameters to control how the camera calibration works. It should be in the following format: `"{\"enable_recalibration\": true, \"quality_check_frequency_seconds\": 86400}",`| | SPACEANALYTICS_CONFIG | JSON configuration for zone and line as outlined below.| | ENABLE_FACE_MASK_CLASSIFIER | `True` to enable detecting people wearing face masks in the video stream, `False` to disable it. By default this is disabled. Face mask detection requires input video width parameter to be 1920 `"INPUT_VIDEO_WIDTH": 1920`. The face mask attribute will not be returned if detected people are not facing the camera or are too far from it. Refer to the [camera placement](spatial-analysis-camera-placement.md) guide for more information |
+| STATIONARY_TARGET_REMOVER_CONFIG | JSON indicating the parameters for stationary target removal, which adds the capability to learn and ignore long-term stationary false positive targets such as mannequins or people in pictures. Configuration should be in the following format: `"{\"enable\": true, \"bbox_dist_threshold-in_pixels\": 5, \"buffer_length_in_seconds\": 3600, \"filter_ratio\": 0.2 }"`|
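For readability, here is the same `STATIONARY_TARGET_REMOVER_CONFIG` value from the row above with the JSON escaping removed (identical keys and defaults):

```json
{
    "enable": true,
    "bbox_dist_threshold-in_pixels": 5,
    "buffer_length_in_seconds": 3600,
    "filter_ratio": 0.2
}
```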
### Detector node parameter settings This is an example of the DETECTOR_NODE_CONFIG parameters for all Spatial Analysis operations.
This is an example of the output from camera calibration if enabled. Ellipses in
"sourceInfo": { "id": "camera1", "timestamp": "2021-04-20T21:15:59.100Z",
- "width": 640,
- "height": 360,
+ "width": 512,
+ "height": 288,
"frameId": 531, "cameraCalibrationInfo": { "status": "Calibrated",
This is an example of the output from camera calibration if enabled. Ellipses in
{ "x": 0.15805946791862285, "y": 0.5487465181058496
- },
- ...
+ }
], "name": "optimal_zone_region" },
This is an example of the output from camera calibration if enabled. Ellipses in
{ "x": 0.22065727699530516, "y": 0.7325905292479109
- },
- ...
+ }
], "name": "fair_zone_region" },
This is an example of the output from camera calibration if enabled. Ellipses in
"y": 0.2757660167130919 } ]
- },
- ...
+ }
], "personBoundingBoxGroundPoints": [ { "x": -22.944068908691406, "y": 31.487680435180664
- },
- ...
+ }
] } }
See [Spatial analysis operation output](#spatial-analysis-operation-output) for
| `optimalZonePolygon` | object| A polygon in the camera image where lines or zones for your operations can be placed for optimal results. <br/> Each value pair represents the x,y for vertices of a polygon. The polygon represents the areas in which people are tracked or counted and polygon points are based on normalized coordinates (0-1), where the top left corner is (0.0, 0.0) and the bottom right corner is (1.0, 1.0).| | `fairZonePolygon` | object| A polygon in the camera image where lines or zones for your operations can be placed for good, but possibly not optimal, results. <br/> See `optimalZonePolygon` above for an in-depth explanation of the contents. | | `uniformlySpacedPersonBoundingBoxes` | list | A list of bounding boxes of people within the camera image distributed uniformly in real space. Values are based on normalized coordinates (0-1).|
-| `personBoundingBoxGroundPoints` | list | A list of coordinates on the floor plane relative to the camera. Each coordinate corresponds to the bottom right of the bounding box in `uniformlySpacedPersonBoundingBoxes` with the same index. <br/> See the `centerGroundPoint` field under the [JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights](#json-format-for-cognitiveservicesvisionspatialanalysis-persondistance-ai-insights) section for more details on how coordinates on the floor plane are calculated. |
+| `personBoundingBoxGroundPoints` | list | A list of coordinates on the floor plane relative to the camera. Each coordinate corresponds to the bottom right of the bounding box in `uniformlySpacedPersonBoundingBoxes` with the same index. <br/> See the `centerGroundPointX/centerGroundPointY` fields under the [JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights](#json-format-for-cognitiveservicesvisionspatialanalysis-persondistance-ai-insights) section for more details on how coordinates on the floor plane are calculated. |
Example of the zone placement info output visualized on a video frame: ![Zone placement info visualization](./media/spatial-analysis/zone-placement-info-visualization.png) The zone placement info provides suggestions for your configurations, but the guidelines in [Camera configuration](#camera-configuration) must still be followed for best results.
-### Speed parameter settings
+### Tracker node parameter settings
You can configure the speed computation through the tracker node parameter settings.
```
{
"enable_speed": true,
+"remove_stationary_objects": true,
+"stationary_objects_dist_threshold_in_pixels": 5,
+"stationary_objects_buffer_length_in_seconds": 3600,
+"stationary_objects_filter_ratio": 0.2
}
```

| Name | Type| Description|
||||
| `enable_speed` | bool | Indicates whether you want to compute the speed for the detected people or not. `enable_speed` is set by default to `True`. It is highly recommended that you enable both speed and orientation to have the best estimated values. |
+| `remove_stationary_objects` | bool | Indicates whether you want to remove stationary objects. `remove_stationary_objects` is set by default to `True`. |
+| `stationary_objects_dist_threshold_in_pixels` | int | The neighborhood distance threshold to decide whether two detection boxes can be treated as the same detection. `stationary_objects_dist_threshold_in_pixels` is set by default to 5. |
+| `stationary_objects_buffer_length_in_seconds` | int | The minimum length of time in seconds that the system has to look back to decide whether a target is a stationary target or not. `stationary_objects_buffer_length_in_seconds` is set by default to 3600. |
+| `stationary_objects_filter_ratio` | float | If a target is repeatedly detected at the same location (defined by `stationary_objects_dist_threshold_in_pixels`) for more than `stationary_objects_filter_ratio` (0.2 means 20%) of the `stationary_objects_buffer_length_in_seconds` time interval, it will be treated as a stationary target (see the worked example after this table). `stationary_objects_filter_ratio` is set by default to 0.2. |
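For example, with the default values above, a target that keeps being detected within 5 pixels of the same spot for more than 0.2 × 3600 seconds = 720 seconds of the one-hour buffer window is classified as stationary and removed from tracking.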
## Spatial Analysis operations configuration and output
This is an example of a JSON input for the SPACEANALYTICS_CONFIG parameter that
"output_frequency":1, "minimum_distance_threshold":6.0, "maximum_distance_threshold":35.0,
- "aggregation_method": "average"
+ "aggregation_method": "average"
"threshold": 16.00, "focus": "footprint"
- }
+ }
}] }] }
Sample JSON for an event output by this operation.
] }, "confidence": 0.9559211134910583,
- "centerGroundPoint": {
- "x": 0.0,
- "y": 0.0
- },
"metadata": {
- "attributes": {
- "face_mask": 0.99
- }
- }
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "0.0",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
}, { "type": "person",
Sample JSON for an event output by this operation.
] }, "confidence": 0.9389744400978088,
- "centerGroundPoint": {
- "x": 0.0,
- "y": 0.0
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
},
- "metadata":{
- "attributes": {
- "face_nomask": 0.99
- }
- }
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
} ],
- "schemaVersion": "1.0"
+ "schemaVersion": "2.0"
} ```
Sample JSON for an event output by this operation.
| `type` | string| Type of region| | `points` | collection| Top left and bottom right points when the region type is RECTANGLE | | `confidence` | float| Algorithm confidence|
-| `face_mask` | float | The attribute confidence value with range (0-1) indicates the detected person is wearing a face mask |
-| `face_nomask` | float | The attribute confidence value with range (0-1) indicates the detected person is **not** wearing a face mask |
+| `attributes` | array| Array of attributes. Each attribute consists of a label, a task, and a confidence value. |
+| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
+| `confidence (attribute)` | float| The attribute confidence value with a range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
+| `task` | string | The attribute classification task/class |
+ | SourceInfo Field Name | Type| Description| ||||
Sample JSON for detections output by this operation.
}, "confidence": 0.9005028605461121, "metadata": {
- "attributes": {
- "face_mask": 0.99
- }
- }
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "trackingId": "90d55bfc64c54bfd98226697ad8445ca",
+ "speed": "1.2",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
} ],
- "schemaVersion": "1.0"
+ "schemaVersion": "2.0"
} ``` | Event Field Name | Type| Description|
Sample JSON for detections output by this operation.
| `mappedImageOrientation` | float| The projected clockwise radian angle of the person's orientation on the 2D image space | | `speed` | float| The estimated speed of the detected person. The unit is `foot per second (ft/s)`| | `confidence` | float| Algorithm confidence|
-| `face_mask` | float | The attribute confidence value with range (0-1) indicates the detected person is wearing a face mask |
-| `face_nomask` | float | The attribute confidence value with range (0-1) indicates the detected person is **not** wearing a face mask |
+| `attributes` | array| Array of attributes. Each attribute consists of a label, a task, and a confidence value. |
+| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
+| `confidence (attribute)` | float| The attribute confidence value with a range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
+| `task` | string | The attribute classification task/class |
| SourceInfo Field Name | Type| Description| ||||
Sample JSON for detections output by this operation with `zonecrossing` type SPA
] }, "confidence": 0.6267998814582825,
- "metadata": {
- "attributes": {
- "face_mask": 0.99
- }
- }
-
- }
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "speed": "1.2",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ },
+ "attributes": [
+ {
+ "label": "face_mask",
+ "confidence": 0.99,
+ "task": ""
+ }
+ ]
+ }
],
- "schemaVersion": "1.0"
+ "schemaVersion": "2.0"
} ```
Sample JSON for detections output by this operation with `zonedwelltime` type SP
"trackingId": "afcc2e2a32a6480288e24381f9c5d00e", "status": "Exit", "side": "1",
- "dwellTime": 7132.0,
- "dwellFrames": 20
+ "dwellTime": 7132.0,
+ "dwellFrames": 20
}, "zone": "queuecamera" }
Sample JSON for detections output by this operation with `zonedwelltime` type SP
] }, "confidence": 0.6267998814582825,
- "metadataType": "",
- "metadata": {
- "groundOrientationAngle": 1.2,
- "mappedImageOrientation": 0.3,
- "speed": 1.2
- },
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.2",
+ "mappedImageOrientation": "0.3",
+ "speed": "1.2",
+ "trackingId": "afcc2e2a32a6480288e24381f9c5d00e",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ }
} ],
- "schemaVersion": "1.0"
+ "schemaVersion": "2.0"
} ```
Sample JSON for detections output by this operation with `zonedwelltime` type SP
| `mappedImageOrientation` | float| The projected clockwise radian angle of the person's orientation on the 2D image space | | `speed` | float| The estimated speed of the detected person. The unit is `foot per second (ft/s)`| | `confidence` | float| Algorithm confidence|
-| `face_mask` | float | The attribute confidence value with range (0-1) indicates the detected person is wearing a face mask |
-| `face_nomask` | float | The attribute confidence value with range (0-1) indicates the detected person is **not** wearing a face mask |
+| `attributes` | array| Array of attributes. Each attribute consists of a label, a task, and a confidence value. |
+| `label` | string| The attribute value (for example, `{label: face_mask}` indicates the detected person is wearing a face mask) |
+| `confidence (attribute)` | float| The attribute confidence value with a range of 0 to 1 (for example, `{confidence: 0.9, label: face_nomask}` indicates the detected person is *not* wearing a face mask) |
+| `task` | string | The attribute classification task/class |
### JSON format for cognitiveservices.vision.spatialanalysis-persondistance AI Insights
Sample JSON for detections output by this operation.
] }, "confidence": 0.948630690574646,
- "centerGroundPoint": {
- "x": -1.4638760089874268,
- "y": 18.29732322692871
- },
- "metadataType": ""
+ "metadata": {
+ "centerGroundPointX": "-1.4638760089874268",
+ "centerGroundPointY": "18.29732322692871",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ }
}, { "type": "person",
Sample JSON for detections output by this operation.
] }, "confidence": 0.8235412240028381,
- "centerGroundPoint": {
- "x": 2.6310102939605713,
- "y": 18.635927200317383
- },
- "metadataType": ""
+ "metadata": {
+ "centerGroundPointX": "2.6310102939605713",
+ "centerGroundPointY": "18.635927200317383",
+ "groundOrientationAngle": "1.3",
+ "footprintX": "0.7306610584259033",
+ "footprintY": "0.8814966493381893"
+ }
} ],
- "schemaVersion": "1.0"
+ "schemaVersion": "2.0"
} ```
Sample JSON for detections output by this operation.
| `type` | string| Type of region| | `points` | collection| Top left and bottom right points when the region type is RECTANGLE | | `confidence` | float| Algorithm confidence|
-| `centerGroundPoint` | 2 float values| `x`, `y` values with the coordinates of the person's inferred location on the ground in feet. `x` and `y` are coordinates on the floor plane, assuming the floor is level. The camera's location is the origin. |
+| `centerGroundPointX/centerGroundPointY` | 2 float values| `x`, `y` values with the coordinates of the person's inferred location on the ground in feet. `x` and `y` are coordinates on the floor plane, assuming the floor is level. The camera's location is the origin. |
When calculating `centerGroundPoint`, `x` is the distance from the camera to the person along a line perpendicular to the camera image plane. `y` is the distance from the camera to the person along a line parallel to the camera image plane. ![Example center ground point](./media/spatial-analysis/x-y-chart.png)
-In this example, `centerGroundPoint` is `{x: 4, y: 5}`. This means there's a person 4 feet away from the camera and 5 feet to the right, looking at the room top-down.
+In this example, `centerGroundPoint` is `{centerGroundPointX: 4, centerGroundPointY: 5}`. This means there's a person four feet away from the camera and five feet to the right, looking at the room top-down.
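In the detection output, these values appear as string fields inside `metadata`, as in the samples earlier in this article; a snippet matching this worked example (values shown as strings, per those samples) would look like:

```json
"metadata": {
    "centerGroundPointX": "4.0",
    "centerGroundPointY": "5.0"
}
```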
| SourceInfo Field Name | Type| Description|
cognitive-services Add Sharepoint Datasources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/add-sharepoint-datasources.md
description: Add secured SharePoint data sources to your knowledge base to enric
Previously updated : 02/20/2020 Last updated : 01/25/2022 # Add a secured SharePoint data source to your knowledge base
If the QnA Maker knowledge base manager is not the Active Directory manager, you
## Add supported file types to knowledge base
-You can add all QnA Maker-supported [file types](../index.yml) from a SharePoint site to your knowledge base. You may have to grant [permissions](#permissions) if the file resource is secured.
+You can add all QnA Maker-supported [file types](https://docs.microsoft.com/azure/cognitive-services/qnamaker/concepts/data-sources-and-content#file-and-url-data-types) from a SharePoint site to your knowledge base. You may have to grant [permissions](#permissions) if the file resource is secured.
1. From the library with the SharePoint site, select the file's ellipsis menu, `...`. 1. Copy the file's URL.
Use the **@microsoft.graph.downloadUrl** from the previous section as the `fileu
## Next steps > [!div class="nextstepaction"]
-> [Collaborate on your knowledge base](../index.yml)
+> [Collaborate on your knowledge base](https://docs.microsoft.com/azure/cognitive-services/qnamaker/concepts/data-sources-and-content#file-and-url-data-types)
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/batch-transcription.md
description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure Blobs. By using the dedicated REST API, you can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcriptions. ++ Previously updated : 06/17/2021 Last updated : 01/23/2022 ms.devlang: csharp
This sample code doesn't specify a custom model. The service uses the baseline m
## Next steps -- [Speech to text v3 API reference](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)
+> [!div class="nextstepaction"]
+> [Speech to text v3.0 API reference](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription)
cognitive-services Call Center Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/call-center-transcription.md
description: A common scenario for speech-to-text is transcribing large volumes
- Previously updated : 07/05/2019 Last updated : 01/23/2022
Our new voices are also indistinguishable from human voices. You can use our voi
### Search
-Another staple of analytics is to identify interactions where a specific event or experience has occurred. This is typically done with one of two approaches; either an ad hoc search where the user simply types a phrase and the system responds, or a more structured query where an analyst can create a set of logical statements that identify a scenario in a call, and then each call can be indexed against that set of queries. A good search example is the ubiquitous compliance statement "this call shall be recorded for quality purposes... ". Many companies want to make sure that their agents are providing this disclaimer to customers before the call is actually recorded. Most analytics systems have the ability to trend the behaviors found by query/search algorithms, and this reporting of trends is ultimately one of the most important functions of an analytics system. Through [Cognitive services directory](https://azure.microsoft.com/services/cognitive-services/directory/search/) your end to end solution can be significantly enhanced with indexing and search capabilities.
+Another staple of analytics is to identify interactions where a specific event or experience has occurred. This is typically done with one of two approaches: either an ad hoc search where the user simply types a phrase and the system responds, or a more structured query where an analyst can create a set of logical statements that identify a scenario in a call, and then each call can be indexed against that set of queries. A good search example is the ubiquitous compliance statement "this call shall be recorded for quality purposes...". Many companies want to make sure that their agents are providing this disclaimer to customers before the call is actually recorded. Most analytics systems have the ability to trend the behaviors found by query/search algorithms, and this reporting of trends is ultimately one of the most important functions of an analytics system. Through the [Cognitive services directory](https://azure.microsoft.com/services/cognitive-services/directory/search/), your end-to-end solution can be significantly enhanced with indexing and search capabilities.
### Key Phrase Extraction
The Speech service can be easily integrated in any solution by using either the
Several IVR or telephony service products (such as Genesys or AudioCodes) offer integration capabilities that can be leveraged to enable inbound and outbound audio pass-through to an Azure service. Basically, a custom Azure service might provide a specific interface to define phone call sessions (such as Call Start or Call End) and expose a WebSocket API to receive inbound stream audio that is used with the Speech service. Outbound responses, such as conversation transcription or connections with the Bot Framework, can be synthesized with Microsoft's text-to-speech service and returned to the IVR for playback.
-Another scenario is direct integration with Session Initiation Protocol (SIP). An Azure service connects to a SIP Server, thus getting an inbound stream and an outbound stream, which is used for the speech-to-text and text-to-speech phases. To connect to a SIP Server there are commercial software offerings, such as Ozeki SDK, or [the Teams calling and meetings API](/graph/api/resources/communications-api-overview) (currently in beta), that are designed to support this type of scenario for audio calls.
+Another scenario is direct integration with Session Initiation Protocol (SIP). An Azure service connects to a SIP Server, thus getting an inbound stream and an outbound stream, which is used for the speech-to-text and text-to-speech phases. To connect to a SIP Server there are commercial software offerings, such as Ozeki SDK, or the [Microsoft Graph communications API](/graph/api/resources/communications-api-overview), that are designed to support this type of scenario for audio calls.
## Customize existing experiences
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/conversation-transcription.md
Previously updated : 03/26/2021 Last updated : 01/23/2022 # What is Conversation Transcription (Preview)?
-Conversation Transcription is a [speech-to-text](speech-to-text.md) solution that combines speech recognition, speaker identification, and sentence attribution to each speaker (also known as _diarization_) to provide real-time and/or asynchronous transcription of any conversation. Conversation Transcription distinguishes speakers in a conversation to determine who said what and when, and makes it easy for developers to add speech-to-text to their applications that perform multi-speaker diarization.
+Conversation Transcription is a [speech-to-text](speech-to-text.md) solution that provides real-time or asynchronous transcription of any conversation. Conversation Transcription combines speech recognition, speaker identification, and sentence attribution to determine who said what and when in a conversation.
## Key features -- **Timestamps** - each speaker utterance has a timestamp, so that you can easily find when a phrase was said.-- **Readable transcripts** - transcripts have formatting and punctuation added automatically to ensure the text closely matches what was being said.-- **User profiles** - user profiles are generated by collecting user voice samples and sending them to signature generation.-- **Speaker identification** - speakers are identified using user profiles and a _speaker identifier_ is assigned to each.-- **Multi-speaker diarization** - determine who said what by synthesizing the audio stream with each speaker identifier.-- **Real-time transcription** ΓÇô provide live transcripts of who is saying what and when while the conversation is happening.-- **asynchronous transcription** ΓÇô provide transcripts with higher accuracy by using a multichannel audio stream.
+- **Timestamps** - Each speaker utterance has a timestamp, so that you can easily find when a phrase was said.
+- **Readable transcripts** - Transcripts have formatting and punctuation added automatically to ensure the text closely matches what was being said.
+- **User profiles** - User profiles are generated by collecting user voice samples and sending them to signature generation.
+- **Speaker identification** - Speakers are identified using user profiles and a _speaker identifier_ is assigned to each.
+- **Multi-speaker diarization** - Determine who said what by synthesizing the audio stream with each speaker identifier.
+- **Real-time transcription** - Provide live transcripts of who is saying what and when while the conversation is happening.
+- **Asynchronous transcription** - Provide transcripts with higher accuracy by using a multichannel audio stream.
> [!NOTE] > Although Conversation Transcription does not put a limit on the number of speakers in the room, it is optimized for 2-10 speakers per session.
This is a high-level overview of how Conversation Transcription works.
- **Multi-channel audio stream** - For specification and design details, see [Microphone array recommendations](./speech-sdk-microphone.md).
- **User voice samples** - Conversation Transcription needs user profiles in advance of the conversation for speaker identification. You will need to collect audio recordings from each user, then send the recordings to the [Signature Generation Service](https://aka.ms/cts/signaturegenservice) to validate the audio and generate user profiles.
-> [!NOTE]
-> User voice samples for voice signatures are required for speaker identification. Speakers who do not have voice samples will be recognized as "Unidentified". Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see example below). The transcription output will then show speakers as "Guest_0", "Guest_1", etc. instead of recognizing as pre-enrolled specific speaker names.
-> ```csharp
-> config.SetProperty("DifferentiateGuestSpeakers", "true");
-> ```
-
+User voice samples for voice signatures are required for speaker identification. Speakers who do not have voice samples will be recognized as "Unidentified". Unidentified speakers can still be differentiated when the `DifferentiateGuestSpeakers` property is enabled (see example below). The transcription output will then show speakers as "Guest_0", "Guest_1", etc. instead of recognizing as pre-enrolled specific speaker names.
+```csharp
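// Differentiate speakers without enrolled voice signatures; the transcription output labels them Guest_0, Guest_1, and so on.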
+config.SetProperty("DifferentiateGuestSpeakers", "true");
+```
## Real-time vs. asynchronous
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Previously updated : 05/18/2021 Last updated : 01/23/2022
To learn how to use Custom Neural Voice responsibly, see the [transparency note]
## Next steps
-* [Get started with Custom Neural Voice](how-to-custom-voice.md)
+> [!div class="nextstepaction"]
+> [Get started with Custom Neural Voice](how-to-custom-voice.md)
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Previously updated : 10/08/2021 Last updated : 01/23/2022
Older models typically become less useful over time because the newest model usu
## Next steps * [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
-* [Inspect your data](how-to-custom-speech-inspect-data.md)
* [Evaluate and improve model accuracy](how-to-custom-speech-evaluate-data.md) * [Train and deploy a model](how-to-custom-speech-train-model.md)
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
Previously updated : 01/31/2020 Last updated : 01/23/2022 # Improve synthesis with the Audio Content Creation tool
-[Audio Content Creation](https://aka.ms/audiocontentcreation) is an easy-to-use and powerful tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can fine-tune Text-to-Speech voices and design customized audio experiences in an efficient and low-cost way.
+[Audio Content Creation](https://aka.ms/audiocontentcreation) is an easy-to-use and powerful tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can fine-tune Text-to-Speech voices and design customized audio experiences in an efficient and low-cost way.
The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust Text-to-Speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
It takes a few moments to deploy your new Speech resource. Once the deployment i
### Step 3 - Log into the Audio Content Creation with your Azure account and Speech resource
-1. After getting the Azure account and the Speech resource, you can log into [Audio Content Creation](https://aka.ms/audiocontentcreation) by clicking **Get started**.
-2. The home page lists all the products under Speech Studio. Click **Audio Content Creation** to start.
-3. The **Welcome to Speech Studio** page will appear to you to set up the speech service. Select the Azure subscription and the Speech resource you want to work on. Click **Use resource** to complete the settings. When you log into the Audio Content Creation tool for the Next time, we will link you directly to the audio work files under the current speech resource. You can check your Azure subscriptions details and status in [Azure portal](https://portal.azure.com/). If you do not have available speech resource and you are the owner or admin of an Azure subscription, you can also create a new Speech resource in Speech Studio by clicking **Create a new resource**. If you are a user role for a certain Azure subscription, you may not have the permission to create a new speech resource. Please contact your admin to get the speech resource access.
+1. After getting the Azure account and the Speech resource, you can log into [Audio Content Creation](https://aka.ms/audiocontentcreation) by selecting **Get started**.
+2. The home page lists all the products under Speech Studio. Select **Audio Content Creation** to start.
+3. The **Welcome to Speech Studio** page appears so that you can set up the speech service. Select the Azure subscription and the Speech resource you want to work on. Select **Use resource** to complete the settings. The next time you log into the Audio Content Creation tool, we will link you directly to the audio work files under the current speech resource. You can check your Azure subscription details and status in the [Azure portal](https://portal.azure.com/). If you do not have an available speech resource and you are the owner or admin of an Azure subscription, you can also create a new Speech resource in Speech Studio by selecting **Create a new resource**. If you have a user role for a certain Azure subscription, you may not have permission to create a new speech resource. Please contact your admin to get access to the speech resource.
4. You can modify your Speech resource at any time with the **Settings** option, located in the top nav.
5. If you want to switch directories, go to **Settings** or your profile.
This diagram shows the steps it takes to fine-tune Text-to-Speech outputs. Use t
> [!NOTE] > Gated access is available for Custom Neural Voice, which allows you to create high-definition voices similar to natural-sounding speech. For additional details, see [Gating process](./text-to-speech.md).
-4. Select the content you want to preview and click the **play** icon (a triangle) to preview the default synthesis output. Please note that if you make any changes on the text, you need to click the **Stop** icon and then click **play** icon again to re-generate the audio with changed scripts.
+4. Select the content you want to preview and select the **play** icon (a triangle) to preview the default synthesis output. Note that if you make any changes to the text, you need to select the **Stop** icon and then select the **play** icon again to regenerate the audio with the changed scripts.
5. Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md). Here is a [video](https://youtu.be/ygApYuOOG6w) to show how to fine-tune speech output with Audio Content Creation. 6. Save and [export your tuned audio](#export-tuned-audio). When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
There are two ways to get your content into the Audio Content Creation tool.
**Option 1:**
-1. Click **New** > **file** to create a new audio tuning file.
+1. Select **New** > **file** to create a new audio tuning file.
2. Type or paste your content into the editing window. Each file can contain up to 20,000 characters. If your script is longer than 20,000 characters, you can use Option 2 to automatically split your content into multiple files. 3. Don't forget to save. **Option 2:**
-1. Click **Upload** to import one or more text files. Both plain text and SSML are supported. If your script file is more than 20,000 characters, please split the file by paragraphs, by character or by regular expressions.
+1. Select **Upload** to import one or more text files. Both plain text and SSML are supported. If your script file is more than 20,000 characters, split the file by paragraph, by character, or by regular expressions.
2. When you upload your text files, make sure that the file meets these requirements.

| Property | Value / Notes |
Welcome to use Audio Content Creation to customize audio output for your product
After you've reviewed your audio output and are satisfied with your tuning and adjustment, you can export the audio.
-1. Click **Export** to create an audio creation task. **Export to Audio Library** is recommended as it supports the long audio output and the full audio output experience. You can also download the audio to your local disk directly, but only the first 10 minutes are available.
+1. Select **Export** to create an audio creation task. **Export to Audio Library** is recommended as it supports the long audio output and the full audio output experience. You can also download the audio to your local disk directly, but only the first 10 minutes are available.
2. Choose the output format for your tuned audio. A list of supported formats and sample rates is available below. 3. You can view the status of the task on the **Export task** tab. If the task fails, see the detailed information page for a full report. 4. When the task is complete, your audio is available for download on the **Audio Library** tab.
-5. Click **Download**. Now you're ready to use your custom tuned audio in your apps or products.
+5. Select **Download**. Now you're ready to use your custom tuned audio in your apps or products.
**Supported audio formats**
The user needs to prepare a [Microsoft account](https://account.microsoft.com/acc
Follow these steps to add a user to a speech resource so they can use Audio Content Creation.
1. Search for **Cognitive services** in the [Azure portal](https://portal.azure.com/), and select the speech resource that you want to add users to.
-2. Click **Access control (IAM)**. Select **Add** > **Add role assignment (Preview)** to open the Add role assignment pane.
+2. Select **Access control (IAM)**. Select **Add** > **Add role assignment (Preview)** to open the Add role assignment pane.
1. On the **Role** tab, select the **Cognitive Service User** role. If you want to give the user ownership of this speech resource, you can select the **Owner** role.
1. On the **Members** tab, type in the user's email address and select the user in the directory. The email address must be a **Microsoft account**, which is trusted by Azure Active Directory. Users can easily sign up for a [Microsoft account](https://account.microsoft.com/account) using a personal email address.
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+1. The user will receive an email invitation. Accept the invitation by selecting **Accept invitation** > **Accept to join Azure** in the email. Then the user will be redirected to the Azure portal. The user does not need to take further action in the Azure portal. After a few moments, the user is assigned the role at the speech resource scope, and will have access to this speech resource. If the user didn't receive the invitation email, you can search for the user's account under "Role assignments" and go to the user's profile. Find "Identity" > "Invitation accepted", and select **(manage)** to resend the email invitation. You can also copy the invitation link to the users.
-1. The user now visits or refreshes the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page, and sign in with the user's Microsoft account. Select **Audio Content Creation** block among all speech products. Choose the speech resource in the pop-up window or in the settings at the upper right of the page. If the user cannot find available speech resource, check if you are in the right directory. To check the right directory, click the account profile in the upper right corner, and click **Switch** besides the "Current directory". If there are more than one directory available, it means you have access to multiple directories. Switch to different directories and go to settings to see if the right speech resource is available.
+1. The user will receive an email invitation. Accept the invitation by selecting **Accept invitation** > **Accept to join Azure** in the email. Then the user will be redirected to the Azure portal. The user does not need to take further action in the Azure portal. After a few moments, the user is assigned the role at the speech resource scope, and will have the access to this speech resource. If the user didn't receive the invitation email, you can search the user's account under "Role assignments" and go inside the user's profile. Find "Identity" -> "Invitation accepted", and select **(manage)** to resend the email invitation. You can also copy the invitation link to the users.
+1. The user now visits or refreshes the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page, and signs in with the user's Microsoft account. Select the **Audio Content Creation** block among all speech products. Choose the speech resource in the pop-up window or in the settings at the upper right of the page. If the user cannot find an available speech resource, check that you are in the right directory. To do so, select the account profile in the upper right corner, and select **Switch** beside "Current directory". If more than one directory is available, it means you have access to multiple directories. Switch between directories and go to settings to see if the right speech resource is available.
:::image type="content" source="media/audio-content-creation/add-role-first.png" alt-text="Add role dialog":::
Users who are in the same speech resource will see each other's work in Audio Co
### Remove users from a speech resource
1. Search for **Cognitive services** in the Azure portal, and select the speech resource that you want to remove users from.
-2. Click **Access control (IAM)**. Click the **Role assignments** tab to view all the role assignments for this speech resource.
-3. Select the users you want to remove, click **Remove** > **Ok**.
+2. Select **Access control (IAM)** > **Role assignments** tab to view all the role assignments for this speech resource.
+3. Select the users you want to remove, and then select **Remove** > **OK**.
:::image type="content" source="media/audio-content-creation/remove-user.png" alt-text="Remove button"::: ### Enable users to grant access
Users who are in the same speech resource will see each other's work in Audio Co
If you want one of the users to give access to other users, you need to give the user the owner role for the speech resource and set the user as the Azure directory reader. 1. Add the user as the owner of the speech resource. See [how to add users to a speech resource](#add-users-to-a-speech-resource). :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Role Owner field":::
-2. In the [Azure portal](https://portal.azure.com/), select the collapsed menu in the upper left. Click **Azure Active Directory**, and then Click **Users**.
-3. Search the user's Microsoft account, and go to the user's detail page. Click **Assigned roles**.
-4. Click **Add assignments** -> **Directory Readers**. If the button "Add assignments" is grayed out, it means that you do not have the access. Only the global administrator of this directory can add assignment to users.
-
-## See also
-
-* [Long Audio API](./long-audio-api.md)
+2. In the [Azure portal](https://portal.azure.com/), select the collapsed menu in the upper left. Select **Azure Active Directory**, and then select **Users**.
+3. Search the user's Microsoft account, and go to the user's detail page. Select **Assigned roles**.
+4. Select **Add assignments** > **Directory Readers**. If the button "Add assignments" is grayed out, it means that you do not have access. Only the global administrator of this directory can add assignments to users.
## Next steps
cognitive-services How To Automatic Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-automatic-language-detection.md
Previously updated : 05/21/2021 Last updated : 01/23/2022 zone_pivot_groups: programming-languages-speech-services-nomore-variant ms.devlang: cpp, csharp, java, javascript, objective-c, python
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Previously updated : 02/12/2021 Last updated : 01/23/2022 # Evaluate and improve Custom Speech accuracy
-In this article, you learn how to quantitatively measure and improve the accuracy of Microsoft's speech-to-text models or your own custom models. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
+In this article, you learn how to quantitatively measure and improve the accuracy of Microsoft speech-to-text models or your own custom models. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
## Evaluate Custom Speech accuracy
Here's an example:
![Example of incorrectly identified words](./media/custom-speech/custom-speech-dis-words.png)
-If you want to replicate WER measurements locally, you can use sclite from [SCTK](https://github.com/usnistgov/SCTK).
+If you want to replicate WER measurements locally, you can use `sclite` from the [NIST Scoring Toolkit (SCTK)](https://github.com/usnistgov/SCTK).
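For reference, the WER reported here is derived from the three error types shown above: WER = (I + D + S) / N × 100, where *I*, *D*, and *S* are the counts of insertion, deletion, and substitution errors, and *N* is the total number of words in the human-labeled transcript.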
## Resolve errors and improve WER
If you'd like to test the quality of Microsoft's speech-to-text baseline model o
To evaluate models side by side: 1. Sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-2. Navigate to **Speech-to-text > Custom Speech > [name of project] > Testing**.
-3. Click **Add Test**.
+2. Navigate to **Speech-to-text** > **Custom Speech** > [Project Name] > **Testing**.
+3. Select **Add Test**.
4. Select **Evaluate accuracy**. Give the test a name and description, and select your audio + human-labeled transcription dataset.
5. Select up to two models that you'd like to test.
-6. Click **Create**.
+6. Select **Create**.
After your test has been successfully created, you can compare the results side by side. ### Side-by-side comparison
-Once the test is complete, indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Click on the test name to view the testing detail page. This detail page lists all the utterances in your dataset, indicating the recognition results of the two models alongside the transcription from the submitted dataset. To help inspect the side-by-side comparison, you can toggle various error types including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which shows the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and where additional training and improvements are required.
+Once the test is complete, indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Select the test name to view the testing detail page. This detail page lists all the utterances in your dataset, indicating the recognition results of the two models alongside the transcription from the submitted dataset. To help inspect the side-by-side comparison, you can toggle various error types including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which shows the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and where additional training and improvements are required.
## Improve Custom Speech accuracy
Speech recognition scenarios vary by audio quality and language (vocabulary and
| Scenario | Audio Quality | Vocabulary | Speaking Style | |-|||-|
-| Call center | Low, 8 kHz, could be 2 humans on 1 audio channel, could be compressed | Narrow, unique to domain and products | Conversational, loosely structured |
+| Call center | Low, 8 kHz, could be two people on one audio channel, could be compressed | Narrow, unique to domain and products | Conversational, loosely structured |
| Voice assistant (such as Cortana, or a drive-through window) | High, 16 kHz | Entity heavy (song titles, products, locations) | Clearly stated words and phrases | | Dictation (instant message, notes, search) | High, 16 kHz | Varied | Note-taking | | Video closed captioning | Varied, including varied microphone use, added music | Varied, from meetings, recited speech, musical lyrics | Read, prepared, or loosely structured |
When you train a new custom model, start by adding plain text sentences of relat
### Add structured text data
-You can use structured text data in markdown format similarly to plain text sentences, but you would use structured text data when your data follows a particular pattern in particular utterances that only differ by words or phrases from a list. See [Structured text data for training](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview) for more information.
+You can use structured text data in markdown format similarly to plain text sentences, but you would use structured text data when your data follows a particular pattern in particular utterances that only differ by words or phrases from a list. For more information, see [Structured text data for training](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview).
> [!NOTE] > Training with structured text is only supported for these locales: `en-US`, `de-DE`, `en-UK`, `en-IN`, `fr-FR`, `fr-CA`, `es-ES`, `es-MX` and you must use the latest base model for these locales. See [Language support](language-support.md) for a list of base models that support training with structured text data.
Consider these details:
* Custom Speech can only capture word context to reduce substitution errors, not insertion or deletion errors.
* Avoid samples that include transcription errors, but do include a diversity of audio quality.
* Avoid sentences that are not related to your problem domain. Unrelated sentences can harm your model.
-* When the quality of transcripts vary, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
+* When the quality of transcripts varies, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
* The Speech service will automatically use the transcripts to improve the recognition of domain-specific words and phrases, as if they were added as related text.
* It can take several days for a training operation to complete. To improve the speed of training, make sure to create your Speech service subscription in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Previously updated : 11/09/2021 Last updated : 01/23/2022
When you're testing the accuracy of Microsoft speech recognition or training you
Text and audio that you use to test and train a custom model need to include samples from a diverse set of speakers and scenarios that you want your model to recognize. Consider these factors when you're gathering data for custom model testing and training:
-* Your text and speech audio data needs to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
-* Your data needs to include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
-* You must include samples from different environments (indoor, outdoor, road noise) where your model will be used.
-* You must gather audio by using hardware devices that the production system will use. If your model needs to identify speech recorded on recording devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
-* You can add more data to your model later, but take care to keep the dataset diverse and representative of your project needs.
-* Including data that's *not* within your custom model's recognition needs can harm recognition quality overall. Include only data that your model needs to transcribe.
+* Include text and audio data to cover the kinds of verbal statements that your users will make when they're interacting with your model. For example, a model that raises and lowers the temperature needs training on statements that people might make to request such changes.
+* Include all speech variances that you want your model to recognize. Many factors can vary speech, including accents, dialects, language-mixing, age, gender, voice pitch, stress level, and time of day.
+* Include samples from the different environments where your model will be used, for example, indoor, outdoor, and road noise.
+* Record audio with hardware devices that the production system will use. If your model needs to identify speech recorded on devices of varying quality, the audio data that you provide to train your model must also represent these diverse scenarios.
+* Keep the dataset diverse and representative of your project needs. You can add more data to your model later.
+* Only include data that your model needs to transcribe. Including data that isn't within your custom model's recognition needs can harm recognition quality overall.
A model that's trained on a subset of scenarios can perform well in only those scenarios. Carefully choose data that represents the full scope of scenarios that you need your custom model to recognize.

> [!TIP]
-> Start with small sets of sample data that match the language and acoustics that your model will encounter. For example, record a small but representative sample of audio on the same hardware and in the same acoustic environment that your model will find in production scenarios. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training.
+> Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training.
>
> To quickly get started, consider using sample data. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_blank" rel="noopener">this GitHub repository</a>.
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
* [Inspect your data](how-to-custom-speech-inspect-data.md)
* [Evaluate your data](how-to-custom-speech-evaluate-data.md)
* [Train a custom model](how-to-custom-speech-train-model.md)
-* [Deploy a model](./how-to-custom-speech-train-model.md)
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Previously updated : 02/12/2021 Last updated : 01/23/2022
The **Training** table displays a new entry that corresponds to the new model. T
See the [how-to](how-to-custom-speech-evaluate-data.md) on evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model to get a realistic sense of the model's performance.

> [!NOTE]
-> Both base models and custom models can be used only up to a certain date (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date request to an endpoint or to batch transcription might fail or fall back to base model.
+> Base models and custom models can be used up to a certain date, as described in [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date, requests to an endpoint or to batch transcription might fail or fall back to the base model.
>
> Retrain your model by using the most recent base model to benefit from accuracy improvements and to avoid letting your model expire.
Logging data is available for export if you go to the endpoint's page under **De
> [!NOTE]
> Logging data is available for 30 days on Microsoft-owned storage, and is removed afterwards. If a customer-owned storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
-## Next steps
-
-* [Learn how to use your custom model](how-to-specify-source-language.md)
-
## Additional resources
-- [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
+- [Learn how to use your custom model](how-to-specify-source-language.md)
- [Inspect your data](how-to-custom-speech-inspect-data.md)
- [Evaluate your data](how-to-custom-speech-evaluate-data.md)
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Previously updated : 11/04/2019 Last updated : 01/23/2022 # Create and use your voice model
-In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice and the different format requirements. Once you've prepared your data and the voice talent verbal statement, you can start to upload them to the [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal. See the [supported languages](language-support.md#custom-neural-voice) for Custom Neural Voice.
+In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice and the different format requirements. Once you've prepared your data and the voice talent verbal statement, you can start to upload them to the [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal. See the [supported languages](language-support.md#custom-neural-voice) for custom neural voice.
## Prerequisites
-* Complete [get started with Custom Neural Voice](how-to-custom-voice.md)
+* [Create a custom voice project](how-to-custom-voice.md)
* [Prepare training data](how-to-custom-voice-prepare-data.md)

## Set up voice talent
To train a neural voice, you must create a voice talent profile with an audio fi
:::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Upload voice talent statement":::

> [!NOTE]
-> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
+> Custom neural voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
The following steps assume you've prepared the voice talent verbal consent files. Go to [Speech Studio](https://aka.ms/custom-voice-portal) to select a custom neural voice project, and then follow these steps to create a voice talent profile.
When you're ready to upload your data, go to the **Prepare training data** tab t
Follow these steps to create and review your training data:
-1. On the **Prepare training data** tab, select **Add training set** to enter **Name** and **Description** > **Create** to add a new training set.
+1. Select **Prepare training data** > **Add training set**.
+1. Enter **Name** and **Description**, and then select **Create** to add a new training set.
When the training set is successfully created, you can start to upload your data.
-2. To upload data, select **Upload data** > **Choose data type** > **Upload data** and **Specify the target training set** > Enter **Name** and **Description** for your data > review the settings and select **Submit**.
+1. Select **Upload data** > **Choose data type** > **Upload data** > **Specify the target training set**.
+1. Enter **Name** and **Description** for your data, review the settings, and then select **Submit**.
> [!NOTE]
> - Duplicate audio names will be removed from the training. Make sure the data you select doesn't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they'll be rejected.
> - If you've created data files in the previous version of Speech Studio, you must specify a training set for your data in advance to use them. Otherwise, an exclamation mark will be appended to the data name, and the data can't be used.
-Each data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Custom Neural Voice service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md) and make sure your data has been rightly formatted.
+All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md) and make sure your data is correctly formatted.
> [!NOTE]
> - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
Each data you upload must meet the requirements for the data type that you choos
Data files are automatically validated once you select the **Submit** button. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. Fix any errors and submit again.
-Once the data is uploaded, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
+Once the data is uploaded, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data. The pronunciation score ranges from 0-100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice.
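For context, the SNR reported here is the usual decibel ratio of signal power to noise power:

```latex
\mathrm{SNR} = 10\,\log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right)\ \text{dB}
```

On that scale, a 35 dB recording carries roughly 3,000 times more signal power than noise power, while 20 dB corresponds to only a factor of 100.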
Consider re-recording any utterances with low pronunciation scores or poor signa
On the **Data details** tab, you can check the details of the training set. If there are any typical issues with the data, follow the instructions in the message displayed to fix them before training.
-The issues are divided into three types. Referring to the following three tables to check the respective types of errors.
-
-Manually fix the first type of errors listed in the table below, otherwise the data with these errors will be excluded during training.
+The issues are divided into three types. Refer to the following three tables to check the respective types of errors. Data with the errors listed in the first table will be excluded during training.
| Category | Name | Description |
| -------- | ---- | ----------- |
Manually fix the first type of errors listed in the table below, otherwise the d
| Audio | Too long audio | Audio duration is longer than 30 seconds. Split the long audio into multiple files. We suggest utterances should be shorter than 15 seconds. |
| Audio | No valid audio | No valid audio is found in this dataset. Check your audio data and upload again. |
-The second type of errors listed in the table below will be automatically fixed, but double checking the fixed data is recommended.
+The second type of errors listed in the next table will be automatically fixed, but double-checking the fixed data is recommended.
| Category | Name | Description |
| -------- | ---- | ----------- |
| Mismatch | Silence auto fixed | The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
| Mismatch | Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
-If the third type of errors listed in the table below aren't fixed, although the data with these errors won't be excluded during training, it will affect the quality of training. For higher-quality training, manually fixing these errors is recommended.
+Unresolved errors listed in the next table will affect the quality of training. However, the data with these errors won't be excluded during training. For higher-quality training, manually fixing these errors is recommended.
| Category | Name | Description |
| -------- | ---- | ----------- |
-| Script | Non-normalized text|This script contains digit 0-9. Expand them to normalized words and match with the audio. For example, normalize '123' to 'one hundred and twenty-three'.|
-| Script | Non-normalized text|This script contains symbols {}. Normalize the symbols to match the audio. For example, '50%' to 'fifty percent'.|
+| Script | Non-normalized text | This script contains digits 0-9. Expand them to normalized words and match with the audio. For example, normalize "123" to "one hundred and twenty-three". |
+| Script | Non-normalized text | This script contains symbols {}. Normalize the symbols to match the audio. For example, normalize "50%" to "fifty percent". |
| Script | Not enough question utterances | At least 10% of the total utterances should be question sentences. This helps the voice model properly express a questioning tone. |
| Script | Not enough exclamation utterances | At least 10% of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone. |
| Audio | Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files should be 24 kHz or higher for creating neural voices. It will be automatically upsampled to 24 kHz if it's lower. |
After your data files have been validated, you can use them to build your custom
By default, your voice model is trained in the same language as your training data. You can also choose to create a secondary language (preview) for your voice model. Check the languages supported for custom neural voice and the cross-lingual feature: [language for custom neural voice](language-support.md#custom-neural-voice).
-Training of custom neural voices isn't free. Check the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for details. However, if you have statistical parametric or concatenative voice models deployed before 3/31/2021 with S0 Speech resources, free neural training credits are offered to your Azure subscription, and you can train 5 different versions of neural voices for free.
+Training of custom neural voices isn't free. Check the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for details. However, if you have statistical parametric or concatenative voice models deployed before March 31, 2021 with S0 Speech resources, free neural training credits are offered to your Azure subscription, and you can train 5 different versions of neural voices for free.
3. Next, choose the data you want to use for training, and specify a speaker file.

> [!NOTE]
> - You need to select at least 300 utterances to create a custom neural voice.
->- To train a neural voice, you must specify a voice talent profile with the audio consent file provided of the voice talent acknowledging to use his/her speech data to train a custom neural voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access here](https://aka.ms/customneural).
+>- To train a neural voice, you must specify a voice talent profile with an audio consent file in which the voice talent acknowledges the use of their speech data to train a custom neural voice model. Custom neural voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply for access](https://aka.ms/customneural).
4. Then, choose your test script.
After your voice model is successfully built, you can use the generated sample a
The quality of the voice depends on many factors, including the size of the training data, the quality of the recording, the accuracy of the transcript file, how well the recorded voice in the training data matches the personality of the designed voice for your intended use case, and more. [Check here to learn more about the capabilities and limits of our technology and the best practices to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).

> [!NOTE]
-> Custom neural voice training is only available in the three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from the three regions to other different regions. Check the regions supported for Custom Neural Voice: [regions for Custom Neural Voice](regions.md#text-to-speech).
+> Custom neural voice training is only available in three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from these three regions to other regions. Check the regions supported for custom neural voice: [regions for custom neural voice](regions.md#text-to-speech).
## Create and use a custom neural voice endpoint
After you've successfully created and tested your voice model, you deploy it in
Follow these steps to create a custom neural voice endpoint:
-1. On the **Deploy model** tab, select **Deploy model**.
-2. Next, enter a **Name** and **Description** for your custom endpoint.
-3. Then, select a voice model you would like to associate with this endpoint.
-4. Finally, select **Deploy** to create your endpoint.
+1. Select **Deploy model** > **Deploy model**.
+1. Enter a **Name** and **Description** for your custom endpoint.
+1. Select a voice model you would like to associate with this endpoint.
+1. Select **Deploy** to create your endpoint.
-After you've clicked the **Deploy** button, in the endpoint table, you'll see an entry for your new endpoint. It may take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+After you select **Deploy**, you'll see an entry for your new endpoint in the endpoint table. It may take a few minutes for the Speech service to deploy the new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
You can **Suspend** and **Resume** your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL will be kept the same so you don't need to change your code in your apps.
You can also update the endpoint to a new model. To change the model, make sure
After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
-The custom endpoint is functionally identical to the standard endpoint that's used for Text-to-Speech requests. For more information, see [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
+The custom endpoint is functionally identical to the standard endpoint that's used for Text-to-Speech requests. For more information, see [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
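As a minimal sketch of that equivalence, the following C# shows how a custom endpoint might be consumed with the Speech SDK. The key, region, endpoint ID, and voice name are placeholders you'd copy from your endpoint's detail page; this is an illustration, not this article's own sample:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        // Placeholder values: copy the real key, region, endpoint ID, and voice
        // name from the endpoint's detail page in Speech Studio.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        config.EndpointId = "YourEndpointId";                    // the custom neural voice endpoint
        config.SpeechSynthesisVoiceName = "YourCustomVoiceName"; // the deployed voice model's name

        using var synthesizer = new SpeechSynthesizer(config);
        var result = await synthesizer.SpeakTextAsync("Hello from my custom neural voice.");
        Console.WriteLine(result.Reason); // SynthesizingAudioCompleted on success
    }
}
```

Apart from setting `EndpointId`, the synthesis call is the same as for a prebuilt voice.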
We also provide an online tool, [Audio Content Creation](https://speech.microsoft.com/audiocontentcreation), that allows you to fine-tune your audio output using a friendly UI.

## Next steps

- [How to record voice samples](record-custom-voice-samples.md)
-- [Text-to-Speech API reference](rest-text-to-speech.md)
- [Long Audio API](long-audio-api.md)
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Previously updated : 05/18/2021 Last updated : 01/23/2022
A Speech service subscription is required before you can use Custom Neural Voice
Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription.

1. Get your Speech service subscription key from the Azure portal.
-2. Sign in to [Speech Studio](https://speech.microsoft.com), then click **Custom Voice**.
+2. Sign in to [Speech Studio](https://speech.microsoft.com), then select **Custom Voice**.
3. Select your subscription and create a speech project.
4. If you'd like to switch to another Speech subscription, use the cog icon located in the top navigation.
Once you've created an Azure account and a Speech service subscription, you'll n
Content like data, models, tests, and endpoints are organized into **Projects** in Speech Studio. Each project is specific to a country/language and the gender of the voice you want to create. For example, you may create a project for a female voice for your call center's chat bots that use English in the United States ('en-US').
-To create your first project, select the **Text-to-Speech/Custom Voice** tab, then click **Create project**. Follow the instructions provided by the wizard to create your project. After you've created a project, you will see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. Use the links provided in [next steps](#next-steps) to learn how to use each tab.
+To create a custom voice project:
+1. Sign in to [Speech Studio](https://speech.microsoft.com).
+1. Select **Text-to-Speech** > **Custom Voice** > **Create project**.
+1. Follow the instructions provided by the wizard to create your project.
+1. After you've created a project, you will see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md) to set up voice talent and proceed to training data.
## Tips for creating a custom neural voice
Once the recordings are ready, follow [Prepare training data](how-to-custom-voic
### Training
-Once you have prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. You need to select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high quality voice models, you should fix the errors and submit again.
+Once you have prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. You need to select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix the errors and submit again.
### Testing
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
Title: How to use speech SDK for pronunciation assessment
+ Title: How to use pronunciation assessment
description: The Speech SDK supports pronunciation assessment, which assesses the pronunciation quality of speech input, with indicators of accuracy, fluency, completeness, etc.
Previously updated : 01/12/2021 Last updated : 01/23/2022 ms.devlang: cpp, csharp, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Pronunciation assessment
-Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. Educators can use the capability to evaluate pronunciation of multiple speakers in real-time.
+Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. Educators can use the capability to evaluate pronunciation of multiple speakers in real time.
In this article, you'll learn how to set up `PronunciationAssessmentConfig` and retrieve the `PronunciationAssessmentResult` using the Speech SDK.
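As a rough C# sketch of that flow, with placeholder key, region, reference text, and audio file (the enum values and score properties come from the SDK; everything else is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main()
    {
        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var audioConfig = AudioConfig.FromWavFileInput("goodmorning.wav"); // illustrative input

        var pronConfig = new PronunciationAssessmentConfig(
            "Good morning.",           // reference text the speaker should read
            GradingSystem.HundredMark, // score on a 0-100 scale
            Granularity.Phoneme,       // return word- and phoneme-level detail
            true);                     // enable miscue (omission/insertion) calculation

        using var recognizer = new SpeechRecognizer(speechConfig, "en-US", audioConfig);
        pronConfig.ApplyTo(recognizer);

        var result = await recognizer.RecognizeOnceAsync();
        var assessment = PronunciationAssessmentResult.FromResult(result);
        Console.WriteLine($"Accuracy: {assessment.AccuracyScore}, Fluency: {assessment.FluencyScore}");
    }
}
```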
This table lists the result parameters of pronunciation assessment.
| Parameter | Description |
|-----------|-------------|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Word and full text level accuracy scores are aggregated from phoneme level accuracy score. |
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Word and full text accuracy scores are aggregated from phoneme-level accuracy score. |
| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |
| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
A typical pronunciation assessment result in JSON:
::: zone pivot="programming-language-objectivec"
* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L642) on GitHub for pronunciation assessment.
::: zone-end
-
-* [Speech SDK reference documentation](speech-sdk.md)
-
-* [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Previously updated : 03/03/2021 Last updated : 01/23/2022 ms.devlang: cpp, csharp, java, javascript, python
zone_pivot_groups: programming-languages-speech-services-nomore-variant
> [!NOTE]
> Viseme events are only available for `en-US` English (United States) [neural voices](language-support.md#text-to-speech) for now.
-A _viseme_ is the visual description of a phoneme in spoken language.
-It defines the position of the face and mouth when speaking a word.
-Each viseme depicts the key facial poses for a specific set of phonemes.
-Viseme can be used to control the movement of 2D and 3D avatar models, perfectly matching mouth movements to synthetic speech.
+A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when speaking a word. Each viseme depicts the key facial poses for a specific set of phonemes.
-Visemes make avatars easier to use and control. Using visemes, you can:
+Visemes can be used to control the movement of 2D and 3D avatar models, perfectly matching mouth movements to synthetic speech. For example, you can:
- * Create an **animated virtual voice assistant** for intelligent kiosks, building multi-mode integrated services for your customers.
- * Build **immersive news broadcasts** and improve audience experiences with natural face and mouth movements.
- * Generate more **interactive gaming avatars and cartoon characters** that can speak with dynamic content.
- * Make more **effective language teaching videos** that help language learners to understand the mouth behavior of each word and phoneme.
- * People with hearing impairment can also pick up sounds visually and **"lip-read"** speech content that shows visemes on an animated face.
+ * Create an animated virtual voice assistant for intelligent kiosks, building multi-mode integrated services for your customers.
+ * Build immersive news broadcasts and improve audience experiences with natural face and mouth movements.
+ * Generate more interactive gaming avatars and cartoon characters that can speak with dynamic content.
+ * Make more effective language teaching videos that help language learners to understand the mouth behavior of each word and phoneme.
+ * People with hearing impairment can also pick up sounds visually and "lip-read" speech content that shows visemes on an animated face.
See the [introduction video](https://youtu.be/ui9XT47uwxs) for visemes.

> [!VIDEO https://www.youtube.com/embed/ui9XT47uwxs]
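To make the mechanics concrete, here's a hedged C# sketch of subscribing to viseme events during synthesis; the key, region, and text are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        config.SpeechSynthesisVoiceName = "en-US-JennyNeural"; // viseme events require an en-US neural voice

        using var synthesizer = new SpeechSynthesizer(config);
        synthesizer.VisemeReceived += (s, e) =>
        {
            // AudioOffset is in ticks (100 ns); divide by 10,000 for milliseconds.
            Console.WriteLine($"Viseme {e.VisemeId} at {e.AudioOffset / 10000} ms");
        };

        await synthesizer.SpeakTextAsync("Hello world");
    }
}
```

Each event's viseme ID maps to a facial pose, so an avatar can be keyed to the audio timeline.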
Visemes vary by language. Each language has a set of visemes that correspond to
## Next steps
-* [Speech SDK reference documentation](speech-sdk.md)
+> [!div class="nextstepaction"]
+> [SSML phonetic alphabets](speech-ssml-phonetic-sets.md)
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
Title: Real-time Conversation Transcription quickstart - Speech service
-description: Learn how to use real-time Conversation Transcription with the Speech SDK. Conversation Transcription allows you to transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service.
+description: Learn how to use real-time Conversation Transcription with the REST API and SDK. Conversation Transcription allows you to transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service.
Previously updated : 10/20/2020 Last updated : 01/24/2022 zone_pivot_groups: acs-js-csharp ms.devlang: csharp, javascript
# Get started with real-time Conversation Transcription
-The Speech SDK's **ConversationTranscriber** API allows you to transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service using `PullStream` or `PushStream`. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the SDK to transcribe conversations. See the Conversation Transcription [overview](conversation-transcription.md) for more information.
+You can transcribe meetings and other conversations with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe conversations. See the Conversation Transcription [overview](conversation-transcription.md) for more information.
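In outline, the SDK side might look like the following C# sketch. The conversation ID, audio file, participant ID, and voice signature value are placeholders; the signature JSON comes from the REST call described in this article:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Transcription;

class Program
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Voice signature JSON returned by the signature-generation REST call (placeholder).
        var katiesVoiceSignature = "<voice signature JSON>";

        var conversation = await Conversation.CreateConversationAsync(config, "myConversationId");
        using var audioInput = AudioConfig.FromWavFileInput("meeting.wav"); // placeholder audio source
        using var transcriber = new ConversationTranscriber(audioInput);
        await transcriber.JoinConversationAsync(conversation);

        await conversation.AddParticipantAsync(Participant.From("katie@example.com", "en-US", katiesVoiceSignature));

        transcriber.Transcribed += (s, e) =>
            Console.WriteLine($"{e.Result.UserId}: {e.Result.Text}");

        await transcriber.StartTranscribingAsync();
        await Task.Delay(TimeSpan.FromSeconds(30)); // transcribe while audio streams in
        await transcriber.StopTranscribingAsync();
    }
}
```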
## Limitations
This article assumes that you have an Azure account and Speech service subscript
> [!div class="nextstepaction"]
> [Asynchronous Conversation Transcription](how-to-async-conversation-transcription.md)
-> [ROOBO device sample code](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK/blob/master/Samples/Java/Android/Speech%20Devices%20SDK%20Starter%20App/example/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdsdkstarterapp/ConversationTranscription.java)
-> [Azure Kinect Dev Kit sample code](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK/blob/master/Samples/Java/Windows_Linux/SampleDemo/src/com/microsoft/cognitiveservices/speech/samples/Cts.java)
+
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/long-audio-api.md
Previously updated : 08/11/2020 Last updated : 01/24/2022 # Long Audio API
-The Long Audio API provides asynchronous synthesis of long-form text to speech. For example: audio books, news articles and documents. There's no need to deploy a custom voice endpoint. Unlike the Text-to-speech API used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes. This makes it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
+The Long Audio API provides asynchronous synthesis of long-form text to speech. For example: audio books, news articles, and documents. There's no need to deploy a custom voice endpoint. Unlike the Text-to-speech API used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes. This makes it ideal for publishers and audio content platforms to create long audio content like audio books in a batch.
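As a hedged C# sketch of submitting a synthesis job: the host, route, and form-field names follow this API's documented samples but should be verified against the reference, and the key, region, voice, and script file are placeholders:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Assumed endpoint shape for the Long Audio API; confirm the region and route.
        var url = "https://YourServiceRegion.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSubscriptionKey");

        var scriptContent = new ByteArrayContent(await File.ReadAllBytesAsync("book.txt"));
        scriptContent.Headers.ContentType = new MediaTypeHeaderValue("text/plain");

        using var form = new MultipartFormDataContent
        {
            { new StringContent("my audiobook"), "displayname" },
            { new StringContent("en-US"), "locale" },
            { new StringContent("riff-24khz-16bit-mono-pcm"), "outputformat" },
            // "voices" is a JSON array of voice identities; the value here is illustrative.
            { new StringContent("[{\"voicename\": \"en-US-JennyNeural\"}]"), "voices" },
            { scriptContent, "script", "book.txt" },
        };

        var response = await client.PostAsync(url, form);
        Console.WriteLine(response.StatusCode); // 202 Accepted means the job was queued
    }
}
```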
## Workflow
cognitive-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md
Previously updated : 10/15/2020 Last updated : 01/24/2022 ms.devlang: cpp, csharp, golang, java, javascript, python
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Previously updated : 07/01/2021 Last updated : 01/24/2022 ms.devlang: csharp
Use REST API v3.0 to:
- Request the manifest of the models that you create, to set up on-premises containers.

REST API v3.0 includes such features as:
-
+- **Webhook notifications**: All running processes of the service support webhook notifications. REST API v3.0 provides the calls to enable you to register your webhooks where notifications are sent.
- **Updating models behind endpoints**
- **Model adaptation with multiple datasets**: Adapt a model by using multiple dataset combinations of acoustic, language, and pronunciation data.
- **Bring your own storage**: Use your own storage accounts for logs, transcription files, and other data.
Audio is sent in the body of the HTTP `POST` request. It must be in one of the f
| OGG | OPUS | 256 kbps | 16 kHz, mono |

> [!NOTE]
->The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) currently supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
+>The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
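To illustrate the SDK half of that note, here's a hedged C# sketch of feeding OGG/OPUS audio to a recognizer through a push stream; compressed input additionally requires GStreamer, as the linked article explains, and the key, region, and file name are placeholders:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Declare the container format of the incoming stream (OGG/OPUS here).
        var format = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.OGG_OPUS);
        using var pushStream = AudioInputStream.CreatePushStream(format);
        using var audioConfig = AudioConfig.FromStreamInput(pushStream);
        using var recognizer = new SpeechRecognizer(config, audioConfig);

        // Pump the compressed bytes into the push stream.
        using var file = File.OpenRead("speech.ogg");
        var buffer = new byte[4096];
        int read;
        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            pushStream.Write(buffer, read);
        }
        pushStream.Close();

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}
```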
### Pronunciation assessment parameters
var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
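For context around the line above, the `Pronunciation-Assessment` header is a base64-encoded JSON payload. A sketch of the full construction, with illustrative parameter values:

```csharp
using System;
using System.Text;

// Parameter values below are illustrative; see the parameter table for options.
var pronAssessmentParamsJson =
    "{\"ReferenceText\":\"Good morning.\"," +
    "\"GradingSystem\":\"HundredMark\"," +
    "\"Granularity\":\"FullText\"," +
    "\"Dimension\":\"Comprehensive\"}";

var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson);
var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);

// The header then accompanies the POST to the REST API for short audio:
// request.Headers.Add("Pronunciation-Assessment", pronAssessmentHeader);
```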
We strongly recommend streaming (chunked) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment). >[!NOTE]
-> The pronunciation assessment feature currently supports the `en-US` language, which is available on all [speech-to-text regions](regions.md#speech-to-text). Support for `en-GB` and `zh-CN` languages is under preview.
+> For more information, see [pronunciation assessment](how-to-pronunciation-assessment.md).
### Sample request
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
Previously updated : 07/01/2021 Last updated : 01/24/2022
ogg-48khz-16bit-mono-opus
### Request body
-The body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
-
-> [!NOTE]
-> If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8).
+If you're using a custom neural voice, the body of a request can be sent as plain text (ASCII or UTF-8). Otherwise, the body of each `POST` request is sent as [SSML](speech-synthesis-markup.md). SSML allows you to choose the voice and language of the synthesized speech that the text-to-speech feature returns. For a complete list of supported voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
### Sample request
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
Previously updated : 10/11/2021 Last updated : 01/24/2022 keywords: on-premises, Docker, container
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Previously updated : 09/10/2021 Last updated : 01/24/2022
This article contains a quick reference and the **detailed description** of Azur
## Quotas and Limits quick reference

Jump to [Text-to-Speech Quotas and limits](#text-to-speech-quotas-and-limits-per-speech-resource)

### Speech-to-Text Quotas and Limits per Speech resource
-In the tables below Parameters without "Adjustable" row are **not** adjustable for all price tiers.
+In the following tables, parameters without an "Adjustable" row aren't adjustable for any price tier.
#### Online Transcription

For usage with the [Speech SDK](speech-sdk.md) and/or the [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio).
Before requesting a quota increase (where applicable), ensure that it is necessa
To minimize issues related to throttling (Response Code 429), we recommend using the following techniques:
- Implement retry logic in your application (see the sketch after this list)
- Avoid sharp changes in the workload. Increase the workload gradually <br/>
-*Example.* Your application is using Text-to-Speech and your current workload is 5 TPS (transactions per second). The next second you increase the load to 20 TPS (that is four times more). The Service immediately starts scaling up to fulfill the new load, but likely it will not be able to do it within a second, so some of the requests will get Response Code 429.
-- Test different load increase patterns
- - See [Workload pattern example](#example-of-a-workload-pattern-best-practice)
-- Create additional Speech resources in the same or different Regions and distribute the workload among them using "Round Robin" technique. This is especially important for **Text-to-Speech TPS (transactions per second)** parameter, which is set as 200 per Speech Resource and cannot be adjusted
+*Example.* Your application is using text-to-speech and your current workload is 5 Transactions per Second (TPS). The next second you increase the load to 20 TPS (that is four times more). The Service immediately starts scaling up to fulfill the new load, but likely it will not be able to do it within a second, so some of the requests will get Response Code 429.
+- Test different load increase patterns. See the [workload pattern example](#example-of-a-workload-pattern-best-practice)
+- Create additional Speech resources in the same or different regions, and distribute the workload among them using the "Round Robin" technique. This is especially important for the text-to-speech Transactions per Second (TPS) parameter, which is set to 200 per Speech resource and cannot be adjusted.
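A minimal C# sketch of the retry-logic recommendation above: exponential backoff that honors the `Retry-After` header when the service returns Response Code 429. The request factory is a placeholder for whatever Speech REST call you're making:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class SpeechRetry
{
    // Retries on 429, preferring the service-suggested wait time when present.
    // A factory is used because an HttpRequestMessage can't be sent twice.
    public static async Task<HttpResponseMessage> SendWithRetryAsync(
        HttpClient client, Func<HttpRequestMessage> makeRequest, int maxAttempts = 5)
    {
        var backoff = TimeSpan.FromSeconds(1);
        for (var attempt = 1; ; attempt++)
        {
            var response = await client.SendAsync(makeRequest());
            if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
            {
                return response; // success, a non-throttling error, or out of attempts
            }

            var wait = response.Headers.RetryAfter?.Delta ?? backoff;
            await Task.Delay(wait);
            backoff = TimeSpan.FromSeconds(backoff.TotalSeconds * 2); // exponential backoff
        }
    }
}
```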
The next sections describe specific cases of adjusting quotas.<br/>
Jump to [Text-to-Speech: increasing concurrent request limit for custom neural voices](#text-to-speech-increasing-concurrent-request-limit-for-custom-neural-voices)
By default the number of concurrent requests is limited to 100 per Speech resour
> [!NOTE]
> If you use custom models, be aware that one Speech resource may be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has the default concurrent request limit (100) set at creation. If you need to adjust it, you need to make the adjustment for each custom endpoint **separately**. Note also that the concurrent request limit for the base model of a Speech resource has **no** effect on the custom endpoints associated with this resource.

- Increasing the Concurrent Request limit does **not** directly affect your costs. The Speech service uses a "Pay only for what you use" model. The limit defines how high the Service may scale before it starts to throttle your requests. Concurrent Request limits for **Base** and **Custom** models need to be adjusted **separately**.
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-overview.md
Previously updated : 05/07/2021 Last updated : 01/24/2022 # What is Speech Studio?
-[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Speech service in your applications. You create projects in Speech Studio using a no-code approach, and then reference the assets you create in your applications using the [Speech SDK](speech-sdk.md), [Speech CLI](spx-overview.md), or various REST APIs.
+[Speech Studio](https://speech.microsoft.com) is a set of UI-based tools for building and integrating features from Azure Cognitive Services Speech in your applications. You create projects in Speech Studio using a no-code approach, and then reference those assets in your applications using the [Speech SDK](speech-sdk.md), [Speech CLI](spx-overview.md), or REST APIs.
## Set up your Azure account
-You need to have an Azure account and add a Speech service resource before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and resource, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+You need to have an Azure account and add a Speech resource before you can use [Speech Studio](https://speech.microsoft.com). If you don't have an account and resource, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
After you create an Azure account and a Speech service resource:

1. Sign in to the [Speech Studio](https://speech.microsoft.com) with your Azure account.
-1. Select the Speech service resource you need to get started. (You can change the resources anytime in "Settings" in the top menu.)
+1. Select a Speech resource in your subscription. You can change the resources anytime in "Settings" in the top menu.
## Speech Studio features
The following Speech service features are available as project types in Speech S
* **Real-time speech-to-text**: Quickly test speech-to-text by dragging and dropping audio files without using any code. This is a demo tool for seeing how speech-to-text works on your audio samples, but see the [overview](speech-to-text.md) for speech-to-text to explore the full functionality that's available.
* **Custom Speech**: Custom Speech allows you to create speech recognition models that are tailored to specific vocabulary sets and styles of speaking. In contrast to using a base speech recognition model, Custom Speech models become part of your unique competitive advantage because they are not publicly accessible. See the [quickstart](how-to-custom-speech-test-and-train.md) to get started with uploading sample audio to create a Custom Speech model.
* **Pronunciation Assessment**: Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. Speech Studio provides a sandbox for testing this feature quickly with no code, but see the [how-to](how-to-pronunciation-assessment.md) article for using the feature with the Speech SDK in your applications.
-* **Voice Gallery**: Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and humanlike neural voices.
+* **Voice Gallery**: Build apps and services that speak naturally. Choose from more than 170 voices in over 70 languages and variants. Bring your scenarios to life with highly expressive and human-like neural voices.
* **Custom Voice**: Custom Voice allows you to create custom, one-of-a-kind voices for text-to-speech. You supply audio files and create matching transcriptions in Speech Studio, and then use the custom voices in your applications. See the [how-to](how-to-custom-voice-create-voice.md) article on creating and using custom voices via endpoints.
* **Audio Content Creation**: [Audio Content Creation](how-to-audio-content-creation.md) is an easy-to-use tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. Speech Studio allows you to export your created audio files to use in your applications.
* **Custom Keyword**: A Custom Keyword is a word or short phrase that allows your product to be voice-activated. You create a Custom Keyword in Speech Studio, and then generate a binary file to [use with the Speech SDK](custom-keyword-basics.md) in your applications.
The following Speech service features are available as project types in Speech S
## Next steps
-[Explore Speech Studio](https://speech.microsoft.com) and create a project.
----
+> [!div class="nextstepaction"]
+> [Explore Speech Studio](https://speech.microsoft.com)
cognitive-services Tutorial Voice Enable Your Bot Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md
Title: "Tutorial: Voice-enable your bot using Speech SDK - Speech service"
+ Title: "Tutorial: Voice-enable your bot - Speech service"
description: In this tutorial, you'll create an echo bot and configure a client app that lets you speak to your bot and hear it respond back to you.
Previously updated : 02/25/2020 Last updated : 01/24/2022 ms.devlang: csharp
-# Tutorial: Voice-enable your bot by using the Speech SDK
+# Tutorial: Voice-enable your bot
-You can use the Speech service in Azure Cognitive Services to voice-enable a chat bot.
+You can use Azure Cognitive Services Speech to voice-enable a chat bot.
-In this tutorial, you'll use the Microsoft Bot Framework to create a bot that repeats what you say to it. You'll deploy your bot to Azure and register it with the Bot Framework Direct Line Speech channel.
-Then, you'll configure a sample client app for Windows that lets you speak to your bot and hear it speak back to you.
+In this tutorial, you'll use the Microsoft Bot Framework to create a bot that responds to what you say. You'll deploy your bot to Azure and register it with the Bot Framework Direct Line Speech channel. Then, you'll configure a sample client app for Windows that lets you speak to your bot and hear it speak back to you.
-This tutorial is designed for developers who are new to Azure, Bot Framework bots, Direct Line Speech, or the Speech SDK, and want to quickly build a working system with limited coding. You don't need experience or familiarity with these services.
+To complete the tutorial, you don't need extensive experience or familiarity with Azure, Bot Framework bots, or Direct Line Speech.
The voice-enabled chat bot that you make in this tutorial follows these steps:
Here's what you'll need to complete this tutorial:
## Create a resource group
-The client app that you'll create in this tutorial uses a handful of Azure services. To reduce the round-trip time for responses from your bot, you'll want to make sure that these services are in the same Azure region.
+The client app that you'll create in this tutorial uses a handful of Azure services. To reduce the round-trip time for responses from your bot, you'll want to make sure that these services are in the same Azure region.
-This section walks you though creating a resource group in the West US region. You'll use this resource group when you're creating individual resources for the Bot Framework, the Direct Line Speech channel, and the Speech service.
+This section walks you through creating a resource group in the West US region. You'll use this resource group when you're creating individual resources for the Bot Framework, the Direct Line Speech channel, and the Speech service.
1. Go to the [Azure portal page for creating a resource group](https://ms.portal.azure.com/#create/Microsoft.ResourceGroup).
1. Provide the following information:
This section walks you though creating a resource group in the West US region. Y
### Choose an Azure region
-If you want to use a different region for this tutorial, these factors might limit your choices:
-
-* Ensure that you use a [supported Azure region](regions.md#voice-assistants).
-
-* The Direct Line Speech channel uses the text-to-speech service, which has neural and standard voices. Neural voices are used at [these Azure regions](regions.md#prebuilt-neural-voices), and standard voices (retiring) are used at [these Azure regions](how-to-migrate-to-prebuilt-neural-voice.md).
+Ensure that you use a [supported Azure region](regions.md#voice-assistants). The Direct Line Speech channel uses the text-to-speech service, which has neural and standard voices. Neural voices are used at [these Azure regions](regions.md#prebuilt-neural-voices), and standard voices (retiring) are used at [these Azure regions](how-to-migrate-to-prebuilt-neural-voice.md).
For more information about regions, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/).
At this point, check that your resource group (**SpeechEchoBotTutorial-ResourceG
## Build an echo bot
-Now that you've created resources, let's build a bot. We're going to start with the echo bot sample, which (as the name implies) echoes the text that you've entered as its response. Don't worry, the sample code is ready for you to use without any changes. It's configured to work with the Direct Line Speech channel, which you'll connect after you've deployed the bot to Azure.
+Now that you've created resources, start with the echo bot sample, which echoes the text that you've entered as its response. The sample code is already configured to work with the Direct Line Speech channel, which you'll connect after you've deployed the bot to Azure.
> [!NOTE]
> The instructions that follow, along with more information about the echo bot, are available in the [sample's README on GitHub](https://github.com/microsoft/BotBuilder-Samples/blob/master/samples/csharp_dotnetcore/02.echo-bot/README.md).
Now that you've created resources, let's build a bot. We're going to start with
### Test the bot sample with the Bot Framework Emulator
-The [Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop app that lets bot developers test and debug their bots locally (or remotely through a tunnel). The emulator accepts typed text as the input (not voice). The bot will also respond with text.
+The [Bot Framework Emulator](https://github.com/microsoft/botframework-emulator) is a desktop app that lets bot developers test and debug their bots locally (or remotely through a tunnel). The emulator accepts typed text as the input (not voice). The bot will also respond with text.
Follow these steps to use the Bot Framework Emulator to test your echo bot running locally, with text input and text output. After you deploy the bot to Azure, you'll test it with voice input and voice output.
Follow these steps to use the Bot Framework Emulator to test your echo bot runni
   ```
   http://localhost:3978/api/messages
   ```
- Then select **Connect**.
+
+1. Select **Connect**.
1. The bot should greet you with a "Hello and welcome!" message. Type in any text message and confirm that you get a response from the bot. This is what an exchange of communication with an echo bot might look like:
At this point, check your resource group (**SpeechEchoBotTutorial-ResourceGroup*
The **Azure Bot** page has a **Test in Web Chat** option under **Settings**. It won't work by default with your bot because the web chat needs to authenticate against your bot.
-If you want to test your deployed bot with text input, use the following steps. Note that these steps are optional and are not required for you to continue with the tutorial.
+If you want to test your deployed bot with text input, use the following steps. Note that these steps are optional and aren't required for you to continue with the tutorial.
1. In the [Azure portal](https://portal.azure.com), find and open your **EchoBotTutorial-BotRegistration-####** resource.
1. From the **Settings** area, select **Configuration**. Copy the value under **Microsoft App ID**.
If you get an error message in your main app window, use this table to identify
|Error (AuthenticationFailure) : WebSocket Upgrade failed with an authentication error (401). Check for correct subscription key (or authorization token) and region name| On the **Settings** page of the app, make sure that you entered the subscription key and its region correctly. |
|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: We could not connect to the bot before sending a message | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1002. Error details: The server returned status code '503' when status code '101' was expected | Make sure that you [selected the Enable Streaming Endpoint checkbox](#register-the-direct-line-speech-channel) and/or [turned on web sockets](#enable-web-sockets).<br>Make sure that Azure App Service is running. If it is, try restarting it.|
-|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your subscription key does not support neural voices. See [neural voices](./regions.md#prebuilt-neural-voices) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).|
+|Error (ConnectionFailure) : Connection was closed by the remote host. Error code: 1011. Error details: Response status code does not indicate success: 500 (InternalServerError)| Your bot specified a neural voice in the [speak](https://github.com/microsoft/botframework-sdk/blob/master/specs/botframework-activity/botframework-activity.md#speak) field of its output activity, but the Azure region associated with your subscription key doesn't support neural voices. See [neural voices](./regions.md#prebuilt-neural-voices) and [standard voices](how-to-migrate-to-prebuilt-neural-voice.md).|
If the actions in the table don't address your problem, see [Voice assistants: Frequently asked questions](faq-voice-assistants.yml). If you still can't resolve your problem after following all the steps in this tutorial, please enter a new issue on the [Voice Assistant GitHub page](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/issues).

#### A note on connection timeout
-If you're connected to a bot and no activity has happened in the last five minutes, the service automatically closes the web socket connection with the client and with the bot. This is by design. A message appears on the bottom bar: "Active connection timed out but ready to reconnect on demand."
+If you're connected to a bot and no activity has happened in the last five minutes, the service automatically closes the web socket connection with the client and with the bot. This is by design. A message appears on the bottom bar: "Active connection timed out but ready to reconnect on demand."
You don't need to select the **Reconnect** button. Press the microphone button and start talking, enter a text message, or say the keyword (if one is enabled). The connection is automatically reestablished.
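If you build your own client on the Speech SDK instead of using the Windows Voice Assistant Client, the same on-demand pattern applies: starting the next turn reestablishes the connection. Here's a minimal C# sketch, assuming the `Microsoft.CognitiveServices.Speech` NuGet package and placeholder key and region values:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Dialog;

class Program
{
    static async Task Main()
    {
        // Placeholders: use your own Speech resource key and region.
        var config = BotFrameworkConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        var connector = new DialogServiceConnector(config);

        // Bot replies arrive as Bot Framework activities (JSON), optionally with audio.
        connector.ActivityReceived += (s, e) =>
            Console.WriteLine($"Activity received (has audio: {e.HasAudio}): {e.Activity}");

        await connector.ConnectAsync();

        // If the service closed the web socket after five idle minutes,
        // starting the next turn reconnects on demand; no explicit
        // reconnect call is needed.
        await connector.ListenOnceAsync();
    }
}
```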
The Windows Voice Assistant Client uses the NuGet package [Microsoft.CognitiveSe
## Add custom keyword activation
-The Speech SDK supports custom keyword activation. Similar to "Hey Cortana" for Microsoft's assistant, you can write an app that will continuously listen for a keyword of your choice. Keep in mind that a keyword can be single word or a multiple-word phrase.
+The Speech SDK supports custom keyword activation. Similar to "Hey Cortana" for a Microsoft assistant, you can write an app that continuously listens for a keyword of your choice. Keep in mind that a keyword can be a single word or a multiple-word phrase.
> [!NOTE]
> The term *keyword* is often used interchangeably with the term *wake word*. You might see both used in Microsoft documentation.
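In SDK terms, keyword activation amounts to starting keyword recognition on the dialog connector. A minimal C# sketch, assuming placeholder key, region, and `.table` model file values (you'll create a real keyword model in the steps that follow):

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Dialog;

var config = BotFrameworkConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
var connector = new DialogServiceConnector(config);

// Placeholder path: point this at the .table file produced by Custom Keyword.
var keywordModel = KeywordRecognitionModel.FromFile("YourKeyword.table");

// The app now listens continuously and starts a turn only when the
// keyword is spoken; no button press is needed.
await connector.StartKeywordRecognitionAsync(keywordModel);

// Later, to stop listening for the keyword:
await connector.StopKeywordRecognitionAsync();
```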
Follow these steps to create a keyword model, configure the Windows Voice Assist
1. Enter the values for **Subscription key** and **Subscription key region**, and then select **OK** to close the **Settings** menu.
1. Select **Reconnect**. You should see a message that reads: "New conversation started - type, press the microphone button, or say the keyword." The app is now continuously listening.
1. Speak any phrase that starts with your keyword. For example: "{your keyword}, what time is it?" You don't need to pause after uttering the keyword. When you're finished, two things happen:
- 1. You see a transcription of what you spoke.
- 1. You hear the bot's response.
+ * You see a transcription of what you spoke.
+ * You hear the bot's response.
1. Continue to experiment with the three input types that your bot supports:
   * Entering text on the bottom bar
   * Pressing the microphone icon and speaking
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `2.18.0-amd64`:
+Release note for `3.0.0-amd64`:
-Regular monthly upgrade
+**Features**
+* Support for using containers in [disconnected environments](disconnected-containers.md).
Note that due to the phrase lists feature, the size of this container image has increased.

| Image Tags | Notes | Digest |
|--|:--|:--|
-| `latest` | | `sha256:c9ef9b95effe2be170d245c1b380262076224a21e859cd648e9dbd4146ddbdaf`|
-| `2.18.0-amd64` | | `sha256:c9ef9b95effe2be170d245c1b380262076224a21e859cd648e9dbd4146ddbdaf`|
+| `latest` | | `sha256:7eff5d7610f20622b5c5cae6235602774108f2de7aeebe2148016b6d232f7c42`|
+| `3.0.0-amd64` | | `sha256:7eff5d7610f20622b5c5cae6235602774108f2de7aeebe2148016b6d232f7c42`|
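To move to this release, pull the image by its new tag. A sketch, assuming the standard MCR repository path for the custom speech-to-text container (the full path appears earlier in the article):

```bash
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:3.0.0-amd64
```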
# [Previous version](#tab/previous)
+Release note for `2.18.0-amd64`:
+
+Regular monthly upgrade
+
Release note for `2.17.0-amd64`:

**Features**
Release note for `2.5.0-amd64`:
| Image Tags | Notes |
|--|:--|
+| `2.18.0-amd64` | |
| `2.17.0-amd64` | |
| `2.16.0-amd64` | |
| `2.15.0-amd64` | |
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
-Release note for `2.18.0-amd64-<locale>`:
+Release note for `3.0.0-amd64-<locale>`:
-Regular monthly release
+**Features**
+* Support for using containers in [disconnected environments](disconnected-containers.md).
Note that due to the phrase lists feature, the size of this container image has increased.

| Image Tags | Notes |
|--|:--|
| `latest` | Container image with the `en-US` locale. |
-| `2.18.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.18.0-amd64-en-us`.|
+| `3.0.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.0.0-amd64-en-us`.|
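For instance, to pull the German (Germany) image, assuming the standard MCR repository path for the speech-to-text container:

```bash
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.0.0-amd64-de-de
```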
This container has the following locales available.
-| Locale for v2.18.0 | Notes | Digest |
+| Locale for v3.0.0 | Notes | Digest |
|--|:--|:--|
-| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
-| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:16e6f169cf2ea025fc7d21c805a4a452e12b8d7b9530c8e9fc54ae68ee4f08dd` |
-| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:05dd5bc85de5567809259339aa213fc802b38924d025dc1786600e663bfd4996` |
-| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:94973685069d212c19d67d9c0c8eb3f0124e08ff82807e976b59578f1bd67e97` |
-| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:0dd7f1985b8544136bb1049d1b40d7c5858551f81721181a2e34fd1f9cb68e5b` |
-| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
-| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:9879fce4158fb8af2457eb6503607f78b7aade76eb4146c1ee7c142e7f9a21d4` |
-| `ar-om` | Container image with the `ar-OM` locale. | `sha256:0b1cd0c810cabad4217833d44b91479cd416d375e7ea43f2d14645f7bf859aa6` |
-| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
-| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
-| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:7b206ca47a9004866857ad8b9c9ea824bd128089a8bdb374e6da565b0ea30f05` |
-| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:a560c4e58476dcd9e5044f81da766a350b3b3464faaa6c93741a094c4afb621c` |
-| `ca-es` | Container image with the `ca-ES` locale. | `sha256:405cb4f74d10d5ff50efe9161b5cf21204d51c74b83766ea31ec2b8a878de495` |
-| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:87bde59f8fc441165f638a8665c480d259a3107b0edae5f022cb1b8f7e02a837` |
-| `da-dk` | Container image with the `da-DK` locale. | `sha256:ee6773b88378e9a01a35804f965bec0531b01327630174b927320553f023b7e9` |
-| `de-at` | Container image with the `de-AT` locale. | `sha256:f66bee7e43c05c1e434e0218d57ad094d47ec7be39e90ede3eb48fc9398fb873` |
-| `de-ch` | Container image with the `de-CH` locale. | `sha256:adb77da42c2637c072850fb2b5b2b2e508dff79e1ccdc5111b8f635167e35cc1` |
-| `de-de` | Container image with the `de-DE` locale. | `sha256:7143c59231017104bab633a108b5605166355f78e9dde2e3a4ebe6ffe71faafb` |
-| `el-gr` | Container image with the `el-GR` locale. | `sha256:4ce2fdeeaf53edc6811c079365e2aab56be75ea9abe3d94a6a96ca8dc0368573` |
-| `en-au` | Container image with the `en-AU` locale. | `sha256:e02827b1dcef490f792b04e7cd39eb7d46df4dbe57d340549b11193753136e76` |
-| `en-ca` | Container image with the `en-CA` locale. | `sha256:f5411eccf7659b1cc2303e118ef1ef002a700dd1a7363688a224763a6d19b7fe` |
-| `en-gb` | Container image with the `en-GB` locale. | `sha256:a87007b86fb1ca31b9a0368d01d7bfc4337b4262afb3356a88c752a29480f364` |
-| `en-hk` | Container image with the `en-HK` locale. | `sha256:a6014d4cbfafd2d49453f3ff12ea82fe8abc1e14bae639a2e9361de85a095f34` |
-| `en-ie` | Container image with the `en-IE` locale. | `sha256:aa6202c44028d4a8c608f04d6b66f473566d945012372182053d94dfc78eaa93` |
-| `en-in` | Container image with the `en-IN` locale. | `sha256:7ec9eaef19a2545e0a1afd70cb9707cf48029031e9f6b50cb6833045cbe66b29` |
-| `en-nz` | Container image with the `en-NZ` locale. | `sha256:48a95d03dc1200bfb56b1e3416dd1f94a0ad0227c0cf6c3c1730d862f2e99c15` |
-| `en-ph` | Container image with the `en-PH` locale. | `sha256:ab220ea3063af44c0ee7f7b9805289302faea578a50f4da5790b587ea49d31bc` |
-| `en-sg` | Container image with the `en-SG` locale. | `sha256:0f9cadefbe4d8236ef8e9c57b7473327541c1e37f53a2796f332bb2e190391f4` |
-| `en-us` | Container image with the `en-US` locale. | `sha256:bb13765581c938cbdcdcdec16fbc86f098fcebeecd16f33a50d9e5728a9dedb7` |
-| `en-za` | Container image with the `en-ZA` locale. | `sha256:096f4652fa8150cd1e2fa9b504cd2cce5bbb55b467ca9ba9f33d6b5c904fc51f` |
-| `es-ar` | Container image with the `es-AR` locale. | `sha256:acccaa583aaedab78d6614ada897d948d1d36d994d2fcd7f6b7e6435fe0b224f` |
-| `es-bo` | Container image with the `es-BO` locale. | `sha256:8d6631fefc679fe27366521a124d65dfa21c3e6b2a983f7da953e87d8711fad0` |
-| `es-cl` | Container image with the `es-CL` locale. | `sha256:0cd131cc39c2fe1231b7442f43f81b5e7c5317b51f5c9d9306bfa38c6abee060` |
-| `es-co` | Container image with the `es-CO` locale. | `sha256:ef4dcdcbce5f0dadde35f52c4322084274312e7b4a1e7dd18d76f92471a0688a` |
-| `es-cr` | Container image with the `es-CR` locale. | `sha256:8ee41457cf10efda1f3b126ae8dc21a1d5d2e966c9e3327a2134c597cfc16d89` |
-| `es-cu` | Container image with the `es-CU` locale. | `sha256:d00af5e4c41c9a240b64029ea8035e5e0012f54eec970771e84cfc4b59ecc373` |
-| `es-do` | Container image with the `es-DO` locale. | `sha256:9905d776b637cc5de8014a36af94ecc67088c1725fc578f805b682e969e04b3f` |
-| `es-ec` | Container image with the `es-EC` locale. | `sha256:a4e8d08b0a696d879cc20fb55171e90b32590514e999f73f98146b6921443cc3` |
-| `es-es` | Container image with the `es-ES` locale. | `sha256:1ecb4b3c86ff34b26b25058fd6c00b738c3c65d98f15c7a42e187f372ebadb60` |
-| `es-gt` | Container image with the `es-GT` locale. | `sha256:fd575f64f124bcb909d0515666e0a2555c3f1fe31dc8383c7fc953b423eed2e7` |
-| `es-hn` | Container image with the `es-HN` locale. | `sha256:5f96eebe2cea5a67e054c211cb744205e0ef15c957e8d38d618c746ff2c9f82a` |
-| `es-mx` | Container image with the `es-MX` locale. | `sha256:f9c8beb68ac7a1090f974b192df158013da5817b84b7e4c478ca646afe777c70` |
-| `es-ni` | Container image with the `es-NI` locale. | `sha256:150b98205f6802d85c4bb49fd8d334a6dd757ca1bb6cec747f93a5450a94eb85` |
-| `es-pa` | Container image with the `es-PA` locale. | `sha256:b27591217dc5b6db01570e9afac00949cdd78b26fe3469ed538bda62d6fb9209` |
-| `es-pe` | Container image with the `es-PE` locale. | `sha256:77dc8b771f638c2086de2ab573a28953865b95145cf82016459361e5cc3c5a47` |
-| `es-pr` | Container image with the `es-PR` locale. | `sha256:9f429598b0fc09efc6e9ce575fde538d400ceb7fa92807319873daba4b19dcf1` |
-| `es-py` | Container image with the `es-PY` locale. | `sha256:5cdaefc98a799ddd3800176efd6ffb896f5356af9b53a215d0600e874d94d893` |
-| `es-sv` | Container image with the `es-SV` locale. | `sha256:888bee57b4962c05c7a2cf569a22bb7bdc8bf2cf502e7f235ef1a0dafacb352d` |
-| `es-us` | Container image with the `es-US` locale. | `sha256:b021255ff7916f2d4b669114f3e5aad06de0c0b87656a9cc37af1f5f452e910b` |
-| `es-uy` | Container image with the `es-UY` locale. | `sha256:f69c019aa438f3f701b84805842dad98eeaa9a6998b261ea63e56dd80c1cd42c` |
-| `es-ve` | Container image with the `es-VE` locale. | `sha256:6cbd6d11bf9a021277c2fd42ef53242f12b7df00b559e572bbbe6baf48a84bac` |
-| `et-ee` | Container image with the `et-EE` locale. | `sha256:7b3a11a1e6f03ea4b802d97034588fbd461ebfed7ad08dc100c92586feff2208` |
-| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:eb765a640aa8ff89e9bc718b100635a7c6adc2342b2da8fc621e66b7ba8696d4` |
-| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:90127487c698e5d1a45c1a5813cda6805ba52a41468130f6dd4c28fe87f98fab` |
-| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:ffc7c3844873f7e639f2a137b991edc54b750b362756f6f8897fbfaaa32fe1df` |
-| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:ab41b4ad9161c342fac69fbd517264ad23579512a2500190b62e97586e5ec963` |
-| `gu-in` | Container image with the `gu-IN` locale. | `sha256:ac4da9f6d62baa41a193c4765e76eb507f51d069f989ae2860bada1c3e5ff968` |
-| `hi-in` | Container image with the `hi-IN` locale. | `sha256:9131208103997e9829239e3a8585c23f5dc2affc4ffbe3840270247d30b42be6` |
-| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:4ccb5056e7763b736362b7f7b663f71f2bd20b23fc4516a6c63dd105f2b99e9b` |
-| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:05a8d6be2d280cf8aa43fa059f4571417d47866bf603b8c1714ce079c4e66e6d` |
-| `it-it` | Container image with the `it-IT` locale. | `sha256:9e35544bc1a488d4b3fefc05860279c7a189505562fe2e4b1267da67154efded` |
-| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:a1a3a6a81916a98aa6df68704f8a2d8ad318e3cd54d78ed97a98ee3b6af1e599` |
-| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:67af86517f8915f3ebe107f65e62175dd2a7bb995416c963dca1eb398ed1502a` |
-| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:aa2248878811831ab58438f40c66be6332505f3194037275b37babfceaed1732` |
-| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:1ac940c96d054cf75e93cda1b88942ad5a7f4d3a269bbaf42060b91786394356` |
-| `mr-in` | Container image with the `mr-IN` locale. | `sha256:ca917fa5139516a75a9747f479fbbfb80819899c9d447c893578aadebf2d1c84` |
-| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:8f2e0aac8961d8c7d560b83ff02f9fdb50708c1e508f8c0c12662391940354df` |
-| `nb-no` | Container image with the `nb-NO` locale. | `sha256:7eae1acddc5341e653944dbe26fd44669e1868b70e5d49559529f2eeb8f33b02` |
-| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:5c3767d6f563b6b201a55338de1149fac43706c026c4ba6a358675d44c44d743` |
-| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:22ee4fd3a864576b58276b9a02821fba439f7ea5f5c462e62deca1778a8b91a6` |
-| `pt-br` | Container image with the `pt-BR` locale. | `sha256:660c69103e721206e14436882272e80396592a45801a186d2830993140d4c8e0` |
-| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:3579963235d8b05173fac42725e3509475bc42e197a5f0f325828a37ef2cf613` |
-| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:23c07debd00bf4a817898784fb77bdf3fd27071b196226a8df81de5bdf4bf9f8` |
-| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:b310ce3849e3c066678e4c90843ccf24e5972759a58b32863ba94801a481811b` |
-| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:a750a88a2c7677b2507730905819764ae56e560a96394abe3340888d4c986f3f` |
-| `sl-si` | Container image with the `sl-SI` locale. | `sha256:3b92dde403d279395de09c77e3f866fc5d6757fc1c9bbf52639be59aee57b3be` |
-| `sv-se` | Container image with the `sv-SE` locale. | `sha256:70291a568a3093db066fbeff4ae294dac1d3ee41789e293896793b9c76990eb9` |
-| `ta-in` | Container image with the `ta-IN` locale. | `sha256:e1a5d1a748137d549b858635c6c9f470e3049a14dc3f5b300dca46819765de9b` |
-| `te-in` | Container image with the `te-IN` locale. | `sha256:0e11a0d8be515c7149f4d1774c1621d6a3b27674a31beaa7a9f62e54f9497858` |
-| `th-th` | Container image with the `th-TH` locale. | `sha256:2164d04ab1f9821c4beccc2d34e97bc9cec7ad387b17e8257801cd25a28dc412` |
-| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:011ce659926bb4d4a56c8b3616b16ac7b80228c43e23d4b9154c96c67aa5db1b` |
-| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:c7357d975838ae827376cc10ef48c6db8ee65751ee4f15db9a31ab5e51a876f2` |
-| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:ea1c310631044b22fb61b79da59089db5ecd2e2ea0c3ab75d63e1c1c1d204a48` |
-| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:c3a2388d3cb7d22035b3a5e4562185541cbfe885ab6ed96f3b9e3a3aa65aa56c` |
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:86ed164f98f1d1776faa9bda4a7846bc0ad9232dd0613ae506fd5698d4823787` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:43fa641504d6e8b89e31f6eaa033ad680bb586b93fa3853747051a570fbf05ca` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:001c0d3ac2e3fec59993e001a8278696b847b14a1bd1ed5c843d18959b3d3d4e` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:1707f21fa9cbe5bd2275023620718f1a98429e5f5fb7279211951500d30a6e65` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:d237ecf21770b493c5aaf3bbab5ae9260aba121518996192d13924b4c5e999f4` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:d1e4e45ba5df3a9307433e8a631f02142c246e5a2fbf9c25edf97e290008c63a` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:a51c67916deac54a73ea1bb5084b511c34cd649764bd5935aac9f527bf33baf0` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:f0d70b8ab0e324ee42f0ca7fe57fa828c29ac1df11261676f7168b60139a0e3c` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:b876d37460b96cddb76fd74f0dfa64ad97399681eda27969e30f74d703a16b05` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:73bb40181bae4da3d3aaa1f77f5b831156ca496fbd065b4944b6e49f0807d9e9` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:a0b65559390af1100941983d850bf549f1aefe3ce56574de1a8cab63d5c52694` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:78030695ef9ff10e5a465340e211f1ca76dce569b9e8bd8c7758d28d2139965e` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:7705a78e3ea3d05bdf1a09876b9cd4c03a8734463f350e0eed81cc989710bcd5` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:d10066583f94bc3db96d2afd28fa42e880bd71e3f6195cc764bda79d039a58c7` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:d8b7d28287e016baacb4df7e3bf2d5cd7f6124ec136446200ad70b9472ee8207` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:493742b671c10b6767b371b8bb687241cbf38f53929a2ecc18979d531be136b4` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:61fa4cb2a671b504f06fa89f4d90ade6ccfbc378d93d1eada0cc47434b45601f` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:3b0f47356aab046c176bf2a5a5187404e3e5a9a50387bd29d35ce2371d49beff` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:bf98a2553b9555254968f6deeeee85e83462cb45a04adcd9c35be62c1cf51924` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:952a611a3911faf893313b51529d392eeac82a4a8abe542c49ca7aa9c89e8a48` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:6ad1168ac4e278ed65d66a8a5de441327522b27619dfdf6ecae52a68ab04b214` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:03174464fab551c34df402102cac3b4f4b4efc0a4250a14c07f35318787ad9e2` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:e38bbe4ae16dc792be3b6e9e2e488588fdd9d12eed08f330cd5dfc5d318b74e0` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:58476a88fb548b0ad18d54a118024c628a555a67d75fa5fdf7e860cc43b25272` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:e1ea7a52fd45ab959d10b597dc7f455f02100973f3edc8a67d25dd8cb373bac3` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:e5eabe477da8f6fb11a8083c67723026f268ba1a35372d1dffde85cc9d09bae9` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:b5c1279f30ee301d7e8d28cb084262da50a5c495feca36f04489a29ecd24f24f` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:d2e70e3fe109c6dcf02d75830efae3ea13955a1e68f590eeaf2c42239cd4a00a` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:70c5975df4b4ae2f301e73e35e21eaef306c50ee62a51526c1c29a0487ef8f0c` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:b81dd737747421ebb71b8f02cd16534a80809f2c792425d04f78388b4e9b10f1` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:2b5a469f630a647626a99a78d5bfe9afec331a18ea895b42bd5aa68bebdca73e` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:5c5c54cfa3da78579e872fec36c49902e402ddb14ffbe4ef4c273e6767219ccf` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:d417cedae4b7eb455700084e3e305552bbd6b2c20d0bba3d03d4a95052002dbc` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:82258abbba72a1238dfa334da0046ffd760480d793f18cbea1441c3fdb596255` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:efad3474a24ba7662e3d10808e31e2641580e206aa381f5d43af79604b367fc0` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:86dc0a12fdd237abc00e14e26714440e311e9945dd07ff662ca24881f23a5b2f` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:52139db949594a13a1c6f98f49b30d880d9426ce2f239bfde6090e3532fd7351` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:0ab8ea9a70f378f6684e4fc7d9d4db0596e8790badf0217b4c415f4857bce38f` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:512853c5af3b374b82848d3c5117d69264473a08d460b85d072829e36e3bd92f` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:c3a871d1f4b6c22e78e92f96ac3af435129ea2cfbe80cfef97d10d88e68ac763` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:bd1ea7e260276d0ea29506270bc790c4eabb76b6d6026776b523628eb7806b08` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:005e23623966802ed801373457ad57bf19aed5031f5fcd197cacb387082c7d95` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:fb0c71003d5dd73d93e10c04b7316d13129152ca293f16ac2b8b91361ecde1ca` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:23d1e068a418845a1783e6f9beb365782dc95baea21304780ea4023444d63352` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:268ef7cec34fd0e2449f15d924a263566dcfb147b66f1596c3b593cdc9080119` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:229e68ab16658646556f76d61e1e675aa39751130b8e87f1aba1d723036230e2` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:764337c9d5145986a1e292dfd6b69fa2a2cc335e0bd9e53c4d4f45b8dff05cc4` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:4ba59e9b68686386055771d240d8b5ca8e5e12723c7017b15e2674f525c46395` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:aa8040e8467194f654cb7c8444e027757053e0322e87940b2f4434e09686cec3` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:b213da609a2f2c8631a71d3e74f6d155e237ddbf1367574a3e6f0fc2144c4b73` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:6b5f98a5c8573dc03557b62ccda6ce9a1426b0ad6f2d432788294c1e41cd9deb` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:b5f5955b4baf9d431fc46c1a8c1afe994e6811ff9ae575106954b1c40821a7d3` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:a1bc229571563ca5769664a2457e42cce82216dfee5820f871b6a870e29f6d26` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:f28b07751cbebcd020e0fba17811fc97ee1f49e53e5584e970d6db30f60e34e9` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:c4bea85be0d7236b86b1a2315de764cb094ab1e567699b90a86e53716ed467f6` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:189bc20605d93b17c739361364b94462d5007ff237ec8b28c0aa0f7aadc495ab` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:572887127159990a3d44f6f5c3e5616d3df5b9f7b4696487407dcda619570d72` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:4b961e96614ce3845d5456db84163ae3a14acca6a8d7adf1ebded8a242f59be8` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:1b2ca4c7ff3361241c6eb5090fd739f9d72c52a6ffcaf05b1d305ae9cac76661` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:4733f6390a776707fc082fd025a73b5e73532c859c6add626640b1705decaa8b` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:84ebb7ab14e9ccd04da97747bc2611bff3f5d7bb2494e16c7ca257f1dacf3742` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:ca3edf97d26ff985cfe10b1bdcec2f65825758884cf706caca6855c6b865f4fd` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:f3f9e5ee72abed81d93dae46a13be28491f833e96e43312d685388deee39af67` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:e0f5df9b49ebcd341fa4de899d4840c7b9e0cb291d5d6b3c8269f5e40420933c` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:895ce0059b0fafe145053e1521fb63188a6d856753467ab85bd24aa8926102c1` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:f74afc0b64860b97db449a8c6892fb1cb484e0ab9a02b15ab4e984a0f3a7c62d` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:963c4cca989f14861d56aafa1a58ad14f489f7b5ba2ac6052a617d8950ee507c` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:468d4511672566d7d3de35a1c6150fdaa70634664a2553ae871c11806b024cb8` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:4de5d11d77c1e7015090da0a82b81b3328973a389d99afeb2c188e70464bc544` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:8a643ce653efcbf7e8042d87d89e456cd44ab6f100970ed4a38a1d6b5491a6c0` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:8b11c142024cee74d395a5bf0d8e6ed980304ac7a127302b08ad248fb66d82ea` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:bd140766406a58c679e4cf7f4c48448d2cd9f9cacd005c1f5bfd4bf4642b4193` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:a47258027fdaf47b776e1e6f58d71a130574f42a0bccf14ba0a1d215d4546add` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:376cb98f99e733c4f6015cb283923bb07f3c126341959b0ba1cb5472619a2836` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:d0ae77a2e5539dbdd809d895eea123320fb4aab24932af38b769d26968a4150c` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:522c14b9cbb6a218839942bf7c36b3fc207f26cf6ce4068bc883e8dd7890237b` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:c5f1ef181cb8287c917b9db2ee68eaa24b4f05e59372a00081bec70797bd54d1` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:110e1e79bbb10254f9bd735b1c9cb70b0bf5a88f73da7a68985d2c861a40f201` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:c1e0830d3cb04c8151c2e9c8c6eb0fb97036a09829fc8539a06bb07ca68a8e5e` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:dd1ef4db3784594ba8c7c211f6196714690fbd360a8f81f5b109e8a023585b3d` |
# [Previous version](#tab/previous)
+Release note for `2.18.0-amd64-<locale>`:
+
+Regular monthly release
+
Release note for `2.17.0-amd64-<locale>`:

**Features**
Release note for `2.5.0-amd64-<locale>`:
| Image Tags | Notes |
|--|:--|
+| `2.18.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.18.0-amd64-en-us`.|
| `2.17.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.17.0-amd64-en-us`.|
| `2.16.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.16.0-amd64-en-us`.|
| `2.15.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.15.0-amd64-en-us`.|
Release note for `2.5.0-amd64-<locale>`:
This container has the following locales available.
+| Locale for v2.18.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:16e6f169cf2ea025fc7d21c805a4a452e12b8d7b9530c8e9fc54ae68ee4f08dd` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:05dd5bc85de5567809259339aa213fc802b38924d025dc1786600e663bfd4996` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:94973685069d212c19d67d9c0c8eb3f0124e08ff82807e976b59578f1bd67e97` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:0dd7f1985b8544136bb1049d1b40d7c5858551f81721181a2e34fd1f9cb68e5b` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:9879fce4158fb8af2457eb6503607f78b7aade76eb4146c1ee7c142e7f9a21d4` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:0b1cd0c810cabad4217833d44b91479cd416d375e7ea43f2d14645f7bf859aa6` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:5cbc37cc91e0608cf174be5f2a320ca7daf312ade59fd9a3983d5324e68edae2` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:7b206ca47a9004866857ad8b9c9ea824bd128089a8bdb374e6da565b0ea30f05` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:a560c4e58476dcd9e5044f81da766a350b3b3464faaa6c93741a094c4afb621c` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:405cb4f74d10d5ff50efe9161b5cf21204d51c74b83766ea31ec2b8a878de495` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:87bde59f8fc441165f638a8665c480d259a3107b0edae5f022cb1b8f7e02a837` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:ee6773b88378e9a01a35804f965bec0531b01327630174b927320553f023b7e9` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:f66bee7e43c05c1e434e0218d57ad094d47ec7be39e90ede3eb48fc9398fb873` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:adb77da42c2637c072850fb2b5b2b2e508dff79e1ccdc5111b8f635167e35cc1` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:7143c59231017104bab633a108b5605166355f78e9dde2e3a4ebe6ffe71faafb` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:4ce2fdeeaf53edc6811c079365e2aab56be75ea9abe3d94a6a96ca8dc0368573` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:e02827b1dcef490f792b04e7cd39eb7d46df4dbe57d340549b11193753136e76` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:f5411eccf7659b1cc2303e118ef1ef002a700dd1a7363688a224763a6d19b7fe` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:a87007b86fb1ca31b9a0368d01d7bfc4337b4262afb3356a88c752a29480f364` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:a6014d4cbfafd2d49453f3ff12ea82fe8abc1e14bae639a2e9361de85a095f34` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:aa6202c44028d4a8c608f04d6b66f473566d945012372182053d94dfc78eaa93` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:7ec9eaef19a2545e0a1afd70cb9707cf48029031e9f6b50cb6833045cbe66b29` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:48a95d03dc1200bfb56b1e3416dd1f94a0ad0227c0cf6c3c1730d862f2e99c15` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:ab220ea3063af44c0ee7f7b9805289302faea578a50f4da5790b587ea49d31bc` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:0f9cadefbe4d8236ef8e9c57b7473327541c1e37f53a2796f332bb2e190391f4` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:bb13765581c938cbdcdcdec16fbc86f098fcebeecd16f33a50d9e5728a9dedb7` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:096f4652fa8150cd1e2fa9b504cd2cce5bbb55b467ca9ba9f33d6b5c904fc51f` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:acccaa583aaedab78d6614ada897d948d1d36d994d2fcd7f6b7e6435fe0b224f` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:8d6631fefc679fe27366521a124d65dfa21c3e6b2a983f7da953e87d8711fad0` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:0cd131cc39c2fe1231b7442f43f81b5e7c5317b51f5c9d9306bfa38c6abee060` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:ef4dcdcbce5f0dadde35f52c4322084274312e7b4a1e7dd18d76f92471a0688a` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:8ee41457cf10efda1f3b126ae8dc21a1d5d2e966c9e3327a2134c597cfc16d89` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:d00af5e4c41c9a240b64029ea8035e5e0012f54eec970771e84cfc4b59ecc373` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:9905d776b637cc5de8014a36af94ecc67088c1725fc578f805b682e969e04b3f` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:a4e8d08b0a696d879cc20fb55171e90b32590514e999f73f98146b6921443cc3` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:1ecb4b3c86ff34b26b25058fd6c00b738c3c65d98f15c7a42e187f372ebadb60` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:fd575f64f124bcb909d0515666e0a2555c3f1fe31dc8383c7fc953b423eed2e7` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:5f96eebe2cea5a67e054c211cb744205e0ef15c957e8d38d618c746ff2c9f82a` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:f9c8beb68ac7a1090f974b192df158013da5817b84b7e4c478ca646afe777c70` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:150b98205f6802d85c4bb49fd8d334a6dd757ca1bb6cec747f93a5450a94eb85` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:b27591217dc5b6db01570e9afac00949cdd78b26fe3469ed538bda62d6fb9209` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:77dc8b771f638c2086de2ab573a28953865b95145cf82016459361e5cc3c5a47` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:9f429598b0fc09efc6e9ce575fde538d400ceb7fa92807319873daba4b19dcf1` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:5cdaefc98a799ddd3800176efd6ffb896f5356af9b53a215d0600e874d94d893` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:888bee57b4962c05c7a2cf569a22bb7bdc8bf2cf502e7f235ef1a0dafacb352d` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:b021255ff7916f2d4b669114f3e5aad06de0c0b87656a9cc37af1f5f452e910b` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:f69c019aa438f3f701b84805842dad98eeaa9a6998b261ea63e56dd80c1cd42c` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:6cbd6d11bf9a021277c2fd42ef53242f12b7df00b559e572bbbe6baf48a84bac` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:7b3a11a1e6f03ea4b802d97034588fbd461ebfed7ad08dc100c92586feff2208` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:eb765a640aa8ff89e9bc718b100635a7c6adc2342b2da8fc621e66b7ba8696d4` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:90127487c698e5d1a45c1a5813cda6805ba52a41468130f6dd4c28fe87f98fab` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:ffc7c3844873f7e639f2a137b991edc54b750b362756f6f8897fbfaaa32fe1df` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:ab41b4ad9161c342fac69fbd517264ad23579512a2500190b62e97586e5ec963` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:ac4da9f6d62baa41a193c4765e76eb507f51d069f989ae2860bada1c3e5ff968` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:9131208103997e9829239e3a8585c23f5dc2affc4ffbe3840270247d30b42be6` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:4ccb5056e7763b736362b7f7b663f71f2bd20b23fc4516a6c63dd105f2b99e9b` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:05a8d6be2d280cf8aa43fa059f4571417d47866bf603b8c1714ce079c4e66e6d` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:9e35544bc1a488d4b3fefc05860279c7a189505562fe2e4b1267da67154efded` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:a1a3a6a81916a98aa6df68704f8a2d8ad318e3cd54d78ed97a98ee3b6af1e599` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:67af86517f8915f3ebe107f65e62175dd2a7bb995416c963dca1eb398ed1502a` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:aa2248878811831ab58438f40c66be6332505f3194037275b37babfceaed1732` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:1ac940c96d054cf75e93cda1b88942ad5a7f4d3a269bbaf42060b91786394356` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:ca917fa5139516a75a9747f479fbbfb80819899c9d447c893578aadebf2d1c84` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:8f2e0aac8961d8c7d560b83ff02f9fdb50708c1e508f8c0c12662391940354df` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:7eae1acddc5341e653944dbe26fd44669e1868b70e5d49559529f2eeb8f33b02` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:5c3767d6f563b6b201a55338de1149fac43706c026c4ba6a358675d44c44d743` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:22ee4fd3a864576b58276b9a02821fba439f7ea5f5c462e62deca1778a8b91a6` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:660c69103e721206e14436882272e80396592a45801a186d2830993140d4c8e0` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:3579963235d8b05173fac42725e3509475bc42e197a5f0f325828a37ef2cf613` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:23c07debd00bf4a817898784fb77bdf3fd27071b196226a8df81de5bdf4bf9f8` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:b310ce3849e3c066678e4c90843ccf24e5972759a58b32863ba94801a481811b` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:a750a88a2c7677b2507730905819764ae56e560a96394abe3340888d4c986f3f` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:3b92dde403d279395de09c77e3f866fc5d6757fc1c9bbf52639be59aee57b3be` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:70291a568a3093db066fbeff4ae294dac1d3ee41789e293896793b9c76990eb9` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:e1a5d1a748137d549b858635c6c9f470e3049a14dc3f5b300dca46819765de9b` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:0e11a0d8be515c7149f4d1774c1621d6a3b27674a31beaa7a9f62e54f9497858` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:2164d04ab1f9821c4beccc2d34e97bc9cec7ad387b17e8257801cd25a28dc412` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:011ce659926bb4d4a56c8b3616b16ac7b80228c43e23d4b9154c96c67aa5db1b` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:c7357d975838ae827376cc10ef48c6db8ee65751ee4f15db9a31ab5e51a876f2` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:ea1c310631044b22fb61b79da59089db5ecd2e2ea0c3ab75d63e1c1c1d204a48` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:c3a2388d3cb7d22035b3a5e4562185541cbfe885ab6ed96f3b9e3a3aa65aa56c` |
| Locale for v2.17.0 | Notes | Digest |
|--|:--|:--|
| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:4e7fd7c2412e13e4f5d642b105230e90ae6cc2f0457ceabd2db340e0ae29f316` |
Release notes for `v1.12.0`:
| `zh-cn-xiaoyanneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoYanNeural` voice. |
| `zh-cn-yunxineural` | Container image with the `zh-CN` locale and `zh-CN-YunXiNeural` voice. |
+
# [Previous version](#tab/previous)

Release notes for `v1.11.0`:
cognitive-services Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/concepts/evaluation.md
Previously updated : 11/02/2021 Last updated : 01/24/2022

# Evaluation metrics
-Your [dataset is split](../how-to/train-model.md#data-splits) into two parts: a set for training, and a set for testing. The training set while building the model and the testing set is used as a blind set to evaluate model performance after training is completed.
+Your [dataset is split](../how-to/train-model.md#data-split) into two parts: a set for training, and a set for testing. The training set is used while building the model, and the testing set is used as a blind set to evaluate model performance after training is completed.
Model evaluation is triggered after training is completed successfully. The evaluation process starts by using the trained model to predict user-defined classes for files in the test set, and compares them with the provided data tags (which establishes a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom text classification uses the following metrics:
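These are the standard precision, recall, and F1 scores. For reference, with $TP$, $FP$, and $FN$ denoting true positives, false positives, and false negatives counted over the test set:

$$
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
$$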
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/faq.md
When you're ready to start [using your model to make predictions](#how-do-i-use-
## What is the recommended CI/CD process?
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you train a new model, your dataset is [split](how-to/train-model.md#data-splits) randomly into training and testing sets. Because of this, there is no guarantee that the model evaluation is performed on the same test set, so results are not comparable. It is recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data, then train and test a **new** model as well. View [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you train a new model, your dataset is [split](how-to/train-model.md#data-split) randomly into training and testing sets. Because of this, there is no guarantee that the model evaluation is performed on the same test set, so results are not comparable. It is recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
## Does a low or high model score guarantee bad or good performance in production?
See the [data selection and schema design](how-to/design-schema.md) article for
## When I retrain my model I get different results, why is this?
-* When you train a new model your dataset is [split](how-to/train-model.md#data-splits) randomly into training and testing sets, so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
+* When you train a new model, your dataset is [split](how-to/train-model.md#data-split) randomly into training and testing sets, so there is no guarantee that the model evaluation is performed on the same test set, and results are not comparable.
* If you are retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/how-to/train-model.md
Before you train your model you need:
See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-## Data splits
+## Data split
Before starting the training process, files in your dataset are divided into three groups at random:
Before starting the training process, files in your dataset are divided into thr
2. Select **Train** from the left side menu.
-3. To train a new model, select **Train a new model** and type in the model name in the text box below. You can **overwrite an existing model** by selecting this option and select the model you want from the dropdown below.
+3. Select **Start a training job** from the top menu.
+
+4. To train a new model, select **Train a new model** and type the model name in the text box below. You can **overwrite an existing model** by selecting this option and selecting the model you want from the dropdown below.
:::image type="content" source="../media/train-model.png" alt-text="Create a new model" lightbox="../media/train-model.png":::
-4. Select the **Train** button at the bottom of the page.
+5. Select the **Train** button.
-The time to train a model varies on the dataset, and may take up to several hours. You can only train one model at a time, and you cannot create or train other models if one is already training in the same project.
+6. You can check the status of the training job on the same page. Only successfully completed training jobs generate models.
You can only have one training job running at a time. You cannot create or start other training jobs in the same project.
-After training has completed successfully, keep in mind:
+<!-- After training has completed successfully, keep in mind:
* [View the model's evaluation details](../how-to/view-model-evaluation.md) After model training, model evaluation is done against the [test set](../how-to/train-model.md#data-split), which was not introduced to the model during training. By viewing the evaluation, you can get a sense of how the model performs in real-life scenarios.
* [Examine data distribution](../how-to/improve-model.md#examine-data-distribution-from-language-studio) Make sure that all classes are well represented and that you have a balanced data distribution. If a certain class is tagged far less frequently than the others, it is likely under-represented, and most of its occurrences probably won't be recognized properly by the model at runtime. In this case, consider adding more files that belong to this class to your dataset.
+ -->
* [Improve performance (optional)](../how-to/improve-model.md) Other than revising [tagged data](tag-data.md) based on error analysis, you may want to increase the number of tags for under-performing entity types, or improve the diversity of your tagged data. This will help your model learn to give correct predictions, over potential linguistic phenomena that cause failure.

<!-- * Define your own test set: If you are using a random split option and the resulting test set was not comprehensive enough, consider defining your own test set to include a variety of data layouts and balanced tagged classes. -->

## Next steps

After training is completed, you will be able to [use the model evaluation metrics](../how-to/view-model-evaluation.md) to optionally [improve your model](../how-to/improve-model.md). Once you're satisfied with your model, you can deploy it, making it available to use for [classifying text](call-api.md).
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/how-to/view-model-evaluation.md
The evaluation process uses the trained model to predict user-defined classes fo
## View the model details using Language Studio

1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- 1. Look for the section in Language Studio labeled **Classify text**.
- 2. Select **Custom text classification**.
2. Select **View model details** from the left side menu.
-3. View your model training status in the **Status** column, and the F1 score for the model in the **F1 score** column.
+3. On this page you can view only the successfully trained models. You can select the model name for more details.
- :::image type="content" source="../media/model-details-1.png" alt-text="View model details button" lightbox="../media/model-details-1.png":::
-
-1. Click on the model name for more details.
-
-2. You can find the **model-level** evaluation metrics under the **Overview** section and the **class-level** evaluation metrics under the **Class performance metrics** section. See [Evaluation metrics](../concepts/evaluation.md#model-level-and-class-level-evaluation-metrics) for more information.
+4. You can find the **model-level** evaluation metrics under the **Overview** section and the **class-level** evaluation metrics under the **Class performance metrics** section. See [Evaluation metrics](../concepts/evaluation.md#model-level-and-class-level-evaluation-metrics) for more information.
:::image type="content" source="../media/model-details-2.png" alt-text="Model performance metrics" lightbox="../media/model-details-2.png":::
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/quickstart.md
Previously updated : 01/13/2021 Last updated : 01/25/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/service-limits.md
description: Learn about the data and rate limits when using custom text classif
Last updated : 01/25/2022 Previously updated : 11/02/2021
Use this article to learn about the data and rate limits when using custom text
* All files should be available at the root of your container.
-* Your [training dataset](how-to/train-model.md#data-splits) should include at least 10 files and no more than 1,000,000 files.
+* Your [training dataset](how-to/train-model.md#data-split) should include at least 10 files and no more than 1,000,000 files.
## API limits
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/train-model.md
Before starting the training process, files in your dataset are divided into two
2. Select **Train** from the left side menu.
-3. To train a new model, select **Train a new model** and type in the model name in the text box below. You can **overwrite an existing model** by selecting this option and select the model you want from the dropdown below.
+3. Select **Start a training job** from the top menu.
+
+4. To train a new model, select **Train a new model** and type the model name in the text box below. You can **overwrite an existing model** by selecting this option and selecting the model you want from the dropdown below.
:::image type="content" source="../media/train-model.png" alt-text="Create a new model" lightbox="../media/train-model.png":::
-4. Select the **Train** button at the bottom of the page.
+5. Select the **Train** button.
+
+6. You can check the status of the training job on the same page. Only successfully completed training jobs generate models.
+
+You can only have one training job running at a time. You cannot create or start other training jobs in the same project.
## Next steps
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
See the [application development lifecycle](../overview.md#application-developme
## View the model's evaluation details

1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
- 1. Look for the section in Language Studio labeled **Extract information**.
- 2. Select **Custom named entity extraction**.
2. Select **View model details** from the menu on the left side of the screen.
-3. View your model training status in the **Status** column, and the F1 score for the model in the **F1 score** column. you can click on the model name for more details.
+3. On this page you can view only the successfully trained models. You can click on the model name for more details.
4. You can find the **model-level** evaluation metrics under **Overview**, and the **entity-level** evaluation metrics under **Entity performance metrics**. The confusion matrix for the model is located under **Test set confusion matrix**
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/quickstart.md
Previously updated : 11/02/2021 Last updated : 01/24/2022 zone_pivot_groups: usage-custom-language-features
zone_pivot_groups: usage-custom-language-features
# Quickstart: Custom Named Entity Recognition (preview)
-In this article, we use the Language studio to demonstrate key concepts of custom name entity recognition (NER). As an example we will build a custom NER model to extract relevant entities from loan agreements.
+In this article, we use the Language studio to demonstrate key concepts of custom Named Entity Recognition (NER). As an example, we'll build a custom NER model to extract relevant entities from loan agreements.
::: zone pivot="language-studio"
cognitive-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker.md
description: Migrate your legacy QnAMaker knowledge bases to custom question ans
-- Previously updated : 11/02/2021++ Last updated : 01/23/2022

# Migrate from QnA Maker to custom question answering
-Custom question answering was introduced in May 2021 with several new features including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each custom question answering project is equivalent to a knowledge base in QnA Maker. You can easily migrate knowledge bases from a QnA Maker resource to custom question answering projects within a [language resource](https://aka.ms/create-language-resource). You can also choose to migrate knowledge bases from multiple QnA Maker resources to a specific language resource.
+Custom question answering, a feature of Azure Cognitive Service for Language, was introduced in May 2021 with several new capabilities including enhanced relevance using a deep learning ranker, precise answers, and end-to-end region support. Each custom question answering project is equivalent to a knowledge base in QnA Maker. You can easily migrate knowledge bases from a QnA Maker resource to custom question answering projects within a [language resource](https://aka.ms/create-language-resource). You can also choose to migrate knowledge bases from multiple QnA Maker resources to a specific language resource.
To successfully migrate knowledge bases, **the account performing the migration needs contributor access to the selected QnA Maker and language resource**. When a knowledge base is migrated, the following details are copied to the new custom question answering project:
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/language-support.md
description: A list of culture, natural languages supported by custom question answering for your knowledge base. Do not mix languages in the same knowledge base. ++
+recommendations: false
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/overview.md
Title: What is question answering?
description: Question answering is a cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom knowledge base (KB) of information. ++
+recommendations: false
Last updated 11/02/2021 keywords: "qna maker, low code chat bots, multi-turn conversations"
cognitive-services Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/quickstart/sdk.md
Last updated 11/29/2021++
+recommendations: false
ms.devlang: csharp, python zone_pivot_groups: custom-qna-quickstart
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
This article provides information about limitations and known issues related to
The following sections provide information about known issues associated with the Communication Services JavaScript voice and video calling SDKs.
+### Some Android devices failing to join calls and meetings.
+
+A number of specific Android devices fail to join calls and meetings. Devices that run into this issue won't recover and will fail on every attempt. These are mostly Samsung phones, with the A326U, A125U, and A215U models being the biggest contributors.
+
+- This is a known regression introduced in [Chromium](https://bugs.chromium.org/p/webrtc/issues/detail?id=13223).
+ ### iOS 15.1 users joining group calls or Microsoft Teams meetings. * Low volume. Known regression introduced by Apple with the release of iOS 15.1. Related webkit bug [here](https://bugs.webkit.org/show_bug.cgi?id=230902).
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Communication Services APIs are documented alongside other Azure REST APIs in [d
| Azure Resource Manager | [REST](/rest/api/communication/communicationservice)| Service| Provision and manage Communication Services resources| | Common | N/A | Client & Service | Provides base types for other SDKs | | Identity | [REST](/rest/api/communication/communicationidentity/communication-identity) | Service| Manage users, access tokens|
-| Phone numbers| [REST](/rest/api/communication/phonenumbers)| Service| Acquire and manage phone numbers |
-| SMS| [REST](/rest/api/communication/sms) | Service| Send and receive SMS messages|
+| Phone numbers| [REST](/rest/api/communication/phonenumbers) | Service| Acquire and manage phone numbers |
+| SMS | [REST](/rest/api/communication/sms) | Service| Send and receive SMS messages|
| Chat | [REST](/rest/api/communication/) with proprietary signaling | Client & Service | Add real-time text chat to your applications |
-| Calling| Proprietary transport | Client | Voice, video, screen-sharing, and other real-time communication |
-| Calling Server | REST| Service| Make and manage calls, play audio, and configure recording |
-| Network Traversal| REST| Service| Access TURN servers for low-level data transport |
+| Calling | Proprietary transport | Client | Voice, video, screen-sharing, and other real-time communication |
+| Calling Server | [REST](/rest/api/communication/callautomation/server-calls) | Service| Make and manage calls, play audio, and configure recording |
+| Network Traversal | [REST](/rest/api/communication/communication-network-traversal)| Service| Access TURN servers for low-level data transport |
| UI Library | N/A | Client | Production-ready UI components for chat and calling apps | ### Languages and publishing locations
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
# Voice and video concepts - You can use Azure Communication Services to make and receive one to one or group voice and video calls. Your calls can be made to other Internet-connected devices and to plain-old telephones. You can use the Communication Services JavaScript, Android, or iOS SDKs to build applications that allow your users to speak to one another in private conversations or in group discussions. Azure Communication Services supports calls to and from services or Bots. ## Call types in Azure Communication Services
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
# Calling SDK overview - The Calling SDK enables end-user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, please check out [Calling quickstarts](../../quickstarts/voice-video-calling/getting-started-with-calling.md) or [Calling hero sample](../../samples/calling-hero-sample.md). Once you've started development, check out the [known issues page](../known-issues.md) to find bugs we're working on.
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
# QuickStart: Add 1:1 video calling to your app - ::: zone pivot="platform-web" [!INCLUDE [Video calling with JavaScript](./includes/video-calling/video-calling-javascript.md)] ::: zone-end
communication-services Get Started With Voice Video Calling Custom Teams Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md
# QuickStart: Add 1:1 video calling to your customized Teams application - [!INCLUDE [Video calling with JavaScript](./includes/custom-teams-endpoint/voice-video-calling-cte-javascript.md)] ## Clean up resources
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
Get started with Azure Communication Services by using the Communication Services Calling SDK to add voice and video calling to your app. - [!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)] ::: zone pivot="platform-windows"
connectors Compare Built In Azure Connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/compare-built-in-azure-connectors.md
+
+ Title: Built-in operations versus Azure connectors in Standard
+description: Learn the differences between built-in operations and Azure connectors for Standard logic apps.
+
+ms.suite: integration
++ Last updated : 01/20/2022+
+# As a developer, I want to understand the differences between built-in and Azure connectors in Azure Logic Apps (Standard).
++
+# Differences between built-in operations and Azure connectors in Azure Logic Apps (Standard)
+
+When you create a **Logic App (Standard)** stateful workflow, the workflow designer shows the available triggers and actions as categories named **Built-in** and **Azure**.
+
+In **Logic Apps (Standard)**, built-in connectors are known as *service provider connectors*. These connectors are custom extensions that are implemented based on Azure Functions. Built-in connectors run in the same cluster as the Azure Logic Apps engine. Anyone can create their own service provider connector.
+
+Azure connectors are the same as the [*managed connectors*](managed.md) in **Logic App (Consumption)** workflows. These connectors are managed by Microsoft and run in shared connector clusters in the Azure cloud. Separately from the managed connector clusters, the Azure Logic Apps engine runs in a different cluster. If your workflow has to invoke a managed connector operation, the Azure Logic Apps engine makes a call to the connector in the connector clusters. In turn, the connector might then call the backend target service, which can be Office 365, Salesforce, and so on.
+
+<a name="considerations-authentication"></a>
+
+## Considerations for authentication
+
+Authentication considerations for built-in and Azure connectors differ based on whether you develop the workflow in the Azure portal or locally in Visual Studio Code.
+
+| Environment | Connector type | Authentication |
+|-|-|-|
+| Azure portal | Built-in | Connection strings, credentials, or connection parameters are stored in your logic app's configuration or app settings. |
+| Azure portal | Azure | Connections are authenticated using either a managed identity or [Azure Active Directory (Azure AD) app registration with access policies enabled on the Azure API connections](../logic-apps/azure-arc-enabled-logic-apps-create-deploy-workflows.md#set-up-connection-authentication). |
+| Visual Studio Code | Built-in | Connection strings or credentials are stored in the logic app project's **local.settings.json** file. |
+| Visual Studio Code | Azure | During workflow design, API connections are created and stored in the Azure cloud backend. To run these connections in your local environment, a bearer token is issued for seven days and is stored in your logic app project's **local.settings.json** file. |
+||||
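+
+For example, here's a minimal sketch of how a built-in connector's connection string might appear in the project's **local.settings.json** file during local development. The `serviceBus_connectionString` setting name and all values are hypothetical placeholders:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "serviceBus_connectionString": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
+  }
+}
+```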
+
+<a name="considerations-backend-communication"></a>
+
+## Considerations for backend communication
+
+For an Azure connector to work, your backend service, such as Office 365 or SQL Server, has to allow traffic through the [outbound IP addresses for managed connectors](/connectors/common/outbound-ip-addresses) in the region where you created your logic app.
+
+For a built-in connector to work, your backend service has to allow traffic from the Azure Logic Apps engine instead. You can find the outbound IP addresses for the Azure Logic Apps engine by using the following steps:
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On the logic app resource menu, under **Settings**, select **Properties**.
+
+1. Under **Outgoing IP addresses** and **Additional Outgoing IP addresses**, copy all the IP addresses, and set your backend service to allow traffic through these IP addresses.
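+
+Alternatively, you can query the same values with the Azure CLI. This sketch assumes that your Standard logic app is exposed as a **Microsoft.Web/sites** resource, where the `possibleOutboundIpAddresses` property covers both the outgoing and additional outgoing IP addresses; replace the placeholder names with your own:
+
+```azurecli
+az webapp show --name <logic-app-name> --resource-group <resource-group-name> --query possibleOutboundIpAddresses --output tsv
+```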
+
+<a name="considerations-vnet"></a>
+
+## Considerations for virtual network integration
+
+Built-in connectors run in the same cluster as the Azure Logic Apps host runtime and can use virtual network (VNet) integration capabilities to access resources over a private network. However, Azure connectors run in a shared managed connector environment and can't benefit from these VNet integration capabilities.
+
+Instead, for Azure connectors to work when VNet integration is enabled on a Standard logic app, you have to allow traffic through the [outbound IP addresses for managed connectors](/connectors/common/outbound-ip-addresses) in the region where you created your logic app. For example, if the subnet that's used in the VNet integration has a network security group (NSG) policy or firewall, that subnet has to allow outbound traffic to the outbound IP addresses for managed connectors.
+
+## Next steps
+
+- [Logic Apps Anywhere: Networking possibilities with Logic Apps (single-tenant)](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
+
+- [Azure Logic Apps Running Anywhere: Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272)
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-native-recurrence.md
Title: Schedule recurring tasks and workflows
-description: Schedule and run recurring automated tasks and workflows with the Recurrence trigger in Azure Logic Apps
+description: Schedule and run recurring automated tasks and workflows with the Recurrence trigger in Azure Logic Apps.
ms.suite: integration-- Previously updated : 12/18/2020++ Last updated : 01/24/2022 # Create, schedule, and run recurring tasks and workflows with the Recurrence trigger in Azure Logic Apps
For differences between this trigger and the Sliding Window trigger or for more
|||||| > [!IMPORTANT]
- > If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time),
- > the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior,
- > provide a start date and time for when you want the first recurrence to run.
+ > If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance:
+ >
+ > * **Day**: Set up the daily recurrence at least 24 hours in advance.
+ >
+ > * **Week**: Set up the weekly recurrence at least 7 days in advance.
+ >
+ > Otherwise, the workflow might skip the first recurrence.
+ >
+ > If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time), the first recurrence runs immediately
+ > when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior, provide a start
+ > date and time for when you want the first recurrence to run.
+ >
+ > If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences,
+ > those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to
+ > factors such as latency during storage calls. To make sure that your logic app doesn't miss a recurrence, especially when
+ > the frequency is in days or longer, try these options:
>
- > If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences, those recurrences are
- > based on the last run time. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
- > To make sure that your logic app doesn't miss a recurrence, especially when the frequency is in days or longer, try these options:
- >
> * Provide a start date and time for the recurrence plus the specific times when to run subsequent recurrences by using the properties > named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies.
- >
- > * Use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md),
- > rather than the Recurrence trigger.
+ >
+ > * Use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md), rather than the Recurrence trigger.
1. To set advanced scheduling options, open the **Add new parameter** list. Any options that you select appear on the trigger after selection.
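
For reference, the advanced scheduling options map to the trigger's underlying JSON definition. Here's an illustrative sketch of a weekly recurrence that combines a start date and time with specific run times; all values are examples only:

```json
"Recurrence": {
    "type": "Recurrence",
    "recurrence": {
        "frequency": "Week",
        "interval": 1,
        "startTime": "2022-02-07T09:00:00Z",
        "schedule": {
            "weekDays": [ "Monday" ],
            "hours": [ 9 ],
            "minutes": [ 0 ]
        }
    }
}
```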
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-apps/containers.md
The following example configuration shows the options available when setting up
| `args` | Start up command arguments. | Entries in the array are joined together to create a parameter list to pass to the startup command. | | `env` | An array of key/value pairs that define environment variables. | Use `secretRef` instead of the `value` field to refer to a secret. | | `resources.cpu` | The number of CPUs allocated to the container. | Values must adhere to the following rules: the value must be greater than zero and less than 2, and can be any decimal number, with a maximum of one decimal place. For example, `1.1` is valid, but `1.55` is invalid. The default is 0.5 CPU per container. |
-| `resources.memory` | The amount of RAM allocated to the container. | This value is up to `4Gi`. The only allowed united are [gibibytes](https://simple.wikipedia.org/wiki/Gibibyte) (`Gi`). Values must adhere to the following rules: the value must be greater than zero and less than `4Gi`, and can be any decimal number, with a maximum of two decimal places. For example, `1.25Gi` is valid, but `1.555Gi` is invalid. The default is `1Gi` per container. |
+| `resources.memory` | The amount of RAM allocated to the container. | This value is up to `4Gi`. The only allowed units are [gibibytes](https://simple.wikipedia.org/wiki/Gibibyte) (`Gi`). Values must adhere to the following rules: the value must be greater than zero and less than `4Gi`, and can be any decimal number, with a maximum of two decimal places. For example, `1.25Gi` is valid, but `1.555Gi` is invalid. The default is `1Gi` per container. |
The total amount of CPUs and memory requested for all the containers in a container app must add up to one of the following combinations.
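
For example, here's a minimal sketch of a container definition whose `resources` values follow the rules above; the container name and image are placeholders:

```json
"containers": [
  {
    "name": "my-container",
    "image": "myregistry.azurecr.io/my-image:latest",
    "resources": {
      "cpu": 0.5,
      "memory": "1Gi"
    }
  }
]
```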
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/best-practice-dotnet.md
Previously updated : 08/26/2021 Last updated : 01/25/2022
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
| <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. | | <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) | | <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (i.e. if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds ). It is advised to only use these diagnostics during performance testing. |
+| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues in production environments, causing high CPU and I/O bottlenecks. Make sure that you're using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing). |
## Best practices when using Gateway mode Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
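
For example, here's a minimal sketch that raises the connection limit through `ConnectionPolicy.MaxConnectionLimit` in the .NET v2 SDK; the endpoint and key values are placeholders:

```csharp
using System;
using Microsoft.Azure.Documents.Client;

// Allow up to 1,000 simultaneous connections to Azure Cosmos DB in Gateway mode.
var connectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Gateway,
    MaxConnectionLimit = 1000
};

var client = new DocumentClient(
    new Uri("https://<account-name>.documents.azure.com:443/"),
    "<account-key>",
    connectionPolicy);
```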
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
Previously updated : 07/08/2021 Last updated : 01/25/2022 ms.devlang: csharp
If you're testing at high throughput levels, or at rates that are greater than 5
> [!NOTE] > High CPU usage can cause increased latency and request timeout exceptions.
+## <a id="logging-and-tracing"></a> Logging and tracing
+
+Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener poses performance issues in production environments, causing high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) in production environments.
+
+The latest SDK versions (greater than 3.23.0) automatically remove the DefaultTraceListener when they detect it. With older versions, you can remove it as follows:
+
+# [.NET 6 / .NET Core](#tab/trace-net-core)
+
+```csharp
+if (!Debugger.IsAttached)
+{
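+ // Use reflection to reach the SDK's internal DefaultTrace type.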
+ Type defaultTrace = Type.GetType("Microsoft.Azure.Cosmos.Core.Trace.DefaultTrace,Microsoft.Azure.Cosmos.Direct");
+ TraceSource traceSource = (TraceSource)defaultTrace.GetProperty("TraceSource").GetValue(null);
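+ // Remove the DefaultTraceListener, which causes high CPU and I/O in production.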
+ traceSource.Listeners.Remove("Default");
+ // Add your own trace listeners
+}
+```
+
+# [.NET Framework](#tab/trace-net-fx)
+
+Edit your `app.config` or `web.config` files:
+
+```xml
+<configuration>
+ <system.diagnostics>
+ <sources>
+ <source name="DocDBTrace" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >
+ <listeners>
+ <remove name="Default" />
+ <!--Add your own trace listeners-->
+ <add name="myListener" ... />
+ </listeners>
+ </source>
+ </sources>
+ </system.diagnostics>
+</configuration>
+```
+++ ## Networking <a id="direct-connection"></a>
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4. Previously updated : 06/11/2020 Last updated : 01/25/2022 ms.devlang: java
GoneException{error=null, resourceAddress='https://cdb-ms-prod-westus-fd4.docume
If you have a firewall running on your app machine, open port range 10,000 to 20,000, which is used by the direct mode. Also follow the [Connection limit on a host machine](#connection-limit-on-host).
+#### UnknownHostException
+
+UnknownHostException means that the Java framework can't resolve the DNS entry for the Azure Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry. If you have any custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure that it contains the right configuration for the DNS endpoint that the error claims can't be resolved.
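+
+As a quick check, you can test name resolution from the affected machine; the account name here is a placeholder:
+
+```console
+nslookup <your-account>.documents.azure.com
+```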
+ #### HTTP proxy If you use an HTTP proxy, make sure it can support the number of connections configured in the SDK `ConnectionPolicy`.
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-transfers.md
Title: Azure Enterprise transfers
-description: Describes Azure EA transfers
+ Title: Transfer Azure Enterprise enrollment accounts and subscriptions
+description: Describes how Azure Enterprise enrollment accounts and subscriptions are transferred.
Previously updated : 08/02/2021 Last updated : 01/24/2022
-# Azure Enterprise transfers
+# Transfer Azure Enterprise enrollment accounts and subscriptions
This article provides an overview of enterprise transfers.
This article provides an overview of enterprise transfers.
An account transfer moves an account owner from one enrollment to another. All related subscriptions under the account owner will move to the target enrollment. Use an account transfer when you have multiple active enrollments and only want to move selected account owners.
-This section is for informational purposes only as the action cannot be performed by an enterprise administrator. A support request is needed to transfer an enterprise account to a new enrollment.
+This section is for informational purposes only as the action can't be performed by an enterprise administrator. A support request is needed to transfer an enterprise account to a new enrollment.
Keep the following points in mind when you transfer an enterprise account to a new enrollment:
Keep the following points in mind when you transfer an enterprise account to a n
When you request an account transfer, provide the following information: - The number of the target enrollment, account name, and account owner email of account to transfer-- For the source enrollment, the enrollment number and account to transfer
+- The enrollment number and account to transfer for the source enrollment
Other points to keep in mind before an account transfer: -- Approval from an EA Administrator is required for the target and source enrollment-- If an account transfer doesn't meet your requirements, consider an enrollment transfer.-- The account transfer transfers all services and subscriptions related to the specific accounts.-- After the transfer is complete, the transferred account appears inactive under the source enrollment and appears active under the target enrollment.-- The account shows the end date corresponding to the effective transfer date on the source enrollment and as a start date on the target enrollment.-- Any usage occurred for the account before the effective transfer date remains under the source enrollment.
+- Approval from an EA Administrator is required for the target and source enrollment.
+- You should consider an enrollment transfer if an account transfer doesn't meet your requirements.
+- Your account transfer moves all services and subscriptions related to the specific accounts.
+- Your transferred account appears inactive under the source enrollment and appears active under the target enrollment when the transfer is complete.
+- Your account shows the end date corresponding to the effective transfer date on the source enrollment. The same date is the start date on the target enrollment.
+- Your account usage incurred before the effective transfer date remains under the source enrollment.
## Transfer enterprise enrollment to a new one
An enrollment transfer is considered when:
- An enrollment is in expired/extended status and a new agreement is negotiated. - You have multiple enrollments and want to combine all the accounts and billing under a single enrollment.
-This section is for informational purposes only as the action cannot be performed by an enterprise administrator. A support request is needed to transfer an enterprise enrollment to a new one, unless the enrollment qualifies for [Auto enrollment transfer](#auto-enrollment-transfer).
+This section is for informational purposes only as the action can't be performed by an enterprise administrator. A support request is needed to transfer an enterprise enrollment to a new one, unless the enrollment qualifies for [Auto enrollment transfer](#auto-enrollment-transfer).
When you request to transfer an entire enterprise enrollment to an enrollment, the following actions occur: - Usage transferred may take up to 72 hours to be reflected in the new enrollment.-- If DA or AO view charges were enabled on the transferred enrollment, they must be enabled on the new enrollment.-- If you are using API reports or Power BI, please generate a new API key under your new enrollment.
+- If department administrator (DA) or account owner (AO) view charges were enabled on the transferred enrollment, they must be enabled on the new enrollment.
+- If you're using API reports or Power BI, generate a new API key under your new enrollment.
- All Azure services, subscriptions, accounts, departments, and the entire enrollment structure, including all EA department administrators, transfer to a new target enrollment. - The enrollment status is set to _Transferred_. The transferred enrollment is available for historic usage reporting purposes only. - You can't add roles or subscriptions to a transferred enrollment. Transferred status prevents more usage against the enrollment. - Any remaining Azure Prepayment balance in the agreement is lost, including future terms.-- If the enrollment you're transferring from has RI purchases, the RI purchasing fee will remain in the source enrollment however all RI benefits will be transferred across for use in the new enrollment.
+- If the enrollment you're transferring from has reservation purchases, the reservation purchasing fee will remain in the source enrollment. However, all reservation benefits will be transferred across for use in the new enrollment.
- The marketplace one-time purchase fee and any monthly fixed fees already incurred on the old enrollment aren't transferred to the new enrollment. Consumption-based marketplace charges will be transferred. ### Effective transfer date
Other points to keep in mind before an enrollment transfer:
- The source enrollment status will be updated to transferred and will only be available for historic usage reporting purposes. - There's no downtime during an enrollment transfer. - Usage may take up to 24 - 48 hours to be reflected in the target enrollment.-- Cost view settings for Department Administrators or Account Owners don't carry over.
+- Cost view settings for department administrators or account owners don't carry over.
- If previously enabled, settings must be enabled for the target enrollment. - Any API keys used in the source enrollment must be regenerated for the target enrollment.-- If the source and destination enrollments are on different cloud instances, the transfer will fail. Azure Support can transfer only within the same cloud instance.
+- If the source and destination enrollments are on different cloud instances, the transfer will fail. Support personnel can transfer only within the same cloud instance.
- For reservations (reserved instances): - The enrollment or account transfer between different currencies affects monthly reservation purchases.
- - Whenever there's is a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. This is intentional and affects only the monthly reservation purchases.
  - Whenever there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment at the time of the next monthly payment for an individual reservation. This cancellation is intentional and affects only the monthly reservation purchases.
- You may have to repurchase the canceled monthly reservations from the source enrollment using the new enrollment in the local or new currency. ### Auto enrollment transfer
-You might see that an enrollment has the **Transferred** state, even if you haven't submitted a support ticket to request an enrollment transfer. The **Transferred** state results from the auto enrollment transfer process. In order for the auto enrollment transfer process to occur during the renewal phrase, there are a few items that must be included in the new agreement:
+You might see that an enrollment has the **Transferred** state, even if you haven't submitted a support ticket to request an enrollment transfer. The **Transferred** state results from the auto enrollment transfer process. In order for the auto enrollment transfer to occur during the renewal phase, there are a few items that must be included in the new agreement:
- Prior enrollment number (it must exist in EA portal) - Expiration date of the prior enrollment number is one day before the effective start date of the new agreement
You might see that an enrollment has the **Transferred** state, even if you have
If there's no missing usage data in the EA portal between the prior enrollment and the new enrollment, then you don't have to create a transfer support ticket.
-### Azure Prepayment
+### Prepayment isn't transferrable
-Azure Prepayment isn't transferrable between enrollments. Azure Prepayment balances are tied contractually to the enrollment where it was ordered. Azure Prepayment isn't transferred as part of the account or enrollment transfer process.
+Prepayment isn't transferrable between enrollments. Prepayment balances are tied contractually to the enrollment where it was ordered. Prepayment isn't transferred as part of the account or enrollment transfer process.
### No services affected for account and enrollment transfers
The Azure EA portal can transfer subscriptions from one account owner to another
## Subscription transfer effects
-When an Azure subscription is transferred to an account in the same Azure Active Directory tenant, then all users, groups, and service principals that had [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) to manage resources keep their access.
+When an Azure subscription is transferred to an account in the same Azure Active Directory tenant, all users, groups, and service principals that had Azure role-based access control (RBAC) to manage resources keep their access. For more information, see [Azure RBAC](../../role-based-access-control/overview.md).
-To view users with RBAC access to the subscription:
+To view users with Azure RBAC access to the subscription:
-1. In the Azure portal, open **Subscriptions**.
+1. Open **Subscriptions** in the Azure portal.
2. Select the subscription you want to view, and then select **Access control (IAM)**.
-3. Select **Role assignments**. The role assignments page lists all users who have RBAC access to the subscription.
+3. Select **Role assignments**. The role assignments page lists all users who have Azure RBAC access to the subscription.
-If the subscription is transferred to an account in a different Azure AD tenant, then all users, groups, and service principals that had [RBAC](../../role-based-access-control/overview.md) to manage resources _lose_ their access. Although RBAC access isn't present, access to the subscription might be available through security mechanisms, including:
+If the subscription is transferred to an account in a different Azure AD tenant, then all users, groups, and service principals that had an [Azure RBAC role](../../role-based-access-control/overview.md) to manage resources _lose_ their access. Although Azure RBAC access isn't present, access to the subscription might be available through security mechanisms, including:
- Management certificates that grant the user admin rights to subscription resources. For more information, see [Create and Upload a Management Certificate for Azure](../../cloud-services/cloud-services-certs-create.md). - Access keys for services like Storage. For more information, see [Azure storage account overview](../../storage/common/storage-account-overview.md). - Remote Access credentials for services like Azure Virtual Machines.
-If the recipient needs to restrict, access to their Azure resources, they should consider updating any secrets associated with the service. Most resources can be updated by using the following steps:
+If the recipient needs to restrict access to their Azure resources, they should consider updating any secrets associated with the service. Most resources can be updated by using the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. On the Hub menu, select **All resources**.
+2. Select **All resources** on the Hub menu.
3. Select the resource.
-4. On the resource page, select **Settings** to view and update existing secrets.
+4. On the resource page, select **Settings** to view and update existing secrets.
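+
+For example, here's a sketch of regenerating a storage account access key with the Azure CLI; the resource names are placeholders:
+
+```azurecli
+az storage account keys renew --resource-group <resource-group> --account-name <storage-account> --key primary
+```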
## Next steps -- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).
+- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).
data-catalog Data Catalog Dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-catalog/data-catalog-dsr.md
If you want to see a specific data source supported, suggest it (or voice your s
<td>Container</td> <td>Model</td> <td>
- <font size="2">
Protocol: mssql-mds <br>Authentication: {windows} <br>Address: <br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; url <br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; model <br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; version
-
</td> </tr> <tr>
If you want to see a specific data source supported, suggest it (or voice your s
<td>Table</td> <td>Entity</td> <td>
- <font size="2">
Protocol: mssql-mds <br>Authentication: {windows} <br>Address:
If you want to see a specific data source supported, suggest it (or voice your s
<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; model <br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; version <br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; entity
-
</td> </tr> <tr>
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance-sinks.md
Here is a video walk through of how to use data flows with exits, alter row, and
### Impact of error row handling to performance
-When you enable error row handling ("continue on error") in the sink transformation, the service will take an additional step before writing the compatible rows to your destination table. This additional step will have a small performance penalty that can be in the range of 5% added for this step with an additional small performance hit also added if you set the option to also with the incompatible rows to a log file.
+When you enable error row handling ("continue on error") in the sink transformation, the service will take an additional step before writing the compatible rows to your destination table. This additional step will have a small performance penalty that can be in the range of 5% added for this step with an additional small performance hit also added if you set the option to also write the incompatible rows to a log file.
### Disabling indexes using a SQL Script
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
Previously updated : 09/09/2021 Last updated : 01/20/2022 # Copy data to and from Azure Databricks Delta Lake using Azure Data Factory or Azure Synapse Analytics
The following sections provide details about properties that define entities spe
## Linked service properties
-The following properties are supported for an Azure Databricks Delta Lake linked service.
+This Azure Databricks Delta Lake connector supports the following authentication types. See the corresponding sections for details.
+
+- [Access token](#access-token)
+- [System-assigned managed identity authentication](#managed-identity)
+- [User-assigned managed identity authentication](#user-assigned-managed-identity-authentication)
+
+### Access token
+
+The following properties are supported for the Azure Databricks Delta Lake linked service:
| Property | Description | Required | | :- | :-- | :- |
The following properties are supported for an Azure Databricks Delta Lake linked
} ```
+### <a name="managed-identity"></a> System-assigned managed identity authentication
+
+To learn more about system-assigned managed identities for Azure resources, see [system-assigned managed identity for Azure resources](data-factory-service-identity.md#system-assigned-managed-identity).
+
+To use system-assigned managed identity authentication, follow these steps to grant permissions:
+
+1. [Retrieve the managed identity information](data-factory-service-identity.md#retrieve-managed-identity) by copying the value of the **managed identity object ID** generated along with your data factory or Synapse workspace.
+
+2. Grant the managed identity the correct permissions in Azure Databricks. In general, you must grant at least the **Contributor** role to your system-assigned managed identity in **Access control (IAM)** of Azure Databricks.
+
+The following properties are supported for the Azure Databricks Delta Lake linked service:
+
+| Property | Description | Required |
+| :- | :-- | :- |
+| type | The type property must be set to **AzureDatabricksDeltaLake**. | Yes |
+| domain | Specify the Azure Databricks workspace URL, e.g. `https://adb-xxxxxxxxx.xx.azuredatabricks.net`. | Yes |
+| clusterId | Specify the cluster ID of an existing cluster. It should be an already created Interactive Cluster. <br>You can find the Cluster ID of an Interactive Cluster on Databricks workspace -> Clusters -> Interactive Cluster Name -> Configuration -> Tags. [Learn more](/azure/databricks/clusters/configure#cluster-tags). | Yes |
+| workspaceResourceId | Specify the workspace resource ID of your Azure Databricks.| Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) that is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure integration runtime. | No |
+
+**Example:**
+```json
+{
+ "name": "AzureDatabricksDeltaLakeLinkedService",
+ "properties": {
+ "type": "AzureDatabricksDeltaLake",
+ "typeProperties": {
+ "domain": "https://adb-xxxxxxxxx.xx.azuredatabricks.net",
+ "clusterId": "<cluster id>",
+ "workspaceResourceId": "<workspace resource id>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+### User-assigned managed identity authentication
+
+To learn more about user-assigned managed identities for Azure resources, see [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity)
+
+To use user-assigned managed identity authentication, follow these steps:
+
+1. [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and grant permission in your Azure Databricks. In general, you must grant at least the **Contributor** role to your user-assigned managed identity in **Access control (IAM)** of Azure Databricks.
+
+2. Assign one or multiple user-assigned managed identities to your data factory or Synapse workspace, and [create credentials](credentials.md) for each user-assigned managed identity.
+
+The following properties are supported for the Azure Databricks Delta Lake linked service:
+
+| Property | Description | Required |
+| :- | :-- | :- |
+| type | The type property must be set to **AzureDatabricksDeltaLake**. | Yes |
+| domain | Specify the Azure Databricks workspace URL, e.g. `https://adb-xxxxxxxxx.xx.azuredatabricks.net`. | Yes |
+| clusterId | Specify the cluster ID of an existing cluster. It should be an already created Interactive Cluster. <br>You can find the Cluster ID of an Interactive Cluster on Databricks workspace -> Clusters -> Interactive Cluster Name -> Configuration -> Tags. [Learn more](/azure/databricks/clusters/configure#cluster-tags). | Yes |
+| credential | Specify the user-assigned managed identity as the credential object. | Yes |
+| workspaceResourceId | Specify the workspace resource ID of your Azure Databricks. | Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) that is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure integration runtime. | No |
+
+**Example:**
+
+```json
+{
+ "name": "AzureDatabricksDeltaLakeLinkedService",
+ "properties": {
+ "type": "AzureDatabricksDeltaLake",
+ "typeProperties": {
+ "domain": "https://adb-xxxxxxxxx.xx.azuredatabricks.net",
+ "clusterId": "<cluster id>",
+ "credential": {
+ "referenceName": "credential1",
+ "type": "CredentialReference"
+ },
+ "workspaceResourceId": "<workspace resource id>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-cloud-for-customer.md
Title: Copy data from/to SAP Cloud for Customer description: Learn how to copy data from SAP Cloud for Customer to supported sink data stores (or) from supported source data stores to SAP Cloud for Customer using an Azure Data Factory or Synapse Analytics pipeline. --++ Previously updated : 09/09/2021 Last updated : 01/25/2022
-# Copy data from SAP Cloud for Customer (C4C) using Azure Data Factory or Synapse Analytics
+# Copy data from or to SAP Cloud for Customer (C4C) using Azure Data Factory or Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory Data Flow Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-script.md
Data flow script (DFS) is the underlying metadata, similar to a coding language,
:::image type="content" source="media/data-flow/scriptbutton.png" alt-text="Script button":::
-For instance, `allowSchemaDrift: true,` in a source transformation tells the service to include all columns from the source dataset in the data flow even if they are not included in the schema projection.
+For instance, `allowSchemaDrift: true,` in a source transformation tells the service to include all columns from the source dataset in the data flow even if they aren't included in the schema projection.
## Use cases The DFS is automatically produced by the user interface. You can click the Script button to view and customize the script. You can also generate scripts outside of the ADF UI and then pass that into the PowerShell cmdlet. When debugging complex data flows, you may find it easier to scan the script code-behind instead of scanning the UI graph representation of your flows.
Here are a few example use cases:
- Complex expressions that are difficult to manage in the UI or are resulting in validation issues. - Debugging and better understanding various errors returned during execution.
-When you build a data flow script to use with PowerShell or an API, you must collapse the formatted text into a single line. You can keep tabs and newlines as escape characters. But the text must be formatted to fit inside a JSON property. There is a button on the script editor UI at the bottom that will format the script as a single line for you.
+When you build a data flow script to use with PowerShell or an API, you must collapse the formatted text into a single line. You can keep tabs and newlines as escape characters. But the text must be formatted to fit inside a JSON property. There's a button on the script editor UI at the bottom that will format the script as a single line for you.
:::image type="content" source="media/data-flow/copybutton.png" alt-text="Copy button":::
source1 derive(
) ~> derive1 ```
-And a sink with no schema would simply be:
+And a sink with no schema would be:
``` derive1 sink(allowSchemaDrift: true, validateSchema: false) ~> sink1
ValueDistAgg aggregate(numofunique = countIf(countunique==1),
``` ### Include all columns in an aggregate
-This is a generic aggregate pattern that demonstrates how you can keep the remaining columns in your output metadata when you are building aggregates. In this case, we use the ```first()``` function to choose the first value in every column whose name is not "movie". To use this, create an Aggregate transformation called DistinctRows and then paste this in your script over top of the existing DistinctRows aggregate script.
+This is a generic aggregate pattern that demonstrates how you can keep the remaining columns in your output metadata when you're building aggregates. In this case, we use the ```first()``` function to choose the first value in every column whose name isn't "movie". To use this, create an Aggregate transformation called DistinctRows and then paste it into your script over the top of the existing DistinctRows aggregate script.
``` aggregate(groupBy(movie),
aggregate(updates = countIf(isUpdate(), 1),
``` ### Distinct row using all columns
-This snippet will add a new Aggregate transformation to your data flow which will take all incoming columns, generate a hash that is used for grouping to eliminate duplicates, then provide the first occurrence of each duplicate as output. You do not need to explicitly name the columns, they will be automatically generated from your incoming data stream.
+This snippet will add a new Aggregate transformation to your data flow, which will take all incoming columns, generate a hash that is used for grouping to eliminate duplicates, then provide the first occurrence of each duplicate as output. You don't need to explicitly name the columns; they'll be automatically generated from your incoming data stream.
``` aggregate(groupBy(mycols = sha2(256,columns())),
aggregate(groupBy(mycols = sha2(256,columns())),
This is a snippet that you can paste into your data flow to generically check all of your columns for NULL values. This technique leverages schema drift to look through all columns in all rows and uses a Conditional Split to separate the rows with NULLs from the rows with no NULLs. ```
-split(contains(array(columns()),isNull(#item)),
+split(contains(array(toString(columns())),isNull(#item)),
disjoint: false) ~> LookForNULLs@(hasNULLs, noNULLs) ``` ### AutoMap schema drift with a select
-When you need to load an existing database schema from an unknown or dynamic set of incoming columns, you must map the right-side columns in the Sink transformation. This is only needed when you are loading an existing table. Add this snippet before your Sink to create a Select that auto-maps your columns. Leave your Sink mapping to auto-map.
+When you need to load an existing database schema from an unknown or dynamic set of incoming columns, you must map the right-side columns in the Sink transformation. This is only needed when you're loading an existing table. Add this snippet before your Sink to create a Select that auto-maps your columns. Leave your Sink mapping to auto-map.
``` select(mapColumn(
derive(each(match(type=='string'), $$ = 'string'),
``` ### Fill down
-Here is how to implement the common "Fill Down" problem with data sets when you want to replace NULL values with the value from the previous non-NULL value in the sequence. Note that this operation can have negative performance implications because you must create a synthetic window across your entire data set with a "dummy" category value. Additionally, you must sort by a value to create the proper data sequence to find the previous non-NULL value. This snippet below creates the synthetic category as "dummy" and sorts by a surrogate key. You can remove the surrogate key and use your own data-specific sort key. This code snippet assumes you've already added a Source transformation called ```source1```
+Here's how to implement the common "Fill Down" problem with data sets when you want to replace NULL values with the value from the previous non-NULL value in the sequence. Note that this operation can have negative performance implications because you must create a synthetic window across your entire data set with a "dummy" category value. Additionally, you must sort by a value to create the proper data sequence to find the previous non-NULL value. The snippet below creates the synthetic category as "dummy" and sorts by a surrogate key. You can remove the surrogate key and use your own data-specific sort key. This code snippet assumes you've already added a Source transformation called ```source1```.
``` source1 derive(dummy = 1) ~> DerivedColumn
aggregate(each(match(true()), $$ = countDistinct($$))) ~> KeyPattern
``` ### Compare previous or next row values
-This sample snippet demonstrates how the Window transformation can be used to compare column values from the current row context with column values from rows before and after the current row. In this example, a Derived Column is used to generate a dummy value to enable a window partition across the entire data set. A Surrogate Key transformation is used to assign a unique key value for each row. When you apply this pattern to your data transformations, you can remove the surrogate key if you are a column that you wish to order by and you can remove the derived column if you have columns to use to partition your data by.
+This sample snippet demonstrates how the Window transformation can be used to compare column values from the current row context with column values from rows before and after the current row. In this example, a Derived Column is used to generate a dummy value to enable a window partition across the entire data set. A Surrogate Key transformation is used to assign a unique key value for each row. When you apply this pattern to your data transformations, you can remove the surrogate key if you have a column that you wish to order by, and you can remove the derived column if you have columns to use to partition your data by.
``` source1 keyGenerate(output(sk as long),
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-functions.md
Keep and Remove Top, Keep Range (corresponding M functions,
| Table.CombineColumns | This is a common scenario that isn't directly supported but can be achieved by adding a new column that concatenates two given columns. For example, Table.AddColumn(RemoveEmailColumn, "Name", each [FirstName] & " " & [LastName]) | | Table.TransformColumnTypes | This is supported in most cases. The following scenarios are unsupported: transforming string to currency type, transforming string to time type, transforming string to Percentage type. | | Table.NestedJoin | Just doing a join will result in a validation error. The columns must be expanded for it to work. |
-| Table.Distinct | Remove duplicate rows isn't supported. |
| Table.RemoveLastN | Remove bottom rows isn't supported. | | Table.RowCount | Not supported, but can be achieved by adding a custom column containing the value 1, then aggregating that column with List.Sum. Table.Group is supported. |
-| Row level error handling | Row level error handling is currently not supported. For example, to filter out non-numeric values from a column, one approach would be to transform the text column to a number. Every cell which fails to transform will be in an error state and need to be filtered. This scenario isn't possible in scaled-out M. |
+| Row level error handling | Row level error handling is currently not supported. For example, to filter out non-numeric values from a column, one approach would be to transform the text column to a number. Every cell that fails to transform will be in an error state and will need to be filtered. This scenario isn't possible in scaled-out M. |
| Table.Transpose | Not supported | ## M script workarounds
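
As a sketch of the `Table.RowCount` workaround described in the table above, written in M; the `Source` step name is assumed:

```
let
    // Add a constant column so that the row count can be computed as a sum.
    AddOne = Table.AddColumn(Source, "One", each 1),
    // Group over the entire table and sum the constant column.
    Counted = Table.Group(AddOne, {}, {{"RowCount", each List.Sum([One]), type number}})
in
    Counted
```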
This option is accessible from the Extract option in the ribbon
![Power Query Pivot Selector](media/wrangling-data-flow/power-query-pivot-2.png)
-* When you click OK, you will see the data in the editor updated with the pivoted values
-* You will also see a warning message that the transformation may be unsupported
+* When you click OK, you'll see the data in the editor updated with the pivoted values
+* You'll also see a warning message that the transformation may be unsupported
* To fix this warning, expand the pivoted list manually using the PQ editor
* Select Advanced Editor option from the ribbon
* Expand the list of pivoted values manually
To set the date/time format when using Power Query in ADF, follow these steps:
![Power Query Change Type](media/data-flow/power-query-date-2.png)

1. Select the column in the Power Query UI and choose Change Type > Date/Time
-2. You will see a warning message
+2. You'll see a warning message
3. Open Advanced Editor and change ```TransformColumnTypes``` to ```TransformColumns```. Specify the format and culture based on the input data.

   ![Power Query Editor](media/data-flow/power-query-date-3.png)
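A hedged sketch of that edit, assuming a text column named `Account_Date` (a hypothetical name) and an input format of `yyyy-MM-dd HH:mm:ss`:

```powerquery-m
// Before (flagged as unsupported for scaled-out execution):
//   Table.TransformColumnTypes(Source, {{"Account_Date", type datetime}})
// After: parse the text explicitly with a format and culture
Table.TransformColumns(Source, {{"Account_Date", each DateTime.FromText(_, [Format = "yyyy-MM-dd HH:mm:ss", Culture = "en-US"])}})
```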
databox-online Azure Stack Edge Gpu Deploy Virtual Machine High Performance Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md
In addition to the above prerequisites that are used for VM creation, you'll als
1. Identify all the VMs running on your device. This includes Kubernetes VMs, or any VM workloads that you may have deployed.

    ```powershell
- get-vm -force
+ get-vm
    ```

1. Stop all the running VMs.
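A hedged sketch of that step, assuming the device's Hyper-V-based PowerShell interface exposes the standard VM cmdlets:

```powershell
# Stop every running VM; -Force suppresses the confirmation prompt
Get-VM | Where-Object { $_.State -eq 'Running' } | Stop-VM -Force
```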
databox Data Box Deploy Export Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-export-picked-up.md
Previously updated : 10/29/2021 Last updated : 01/25/2022 # Customer intent: As an IT admin, I need to be able to return Data Box to upload on-premises data from my server onto Azure.
databox Data Box Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-picked-up.md
Previously updated : 11/16/2021 Last updated : 01/25/2022 # Customer intent: As an IT admin, I need to be able to return a Data Box to upload on-premises data from my server onto Azure.
databox Data Box Disk Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-deploy-picked-up.md
Previously updated : 11/15/2021 Last updated : 01/25/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
Azure datacenters in Australia have an additional security notification. All the
2. Email Quantium solution using the following email template.

    ```
- To: Customerservice.JP@quantiumsolutions.com
+ To: azure.qsjp@quantiumsolutions.com
    Subject: Pickup request for Microsoft Azure Data Box Disk|Job Name:
    Body:
    - Japan Post Yu-Pack tracking number (reference number):
Azure datacenters in Australia have an additional security notification. All the
If needed, you can contact Quantium Solution Support (Japanese language) at the following information:

-- Email:[Customerservice.JP@quantiumsolutions.com](mailto:Customerservice.JP@quantiumsolutions.com)
+- Email:[azure.qsjp@quantiumsolutions.com](mailto:azure.qsjp@quantiumsolutions.com)
- Telephone:03-5755-0150

### [Korea](#tab/in-korea)
databox Data Box File Acls Preservation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-file-acls-preservation.md
Previously updated : 01/20/2022 Last updated : 01/21/2022
Read-only attributes on directories aren't transferred.
#### ACLs
-All the ACLs for directories and files that you copy to your Data Box over SMB are copied and transferred. Transfers include both discretionary ACLs (DACLs) and system ACLs (SACLs). For Linux, only Windows NT ACLs are transferred.
-ACLs aren't transferred during data copies over Network File System (NTS) and when you use the data copy service to transfer your data. The data copy service reads data directly from your shares and can't read ACLs.
-Even if your data copy tool does not copy ACLs, the default ACLs on directories and files are transferred to Azure Files. The default ACLs have permissions for the built-in Administrator account, the SYSTEM account, and the SMB share user account that was used to mount and copy data in the Data Box.
The ACLs contain security descriptors with the following properties: ACLs, Owner, Group, SACL.
-Transfer of ACLs is enabled by default. You might want to disable this setting in the local web UI on your Data Box. For more information, see [Use the local web UI to administer your Data Box and Data Box Heavy](./data-box-local-web-ui-admin.md).
+Depending on the transfer method used and whether you're using a Windows or Linux client, some or all discretionary and default access control lists (ACLs) on files and folders may be transferred during the data copy to Azure Files.
+
+Transfer of ACLs is enabled by default. You might want to disable this setting in the local web UI on your Data Box. For more information, see [Use the local web UI to administer your Data Box and Data Box Heavy](./data-box-local-web-ui-admin.md).
+
> [!NOTE] > Files with ACLs containing conditional access control entry (ACE) strings are not copied. This is a known issue. To work around this, copy these files to the Azure Files share manually by mounting the share and then using a copy tool that supports copying ACLs.
+**ACLs transfer over SMB**
+
+During an [SMB file transfer](./data-box-deploy-copy-data.md), the following ACLs are transferred:
+
+- Discretionary ACLs (DACLs) and system ACLs (SACLs) for directories and files that you copy to your Data Box
+- If you use a Linux client, only Windows NT ACLs are transferred.
+
+ACLs aren't transferred when you [copy data over NFS](./data-box-deploy-copy-data-via-nfs.md) or [use the data copy service](data-box-deploy-copy-data-via-copy-service.md). The data copy service reads data directly from your shares and can't read ACLs.
+
+**Default ACLs transfer**
+
+Even if your data copy tool doesn't copy ACLs, the default ACLs on directories and files are transferred to Azure Files when you use a Windows client. The default ACLs aren't transferred when you use a Linux client.
+
+The following default ACLs are transferred:
+
+- Account permissions:
+ - Built-in Administrator account
+ - SYSTEM account
+ - SMB share user account used to mount and copy data in the Data Box
+
+- Security descriptors with these properties: DACL, Owner, Group, SACL
+ ## Copying data and metadata
-To transfer the ACLs, timestamps, and attributes for your data, use the following procedures to copy data into the Data Box.
+To transfer the ACLs, timestamps, and attributes for your data, use the following procedures to copy data into the Data Box.
### Windows data copy tool
Here are some of the common scenarios you'll use when copying data using `roboco
For more information, see [Using robocopy commands](/windows-server/administration/windows-commands/robocopy).
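For reference, a hedged example of a `robocopy` invocation that carries ACLs and timestamps along with the data (the source path, share path, and logging options are illustrative, not prescriptive):

```console
robocopy C:\LocalData \\<DataBoxIP>\<ShareName>\Data /E /COPYALL /DCOPY:DAT /R:3 /W:60 /LOG:C:\robocopy.log
```

`/COPYALL` copies the file data plus attributes, timestamps, the security descriptor (ACLs), owner, and auditing information.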
-### Linux data copy tool
+### Linux data copy tools
-Transferring metadata in Linux is a two-step process. First, you copy the source data using a tool such as `rsync`, which does not copy metadata. After you copy the data, you can copy the metadata using a tool such as `smbcacls` or `cifsacl`.
+Transferring metadata in Linux is a two-step process. First, you copy the source data using a tool such as `rsync`, which does not copy metadata. After you copy the data, you can copy the metadata using a tool such as `smbcacls` or `cifsacl`.
The following sample commands do the first step, copying the data using `rsync`.
databox Data Box Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-limits.md
Previously updated : 01/21/2022 Last updated : 01/25/2022 # Azure Data Box limits
Consider these limits as you deploy and operate your Microsoft Azure Data Box. T
- Data Box can store a maximum of 500 million files for both import and export.
- Data Box supports a maximum of 512 containers or shares in the cloud. The top-level directories within the user share become containers or Azure file shares in the cloud.
-- Data Box usage capacity may be less than 80 TB because of ReFS metadata space consumption.
+- Data Box usage capacity may be less than 80 TiB because of ReFS metadata space consumption.
- Data Box supports a maximum of 10 client connections at a time on an NFS share.

## Azure storage limits
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/tutorial-security-policy.md
Title: Working with security policies | Microsoft Docs description: Learn how to work with security policies in Microsoft Defender for Cloud. Previously updated : 11/16/2021 Last updated : 01/25/2022 # Manage security policies
To view your security policies in Defender for Cloud:
:::image type="content" source="./media/tutorial-security-policy/security-policy-page.png" alt-text="Defender for Cloud's security policy page" lightbox="./media/tutorial-security-policy/security-policy-page.png"::: > [!NOTE]
- > If there is a label "MG Inherited" alongside your default policy, it means that the policy has been assigned to a management group and inherited by the subscription you're viewing.
+ > If there is a label "MG Inherited" alongside your default initiative, it means that the initiative has been assigned to a management group and inherited by the subscription you're viewing.
1. Choose from the available options on this page:
For more information about recommendations, see [Managing security recommendatio
1. Open the **Security policy** page.
-1. From the **Default policy**, **Industry & regulatory standards**, or **Your custom initiatives** sections, select the relevant initiative containing the policy you want to disable.
+1. From the **Default initiative** or **Your custom initiatives** sections, select the relevant initiative containing the policy you want to disable.
1. Open the **Parameters** section and search for the policy that invokes the recommendation that you want to disable.
To enable a disabled policy and ensure it's assessed for your resources:
1. Open the **Security policy** page.
-1. From the **Default policy**, **Industry & regulatory standards**, or **Your custom initiatives** sections, select the relevant initiative with the policy you want to enable.
+1. From the **Default initiative**, **Industry & regulatory standards**, or **Your custom initiatives** sections, select the relevant initiative with the policy you want to enable.
1. Open the **Parameters** section and search for the policy that invokes the recommendation that you want to enable.
digital-twins How To Create App Registration Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-cli.md
description: Learn how to create an Azure AD app registration, as an authentication option for client apps, using the CLI. Previously updated : 1/5/2022 Last updated : 1/24/2022
This app registration is where you configure access permissions to the [Azure Di
>[!TIP]
> You may prefer to set up a new app registration every time you need one, *or* to do this only once, establishing a single app registration that will be shared among all scenarios that require it.

## Create manifest
The static value `0b07f429-9f4b-4714-9392-cc5e8e80c8b0` is the resource ID for t
Save the finished file.
-### Upload to Cloud Shell
+### Cloud Shell users: Upload manifest
-Next, upload the manifest file you created to the Cloud Shell, so that you can access it in Cloud Shell commands when configuring the app registration.
+If you're using Cloud Shell for this tutorial, you'll need to upload the manifest file you created to the Cloud Shell, so that you can access it in Cloud Shell commands when configuring the app registration. If you're using a local installation of the Azure CLI, you can skip this step.
To upload the file, go to the Cloud Shell window in your browser. Select the "Upload/Download files" icon and choose "Upload".
Navigate to the **manifest.json** file on your machine and select "Open." Doing
## Create the registration
-In this section, you'll run a Cloud Shell command to create an app registration with the following settings:
+In this section, you'll run a CLI command to create an app registration with the following settings:
* Name of your choice
* Available only to accounts in the default directory (single tenant)
* A web reply URL of `http://localhost`
* Read/write permissions to the Azure Digital Twins APIs
-Run the following command to create the registration:
+Run the following command to create the registration. If you're using Cloud Shell, the path to the manifest.json file is `@manifest.json`.
```azurecli-interactive
-az ad app create --display-name <app-registration-name> --available-to-other-tenants false --reply-urls http://localhost --native-app --required-resource-accesses "@manifest.json"
+az ad app create --display-name <app-registration-name> --available-to-other-tenants false --reply-urls http://localhost --native-app --required-resource-accesses "<path-to-manifest.json>"
```

The output of the command is information about the app registration you've created.
You can confirm that the Azure Digital Twins permissions were granted by looking
:::image type="content" source="media/how-to-create-app-registration/cli-required-resource-access.png" alt-text="Screenshot of Cloud Shell output of the app registration creation command. The items under 'requiredResourceAccess' are highlighted: there's a 'resourceAppId' value of 0b07f429-9f4b-4714-9392-cc5e8e80c8b0, and a 'resourceAccess > id' value of 4589bd03-58cb-4e6c-b17f-b580e39652f8.":::
-You can also verify the app registration was successfully created by using the Azure portal. For portal instructions, see [Verify success (portal)](how-to-create-app-registration-portal.md#verify-success).
+You can also verify the app registration was successfully created with the necessary API permissions by using the Azure portal. For portal instructions, see [Verify API permissions (portal)](how-to-create-app-registration-portal.md#verify-api-permissions).
## Collect important values
The output of this command is information about the client secret that you've cr
>[!IMPORTANT]
>Make sure to copy the value now and store it in a safe place, as it cannot be retrieved again. If you can't find the value later, you'll have to create a new secret.
+## Create Azure Digital Twins role assignment
+
+In this section, you'll create a role assignment for the app registration to set its permissions on the Azure Digital Twins instance. This role will determine what permissions the app registration holds on the instance, so you should select the role that matches the appropriate level of permission for your situation. One possible role is [Azure Digital Twins Data Owner](../role-based-access-control/built-in-roles.md#azure-digital-twins-data-owner). For a full list of roles and their descriptions, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
+
+Use the following command to assign the role (must be run by a user with [sufficient permissions](how-to-set-up-instance-cli.md#prerequisites-permission-requirements) in the Azure subscription). The command requires you to pass in the name of the app registration.
+
+```azurecli-interactive
+az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<name-of-app-registration>" --role "<appropriate-role-name>"
+```
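For example, a hypothetical instance named *myDigitalTwinsInstance* and a registration named *myAppRegistration* might be granted the data owner role like this:

```azurecli-interactive
az dt role-assignment create --dt-name myDigitalTwinsInstance --assignee "myAppRegistration" --role "Azure Digital Twins Data Owner"
```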
+
+This command outputs information about the role assignment that's been created for the app registration.
+
+### Verify role assignment
+
+To further verify the role assignment, you can look for it in the Azure portal. Follow the instructions in [Verify role assignment (portal)](how-to-create-app-registration-portal.md#verify-role-assignment).
+ ## Other possible steps for your organization It's possible that your organization requires more actions from subscription Owners/administrators to successfully set up an app registration. The steps required may vary depending on your organization's specific settings.
digital-twins How To Create App Registration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-app-registration-portal.md
description: Learn how to create an Azure AD app registration, as an authentication option for client apps, using the Azure portal. Previously updated : 1/5/2022 Last updated : 1/24/2022
To set up a **client secret** for your app registration, start on your app regis
>[!IMPORTANT]
>Make sure to copy the values now and store them in a safe place, as they can't be retrieved again. If you can't find them later, you'll have to create a new secret.
-## Provide Azure Digital Twins API permission
+## Provide Azure Digital Twins permissions
-Next, configure the app registration you've created with baseline permissions to the Azure Digital Twins APIs.
+Next, configure the app registration you've created with permissions to access Azure Digital Twins. First, **create a role assignment** for the app registration within the Azure Digital Twins instance. Then, **provide API permissions** for the app to read and write to the Azure Digital Twins APIs.
+
+### Create role assignment
+
+In this section, you'll create a role assignment for the app registration on the Azure Digital Twins instance. This role will determine what permissions the app registration holds on the instance, so you should select the role that matches the appropriate level of permission for your situation. One possible role is [Azure Digital Twins Data Owner](../role-based-access-control/built-in-roles.md#azure-digital-twins-data-owner). For a full list of roles and their descriptions, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
+
+1. First, open the page for your Azure Digital Twins instance in the Azure portal.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+
+1. Assign the appropriate role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Select as appropriate |
+ | Assign access to | User, group, or service principal |
+ | Members | Search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
+
+ ![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+#### Verify role assignment
+
+You can view the role assignment you've set up under *Access control (IAM) > Role assignments*.
+The app registration should show up in the list along with the role you assigned to it.
+
+### Provide API permissions
+
+In this section, you'll grant your app baseline read/write permissions to the Azure Digital Twins APIs.
From the portal page for your app registration, select *API permissions* from the menu. On the following permissions page, select the *+ Add a permission* button.
Next, you'll select which permissions to grant for these APIs. Expand the **Read
Select *Add permissions* when finished.
-### Verify success
+#### Verify API permissions
On the *API permissions* page, verify that there's now an entry for Azure Digital Twins reflecting Read/Write permissions:
These values are shown in the screenshot below:
:::image type="content" source="media/how-to-create-app-registration/verify-manifest.png" alt-text="Screenshot of the manifest for the Azure AD app registration in the Azure portal.":::
-If these values are missing, retry the steps in the [section for adding the API permission](#provide-azure-digital-twins-api-permission).
+If these values are missing, retry the steps in the [section for adding the API permission](#provide-api-permissions).
## Other possible steps for your organization
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-cli.md
This article covers the steps to **set up a new Azure Digital Twins instance**,
## Create the Azure Digital Twins instance
-In this section, you'll **create a new instance of Azure Digital Twins** using the CLI command. You'll need to provide:
-* A resource group where the instance will be deployed. If you don't already have an existing resource group in mind, you can create one now with this command:
+In this section, you will **create a new instance of Azure Digital Twins** using the CLI command. You will need to provide:
+* A resource group where the instance will be deployed. If you do not already have an existing resource group in mind, you can create one now with this command:
    ```azurecli-interactive
    az group create --location <region> --name <name-for-your-resource-group>
    ```

* A region for the deployment. To see what regions support Azure Digital Twins, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=digital-twins).
-* A name for your instance. If your subscription has another Azure Digital Twins instance in the region that's
+* A name for your instance. If your subscription has another Azure Digital Twins instance in the region that is
already using the specified name, you'll be asked to pick a different name. Use these values in the following [az dt command](/cli/azure/dt) to create the instance:
az dt create --dt-name <name-for-your-Azure-Digital-Twins-instance> --resource-g
### Verify success and collect important values
-If the instance was created successfully, the result in the CLI looks something like this, outputting information about the resource you've created:
+If the instance was created successfully, the result in the CLI looks something like this, outputting information about the resource you have created:
:::image type="content" source="media/how-to-set-up-instance/cloud-shell/create-instance.png" alt-text="Screenshot of the Cloud Shell window with successful creation of a resource group and Azure Digital Twins instance in the Azure portal." lightbox="media/how-to-set-up-instance/cloud-shell/create-instance.png":::
Note the Azure Digital Twins instance's **hostName**, **name**, and **resourceGr
> [!TIP]
> You can see these properties, along with all the properties of your instance, at any time by running `az dt show --dt-name <your-Azure-Digital-Twins-instance>`.
-You now have an Azure Digital Twins instance ready to go. Next, you'll give the appropriate Azure user permissions to manage it.
+You now have an Azure Digital Twins instance ready to go. Next, you will give the appropriate Azure user permissions to manage it.
## Set up user access permissions
You now have an Azure Digital Twins instance ready to go. Next, you'll give the
### Assign the role
-To give a user permissions to manage an Azure Digital Twins instance, you must assign them the **Azure Digital Twins Data Owner** role within the instance.
+To give a user permission to manage an Azure Digital Twins instance, you must assign them the **Azure Digital Twins Data Owner** role within the instance.
Use the following command to assign the role (must be run by a user with [sufficient permissions](#prerequisites-permission-requirements) in the Azure subscription). The command requires you to pass in the *user principal name* on the Azure AD account for the user that should be assigned the role. In most cases, this value will match the user's email on the Azure AD account.
Use the following command to assign the role (must be run by a user with [suffic
az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<Azure-AD-user-principal-name-of-user-to-assign>" --role "Azure Digital Twins Data Owner"
```
-The result of this command is outputted information about the role assignment that's been created.
+This command outputs information about the role assignment that has been created for the user.
> [!NOTE] > If this command returns an error saying that the CLI **cannot find user or service principal in graph database**:
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
You can also assign the **Azure Digital Twins Data Owner** role using the access
| | |
| Role | [Azure Digital Twins Data Owner](../role-based-access-control/built-in-roles.md#azure-digital-twins-data-owner) |
| Assign access to | User, group, or service principal |
- | Members | Search for the name or email address of the user to assign. |
+ | Members | Search for the name or email address of the user to assign |
![Add role assignment page](../../includes/role-based-access-control/media/add-role-assignment-page.png)
education-hub About Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/azure-dev-tools-teaching/about-program.md
# What is Azure Dev Tools for Teaching?
-Microsoft Azure Dev Tools for Teaching puts professional developer tools, software, and services
-from Microsoft in the hands of faculty and students with low-cost subscriptions. Students
-receive developer tools at no cost--everything needed to create apps, games, and websites--so
-they can chase their dreams, create the next big breakthrough in technology, or get a head
-start on their career.
+Microsoft Azure Dev Tools for Teaching puts professional developer tools,
+software, and services from Microsoft in the hands of faculty and students
+with plans that come as a part of various Academic Volume Licensing Agreements.
+Students receive developer tools at no cost--everything needed to create apps, games,
+and websites--so they can chase their dreams, create the next big breakthrough in technology,
+or get a head start on their career.
As an administrator of the Azure Dev Tools for Teaching subscription, you'll be able to:
need to maintain a WebStore or an internal site.
Azure cloud platform through the same online portal.

## Program details
-We designed Azure Dev Tools for Teaching for STEM-focused instruction. Any course curriculum
-focused on science, technology, engineering, or mathematics is eligible to
-use Azure Dev Tools for Teaching to help professors teach and students learn more effectively.
+We designed Azure Dev Tools for Teaching to help professors teach and students learn more effectively.
Your Microsoft Azure Dev Tools for Teaching subscription provides you with access to certain software developer tools. These tools are available to download for free. If you're a faculty member enrolled
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-features.md
Title: Overview of features - Azure Event Hubs | Microsoft Docs description: This article provides details about features and terminology of Azure Event Hubs. Previously updated : 08/27/2021 Last updated : 01/24/2022 # Features and terminology in Azure Event Hubs
lakes or long-term archives for event sourcing.
> The deeper the history of an event stream gets, the more you will need auxiliary indexes to find a particular historical slice of a given stream. Inspection of event payloads and indexing aren't within the feature scope of Event Hubs (or Apache Kafka). Databases and specialized analytics stores and engines such as [Azure Data Lake Store](../data-lake-store/data-lake-store-overview.md), [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) and [Azure Synapse](../synapse-analytics/overview-what-is.md) are therefore far better suited for storing historic events. > > [Event Hubs Capture](event-hubs-capture-overview.md) integrates directly with Azure Blob Storage and Azure Data Lake Storage and, through that integration, also enables [flowing events directly into Azure Synapse](store-captured-data-data-warehouse.md).
->
-> If you want to use the [Event Sourcing](/azure/architecture/patterns/event-sourcing) pattern for your application, you should align your snapshot strategy with the retention limits of Event Hubs. Do not aim to rebuild materialized views from raw events starting at the beginning of time. You would surely come to regret such a strategy once your application is in production for a while and is well used, and your projection builder has to churn through years of change events while trying to catch up to the latest and ongoing changes.
+ ### Publisher policy
firewall-manager Migrate To Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/migrate-to-policy.md
If ($azfw.NetworkRuleCollections.Count -gt 0) {
} elseif($rule.DestinationIpGroups) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
}
} elseif($rule.SourceIpGroups){
    If($rule.DestinationAddresses) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
} elseif($rule.DestinationIpGroups) {
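Condensed, the corrected pairing logic looks like this (a sketch assuming `$rule` is a network rule read from an existing Azure Firewall rule collection):

```powershell
# A classic rule stores either plain addresses or IP groups on each side;
# pair each parameter with the property the rule actually carries.
if ($rule.SourceAddresses -and $rule.DestinationIpGroups) {
    $netRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name `
        -SourceAddress $rule.SourceAddresses -DestinationIpGroup $rule.DestinationIpGroups `
        -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
}
elseif ($rule.SourceIpGroups -and $rule.DestinationAddresses) {
    $netRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name `
        -SourceIpGroup $rule.SourceIpGroups -DestinationAddress $rule.DestinationAddresses `
        -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
}
```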
frontdoor Front Door How To Redirect Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-how-to-redirect-https.md
You can use the Azure portal to [create a Front Door](quickstart-create-front-do
1. Select **Review + create** and then **Create**, to create your Front Door profile. Go to the resource once created.
+ > [!NOTE]
+ > The creation of this redirect rule will incur a small charge.
+ ## Next steps

- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Routing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-routing-architecture.md
When Azure Front Door receives your client requests, it will do one of two thing
Traffic routed to the Azure Front Door environments uses [Anycast](https://en.wikipedia.org/wiki/Anycast) for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic, which allows for user requests to reach the closest environment in the fewest network hops. This architecture offers better round-trip times for end users by maximizing the benefits of Split TCP. Front Door organizes its environments into primary and fallback "rings". The outer ring has environments that are closer to users, offering lower latencies. The inner ring has environments that can handle the failover for the outer ring environment in case any issues happen. The outer ring is the preferred target for all traffic and the inner ring is to handle traffic overflow from the outer ring. Each frontend host or domain served by Front Door gets assigned a primary VIP (Virtual Internet Protocol addresses), which gets announced by environments in both the inner and outer ring. A fallback VIP is only announced by environments in the inner ring.
-This architecture ensures that requests from your end users always reach the closest Front Door environment. Even if the preferred Front Door environment is unhealthy all traffic automatically moves to the next closest environment.
+This architecture ensures that requests from your end users always reach the closest Front Door environment. If the preferred Front Door environment is unhealthy, all traffic automatically moves to the next closest environment.
## <a name = "splittcp"></a>Connecting to Front Door environment (Split TCP)
governance Built In Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-packages.md
Each row represents a package used by a built-in policy definition.
- **Definition**: Links to the policy definition in the Azure portal.
- **Configuration**: Links to the `.mof` file in the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy) containing the configuration that is used to audit and/or remediate machines.
-- **Required modules**: Links to the [PowerShell Desired State Configuration (DSC)](/powershell/dsc/overview/overview)
+- **Required modules**: Links to the [PowerShell Desired State Configuration (DSC)](https://docs.microsoft.com/powershell/dsc/overview?view=dsc-1.1)
modules used by each configuration. The resource modules contain the script logic used to evaluate each setting in the configuration.
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
In order for a client application to access Azure API for FHIR, it must present
There are many ways to obtain a token, but the Azure API for FHIR doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
-Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps below:
+Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps:
![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png)
-1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below).
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
1. The client makes a request to the Azure API for FHIR, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
1. The Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
-It's important to note that the Azure API for FHIR isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. The Azure API for FHIR simply validates that the token is signed correctly (it is authentic) and that it has appropriate claims.
+It's important to note that the Azure API for FHIR isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. The Azure API for FHIR simply validates that the token is signed correctly (it's authentic) and that it has appropriate claims.
## Structure of an access token
Development of FHIR applications often involves debugging access issues. If a cl
FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token) (JWT, sometimes pronounced "jot"). It consists of three parts:

**Part 1**: A header, which could look like:
- ```json
+```json
{ "alg": "HS256", "typ": "JWT" }
- ```
+```
**Part 2**: The payload (the claims), for example:
- ```json
+```json
{ "oid": "123", "iss": "https://issuerurl",
FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/J
"admin" ] }
- ```
+```
**Part 3**: A signature, which is calculated by concatenating the Base64 encoded contents of the header and the payload and calculating a cryptographic hash of them based on the algorithm (`alg`) specified in the header. A server will be able to obtain public keys from the identity provider and validate that this token was issued by a specific identity provider and it hasn't been tampered with.
The token can be decoded and inspected with tools such as [https://jwt.ms](https
## Obtaining an access token
-As mentioned above, there are several ways to obtain a token from Azure AD. They are described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
+As mentioned, there are several ways to obtain a token from Azure AD. They're described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
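If you just need a token for experimentation, one hedged option is the Azure CLI (assuming you're already signed in; substitute your own FHIR endpoint for the hypothetical one shown):

```azurecli-interactive
az account get-access-token --resource=https://myfhirservice.azurehealthcareapis.com
```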
Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
-When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you are using in your client application.
+When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you're using in your client application.
The pertinent sections of the Azure AD documentation are:
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
# Azure API for FHIR access token validation
-How Azure API for FHIR validates the access token will depend on implementation and configuration. In this article, we will walk through the validation steps, which can be helpful when troubleshooting access issues.
+How Azure API for FHIR validates the access token will depend on implementation and configuration. In this article, we'll walk through the validation steps, which can be helpful when troubleshooting access issues.
## Validate token has no issues with identity provider
GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configurati
where `<TENANT-ID>` is the specific Azure AD tenant (either a tenant ID or a domain name).
-Azure AD will return a document like the one below to the FHIR server.
+Azure AD will return a document like this one to the FHIR server.
```json
{
Azure AD will return a document like the one below to the FHIR server.
"rbac_url": "https://pas.windows.net" } ```
-The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the encryption keys needed to validate the token signature and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this server. The FHIR server can use this to validate that it is receiving an authentic token.
+The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the encryption keys needed to validate the token signature and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this server. The FHIR server can use this to validate that it's receiving an authentic token.
## Validate claims of the token
When using the OSS Microsoft FHIR server for Azure, the server will validate:
Consult details on how to [define roles on the FHIR server](https://github.com/microsoft/fhir-server/blob/master/docs/Roles.md).
-A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the Azure API for FHIR and the FHIR server for Azure do not validate token scopes.
+A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the Azure API for FHIR and the FHIR server for Azure don't validate token scopes.
## Next steps

Now that you know how to walk through token validation, you can complete the tutorial to create a JavaScript application and read FHIR data.
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
Outside of defining search parameters, the other update you need to make to pass
### Sample rest file
-To assist with creation of these search parameters and profiles, we have a [sample http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB.http) that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone.
+To assist with creation of these search parameters and profiles, we have a [sample http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB.http) that includes all the steps outlined in this tutorial in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone.
:::image type="content" source="media/cms-tutorials/capability-test-script-execution-results.png" alt-text="Capability test script execution results."::: ## Touchstone read test
-After testing the capabilities statement, we will test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) in Azure API for FHIR against the C4BB IG. This test is testing conformance against the eight profiles you loaded in the first test. You will need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database, but we also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
+After testing the capabilities statement, we'll test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) in Azure API for FHIR against the C4BB IG. This test is testing conformance against the eight profiles you loaded in the first test. You'll need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database, but we also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
:::image type="content" source="media/cms-tutorials/test-execution-results-touchstone.png" alt-text="Touchstone read test execution results.":::
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
# Patient-everything in FHIR
-The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a patient with access to their entire record or, for a provider or other user, to perform a bulk data download. This operation can be useful to give patients' access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the Azure API for FHIR, Patient-everything is available to pull data related to a specific patient.
+The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the Azure API for FHIR, Patient-everything is available to pull data related to a specific patient.
## Use Patient-everything

To call Patient-everything, use the following command:
To call Patient-everything, use the following command:
```json
GET {FHIRURL}/Patient/{ID}/$everything
```
+
+> [!Note]
+> You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md).
+ The Azure API for FHIR validates that it can find the patient matching the provided patient ID. If a result is found, the response will be a bundle of type `searchset` with the following information: * [Patient resource](https://www.hl7.org/fhir/patient.html)
-* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that are not of [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content) or if the `seealso` link references a `RelatedPerson`.
+* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content) or if the `seealso` link references a `RelatedPerson`.
* If there are `seealso` link reference(s) to other patient(s), the results will include Patient-everything operation against the `seealso` patient(s) listed.
* Resources in the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html)
-* [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource. This resource is limited to 100 devices. If the patient has more than 100 devices linked to them, only 100 will be returned.
+* [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource.
+> [!Note]
+> If the patient has more than 100 devices linked to them, only 100 will be returned.
## Patient-everything parameters

The Azure API for FHIR supports the following query parameters. All of these parameters are optional:
The Azure API for FHIR supports the following query parameters. All of these par
| end | Specifying the end date will pull in resources where their clinical date is before the specified end date. If no end date is provided, all records after the start date are in scope. |

> [!Note]
-> You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md).
+> This implementation of Patient-everything does not support the `_count` parameter.
## Processing patient links

On a patient resource, there's an element called link, which links a patient to other patients or related persons. These linked patients help give a holistic view of the original patient. The link reference can be used when a patient is replacing another patient or when two patient resources have complementary information. One use case for links is when an ADT 38 or 39 HL7v2 message comes in. The ADT38/39 describes an update to a patient. This update can be stored as a reference between two patients in the link element.
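For reference, a sketch of how such a link appears on a Patient resource (the IDs are hypothetical):

```json
{
  "resourceType": "Patient",
  "id": "123",
  "link": [
    {
      "other": { "reference": "Patient/456" },
      "type": "seealso"
    }
  ]
}
```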
-The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but below is a high-level summary:
+The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but here's a high-level summary:
* [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) - The Patient resource replaces a different Patient.
* [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) - Patient is valid, but it's not considered the main source of information. Points to another patient to retrieve additional information.
* [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) - Patient contains a link to another patient that's equally valid.
* [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by) - The Patient resource is replaced by a different Patient.
-### Patient-everything patient links details:
+### Patient-everything patient links details
The Patient-everything operation in Azure API for FHIR processes patient links in different ways to give you the most holistic view of the patient.

> [!Note]
> A link can also reference a `RelatedPerson`. Right now, `RelatedPerson` resources are not processed in Patient-everything and are not returned in the bundle.
-Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient is not returned in the bundle.
+Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient isn't returned in the bundle.
-As described above, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, we'll run Patient-everything on each of those five patients.
+As described, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, we'll run Patient-everything on each of those five patients.
> [!Note]
-> This is set up to only follow `seealso` links one **layer deep**. It doesn't process a `seealso` link's `seealso` links.
+> This is set up to only follow `seealso` links one layer deep. It doesn't process a `seealso` link's `seealso` links.
-[ ![See also flow diagram.](media/patient-everything/see-also-flow.png) ](media/patient-everything/see-also-flow.png#lightbox)
+[![See also flow diagram.](media/patient-everything/see-also-flow.png)](media/patient-everything/see-also-flow.png#lightbox)
The final link type is [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by). In this case, the original patient resource is no longer being used and the `replaced-by` link points to the patient that should be used. This implementation of `Patient-everything` will include by default an operation outcome at the start of the bundle with a warning that the patient is no longer valid. This will also be the behavior when the `Prefer` header is set to `handling=lenient`.
In addition, you can set the `Prefer` header to `handling=strict` to throw an er
> [!Note] > If a `replaced-by` link is present, `Prefer: handling=lenient` and results are returned asynchronously in multiple bundles, only an operation outcome is returned in one bundle.
+## Patient-everything response order
+
+The Patient-everything operation returns results in phases:
+
+1. Phase 1 returns the `Patient` resource itself in addition to any `generalPractitioner` and `managingOrganization` resources it references.
+1. Phases 2 and 3 both return resources in the patient compartment. If the start or end query parameters are specified, Phase 2 returns resources from the compartment that can be filtered by their clinical date, and Phase 3 returns resources from the compartment that can't be filtered by their clinical date. If neither of these parameters are specified, Phase 2 is skipped and Phase 3 returns all patient-compartment resources.
+1. Phase 4 will return any devices that reference the patient.
+
+Each phase will return results in a bundle. If the results span multiple pages, the next link in the bundle will point to the next page of results for that phase. After all results from a phase are returned, the next link in the bundle will point to the call to initiate the next phase.
+
+If the original patient has any `seealso` links, phases 1 through 4 will be repeated for each of those patients.
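As a sketch, the next link in a returned bundle looks something like this (the URL and continuation token are hypothetical):

```json
{
  "resourceType": "Bundle",
  "type": "searchset",
  "link": [
    {
      "relation": "next",
      "url": "https://myfhirserver.azurehealthcareapis.com/Patient/123/$everything?ct=abc123"
    }
  ],
  "entry": []
}
```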
+ ## Examples of Patient-everything
-Below are some examples of using the Patient-everything operation. In addition to the examples below, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works.
+Here are some examples of using the Patient-everything operation. In addition to the examples, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works.
-To use Patient-everything to query a patient's "everything" between 2010 and 2020, use the following call:
+To use Patient-everything to query a patient's records between 2010 and 2020, use the following call:
```json GET {FHIRURL}/Patient/{ID}/$everything?start=2010&end=2020
If a patient is found for each of these calls, you'll get back a 200 response wi
## Next steps
-Now that you know how to use the Patient-everything operation, you can learn about the search options. For more information, see
+Now that you know how to use the Patient-everything operation, you can learn about the search options.
>[!div class="nextstepaction"] >[Overview of search in Azure API for FHIR](overview-of-search.md)
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
For example:
- `http://hl7.org/fhir/StructureDefinition/bmi` is another base profile that defines how to represent Body Mass Index (BMI) observations. - `http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance` is a US Core profile that sets minimum expectations for `AllergyIntolerance` resource associated with a patient, and it identifies mandatory fields such as extensions and value sets.
-When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource.
+When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below is an example of the beginning of a `Patient` resource that has the `http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient` profile.
```json {
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
-Profiles are also specified by various Implementation Guides. Some common Implementation Guides are:
+Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. You can go to the specific IG site to learn more about the IG and the profiles defined within it.
|Name |URL |- |-
CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/> Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+> [!NOTE]
+> The Azure API for FHIR does not store any profiles from implementation guides by default. You will need to load them into the Azure API for FHIR.
+ ## Accessing profiles and storing profiles ### Storing profiles
To store profiles in Azure API for FHIR, you can `POST` the `StructureDefinition
} ```
-For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd do the following:
+For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following REST command with the US Core allergy intolerance profile in the body. A snippet of this profile is included in the example.
```rest POST https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
POST https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=h
], "description" : "Defines constraints and extensions on the AllergyIntolerance resource for the minimal set of data to query and retrieve allergy information.", ```
-For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles.
+For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up-to-date profiles, retrieve them directly from HL7 and the implementation guide that defines them.
### Viewing profiles
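+
+For example, to check whether a stored profile exists, you can search for its `StructureDefinition` by canonical URL (a sketch that reuses the server name from the earlier example):
+
+```rest
+GET https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
+```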
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
Last updated 08/10/2021
# Testing the FHIR API on Azure API for FHIR
-In the previous two steps, you deployed the Azure API for FHIR and registered your client application. You are now ready to test that your Azure API for FHIR is set up with your client application.
+In the previous tutorial, you deployed the Azure API for FHIR and registered your client application. You're now ready to test your Azure API for FHIR.
## Retrieve capability statement
-First we will get the capability statement for your Azure API for FHIR.
-1. Open Postman
-1. Retrieve the capability statement by doing GET https://\<FHIR-SERVER-NAME>.azurehealthcareapis.com/metadata. In the image below the FHIR server name is **fhirserver**.
+First we'll get the capability statement for your Azure API for FHIR.
+1. Open Postman.
+1. Retrieve the capability statement by sending `GET https://<FHIR-SERVER-NAME>.azurehealthcareapis.com/metadata`. In the image below, the FHIR server name is **fhirserver**.
![Capability Statement](media/tutorial-web-app/postman-capability-statement.png)
-Next we will attempt to retrieve a patient. To retrieve a patient, enter GET https://\<FHIR-SERVER-NAME>.azurehealthcareapis.com/Patient. You will receive a 401 Unauthorized error. This error is because you haven't proven that you should have access to patient data.
+Next we'll attempt to retrieve a patient. To retrieve a patient, enter `GET https://<FHIR-SERVER-NAME>.azurehealthcareapis.com/Patient`. You'll receive a 401 Unauthorized error. This error occurs because you haven't proven that you should have access to patient data.
## Get patient from FHIR server ![Failed Patient](media/tutorial-web-app/postman-patient-authorization-failed.png) In order to gain access, you need an access token.
-1. In Postman, select **Authorization** and set the Type to **OAuth2.0**
-1. Select **Get New Access Token**
+1. Select **Authorization** and set the Type to **OAuth2.0** in Postman.
+1. Select **Get New Access Token**.
1. Fill in the fields and select **Request Token**. Below you can see the values for each field for this tutorial. |Field |Value |
In order to gain access, you need an access token.
![Success Patient](media/tutorial-web-app/postman-patient-authorization-success.png) ## Post patient into FHIR server
-Now you have access, you can create a new patient. Here is a sample of a simple patient you can add into your FHIR server. Enter the code below into the **Body** section of Postman.
+Now that you have access, you can create a new patient. Here's a sample of a simple patient you can add to your FHIR server. Enter this JSON into the **Body** section of Postman.
``` json {
Now you have access, you can create a new patient. Here is a sample of a simple
This POST will create a new patient in your FHIR server with the name James Tiberious Kirk. ![Post Patient](media/tutorial-web-app/postman-post-patient.png)
-If you do the GET step above to retrieve a patient again, you will see James Tiberious Kirk listed in the output.
+If you run the GET request to retrieve a patient again, you'll see James Tiberious Kirk listed in the output.
+
+> [!NOTE]
+> When sending requests to the Azure API for FHIR, you need to ensure that you've set the `Content-Type` header to `application/json`.
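+
+For example, a create request might include these headers (a sketch; replace the server name and token with your own values):
+
+```rest
+POST https://<FHIR-SERVER-NAME>.azurehealthcareapis.com/Patient
+Content-Type: application/json
+Authorization: Bearer <ACCESS-TOKEN>
+```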
## Troubleshooting access issues If you ran into issues during any of these steps, review the documents we have put together on Azure Active Directory and the Azure API for FHIR.
If you ran into issues during any of these steps, review the documents we have p
* [Access token validation](azure-api-fhir-access-token-validation.md) - This how-to guide gives more specific details on access token validation and steps to take to resolve access issues. ## Next Steps
-Now that you can successfully connect to your client application, you are ready to write your web application.
+Now that you can successfully connect to your client application, you're ready to write your web application.
>[!div class="nextstepaction"] >[Write a web application](tutorial-web-app-write-web-app.md)
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/export-data.md
The FHIR service supports the following query parameters. All of these parameter
| \_outputFormat | Yes | Currently supports three values to align to the FHIR Spec: application/fhir+ndjson, application/ndjson, or just ndjson. All export jobs will return `ndjson` and the passed value has no effect on code behavior. | | \_since | Yes | Allows you to only export resources that have been modified since the time provided | | \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources|
-| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
+| \_typeFilter | Yes | To request finer-grained filtering, you can use \_typeFilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container is not specified, the data will be exported to a new container. | > [!Note]
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/patient-everything.md
To call Patient-everything, use the following command:
```json GET {FHIRURL}/Patient/{ID}/$everything ```+
+> [!Note]
+> You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md).
+ The FHIR service validates that it can find the patient matching the provided patient ID. If a result is found, the response will be a bundle of type `searchset` with the following information:
-* [Patient resource](https://www.hl7.org/fhir/patient.html)
-* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that are not of [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content) or if the `seealso` link references a `RelatedPerson`.
+* [Patient resource](https://www.hl7.org/fhir/patient.html).
+* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of type [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content), or `seealso` links that reference a `RelatedPerson`.
* If there are `seealso` link references to other patients, the results will include the output of the Patient-everything operation run against each `seealso` patient listed.
-* Resources in the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html)
-* [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource. This resource is limited to 100 devices. If the patient has more than 100 devices linked to them, only 100 will be returned.
+* Resources in the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html).
+* [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource.
+
+> [!Note]
+> If the patient has more than 100 devices linked to them, only 100 will be returned.
## Patient-everything parameters
The FHIR service supports the following query parameters. All of these parameter
| end | Specifying the end date will pull in resources where their clinical date is before the specified end date. If no end date is provided, all records after the start date are in scope. | > [!Note]
-> You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md).
+> This implementation of Patient-everything does not support the `_count` parameter.
## Processing patient links On a patient resource, there's an element called link, which links a patient to other patients or related persons. These linked patients help give a holistic view of the original patient. The link reference can be used when a patient is replacing another patient or when two patient resources have complementary information. One use case for links is when an ADT 38 or 39 HL7v2 message comes. It describes an update to a patient. This update can be stored as a reference between two patients in the link element.
-The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but below is a high-level summary:
+The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but we've included a high-level summary:
* [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) - The Patient resource replaces a different Patient. * [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) - Patient is valid, but it's not considered the main source of information. Points to another patient to retrieve additional information. * [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) - Patient contains a link to another patient that's equally valid. * [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by) - The Patient resource replaces a different Patient.
-### Patient-everything patient links details:
+### Patient-everything patient links details
The Patient-everything operation in the FHIR service processes patient links in different ways to give you the most holistic view of the patient. > [!Note] > A link can also reference a `RelatedPerson`. Right now, `RelatedPerson` resources are not processed in Patient-everything and are not returned in the bundle.
-Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient is not returned in the bundle.
+Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient isn't returned in the bundle.
-As described above, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, we'll run Patient-everything on each of those five patients.
+As described earlier, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means that if a patient links to five other patients with a `seealso` link type, we'll run Patient-everything on each of those five patients.
> [!Note] > This is set up to only follow `seealso` links one **layer deep**. It doesn't process a `seealso` link's `seealso` links.
-[ ![See also flow diagram.](media/patient-everything/see-also-flow.png) ](media/patient-everything/see-also-flow.png#lightbox)
+[![See also flow diagram.](media/patient-everything/see-also-flow.png)](media/patient-everything/see-also-flow.png#lightbox)
The final link type is [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by). In this case, the original patient resource is no longer being used and the `replaced-by` link points to the patient that should be used. This implementation of `Patient-everything` will include by default an operation outcome at the start of the bundle with a warning that the patient is no longer valid. This will also be the behavior when the `Prefer` header is set to `handling=lenient`.
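+
+The warning at the start of the bundle might resemble the following minimal sketch (the diagnostics text is illustrative, not the service's exact wording):
+
+```json
+{
+  "resourceType": "OperationOutcome",
+  "issue": [
+    {
+      "severity": "warning",
+      "code": "processing",
+      "diagnostics": "Patient is no longer valid; see the replaced-by link for the active patient."
+    }
+  ]
+}
+```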
In addition, you can set the `Prefer` header to `handling=strict` to throw an er
> [!Note] > If a `replaced-by` link is present, the `Prefer` header is set to `handling=lenient`, and results are returned asynchronously in multiple bundles, only an operation outcome is returned in one bundle.
+## Patient-everything response order
+
+The Patient-everything operation returns results in phases:
+
+1. Phase 1 returns the `Patient` resource itself in addition to any `generalPractitioner` and `managingOrganization` resources it references.
+1. Phases 2 and 3 both return resources in the patient compartment. If the `start` or `end` query parameters are specified, Phase 2 returns resources from the compartment that can be filtered by their clinical date, and Phase 3 returns resources from the compartment that can't be filtered by their clinical date. If neither parameter is specified, Phase 2 is skipped and Phase 3 returns all patient-compartment resources.
+1. Phase 4 will return any devices that reference the patient.
+
+Each phase will return results in a bundle. If the results span multiple pages, the `next` link in the bundle will point to the next page of results for that phase. After all results from a phase are returned, the `next` link in the bundle will point to the call that initiates the next phase.
+
+If the original patient has any `seealso` links, phases 1 through 4 will be repeated for each of those patients.
+ ## Examples of Patient-everything
-Below are some examples of using the Patient-everything operation. In addition to the examples below, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works.
+Here are some examples of using the Patient-everything operation. In addition to these examples, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works.
To use Patient-everything to query a patient's "everything" between 2010 and 2020, use the following call:
If a patient is found for each of these calls, you'll get back a 200 response wi
## Next steps
-Now that you know how to use the Patient-everything operation, you can learn about the search options. For more information, see
+Now that you know how to use the Patient-everything operation, you can learn about the search options.
>[!div class="nextstepaction"] >[Overview of FHIR search](overview-of-search.md)
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
For example:
- `http://hl7.org/fhir/StructureDefinition/bmi` is another base profile that defines how to represent Body Mass Index (BMI) observations. - `http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance` is a US Core profile that sets minimum expectations for `AllergyIntolerance` resource associated with a patient, and it identifies mandatory fields such as extensions and value sets.
-When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource.
+When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below is an example of the beginning of a `Patient` resource that has the `http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient` profile.
```json {
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
-Profiles are also specified by various Implementation Guides. Some common Implementation Guides are:
+Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. You can go to the specific IG site to learn more about the IG and the profiles defined within it.
|Name |URL |- |-
CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/> Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+> [!NOTE]
+> The FHIR service does not store any profiles from implementation guides by default. You will need to load them into the FHIR service.
+ ## Accessing profiles and storing profiles ### Storing profiles
To store profiles to the FHIR server, you can `POST` the `StructureDefinition` w
} ```
-For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd do the following:
+For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following REST command with the US Core allergy intolerance profile in the body. A snippet of this profile is included in the example.
```rest POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefi
], "description" : "Defines constraints and extensions on the AllergyIntolerance resource for the minimal set of data to query and retrieve allergy information.", ```
-For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles.
+For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up-to-date profiles, retrieve them directly from HL7 and the implementation guide that defines them.
### Viewing profiles
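+
+For example, to confirm that a profile was stored, search for its `StructureDefinition` by canonical URL (a sketch that reuses the server name from the earlier example):
+
+```rest
+GET https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
+```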
healthcare-apis Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/github-projects.md
Previously updated : 10/18/2021 Last updated : 01/24/2022 # GitHub Projects
-We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. You are always welcome to visit our GitHub repositories to learn and experiment with our features and products.
+We have many open-source projects on GitHub that provide source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products.
## Healthcare APIs samples
We have many open-source projects on GitHub that provide you the source code and
## FHIR Server
-* [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): open-source FHIR Server, which is the basis for FHIR service
+* [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): open-source FHIR Server, which is the basis for FHIR service
* For information about the latest releases, see [Release notes](https://github.com/microsoft/fhir-server/releases) * [microsoft/fhir-server-samples](https://github.com/microsoft/fhir-server-samples): a sample environment
We have many open-source projects on GitHub that provide you the source code and
* Released to Visual Studio Marketplace * Used for authoring Liquid templates to be used in the FHIR Converter
+## Analytics Pipelines
+
+FHIR Analytics Pipelines help you build components and pipelines for rectangularizing and moving FHIR data from Azure FHIR servers, namely [Azure Healthcare APIs FHIR Server](https://docs.microsoft.com/azure/healthcare-apis/), [Azure API for FHIR](https://docs.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/), and [FHIR Server for Azure](https://github.com/microsoft/fhir-server), to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/), thereby making it available for analytics with [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), [Power BI](https://powerbi.microsoft.com/), and [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/).
+
+The descriptions and capabilities of these two solutions are summarized below:
+
+### FHIR to Synapse Sync Agent
+
+The FHIR to Synapse Sync Agent is an Azure function that extracts data from a FHIR server using the FHIR resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. The agent also contains a [script](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/scripts/Set-SynapseEnvironment.ps1) to create external tables and views in a Synapse serverless SQL pool that point to the Parquet files.
+
+This solution enables you to query all of your FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. Consider this solution if you want to access all your FHIR data in near real time and defer custom transformation to downstream systems.
+
+### FHIR to CDM Pipeline Generator
+
+The FHIR to CDM Pipeline Generator is a tool that generates an ADF pipeline for moving a snapshot of data from a FHIR server, using the `$export` API, to a [CDM folder](https://docs.microsoft.com/common-data-model/data-lake) in Azure Data Lake Storage Gen 2 in `.csv` format. The tool requires a user-created configuration file containing instructions to project and flatten FHIR resources and fields into tables. You can also follow the [instructions](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/cdm-to-synapse.md) for creating a downstream pipeline in a Synapse workspace to move data from the CDM folder to a Synapse dedicated SQL pool.
+
+This solution enables you to transform the data into tabular format as it gets written to the CDM folder. Consider this solution if you want to transform FHIR data into a custom schema as it's extracted from the FHIR server.
+ ## IoT connector #### Integration with IoT Hub and IoT Central
We have many open-source projects on GitHub that provide you the source code and
#### HealthKit and FHIR Integration
-* [microsoft/healthkit-on-fhir](https://github.com/microsoft/healthkit-on-fhir): a Swift library that automates the export of Apple HealthKit Data to a FHIR Server
+* [microsoft/healthkit-on-fhir](https://github.com/microsoft/healthkit-on-fhir): a Swift library that automates the export of Apple HealthKit Data to a FHIR Server.
++
+## Next steps
+
+In this article, you learned about some of the Healthcare APIs open-source GitHub projects that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Healthcare APIs, see
+ >[!div class="nextstepaction"] >[Overview of Azure Healthcare APIs](healthcare-apis-overview.md)
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/iot/how-to-display-metrics.md
Title: Display IoT connector Metrics logging - Azure Healthcare APIs
+ Title: Display IoT connector metrics logging - Azure Healthcare APIs
description: This article explains how to display IoT connector Metrics Previously updated : 11/22/2021 Last updated : 1/24/2022
-# How to display IoT connector Metrics
+# How to display IoT connector metrics
> [!IMPORTANT] > Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-In this article, you'll learn how to display IoT connector Metrics in the Azure portal.
+In this article, you'll learn how to display IoT connector metrics in the Azure portal.
-## Display Metrics
+## Display metrics
1. Within your Azure Healthcare APIs Workspace, select **IoT connectors**.
- :::image type="content" source="media\iot-metrics\iot-workspace-displayed-with-connectors-button.png" alt-text="Select the IoT connectors button." lightbox="media\iot-metrics\iot-connectors-button.png":::
+ :::image type="content" source="media\iot-metrics\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the IoT connectors button." lightbox="media\iot-metrics\iot-connectors-button.png":::
-2. Select the IoT connector that you would like to display the Metrics for.
+2. Select the IoT connector that you would like to display the metrics for.
- :::image type="content" source="media\iot-metrics\iot-connector-select.png" alt-text="Select IoT connector you would like to display Metrics for." lightbox="media\iot-metrics\iot-connector-select.png":::
+ :::image type="content" source="media\iot-metrics\iot-connector-select.png" alt-text="Screenshot of select IoT connector you would like to display metrics for." lightbox="media\iot-metrics\iot-connector-select.png":::
-3. Select **Metrics** within the IoT connector page.
+3. Select the **Metrics** button within the IoT connector page.
- :::image type="content" source="media\iot-metrics\iot-select-metrics.png" alt-text="Select the Metrics button." lightbox="media\iot-metrics\iot-metrics-button.png":::
+ :::image type="content" source="media\iot-metrics\iot-select-metrics.png" alt-text="Screenshot of Select the Metrics button." lightbox="media\iot-metrics\iot-metrics-button.png":::
-4. From the Metrics page, you can create the Metrics that you want to display for your IoT connector. For this example, we'll be choosing the following selections:
+4. From the metrics page, you can create the metrics that you want to display for your IoT connector. For this example, we'll be choosing the following selections:
- * **Scope** = IoT connector name (**Default**)
- * **Metric Namespace** = Standard Metrics (**Default**)
- * **Metric** = IoT connector metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
- * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
+ * **Scope** = IoT connector name (**Default**)
+ * **Metric Namespace** = Standard Metrics (**Default**)
+ * **Metric** = IoT connector metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
+ * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
- :::image type="content" source="media\iot-metrics\iot-select-metrics-to-display.png" alt-text="Select Metrics to display." lightbox="media\iot-metrics\iot-metrics-selection-close-up.png":::
+ :::image type="content" source="media\iot-metrics\iot-select-metrics-to-display.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics\iot-metrics-selection-close-up.png":::
-5. We can now see the IoT connector Metrics for **Number of Incoming Messages** displayed on the Azure portal.
+5. We can now see the IoT connector metrics for **Number of Incoming Messages** displayed on the Azure portal.
- > [!TIP]
- > You can add additional Metrics by selecting the **Add metric** button and making your choices.
+ > [!TIP]
+ > You can add additional metrics by selecting the **Add metric** button and making your choices.
- :::image type="content" source="media\iot-metrics\iot-metrics-add-button.png" alt-text="Select Add metric button to add more Metrics." lightbox="media\iot-metrics\iot-add-metric-button.png":::
+ :::image type="content" source="media\iot-metrics\iot-metrics-add-button.png" alt-text="Screenshot of select Add metric button to add more metrics." lightbox="media\iot-metrics\iot-add-metric-button.png":::
- > [!IMPORTANT]
- > If you leave the Metrics page, the Metrics settings are lost and will have to be recreated. If you would like to save your IoT connector Metrics for future viewing, you can pin them to an Azure dashboard as a tile.
+ > [!IMPORTANT]
+ > If you leave the metrics page, the metrics settings are lost and will have to be recreated. If you would like to save your IoT connector metrics for future viewing, you can pin them to an Azure dashboard as a tile.
-## Pinning Metrics tile on Azure portal dashboard
+## Pinning metrics tile on Azure portal dashboard
-1. To pin the Metrics tile to an Azure portal dashboard, select the **Pin to dashboard** button:
+1. To pin the metrics tile to an Azure portal dashboard, select the **Pin to dashboard** button.
- :::image type="content" source="media\iot-metrics\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Select the Pin to dashboard button." lightbox="media\iot-metrics\iot-pin-to-dashboard-button.png":::
+ :::image type="content" source="media\iot-metrics\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard button." lightbox="media\iot-metrics\iot-pin-to-dashboard-button.png":::
-2. Select the dashboard you would like to display IoT connector Metrics on. For this example, we'll use a private dashboard named `IoT connector Metrics`. Select **Pin** to add the Metrics tile to the dashboard.
+2. Select the dashboard you would like to display IoT connector metrics on. For this example, we'll use a private dashboard named `IoT connector Metrics`. Select **Pin** to add the metrics tile to the dashboard.
- :::image type="content" source="media\iot-metrics\iot-select-pin-to-dashboard.png" alt-text="Select dashboard and Pin button to complete the dashboard pinning process." lightbox="media\iot-metrics\iot-select-pin-to-dashboard.png":::
+ :::image type="content" source="media\iot-metrics\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin button to complete the dashboard pinning process." lightbox="media\iot-metrics\iot-select-pin-to-dashboard.png":::
-3. You'll receive a confirmation that the Metrics tile was successfully added to the dashboard.
+3. You'll receive a confirmation that the metrics tile was successfully added to the dashboard.
- :::image type="content" source="media\iot-metrics\iot-select-dashboard-pinned-successful.png" alt-text="Metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics\iot-select-dashboard-pinned-successful.png":::
+ :::image type="content" source="media\iot-metrics\iot-select-dashboard-pinned-successful.png" alt-text="Screenshot of metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics\iot-select-dashboard-pinned-successful.png":::
4. Once you've received a successful confirmation, select **Dashboard**.
- :::image type="content" source="media\iot-metrics\iot-select-dashboard-with-metrics-tile.png" alt-text="Select the Dashboard button." lightbox="media\iot-metrics\iot-dashboard-button.png":::
+ :::image type="content" source="media\iot-metrics\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard button." lightbox="media\iot-metrics\iot-dashboard-button.png":::
-5. Select the dashboard that you pinned the Metrics tile to. For this example, the dashboard is `IoT connector Metrics`. The dashboard will display the IoT connector Metrics tile that you created in the previous steps.
+5. Select the dashboard that you pinned the metrics tile to. For this example, the dashboard is **IoT connector Metrics**. The dashboard will display the IoT connector metrics tile that you created in the previous steps.
- :::image type="content" source="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Dashboard with pinned IoT connector Metrics tile." lightbox="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png":::
+ :::image type="content" source="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Screenshot of dashboard with pinned IoT connector metrics tile." lightbox="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png":::
-> [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
-
-## Conclusion
-
-Having access to Metrics is essential for monitoring and troubleshooting. IoT connector assists you to do these actions through Metrics.
+ > [!TIP]
+ > See the [IoT connector troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors, conditions, and issues.
## Next steps
-Check out frequently asked questions about IoT connector.
+To learn how to export IoT connector metrics, see
>[!div class="nextstepaction"]
->[IoT connector FAQs](iot-connector-faqs.md)
+>[Configure diagnostic setting for IoT connector metrics exporting](./iot-metrics-diagnostics-export.md)
(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
description: How to define storage targets so that your Azure HPC Cache can use
Previously updated : 01/06/2022 Last updated : 01/19/2022
Add storage targets after creating your cache. Follow this process:
The procedure to add a storage target is slightly different depending on the type of storage it uses. Details for each are below.
-Click the image below to watch a [video demonstration](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/) of creating a cache and adding a storage target from the Azure portal.
+<!-- Click the image below to watch a [video demonstration](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/) of creating a cache and adding a storage target from the Azure portal.
-[![video thumbnail: Azure HPC Cache: Setup (click to visit the video page)](media/video-4-setup.png)](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/)
+[![video thumbnail: Azure HPC Cache: Setup (click to visit the video page)](media/video-4-setup.png)](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/) -->
## Size your cache correctly to support your storage targets
hpc-cache Hpc Cache Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-create.md
Title: Create an Azure HPC Cache description: How to create an Azure HPC Cache instance-+ Previously updated : 07/15/2021- Last updated : 01/19/2022+
Use the Azure portal or the Azure CLI to create your cache. ![screenshot of cache overview in Azure portal, with create button at the bottom](media/hpc-cache-home-page.png)-
+<!--
Click the image below to watch a [video demonstration](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/) of creating a cache and adding a storage target.
-[![video thumbnail: Azure HPC Cache: Setup (click to visit the video page)](media/video-4-setup.png)](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/)
+[![video thumbnail: Azure HPC Cache: Setup (click to visit the video page)](media/video-4-setup.png)](https://azure.microsoft.com/resources/videos/set-up-hpc-cache/) -->
## [Portal](#tab/azure-portal)
hpc-cache Hpc Cache Edit Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-edit-storage.md
description: How to edit Azure HPC Cache storage targets
Previously updated : 01/10/2022 Last updated : 01/19/2022
Depending on the type of storage, you can modify these storage target values:
You can't edit a storage target's name, type, or back-end storage system. If you need to change these properties, delete the storage target and create a replacement with the new value.
-The [Managing Azure HPC Cache video](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) shows how to edit a storage target in the Azure portal.
+<!-- The [Managing Azure HPC Cache video](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) shows how to edit a storage target in the Azure portal. -->
## Change a blob storage target's namespace path or access policy
hpc-cache Hpc Cache Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-manage.md
Title: Manage and update Azure HPC Cache description: How to manage and update Azure HPC Cache using the Azure portal or Azure CLI-+ Previously updated : 07/08/2021- Last updated : 01/19/2022+ # Manage your cache
Read more about these options below.
> [!TIP] > You can also manage individual storage targets - read [View and manage storage targets](manage-storage-targets.md) for details.
-Click the image below to watch a [video](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) that demonstrates cache management tasks.
+<!-- Click the image below to watch a [video](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) that demonstrates cache management tasks.
-[![video thumbnail: Azure HPC Cache: Manage (click to visit the video page)](media/video-5-manage.png)](https://azure.microsoft.com/resources/videos/managing-hpc-cache/)
+[![video thumbnail: Azure HPC Cache: Manage (click to visit the video page)](media/video-5-manage.png)](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) -->
## Stop the cache
hpc-cache Hpc Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-overview.md
Title: Azure HPC Cache overview description: Describes Azure HPC Cache, a file access accelerator solution for high-performance computing -+ Previously updated : 03/11/2021- Last updated : 01/19/2022+ # What is Azure HPC Cache?
Azure HPC Cache speeds access to your data for high-performance computing (HPC)
Azure HPC Cache is easy to launch and monitor from the Azure portal. Existing NFS storage or new Blob containers can become part of its aggregated namespace, which makes client access simple even if you change the back-end storage target.
-## Overview video
+<!-- ## Overview video
[![video thumbnail: Azure HPC Cache overview - click to visit video page](media/video-1-overview.png)](https://azure.microsoft.com/resources/videos/hpc-cache-overview/)
-Click the image above to watch a [short overview of Azure HPC Cache](https://azure.microsoft.com/resources/videos/hpc-cache-overview/).
+Click the image above to watch a [short overview of Azure HPC Cache](https://azure.microsoft.com/resources/videos/hpc-cache-overview/). -->
## Use cases
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-prerequisites.md
description: Prerequisites for using Azure HPC Cache
Previously updated : 01/13/2022 Last updated : 01/19/2022
Before creating a new Azure HPC Cache, make sure your environment meets these requirements.
-## Video overviews
+<!-- ## Video overviews
Watch these videos for a quick overview of the system's components and what they need to work together.
Watch these videos for a quick overview of the system's components and what they
[![video thumbnail image: Azure HPC Cache: Prerequisites (click to visit video page)](media/video-3-prerequisites.png)](https://azure.microsoft.com/resources/videos/hpc-cache-prerequisites/)
-Read the rest of this article for specific recommendations.
+Read the rest of this article for specific recommendations. -->
## Azure subscription
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-quotas-limits.md
There are various quotas and limits that apply to IoT Central applications. IoT
| - | -- | -- | | Number of rules in an application | 50 | Contact support to discuss increasing this quota for your application. | | Number of actions in a rule | 5 | This quota is fixed and can't be changed. |
+| Number of alerts for an email action | One alert every minute per rule | This quota is fixed and can't be changed. |
+| Number of alerts for a webhook action | One alert every 10 seconds per action | This quota is fixed and can't be changed. |
+| Number of alerts for a Power Automate action | One alert every 10 seconds per action | This quota is fixed and can't be changed. |
+| Number of alerts for an Azure Logic App action | One alert every 10 seconds per action | This quota is fixed and can't be changed. |
+| Number of alerts for an Azure Monitor Group action | One alert every 10 seconds per action | This quota is fixed and can't be changed. |
## Jobs
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/production-checklist.md
Once your IoT Edge device connects, be sure to continue configuring the Upstream
* Set up host storage for system modules * Reduce memory space used by the IoT Edge hub * Do not use debug versions of module images
+ * Be mindful of twin size limits when using custom modules
### Be consistent with upstream protocol
The default value of the timeToLiveSecs parameter is 7200 seconds, which is two
When moving from test scenarios to production scenarios, remember to remove debug configurations from deployment manifests. Check that none of the module images in the deployment manifests have the **\.debug** suffix. If you added create options to expose ports in the modules for debugging, remove those create options as well.
+### Be mindful of twin size limits when using custom modules
+
+The deployment manifest that contains custom modules is part of the *edgeAgent* twin. Review the [limitation on module twin size](../iot-hub/iot-hub-devguide-module-twins.md#module-twin-size).
+
+If you deploy a large number of modules, you might exhaust this twin size limit. Consider some common mitigations to this hard limit:
+
+- Store configuration in the custom module twin, which has its own limit.
+- Store configuration that points to a location without space limits (for example, a blob store), as sketched below.
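+
+As a minimal sketch of the second option, a module twin's desired properties might hold only a pointer to the full configuration (the property name and storage URL are hypothetical):
+
+```json
+{
+  "properties": {
+    "desired": {
+      "configBlobUri": "https://mystorageaccount.blob.core.windows.net/config/module-config.json"
+    }
+  }
+}
+```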
+ ## Container management * **Important**
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot-common-errors.md
You can set DNS server for each module's *createOptions* in the IoT Edge deploym
} ```
+> [!WARNING]
+> If you use this method and specify the wrong DNS address, *edgeAgent* loses connection with IoT Hub and can't receive new deployments to fix the issue. To resolve this issue, you can reinstall the IoT Edge runtime. Before you install a new instance of IoT Edge, be sure to remove any *edgeAgent* containers from the previous installation.
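+
+For reference, a module's *createOptions* that sets an explicit DNS server might look like this (a sketch; `1.1.1.1` is a placeholder address):
+
+```json
+{
+  "HostConfig": {
+    "Dns": [
+      "1.1.1.1"
+    ]
+  }
+}
+```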
+ Be sure to set this configuration for the *edgeAgent* and *edgeHub* modules as well. ## IoT Edge hub fails to start
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
Title: Scheduling recurring tasks and workflows in Azure Logic Apps
-description: An overview about scheduling recurring automated tasks, processes, and workflows with Azure Logic Apps
+ Title: Scheduling recurring tasks and workflows
+description: An overview about scheduling recurring automated tasks, processes, and workflows with Azure Logic Apps.
ms.suite: integration-+ Previously updated : 02/16/2021 Last updated : 01/24/2022 # Schedule and run recurring automated tasks, processes, and workflows with Azure Logic Apps Logic Apps helps you create and run automated recurring tasks and processes on a schedule. By creating a logic app workflow that starts with a built-in Recurrence trigger or Sliding Window trigger, which are Schedule-type triggers, you can run tasks immediately, at a later time, or on a recurring interval. You can call services inside and outside Azure, such as HTTP or HTTPS endpoints, post messages to Azure services such as Azure Storage and Azure Service Bus, or get files uploaded to a file share. With the Recurrence trigger, you can also set up complex schedules and advanced recurrences for running tasks. To learn more about the built-in Schedule triggers and actions, see [Schedule triggers](#schedule-triggers) and [Schedule actions](#schedule-actions).
-> [!TIP]
+> [!NOTE]
> You can schedule and run recurring workloads without creating a separate logic app for each scheduled job and running into the [limit on workflows per region and subscription](../logic-apps/logic-apps-limits-and-config.md#definition-limits). Instead, you can use the logic app pattern that's created by the [Azure QuickStart template: Logic Apps job scheduler](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logicapps-jobscheduler/). >
-> The Logic Apps job scheduler template creates a CreateTimerJob logic app that calls a TimerJob logic app. You can then call the CreateTimerJob logic app as an API by making an HTTP request and passing a schedule as input for the request. Each call to the CreateTimerJob logic app also calls the TimerJob logic app, which creates a new TimerJob instance that continuously runs based on the specified schedule or until meeting a specified limit. That way, you can run as many TimerJob instances as you want without worrying about workflow limits because instances aren't individual logic app workflow definitions or resources.
+> The Azure Logic Apps job scheduler template creates a CreateTimerJob logic app that calls a TimerJob logic app. You can then call the CreateTimerJob logic app as an API by making an HTTP request and passing a schedule as input for the request. Each call to the CreateTimerJob logic app also calls the TimerJob logic app, which creates a new TimerJob instance that continuously runs based on the specified schedule or until meeting a specified limit. That way, you can run as many TimerJob instances as you want without worrying about workflow limits because instances aren't individual logic app workflow definitions or resources.
This list shows some example tasks that you can run with the Schedule built-in triggers:
This article describes the capabilities for the Schedule built-in triggers and a
## Schedule triggers
-You can start your logic app workflow by using the Recurrence trigger or Sliding Window trigger, which isn't associated with any specific service or system. These triggers start and run your workflow based on your specified recurrence where you select the interval and frequency, such as the number of seconds, minutes, hours, days, weeks, or months. You can also set the start date and time along with the time zone. Each time that a trigger fires, Logic Apps creates and runs a new workflow instance for your logic app.
+You can start your logic app workflow by using the Recurrence trigger or Sliding Window trigger, neither of which is associated with any specific service or system. These triggers start and run your workflow based on your specified recurrence where you select the interval and frequency, such as the number of seconds, minutes, hours, days, weeks, or months. You can also set the start date and time along with the time zone. Each time that a trigger fires, Azure Logic Apps creates and runs a new workflow instance for your logic app.
Here are the differences between these triggers:
Here are the differences between these triggers:
If you select **Day** as the frequency, you can specify the hours of the day and minutes of the hour, for example, every day at 2:30. If you select **Week** as the frequency, you can also select days of the week, such as Wednesday and Saturday. You can also specify a start date and time along with a time zone for your recurrence schedule.
- > [!TIP]
+ > [!IMPORTANT]
+ > If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance:
+ >
+ > * **Day**: Set up the daily recurrence at least 24 hours in advance.
+ >
+ > * **Week**: Set up the weekly recurrence at least 7 days in advance.
+ >
+ > Otherwise, the workflow might skip the first recurrence.
+ >
> If a recurrence doesn't specify a specific [start date and time](#start-time), the first recurrence runs immediately > when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior, provide a start > date and time for when you want the first recurrence to run.
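+
+As an illustration, a Recurrence trigger that runs every week on Wednesday and Saturday at 2:30 PM might look like the following in the underlying workflow definition (a sketch; the start time and time zone are placeholders):
+
+```json
+"triggers": {
+    "Recurrence": {
+        "type": "Recurrence",
+        "recurrence": {
+            "frequency": "Week",
+            "interval": 1,
+            "startTime": "2022-02-01T14:30:00",
+            "timeZone": "Pacific Standard Time",
+            "schedule": {
+                "weekDays": [ "Wednesday", "Saturday" ],
+                "hours": [ 14 ],
+                "minutes": [ 30 ]
+            }
+        }
+    }
+}
+```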
After any action in your logic app workflow, you can use the Delay and Delay Unt
## Patterns for start date and time
-Here are some patterns that show how you can control recurrence with the start date and time, and how the Logic Apps service runs these recurrences:
+Here are some patterns that show how you can control recurrence with the start date and time, and how Azure Logic Apps runs these recurrences:
| Start time | Recurrence without schedule | Recurrence with schedule (Recurrence trigger only) | ||--|-| | {none} | Runs the first workload instantly. <p>Runs future workloads based on the last run time. | Runs the first workload instantly. <p>Runs future workloads based on the specified schedule. |
-| Start time in the past | **Recurrence** trigger: Calculates run times based on the specified start time and discards past run times. <p><p>Runs the first workload at the next future run time. <p><p>Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Calculates run times based on the specified start time and honors past run times. <p><p>Runs future workloads based on the specified start time. <p><p>For more explanation, see the example following this table. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. <p><p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
-| Start time now or in the future | Runs the first workload at the specified start time. <p><p>**Recurrence** trigger: Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Runs future workloads based on the specified start time. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. <p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
+| Start time in the past | **Recurrence** trigger: Calculates run times based on the specified start time and discards past run times. <p><p>Runs the first workload at the next future run time. <p><p>Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Calculates run times based on the specified start time and honors past run times. <p><p>Runs future workloads based on the specified start time. <p><p>For more explanation, see the example following this table. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. <p><p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Azure Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
+| Start time now or in the future | Runs the first workload at the specified start time. <p><p>**Recurrence** trigger: Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Runs future workloads based on the specified start time. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance: <p>- **Day**: Set up the daily recurrence at least 24 hours in advance. <p>- **Week**: Set up the weekly recurrence at least 7 days in advance. <p>Otherwise, the workflow might skip the first recurrence. <p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Azure Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
||||

*Example for past start time and recurrence but no schedule*
Suppose the current date and time is September 8, 2017 at 1:00 PM. You specify t
| 2017-09-**07**T14:00:00Z <br>(2017-09-**07** at 2:00 PM) | 2017-09-**08**T13:00:00Z <br>(2017-09-**08** at 1:00 PM) | Every two days | {none} |
|||||
-For the Recurrence trigger, the Logic Apps engine calculates run times based on the start time, discards past run times, uses the next future start time for the first run, and calculates future runs based on the last run time.
+For the Recurrence trigger, the Azure Logic Apps engine calculates run times based on the start time, discards past run times, uses the next future start time for the first run, and calculates future runs based on the last run time.
Here's how this recurrence looks:
So, no matter how far in the past you specify the start time, for example, 2017-
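To make that calculation concrete, here's a minimal Python sketch of the same arithmetic: deriving the first future run from a past start time and a fixed interval. It's illustrative only, not the engine's actual implementation.

```python
from datetime import datetime, timedelta, timezone

def first_future_run(start_time: datetime, interval: timedelta, now: datetime) -> datetime:
    """Advance from start_time in whole intervals, discarding past run times."""
    if start_time >= now:
        return start_time
    # Number of complete intervals that have already elapsed.
    elapsed = (now - start_time) // interval
    return start_time + (elapsed + 1) * interval

now = datetime(2017, 9, 8, 13, 0, tzinfo=timezone.utc)    # 2017-09-08 at 1:00 PM UTC
start = datetime(2017, 9, 7, 14, 0, tzinfo=timezone.utc)  # 2017-09-07 at 2:00 PM UTC
print(first_future_run(start, timedelta(days=2), now))    # 2017-09-09 14:00:00+00:00
```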
## Recurrence for daylight saving time and standard time
-Recurring built-in triggers honor the schedule that you set, including any time zone that you specify. If you don't select a time zone, daylight saving time (DST) might affect when triggers run, for example, shifting the start time one hour forward when DST starts and one hour backward when DST ends. When scheduling jobs, Logic Apps puts the message for processing into the queue and specifies when that message becomes available, based on the UTC time when the last job ran and the UTC time when the next job is scheduled to run.
+Recurring built-in triggers honor the schedule that you set, including any time zone that you specify. If you don't select a time zone, daylight saving time (DST) might affect when triggers run, for example, shifting the start time one hour forward when DST starts and one hour backward when DST ends. When scheduling jobs, Azure Logic Apps puts the message for processing into the queue and specifies when that message becomes available, based on the UTC time when the last job ran and the UTC time when the next job is scheduled to run.
To avoid this shift so that your logic app runs at your specified start time, make sure that you select a time zone. That way, the UTC time for your logic app also shifts to counter the seasonal time change.
If these logic apps use the UTC-6:00 Central Time (US & Canada) zone, this simul
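To see why the time zone selection matters, here's a short Python sketch (using the standard-library `zoneinfo` module, available in Python 3.9 and later) showing how the UTC instant for the same 1:00 PM Central Time wall-clock time shifts between standard time and DST:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

central = ZoneInfo("America/Chicago")

# The same local wall-clock time on a standard-time date and a DST date.
winter = datetime(2022, 1, 15, 13, 0, tzinfo=central)
summer = datetime(2022, 7, 15, 13, 0, tzinfo=central)

print(winter.astimezone(timezone.utc))  # 2022-01-15 19:00:00+00:00 (UTC-6, CST)
print(summer.astimezone(timezone.utc))  # 2022-07-15 18:00:00+00:00 (UTC-5, CDT)
```

The UTC instant moves by an hour so that the local time stays fixed, which is exactly the shift a time-zone-aware schedule applies on your behalf.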
## Run one time only
-If you want to run your logic app only at one time in the future, you can use the **Scheduler: Run once jobs** template. After you create a new logic app but before opening the Logic Apps Designer, under the **Templates** section, from the **Category** list, select **Schedule**, and then select this template:
+If you want to run your logic app only at one time in the future, you can use the **Scheduler: Run once jobs** template. After you create a new logic app but before opening the workflow designer, under the **Templates** section, from the **Category** list, select **Schedule**, and then select this template:
![Select "Scheduler: Run once jobs" template](./media/concepts-schedule-automated-recurring-tasks-workflows/choose-run-once-template.png)
Here are various example recurrences that you can set up for the triggers that s
|---|---|---|---|---|---|---|---|---|
| Recurrence, <br>Sliding Window | Run every 15 minutes (no start date and time) | 15 | Minute | {none} | {unavailable} | {none} | {none} | This schedule starts immediately, then calculates future recurrences based on the last run time. |
| Recurrence, <br>Sliding Window | Run every 15 minutes (with start date and time) | 15 | Minute | *startDate*T*startTime*Z | {unavailable} | {none} | {none} | This schedule doesn't start *any sooner* than the specified start date and time, then calculates future recurrences based on the last run time. |
-| Recurrence, <br>Sliding Window | Run every hour, on the hour (with start date and time) | 1 | Hour | *startDate*Thh:00:00Z | {unavailable} | {none} | {none} | This schedule doesn't start *any sooner* than the specified start date and time. Future recurrences run every hour at the "00" minute mark, which Logic Apps calculates from the start time. <p>If the frequency is "Week" or "Month", this schedule respectively runs only one day per week or one day per month. |
+| Recurrence, <br>Sliding Window | Run every hour, on the hour (with start date and time) | 1 | Hour | *startDate*Thh:00:00Z | {unavailable} | {none} | {none} | This schedule doesn't start *any sooner* than the specified start date and time. Future recurrences run every hour at the "00" minute mark, which Azure Logic Apps calculates from the start time. <p>If the frequency is "Week" or "Month", this schedule respectively runs only one day per week or one day per month. |
| Recurrence, <br>Sliding Window | Run every hour, every day (no start date and time) | 1 | Hour | {none} | {unavailable} | {none} | {none} | This schedule starts immediately and calculates future recurrences based on the last run time. <p>If the frequency is "Week" or "Month", this schedule respectively runs only one day per week or one day per month. |
| Recurrence, <br>Sliding Window | Run every hour, every day (with start date and time) | 1 | Hour | *startDate*T*startTime*Z | {unavailable} | {none} | {none} | This schedule doesn't start *any sooner* than the specified start date and time, then calculates future recurrences based on the last run time. <p>If the frequency is "Week" or "Month", this schedule respectively runs only one day per week or one day per month. |
| Recurrence, <br>Sliding Window | Run every 15 minutes past the hour, every hour (with start date and time) | 1 | Hour | *startDate*T00:15:00Z | {unavailable} | {none} | {none} | This schedule doesn't start *any sooner* than the specified start date and time. Future recurrences run at the "15" minute mark, which Logic Apps calculates from the start time, so at 00:15 AM, 1:15 AM, 2:15 AM, and so on. |
logic-apps Ise Manage Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/ise-manage-integration-service-environment.md
You can view and manage the custom connectors that you deployed to your ISE.
The Premium ISE base unit has fixed capacity, so if you need more throughput, you can add more scale units, either during creation or afterwards. The Developer SKU doesn't include the capability to add scale units.
+> [!IMPORTANT]
+> Scaling out an ISE can take 20-30 minutes on average.
+
1. In the [Azure portal](https://portal.azure.com), go to your ISE.

1. To review usage and performance metrics for your ISE, on your ISE menu, select **Overview**.
machine-learning Web Service Input Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/component-reference/web-service-input-output.md
The following example shows how to manually create real-time inference pipeline
![Example](media/module/web-service-input-output-example.png)
-After you submit the pipeline and the run finishes successfully, you can deploy the real-time endpoint.
+After you submit the pipeline and the run finishes successfully, you can [deploy the real-time endpoint](../tutorial-designer-automobile-price-deploy.md#deploy-the-real-time-endpoint).
> [!NOTE]
> In the preceding example, **Enter Data Manually** provides the data schema for web service input and is necessary for deploying the real-time endpoint. Generally, you should always connect a component or dataset to the port where **Web Service Input** is connected to provide the data schema.
After you submit the pipeline and the run finishes successfully, you can deploy
## Next steps Learn more about [deploying the real-time endpoint](../tutorial-designer-automobile-price-deploy.md#deploy-the-real-time-endpoint).
-See the [set of components available](component-reference.md) to Azure Machine Learning.
+See the [set of components available](component-reference.md) to Azure Machine Learning.
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-differential-privacy.md
Title: Differential privacy in machine learning (preview) description: Learn what differential privacy is and how differentially private systems preserve data privacy. --++ Last updated 10/21/2021
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-fairness-ml.md
--++ Last updated 10/21/2021 #Customer intent: As a data scientist, I want to learn about machine learning fairness and how to assess and mitigate unfairness in machine learning models.
machine-learning Concept Open Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-open-source.md
--++ Last updated 11/04/2021
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-responsible-ml.md
--++ Last updated 10/21/2021 #Customer intent: As a data scientist, I want to know learn what responsible machine learning is and how I can use it in Azure Machine Learning.
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-arc-kubernetes.md
Title: Azure Arc-enabled machine learning (preview) description: Configure Azure Kubernetes Service and Azure Arc-enabled Kubernetes clusters to train and inference machine learning models in Azure Machine Learning --++ Last updated 11/23/2021
kubectl get pods -n azureml
``` ## Update Azure Machine Learning extension
-Use ```k8s-extension update``` CLI command to update the mutable properties of Azure Machine Learning extension. For more information, see the [`k8s-extension update` CLI command documentation](/cli/azure/k8s-extension?view=azure-cli-latest#az_k8s_extension_update).
+Use ```k8s-extension update``` CLI command to update the mutable properties of Azure Machine Learning extension. For more information, see the [`k8s-extension update` CLI command documentation](/cli/azure/k8s-extension?view=azure-cli-latest#az_k8s_extension_update&preserve-view=true).
1. Azure Arc supports update of ``--auto-upgrade-minor-version``, ``--version``, ``--configuration-settings``, ``--configuration-protected-settings``. 2. For configurationSettings, only the settings that require update need to be provided. If the user provides all settings, they would be merged/overwritten with the provided values.
Use ```k8s-extension update``` CLI command to update the mutable properties of
## Delete Azure Machine Learning extension
-Use [`k8s-extension delete`](/cli/azure/k8s-extension?view=azure-cli-latest#az_k8s_extension_delete) CLI command to delete the Azure Machine Learning extension.
+Use [`k8s-extension delete`](/cli/azure/k8s-extension?view=azure-cli-latest#az_k8s_extension_delete&preserve-view=true) CLI command to delete the Azure Machine Learning extension.
It takes around 10 minutes to delete all components deployed to the Kubernetes cluster. Run `kubectl get pods -n azureml` to check if all components were deleted.
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
description: Learn how to use Visual Studio Code to test and debug online endpoi
--++ Last updated 11/03/2021
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-visual-studio-code.md
--++ Last updated 10/21/2021
machine-learning How To Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-differential-privacy.md
-+ Last updated 10/21/2021 # Customer intent: As an experienced data scientist, I want to use differential privacy in Azure Machine Learning.
machine-learning How To Homomorphic Encryption Seal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-homomorphic-encryption-seal.md
Title: Deploy an encrypted inferencing service (preview) description: Learn how to use Microsoft SEAL to deploy an encrypted prediction service for image classification--++ Last updated 10/21/2021
machine-learning How To Kubernetes Instance Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-kubernetes-instance-type.md
Title: How to create and select Kubernetes instance types (preview) description: Create and select Azure Arc-enabled Kubernetes cluster instance types for training and inferencing workloads in Azure Machine Learning. --++ Last updated 10/21/2021
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-fairness-aml.md
-+ Last updated 10/21/2021
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-optimize-cost.md
Title: Manage and optimize costs description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning--++
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-resources-vscode.md
Title: Create and manage resources VS Code extension (preview)
description: Learn how to create and manage Azure Machine Learning resources using the Azure Machine Learning Visual Studio Code extension. ---+++
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
+ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC): - "Microsoft.Network/virtualNetworks/join/action" on the virtual network resource.
- - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
+ - "Microsoft.Network/virtualNetworks/subnets/join/action" on the subnet resource.
For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking)
machine-learning How To Set Up Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-vs-code-remote.md
--++ Last updated 10/21/2021 # As a data scientist, I want to connect to an Azure Machine Learning compute instance in Visual Studio Code to access my resources and run my code.
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-setup-vs-code.md
Title: Set up Visual Studio Code extension (preview)
description: Learn how to set up the Azure Machine Learning Visual Studio Code extension. --++ Last updated 10/21/2021
machine-learning How To Troubleshoot Deployment Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment-local.md
description: Try a local model deployment as a first step in troubleshooting mod
-+ Last updated 10/21/2021
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
Title: Make predictions with AutoML ONNX Model in .NET description: Learn how to make predictions using an AutoML ONNX model in .NET with ML.NET --++ Last updated 10/21/2021
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-private-python-packages.md
with token based authentication, such as private GitHub repositories.
ws = Workspace.from_config() ws.set_connection(name="connection-1", category = "PythonFeed",
- target = "https://<my-org>.pkgs.visualstudio.com",
+ target = "https://pkgs.dev.azure.com/<MY-ORG>",
authType = "PAT", value = pat_token) ```
with token based authentication, such as private GitHub repositories.
env = Environment(name="my-env") cd = CondaDependencies() cd.add_pip_package("<my-package>")
- cd.set_pip_option("--extra-index-url https://<my-org>.pkgs.visualstudio.com/<my-project>/_packaging/<my-feed>/pypi/simple")
+    cd.set_pip_option("--extra-index-url https://pkgs.dev.azure.com/<MY-ORG>/_packaging/<MY-FEED>/pypi/simple")
env.python.conda_dependencies=cd ```
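For context, here's a sketch of how such an environment might then be used to submit a run with the Azure Machine Learning Python SDK (v1). The compute target and script names are hypothetical placeholders, and `env` is the environment configured above:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# 'env' is the Environment configured above with the private feed as an
# extra index URL; 'cpu-cluster' and 'train.py' are placeholder names.
src = ScriptRunConfig(
    source_directory=".",
    script="train.py",
    compute_target="cpu-cluster",
    environment=env,
)

run = Experiment(ws, "private-packages-demo").submit(src)
run.wait_for_completion(show_output=True)
```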
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
Title: Curated environments
description: Learn about Azure Machine Learning curated environments, a set of pre-configured environments that help reduce experiment and deployment preparation times. ---+++
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
--++ Last updated 05/25/2021
marketplace Supported Html Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/supported-html-tags.md
Previously updated : 06/05/2020 Last updated : 01/25/2022 # HTML tags supported in commercial marketplace offer descriptions
media-services Encode Basic Encoding Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-basic-encoding-python-quickstart.md
ms.devlang: python Previously updated : 1/10/2022 Last updated : 01/25/2022
This quickstart shows you how to do basic encoding with Python and Azure Media S
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) to use with this quickstart.+
+ > [!IMPORTANT]
+ > When you create the storage account for your media services account, change the storage authentication type to *System authentication*. Otherwise, you will get authentication errors for this example.
++ - [Create a Media Services v3 account](account-create-how-to.md). - [Get your storage account key](../../storage/common/storage-account-keys-manage.md#view-account-access-keys). - [Create a service principal and key](../../purview/create-service-principal-azure.md).
Create a fork and clone the sample located in the [Python samples repository](ht
## Create the .env file
-Get the values from your account to create an *.env* file. That is correct, save it with no name, just the extension. Use *sample.env* as a template then save the *.env* file to the BasicEncoder folder in your local clone.
+Get the values from your account to create a *.env* file. Save it without a name and just the extension. Use *sample.env* as a template for your *.env* file. Save the *.env* file to the *BasicEncoding* folder in your local clone.
## Use Python virtual environments
For samples, we recommend you always create and activate a Python virtual enviro
3. Activate the virtual environment: ``` bash
- .venv\scripts\activate
+ . .venv/Scripts/activate
``` A virtual environment is a folder within a project that isolates a copy of a specific Python interpreter. Once you activate that environment (which Visual Studio Code does automatically), running `pip install` installs a library into that environment only. When you then run your Python code, it runs in the environment's exact context with specific versions of every library. And when you run `pip freeze`, you get the exact list of those libraries. (In many of the samples, you create a requirements.txt file for the libraries you need, then use `pip install -r requirements.txt`. A requirements file is usually needed when you deploy code to Azure.)
For samples, we recommend you always create and activate a Python virtual enviro
1. Set up and [configure your local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).
-2. Install the azure-identity library for Python. This module is needed for Azure Active Directory authentication. See the details at [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#environment-variables).
+1. Install the `python-dotenv` library. This will enable you to load the environment variables quickly and easily.
+
+ ```bash
+ pip install python-dotenv
+ ```
+
+1. Install the `azure-identity` library for Python. This module is needed for Azure Active Directory authentication. See the details at [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#environment-variables).
``` bash pip install azure-identity ```
-3. Install the Python SDK for [Azure Media Services](/python/api/overview/azure/media-services).
+1. Install the Python SDK for [Azure Media Services](/python/api/overview/azure/media-services).
The Pypi page for the Media Services Python SDK with latest version details is located at - [azure-mgmt-media](https://pypi.org/project/azure-mgmt-media/).
For samples, we recommend you always create and activate a Python virtual enviro
pip install azure-mgmt-media ```
-4. Install the [Azure Storage SDK for Python](https://pypi.org/project/azure-storage-blob/).
+1. Install the [Azure Storage SDK for Python](https://pypi.org/project/azure-storage-blob/).
``` bash pip install azure-storage-blob ```
-You can optionally install ALL of the requirements for a given sample by using the "requirements.txt" file in the samples folder.
+You can optionally install ALL of the requirements for a given sample by using the *requirements.txt* file in the samples folder.
``` bash pip install -r requirements.txt
You can optionally install ALL of the requirements for a given sample by using t
The code below is thoroughly commented. Use the whole script or use parts of it for your own script.
-In this sample, a random number is generated for naming things so you can identify them as a group that was created together when you ran the script. The random number is optional, and can be removed when you're done testing the script.
+In this sample, a random number is generated for naming things so you can identify them as a group that was created together when you ran the script. The random number is optional and can be removed when you're done testing the script.
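As a rough sketch of that pattern, assuming a *.env* file is present and using placeholder variable names (the actual names come from *sample.env*), loading the values and generating a random suffix might look like this:

```python
import os
import random

from dotenv import load_dotenv

# Load the variables defined in the .env file into the process environment.
load_dotenv()

# 'SUBSCRIPTIONID' stands in for whichever variable names sample.env defines.
subscription_id = os.getenv("SUBSCRIPTIONID")

# A random suffix groups the resources created by one run of the script.
suffix = random.randint(0, 9999)
job_name = f"MyEncodingJob-{suffix}"
output_asset_name = f"MyOutputAsset-{suffix}"
print(job_name, output_asset_name)
```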
We're not using the SAS URL for the input asset in this sample.
We're not using the SAS URL for the input asset in this sample.
## Delete resources
-When you're finished with the quickstart, delete the resources created in the resource group.
+Once you successfully complete the quickstart, delete the resources created in the resource group.
## Next steps
Get familiar with the [Media Services Python SDK](/python/api/azure-mgmt-media/)
- Learn about the [Azure Python SDKs](/azure/developer/python) - Learn more about [usage patterns for Azure Python SDKs](/azure/developer/python/azure-sdk-library-usage-patterns) - Find more Azure Python SDKs in the [Azure Python SDK index](/azure/developer/python/azure-sdk-library-package-index)-- [Azure Storage Blob Python SDK reference](/python/api/azure-storage-blob/)
+- [Azure Storage Blob Python SDK reference](/python/api/azure-storage-blob/)
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-supported-versions.md
In Azure Database for MySQL service, gateway nodes listens on port 3308 for v5.7
## Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server (Preview)](./flexible-server/overview.md) <br/> Current minor version |
+| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
|:-|:-|:-|
|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html)|
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/register-spacecraft.md
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
| TLE title line | Enter TLE title line |
| TLE line 1 | Enter TLE line 1 |
| TLE line 2 | Enter TLE line 2 |
+
+ > [!NOTE]
+ > TLE stands for Two-Line Element.
:::image type="content" source="media/orbital-eos-register-bird.png" alt-text="Register Spacecraft Resource Page" lightbox="media/orbital-eos-register-bird.png":::
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-backup-restore.md
Backups on flexible servers are snapshot-based. The first snapshot backup is sch
Azure Database for PostgreSQL stores multiple copies of your backups so that your data is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Azure Database for PostgreSQL provides the flexibility to choose between a local backup copy within a region or a geo-redundant backup (Preview). By default, Azure Database for PostgreSQL server backup uses zone redundant storage if available in the region. If not, it uses locally redundant storage. In addition, customers can choose geo-redundant backup, which is in preview, for Disaster Recovery at the time of server create. Refer to the list of regions where the geo-redundant backups are supported.
-Backup redundancy ensures that your database meets its availability and durability targets even in the face of failures and Azure Database for PostgreSQL extends three options to users -
+Backup redundancy ensures that your database meets its availability and durability targets even in the case of failures. Azure Database for PostgreSQL offers three options to users:
- **Zone-redundant backup storage** : This is automatically chosen for regions that support Availability zones. When the backups are stored in zone-redundant backup storage, multiple copies are not only stored within the availability zone in which your server is hosted, but are also replicated to another availability zone in the same region. This option can be leveraged for scenarios that require high availability or for restricting replication of data to within a country/region to meet data residency requirements. This option also provides at least 99.9999999999% (12 9's) durability of backup objects over a given year.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-extensions.md
Title: Extensions - Azure Database for PostgreSQL - Flexible Server
-description: Learn about the available Postgres extensions in Azure Database for PostgreSQL - Flexible Server
+description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-high-availability.md
Flexible server provides two methods for you to perform on-demand failover to th
You can use this feature to simulate an unplanned outage scenario while running your production workload and observe your application downtime. Alternatively, in rare case where your primary server becomes unresponsive for whatever reason, you may use this feature.
-This feature triggers brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process till the last committed data, it is promoted to be the primary server. DNS records are updated and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background and that does not impact the uptime.
+This feature brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process through the last committed data, it is promoted to be the primary server. DNS records are updated and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background, which does not impact the uptime.
The following are the steps during forced-failover:
role-based-access-control Deny Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/deny-assignments-portal.md
na Previously updated : 06/10/2019 Last updated : 01/24/2022
[Azure deny assignments](deny-assignments.md) block users from performing specific Azure resource actions even if a role assignment grants them access. This article describes how to list deny assignments using the Azure portal.

> [!NOTE]
-> You can't directly create your own deny assignments. For information about how deny assignments are created, see [Azure deny assignments](deny-assignments.md).
+> You can't directly create your own deny assignments. For more information, see [Azure deny assignments](deny-assignments.md).
## Prerequisites
role-based-access-control Deny Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/deny-assignments-powershell.md
na Previously updated : 06/12/2019 Last updated : 01/24/2022
[Azure deny assignments](deny-assignments.md) block users from performing specific Azure resource actions even if a role assignment grants them access. This article describes how to list deny assignments using Azure PowerShell.

> [!NOTE]
-> You can't directly create your own deny assignments. For information about how deny assignments are created, see [Azure deny assignments](deny-assignments.md).
+> You can't directly create your own deny assignments. For more information, see [Azure deny assignments](deny-assignments.md).
## Prerequisites
role-based-access-control Deny Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/deny-assignments-rest.md
rest-api Previously updated : 03/19/2020 Last updated : 01/24/2022
[Azure deny assignments](deny-assignments.md) block users from performing specific Azure resource actions even if a role assignment grants them access. This article describes how to list deny assignments using the REST API.

> [!NOTE]
-> You can't directly create your own deny assignments. For information about how deny assignments are created, see [Azure deny assignments](deny-assignments.md).
+> You can't directly create your own deny assignments. For more information, see [Azure deny assignments](deny-assignments.md).
## Prerequisites
role-based-access-control Deny Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/deny-assignments.md
na Previously updated : 03/26/2020 Last updated : 01/24/2022
This article describes how deny assignments are defined.
## How deny assignments are created
-Deny assignments are created and managed by Azure to protect resources. Azure Blueprints and Azure managed apps use deny assignments to protect system-managed resources. Azure Blueprints and Azure managed apps are the only way that deny assignments can be created. You can't directly create your own deny assignments. For more information about how Blueprints uses deny assignments to lock resources, see [Understand resource locking in Azure Blueprints](../governance/blueprints/concepts/resource-locking.md).
+Deny assignments are created and managed by Azure to protect resources. Azure Blueprints and Azure managed apps use deny assignments to protect system-managed resources. Azure Blueprints and Azure managed apps are the only way that deny assignments can be created. You can't directly create your own deny assignments. Azure Blueprints uses deny assignments to lock resources, but just for resources deployed as part of a blueprint. For more information, see [Understand resource locking in Azure Blueprints](../governance/blueprints/concepts/resource-locking.md).
> [!NOTE]
> You can't directly create your own deny assignments.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-how-to-debug-skillset.md
Last updated 12/31/2021
Start a debug session to identify and resolve errors, validate changes, and push changes to a published skillset in your Azure Cognitive Search service.
-A debug session is a cached indexer and skillset execution, scoped to a single document, that you can use to edit and test your changes interactively. If you are unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
+A debug session is a cached indexer and skillset execution, scoped to a single document, that you can use to edit and test your changes interactively. If you're unfamiliar with how a debug session works, see [Debug sessions in Azure Cognitive Search](cognitive-search-debug-session.md). To practice a debug workflow with a sample document, see [Tutorial: Debug sessions](cognitive-search-tutorial-debug-sessions.md).
> [!Important]
> Debug sessions is a preview portal feature, provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
A debug session is a cached indexer and skillset execution, scoped to a single d
+ An existing enrichment pipeline, including a data source, a skillset, an indexer, and an index.
-+ Azure Storage, used to save session state.
+ A debug session works with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The MongoDB API (preview) of Cosmos DB is currently not supported.
-Debug sessions work with all generally available data sources and most preview data sources. The MongoDB API (preview) of Cosmos DB is currently not supported.
++ Azure Storage, used to save session state.

## Create a debug session
Debug sessions work with all generally available data sources and most preview d
1. Select **+ New Debug Session**.
-1. Provide a name for the session and specify a general-purpose storage account that will be used to cache the skill executions.
+1. Provide a name for the session, for example *cog-search-debug-sessions*.
+
+1. Specify a general-purpose storage account that will be used to cache the skill executions. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create.
1. Select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to create the session.
-1. Choose a document. The session will default to the first document in the data source, but you can also choose which document to step through.
+1. Choose a document. The session will default to the first document in the data source, but you can also choose which document to step through by providing its URL.
If your document resides in a blob container in the same storage account used to cache your debug session, you can copy the document URL from the blob property page in the portal. :::image type="content" source="media/cognitive-search-debug/copy-blob-url.png" alt-text="Screenshot of the URI property in blob storage." border="true":::
-1. Optionally, specify any indexer execution settings that should be used to create the session. Any indexer options that you specify in a debug session have no effect on the indexer itself.
+1. Optionally, specify any indexer execution settings that should be used to create the session. The settings should mimic the settings used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
1. Select **Save Session** to get started.
To prove whether a modification resolves an error, follow these steps:
1. Select **Run** in the session window to invoke skillset execution using the modified definition.
-1. Return to **Errors/Warnings** to see if the count is reduced. The list will not be refreshed until you open the tab.
+1. Return to **Errors/Warnings** to see if the count is reduced. The list won't be refreshed until you open the tab.
## View content of enrichment nodes
-AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`) plus nodes for any content that is directly ported from the data source (such as a document key) and metadata. Additional nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
+AI enrichment pipelines extract or infer information and structure from source documents, creating an enriched document in the process. An enriched document is first created during document cracking and populated with a root node (`/document`), plus nodes for any content that is lifted directly from the data source, such as metadata and the document key. Additional nodes are created by skills during skill execution, where each skill output adds a new node to the enrichment tree.
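As a rough illustration, an enriched document might be pictured as a tree of nodes; the node names below, other than `/document`, are hypothetical skill outputs, not a fixed schema:

```python
# Illustrative shape of an enrichment tree after document cracking and two skills.
enriched_document = {
    "document": {
        "metadata_storage_path": "https://.../docs/sample.pdf",  # ported from the data source
        "content": "Full text extracted during document cracking...",
        # Added by a hypothetical language detection skill:
        "languageCode": "en",
        # Added by a hypothetical entity recognition skill:
        "organizations": ["Contoso", "Fabrikam"],
    }
}
```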
Enriched documents are internal, but a debug session gives you access to the content produced during skill execution. To view the content or output of each skill, follow these steps:
Enriched documents are internal, but a debug session gives you access to the con
1. Select a skill.
-1. In the details pane to the right, select **Executions**, select an OUTPUT, and then open the Expression Evaluator (**`</>`**) to view the expression and it's result.
+1. In the details pane to the right, select **Executions**, select an OUTPUT, and then open the Expression Evaluator (**`</>`**) to view the expression and its result.
:::image type="content" source="media/cognitive-search-debug/enriched-doc-output-expression.png" alt-text="Screenshot of a skill execution showing output values." border="true":::
If skills produce output but the search index is empty, check the field mappings
1. Select **Field Mappings** near the top. You should find at least the document key that uniquely identifies and associates each search document in the search index with its source document in the data source.
- If you are importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
+ If you're importing raw content straight from the data source, bypassing enrichment, you should find those fields in **Field Mappings**.
-1. Select **Output Field Mappings** at the bottom of the graph. Here you will find mappings from skill outputs to target fields in the search index. Unless you used the Import Data wizard, output field mappings are defined manually and could be incomplete or mistyped.
+1. Select **Output Field Mappings** at the bottom of the graph. Here you'll find mappings from skill outputs to target fields in the search index. Unless you used the Import Data wizard, output field mappings are defined manually and could be incomplete or mistyped.
Verify that the fields in **Output Field Mappings** exist in the search index as specified, checking for spelling and [enrichment node path syntax](cognitive-search-concept-annotations-syntax.md).
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-data-sources-gallery.md
layout: LandingPage Previously updated : 06/23/2021 Last updated : 01/25/2022
Extract rows from an Azure Table, serialized into JSON documents, and imported i
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to Azure Storage through Azure Data Laker Storage Gen2 to extract content from a hierarchy of directories and nested subdirectories.
+Connect to Azure Storage through Azure Data Lake Storage Gen2 to extract content from a hierarchy of directories and nested subdirectories.
[More details](search-howto-index-azure-data-lake-storage.md)
Connect to Cosmos DB through the Mongo API to extract items from a container, se
:::image type="icon" source="media/search-data-sources-gallery/azure_cosmos_db_logo_small.png"::: +++
+### SharePoint
+
+by [Cognitive Search](search-what-is-azure-search.md)
+
+Connect to a SharePoint site and index documents from one or more document libraries, for accounts and search services in the same tenant. Text and normalized images will be extracted by default. Optionally, you can configure a skillset for more content transformation and enrichment, or configure change tracking to refresh a search index with new or changed content in SharePoint.
+
+[More details](search-howto-index-sharepoint-online.md)
++ :::column-end::: :::row-end::: :::row:::
Connect to Cosmos DB through the Mongo API to extract items from a container, se
-### SharePoint
+### Azure MySQL
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to a SharePoint site and index documents from one or more Document Libraries, for accounts and search services in the same tenant. Text and normalized images will be extracted by default. Optionally, you can configure a skillset for more content transformation and enrichment, or configure change tracking to refresh a search index with new or changed content in SharePoint.
+Connect to MySQL database on Azure to extract rows in a table, serialized into JSON documents, and imported into a search index as search documents. On subsequent runs, assuming the High Water Mark change detection policy is configured, the indexer will take all changes, uploads, and deletes, and reflect those changes in your search index.
-[More details](search-howto-index-sharepoint-online.md)
+[More details](search-howto-index-mysql.md)
:::column-end::: :::column span="":::
-### Azure MySQL
+### Azure Files
by [Cognitive Search](search-what-is-azure-search.md)
-Connect to MySQL database on Azure to extract rows in a table, serialized into JSON documents, and imported into a search index as search documents. On subsequent runs, the indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in your search index.
+Connect to Azure Storage through Azure Files share to extract content serialized into JSON documents, and imported into a search index as search documents.
-[More details](search-howto-index-mysql.md)
+[More details](search-file-storage-integration.md)
:::column-end::: :::column span="":::
The BA Insight Azure Active Directory Connector makes it possible to surface con
by [Raytion](https://www.raytion.com/contact)
-Secure enterprise search connector for reliably indexing content from Microsoft Azure Active Directory (Azure AD) and intelligently searching it with Azure Cognitive Search. It indexes objects from Azure AD via the Microsoft Graph API. The connector can be used for ingesting principals into Cognitive Search in near real time to implement use cases like expert search, equipment search, and location search or to provide early-binding security trimming in conjunction with custom data sources. The connector supports federated authentication against Microsoft 365.
+Secure enterprise search connector for reliably indexing content from Microsoft Azure Active Directory (Azure AD) and intelligently searching it with Azure Cognitive Search. It indexes objects from Azure AD via the Microsoft Graph API. The connector can be used for ingesting principals into Cognitive Search in near real time to implement use cases like expert search, equipment search, and location search or to provide early-binding security trimming in conjunction with custom data sources. The connector supports federated authentication against Microsoft 365.
[More details](https://www.raytion.com/connectors/raytion-azure-ad-connector)
BA Insight's OpenText Documentum Cloud Connector securely indexes both the full
by [Raytion](https://www.raytion.com/contact)
-Secure enterprise search connector for reliably indexing content from OpenText Documentum eRoom and intelligently searching it with Azure Cognitive Search. It robustly indexes repositories, folders and files together with their meta data and properties from Documentum eRoom in near real time. The connector fully supports OpenText Documentum eRoom’s built-in user and group management.
+Secure enterprise search connector for reliably indexing content from OpenText Documentum eRoom and intelligently searching it with Azure Cognitive Search. It robustly indexes repositories, folders and files together with their metadata and properties from Documentum eRoom in near real time. The connector fully supports OpenText Documentum eRoom's built-in user and group management.
[More details](https://www.raytion.com/connectors/raytion-opentext-documentum-eroom-connector)
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-large-index.md
In practical terms, for index loads spanning several days, you can put the index
## Parallel indexers
-A parallel indexing strategy is based on indexing multiple data sources in unison, where each data source definition specifies a subset of the data.
+If you have partitioned data, you can create indexer-data-source combinations that pull from each data source and write to the same search index. Because each indexer is distinct, you can run them at the same time, populating a search index more quickly than if you ran them sequentially.
-For non-routine, computationally intensive indexing requirements - such as OCR on scanned documents in a cognitive search pipeline, image analysis, or natural language processing - a parallel indexing strategy is often the right approach for completing a long-running process in the shortest time. If you can eliminate or reduce query requests, parallel indexing on a service that is not simultaneously handling queries is your best strategy option for working through a large body of slow-processing content.
+There are some risks associated with parallel indexing. First, recall that indexing does not run in the background, increasing the likelihood that queries will be throttled or dropped. Second, Azure Cognitive Search does not lock the index for updates. Concurrent writes are managed, invoking a retry if a particular write does not succeed on first attempt, but you might notice an increase in indexing failures.
-Azure Cognitive Search does not lock the index for updates. Concurrent writes are managed, with retry if a particular write does not succeed on first attempt.
+The number of indexing jobs that can run simultaneously varies for text-based and skills-based indexing. For more information, see [Indexer execution](search-howto-run-reset-indexers.md#indexer-execution).
-1. In the [Azure portal](https://portal.azure.com), check the number of search units used by your search service. Select **Settings** > **Scale** to view the number at the top of the page. The number of indexers that will run in parallel is approximately equal to the number of search units.
+1. For text-based indexing, [sign in to Azure portal](https://portal.azure.com) and check the number of search units used by your search service. Select **Settings** > **Scale** to view the number at the top of the page. The number of indexers that will run in parallel is approximately equal to the number of search units.
1. Partition source data among multiple containers or multiple virtual folders inside the same container.
Azure Cognitive Search does not lock the index for updates. Concurrent writes ar
1. Specify the same target search index in each indexer.
-1. Schedule the indexers.
+1. Schedule the indexers. Review indexer status and execution history for confirmation.
Although multiple indexer-data-source sets can target the same index, be careful of indexer runs that can overwrite existing values in the index. If a second indexer-data-source targets the same documents and fields, any values from the first run will be overwritten. Field values are replaced in full; an indexer cannot merge values from multiple runs into the same field.
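Here's a minimal Python sketch of the pattern using the azure-search-documents library, assuming data has been partitioned into two blob containers; the service endpoint, key, and resource names are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexer,
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

client = SearchIndexerClient(
    "https://<service>.search.windows.net", AzureKeyCredential("<admin-key>")
)

# One data source and one indexer per partition, all writing to the same index.
for partition in ["partition-1", "partition-2"]:
    data_source = SearchIndexerDataSourceConnection(
        name=f"blob-{partition}",
        type="azureblob",
        connection_string="<storage-connection-string>",
        container=SearchIndexerDataContainer(name=partition),
    )
    client.create_or_update_data_source_connection(data_source)

    indexer = SearchIndexer(
        name=f"indexer-{partition}",
        data_source_name=data_source.name,
        target_index_name="hotels-index",  # the shared target index
    )
    client.create_or_update_indexer(indexer)
    client.run_indexer(indexer.name)  # runs start independently, in parallel
```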
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-run-reset-indexers.md
Indexer limits vary by the workload. For each workload, the following job limits
| Workload | Maximum duration | Maximum jobs | Execution environment <sup>1</sup> | |-|||--| | Text-based indexing | 24 hours | One per search unit <sup>2</sup> | Typically runs on the search service. |
-| Skills-based indexing | 2 hours | Indeterminate | Typically runs on an internally-managed, multi-tenant cluster. If a skills-based indexing is executed off the search service, the number of concurrent jobs can exceed the maximum of one per search unit. |
+| Skills-based indexing | 2 hours | Indeterminate | Typically runs on an internally-managed, multi-tenant content processing cluster, depending on how complex the skillset is. A simple skill might execute on your search service if the service has capacity. Otherwise, skills-based indexer jobs execute off-service. Because the content processing cluster is multi-tenant, nodes are added to meet demand. If you experience a delay in on-demand or scheduled execution, it's probably because the system is either adding nodes or waiting for one to become available.|
<sup>1</sup> For optimum processing, a search service will determine an internal execution environment for the indexer operation. You cannot control or configure the environment, but depending on the number and complexity of tasks, the search service will either run the job itself, or offload computationally-intensive tasks to an internally-managed cluster, leaving more service-specific resources available for routine operations. The multi-tenant environment used for performing computationally-intensive tasks is managed and secured by Microsoft, at no extra cost to the customer.
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-schedule-indexers.md
Last updated 01/21/2022
# Schedule an indexer in Azure Cognitive Search
-Indexers can be configured to run on a schedule when you set the "schedule" property in the indexer definition. By default, an indexer runs once, immediately after it is created. Afterwards, you can run it again on demand or set up a schedule. Some situations where indexer scheduling is useful include:
+Indexers can be configured to run on a schedule when you set the "schedule" property in the indexer definition. By default, an indexer runs once, immediately after it is created. Afterwards, you can run it again on demand or on a schedule. Some situations where indexer scheduling is useful include:
-* Source data will change over time, and you want the search indexer to automatically process the difference.
++ Source data is changing over time, and you want the indexer to automatically process the difference.
-* A search index will be populated from multiple data sources, and you want to stagger the indexer jobs to reduce conflicts.
++ A search index is populated from multiple data sources, and you want to stagger the indexer jobs to reduce conflicts.
-* Source data is very large and you want to spread the indexer processing over time.
++ Source data is very large and you want to spread the indexer processing over time.
- Indexer jobs are subject to a maximum running time of 24 hours for regular data sources and 2 hours for indexers with skillsets. If indexing cannot complete within the maximum interval, you can configure a schedule that runs every 2 hours. Indexers can automatically pick up where they left off, as evidenced by an internal high water mark that marks where indexing last ended. Running an indexer on a recurring 2-hour schedule allows it to process a very large data set (many millions of documents) beyond the 24-interval allowed for a single job. For more information about indexing large data volumes, see [How to index large data sets in Azure Cognitive Search](search-howto-large-index.md).
+ Indexer jobs are subject to a maximum running time of 24 hours for regular data sources and 2 hours for indexers with skillsets. If indexing cannot complete within the maximum interval, you can configure a schedule that runs every 2 hours. Indexers can automatically pick up where they left off, based on an internal high water mark that marks where indexing last ended. Running an indexer on a recurring 2-hour schedule allows it to process a very large data set (many millions of documents) beyond the 24-hour interval allowed for a single job. For more information about indexing large data volumes, see [How to index large data sets in Azure Cognitive Search](search-howto-large-index.md).
-## Schedule property
+## Prerequisites
+
++ A valid indexer configured with a data source and index.
+
++ Change detection in the data source. Azure Storage and SharePoint have built-in change detection. For other data sources, such as [Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) and [Cosmos DB](search-howto-index-cosmosdb.md), change detection must be enabled manually.
+
+## Schedule definition
A schedule is part of the indexer definition. If the "schedule" property is omitted, the indexer will only run on demand. The property has two parts.

| Property | Description |
|----------|-------------|
-| `"interval"` (minutes) | (required) The amount of time between the start of two consecutive indexer executions. The smallest interval allowed is 5 minutes, and the longest is 1440 minutes (24 hours). It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). </br></br>The pattern for this is: `P(nD)(T(nH)(nM))`. </br></br>Examples: `PT15M` for every 15 minutes, `PT2H` for every 2 hours.|
-| `"startTime"` | (optional) Start time is specified in coordinated universal time (UTC). If omitted, the current time is used. This time can be in the past, in which case the first execution is scheduled as if the indexer has been running continuously since the original start time.|
+| "interval" | (required) The amount of time between the start of two consecutive indexer executions. The smallest interval allowed is 5 minutes, and the longest is 1440 minutes (24 hours). It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). </br></br>The pattern for this is: `P(nD)(T(nH)(nM))`. </br></br>Examples: `PT15M` for every 15 minutes, `PT2H` for every 2 hours.|
+| "startTime" | (optional) Start time is specified in coordinated universal time (UTC). If omitted, the current time is used. This time can be in the past, in which case the first execution is scheduled as if the indexer has been running continuously since the original start time.|
The following example is a schedule that starts on January 1 at midnight and runs every 50 minutes.
The following example is a schedule that starts on January 1 at midnight and run
} ```
-## Scheduling behavior
-
-The scheduler can kick off as many indexer jobs as the search service supports, which is based on the number of search units. For example, the service has three replicas and four partitions, you should be able to have twelve indexer jobs in active execution, whether initiated on demand or on a schedule.
-
-Although multiple indexers can run simultaneously, a given indexer is single instance. You cannot run two copies of the same indexer concurrently. If an indexer happens to still be running when its next scheduled execution is set to start, the pending execution is postponed until the next scheduled occurrence, allowing the current job to finish.
-
-Let's consider an example to make this more concrete. Suppose we configure an indexer schedule with an interval of hourly and a start time of June 1, 2021 at 8:00:00 AM UTC. Here's what could happen when an indexer run takes longer than an hour:
-
-* The first indexer execution starts at or around June 1, 2021 at 8:00 AM UTC. Assume this execution takes 20 minutes (or any time less than 1 hour).
-* The second execution starts at or around June 1, 2021 9:00 AM UTC. Suppose that this execution takes 70 minutes (more than an hour) and it will not complete until 10:10 AM UTC.
-* The third execution is scheduled to start at 10:00 AM UTC, but at that time the previous execution is still running. This scheduled execution is then skipped. The next execution of the indexer will not start until 11:00 AM UTC.
-
-> [!NOTE]
-> If an indexer is set to a certain schedule but repeatedly fails on the same document each time, the indexer will begin running on a less frequent interval (up to the maximum interval of at least once every 24 hours) until it successfully makes progress again. If you believe you have fixed whatever the underlying issue, you can [run the indexer manually](search-howto-run-reset-indexers.md), and if indexing succeeds, the indexer will return to its regular schedule.
-
## Configure a schedule

Schedules are specified in an indexer definition. To set up a schedule, you can use Azure portal, REST APIs, or an Azure SDK.
await indexerClient.CreateOrUpdateIndexerAsync(indexer);
+## Scheduling behavior
+
+For text-based indexing, the scheduler can kick off as many indexer jobs as the search service supports, which is determined by the number of search units. For example, if the service has three replicas and four partitions, you can generally have twelve indexer jobs in active execution, whether initiated on demand or on a schedule.
+
+Skills-based indexers run in a different [execution environment](search-howto-run-reset-indexers.md#indexer-execution). For this reason, the number of service units has no bearing on the number of skills-based indexer jobs you can run. Multiple skills-based indexers can run in parallel, but doing so depends on node availability within the execution environment.
+
+Although multiple indexers can run simultaneously, a given indexer is single-instance. You cannot run two copies of the same indexer concurrently. If an indexer happens to still be running when its next scheduled execution is set to start, the pending execution is postponed until the next scheduled occurrence, allowing the current job to finish.
+
+Let's consider an example to make this more concrete. Suppose we configure an indexer schedule with an interval of hourly and a start time of June 1, 2021 at 8:00:00 AM UTC. Here's what could happen when an indexer run takes longer than an hour:
+
++ The first indexer execution starts at or around June 1, 2021 at 8:00 AM UTC. Assume this execution takes 20 minutes (or any time less than 1 hour).
+
++ The second execution starts at or around June 1, 2021 9:00 AM UTC. Suppose that this execution takes 70 minutes (more than an hour) and it will not complete until 10:10 AM UTC.
+
++ The third execution is scheduled to start at 10:00 AM UTC, but at that time the previous execution is still running. This scheduled execution is then skipped. The next execution of the indexer will not start until 11:00 AM UTC.
+
+> [!NOTE]
+> If an indexer is set to a certain schedule but repeatedly fails on the same document each time, the indexer will begin running on a less frequent interval (up to the maximum interval of at least once every 24 hours) until it successfully makes progress again. If you believe you have fixed whatever the underlying issue, you can [run the indexer manually](search-howto-run-reset-indexers.md), and if indexing succeeds, the indexer will return to its regular schedule.
+
## Next steps

For indexers that run on a schedule, you can monitor operations by retrieving status from the search service, or obtain detailed information by enabling diagnostic logging.
-* [Monitor search indexer status](search-howto-monitor-indexers.md)
-* [Collect and analyze log data](monitor-azure-cognitive-search.md)
++ [Monitor search indexer status](search-howto-monitor-indexers.md)
++ [Collect and analyze log data](monitor-azure-cognitive-search.md)
++ [Index large data sets](search-howto-large-index.md)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
SFTP support in Azure Blob Storage currently limits its cryptographic algorithm
- PowerShell and Azure CLI are not supported. You can leverage Portal and ARM templates for Public Preview.

-- `ssh-keycan` is not supported.
+- `ssh-keyscan` is not supported.
+
+- SSH commands other than SFTP are not supported.
## Troubleshooting
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
Title: Soft delete for blobs
-description: Soft delete for blobs protects your data so that you can more easily recover your data when it is erroneously modified or deleted by an application or by another storage account user.
+description: Soft delete for blobs protects your data so that you can more easily recover your data when it's erroneously modified or deleted by an application or by another storage account user.
Blob soft delete protects an individual blob, snapshot, or version from accident
Blob soft delete is part of a comprehensive data protection strategy for blob data. For optimal protection for your blob data, Microsoft recommends enabling all of the following data protection features:

- Container soft delete, to restore a container that has been deleted. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
-
+- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it's erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
- Blob soft delete, to restore a blob, snapshot, or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).

To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).

## How blob soft delete works
-When you enable blob soft delete for a storage account, you specify a retention period for deleted objects of between 1 and 365 days. The retention period indicates how long the data remains available after it is deleted or overwritten. The clock starts on the retention period as soon as an object is deleted or overwritten.
+When you enable blob soft delete for a storage account, you specify a retention period for deleted objects of between 1 and 365 days. The retention period indicates how long the data remains available after it's deleted or overwritten. The clock starts on the retention period as soon as an object is deleted or overwritten.
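
As an illustration, blob soft delete can be enabled with Azure CLI. This is a minimal sketch, not the only way to configure the feature; the seven-day retention value and the placeholder names are example assumptions:

```azurecli
# Enable blob soft delete with a 7-day retention period (example value).
az storage account blob-service-properties update \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --enable-delete-retention true \
    --delete-retention-days 7
```

Passing `--enable-delete-retention false` turns the feature off; objects that are already soft-deleted remain recoverable until their retention period elapses.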
While the retention period is active, you can restore a deleted blob, together with its snapshots, or a deleted version by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation. The following diagram shows how a deleted object can be restored when blob soft delete is enabled:
You can change the soft delete retention period at any time. An updated retention period applies only to data that was deleted after the retention period was changed. Any data that was deleted before the retention period was changed is subject to the retention period that was in effect when it was deleted.
-Attempting to delete a soft-deleted object does not affect its expiry time.
+Attempting to delete a soft-deleted object doesn't affect its expiry time.
If you disable blob soft delete, you can continue to access and recover soft-deleted objects in your storage account until the soft delete retention period has elapsed.
Version 2017-07-29 and higher of the Azure Storage REST API support blob soft de
When blob soft delete is enabled, deleting a blob marks that blob as soft-deleted. No snapshot is created. When the retention period expires, the soft-deleted blob is permanently deleted.
-If a blob has snapshots, the blob cannot be deleted unless the snapshots are also deleted. When you delete a blob and its snapshots, both the blob and snapshots are marked as soft-deleted. No new snapshots are created.
+If a blob has snapshots, the blob can't be deleted unless the snapshots are also deleted. When you delete a blob and its snapshots, both the blob and snapshots are marked as soft-deleted. No new snapshots are created.
You can also delete one or more active snapshots without deleting the base blob. In this case, the snapshot is soft-deleted. If a directory is deleted in an account that has the hierarchical namespace feature enabled on it, the directory and all its contents are marked as soft-deleted.
-Soft-deleted objects are invisible unless they are explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
+Soft-deleted objects are invisible unless they're explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
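
For instance, soft-deleted blobs can be included in a listing with Azure CLI by adding `d` to the `--include` set. A minimal sketch; the account and container names are placeholders:

```azurecli
# List blobs, including soft-deleted ones (the 'd' include flag).
az storage blob list \
    --account-name <storage-account> \
    --container-name <container> \
    --include d \
    --output table
```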
### How overwrites are handled when soft delete is enabled
Soft-deleted snapshots are invisible unless soft-deleted objects are explicitly
To protect a copy operation, blob soft delete must be enabled for the destination storage account.
-Blob soft delete does not protect against operations to write blob metadata or properties. No soft-deleted snapshot is created when a blob's metadata or properties are updated.
+Blob soft delete doesn't protect against operations to write blob metadata or properties. No soft-deleted snapshot is created when a blob's metadata or properties are updated.
-Blob soft delete does not afford overwrite protection for blobs in the archive tier. If a blob in the archive tier is overwritten with a new blob in any tier, then the overwritten blob is permanently deleted.
+Blob soft delete doesn't afford overwrite protection for blobs in the archive tier. If a blob in the archive tier is overwritten with a new blob in any tier, then the overwritten blob is permanently deleted.
-For premium storage accounts, soft-deleted snapshots do not count toward the per-blob limit of 100 snapshots.
+For premium storage accounts, soft-deleted snapshots don't count toward the per-blob limit of 100 snapshots.
### Restoring soft-deleted objects

You can restore soft-deleted blobs or directories (in a hierarchical namespace) by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
-In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft deleted blobs, those soft deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted blobs.
+In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft deleted blobs, those soft deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted blobs.
-Calling **Undelete Blob** on a blob that is not soft-deleted will restore any soft-deleted snapshots that are associated with the blob. If the blob has no snapshots and is not soft-deleted, then calling **Undelete Blob** has no effect.
+Calling **Undelete Blob** on a blob that isn't soft-deleted will restore any soft-deleted snapshots that are associated with the blob. If the blob has no snapshots and isn't soft-deleted, then calling **Undelete Blob** has no effect.
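
For example, a soft-deleted blob can be restored with Azure CLI during the retention period. A minimal sketch with placeholder names:

```azurecli
# Restore a soft-deleted blob and any soft-deleted snapshots associated with it.
az storage blob undelete \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob>
```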
To promote a soft-deleted snapshot to the base blob, first call **Undelete Blob** on the base blob to restore the blob and its snapshots. Next, copy the desired snapshot over the base blob. You can also copy the snapshot to a new blob.
-Data in a soft-deleted blob or snapshot cannot be read until the object has been restored.
+Data in a soft-deleted blob or snapshot can't be read until the object has been restored.
For more information on how to restore soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
> [!IMPORTANT]
> Versioning is not supported for accounts that have a hierarchical namespace.
-If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version is not soft-deleted and is not removed when the soft-delete retention period expires. No soft-deleted snapshots are created. When you delete a blob, the current version of the blob becomes a previous version, and there is no longer a current version. No new version is created and no soft-deleted snapshots are created.
+If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version isn't soft-deleted and isn't removed when the soft-delete retention period expires. No soft-deleted snapshots are created. When you delete a blob, the current version of the blob becomes a previous version, and there's no longer a current version. No new version is created and no soft-deleted snapshots are created.
-Enabling soft delete and versioning together protects blob versions from deletion. When soft delete is enabled, deleting a version creates a soft-deleted version. You can use the **Undelete Blob** operation to restore soft-deleted versions during the soft delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It is not possible to restore only a single soft-deleted version.
+Enabling soft delete and versioning together protects blob versions from deletion. When soft delete is enabled, deleting a version creates a soft-deleted version. You can use the **Undelete Blob** operation to restore soft-deleted versions during the soft delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It isn't possible to restore only a single soft-deleted version.
After the soft-delete retention period has elapsed, any soft-deleted blob versions are permanently deleted.
The following table describes the expected behavior for delete and write operati
| REST API operations | Soft delete enabled | Soft delete and versioning enabled |
|--|--|--|
-| [Delete Storage Account](/rest/api/storagerp/storageaccounts/delete) | No change. Containers and blobs in the deleted account are not recoverable. | No change. Containers and blobs in the deleted account are not recoverable. |
-| [Delete Container](/rest/api/storageservices/delete-container) | No change. Blobs in the deleted container are not recoverable. | No change. Blobs in the deleted container are not recoverable. |
+| [Delete Storage Account](/rest/api/storagerp/storageaccounts/delete) | No change. Containers and blobs in the deleted account aren't recoverable. | No change. Containers and blobs in the deleted account aren't recoverable. |
+| [Delete Container](/rest/api/storageservices/delete-container) | No change. Blobs in the deleted container aren't recoverable. | No change. Blobs in the deleted container aren't recoverable. |
| [Delete Blob](/rest/api/storageservices/delete-blob) | If used to delete a blob, that blob is marked as soft deleted. <br /><br /> If used to delete a blob snapshot, the snapshot is marked as soft deleted. | If used to delete a blob, the current version becomes a previous version, and the current version is deleted. No new version is created and no soft-deleted snapshots are created.<br /><br /> If used to delete a blob version, the version is marked as soft deleted. | | [Undelete Blob](/rest/api/storageservices/undelete-blob) | Restores a blob and any snapshots that were deleted within the retention period. | Restores a blob and any versions that were deleted within the retention period. |
-| [Put Blob](/rest/api/storageservices/put-blob)<br />[Put Block List](/rest/api/storageservices/put-block-list)<br />[Copy Blob](/rest/api/storageservices/copy-blob)<br />[Copy Blob from URL](/rest/api/storageservices/copy-blob) | If called on an active blob, then a snapshot of the blob's state prior to the operation is automatically generated. <br /><br /> If called on a soft-deleted blob, then a snapshot of the blob's prior state is generated only if it is being replaced by a blob of the same type. If the blob is of a different type, then all existing soft deleted data is permanently deleted. | A new version that captures the blob's state prior to the operation is automatically generated. |
-| [Put Block](/rest/api/storageservices/put-block) | If used to commit a block to an active blob, there is no change.<br /><br />If used to commit a block to a blob that is soft-deleted, a new blob is created and a snapshot is automatically generated to capture the state of the soft-deleted blob. | No change. |
-| [Put Page](/rest/api/storageservices/put-page)<br />[Put Page from URL](/rest/api/storageservices/put-page-from-url) | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. |
+| [Put Blob](/rest/api/storageservices/put-blob)<br />[Put Block List](/rest/api/storageservices/put-block-list)<br />[Copy Blob](/rest/api/storageservices/copy-blob)<br />[Copy Blob from URL](/rest/api/storageservices/copy-blob) | If called on an active blob, then a snapshot of the blob's state prior to the operation is automatically generated. <br /><br /> If called on a soft-deleted blob, then a snapshot of the blob's prior state is generated only if it's being replaced by a blob of the same type. If the blob is of a different type, then all existing soft deleted data is permanently deleted. | A new version that captures the blob's state prior to the operation is automatically generated. |
+| [Put Block](/rest/api/storageservices/put-block) | If used to commit a block to an active blob, there's no change.<br /><br />If used to commit a block to a blob that is soft-deleted, a new blob is created and a snapshot is automatically generated to capture the state of the soft-deleted blob. | No change. |
+| [Put Page](/rest/api/storageservices/put-page)<br />[Put Page from URL](/rest/api/storageservices/put-page-from-url) | No change. Page blob data that is overwritten or cleared using this operation isn't saved and isn't recoverable. | No change. Page blob data that is overwritten or cleared using this operation isn't saved and isn't recoverable. |
| [Append Block](/rest/api/storageservices/append-block)<br />[Append Block from URL](/rest/api/storageservices/append-block-from-url) | No change. | No change. |
-| [Set Blob Properties](/rest/api/storageservices/set-blob-properties) | No change. Overwritten blob properties are not recoverable. | No change. Overwritten blob properties are not recoverable. |
-| [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) | No change. Overwritten blob metadata is not recoverable. | A new version that captures the blob's state prior to the operation is automatically generated. |
+| [Set Blob Properties](/rest/api/storageservices/set-blob-properties) | No change. Overwritten blob properties aren't recoverable. | No change. Overwritten blob properties aren't recoverable. |
+| [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) | No change. Overwritten blob metadata isn't recoverable. | A new version that captures the blob's state prior to the operation is automatically generated. |
| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) | The base blob is moved to the new tier. Any active or soft-deleted snapshots remain in the original tier. No soft-deleted snapshot is created. | The base blob is moved to the new tier. Any active or soft-deleted versions remain in the original tier. No new version is created. |

### Storage account (hierarchical namespace)
The following table describes the expected behavior for delete and write operati
|**REST API operation**|**Soft Delete enabled**|
|||
|[Path - Delete](/rest/api/storageservices/datalakestoragegen2/path/delete) |A soft deleted blob or directory is created. The soft deleted object is deleted after the retention period.|
-|[Delete Blob](/rest/api/storageservices/delete-blob)|A soft deleted object is created. The soft deleted object is deleted after the retention period. Soft delete will not be supported for blobs with snapshots and snapshots.|
+|[Delete Blob](/rest/api/storageservices/delete-blob)|A soft deleted object is created. The soft deleted object is deleted after the retention period. Soft delete isn't supported for blobs that have snapshots, or for snapshots themselves.|
|[Path - Create](/rest/api/storageservices/datalakestoragegen2/path/create) that renames a blob or directory | Existing destination blob or empty directory will get soft deleted and the source will replace it. The soft deleted object is deleted after the retention period.|

## Feature support
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
This table shows how this feature is supported in your account and the impact on
## Pricing and billing
-All soft deleted data is billed at the same rate as active data. You will not be charged for data that is permanently deleted after the retention period elapses.
+All soft deleted data is billed at the same rate as active data. You won't be charged for data that is permanently deleted after the retention period elapses.
When you enable soft delete, Microsoft recommends using a short retention period to better understand how the feature will affect your bill. The minimum recommended retention period is seven days. Enabling soft delete for frequently overwritten data may result in increased storage capacity charges and increased latency when listing blobs. You can mitigate this additional cost and latency by storing the frequently overwritten data in a separate storage account where soft delete is disabled.
-You are not billed for transactions related to the automatic generation of snapshots or versions when a blob is overwritten or deleted. You are billed for calls to the **Undelete Blob** operation at the transaction rate for write operations.
+You aren't billed for transactions related to the automatic generation of snapshots or versions when a blob is overwritten or deleted. You're billed for calls to the **Undelete Blob** operation at the transaction rate for write operations.
For more information on pricing for Blob Storage, see the [Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) page.
Blob soft delete is available for both premium and standard unmanaged disks, which are page blobs under the covers. Soft delete can help you recover data deleted or overwritten by the [Delete Blob](/rest/api/storageservices/delete-blob), [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), and [Copy Blob](/rest/api/storageservices/copy-blob) operations only.
-Data that is overwritten by a call to [Put Page](/rest/api/storageservices/put-page) is not recoverable. An Azure virtual machine writes to an unmanaged disk using calls to [Put Page](/rest/api/storageservices/put-page), so using soft delete to undo writes to an unmanaged disk from an Azure VM is not a supported scenario.
+Data that is overwritten by a call to [Put Page](/rest/api/storageservices/put-page) isn't recoverable. An Azure virtual machine writes to an unmanaged disk using calls to [Put Page](/rest/api/storageservices/put-page), so using soft delete to undo writes to an unmanaged disk from an Azure VM isn't a supported scenario.
## Next steps
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-container-overview.md
Title: Soft delete for containers
-description: Soft delete for containers protects your data so that you can more easily recover your data when it is erroneously modified or deleted by an application or by another storage account user.
+description: Soft delete for containers protects your data so that you can more easily recover your data when it's erroneously modified or deleted by an application or by another storage account user.
Container soft delete protects your data from being accidentally deleted by main
Blob soft delete is part of a comprehensive data protection strategy for blob data. For optimal protection for your blob data, Microsoft recommends enabling all of the following data protection features:

- Container soft delete, to restore a container that has been deleted. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
-
+- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it's erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
- Blob soft delete, to restore a blob, snapshot, or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).

To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).

## How container soft delete works
-When you enable container soft delete, you can specify a retention period for deleted containers that is between 1 and 365 days. The default retention period is 7 days. During the retention period, you can recover a deleted container by calling the **Restore Container** operation.
+When you enable container soft delete, you can specify a retention period for deleted containers that is between 1 and 365 days. The default retention period is seven days. During the retention period, you can recover a deleted container by calling the **Restore Container** operation.
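
To illustrate, container soft delete can be enabled with Azure CLI. A minimal sketch; the seven-day retention value and the placeholder names are example assumptions:

```azurecli
# Enable container soft delete with a 7-day retention period (example value).
az storage account blob-service-properties update \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --enable-container-delete-retention true \
    --container-delete-retention-days 7
```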
-When you restore a container, the container's blobs and any blob versions and snapshots are also restored. However, you can only use container soft delete to restore blobs if the container itself was deleted. To a restore a deleted blob when its parent container has not been deleted, you must use blob soft delete or blob versioning.
+When you restore a container, the container's blobs and any blob versions and snapshots are also restored. However, you can only use container soft delete to restore blobs if the container itself was deleted. To restore a deleted blob when its parent container hasn't been deleted, you must use blob soft delete or blob versioning.
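
During the retention period, a deleted container can be recovered with Azure CLI, as sketched below with placeholder names; the version value for a deleted container can be obtained by listing containers with `--include-deleted`:

```azurecli
# Find the version of the deleted container, then restore it.
az storage container list \
    --account-name <storage-account> \
    --include-deleted

az storage container restore \
    --account-name <storage-account> \
    --name <deleted-container> \
    --deleted-version <version>
```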
> [!WARNING]
> Container soft delete can restore only whole containers and their contents at the time of deletion. You cannot restore a deleted blob within a container by using container soft delete. Microsoft recommends also enabling blob soft delete and blob versioning to protect individual blobs in a container.
The following diagram shows how a deleted container can be restored when contain
:::image type="content" source="media/soft-delete-container-overview/container-soft-delete-diagram.png" alt-text="Diagram showing how a soft-deleted container may be restored":::
-After the retention period has expired, the container is permanently deleted from Azure Storage and cannot be recovered. The clock starts on the retention period at the point that the container is deleted. You can change the retention period at any time, but keep in mind that an updated retention period applies only to newly deleted containers. Previously deleted containers will be permanently deleted based on the retention period that was in effect at the time that the container was deleted.
+After the retention period has expired, the container is permanently deleted from Azure Storage and can't be recovered. The clock starts on the retention period at the point that the container is deleted. You can change the retention period at any time, but keep in mind that an updated retention period applies only to newly deleted containers. Previously deleted containers will be permanently deleted based on the retention period that was in effect at the time that the container was deleted.
-Disabling container soft delete does not result in permanent deletion of containers that were previously soft-deleted. Any soft-deleted containers will be permanently deleted at the expiration of the retention period that was in effect at the time that the container was deleted.
+Disabling container soft delete doesn't result in permanent deletion of containers that were previously soft-deleted. Any soft-deleted containers will be permanently deleted at the expiration of the retention period that was in effect at the time that the container was deleted.
Container soft delete is available for the following types of storage accounts:
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
|--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.

## Pricing and billing
-There is no additional charge to enable container soft delete. Data in soft deleted containers is billed at the same rate as active data.
+There's no additional charge to enable container soft delete. Data in soft deleted containers is billed at the same rate as active data.
## Next steps
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-configure-key-vault.md
Previously updated : 01/13/2022 Last updated : 01/24/2022
To create a new key vault with PowerShell, install version 2.0.0 or later of the
The following example creates a new key vault with both soft delete and purge protection enabled. Remember to replace the placeholder values in brackets with your own values.
-```powershell
+```azurepowershell
$keyVault = New-AzKeyVault -Name <key-vault> `
    -ResourceGroupName <resource_group> `
    -Location <location> `
    -EnablePurgeProtection
```
-To learn how to enable purge protection on an existing key vault with PowerShell, see [How to use soft-delete with PowerShell](../../key-vault/general/key-vault-recovery.md).
-
-Next, assign a system-assigned managed identity to your storage account. You'll use this managed identity to grant the storage account permissions to access the key vault. For more information about system-assigned managed identities, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md).
-
-To assign a managed identity using PowerShell, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
-
-```powershell
-$storageAccount = Set-AzStorageAccount -ResourceGroupName <resource_group> `
- -Name <storage-account> `
- -AssignIdentity
-```
-
-Finally, configure the access policy for the key vault so that the storage account has permissions to access it. In this step, you'll use the managed identity that you previously assigned to the storage account.
-
-To set the access policy for the key vault, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy):
-
-```powershell
-Set-AzKeyVaultAccessPolicy `
- -VaultName $keyVault.VaultName `
- -ObjectId $storageAccount.Identity.PrincipalId `
- -PermissionsToKeys wrapkey,unwrapkey,get
-```
+To learn how to enable purge protection on an existing key vault with PowerShell, see [Azure Key Vault recovery overview](../../key-vault/general/key-vault-recovery.md?tabs=azure-powershell).
# [Azure CLI](#tab/azure-cli)

To create a new key vault using Azure CLI, call [az keyvault create](/cli/azure/keyvault#az_keyvault_create). Remember to replace the placeholder values in brackets with your own values:
-```azurecli-interactive
+```azurecli
az keyvault create \
    --name <key-vault> \
    --resource-group <resource_group> \
    --enable-purge-protection
```
-To learn how to enable purge protection on an existing key vault with Azure CLI, see [How to use soft-delete with CLI](../../key-vault/general/key-vault-recovery.md).
-
-Next, assign a system-assigned managed identity to the storage account. You'll use this managed identity to grant the storage account permissions to access the key vault. For more information about system-assigned managed identities, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md).
-
-To assign a managed identity using Azure CLI, call [az storage account update](/cli/azure/storage/account#az_storage_account_update):
-
-```azurecli-interactive
-az storage account update \
- --name <storage-account> \
- --resource-group <resource_group> \
- --assign-identity
-```
-
-Finally, configure the access policy for the key vault so that the storage account has permissions to access it. In this step, you'll use the managed identity that you previously assigned to the storage account.
-
-To set the access policy for the key vault, call [az keyvault set-policy](/cli/azure/keyvault#az_keyvault_set_policy):
-
-```azurecli-interactive
-storage_account_principal=$(az storage account show \
- --name <storage-account> \
- --resource-group <resource-group> \
- --query identity.principalId \
- --output tsv)
-az keyvault set-policy \
- --name <key-vault> \
- --resource-group <resource_group>
- --object-id $storage_account_principal \
- --key-permissions get unwrapKey wrapKey
-```
+To learn how to enable purge protection on an existing key vault with Azure CLI, see [Azure Key Vault recovery overview](../../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
az keyvault set-policy \
Next, add a key to the key vault.
-Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
+Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072, and 4096. For more information about supported key types, see [About keys](../../key-vault/keys/about-keys.md).
# [Azure portal](#tab/portal)
To learn how to add a key with the Azure portal, see [Quickstart: Set and retrie
To add a key with PowerShell, call [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey). Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
-```powershell
+```azurepowershell
$key = Add-AzKeyVaultKey -VaultName $keyVault.VaultName `
    -Name <key> `
    -Destination 'Software'
```
To add a key with Azure CLI, call [az keyvault key create](/cli/azure/keyvault/key#az_keyvault_key_create). Remember to replace the placeholder values in brackets with your own values.
-```azurecli-interactive
+```azurecli
az keyvault key create \
    --name <key> \
    --vault-name <key-vault>
```
-## Configure encryption with customer-managed keys
+## Choose a managed identity to authorize access to the key vault
-Next, configure your Azure Storage account to use customer-managed keys with Azure Key Vault, then specify the key to associate with the storage account.
+When you enable customer-managed keys for a storage account, you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault.
-When you configure encryption with customer-managed keys, you can choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the key version is manually updated.
+The managed identity that authorizes access to the key vault may be either a user-assigned or system-assigned managed identity, depending on your scenario:
-> [!NOTE]
-> To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle the rotation of the key in Azure Key Vault, so you will need to rotate your key manually or create a function to rotate it on a schedule.
+- When you configure customer-managed keys at the time that you create a storage account, you must specify a user-assigned managed identity.
+- When you configure customer-managed keys on an existing storage account, you can specify either a user-assigned managed identity or a system-assigned managed identity.
-### Configure encryption for automatic updating of key versions
+To learn more about system-assigned versus user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
-Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version. When the customer-managed key is rotated in Azure Key Vault, Azure Storage will automatically begin using the latest version of the key for encryption.
+### Use a user-assigned managed identity to authorize access
-### [Azure portal](#tab/portal)
+A user-assigned managed identity is a standalone Azure resource. To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
-To configure customer-managed keys with automatic updating of the key version in the Azure portal, follow these steps:
+Both new and existing storage accounts can use a user-assigned identity to authorize access to the key vault. You must create the user-assigned identity before you configure customer-managed keys.
-1. Navigate to your storage account.
-1. On the **Settings** blade for the storage account, click **Encryption**. By default, key management is set to **Microsoft Managed Keys**, as shown in the following image.
+#### [Azure portal](#tab/portal)
- ![Portal screenshot showing encryption option](./media/customer-managed-keys-configure-key-vault/portal-configure-encryption-keys.png)
+When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see one of the following sections:
-1. Select the **Customer Managed Keys** option.
-1. Choose the **Select from Key Vault** option.
-1. Select **Select a key vault and key**.
-1. Select the key vault containing the key you want to use. You can also create a new key vault.
-1. Select the key from the key vault. You can also create a new key.
+- [Configure customer-managed keys for a new account](#configure-customer-managed-keys-for-a-new-account)
+- [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account)
- ![Screenshot showing how to select key vault and key](./media/customer-managed-keys-configure-key-vault/portal-select-key-from-key-vault.png)
+#### [PowerShell](#tab/powershell)
-1. Select the type of identity to use to authenticate access to the key vault. The options include **System-assigned** (the default) or **User-assigned**. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You will need these values in subsequent steps:
- 1. If you select **System-assigned**, the system-assigned managed identity for the storage account is created under the covers, if it does not already exist.
- 1. If you select **User-assigned**, then you must select an existing user-assigned identity that has permissions to access the key vault. To learn how to create a user-assigned identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+```azurepowershell
+$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>
+$userIdentityId = $userIdentity.Id
+$principalId = $userIdentity.PrincipalId
+```
- :::image type="content" source="media/customer-managed-keys-configure-key-vault/select-user-assigned-managed-identity-portal.png" alt-text="Screenshot showing how to select a user-assigned managed identity for key vault authentication":::
+#### [Azure CLI](#tab/azure-cli)
-1. Save your changes.
+To authorize access to the key vault with a user-assigned managed identity, you will need the resource ID and principal ID of the user-assigned managed identity. Call the [az identity show](/cli/azure/identity#az-identity-show) command to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You will need these values in subsequent steps:
-After you've specified the key, the Azure portal indicates that automatic updating of the key version is enabled and displays the key version currently in use for encryption.
+```azurecli
+userIdentityId=$(az identity show --name <user-assigned-identity> --resource-group <resource-group> --query id --output tsv)
+principalId=$(az identity show --name <user-assigned-identity> --resource-group <resource-group> --query principalId --output tsv)
+```
+
-### [PowerShell](#tab/powershell)
+### Use a system-assigned managed identity to authorize access
+
+A system-assigned managed identity is associated with an instance of an Azure service, in this case an Azure Storage account. You must explicitly assign a system-assigned managed identity to a storage account before you can use the system-assigned managed identity to authorize access to the key vault that contains your customer-managed key.
-To configure customer-managed keys with automatic updating of the key version with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later.
+Only existing storage accounts can use a system-assigned identity to authorize access to the key vault. New storage accounts must use a user-assigned identity if customer-managed keys are configured during account creation.
-You can use either a system-assigned managed identity or a user-assigned managed identity to authenticate access to the key vault. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+#### [Azure portal](#tab/portal)
-To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
+When you configure customer-managed keys with the Azure portal with a system-assigned managed identity, the system-assigned managed identity is assigned to the storage account for you under the covers. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
-```powershell
+#### [PowerShell](#tab/powershell)
+
+To assign a system-assigned managed identity to your storage account, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
+
+```azurepowershell
$storageAccount = Set-AzStorageAccount -ResourceGroupName <resource_group> `
    -Name <storage-account> `
    -AssignIdentity
-$objectId = $storageAccount.Identity.PrincipalId
```
-To authenticate access to the key vault with a user-assigned managed identity, first find the object ID of the user-assigned managed identity. To run this example, you'll need the resource ID of the user-assigned managed identity.
+Next, get the principal ID for the system-assigned managed identity, and save it to a variable. You will need this value in the next step to create the key vault access policy:
-```powershell
-$userManagedIdentityResourceId = '/subscriptions/{my subscription ID}/resourceGroups/{my resource group name}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{my managed identity name}'
-$objectId = (Get-AzResource -ResourceId $userManagedIdentityResourceId).Properties.PrincipalId
+```azurepowershell
+$principalId = $storageAccount.Identity.PrincipalId
```
-Next, to set the access policy for the key vault, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the identifier for the system-assigned managed identity. For more information about assigning the key vault access policy, see [Assign a Key Vault access policy using Azure PowerShell](../../key-vault/general/assign-access-policy-powershell.md)).
+#### [Azure CLI](#tab/azure-cli)
-```powershell
+To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [az storage account update](/cli/azure/storage/account#az_storage_account_update):
+
+```azurecli
+az storage account update \
+ --name <storage-account> \
+ --resource-group <resource_group> \
+ --assign-identity
+```
+
+Next, get the principal ID for the system-assigned managed identity, and save it to a variable. You will need this value in the next step to create the key vault access policy:
+
+```azurecli
+principalId=$(az storage account show --name <storage-account> --resource-group <resource_group> --query identity.principalId --output tsv)
+```
+++
+## Configure the key vault access policy
+
+The next step is to configure the key vault access policy. The key vault access policy grants permissions to the managed identity that will be used to authorize access to the key vault. For more information about assigning the key vault access policy, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+
+### [Azure portal](#tab/portal)
+
+When you configure customer-managed keys with the Azure portal, the key vault access policy is configured for you under the covers.
+
+### [PowerShell](#tab/powershell)
+
+To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the managed identity.
+
+```azurepowershell
Set-AzKeyVaultAccessPolicy `
    -VaultName $keyVault.VaultName `
- -ObjectId $objectId `
+ -ObjectId $principalId `
    -PermissionsToKeys wrapkey,unwrapkey,get
```
-For more information, see [Assign a Key Vault access policy using Azure PowerShell](../../key-vault/general/assign-access-policy-powershell.md)).
+### [Azure CLI](#tab/azure-cli)
-Finally, configure the customer-managed key. To automatically update the key version for the customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, as shown in the following example, and include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
+To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy), providing the variable for the principal ID that you previously retrieved for the managed identity.
-```powershell
-Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
- -AccountName $storageAccount.StorageAccountName `
- -KeyvaultEncryption `
- -KeyName $key.Name `
- -KeyVaultUri $keyVault.VaultUri
+```azurecli
+az keyvault set-policy \
+ --name <key-vault> \
+    --resource-group <resource_group> \
+ --object-id $principalId \
+ --key-permissions get unwrapKey wrapKey
```
-# [Azure CLI](#tab/azure-cli)
+
-To configure customer-managed keys with automatic updating of the key version with Azure CLI, install [Azure CLI version 2.4.0](/cli/azure/release-notes-azure-cli#april-21-2020) or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+## Configure customer-managed keys for a new account
-You can use either a system-assigned managed identity or a user-assigned managed identity to authenticate access to the key vault. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+When you configure encryption with customer-managed keys for a new storage account, you can choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the key version is manually updated.
-To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [az storage account update](/cli/azure/storage/account#az_storage_account_update):
+You must use an existing user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys while creating the storage account. The user-assigned managed identity must have appropriate permissions to access the key vault.
-```azurecli-interactive
-az storage account update \
- --name <storage-account> \
- --resource-group <resource_group> \
- --assign-identity
+### [Azure portal](#tab/portal)
+
+To configure customer-managed keys for a new storage account with automatic updating of the key version, follow these steps:
+
+1. In the Azure portal, navigate to the **Storage accounts** page, and select the **Create** button to create a new account.
+1. Follow the steps outlined in [Create a storage account](storage-account-create.md) to fill out the fields on the **Basics**, **Advanced**, **Networking**, and **Data Protection** tabs.
+1. On the **Encryption** tab, indicate for which services you want to enable support for customer-managed keys in the **Enable support for customer-managed keys** field.
+1. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
+1. In the **Encryption key** field, choose **Select a key vault and key**, and specify the key vault and key.
+1. For the **User-assigned identity** field, select an existing user-assigned managed identity.
+
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/portal-new-account-configure-cmk.png" alt-text="Screenshot showing how to configure customer-managed keys for a new storage account in Azure portal":::
+
+1. Select **Review + create** to validate and create the new account.
+
+You can also configure customer-managed keys with manual updating of the key version when you create a new storage account. Follow the steps described in [Configure encryption for manual updating of key versions](#configure-encryption-for-manual-updating-of-key-versions).
+
+### [PowerShell](#tab/powershell)
+
+To configure customer-managed keys for a new storage account with automatic updating of the key version, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), as shown in the following example. Use the variable you created previously for the resource ID for the user-assigned managed identity. You will also need the key vault URI and key name:
+
+```azurepowershell
+New-AzStorageAccount -ResourceGroupName <resource-group> `
+ -Name <storage-account> `
+ -Kind StorageV2 `
+ -SkuName Standard_LRS `
+ -Location $location `
+ -IdentityType SystemAssignedUserAssigned `
+ -UserIdentityId $userIdentityId `
+ -KeyVaultUri $keyVault.VaultUri `
+ -KeyName $key.Name `
+ -KeyVaultUserAssignedIdentityId $userIdentityId
```
-To authenticate access to the key vault with a user-assigned managed identity, first find the object ID of the user-assigned managed identity.
+### [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
-az identity show \
- --name <name-of-user-assigned-managed-identity> \
- --resource-group <resource-group>
+To configure customer-managed keys for a new storage account with automatic updating of the key version, call [az storage account create](/cli/azure/storage/account#az-storage-account-create), as shown in the following example. Use the variable you created previously for the resource ID for the user-assigned managed identity. You will also need the key vault URI and key name:
+
+```azurecli
+az storage account create \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --location <location> \
+ --sku Standard_LRS \
+ --kind StorageV2 \
+ --identity-type SystemAssigned,UserAssigned \
+ --user-identity-id <user-assigned-managed-identity> \
+ --encryption-key-vault <key-vault-uri> \
+ --encryption-key-name <key-name> \
+ --encryption-key-source Microsoft.Keyvault \
+ --key-vault-user-identity-id <user-assigned-managed-identity>
```
-Next, to set the access policy for the key vault, call [az keyvault set-policy](/cli/azure/keyvault#az_keyvault_set_policy) and provide the object ID of the managed identity:
++
+## Configure customer-managed keys for an existing account
-```azurecli-interactive
-az keyvault set-policy \
- --name <key-vault> \
- --resource-group <resource_group>
- --object-id <object-id> \
- --key-permissions get unwrapKey wrapKey
+When you configure encryption with customer-managed keys for an existing storage account, you can choose to automatically update the key version used for Azure Storage encryption whenever a new version is available in the associated key vault. Alternately, you can explicitly specify a key version to be used for encryption until the key version is manually updated.
+
+You can use either a system-assigned or user-assigned managed identity to authorize access to the key vault when you configure customer-managed keys for an existing storage account.
+
+> [!NOTE]
+> To rotate a key, create a new version of the key in Azure Key Vault. Azure Storage does not handle the rotation of the key in Azure Key Vault, so you will need to rotate your key manually or create a function to rotate it on a schedule.
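For example, because creating a key in Azure Key Vault under an existing key name generates a new version of that key, a manual rotation with Azure CLI can be a minimal sketch like the following; the vault and key names are placeholders:

```azurecli
# Creating a key with the name of an existing key adds a new version of that key
az keyvault key create \
    --vault-name <key-vault> \
    --name <key-name> \
    --kty RSA
```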
+
+### Configure encryption for automatic updating of key versions
+
+Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version. When the customer-managed key is rotated in Azure Key Vault, Azure Storage will automatically begin using the latest version of the key for encryption.
+
+### [Azure portal](#tab/portal)
+
+To configure customer-managed keys for an existing account with automatic updating of the key version in the Azure portal, follow these steps:
+
+1. Navigate to your storage account.
+1. On the **Settings** blade for the storage account, click **Encryption**. By default, key management is set to **Microsoft Managed Keys**, as shown in the following image.
+
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/portal-configure-encryption-keys.png" alt-text="Screenshot showing encryption options in Azure portal" lightbox="media/customer-managed-keys-configure-key-vault/portal-configure-encryption-keys.png":::
+
+1. Select the **Customer Managed Keys** option.
+1. Choose the **Select from Key Vault** option.
+1. Select **Select a key vault and key**.
+1. Select the key vault containing the key you want to use. You can also create a new key vault.
+1. Select the key from the key vault. You can also create a new key.
+
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/portal-select-key-from-key-vault.png" alt-text="Screenshot showing how to select key vault and key in Azure portal.":::
+
+1. Select the type of identity to use to authenticate access to the key vault. The options include **System-assigned** (the default) or **User-assigned**. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+ 1. If you select **System-assigned**, the system-assigned managed identity for the storage account is created under the covers, if it does not already exist.
+ 1. If you select **User-assigned**, then you must select an existing user-assigned identity that has permissions to access the key vault. To learn how to create a user-assigned identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/select-user-assigned-managed-identity-portal.png" alt-text="Screenshot showing how to select a user-assigned managed identity for key vault authentication":::
+
+1. Save your changes.
+
+After you've specified the key, the Azure portal indicates that automatic updating of the key version is enabled and displays the key version currently in use for encryption. The portal also displays the type of managed identity used to authorize access to the key vault and the principal ID for the managed identity.
++
+### [PowerShell](#tab/powershell)
+
+To configure customer-managed keys for an existing account with automatic updating of the key version with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later.
+
+Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, omitting the key version. Include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
+
+```azurepowershell
+Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
+ -AccountName $storageAccount.StorageAccountName `
+ -KeyvaultEncryption `
+ -KeyName $key.Name `
+ -KeyVaultUri $keyVault.VaultUri
```
-Finally, configure the customer-managed key. To automatically update the key version for a customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [az storage account update](/cli/azure/storage/account#az_storage_account_update) to update the storage account's encryption settings, as shown in the following example. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account.
+### [Azure CLI](#tab/azure-cli)
-Remember to replace the placeholder values in brackets with your own values.
+To configure customer-managed keys for an existing account with automatic updating of the key version with Azure CLI, install [Azure CLI version 2.4.0](/cli/azure/release-notes-azure-cli#april-21-2020) or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+Next, call [az storage account update](/cli/azure/storage/account#az_storage_account_update) to update the storage account's encryption settings, omitting the key version. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account.
-```azurecli-interactive
+```azurecli
key_vault_uri=$(az keyvault show \
    --name <key-vault> \
    --resource-group <resource_group> \
    --query properties.vaultUri \
    --output tsv)
az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --encryption-key-name <key> \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault $key_vault_uri
```
### Configure encryption for manual updating of key versions
-If you prefer to manually update the key version, then explicitly specify the version at the time that you configure encryption with customer-managed keys. In this case, Azure Storage will not automatically update the key version when a new version is created in the key vault.To use a new key version, you must manually update the version used for Azure Storage encryption.
+If you prefer to manually update the key version, then explicitly specify the version at the time that you configure encryption with customer-managed keys. In this case, Azure Storage will not automatically update the key version when a new version is created in the key vault. To use a new key version, you must manually update the version used for Azure Storage encryption.
# [Azure portal](#tab/portal)
To configure customer-managed keys with manual updating of the key version in the Azure portal, follow these steps:
1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then click the key to view its versions. Select a key version to view the settings for that version.
1. Copy the value of the **Key Identifier** field, which provides the URI.
- ![Screenshot showing key vault key URI](media/customer-managed-keys-configure-key-vault/portal-copy-key-identifier.png)
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/portal-copy-key-identifier.png" alt-text="Screenshot showing key vault key URI in Azure portal":::
1. In the **Encryption key** settings for your storage account, choose the **Enter key URI** option.
1. Paste the URI that you copied into the **Key URI** field. Include the key version in the URI to configure manual updating of the key version.
- ![Screenshot showing how to enter key URI](./media/customer-managed-keys-configure-key-vault/portal-specify-key-uri.png)
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/portal-specify-key-uri.png" alt-text="Screenshot showing how to enter key URI in Azure portal":::
1. Specify the subscription that contains the key vault.
1. Specify either a system-assigned or user-assigned managed identity.
To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption for the storage account.
Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
-```powershell
+```azurepowershell
Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
    -AccountName $storageAccount.StorageAccountName `
    -KeyvaultEncryption `
    -KeyName $key.Name `
    -KeyVersion $key.Version `
    -KeyVaultUri $keyVault.VaultUri
```
-When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
+When you manually update the key version, you will need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
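As a sketch of that flow, using the variables from the previous examples; `<key-name>` is a placeholder:

```azurepowershell
# Get the current (latest) version of the key
$key = Get-AzKeyVaultKey -VaultName $keyVault.VaultName -Name <key-name>

# Point the storage account's encryption settings at the new key version
Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
    -AccountName $storageAccount.StorageAccountName `
    -KeyvaultEncryption `
    -KeyName $key.Name `
    -KeyVersion $key.Version `
    -KeyVaultUri $keyVault.VaultUri
```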
# [Azure CLI](#tab/azure-cli)
To configure customer-managed keys with manual updating of the key version, explicitly provide the key version when you configure encryption for the storage account.
Remember to replace the placeholder values in brackets with your own values.
-```azurecli-interactive
+```azurecli
key_vault_uri=$(az keyvault show \
    --name <key-vault> \
    --resource-group <resource_group> \
    --query properties.vaultUri \
    --output tsv)
az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --encryption-key-name <key> \
    --encryption-key-version <key-version> \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault $key_vault_uri
```
-When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az_keyvault_show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az_keyvault_key_list-versions). Then call [az storage account update](/cli/azure/storage/account#az_storage_account_update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
+When you manually update the key version, you will need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az_keyvault_show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az_keyvault_key_list-versions). Then call [az storage account update](/cli/azure/storage/account#az_storage_account_update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
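As a sketch of that flow, with the same placeholders as the previous examples; this assumes the last entry returned by `az keyvault key list-versions` is the newest version:

```azurecli
# Query the key vault URI
key_vault_uri=$(az keyvault show \
    --name <key-vault> \
    --resource-group <resource_group> \
    --query properties.vaultUri \
    --output tsv)

# Extract the latest key version from the key identifier (kid) URI
key_version=$(az keyvault key list-versions \
    --name <key> \
    --vault-name <key-vault> \
    --query "[-1].kid" \
    --output tsv | cut -d '/' -f 6)

# Update the storage account to use the new key version
az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --encryption-key-name <key> \
    --encryption-key-version $key_version \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault $key_vault_uri
```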
To change the key with the Azure portal, follow these steps:
# [PowerShell](#tab/powershell)
-To change the key with PowerShell, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) as shown in [Configure encryption with customer-managed keys](#configure-encryption-with-customer-managed-keys) and provide the new key name and version. If the new key is in a different key vault, then you must also update the key vault URI.
+To change the key with PowerShell, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) as shown in [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account) and provide the new key name and version. If the new key is in a different key vault, then you must also update the key vault URI.
# [Azure CLI](#tab/azure-cli)
-To change the key with Azure CLI, call [az storage account update](/cli/azure/storage/account#az_storage_account_update) as shown in [Configure encryption with customer-managed keys](#configure-encryption-with-customer-managed-keys) and provide the new key name and version. If the new key is in a different key vault, then you must also update the key vault URI.
+To change the key with Azure CLI, call [az storage account update](/cli/azure/storage/account#az_storage_account_update) as shown in [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account) and provide the new key name and version. If the new key is in a different key vault, then you must also update the key vault URI.
To revoke customer-managed keys with the Azure portal, disable the key as descri
You can revoke customer-managed keys by removing the key vault access policy. To revoke a customer-managed key with PowerShell, call the [Remove-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/remove-azkeyvaultaccesspolicy) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
-```powershell
+```azurepowershell
Remove-AzKeyVaultAccessPolicy -VaultName $keyVault.VaultName `
    -ObjectId $storageAccount.Identity.PrincipalId
```
You can revoke customer-managed keys by removing the key vault access policy. To revoke a customer-managed key with Azure CLI, call the [az keyvault delete-policy](/cli/azure/keyvault#az_keyvault_delete_policy) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
-```azurecli-interactive
+```azurecli
az keyvault delete-policy \
    --name <key-vault> \
    --object-id $storage_account_principal
```
To disable customer-managed keys in the Azure portal, follow these steps:
To disable customer-managed keys with PowerShell, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) with the `-StorageEncryption` option, as shown in the following example. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
-```powershell
+```azurepowershell
Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
    -AccountName $storageAccount.StorageAccountName `
    -StorageEncryption
```
To disable customer-managed keys with Azure CLI, call [az storage account update](/cli/azure/storage/account#az_storage_account_update) and set the `--encryption-key-source` parameter to `Microsoft.Storage`, as shown in the following example. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
-```azurecli-interactive
+```azurecli
az storage account update \
    --name <storage-account> \
    --resource-group <resource_group> \
    --encryption-key-source Microsoft.Storage
```
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 01/13/2022 Last updated : 01/24/2022
The following diagram shows how Azure Storage uses Azure AD and a key vault or m
The following list explains the numbered steps in the diagram:
-1. An Azure Key Vault admin grants permissions to encryption keys to either a user-assigned managed identity, or to the system-assigned managed identity that's associated with the storage account.
+1. An Azure Key Vault admin grants permissions to encryption keys to a managed identity. The managed identity may be either a user-assigned managed identity that you create and manage, or a system-assigned managed identity that is associated with the storage account.
1. An Azure Storage admin configures encryption with a customer-managed key for the storage account.
1. Azure Storage uses the managed identity to which the Azure Key Vault admin granted permissions in step 1 to authenticate access to Azure Key Vault via Azure AD.
1. Azure Storage wraps the account encryption key with the customer-managed key in Azure Key Vault.
1. For read/write operations, Azure Storage sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.
-The managed identity that's associated with the storage account must have these permissions at a minimum to access a customer-managed key in Azure Key Vault:
+The managed identity that is associated with the storage account must have these permissions at a minimum to access a customer-managed key in Azure Key Vault:
- *wrapkey*
- *unwrapkey*
- *get*
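As an illustration, granting exactly these permissions to the system-assigned managed identity of a storage account with Azure CLI might look like the following sketch; the account and vault names are placeholders:

```azurecli
# Look up the principal ID of the storage account's system-assigned managed identity
principal_id=$(az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --query identity.principalId \
    --output tsv)

# Grant the minimum key permissions on the vault
az keyvault set-policy \
    --name <key-vault> \
    --object-id $principal_id \
    --key-permissions get unwrapKey wrapKey
```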
Data in Blob storage and Azure Files is always protected by customer-managed key
When you configure a customer-managed key, Azure Storage wraps the root data encryption key for the account with the customer-managed key in the associated key vault or managed HSM. Enabling customer-managed keys does not impact performance, and takes effect immediately.
-When you enable or disable customer managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account does not need to be re-encrypted.
+When you enable or disable customer-managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account does not need to be re-encrypted.
-You can enable customer-managed keys on existing storage accounts or on new accounts when you create them. When you enable customer-managed keys while creating an account, only user-assigned managed identities are available. To use a system-assigned managed identity, you must first create the account and then enable customer-managed keys, because the system-assigned managed identity can exist only after the account is created. For more information on system-assigned versus user-assigned managed identities, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+You can enable customer-managed keys on both new and existing storage accounts. When you enable customer-managed keys, you must specify a managed identity to be used to authorize access to the key vault that contains the key. The managed identity may be either a user-assigned or system-assigned managed identity:
+
+- When you configure customer-managed keys at the time that you create a storage account, you must use a user-assigned managed identity.
+- When you configure customer-managed keys on an existing storage account, you can use either a user-assigned managed identity or a system-assigned managed identity.
+
+To learn more about system-assigned versus user-assigned managed identities, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
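If you do not yet have a user-assigned managed identity to supply at account creation time, you can create one first. A minimal Azure CLI sketch, with placeholder names:

```azurecli
az identity create \
    --name <user-assigned-identity> \
    --resource-group <resource-group>
```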
You can switch between customer-managed keys and Microsoft-managed keys at any time. For more information about Microsoft-managed keys, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management).
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-create.md
Previously updated : 01/13/2022 Last updated : 01/24/2022
The following table describes the fields on the **Advanced** tab.
| Blob storage | Allow cross-tenant replication | Required | By default, users with appropriate permissions can configure object replication across Azure AD tenants. To prevent replication across tenants, deselect this option. For more information, see [Prevent replication across Azure AD tenants](../blobs/object-replication-overview.md#prevent-replication-across-azure-ad-tenants). |
| Blob storage | Access tier | Required | Blob access tiers enable you to store blob data in the most cost-effective manner, based on usage. Select the hot tier (default) for frequently accessed data. Select the cool tier for infrequently accessed data. For more information, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md). |
| Azure Files | Enable large file shares | Optional | Available only for standard file shares with the LRS or ZRS redundancies. |
-| Tables and queues | Enable support for customer-managed keys | Optional | To enable support for customer-managed keys for tables and queues, you must select this setting at the time that you create the storage account. For more information, see [Create an account that supports customer-managed keys for tables and queues](account-encryption-key-create.md). |
### Networking tab
The following table describes the fields on the **Data protection** tab.
### Encryption tab
-On the **Encryption** tab, you can configure options that relate to how your data is encrypted when it is persisted to the cloud. Some of these options can be configured only when you create the storage account.
+On the **Encryption** tab, you can configure options that relate to how your data is encrypted when it is persisted to the cloud. Some of these options can be configured only when you create the storage account.
| Field | Required or optional | Description |
|--|--|--|
| Encryption type| Required | By default, data in the storage account is encrypted by using Microsoft-managed keys. You can rely on Microsoft-managed keys for the encryption of your data, or you can manage encryption with your own keys. For more information, see [Azure Storage encryption for data at rest](storage-service-encryption.md). |
-| Enable support for customer-managed keys | Required | By default, customer managed keys can be used to encrypt only blobs and files. You can use the options presented in this section to enable support for tables and queues as well. This option can be configured only when you create the storage account. For more information, see [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md). |
+| Enable support for customer-managed keys | Required | By default, customer-managed keys can be used to encrypt only blobs and files. Set this option to **All service types (blobs, files, tables, and queues)** to enable support for customer-managed keys for all services. You are not required to use customer-managed keys if you choose this option. For more information, see [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md). |
+| Encryption key | Required if **Encryption type** field is set to **Customer-managed keys**. | If you choose **Select a key vault and key**, you are presented with the option to navigate to the key vault and key that you wish to use. If you choose **Enter key from URI**, then you are presented with a field to enter the key URI and the subscription. |
+| User-assigned identity | Required if **Encryption type** field is set to **Customer-managed keys**. | If you are configuring customer-managed keys at create time for the storage account, you must provide a user-assigned identity to use for authorizing access to the key vault. |
| Enable infrastructure encryption | Optional | By default, infrastructure encryption is not enabled. Enable infrastructure encryption to encrypt your data at both the service level and the infrastructure level. For more information, see [Create a storage account with infrastructure encryption enabled for double encryption of data](infrastructure-encryption-enable.md). |

### Tags tab
storage Storage Files Migration Robocopy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-migration-robocopy.md
RoboCopy, as a trusted, Windows-based copy tool, has the home-turf advantage whe
AzCopy, on the other hand, has only recently expanded to support file copy with some fidelity and added the first features needed to be considered as a migration tool. However, there are still gaps and there can easily be misunderstandings of functionality when comparing AzCopy flags to RoboCopy flags.
-An example: *RoboCopy /MIR* will mirror source to target - that means added, changed, and deleted files are considered. An important difference to *AzCopy -sync* is that deleted files on the source will not be removed on the target. That makes for an incomplete differential-copy feature set. AzCopy will continue to evolve. At this time AzCopy is not a recommended tool for migration scenarios with Azure file shares as the target.
+An example: *RoboCopy /MIR* will mirror source to target - that means added, changed, and deleted files are considered. An important difference in using *AzCopy -sync* is that deleted files on the source are not removed on the target. That makes for an incomplete differential-copy feature set. AzCopy will continue to evolve. At this time, AzCopy is not a recommended tool for migration scenarios with Azure file shares as the target.
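To make the behavioral difference concrete, compare the two invocations in the following sketch; the local path, server share, storage account, file share, and SAS token are all placeholders:

```console
:: RoboCopy mirror: added, changed, and deleted files on the source are all propagated to the target
robocopy C:\Source \\server\share /MIR

:: AzCopy sync: files deleted on the source are, by default, left in place on the target
azcopy sync "C:\Source" "https://<account>.file.core.windows.net/<share>?<SAS>" --recursive
```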
## Migration goals
synapse-analytics Tutorial Configure Cognitive Services Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md
You can create a [Text Analytics](https://ms.portal.azure.com/#create/Microsoft.
![Screenshot that shows Text Analytics in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00b.png)
-You can create an [Anomaly Detector](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) resource in the Azure portal:
+You can create an [Anomaly Detector](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal:
![Screenshot that shows Anomaly Detector in the portal, with the Create button.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00a.png)
virtual-desktop Compare Virtual Desktop Windows 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/compare-virtual-desktop-windows-365.md
The following table describes high-level differences in the technical features b
|-|--|--|--|--|
|Design|Designed to be flexible.|Designed to be flexible.|Designed to be simple and easy to use.|Designed to be simple and easy to use.|
|Type of desktop|Personal desktop|Pooled (single and multi-session) desktop|Personal desktop|Personal desktop|
-|Pricing model|Based on your own resource usage|Based on your own resource usage|Fixed per-user pricing ([Windows 365 Enterprise pricing](https://www.microsoft.com/windows-365/enterprise/compare-plans-pricing-b))|Fixed per-user pricing ([Windows 365 Business pricing](https://www.microsoft.com/windows-365/business/compare-plans-pricing-b))|
+|Pricing model|Based on your own resource usage|Based on your own resource usage|Fixed per-user pricing ([Windows 365 Enterprise pricing](https://www.microsoft.com/windows-365/enterprise/compare-plans-pricing))|Fixed per-user pricing ([Windows 365 Business pricing](https://www.microsoft.com/windows-365/business/compare-plans-pricing))|
|Subscription|Customer-managed|Customer-managed|Microsoft-managed (except networking)|Fully Microsoft-managed|
|VM stock-keeping units (SKUs)|Any Azure virtual machine (VM) including graphics processing unit (GPU)-enabled SKUs|Any Azure VM including GPU-enabled SKUs|Multiple optimized options for a range of use cases|Multiple optimized options for a range of use cases|
|Backup|Azure backup services|Azure backup services|Local redundant storage for disaster recovery|Local redundant storage for disaster recovery|
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set.md
Title: Associate a virtual machine scale set to a Capacity Reservation Group (preview)
+ Title: Associate a virtual machine scale set to a Capacity Reservation group (preview)
description: Learn how to associate a new or existing virtual machine scale set to a Capacity Reservation group.
Last updated 08/09/2021
-# Associate a virtual machine scale set to a Capacity Reservation Group (preview)
+# Associate a virtual machine scale set to a Capacity Reservation group (preview)
Virtual Machine Scale Sets have two modes:

-- **Uniform Orchestration Mode:** In this mode, virtual machine scale sets use a VM profile or a template to scale up to the desired capacity. While there's some ability to manage or customize individual VM instances, Uniform uses identical VM instances. These instances are exposed through the virtual machine scale sets VM APIs and aren't compatible with the standard Azure IaaS VM API commands. Since the scale set performs all the actual VM operations, reservations are associated with the virtual machine scale set directly. Once the scale set is associated with the reservation, all the subsequent VM allocations will be done against the reservation.
-- **Flexible Orchestration Mode:** In this mode, you get more flexibility managing the individual virtual machine scale set VM instances as they can use the standard Azure IaaS VM APIs instead of using the scale set interface. This mode won't work with Capacity Reservation during public preview.
+- **Uniform Orchestration Mode:** In this mode, virtual machine scale sets use a VM profile or a template to scale up to the desired capacity. While there is some ability to manage or customize individual VM instances, Uniform uses identical VM instances. These instances are exposed through the virtual machine scale sets VM APIs and are not compatible with the standard Azure IaaS VM API commands. Since the scale set performs all the actual VM operations, reservations are associated with the virtual machine scale set directly. Once the scale set is associated with the reservation, all the subsequent VM allocations will be done against the reservation.
+- **Flexible Orchestration Mode:** In this mode, you get more flexibility managing the individual virtual machine scale set VM instances as they can use the standard Azure IaaS VM APIs instead of using the scale set interface. This mode will not work with Capacity Reservation during public preview.
-To learn more about these modes, go to [Virtual Machine Scale Sets Orchestration Modes](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md). The rest of this article will cover how to associate a Uniform virtual machine scale set to a Capacity Reservation Group.
+To learn more about these modes, go to [Virtual Machine Scale Sets Orchestration Modes](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md). The rest of this article will cover how to associate a Uniform virtual machine scale set to a Capacity Reservation group.
> [!IMPORTANT]
> Capacity Reservation is currently in public preview.
To learn more about these modes, go to [Virtual Machine Scale Sets Orchestration
There are some other restrictions while using Capacity Reservation. For the complete list, refer to the [Capacity Reservations overview](capacity-reservation-overview.md).
-## Associate a new virtual machine scale set to a Capacity Reservation Group
+## Associate a new virtual machine scale set to a Capacity Reservation group
### [API](#tab/api1)
-To associate a new Uniform virtual machine scale set to a Capacity Reservation Group, construct the following PUT request to the *Microsoft.Compute* provider:
+To associate a new Uniform virtual machine scale set to a Capacity Reservation group, construct the following PUT request to the *Microsoft.Compute* provider:
```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}?api-version=2021-04-01
```
-Add the `capacityReservationGroup` property in the `virtualMachineProfile` as shown below:
+Add the `capacityReservationGroup` property in the `virtualMachineProfile` property:
```json
{
  "properties": {
    "virtualMachineProfile": {
      "capacityReservation": {
        "capacityReservationGroup": {
          "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/capacityReservationGroups/{capacityReservationGroupName}"
        }
      }
    }
  }
}
```
Add the `capacityReservationGroup` property in the `virtualMachineProfile` as sh
### [CLI](#tab/cli1)
-Use `az vmss create` to create a new virtual machine scale set and add the `capacity-reservation-group` property to associate the scale set to an existing capacity reservation group. The example below creates a Uniform scale set for a Standard_Ds1_v2 VM in the East US location and associates the scale set to a capacity reservation group.
+Use `az vmss create` to create a new virtual machine scale set and add the `capacity-reservation-group` property to associate the scale set to an existing Capacity Reservation group. The following example creates a Uniform scale set for a Standard_Ds1_v2 VM in the East US location and associates the scale set to a Capacity Reservation group.
```azurecli-interactive
az vmss create
az vmss create
### [PowerShell](#tab/powershell1)
-Use `New-AzVmss` to create a new virtual machine scale set and add the `CapacityReservationGroupId` property to associate the scale set to an existing capacity reservation group. The example below creates a Uniform scale set for a Standard_Ds1_v2 VM in the East US location and associates the scale set to a capacity reservation group.
+Use `New-AzVmss` to create a new virtual machine scale set and add the `CapacityReservationGroupId` property to associate the scale set to an existing Capacity Reservation group. The following example creates a Uniform scale set for a Standard_Ds1_v2 VM in the East US location and associates the scale set to a Capacity Reservation group.
```powershell-interactive
$vmssName = <"VMSSNAME">
To learn more, go to Azure PowerShell command [New-AzVmss](/powershell/module/az
An [ARM template](../azure-resource-manager/templates/overview.md) is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.
-ARM templates let you deploy groups of related resources. In a single template, you can create capacity reservation group and capacity reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
+ARM templates let you deploy groups of related resources. In a single template, you can create a Capacity Reservation group and Capacity Reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration/continuous delivery (CI/CD) pipelines.
-If your environment meets the prerequisites and you're familiar with using ARM templates, use this [Create Virtual Machine Scale Sets with Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/VirtualMachineScaleSetWithReservation.json) template.
+If your environment meets the prerequisites and you are familiar with using ARM templates, use this [Create Virtual Machine Scale Sets with Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/VirtualMachineScaleSetWithReservation.json) template.
<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
-## Associate an existing virtual machine scale set to Capacity Reservation Group
+## Associate an existing virtual machine scale set to Capacity Reservation group
-For Public Preview, in order to associate an existing Uniform virtual machine scale set to the capacity reservation group, it is required to first deallocate the scale set and then do the association at the time of reallocation. This ensures that all the scale set VMs consume capacity reservation at the time of reallocation.
+During public preview, you are first required to deallocate your scale set. Then you can associate the existing Uniform virtual machine scale set to the Capacity Reservation group at the time of reallocation. This ensures that all the scale set VMs consume Capacity Reservation at the time of reallocation.
### Important notes on Upgrade Policies

-- **Automatic Upgrade** – In this mode, the scale set VM instances are automatically associated with the Capacity Reservation Group without any further action from you. When the scale set VMs are reallocated, they start consuming the reserved capacity.
-- **Rolling Upgrade** – In this mode, scale set VM instances are associated with the Capacity Reservation Group without any further action from you. However, they're updated in batches with an optional pause time between them. When the scale set VMs are reallocated, they start consuming the reserved capacity.
-- **Manual Upgrade** – In this mode, nothing happens to the scale set VM instances when the virtual machine scale set is attached to a capacity reservation group. You'll need to do individual updates to each scale set VM by [upgrading it with the latest Scale Set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model).
+- **Automatic Upgrade** – In this mode, the scale set VM instances are automatically associated with the Capacity Reservation group without any further action from you. When the scale set VMs are reallocated, they start consuming the reserved capacity.
+- **Rolling Upgrade** – In this mode, scale set VM instances are associated with the Capacity Reservation group without any further action from you. However, they are updated in batches with an optional pause time between them. When the scale set VMs are reallocated, they start consuming the reserved capacity.
+- **Manual Upgrade** – In this mode, nothing happens to the scale set VM instances when the virtual machine scale set is attached to a Capacity Reservation group. You will need to do individual updates to each scale set VM by [upgrading it with the latest Scale Set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model). If you need to check or change the upgrade policy first, see the sketch after this list.
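A sketch using the generic `--set` argument of `az vmss update`; the names are placeholders:

```azurecli
# Check the scale set's current upgrade policy mode
az vmss show \
    --name <scale-set> \
    --resource-group <resource-group> \
    --query upgradePolicy.mode

# Switch the scale set to automatic upgrades
az vmss update \
    --name <scale-set> \
    --resource-group <resource-group> \
    --set upgradePolicy.mode=Automatic
```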
### [API](#tab/api2)
For Public Preview, in order to associate an existing Uniform virtual machine sc
  --name myVMSS
```
-1. Associate the scale set to the capacity reservation group.
+1. Associate the scale set to the Capacity Reservation group.
```azurecli-interactive
az vmss update
For Public Preview, in order to associate an existing Uniform virtual machine sc
-VMScaleSetName "myVmss"
```
-1. Associate the scale set to the capacity reservation group.
+1. Associate the scale set to the Capacity Reservation group.
```powershell-interactive
$vmss =
To learn more, go to Azure PowerShell commands [Stop-AzVmss](/powershell/module/
## View virtual machine scale set association with Instance View
-Once the Uniform virtual machine scale set is associated with the Capacity Reservation Group, all the subsequent VM allocations will happen against the Capacity Reservation. Azure automatically finds the matching Capacity Reservation in the group and consumes a reserved slot.
+Once the Uniform virtual machine scale set is associated with the Capacity Reservation group, all the subsequent VM allocations will happen against the Capacity Reservation. Azure automatically finds the matching Capacity Reservation in the group and consumes a reserved slot.
### [API](#tab/api3)
-The Capacity Reservation Group *Instance View* will reflect the new scale set VMs under the `virtualMachinesAssociated` & `virtualMachinesAllocated` properties as shown below:
+The Capacity Reservation group *Instance View* will reflect the new scale set VMs under the `virtualMachinesAssociated` & `virtualMachinesAllocated` properties:
```rest
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}?$expand=instanceview&api-version=2021-04-01
```
az capacity reservation group show
### [PowerShell](#tab/powershell3)
-View your virtual machine scale set and capacity reservation group association with Instance View using PowerShell.
+View your virtual machine scale set and Capacity Reservation group association with Instance View using PowerShell.
```powershell-interactive
$CapRes=
To learn more, go to Azure PowerShell command [Get-AzCapacityReservationGroup](/
### [Portal](#tab/portal3)

1. Open [Azure portal](https://portal.azure.com)
-1. Go to your capacity reservation group
-1. Select **Resources** under **Setting** on the left
-1. In the table, you will be able to see all the scale set VMs that are associated with the capacity reservation group
+1. Go to your Capacity Reservation group
+1. Select **Resources** under **Settings**
+1. In the table, you will be able to see all the scale set VMs that are associated with the Capacity Reservation group
<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
Virtual machine scale sets can be created regionally or in one or more Availabil
>[!IMPORTANT]
-> The location (Region and Availability Zones) of the virtual machine scale set and the Capacity Reservation Group must match for the association to succeed. For a regional scale set, the region must match between the scale set and the Capacity Reservation Group. For a zonal scale set, both the regions and the zones must match between the scale set and the Capacity Reservation Group.
+> The location (Region and Availability Zones) of the virtual machine scale set and the Capacity Reservation group must match for the association to succeed. For a regional scale set, the region must match between the scale set and the Capacity Reservation group. For a zonal scale set, both the regions and the zones must match between the scale set and the Capacity Reservation group.
-When a scale set is spread across multiple zones, it always attempts to deploy evenly across the included Availability Zones. Because of that even deployment, a Capacity Reservation Group should always have the same quantity of reserved VMs in each zone. As an illustration of why this is important, consider the example below.
+When a scale set is spread across multiple zones, it always attempts to deploy evenly across the included Availability Zones. Because of that even deployment, a Capacity Reservation group should always have the same quantity of reserved VMs in each zone. As an illustration of why this is important, consider the following example.
In this example, each zone has a different quantity reserved. Let's say that the virtual machine scale set scales out to 75 instances. Since the scale set will always attempt to deploy evenly across zones, the VM distribution should look like this:
In this example, each zone has a different quantity reserved. Let's say that t
| 2 | 20 | 25 | 0 | 5 |
| 3 | 15 | 25 | 0 | 10 |
-In this case, the scale set is incurring extra cost for 15 unused instances in Zone 1. The scale-out is also relying on 5 VMs in Zone 2 and 10 VMs in Zone 3 that aren't protected by Capacity Reservation. If each zone had 25 capacity instances reserved, then all 75 VMs would be protected by Capacity Reservation and the deployment wouldn't incur any extra cost for unused instances.
+In this case, the scale set is incurring extra cost for 15 unused instances in Zone 1. The scale-out is also relying on 5 VMs in Zone 2 and 10 VMs in Zone 3 that are not protected by Capacity Reservation. If each zone had 25 capacity instances reserved, then all 75 VMs would be protected by Capacity Reservation and the deployment would not incur any extra cost for unused instances.
-Since the reservations can be overallocated, the scale set can continue to scale normally beyond the limits of the reservation. The only difference is that the VMs allocated above the quantity reserved aren't covered by Capacity Reservation SLA. To learn more, go to [Overallocating Capacity Reservation](capacity-reservation-overallocate.md).
+Since the reservations can be overallocated, the scale set can continue to scale normally beyond the limits of the reservation. The only difference is that the VMs allocated above the quantity reserved are not covered by Capacity Reservation SLA. To learn more, go to [Overallocating Capacity Reservation](capacity-reservation-overallocate.md).
## Next steps
virtual-machines Capacity Reservation Associate Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-associate-vm.md
Title: Associate a virtual machine to a Capacity Reservation group (preview)
description: Learn how to associate a new or existing virtual machine to a Capacity Reservation group.
Last updated 01/03/2022
# Associate a VM to a Capacity Reservation group (preview)
-This article walks you through the steps of associating a new or existing virtual machine to a Capacity Reservation Group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
+This article walks through the steps of associating a new or existing virtual machine to a Capacity Reservation group. To learn more about Capacity Reservations, see the [overview article](capacity-reservation-overview.md).
> [!IMPORTANT]
> Capacity Reservation is currently in public preview.
This article walks you through the steps of associating a new or existing virtua
## Associate a new VM
-To associate a new VM to the Capacity Reservation Group, the group must be explicitly referenced as a property of the virtual machine. This reference protects the matching reservation in the group from accidental consumption by less critical applications and workloads that aren't intended to use it.
+To associate a new VM to the Capacity Reservation group, the group must be explicitly referenced as a property of the virtual machine. This reference protects the matching reservation in the group from accidental consumption by less critical applications and workloads that are not intended to use it.
### [API](#tab/api1)
To add the `capacityReservationGroup` property to a VM, construct the following
```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{VirtualMachineName}?api-version=2021-04-01
```
-In the request body, include the `capacityReservationGroup` property as shown below:
+In the request body, include the `capacityReservationGroup` property:
```json
{
  "properties": {
    "capacityReservation": {
      "capacityReservationGroup": {
        "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/capacityReservationGroups/{capacityReservationGroupName}"
      }
    }
  }
}
```
In the request body, include the `capacityReservationGroup` property as shown be
1. Under *Administrator account*, provide a **username** and a **password**
1. The password must be at least 12 characters long and meet the defined complexity requirements
1. Go to the *Advanced* section
-1. In the **Capacity Reservations** dropdown, select the capacity reservation group that you want the VM to be associated with
+1. In the **Capacity Reservations** dropdown, select the Capacity Reservation group that you want the VM to be associated with
1. Select the **Review + create** button
1. After validation runs, select the **Create** button
1. After the deployment is complete, select **Go to resource**

### [CLI](#tab/cli1)
-Use `az vm create` to create a new VM and add the `capacity-reservation-group` property to associate it to an existing capacity reservation group. The example below creates a Standard_D2s_v3 VM in the East US location and associate the VM to a capacity reservation group.
+Use `az vm create` to create a new VM and add the `capacity-reservation-group` property to associate it to an existing Capacity Reservation group. The following example creates a Standard_D2s_v3 VM in the East US location and associates the VM with a Capacity Reservation group.
```azurecli-interactive
az vm create
az vm create
### [PowerShell](#tab/powershell1)
-Use `New-AzVM` to create a new VM and add the `CapacityReservationGroupId` property to associate it to an existing capacity reservation group. The example below creates a Standard_D2s_v3 VM in the East US location and associate the VM to a capacity reservation group.
+Use `New-AzVM` to create a new VM and add the `CapacityReservationGroupId` property to associate it to an existing Capacity Reservation group. The following example creates a Standard_D2s_v3 VM in the East US location and associates the VM with a Capacity Reservation group.
```powershell-interactive
New-AzVm
To learn more, go to Azure PowerShell command [New-AzVM](/powershell/module/az.c
An [ARM template](../azure-resource-manager/templates/overview.md) is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.
-ARM templates let you deploy groups of related resources. In a single template, you can create capacity reservation group and capacity reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
+ARM templates let you deploy groups of related resources. In a single template, you can create a Capacity Reservation group and Capacity Reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration/continuous delivery (CI/CD) pipelines.
-If your environment meets the prerequisites and you're familiar with using ARM templates, use this [Create VM with Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/VirtualMachineWithReservation.json) template.
+If your environment meets the prerequisites and you are familiar with using ARM templates, use this [Create VM with Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/VirtualMachineWithReservation.json) template.
If your environment meets the prerequisites and you're familiar with using ARM t
## Associate an existing VM
-While Capacity Reservation is in preview, to associate an existing VM to a Capacity Reservation Group, it is required to first deallocate the VM and then do the association at the time of reallocation. This process ensures the VM consumes one of the empty spots in the reservation.
+During public preview, you are first required to deallocate your VM. Then you can associate the existing VM to the Capacity Reservation group at the time of reallocation. This process ensures that the VM consumes one of the empty spots in the reservation.
### [API](#tab/api2)
While Capacity Reservation is in preview, to associate an existing VM to a Capac
1. Open [Azure portal](https://portal.azure.com) 1. Go to your virtual machine
-1. Select **Overview** on the left
+1. Select **Overview**
1. Select **Stop** at the top of the page to deallocate the VM
1. Go to **Configurations** on the left
-1. In the **Capacity Reservation Group** dropdown, select the group that you want the VM to be associated with
+1. In the **Capacity Reservation group** dropdown, select the group that you want the VM to be associated with
### [CLI](#tab/cli2)
While Capacity Reservation is in preview, to associate an existing VM to a Capac
  -n myVM
```
-1. Associate the VM to a capacity reservation group
+1. Associate the VM to a Capacity Reservation group
```azurecli-interactive
az vm update
While Capacity Reservation is in preview, to associate an existing VM to a Capac
-Name "myVM" ```
-1. Associate the VM to a capacity reservation group
+1. Associate the VM to a Capacity Reservation group
```powershell-interactive
$VirtualMachine =
To learn more, go to Azure PowerShell commands [Stop-AzVM](/powershell/module/az
## View VM association with Instance View
-Once the `capacityReservationGroup` property is set, an association now exists between the VM and the group. Azure automatically finds the matching capacity reservation in the group and consumes a reserved slot. The Capacity ReservationΓÇÖs *Instance View* will reflect the new VM in the `virtualMachinesAllocated` property as shown below:
+Once the `capacityReservationGroup` property is set, an association now exists between the VM and the group. Azure automatically finds the matching Capacity Reservation in the group and consumes a reserved slot. The Capacity Reservation's *Instance View* will reflect the new VM in the `virtualMachinesAllocated` property:
### [API](#tab/api3)
To learn more, go to Azure PowerShell command [Get-AzCapacityReservation](/power
### [Portal](#tab/portal3)

1. Open [Azure portal](https://portal.azure.com)
-1. Go to your capacity reservation group
+1. Go to your Capacity Reservation group
1. Select **Resources** under **Settings** on the left
-1. Look at the table to see all the VMs that are associated with the capacity reservation group
+1. Look at the table to see all the VMs that are associated with the Capacity Reservation group
<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
To learn more, go to Azure PowerShell command [Get-AzCapacityReservation](/power
## Next steps
> [!div class="nextstepaction"]
-> [Remove a VMs association to a Capacity Reservation Group](capacity-reservation-remove-vm.md)
+> [Remove a VM's association to a Capacity Reservation group](capacity-reservation-remove-vm.md)
virtual-machines Capacity Reservation Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-create.md
Title: Create a Capacity Reservation in Azure (preview) description: Learn how to reserve Compute capacity in an Azure region or an Availability Zone by creating a Capacity Reservation.--++ Last updated 08/09/2021
# Create a Capacity Reservation (preview)
-Capacity Reservation is always created as part of a Capacity Reservation Group. The first step is to create a group if a suitable one doesn't exist already, then create reservations. Once successfully created, reservations are immediately available for use with virtual machines. The capacity is reserved for your use as long as the reservation isn't deleted.
+Capacity Reservation is always created as part of a Capacity Reservation group. The first step is to create a group if a suitable one doesn't exist already, then create reservations. Once successfully created, reservations are immediately available for use with virtual machines. The capacity is reserved for your use as long as the reservation is not deleted.
-A well-formed request for capacity reservation group should always succeed as it doesn't reserve any capacity. It just acts as a container for reservations. However, a request for capacity reservation could fail if you don't have the required quota for the VM series or if Azure doesn't have enough capacity to fulfill the request. Either request more quota or try a different VM size, location, or zone combination.
+A well-formed request for Capacity Reservation group should always succeed as it does not reserve any capacity. It just acts as a container for reservations. However, a request for Capacity Reservation could fail if you do not have the required quota for the VM series or if Azure doesn't have enough capacity to fulfill the request. Either request more quota or try a different VM size, location, or zone combination.
-A Capacity Reservation creation succeeds or fails in its entirety. For a request to reserve 10 instances, success is returned only if all 10 could be allocated. Otherwise, the capacity reservation creation will fail.
+A Capacity Reservation creation succeeds or fails in its entirety. For a request to reserve 10 instances, success is returned only if all 10 could be allocated. Otherwise, the Capacity Reservation creation will fail.
> [!IMPORTANT]
> Capacity Reservation is currently in public preview.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+> This preview version is provided without a service-level agreement, and we do not recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Considerations
The Capacity Reservation must meet the following rules:
-- The location parameter must match the location property for the parent Capacity Reservation Group. A mismatch will result in an error.
+- The location parameter must match the location property for the parent Capacity Reservation group. A mismatch will result in an error.
- The VM size must be available in the target region. Otherwise, the reservation creation will fail.
- The subscription must have sufficient approved quota equal to or more than the quantity of VMs being reserved for the VM series and for the region overall. If needed, [request more quota](../azure-portal/supportability/per-vm-quota-requests.md).
-- Each Capacity Reservation Group can have exactly one reservation for a given VM size. For example, only one Capacity Reservation can be created for the VM size `Standard_D2s_v3`. Attempt to create a second reservation for `Standard_D2s_v3` in the same Capacity Reservation Group will result in an error. However, another reservation can be created in the same group for other VM sizes, such as `Standard_D4s_v3`, `Standard_D8s_v3` and so on.
-- For a Capacity Reservation Group that supports zones, each reservation type is defined by the combination of **VM size** and **zone**. For example, one Capacity Reservation for `Standard_D2s_v3` in `Zone 1`, another Capacity Reservation for `Standard_D2s_v3` in `Zone 2`, and a third Capacity Reservation for `Standard_D2s_v3` in `Zone 3` is supported.
+- Each Capacity Reservation group can have exactly one reservation for a given VM size. For example, only one Capacity Reservation can be created for the VM size `Standard_D2s_v3`. An attempt to create a second reservation for `Standard_D2s_v3` in the same Capacity Reservation group will result in an error. However, another reservation can be created in the same group for other VM sizes, such as `Standard_D4s_v3`, `Standard_D8s_v3`, and so on.
+- For a Capacity Reservation group that supports zones, each reservation type is defined by the combination of **VM size** and **zone**. For example, one Capacity Reservation for `Standard_D2s_v3` in `Zone 1`, another Capacity Reservation for `Standard_D2s_v3` in `Zone 2`, and a third Capacity Reservation for `Standard_D2s_v3` in `Zone 3` is supported.
-## Create a capacity reservation
+## Create a Capacity Reservation
### [API](#tab/api1)
-1. Create a Capacity Reservation Group
+1. Create a Capacity Reservation group
- To create a capacity reservation group, construct the following PUT request on *Microsoft.Compute* provider:
+ To create a Capacity Reservation group, construct the following PUT request on *Microsoft.Compute* provider:
```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}?api-version=2021-04-01
```
- In the request body, include the following:
+ In the request body, include the following parameter:
```json
{
The Capacity Reservation must meet the following rules:
This group is created to contain reservations for the US East location.
- In this example, the group will support only regional reservations because zones weren't specified at the time of creation. To create a zonal group, pass an extra parameter *zones* in the request body as shown below:
+ The group in the following example will only support regional reservations, because zones were not specified at the time of creation. To create a zonal group, pass an extra parameter *zones* in the request body:
```json
{
The Capacity Reservation must meet the following rules:
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}/capacityReservations/{capacityReservationName}?api-version=2021-04-01
```
- In the request body, include the following:
+ In the request body, include the following parameters:
```json
{
The Capacity Reservation must meet the following rules:
<!-- no images necessary if steps are straightforward -->
1. Open [Azure portal](https://portal.azure.com)
-1. In the search bar, type **Capacity Reservation Groups**
-1. Select **Capacity Reservation Groups** from the options
+1. In the search bar, type **Capacity Reservation groups**
+1. Select **Capacity Reservation groups** from the options
1. Select **Create**
-1. Under the *Basics* tab, create a Capacity Reservation Group:
+1. Under the *Basics* tab, create a Capacity Reservation group:
1. Select a **Subscription**
1. Select or create a **Resource group**
1. **Name** your group
The Capacity Reservation must meet the following rules:
1. Select **Next**
1. Under the *Tags* tab, optionally create tags
1. Select **Next**
-1. Under the *Review + Create* tab, review your capacity reservation group information
+1. Under the *Review + Create* tab, review your Capacity Reservation group information
1. Select **Create** ### [CLI](#tab/cli1)
-1. Before you can create a capacity reservation, create a resource group with `az group create`. The following example creates a resource group *myResourceGroup* in the East US location.
+1. Before you can create a Capacity Reservation, create a resource group with `az group create`. The following example creates a resource group *myResourceGroup* in the East US location.
```azurecli-interactive
az group create
The Capacity Reservation must meet the following rules:
-g myResourceGroup
```
-1. Now create a Capacity Reservation Group with `az capacity reservation group create`. The following example creates a group *myCapacityReservationGroup* in the East US location for all 3 availability zones.
+1. Now create a Capacity Reservation group with `az capacity reservation group create`. The following example creates a group *myCapacityReservationGroup* in the East US location for all 3 availability zones.
```azurecli-interactive
az capacity reservation group create
The Capacity Reservation must meet the following rules:
--zones 1 2 3
```
-1. Once the Capacity Reservation Group is created, create a new Capacity Reservation with `az capacity reservation create`. The following example creates *myCapacityReservation* for 5 quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
+1. Once the Capacity Reservation group is created, create a new Capacity Reservation with `az capacity reservation create`. The following example creates *myCapacityReservation* for 5 quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
```azurecli-interactive
az capacity reservation create
The Capacity Reservation must meet the following rules:
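Assembled from the CLI steps above, a sketch of the complete sequence; the parameter spellings assume the preview `capacity reservation` command group and placeholder names:

```azurecli-interactive
# Create the parent group across all three availability zones
az capacity reservation group create \
    -g myResourceGroup -n myCapacityReservationGroup \
    -l eastus --zones 1 2 3

# Reserve 5 instances of Standard_D2s_v3 in zone 1
az capacity reservation create \
    -g myResourceGroup -c myCapacityReservationGroup \
    -n myCapacityReservation \
    --sku Standard_D2s_v3 --capacity 5 --zone 1
```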
### [PowerShell](#tab/powershell1)
-1. Before you can create a capacity reservation, create a resource group with `New-AzResourceGroup`. The following example creates a resource group *myResourceGroup* in the East US location.
+1. Before you can create a Capacity Reservation, create a resource group with `New-AzResourceGroup`. The following example creates a resource group *myResourceGroup* in the East US location.
```powershell-interactive
New-AzResourceGroup
The Capacity Reservation must meet the following rules:
-Location "eastus"
```
-1. Now create a Capacity Reservation Group with `New-AzCapacityReservationGroup`. The following example creates a group *myCapacityReservationGroup* in the East US location for all 3 availability zones.
+1. Now create a Capacity Reservation group with `New-AzCapacityReservationGroup`. The following example creates a group *myCapacityReservationGroup* in the East US location for all 3 availability zones.
```powershell-interactive
New-AzCapacityReservationGroup
The Capacity Reservation must meet the following rules:
-Name "myCapacityReservationGroup"
```
-1. Once the Capacity Reservation Group is created, create a new Capacity Reservation with `New-AzCapacityReservation`. The following example creates *myCapacityReservation* for 5 quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
+1. Once the Capacity Reservation group is created, create a new Capacity Reservation with `New-AzCapacityReservation`. The following example creates *myCapacityReservation* for 5 quantities of Standard_D2s_v3 VM size in Zone 1 of East US location.
```powershell-interactive
New-AzCapacityReservation
To learn more, go to Azure PowerShell commands [New-AzResourceGroup](/powershell
An [ARM template](../azure-resource-manager/templates/overview.md) is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.
-ARM templates let you deploy groups of related resources. In a single template, you can create capacity reservation group and capacity reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
+ARM templates let you deploy groups of related resources. In a single template, you can create a Capacity Reservation group and Capacity Reservations. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration/continuous delivery (CI/CD) pipelines.
-If your environment meets the prerequisites and you're familiar with using ARM templates, use any of the following templates:
+If your environment meets the prerequisites and you are familiar with using ARM templates, use any of the following templates:
- [Create Zonal Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/ZonalCapacityReservation.json)
- [Create VM with Capacity Reservation](https://github.com/Azure/on-demand-capacity-reservation/blob/main/VirtualMachineWithReservation.json)
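Either template can be deployed like any other ARM template; a sketch using the Azure CLI, where the raw-content URL is an assumption derived from the repository links above:

```azurecli-interactive
# Deploy the zonal reservation template into an existing resource group
az deployment group create \
    -g myResourceGroup \
    --template-uri "https://raw.githubusercontent.com/Azure/on-demand-capacity-reservation/main/ZonalCapacityReservation.json"
```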
If your environment meets the prerequisites and you're familiar with using ARM t
-<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
+<!-- The three dashes above show that your section of tabbed content is complete. Do not remove them :) -->
## Check on your Capacity Reservation
Get-AzCapacityReservation
-Name <"CapacityReservationName">
```
-To find the VM size and the quantity reserved, use the following:
+To find the VM size and the quantity reserved, use the following command:
```powershell-interactive
$CapRes =
To learn more, go to Azure PowerShell command [Get-AzCapacityReservation](/power
### [Portal](#tab/portal3)
1. Open [Azure portal](https://portal.azure.com)
-1. In the search bar, type **Capacity Reservation Groups**
-1. Select **Capacity Reservation Groups** from the options
-1. From the list, select the capacity reservation group name you just created
-1. Select **Overview** on the left
+1. In the search bar, type **Capacity Reservation groups**
+1. Select **Capacity Reservation groups** from the options
+1. From the list, select the Capacity Reservation group name you just created
+1. Select **Overview**
1. Select **Reservations**
1. In this view, you will be able to see all the reservations in the group along with the VM size and quantity reserved
-<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
+<!-- The three dashes above show that your section of tabbed content is complete. Do not remove them :) -->
## Next steps
> [!div class="nextstepaction"]
-> [Learn how to modify your capacity reservation](capacity-reservation-modify.md)
+> [Learn how to modify your Capacity Reservation](capacity-reservation-modify.md)
virtual-machines Capacity Reservation Modify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-modify.md
Title: Modify a Capacity Reservation in Azure (preview) description: Learn how to modify a Capacity Reservation.--++ Last updated 08/09/2021
# Modify a Capacity Reservation (preview)
-After creating a Capacity Reservation Group and Capacity Reservation, you may want to modify your reservations. This article explains how to do the following using API, Azure portal, and PowerShell.
+After creating a Capacity Reservation group and Capacity Reservation, you may want to modify your reservations. This article explains how to do the following actions using API, Azure portal, and PowerShell.
> [!div class="checklist"]
> * Update the number of instances reserved in a Capacity Reservation
-> * Resize VMs associated with a Capacity Reservation Group
-> * Delete the Capacity Reservation Group and Capacity Reservation
+> * Resize VMs associated with a Capacity Reservation group
+> * Delete the Capacity Reservation group and Capacity Reservation
> [!IMPORTANT]
> Capacity Reservation is currently in public preview.
After creating a Capacity Reservation Group and Capacity Reservation, you may wa
## Update the number of instances reserved
-Update the number of virtual machine instances reserved in a capacity reservation.
+Update the number of virtual machine instances reserved in a Capacity Reservation.
> [!IMPORTANT] > In rare cases when Azure cannot fulfill the request to increase the quantity reserved for existing Capacity Reservations, it is possible that a reservation goes into a *Failed* state and becomes unavailable until the [quantity is restored to the original amount](#restore-instance-quantity).
Note that the `capacity` property is set to 5 now in this example.
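In the CLI, the equivalent quantity change is a single update call; a minimal sketch, assuming the preview `az capacity reservation update` command and placeholder names:

```azurecli-interactive
# Change the reserved instance count; decreases always succeed,
# while increases depend on quota and available capacity
az capacity reservation update \
    -g myResourceGroup -c myCapacityReservationGroup \
    -n myCapacityReservation --capacity 5
```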
### [Portal](#tab/portal1)
1. Open the [Azure portal](https://portal.azure.com)
-1. Go to your Capacity Reservation Group
+1. Go to your Capacity Reservation group
1. Select **Overview**
1. Select **Reservations**
1. Select **Manage Reservation** at the top
To learn more, go to Azure PowerShell command [Update-AzCapacityReservation](/po
<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
-## Resize VMs associated with a Capacity Reservation Group
+## Resize VMs associated with a Capacity Reservation group
-If the virtual machine being resized is currently attached to a capacity reservation group and that group doesn't have a reservation for the target size, then create a new reservation for that size or remove the virtual machine from the reservation group before resizing.
+You must choose one of the following options if the VM being resized is currently attached to a Capacity Reservation group and that group doesn't have a reservation for the target size:
+- Create a new reservation for that size
+- Remove the virtual machine from the reservation group before resizing.
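For a quick CLI-based version of the check described next, a hedged sketch; command spellings assume the preview CLI, and the tabs below walk through the same check per client:

```azurecli-interactive
# List the member reservations of the group...
az capacity reservation group show \
    -g myResourceGroup -n myCapacityReservationGroup

# ...then inspect the SKU reserved by each one
az capacity reservation show \
    -g myResourceGroup -c myCapacityReservationGroup \
    -n myCapacityReservation --query sku
```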
Check if the target size is part of the reservation group:
}
```
-1. Consider the following:
- 1. If the target VM size isn't part of the group, [create a new capacity reservation](capacity-reservation-create.md) for the target VM
+1. Consider the following scenarios:
+ 1. If the target VM size is not part of the group, [create a new Capacity Reservation](capacity-reservation-create.md) for the target VM
1. If the target VM size already exists in the group, [resize the virtual machine](resize-vm.md)
### [Portal](#tab/portal2)
1. Open the [Azure portal](https://portal.azure.com)
-1. Go to your Capacity Reservation Group
+1. Go to your Capacity Reservation group
1. Select **Overview**
1. Select **Reservations**
1. Look at the *VM size* reserved for each reservation
- 1. If the target VM size isn't part of the group, [create a new capacity reservation](capacity-reservation-create.md) for the target VM
+ 1. If the target VM size is not part of the group, [create a new Capacity Reservation](capacity-reservation-create.md) for the target VM
1. If the target VM size already exists in the group, [resize the virtual machine](resize-vm.md)
### [CLI](#tab/cli2)
-1. Get the names of all Capacity Reservations within the capacity reservation group with `az capacity reservation group show`
+1. Get the names of all Capacity Reservations within the Capacity Reservation group with `az capacity reservation group show`
```azurecli-interactive
az capacity reservation group show
Check if the target size is part of the reservation group:
-n myCapacityReservationGroup
```
-1. From the response, find the names of all the capacity reservations
+1. From the response, find the names of all the Capacity Reservations
1. Run the following commands to find out the VM size(s) reserved for each reservation
Check if the target size is part of the reservation group:
-n myCapacityReservation
```
-1. Consider the following:
- 1. If the target VM size isn't part of the group, [create a new capacity reservation](capacity-reservation-create.md) for the target VM
+1. Consider the following scenarios:
+ 1. If the target VM size is not part of the group, [create a new Capacity Reservation](capacity-reservation-create.md) for the target VM
1. If the target VM size already exists in the group, [resize the virtual machine](resize-vm.md)
Check if the target size is part of the reservation group:
-Name "myCapacityReservationGroup"
```
-1. From the response, find the names of all the capacity reservations
+1. From the response, find the names of all the Capacity Reservations
1. Run the following commands to find out the VM size(s) reserved for each reservation
Check if the target size is part of the reservation group:
$CapRes.Sku
```
-1. Consider the following:
- 1. If the target VM size isn't part of the group, [create a new capacity reservation](capacity-reservation-create.md) for the target VM
+1. Consider the following scenarios:
+ 1. If the target VM size is not part of the group, [create a new Capacity Reservation](capacity-reservation-create.md) for the target VM
1. If the target VM size already exists in the group, [resize the virtual machine](resize-vm.md)
To learn more, go to Azure PowerShell commands [Get-AzCapacityReservationGroup](
<!-- The three dashes above show that your section of tabbed content is complete. Don't remove them :) -->
-## Delete a Capacity Reservation Group and Capacity Reservation
+## Delete a Capacity Reservation group and Capacity Reservation
-Azure allows a Capacity Reservation Group to be deleted when all the member Capacity Reservations have been deleted and no virtual machines are associated to the group.
+Azure allows a group to be deleted when all the member Capacity Reservations have been deleted and no VMs are associated to the group.
-To delete a capacity reservation, first find out all of the virtual machines that are associated to it. The list of virtual machines is available under `virtualMachinesAssociated` property.
+To delete a Capacity Reservation, first find out all of the virtual machines that are associated to it. The list of virtual machines is available under the `virtualMachinesAssociated` property.
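The overall order of operations, sketched in the CLI; each step is detailed per client in the tabs below, and the command spellings assume the preview CLI:

```azurecli-interactive
# 1. List the group to find its reservations and associated VMs
az capacity reservation group show \
    -g myResourceGroup -n myCapacityReservationGroup

# 2. After dissociating every VM, delete each member reservation
az capacity reservation delete \
    -g myResourceGroup -c myCapacityReservationGroup \
    -n myCapacityReservation

# 3. Delete the now-empty group
az capacity reservation group delete \
    -g myResourceGroup -n myCapacityReservationGroup
```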
### [API](#tab/api3)
-First, find all virtual machines associated with the Capacity Reservation Group and dissociate them.
+First, find all virtual machines associated with the Capacity Reservation group and dissociate them.
```rest
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}?$expand=instanceView&api-version=2021-04-01
First, find all virtual machines associated with the Capacity Reservation Group
    }
}
```
-From the above response, find the names of all virtual machines under the `virtualMachinesAssociated` property and remove them from the Capacity Reservation Group using the steps in [Remove a VM association to a Capacity Reservation](capacity-reservation-remove-vm.md).
+From the above response, find the names of all virtual machines under the `virtualMachinesAssociated` property and remove them from the Capacity Reservation group using the steps in [Remove a VM association to a Capacity Reservation](capacity-reservation-remove-vm.md).
-Once all the virtual machines are removed from the Capacity Reservation Group, delete the member Capacity Reservation(s):
+Once all the virtual machines are removed from the Capacity Reservation group, delete the member Capacity Reservation(s):
```rest
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}/capacityReservations/{capacityReservationName}?api-version=2021-04-01
```
-Lastly, delete the parent Capacity Reservation Group.
+Lastly, delete the parent Capacity Reservation group.
```rest
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}?api-version=2021-04-01
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroup
### [Portal](#tab/portal3)
1. Open the [Azure portal](https://portal.azure.com)
-1. Go to your Capacity Reservation Group
+1. Go to your Capacity Reservation group
1. Select **Resources**
1. Find out all the virtual machines that are associated with the group
1. [Disassociate every virtual machine](capacity-reservation-remove-vm.md)
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroup
1. Go to **Reservations**
1. Select each reservation
1. Select **Delete**
-1. Delete the Capacity Reservation Group
- 1. Go to the Capacity Reservation Group
+1. Delete the Capacity Reservation group
+ 1. Go to the Capacity Reservation group
1. Select **Delete** at the top of the page
### [CLI](#tab/cli3)
-Find out all the virtual machines associated with Capacity Reservation Group and dissociate them.
+Find out all the virtual machines associated with the Capacity Reservation group and dissociate them.
-1. Run the following:
+1. Run the following command:
```azurecli-interactive
az capacity reservation group show
Find out all the virtual machines associated with Capacity Reservation Group and
-n myCapacityReservationGroup
```
-1. From the above response, find out the names of all the virtual machines under the `VirtualMachinesAssociated` property and remove them from the Capacity Reservation Group using the steps detailed in [Remove a virtual machine association from a Capacity Reservation group](capacity-reservation-remove-vm.md).
+1. From the above response, find out the names of all the virtual machines under the `VirtualMachinesAssociated` property and remove them from the Capacity Reservation group using the steps detailed in [Remove a virtual machine association from a Capacity Reservation group](capacity-reservation-remove-vm.md).
1. Once all the virtual machines are removed from the group, proceed to the next steps.
Find out all the virtual machines associated with Capacity Reservation Group and
-n myCapacityReservation
```
-1. Delete the Capacity Reservation Group:
+1. Delete the Capacity Reservation group:
```azurecli-interactive
az capacity reservation group delete
Find out all the virtual machines associated with Capacity Reservation Group and
### [PowerShell](#tab/powershell3)
-Find out all the virtual machines associated with Capacity Reservation Group and dissociate them.
+Find out all the virtual machines associated with the Capacity Reservation group and dissociate them.
-1. Run the following:
+1. Run the following command:
```powershell-interactive
Get-AzCapacityReservationGroup
Find out all the virtual machines associated with Capacity Reservation Group and
-Name "myCapacityReservationGroup"
```
-1. From the above response, find out the names of all the virtual machines under the `VirtualMachinesAssociated` property and remove them from the Capacity Reservation Group using the steps detailed in [Remove a virtual machine association from a Capacity Reservation group](capacity-reservation-remove-vm.md).
+1. From the above response, find out the names of all the virtual machines under the `VirtualMachinesAssociated` property and remove them from the Capacity Reservation group using the steps detailed in [Remove a virtual machine association from a Capacity Reservation group](capacity-reservation-remove-vm.md).
1. Once all the virtual machines are removed from the group, proceed to the next steps.
Find out all the virtual machines associated with Capacity Reservation Group and
-Name "myCapacityReservation"
```
-1. Delete the Capacity Reservation Group:
+1. Delete the Capacity Reservation group:
```powershell-interactive
Remove-AzCapacityReservationGroup
To learn more, go to Azure PowerShell commands [Get-AzCapacityReservationGroup](
## Restore instance quantity
-A well-formed request for reducing the quantity reserved should always succeed no matter the number of virtual machines associated with the reservation. However, increasing the quantity reserved may require more quota and for Azure to fulfill the additional capacity request. In a rare scenario in which Azure can't fulfill the request to increase the quantity reserved for existing reservations, it's possible that the reservation goes into a *Failed* state and becomes unavailable until the quantity reserved is restored to the original amount.
+A well-formed request for reducing the quantity reserved should always succeed no matter the number of VMs associated with the reservation. However, increasing the quantity reserved may require more quota and for Azure to fulfill the additional capacity request. In a rare scenario in which Azure can't fulfill the request to increase the quantity reserved for existing reservations, it is possible that the reservation goes into a *Failed* state and becomes unavailable until the quantity reserved is restored to the original amount.
> [!NOTE]
> If a reservation is in a *Failed* state, all the VMs that are associated with the reservation will continue to work as normal.
To resolve this failure, take the following steps to locate the old quantity res
1. Go to [Application Change Analysis](https://ms.portal.azure.com/#blade/Microsoft_Azure_ChangeAnalysis/ChangeAnalysisBaseBlade) in the Azure portal
1. Select the applicable **Subscription**, **Resource group**, and **Time range** in the filters
    - You can only go back up to 14 days in the past in the **Time range** filter
-1. Search for the name of the capacity reservation
+1. Search for the name of the Capacity Reservation
1. Look for the change in `sku.capacity` property for that reservation
    - The old quantity reserved will be the value under the **Old Value** column
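Once the old value is recovered, restoring it is the same capacity update shown earlier; a hedged CLI sketch with a placeholder quantity:

```azurecli-interactive
# Restore the reservation to its last known-good quantity (for example, 2)
az capacity reservation update \
    -g myResourceGroup -c myCapacityReservationGroup \
    -n myCapacityReservation --capacity 2
```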
virtual-machines Capacity Reservation Overallocate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-overallocate.md
Title: Overallocating Capacity Reservation in Azure (preview) description: Learn how overallocation works when it comes to Capacity Reservation.--++ Last updated 08/09/2021
# Overallocating Capacity Reservation (preview)
-Azure permits association of extra VMs beyond the reserved count of a Capacity Reservation to facilitate burst and other scale out scenarios, without the overhead of managing around the limits of reserved capacity. The only difference is that the count of VMs beyond the quantity reserved does not receive the capacity availability SLA benefit. As long as Azure has available capacity that meets the virtual machine requirements, the extra allocations will succeed.
+Azure permits association of extra VMs beyond the reserved count of a Capacity Reservation to facilitate burst and other scale-out scenarios, without the overhead of managing around the limits of reserved capacity. The only difference is that the count of VMs beyond the quantity reserved does not receive the capacity availability SLA benefit. As long as Azure has available capacity that meets the virtual machine requirements, the extra allocations will succeed.
-The Instance View of a Capacity Reservation Group provides a snapshot of usage for each member Capacity Reservation. You can use the Instance View to see how overallocation works.
+The Instance View of a Capacity Reservation group provides a snapshot of usage for each member Capacity Reservation. You can use the Instance View to see how overallocation works.
-This article assumes you've created a Capacity Reservation Group (`myCapacityReservationGroup`), a member Capacity Reservation (`myCapacityReservation`), and a virtual machine (*myVM1*) that is associated to the group. Go to [Create a Capacity Reservation](capacity-reservation-create.md) and [Associate a VM to a Capacity Reservation](capacity-reservation-associate-vm.md) for more details.
+This article assumes you have created a Capacity Reservation group (`myCapacityReservationGroup`), a member Capacity Reservation (`myCapacityReservation`), and a virtual machine (*myVM1*) that is associated to the group. Go to [Create a Capacity Reservation](capacity-reservation-create.md) and [Associate a VM to a Capacity Reservation](capacity-reservation-associate-vm.md) for more details.
> [!IMPORTANT]
> Capacity Reservation is currently in public preview.
This article assumes you've created a Capacity Reservation Group (`myCapacityRes
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Instance View for Capacity Reservation Group
+## Instance View for Capacity Reservation group
-The Instance View for a Capacity Reservation Group will look like this:
+The Instance View for a Capacity Reservation group will look like this:
```rest
GET
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{reso
}
```
-Lets say we create another virtual machine named *myVM2* and associate it with the above Capacity Reservation Group.
+Let's say we create another virtual machine named *myVM2* and associate it with the above Capacity Reservation group.
-The Instance View for the Capacity Reservation Group will now look like this:
+The Instance View for the Capacity Reservation group will now look like this:
```json
{
The Instance View for the Capacity Reservation Group will now look like this:
Notice that the length of `virtualMachinesAllocated` (2) is greater than `capacity` (1). This valid state is referred to as *overallocated*.
> [!IMPORTANT]
-> Azure won't stop allocations just because a Capacity Reservation is fully consumed. Auto-scale rules, temporary scale-out, and related requirements will work beyond the quantity of reserved capacity as long as Azure has available capacity.
+> Azure will not stop allocations just because a Capacity Reservation is fully consumed. Auto-scale rules, temporary scale-out, and related requirements will work beyond the quantity of reserved capacity as long as Azure has available capacity.
## States and considerations
There are three valid states for a given Capacity Reservation:
| State | Status | Considerations |
| --- | --- | --- |
| Reserved capacity available | Length of `virtualMachinesAllocated` < `capacity` | Is all the reserved capacity needed? Optionally reduce the capacity to reduce costs. |
-| Reservation consumed | Length of `virtualMachinesAllocated` == `capacity` | Additional VMs won't receive the capacity SLA unless some existing VMs are deallocated. Optionally try to increase the capacity so extra planned VMs will receive an SLA. |
-| Reservation overallocated | Length of `virtualMachinesAllocated` > `capacity` | Additional VMs won't receive the capacity SLA. Also, the quantity of VMs (Length of `virtualMachinesAllocated` – `capacity`) won't receive a capacity SLA if deallocated. Optionally increase the capacity to add capacity SLA to more of the existing VMs. |
+| Reservation consumed | Length of `virtualMachinesAllocated` == `capacity` | Additional VMs will not receive the capacity SLA unless some existing VMs are deallocated. Optionally try to increase the capacity so extra planned VMs will receive an SLA. |
+| Reservation overallocated | Length of `virtualMachinesAllocated` > `capacity` | Additional VMs will not receive the capacity SLA. Also, the quantity of VMs (Length of `virtualMachinesAllocated` – `capacity`) will not receive a capacity SLA if deallocated. Optionally increase the capacity to add capacity SLA to more of the existing VMs. |
## Next steps
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-overview.md
Title: On-demand Capacity Reservation in Azure (preview) description: Learn how to reserve compute capacity in an Azure region or an Availability Zone with Capacity Reservation.--++ Last updated 08/09/2021
# On-demand Capacity Reservation (preview)
-On-demand Capacity Reservation enables you to reserve Compute capacity in an Azure region or an Availability Zone for any duration of time. Unlike [Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/), you don't have to sign up for a 1-year or a 3-year term commitment. Create and delete reservations at any time and have full control over how you want to manage your reservations.
+On-demand Capacity Reservation enables you to reserve Compute capacity in an Azure region or an Availability Zone for any duration of time. Unlike [Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/), you do not have to sign up for a 1-year or a 3-year term commitment. Create and delete reservations at any time and have full control over how you want to manage your reservations.
-Once the capacity reservation is created, the capacity is available immediately and is exclusively reserved for your use until the reservation is deleted.
+Once the Capacity Reservation is created, the capacity is available immediately and is exclusively reserved for your use until the reservation is deleted.
> [!IMPORTANT]
> Capacity Reservation is currently in public preview.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+> This preview version is provided without a service-level agreement, and we do not recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Capacity Reservation has some basic properties that are always defined at the ti
- **Location** - Each reservation is for one location (region). If that location has availability zones, then the reservation can also specify one of the zones.
- **Quantity** - Each reservation has a quantity of instances to be reserved.
-To create a Capacity Reservation, these parameters are passed to Azure as a capacity request. If the subscription lacks the required quota or Azure doesn't have capacity available that meets the specification, the reservation will fail to deploy. To avoid deployment failure, request more quota or try a different VM size, location, or zone combination.
+To create a Capacity Reservation, these parameters are passed to Azure as a capacity request. If the subscription lacks the required quota or Azure does not have capacity available that meets the specification, the reservation will fail to deploy. To avoid deployment failure, request more quota or try a different VM size, location, or zone combination.
-Once Azure accepts a reservation request, it's available to be consumed by VMs of matching configurations. To consume capacity reservation, the VM will have to specify the reservation as one of its properties. Otherwise, the capacity reservation will remain unused. One benefit of this design is that you can target only critical workloads to reservations and other non-critical workloads can run without reserved capacity.
+Once Azure accepts a reservation request, it is available to be consumed by VMs of matching configurations. To consume Capacity Reservation, the VM will have to specify the reservation as one of its properties. Otherwise, the Capacity Reservation will remain unused. One benefit of this design is that you can target only critical workloads to reservations and other non-critical workloads can run without reserved capacity.
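A hedged sketch of opting a new VM into a reservation at creation time with the CLI; the image alias and the preview-era `--capacity-reservation-group` parameter are assumptions:

```azurecli-interactive
# The VM consumes reserved capacity only when the group is named explicitly
az vm create \
    -g myResourceGroup -n myVM \
    --image UbuntuLTS --size Standard_D2s_v3 \
    --capacity-reservation-group myCapacityReservationGroup
```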
> [!NOTE]
> Capacity Reservation also comes with Azure availability SLA for use with virtual machines. The SLA won't be enforced during public preview and will be defined when Capacity Reservation is generally available.
The SLA for Capacity Reservation will be defined later when the feature is gener
## Limitations and restrictions
- Creating capacity reservations requires quota in the same manner as creating virtual machines.
-- Spot VMs and Azure Dedicated Host Nodes aren't supported with Capacity Reservation.
-- Some deployment constraints aren't supported:
+- Spot VMs and Azure Dedicated Host Nodes are not supported with Capacity Reservation.
+- Some deployment constraints are not supported:
    - Proximity Placement Group
    - Update domains
    - UltraSSD storage
- Only Av2, B, D, E, & F VM series are supported during public preview.
-- For the supported VM series during public preview, up to 3 Fault Domains (FDs) will be supported. A deployment with more than 3 FDs will fail to deploy against capacity reservation.
-- Availability Sets aren't supported with capacity reservation.
+- For the supported VM series during public preview, up to 3 Fault Domains (FDs) will be supported. A deployment with more than 3 FDs will fail to deploy against Capacity Reservation.
+- Availability Sets are not supported with Capacity Reservation.
- During this preview, only the subscription that created the reservation can use it.
-- Reservations are only available to paid Azure customers. Sponsored accounts such as Free Trial and Azure for Students aren't eligible to use this feature.
+- Reservations are only available to paid Azure customers. Sponsored accounts such as Free Trial and Azure for Students are not eligible to use this feature.
## Pricing and billing
-Capacity Reservations are priced at the same rate as the underlying VM size. For example, if you create a reservation for 10 quantities of D2s_v3 VM, as soon as the reservation is created, you'll start getting billed for 10 D2s_v3 VMs, even if the reservation isn't being used.
+Capacity Reservations are priced at the same rate as the underlying VM size. For example, if you create a reservation for ten quantities of D2s_v3 VM, as soon as the reservation is created, you will start getting billed for ten D2s_v3 VMs, even if the reservation is not being used.
-If you then deploy a D2s_v3 VM and specify reservation as its property, the capacity reservation gets used. Once in use, you'll only pay for the VM and nothing extra for the capacity reservation. Let's say you deploy 5 D2s_v3 VMs against the previously mentioned capacity reservation. You will see a bill for 5 D2s_v3 VMs and 5 unused capacity reservation, both charged at the same rate as a D2s_v3 VM.
+If you then deploy a D2s_v3 VM and specify reservation as its property, the Capacity Reservation gets used. Once in use, you will only pay for the VM and nothing extra for the Capacity Reservation. Let's say you deploy five D2s_v3 VMs against the previously mentioned Capacity Reservation. You will see a bill for five D2s_v3 VMs and five unused Capacity Reservations, both charged at the same rate as a D2s_v3 VM.
-Both used and unused capacity reservation are eligible for Reserved Instances term commitment discounts. In the above example, if you have Reserved Instances for 2 D2s_v3 VM in the same Azure region, the billing for 2 resources (either VM or unused capacity reservation) will be zeroed out and you'll only pay for the rest of the 8 resources (that is, 5 unused capacity reservations and 3 D2s_v3 VMs). In this case, the term commitment discounts could be applied on either the VM or the unused Capacity Reservation, both of which are charged at the same PAYG rate.
+Both used and unused Capacity Reservations are eligible for Reserved Instances term commitment discounts. In the previous example, if you have Reserved Instances for two D2s_v3 VMs in the same Azure region, the billing for two resources (either VM or unused Capacity Reservation) will be zeroed out and you will only pay for the rest of the eight resources. Those eight resources are the five unused capacity reservations and three D2s_v3 VMs. In this case, the term commitment discounts could be applied on either the VM or the unused Capacity Reservation, both of which are charged at the same PAYG rate.
## Difference between On-demand Capacity Reservation and Reserved Instances
Both used and unused capacity reservation are eligible for Reserved Instances te
## Work with Capacity Reservation
-Capacity reservation can be created for a specific VM size in an Azure region or an Availability Zone. All reservations are created and managed as part of a Capacity Reservation group, which allows creation of a group to manage different VM sizes in a single multi-tier application. Each reservation is for one VM size and a group can have only one reservation per VM size.
+Capacity Reservation can be created for a specific VM size in an Azure region or an Availability Zone. All reservations are created and managed as part of a Capacity Reservation group, which allows creation of a group to manage different VM sizes in a single multi-tier application. Each reservation is for one VM size and a group can have only one reservation per VM size.
-To consume capacity reservation, specify capacity reservation group as one of the VM properties. If the group doesn't have a matching reservation, Azure will return an error message.
+To consume Capacity Reservation, specify Capacity Reservation group as one of the VM properties. If the group doesn't have a matching reservation, Azure will return an error message.
-The quantity reserved for reservation can be adjusted after initial deployment by changing the capacity property. Other changes to capacity reservation, such as VM size or location, aren't permitted. The recommended approach is to delete the existing reservation and create a new one with the new requirements.
+The quantity reserved for reservation can be adjusted after initial deployment by changing the capacity property. Other changes to Capacity Reservation, such as VM size or location, are not permitted. The recommended approach is to delete the existing reservation and create a new one with the new requirements.
Capacity Reservation doesn't create limits on the number of VM deployments. Azure supports allocating as many VMs as desired against the reservation. As the reservation itself requires quota, the quota checks are omitted for VM deployment up to the reserved quantity. Allocating more VMs against the reservation is subject to quota checks and Azure fulfilling the extra capacity. Once deployed, these extra VM instances can cause the quantity of VMs allocated against the reservation to exceed the reserved quantity. This state is called overallocating. To learn more, go to [Overallocating Capacity Reservation](capacity-reservation-overallocate.md).
When a reservation is created, Azure sets aside the requested number of capacity
![Capacity Reservation image 1.](./media/capacity-reservation-overview/capacity-reservation-1.jpg)
Track the state of the overall reservation through the following properties:
-- `capacity` = Total quantity of instances reserved by the customer
-- `virtualMachinesAllocated` = List of VMs allocated against the capacity reservation and count towards consuming the capacity. These VMs are either *Running* or *Stopped* (*Allocated*), or may be in a transitional state such as *Starting* or *Stopping*. This list doesn't include the VMs that are in deallocated state, referred to as *Stopped* (*deallocated*).
-- `virtualMachinesAssociated` = List of VMs associated with the capacity reservation. This list has all the VMs that have been configured to use the reservation, including the ones that are in deallocated state.
+- `capacity` = Total quantity of instances reserved by the customer.
+- `virtualMachinesAllocated` = List of VMs allocated against the Capacity Reservation and count towards consuming the capacity. These VMs are either *Running*, *Stopped* (*Allocated*), or in a transitional state such as *Starting* or *Stopping*. This list doesn't include the VMs that are in deallocated state, referred to as *Stopped* (*deallocated*).
+- `virtualMachinesAssociated` = List of VMs associated with the Capacity Reservation. This list has all the VMs that have been configured to use the reservation, including the ones that are in deallocated state.
-The above example will start with `capacity` as 2 and length of `virutalMachinesAllocated` and `virtualMachinesAssociated` as 0.
+The previous example will start with `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 0.
When a VM is then allocated against the Capacity Reservation, it will logically consume one of the reserved capacity instances:
Using our example, when a third VM is allocated against the Capacity Reservation
The `capacity` is 2 and the length of `virtualMachinesAllocated` and `virtualMachinesAssociated` is 3.
-Now suppose the application scales down to the minimum of two VMs. Since VM 0 needs an update, it's chosen for deallocation. The reservation automatically shifts to this state:
+Now suppose the application scales down to the minimum of two VMs. Since VM 0 needs an update, it is chosen for deallocation. The reservation automatically shifts to this state:
![Capacity Reservation image 4.](./media/capacity-reservation-overview/capacity-reservation-4.jpg)
-The `capacity` and the length of `virtualMachinesAllocated` are both 2. However, the length for `virtualMachinesAssociated` is still 3 as VM 0, though deallocated, is still associated with the capacity reservation.
+The `capacity` and the length of `virtualMachinesAllocated` are both 2. However, the length for `virtualMachinesAssociated` is still 3 as VM 0, though deallocated, is still associated with the Capacity Reservation.
The Capacity Reservation will exist until explicitly deleted. To delete a Capacity Reservation, the first step is to dissociate all the VMs in the `virtualMachinesAssociated` property. Once disassociation is complete, the Capacity Reservation should look like this:
![Capacity Reservation image 5.](./media/capacity-reservation-overview/capacity-reservation-5.jpg)
-The status of the Capacity Reservation will now show `capacity` as 2 and length of `virtualMachinesAssociated` and `virtualMachinesAllocated` as 0. From this state, the Capacity Reservation can be deleted. Once deleted, you'll not pay for the reservation anymore.
+The status of the Capacity Reservation will now show `capacity` as 2 and length of `virtualMachinesAssociated` and `virtualMachinesAllocated` as 0. From this state, the Capacity Reservation can be deleted. Once deleted, you will not pay for the reservation anymore.
![Capacity Reservation image 6.](./media/capacity-reservation-overview/capacity-reservation-6.jpg)
## Usage and billing
-When a Capacity Reservation is empty, VM usage will be reported for the corresponding VM size and the location. [VM Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) can cover some or all of the Capacity Reservation usage even when VMs aren't deployed.
+When a Capacity Reservation is empty, VM usage will be reported for the corresponding VM size and the location. [VM Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) can cover some or all of the Capacity Reservation usage even when VMs are not deployed.
### Example
-For example, let's say a Capacity Reservation with quantity reserved 2 has been created. The subscription has access to one matching Reserved VM Instance of the same size. The result is two usage streams for the Capacity Reservation, one of which is covered by the Reserved Instance:
+For example, let's say a Capacity Reservation with quantity reserved 2 has been created. The subscription has access to one matching Reserved VM Instance of the same size. The result is two usage streams for the Capacity Reservation, one of which is covered by the Reserved Instance:
![Capacity Reservation image 7.](./media/capacity-reservation-overview/capacity-reservation-7.jpg)
-In the image above, a Reserved VM Instance discount is applied to one of the unused instances and the cost for that instance will be zeroed out. For the other instance, PAYG rate will be charged for the VM size reserved.
+In the previous image, a Reserved VM Instance discount is applied to one of the unused instances and the cost for that instance will be zeroed out. For the other instance, PAYG rate will be charged for the VM size reserved.
When a VM is allocated against the Capacity Reservation, the other VM components such as disks, network, extensions, and any other requested components must also be allocated. In this state, the VM usage will reflect one allocated VM and one unused capacity instance. The Reserved VM Instance will zero out the cost of either the VM or the unused capacity instance. The other charges for disks, networking, and other components associated with the allocated VM will also appear on the bill.
![Capacity Reservation image 8.](./media/capacity-reservation-overview/capacity-reservation-8.jpg)
-In the image above, the VM Reserved Instance discount is applied to VM 0, which will only be charged for other components such as disk and networking. The other unused instance is being charged at PAYG rate for the VM size reserved.
+In the previous image, the VM Reserved Instance discount is applied to VM 0, which will only be charged for other components such as disk and networking. The other unused instance is being charged at PAYG rate for the VM size reserved.
## Frequently asked questions
-- **What's the price of on-demand capacity reservation?**
+- **What's the price of on-demand Capacity Reservation?**
- The price of your on-demand capacity reservation is same as the price of underlying VM size associated with the reservation. When using capacity reservation, you will be charged for the VM size you selected at pay-as-you-go rates, whether the VM has been provisioned or not. Visit the [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) VM pricing pages for more details.
+ The price of your on-demand Capacity Reservation is the same as the price of the underlying VM size associated with the reservation. When using Capacity Reservation, you will be charged for the VM size you selected at pay-as-you-go rates, whether the VM has been provisioned or not. Visit the [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) VM pricing pages for more details.
-- **Will I get charged twice, for the cost of on-demand capacity reservation and for the actual VM when I finally provision it?**
+- **Will I get charged twice, for the cost of on-demand Capacity Reservation and for the actual VM when I finally provision it?**
- No, you will only get charged once for on-demand capacity reservation.
+ No, you will only get charged once for on-demand Capacity Reservation.
-- **Can I apply Reserved Virtual Machine Instance (RI) to on-demand capacity reservation to lower my costs?**
+- **Can I apply Reserved Virtual Machine Instance (RI) to on-demand Capacity Reservation to lower my costs?**
- Yes, you can apply existing or future RIs to on-demand capacity reservations and receive RI discounts. Available RIs are applied automatically to capacity reservation the same way they are applied to VMs.
+ Yes, you can apply existing or future RIs to on-demand capacity reservations and receive RI discounts. Available RIs are applied automatically to Capacity Reservation the same way they are applied to VMs.
-- **What is the difference between Reserved Virtual Machine Instance (RI) and on-demand capacity reservation?**
+- **What is the difference between Reserved Virtual Machine Instance (RI) and on-demand Capacity Reservation?**
- Both RIs and on-demand capacity reservations are applicable to Azure VMs. However, RIs provide discounted reservation rates for your VMs compared to pay-as-you-go rates as a result of a term commitment, 1-year or 3-year terms. Conversely, on-demand capacity reservations do not require a commitment. You can create or cancel a capacity reservation at any time. However, no discounts are applied, and you will incur charges at pay-as-you-go rates after your capacity reservation has been successfully provisioned. Unlike RIs, which prioritize capacity but do not guarantee it, when you purchase an on-demand capacity reservation, Azure sets aside compute capacity for your VM and provides an SLA guarantee.
+ Both RIs and on-demand capacity reservations are applicable to Azure VMs. However, RIs provide discounted reservation rates for your VMs compared to pay-as-you-go rates as a result of a 1-year or 3-year term commitment. Conversely, on-demand capacity reservations do not require a commitment. You can create or cancel a Capacity Reservation at any time. However, no discounts are applied, and you will incur charges at pay-as-you-go rates after your Capacity Reservation has been successfully provisioned. Unlike RIs, which prioritize capacity but do not guarantee it, when you purchase an on-demand Capacity Reservation, Azure sets aside compute capacity for your VM and provides an SLA guarantee.
- **Which scenarios would benefit the most from on-demand capacity reservations?**
virtual-machines Capacity Reservation Remove Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-remove-virtual-machine-scale-set.md
Title: Remove a virtual machine scale set association from a Capacity Reservation group (preview) description: Learn how to remove a virtual machine scale set from a Capacity Reservation group.--++ Last updated 08/09/2021
# Remove a virtual machine scale set association from a Capacity Reservation group
-This article walks you through the steps of removing a virtual machine scale set association from a Capacity Reservation Group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
+This article walks you through removing a virtual machine scale set association from a Capacity Reservation group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
Because both the VM and the underlying Capacity Reservation logically occupy capacity, Azure imposes some constraints on this process to avoid ambiguous allocation states and unexpected errors. There are two ways to change an association: -- Option 1: Deallocate the Virtual machine scale set, change the Capacity Reservation Group property at the scale set level, and then update the underlying VMs-- Option 2: Update the reserved quantity to zero and then change the Capacity Reservation Group property
+- Option 1: Deallocate the virtual machine scale set, change the Capacity Reservation group property at the scale set level, and then update the underlying VMs
+- Option 2: Update the reserved quantity to zero and then change the Capacity Reservation group property
> [!IMPORTANT] > Capacity Reservation is currently in public preview.
There are two ways to change an association:
## Deallocate the virtual machine scale set
-The first option is to deallocate the virtual machine scale set, change the Capacity Reservation Group property at the scale set level, and then update the underlying VMs.
+The first option is to deallocate the virtual machine scale set, change the Capacity Reservation group property at the scale set level, and then update the underlying VMs.
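A minimal Azure CLI sketch of this whole option (placeholder names; the per-tab steps below show the same flow in REST, CLI, and PowerShell):

```azurecli-interactive
# 1. Deallocate every instance in the scale set
az vmss deallocate --resource-group myResourceGroup --name myVMSS

# 2. Clear the association at the scale set level
az vmss update --resource-group myResourceGroup --name myVMSS \
  --capacity-reservation-group None

# 3. With a Manual upgrade policy, bring the instances up to the latest scale set model
az vmss update-instances --resource-group myResourceGroup --name myVMSS \
  --instance-ids "*"
```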
Go to [upgrade policies](#upgrade-policies) for more information about automatic, rolling, and manual upgrades.
Go to [upgrade policies](#upgrade-policies) for more information about automatic
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}/deallocate?api-version=2021-04-01 ```
-1. Update the virtual machine scale set to remove association with the Capacity Reservation Group
+1. Update the virtual machine scale set to remove the association with the Capacity Reservation group
```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}/update?api-version=2021-04-01
Go to [upgrade policies](#upgrade-policies) for more information about automatic
### [CLI](#tab/cli1)
-1. Deallocate the virtual machine scale set. The following will deallocate all virtual machines within the scale set:
+1. Deallocate the virtual machine scale set. The following command will deallocate all virtual machines within the scale set:
```azurecli-interactive az vmss deallocate
Go to [upgrade policies](#upgrade-policies) for more information about automatic
--name myVMSS ```
-1. Update the scale set to remove association with the Capacity Reservation Group. Setting the `capacity-reservation-group` property to None removes the association of scale set to the Capacity Reservation Group:
+1. Update the scale set to remove the association with the Capacity Reservation group. Setting the `capacity-reservation-group` property to None removes the association of the scale set with the Capacity Reservation group:
```azurecli-interactive az vmss update
Go to [upgrade policies](#upgrade-policies) for more information about automatic
### [PowerShell](#tab/powershell1)
-1. Deallocate the virtual machine scale set. The following will deallocate all virtual machines within the scale set:
+1. Deallocate the virtual machine scale set. The following command will deallocate all virtual machines within the scale set:
```powershell-interactive Stop-AzVmss
Go to [upgrade policies](#upgrade-policies) for more information about automatic
-VMScaleSetName "myVmss" ```
-1. Update the scale set to remove association with the Capacity Reservation Group. Setting the `CapacityReservationGroupId` property to null removes the association of scale set to the Capacity Reservation Group:
+1. Update the scale set to remove the association with the Capacity Reservation group. Setting the `CapacityReservationGroupId` property to null removes the association of the scale set with the Capacity Reservation group:
```powershell-interactive $vmss =
To learn more, go to Azure PowerShell commands [Stop-AzVmss](/powershell/module/
## Update the reserved quantity to zero
-The second option involves updating the reserved quantity to zero and then changing the Capacity Reservation Group property.
+The second option involves updating the reserved quantity to zero and then changing the Capacity Reservation group property.
-This option works well when the virtual machine scale set canΓÇÖt be deallocated and when a reservation is no longer needed. For example, you may create a capacity reservation to temporarily assure capacity during a large-scale deployment. Once completed, the reservation is no longer needed.
+This option works well when the scale set cannot be deallocated and when a reservation is no longer needed. For example, you may create a Capacity Reservation to temporarily assure capacity during a large-scale deployment. Once completed, the reservation is no longer needed.
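A rough CLI sketch of this option (hypothetical names; the same steps appear in each tab below):

```azurecli-interactive
# Release the reserved capacity by setting the quantity to zero
az capacity reservation update \
  --resource-group myResourceGroup \
  --capacity-reservation-group myCapacityReservationGroup \
  --capacity-reservation-name myCapacityReservation \
  --capacity 0

# Then remove the scale set's association with the Capacity Reservation group
az vmss update --resource-group myResourceGroup --name myVMSS \
  --capacity-reservation-group None
```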
Go to [upgrade policies](#upgrade-policies) for more information about automatic, rolling, and manual upgrades.
Go to [upgrade policies](#upgrade-policies) for more information about automatic
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}/CapacityReservations/{CapacityReservationName}?api-version=2021-04-01 ```
- In the request body, include the following:
+ In the request body, include the following parameters:
```json {
Go to [upgrade policies](#upgrade-policies) for more information about automatic
} ```
- Note that `capacity` property is set to 0 above.
+ Note that the `capacity` property is set to 0.
-1. Update the virtual machine scale set to remove the association with the Capacity Reservation Group
+1. Update the virtual machine scale set to remove the association with the Capacity Reservation group
```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{VMScaleSetName}/update?api-version=2021-04-01
Go to [upgrade policies](#upgrade-policies) for more information about automatic
--capacity 0 ```
-2. Update the scale set to remove association with Capacity Reservation Group by setting the `capacity-reservation-group` property to None:
+2. Update the scale set to remove the association with the Capacity Reservation group by setting the `capacity-reservation-group` property to None:
```azurecli-interactive az vmss update
Go to [upgrade policies](#upgrade-policies) for more information about automatic
-CapacityToReserve 0 ```
-2. Update the scale set to remove association with Capacity Reservation Group by setting the `CapacityReservationGroupId` property to null:
+2. Update the scale set to remove the association with the Capacity Reservation group by setting the `CapacityReservationGroupId` property to null:
```powershell-interactive $vmss =
To learn more, go to Azure PowerShell commands [New-AzCapacityReservation](/powe
## Upgrade policies -- **Automatic Upgrade** ΓÇô In this mode, the scale set VM instances are automatically dissociated from the Capacity Reservation Group without any further action from you.-- **Rolling Upgrade** ΓÇô In this mode, the scale set VM instances are dissociated from the Capacity Reservation Group without any further action from you. However, they're updated in batches with an optional pause time between them.-- **Manual Upgrade** ΓÇô In this mode, nothing happens to the scale set VM instances when the virtual machine scale set is updated. You'll need to individually remove each scale set VM by [upgrading it with the latest Scale Set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model).
+- **Automatic Upgrade** - In this mode, the scale set VM instances are automatically dissociated from the Capacity Reservation group without any further action from you.
+- **Rolling Upgrade** - In this mode, the scale set VM instances are dissociated from the Capacity Reservation group without any further action from you. However, they are updated in batches with an optional pause time between them.
+- **Manual Upgrade** - In this mode, nothing happens to the scale set VM instances when the virtual machine scale set is updated. You will need to individually remove each scale set VM by [upgrading it with the latest Scale Set model](../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model). A quick way to check which mode a scale set uses is sketched below.
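A quick, hypothetical CLI check of the mode (placeholder names):

```azurecli-interactive
# Inspect the scale set's upgrade policy mode: Automatic, Rolling, or Manual
az vmss show --resource-group myResourceGroup --name myVMSS \
  --query upgradePolicy.mode
```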
+ ## Next steps
virtual-machines Capacity Reservation Remove Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-remove-vm.md
Title: Remove a virtual machine association from a Capacity Reservation group (preview) description: Learn how to remove a virtual machine from a Capacity Reservation group.--++ Last updated 08/09/2021
-# Remove a VM association from a Capacity Reservation Group (preview)
+# Remove a VM association from a Capacity Reservation group (preview)
-This article walks you through the steps of removing a VM association to a Capacity Reservation Group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
+This article walks you through the steps of removing a VM association from a Capacity Reservation group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
Because both the VM and the underlying Capacity Reservation logically occupy capacity, Azure imposes some constraints on this process to avoid ambiguous allocation states and unexpected errors. There are two ways to change an association: -- Option 1: Deallocate the Virtual Machine, change the Capacity Reservation Group property, and optionally restart the virtual machine-- Option 2: Update the reserved quantity to zero and then change the Capacity Reservation Group property
+- Option 1: Deallocate the virtual machine, change the Capacity Reservation group property, and optionally restart the virtual machine
+- Option 2: Update the reserved quantity to zero and then change the Capacity Reservation group property
> [!IMPORTANT] > Capacity Reservation is currently in public preview.
There are two ways to change an association:
## Deallocate the VM
-The first option is to deallocate the Virtual Machine, change the Capacity Reservation Group property, and optionally restart the VM.
+The first option is to deallocate the VM, change the Capacity Reservation group property, and optionally restart the VM.
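A minimal CLI sketch of the sequence (placeholder names; flags as described in the CLI tab below):

```azurecli-interactive
# 1. Deallocate the VM
az vm deallocate --resource-group myResourceGroup --name myVM

# 2. Clear the association with the Capacity Reservation group
az vm update --resource-group myResourceGroup --name myVM \
  --capacity-reservation-group None

# 3. Optionally restart the VM; it now consumes regular, unreserved capacity
az vm start --resource-group myResourceGroup --name myVM
```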
### [API](#tab/api1)
The first option is to deallocate the Virtual Machine, change the Capacity Reser
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{virtualMachineName}/deallocate?api-version=2021-04-01 ```
-1. Update the VM to remove association with the Capacity Reservation Group
+1. Update the VM to remove the association with the Capacity Reservation group
```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{virtualMachineName}/update?api-version=2021-04-01
The first option is to deallocate the Virtual Machine, change the Capacity Reser
<!-- no images necessary if steps are straightforward --> 1. Open [Azure portal](https://portal.azure.com)
-1. Go to your Virtual Machine and select **Overview**
+1. Go to your VM and select **Overview**
1. Select **Stop**
- 1. You'll know your VM is deallocated when the status changes to *Stopped (deallocated)*
- 1. At this point in the process, the VM is still associated with the Capacity Reservation Group, which is reflected in the `virtualMachinesAssociated` property of the Capacity Reservation
+ 1. You will know your VM is deallocated when the status changes to *Stopped (deallocated)*
+ 1. At this point in the process, the VM is still associated with the Capacity Reservation group, which is reflected in the `virtualMachinesAssociated` property of the Capacity Reservation
1. Select **Configuration**
-1. Set the **Capacity Reservation Group** value to *None*
- - The VM is no longer associated with the Capacity Reservation Group
+1. Set the **Capacity Reservation group** value to *None*
+ - The VM is no longer associated with the Capacity Reservation group
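To confirm the result outside the portal, the reservation's `virtualMachinesAssociated` property can be inspected. A hypothetical CLI check (placeholder names):

```azurecli-interactive
# List the VMs still associated with the reservation; the removed VM should no longer appear
az capacity reservation show \
  --resource-group myResourceGroup \
  --capacity-reservation-group myCapacityReservationGroup \
  --capacity-reservation-name myCapacityReservation \
  --query virtualMachinesAssociated
```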
### [CLI](#tab/cli1)
-1. Deallocate the Virtual Machine
+1. Deallocate the virtual machine
```azurecli-interactive az vm deallocate
The first option is to deallocate the Virtual Machine, change the Capacity Reser
Once the status changes to **Stopped (deallocated)**, the virtual machine is deallocated.
-1. Update the VM to remove association with the Capacity Reservation Group by setting the `capacity-reservation-group` property to None:
+1. Update the VM to remove the association with the Capacity Reservation group by setting the `capacity-reservation-group` property to None:
```azurecli-interactive az vm update
The first option is to deallocate the Virtual Machine, change the Capacity Reser
### [PowerShell](#tab/powershell1)
-1. Deallocate the Virtual Machine
+1. Deallocate the virtual machine
```powershell-interactive Stop-AzVM
The first option is to deallocate the Virtual Machine, change the Capacity Reser
Once the status changes to **Stopped (deallocated)**, the virtual machine is deallocated.
-1. Update the VM to remove association with the Capacity Reservation Group by setting the `CapacityReservationGroupId` property to null:
+1. Update the VM to remove the association with the Capacity Reservation group by setting the `CapacityReservationGroupId` property to null:
```powershell-interactive $VirtualMachine =
To learn more, go to Azure PowerShell commands [Stop-AzVM](/powershell/module/az
## Update the reserved quantity to zero
-The second option involves updating the reserved quantity to zero and then changing the Capacity Reservation Group property.
+The second option involves updating the reserved quantity to zero and then changing the Capacity Reservation group property.
-This option works well when the virtual machine canΓÇÖt be deallocated and when a reservation is no longer needed. For example, you may create a capacity reservation to temporarily assure capacity during a large-scale deployment. Once completed, the reservation is no longer needed.
+This option works well when the virtual machine cannot be deallocated and when a reservation is no longer needed. For example, you may create a Capacity Reservation to temporarily assure capacity during a large-scale deployment. Once completed, the reservation is no longer needed.
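The shape of this flow mirrors the scale set case. A minimal, hypothetical CLI sketch (placeholder names):

```azurecli-interactive
# Zero out the reserved quantity, then detach the VM from the group
az capacity reservation update \
  --resource-group myResourceGroup \
  --capacity-reservation-group myCapacityReservationGroup \
  --capacity-reservation-name myCapacityReservation \
  --capacity 0

az vm update --resource-group myResourceGroup --name myVM \
  --capacity-reservation-group None
```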
### [API](#tab/api2)
This option works well when the virtual machine canΓÇÖt be deallocated and when
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/CapacityReservationGroups/{CapacityReservationGroupName}/CapacityReservations/{CapacityReservationName}?api-version=2021-04-01 ```
- In the request body, include the following:
+ In the request body, include the following parameters:
```json {
This option works well when the virtual machine canΓÇÖt be deallocated and when
} ```
- Note that `capacity` property is set to 0 above.
+ Note that the `capacity` property is set to 0.
-1. Update the VM to remove the association with the Capacity Reservation Group
+1. Update the VM to remove the association with the Capacity Reservation group
```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{VirtualMachineName}/update?api-version=2021-04-01
This option works well when the virtual machine canΓÇÖt be deallocated and when
<!-- no images necessary if steps are straightforward --> 1. Open [Azure portal](https://portal.azure.com)
-1. Go to your Capacity Reservation Group and select **Overview**
+1. Go to your Capacity Reservation group and select **Overview**
1. Select **Reservations** 1. Select **Manage Reservation** at the top of the page 1. On the *Manage Reservations* blade: 1. Enter `0` in the **Instances** field 1. Select **Save**
-1. Go to your Virtual Machine and select **Configuration**
-1. Set the **Capacity Reservation Group** value to *None*
- - Note that the VM is no longer associated with the Capacity Reservation Group
+1. Go to your VM and select **Configuration**
+1. Set the **Capacity Reservation group** value to *None*
+ - Note that the VM is no longer associated with the Capacity Reservation group
### [CLI](#tab/cli2)
This option works well when the virtual machine canΓÇÖt be deallocated and when
--capacity 0 ```
-1. Update the VM to remove association with the Capacity Reservation Group by setting the `capacity-reservation-group` property to None:
+1. Update the VM to remove the association with the Capacity Reservation group by setting the `capacity-reservation-group` property to None:
```azurecli-interactive az vm update
This option works well when the virtual machine canΓÇÖt be deallocated and when
-CapacityToReserve 0 ```
-1. Update the VM to remove association with the Capacity Reservation Group by setting the `CapacityReservationGroupId` property to null:
+1. Update the VM to remove the association with the Capacity Reservation group by setting the `CapacityReservationGroupId` property to null:
```powershell-interactive $VirtualMachine =
To learn more, go to Azure PowerShell commands [New-AzCapacityReservation](/powe
## Next steps > [!div class="nextstepaction"]
-> [Learn how to associate a scale set to a capacity reservation group](capacity-reservation-associate-virtual-machine-scale-set.md)
+> [Learn how to associate a scale set to a Capacity Reservation group](capacity-reservation-associate-virtual-machine-scale-set.md)
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/trusted-launch.md
Azure offers trusted launch as a seamless way to improve the security of [genera
-## Benefits
+## Benefits
- Securely deploy virtual machines with verified boot loaders, OS kernels, and drivers. - Securely protect keys, certificates, and secrets in the virtual machines.
Azure offers trusted launch as a seamless way to improve the security of [genera
- Windows 10 Enterprise - Windows 10 Enterprise multi-session
-**Regions**:
+**Regions**:
- All public regions **Pricing**:
No additional cost to existing VM pricing.
- Shared disk - Ultra disk - Managed image-- Azure Dedicated Host
+- Azure Dedicated Host
- Nested Virtualization ## Secure boot
At the root of trusted launch is Secure Boot for your VM. This mode, which is im
## vTPM
-Trusted launch also introduces vTPM for Azure VMs. This is a virtualized version of a hardware [Trusted Platform Module](/windows/security/information-protection/tpm/trusted-platform-module-overview), compliant with the TPM2.0 spec. It serves as a dedicated secure vault for keys and measurements. Trusted launch provides your VM with its own dedicated TPM instance, running in a secure environment outside the reach of any VM. The vTPM enables [attestation](/windows/security/information-protection/tpm/tpm-fundamentals#measured-boot-with-support-for-attestation) by measuring the entire boot chain of your VM (UEFI, OS, system, and drivers).
+Trusted launch also introduces vTPM for Azure VMs. This is a virtualized version of a hardware [Trusted Platform Module](/windows/security/information-protection/tpm/trusted-platform-module-overview), compliant with the TPM2.0 spec. It serves as a dedicated secure vault for keys and measurements. Trusted launch provides your VM with its own dedicated TPM instance, running in a secure environment outside the reach of any VM. The vTPM enables [attestation](/windows/security/information-protection/tpm/tpm-fundamentals#measured-boot-with-support-for-attestation) by measuring the entire boot chain of your VM (UEFI, OS, system, and drivers).
Trusted launch uses the vTPM to perform remote attestation by the cloud. This is used for platform health checks and for making trust-based decisions. As a health check, trusted launch can cryptographically certify that your VM booted correctly. If the process fails, possibly because your VM is running an unauthorized component, Microsoft Defender for Cloud will issue integrity alerts. The alerts include details on which components failed to pass integrity checks.
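To try these protections end to end, a trusted launch VM can be created with Secure Boot and vTPM enabled from the CLI. This is a sketch only: resource names are placeholders and the image URN assumes a Gen2 Ubuntu image that supports trusted launch:

```azurecli-interactive
# Sketch: create a trusted launch VM with Secure Boot and vTPM enabled
az vm create \
  --resource-group myResourceGroup \
  --name myTrustedLaunchVM \
  --image Canonical:UbuntuServer:18_04-lts-gen2:latest \
  --admin-username azureuser \
  --generate-ssh-keys \
  --security-type TrustedLaunch \
  --enable-secure-boot true \
  --enable-vtpm true
```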
With trusted launch and VBS you can enable Windows Defender Credential Guard. Th
## Azure Defender for Cloud integration
-Trusted launch is integrated with Azure Defender for Cloud to ensure your VMs are properly configured. Azure Azure Defender for Cloud will continually assess compatible VMs and issue relevant recommendations.
+Trusted launch is integrated with Azure Defender for Cloud to ensure your VMs are properly configured. Azure Defender for Cloud will continually assess compatible VMs and issue relevant recommendations.
-- **Recommendation to enable Secure Boot** - This Recommendation only applies for VMs that support trusted launch. Azure Azure Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.-- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Azure Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Azure Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it. -- **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Azure Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Azure Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.
+- **Recommendation to enable Secure Boot** - This recommendation only applies to VMs that support trusted launch. Azure Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.
+- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Azure Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Azure Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it.
+- **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Azure Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Azure Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.
- **Attestation health assessment** - If your VM has vTPM enabled and attestation extension installed, Azure Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as remote attestation. Azure Defender for Cloud issues an assessment, indicating the status of remote attestation.
If your VMs are properly set up with trusted launch, Microsoft Defender for Clou
VM attestation can fail for the following reasons: - The attested information, which includes a boot log, deviates from a trusted baseline. This can indicate that untrusted modules have been loaded, and the OS may be compromised. - The attestation quote could not be verified to originate from the vTPM of the attested VM. This can indicate that malware is present and may be intercepting traffic to the vTPM.
-
+ > [!NOTE] > This alert is available for VMs with vTPM enabled and the Attestation extension installed. Secure Boot must be enabled for attestation to pass. Attestation will fail if Secure Boot is disabled. If you must disable Secure Boot, you can suppress this alert to avoid false positives.
Frequently asked questions about trusted launch.
### Why should I use trusted launch? What does trusted launch guard against? Trusted launch guards against boot kits, rootkits, and kernel-level malware. These sophisticated types of malware run in kernel mode and remain hidden from users. For example:- Firmware rootkits: these kits overwrite the firmware of the virtual machine's BIOS, so the rootkit can start before the OS.
+- Firmware rootkits: these kits overwrite the firmware of the virtual machine's BIOS, so the rootkit can start before the OS.
- Boot kits: these kits replace the OS's bootloader so that the virtual machine loads the boot kit before the OS. - Kernel rootkits: these kits replace a portion of the OS kernel so the rootkit can start automatically when the OS loads. - Driver rootkits: these kits pretend to be one of the trusted drivers that the OS uses to communicate with the virtual machine's components.
Trusted launch for Azure virtual machines is monitored for advanced threats. If
Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert will be triggered. Trusted launch attestation can fail for the following reasons: Trusted launch for Azure virtual machines is monitored for advanced threats. If such threats are detected, an alert will be triggered. Alerts are only available in the [Standard Tier](../security-center/security-center-pricing.md) of Azure Defender for Cloud.
-Azure Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert will be triggered. Trusted launch attestation can fail for the following reasons:
+Azure Defender for Cloud periodically performs attestation. If the attestation fails, a medium severity alert will be triggered. Trusted launch attestation can fail for the following reasons:
- The attested information, which includes a log of the Trusted Computing Base (TCB), deviates from a trusted baseline (like when Secure Boot is enabled). This can indicate that untrusted modules have been loaded and the OS may be compromised. - The attestation quote could not be verified to originate from the vTPM of the attested VM. This can indicate that malware is present and may be intercepting traffic to the TPM. - The attestation extension on the VM is not responding. This can indicate a denial-of-service attack by malware or by an OS admin. ### How does trusted launch compare to Hyper-V Shielded VM?
-Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM](/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms) is typically deployed in conjunction with Guarded Fabric. A Guarded Fabric consists of a Host Guardian Service (HGS), one or more guarded hosts, and a set of Shielded VMs. Hyper-V Shielded VMs are intended for use in fabrics where the data and state of the virtual machine must be protected from both fabric administrators and untrusted software that might be running on the Hyper-V hosts. Trusted launch on the other hand can be deployed as a standalone virtual machine or virtual machine scale sets on Azure without additional deployment and management of HGS. All of the trusted launch features can be enabled with a simple change in deployment code or a checkbox on the Azure portal.
+Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM](/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms) is typically deployed in conjunction with Guarded Fabric. A Guarded Fabric consists of a Host Guardian Service (HGS), one or more guarded hosts, and a set of Shielded VMs. Hyper-V Shielded VMs are intended for use in fabrics where the data and state of the virtual machine must be protected from both fabric administrators and untrusted software that might be running on the Hyper-V hosts. Trusted launch on the other hand can be deployed as a standalone virtual machine or virtual machine scale sets on Azure without additional deployment and management of HGS. All of the trusted launch features can be enabled with a simple change in deployment code or a checkbox on the Azure portal.
### What is VM Guest State (VMGS)?
-VM Guest State (VMGS) is specific to Trusted Launch VM. It is a blob that is managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS Disk.
+VM Guest State (VMGS) is specific to trusted launch VMs. It is a blob that is managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS Disk.
## Next steps
virtual-machines Tutorial Availability Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/tutorial-availability-sets.md
If you look at the availability set in the portal by going to **Resource Groups*
![Availability set in the portal](./media/tutorial-availability-sets/fd-ud.png) > [!NOTE]
-> Under certain circumstances, 2 VMs in the same AvailabilitySet could shared the same FaultDomain. This can be confirmed by going into your availability set and checking the Fault Domain column. This can be cause from the following sequence while deploying the VMs:
-> 1. Deploy the 1st VM
-> 1. Stop/Deallocate the 1st VM
-> 1. Deploy the 2nd VM Under these circumstances, the OS Disk of the 2nd VM might be created on the same Fault Domain as the 1st VM, and so the 2nd VM will also land on the same FaultDomain. To avoid this issue, it's recommended to not stop/deallocate the VMs between deployments.
+> Under certain circumstances, 2 VMs in the same AvailabilitySet could share the same FaultDomain. This can be confirmed by going into your availability set and checking the Fault Domain column (or by querying it from the CLI, as sketched after this note). This can be caused by the following sequence of events while deploying the VMs:
+> 1. The 1st VM is deployed
+> 1. The 1st VM is stopped/deallocated
+> 1. The 2nd VM is deployed
+> Under these circumstances, the OS Disk of the 2nd VM might be created on the same Fault Domain as the 1st VM, and so the 2nd VM will also land on the same FaultDomain. To avoid this issue, it's recommended not to stop/deallocate VMs between deployments.
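A hypothetical CLI query (placeholder names) to check the fault domain a VM landed on:

```azurecli-interactive
# Show the fault domain the VM was placed on within its availability set
az vm get-instance-view \
  --resource-group myResourceGroup \
  --name myVM \
  --query instanceView.platformFaultDomain
```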
## Check for available VM sizes
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 01/12/2022 Last updated : 01/24/2022
If you have specific questions, we are going to point you to specific documents
- [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md) - [Supported scenarios for HANA Large Instance](./hana-supported-scenario.md) - What Azure Services, Azure VM types and Azure storage services are available in the different Azure regions, check the site [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) -- Are third party HA frame works, besides Windows and Pacemaker supported? Check bottom part of [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533)
+- Are third-party HA frameworks, besides Windows and Pacemaker, supported? Check the bottom part of [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533)
- What Azure storage is best for my scenario? Read [Azure Storage types for SAP workload](./planning-guide-storage.md) - Is the Red Hat kernel in Oracle Enterprise Linux supported by SAP? Read SAP [SAP support note #1565179](https://launchpad.support.sap.com/#/notes/1565179)-- Why are the Azure [Da(s)v4](../../dav4-dasv4-series.md)/[Ea(s)](../../eav4-easv4-series.md) VM families not certified for SAP HANA? The Azure Das/Eas VM families are based on AMD processor driven hardware. SAP HANA does not support AMD processors, not even in virtualized scenarios
+- Why are the Azure [Da(s)v4](../../dav4-dasv4-series.md)/[Ea(s)](../../eav4-easv4-series.md) VM families not certified for SAP HANA? The Azure Das/Eas VM families are based on AMD processor-driven hardware. SAP HANA does not support AMD processors, not even in virtualized scenarios
- Why am I still getting the message: 'The cpu flags for the RDTSCP instruction or the cpu flags for constant_tsc or nonstop_tsc are not set or current_clocksource and available_clocksource are not correctly configured' with SAP HANA, despite the fact that I am running the most recent Linux kernels. For the answer, check [SAP support note #2791572](https://launchpad.support.sap.com/#/notes/2791572) - Where can I find architectures for deploying SAP Fiori on Azure? Check out the blog [SAP on Azure: Application Gateway Web Application Firewall (WAF) v2 Setup for Internet facing SAP Fiori Apps](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- January 24, 2022: Change in [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md), [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md), [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md), [HA for SAP NW on Azure VMs on SLES multi-SID guide](./high-availability-guide-suse-multi-sid.md), [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md), and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to remove cidr_netmask from the Pacemaker configuration, allowing the resource agent to determine the value automatically
- January 12, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to remove obsolete information for the SAP kernel that supports the scenario. -- December 08, 2021: Change in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md) to clarify Azure Load Balancer settings. -- December 08, 2021: Release of scenario [HA of SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md).
+- December 08, 2021: Change in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md) to clarify Azure Load Balancer settings
+- December 08, 2021: Release of scenario [HA of SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md)
- December 07, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to clarify that the instructions are applicable for both RHEL 7 and RHEL 8 - December 07, 2021: Change in [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to adjust the instructions for configuring SWAP file. -- December 02, 2021: Introduction of new STONITH fencing method in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) using Azure shared disk SBD device.-- December 01, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md), [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) and [HA for SAP NetWeaver on Azure VMs on Windows with Azure Files(SMB)](./high-availability-guide-windows-azure-files-smb.md) to update the SAP kernel version, required to support clustering SAP on Windows with file share.
+- December 02, 2021: Introduction of new STONITH fencing method in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) using Azure shared disk SBD device
+- December 01, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md), [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) and [HA for SAP NetWeaver on Azure VMs on Windows with Azure Files(SMB)](./high-availability-guide-windows-azure-files-smb.md) to update the SAP kernel version, required to support clustering SAP on Windows with file share
- November 30, 2021: Added [Using Windows DFS-N to support flexible SAPMNT share creation for SMB-based file share](./high-availability-guide-windows-dfs.md) - November 22, 2021: Change in [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md) and [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md) to clarify the guidelines for J2EE SAP systems and share consolidations per storage account.-- November 16, 2021: Release of high availability guides for SAP ASCS/ERS with NFS on Azure files [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md) and [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md).
+- November 16, 2021: Release of high availability guides for SAP ASCS/ERS with NFS on Azure files [HA for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md) and [HA for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md)
- November 15, 2021: Introduction of new proximity placement architecture for zonal deployments in [Azure proximity placement groups for optimal network latency with SAP applications](./sap-proximity-placement-scenarios.md) - November 02, 2021: Changed [Azure Storage types for SAP workload](./planning-guide-storage.md) and [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_sapase.md) to declare SAP ASE support for NFS on Azure NetApp Files. - November 02, 2021: Changed [SAP workload configurations with Azure Availability Zones](./sap-ha-availability-zones.md) to move Singapore SouthEast to regions for active/active configurations
In this section, you find documents about Microsoft Power BI integration into SA
- June 24, 2020: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to release new improved Azure Fence Agent and more resilient STONITH configuration for devices, based on Azure Fence Agent - June 24, 2020: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to release more resilient STONITH configuration - June 23, 2020: Changes to [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md) guide and introduction of [Azure Storage types for SAP workload](./planning-guide-storage.md) guide-- 06/22/2020: Add installation steps for new VM Extension for SAP to the [Deployment Guide](deployment-guide.md)
+- June 22, 2020: Add installation steps for new VM Extension for SAP to the [Deployment Guide](deployment-guide.md)
- June 16, 2020: Change in [Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md) to add a link to SUSE Public Cloud Infrastructure 101 documentation - June 10, 2020: Adding new HLI SKUs into [Available SKUs for HLI](./hana-available-skus.md) and [SAP HANA (Large Instances) storage architecture](./hana-storage-architecture.md) - May 21, 2020: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) and [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add a link to [Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md)
In this section, you find documents about Microsoft Power BI integration into SA
- March 26, 2020: Change in [High availability for SAP NetWeaver on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md), [High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md), [High availability for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md), [High availability for SAP NetWeaver on Azure VMs on RHEL multi-SID guide](./high-availability-guide-suse-multi-sid.md), [High availability for SAP NetWeaver on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md) and [High availability for SAP NetWeaver on Azure VMs on RHEL with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md) to update diagrams and clarify instructions for Azure Load Balancer backend pool creation - March 19, 2020: Major revision of document [Quickstart: Manual installation of single-instance SAP HANA on Azure Virtual Machines](./hana-get-started.md) to [Installation of SAP HANA on Azure Virtual Machines](./hana-get-started.md) - March 17, 2020: Change in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md) to remove SBD configuration setting that is no longer necessary-- March 16 2020: Clarification of column certification scenario in SAP HANA IaaS certified platform in [What SAP software is supported for Azure deployments](./sap-supported-product-on-azure.md)-- 03/11/2020: Change in [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md) to clarify multiple databases per DBMS instance support
+- March 16, 2020: Clarification of column certification scenario in SAP HANA IaaS certified platform in [What SAP software is supported for Azure deployments](./sap-supported-product-on-azure.md)
+- March 11, 2020: Change in [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md) to clarify multiple databases per DBMS instance support
- March 11, 2020: Change in [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md) explaining Generation 1 and Generation 2 VMs - March 10, 2020: Change in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) to clarify real existing throughput limits of ANF
virtual-machines High Availability Guide Rhel Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-multi-sid.md
vm-windows Previously updated : 08/03/2021 Last updated : 01/24/2022
This documentation assumes that:
--group g-NW2_ASCS sudo pcs resource create vip_NW2_ASCS IPaddr2 \
- ip=10.3.1.52 cidr_netmask=24 \
+ ip=10.3.1.52 \
--group g-NW2_ASCS sudo pcs resource create nc_NW2_ASCS azure-lb port=62010 \
This documentation assumes that:
--group g-NW3_ASCS sudo pcs resource create vip_NW3_ASCS IPaddr2 \
- ip=10.3.1.54 cidr_netmask=24 \
+ ip=10.3.1.54 \
--group g-NW3_ASCS sudo pcs resource create nc_NW3_ASCS azure-lb port=62020 \
This documentation assumes that:
--group g-NW2_AERS sudo pcs resource create vip_NW2_AERS IPaddr2 \
- ip=10.3.1.53 cidr_netmask=24 \
+ ip=10.3.1.53 \
--group g-NW2_AERS sudo pcs resource create nc_NW2_AERS azure-lb port=62112 \
This documentation assumes that:
--group g-NW3_AERS sudo pcs resource create vip_NW3_AERS IPaddr2 \
- ip=10.3.1.55 cidr_netmask=24 \
+ ip=10.3.1.55 \
--group g-NW3_AERS sudo pcs resource create nc_NW3_AERS azure-lb port=62122 \
virtual-machines High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md
vm-windows Previously updated : 08/11/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-QAS_ASCS sudo pcs resource create vip_QAS_ASCS IPaddr2 \
- ip=192.168.14.9 cidr_netmask=24 \
+ ip=192.168.14.9 \
--group g-QAS_ASCS sudo pcs resource create nc_QAS_ASCS azure-lb port=62000 \
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-QAS_AERS sudo pcs resource create vip_QAS_AERS IPaddr2 \
- ip=192.168.14.10 cidr_netmask=24 \
+ ip=192.168.14.10 \
--group g-QAS_AERS sudo pcs resource create nc_QAS_AERS azure-lb port=62101 \
virtual-machines High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-nfs-azure-files.md
vm-windows Previously updated : 11/22/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-NW1_ASCS sudo pcs resource create vip_NW1_ASCS IPaddr2 \
- ip=10.90.90.10 cidr_netmask=24 \
+ ip=10.90.90.10 \
--group g-NW1_ASCS sudo pcs resource create nc_NW1_ASCS azure-lb port=62000 \
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-NW1_AERS sudo pcs resource create vip_NW1_AERS IPaddr2 \
- ip=10.90.90.9 cidr_netmask=24 \
+ ip=10.90.90.9 \
--group g-NW1_AERS sudo pcs resource create nc_NW1_AERS azure-lb port=62101 \
virtual-machines High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel.md
vm-windows Previously updated : 08/11/2021 Last updated : 01/24/2022
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-<b>NW1</b>_ASCS sudo pcs resource create vip_<b>NW1</b>_ASCS IPaddr2 \
- ip=<b>10.0.0.7</b> cidr_netmask=<b>24</b> \
+ ip=<b>10.0.0.7</b> \
--group g-<b>NW1</b>_ASCS sudo pcs resource create nc_<b>NW1</b>_ASCS azure-lb port=620<b>00</b> \
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-<b>NW1</b>_AERS sudo pcs resource create vip_<b>NW1</b>_AERS IPaddr2 \
- ip=<b>10.0.0.8</b> cidr_netmask=<b>24</b> \
+ ip=<b>10.0.0.8</b> \
--group g-<b>NW1</b>_AERS sudo pcs resource create nc_<b>NW1</b>_AERS azure-lb port=621<b>02</b> \
virtual-machines Sap Certifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-certifications.md
vm-linux Previously updated : 07/21/2021 Last updated : 01/25/2022
References:
| SAP Product | Guest OS | RDBMS | Virtual Machine Types | | | | | |
-| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
-| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
-| SAP BusinessObjects BI | Windows |N/A |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
-| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE |A5 to A11, D11 to D14, DS11 to DS14, DS11_v2 to DS15_v2, GS1 to GS5, D2s_v3 to D64s_v3, D2as_v4 to D64as_v4, E2s_v3 to E64s_v3, E2as_v4 to E64as_v4, M64s, M64ms, M128s, M128ms, M64ls, M32ls, M32ts, M208s_v2, M208ms_v2, M416s_v2, M416ms_v2, M32(d)ms_v2, M64(d)s_v2, M64(d)ms_v2, M128(d)s_v2, M128(d)ms_v2, M192i(d)s_v2, M192i(d)ms_v2 |
+| SAP Business Suite Software | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | [1928533 - SAP Applications on Azure: Supported Products and Azure VM types](https://launchpad.support.sap.com/#/notes/1928533) |
+| SAP Business All-in-One | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | [1928533 - SAP Applications on Azure: Supported Products and Azure VM types](https://launchpad.support.sap.com/#/notes/1928533)|
+| SAP BusinessObjects BI | Windows |N/A | [1928533 - SAP Applications on Azure: Supported Products and Azure VM types](https://launchpad.support.sap.com/#/notes/1928533) |
+| SAP NetWeaver | Windows, SUSE Linux Enterprise, Red Hat Enterprise Linux, Oracle Linux |SQL Server, Oracle (Windows and Oracle Linux only), DB2, SAP ASE | [1928533 - SAP Applications on Azure: Supported Products and Azure VM types](https://launchpad.support.sap.com/#/notes/1928533) |
## Other SAP Workload supported on Azure
virtual-wan Virtual Wan Expressroute Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-expressroute-portal.md
From a browser, navigate to the [Azure portal](https://portal.azure.com) and sig
2. Select **Virtual WAN** from the results. On the Virtual WAN page, click **Create** to open the Create WAN page. 3. On the **Create WAN** page, on the **Basics** tab, fill in the following fields:
- ![Create WAN](./media/virtual-wan-expressroute-portal/createwan.png)
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/createwan.png" alt-text="Screenshot shows Create WAN page." border="false":::
* **Subscription** - Select the subscription that you want to use. * **Resource Group** - Create new or use existing.
You can also create a gateway in an existing hub by editing it.
2. On the **Edit virtual hub** page, select the checkbox **Include ExpressRoute gateway**. 3. Select **Confirm** to confirm your changes. It takes about 30 minutes for the hub and hub resources to fully create.
- ![existing hub](./media/virtual-wan-expressroute-portal/edithub.png "edit a hub")
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/edithub.png" alt-text="Screenshot shows editing an existing hub." border="false":::
### To view a gateway Once you have created an ExpressRoute gateway, you can view gateway details. Navigate to the hub, select **ExpressRoute**, and view the gateway.
-![View gateway](./media/virtual-wan-expressroute-portal/viewgw.png "view gateway")
## <a name="connectvnet"></a>Connect your VNet to the hub
In the portal, go to the **Virtual hub -> Connectivity -> ExpressRoute** page. I
1. Select the circuit. 2. Select **Connect circuit(s)**.
- ![connect circuits](./media/virtual-wan-expressroute-portal/cktconnect.png "connect circuits")
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/cktconnect.png" alt-text="Screenshot shows connect circuits." border="false":::
### <a name="authkey"></a>To connect by redeeming an authorization key
Use the authorization key and circuit URI you were provided in order to connect.
1. On the ExpressRoute page, click **+Redeem authorization key**
- ![Screenshot shows the ExpressRoute for a virtual hub with Redeem authorization key selected.](./media/virtual-wan-expressroute-portal/redeem.png "redeem")
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/redeem.png" alt-text="Screenshot shows the ExpressRoute for a virtual hub with Redeem authorization key selected." border="false":::
2. On the Redeem authorization key page, fill in the values.
- ![redeem key values](./media/virtual-wan-expressroute-portal/redeemkey2.png "redeem key values")
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/redeemkey2.png" alt-text="Screenshot shows redeem authorization key values." border="false":::
3. Select **Add** to add the key. 4. View the circuit. A redeemed circuit only shows the name (without the type, provider and other information) because it is in a different subscription than that of the user.
If you have sites connected to a Virtual WAN VPN gateway in the same hub as the
If you want to change the size of your ExpressRoute gateway, locate the ExpressRoute gateway inside the hub, and select the scale units from the dropdown. Save your change. It will take approximately 30 minutes to update the hub gateway.
-![change gateway size](./media/virtual-wan-expressroute-portal/changescale.png "change gateway size")
## To advertise default route 0.0.0.0/0 to endpoints
If you would like the Azure virtual hub to advertise the default route 0.0.0.0/0
1. Select your **Circuit ->…-> Edit connection**.
- ![Edit connection](./media/virtual-wan-expressroute-portal/defaultroute1.png "Edit connection")
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/defaultroute1.png" alt-text="Screenshot shows Edit ExpressRoute Gateway page." border="false":::
+1. Select **Enable** to propagate the default route.
-2. Select **Enable** to propagate the default route.
-
- ![Propagate default route](./media/virtual-wan-expressroute-portal/defaultroute2.png "Propagate default route")
+ :::image type="content" source="./media/virtual-wan-expressroute-portal/defaultroute2.png" alt-text="Screenshot shows enable propagate default route." border="false":::
## <a name="cleanup"></a>Clean up resources