Updates from: 03/18/2023 02:12:18
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
The following video provides an overview of on-premises provisioning.
- [App provisioning](user-provisioning.md)
- [Generic SQL connector](on-premises-sql-connector-configure.md)
- [Tutorial: ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
+- [Known issues](known-issues.md)
active-directory Scim Validator Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-validator-tutorial.md
Previously updated : 09/13/2022 Last updated : 03/17/2023
The first step is to select a testing method to validate your SCIM endpoint.
**Use default attributes** - The system provides the default attributes, and you modify them to meet your needs.
-**Discover schema** - If your end point supports /Schema, this option will allow the tool to discover the supported attributes. We recommend this option as it reduces the overhead of updating your app as you build it out.
+**Discover schema** - If your endpoint supports /Schema, this option lets the tool discover the supported attributes. We recommend this option as it reduces the overhead of updating your app as you build it out.
**Upload Azure AD Schema** - Upload the schema you've downloaded from your sample app on Azure AD.
Finally, you need to test and validate your endpoint.
### Use Postman to test endpoints (optional)
-In addition to using the SCIM Validator tool, you can also use Postman to validate an endpoint. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
+In addition to using the SCIM Validator tool, you can also use Postman to validate an endpoint. This example provides a set of tests in Postman. The example validates create, read, update, and delete (CRUD) operations. The operations are validated on users and groups, filtering, updates to group membership, and disabling users.
The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
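If you prefer a minimal client outside Postman, the following C# sketch shows what one of those standard HTTP requests can look like. It isn't part of the reference sample; the host URL, token environment variable, and filter value are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ScimSmokeTest
{
    static async Task Main()
    {
        // Placeholder host and token; replace with your deployed endpoint and a valid bearer token.
        using var client = new HttpClient { BaseAddress = new Uri("https://contoso-scim.azurewebsites.net/") };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("SCIM_BEARER_TOKEN"));

        // Standard SCIM request: list users matching a userName filter.
        string filter = Uri.EscapeDataString("userName eq \"kylie@contoso.com\"");
        HttpResponseMessage response = await client.GetAsync($"scim/Users?filter={filter}");

        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```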
If you created any Azure resources in your testing that are no longer needed, do
## Known Issues with Azure AD SCIM Validator

- Soft deletes (disables) aren't yet supported.
-- The time zone format is randomly generated and will fail for systems that try to validate it.
-- The preferred language format is randomly generated and will fail for systems that try to validate it.
+- The time zone format is randomly generated and fails for systems that try to validate it.
+- The preferred language format is randomly generated and fails for systems that try to validate it.
- The patch user remove attributes may attempt to remove mandatory/required attributes for certain systems. Such failures should be ignored.

## Next steps
-- [Learn how to add an app that is not in the Azure AD app gallery](../manage-apps/overview-application-gallery.md)
+- [Learn how to add an app that's not in the Azure AD app gallery](../manage-apps/overview-application-gallery.md)
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
Previously updated : 03/16/2023 Last updated : 03/17/2023
That's it! Your SCIM endpoint is now published, and you can use the Azure App Se
## Test your SCIM endpoint
-Requests to a SCIM endpoint require authorization. The SCIM standard has multiple options for authentication and authorization, including cookies, basic authentication, TLS client authentication, or any of the methods listed in [RFC 7644](https://tools.ietf.org/html/rfc7644#section-2).
+Requests to a SCIM endpoint require authorization. The SCIM standard has multiple options available. Requests can use cookies, basic authentication, TLS client authentication, or any of the methods listed in [RFC 7644](https://tools.ietf.org/html/rfc7644#section-2).
Be sure to avoid methods that aren't secure, such as username and password, in favor of a more secure method such as OAuth. Azure AD supports long-lived bearer tokens (for gallery and non-gallery applications) and the OAuth authorization grant (for gallery applications).
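For illustration only, a SCIM endpoint built on ASP.NET Core can require Azure AD-issued bearer tokens with the JWT bearer middleware. This is a generic sketch, not the reference sample's exact configuration; the tenant ID and audience values are placeholders.

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Validate bearer tokens; requires the Microsoft.AspNetCore.Authentication.JwtBearer package.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Placeholder tenant and audience; use your tenant ID and the app ID URI
        // (or client ID) of the app registration that represents the SCIM endpoint.
        options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
        options.Audience = "api://{client-id}";
    });
builder.Services.AddAuthorization();
builder.Services.AddControllers();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// Require an authenticated caller for every SCIM controller route.
app.MapControllers().RequireAuthorization();
app.Run();
```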
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 03/16/2023 Last updated : 03/17/2023
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
ID tokens are passed to websites and native clients. ID tokens contain profile i
## Token lifetime policies for refresh tokens and session tokens
-You can not set token lifetime policies for refresh tokens and session tokens. For lifetime, timeout, and revocation information on refresh tokens, see [Refresh tokens](refresh-tokens.md).
+You cannot set token lifetime policies for refresh tokens and session tokens. For lifetime, timeout, and revocation information on refresh tokens, see [Refresh tokens](refresh-tokens.md).
> [!IMPORTANT]
-> As of January 30, 2021 you can not configure refresh and session token lifetimes. Azure Active Directory no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement.
+> As of January 30, 2021 you cannot configure refresh and session token lifetimes. Azure Active Directory no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement.
>
> Existing tokens' lifetimes won't be changed. After they expire, a new token will be issued based on the default value.
>
A token lifetime policy is a type of policy object that contains token lifetime
Reducing the Access Token Lifetime property mitigates the risk of an access token or ID token being used by a malicious actor for an extended period of time. (These tokens cannot be revoked.) The trade-off is that performance is adversely affected, because the tokens have to be replaced more often.
-For an example, see [Create a policy for web sign-in](configure-token-lifetimes.md#create-a-policy-for-web-sign-in).
+For an example, see [Create a policy for web sign-in](registration-config-change-token-lifetime-how-to.md).
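As a rough illustration of what creating such a policy can look like (not taken from that article), the sketch below posts a token lifetime policy to Microsoft Graph. It assumes a pre-acquired Graph access token in an environment variable; the display name and four-hour access token lifetime are arbitrary placeholders.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class TokenLifetimePolicySketch
{
    static async Task Main()
    {
        // Placeholder: a Graph access token with permission to manage policies
        // (for example, Policy.ReadWrite.ApplicationConfiguration).
        string graphToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        var payload = new
        {
            // Four-hour access tokens; the policy name is arbitrary.
            definition = new[] { "{\"TokenLifetimePolicy\":{\"Version\":1,\"AccessTokenLifetime\":\"4:00:00\"}}" },
            displayName = "Contoso4HourAccessTokens",
            isOrganizationDefault = false
        };

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", graphToken);

        HttpResponseMessage response = await client.PostAsync(
            "https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies",
            new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json"));

        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}
```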
Access, ID, and SAML2 token configuration are affected by the following properties and their respectively set values:
Refresh and session token configuration are affected by the following properties
|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
-Non-persistent session tokens have a Max Inactive Time of 24 hours whereas persistent session tokens have a Max Inactive Time of 90 days. Anytime the SSO session token is used within its validity period, the validity period is extended another 24 hours or 90 days. If the SSO session token isn't used within its Max Inactive Time period, it's considered expired and will no longer be accepted. Any changes to this default periods should be change using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
+Non-persistent session tokens have a Max Inactive Time of 24 hours whereas persistent session tokens have a Max Inactive Time of 90 days. Anytime the SSO session token is used within its validity period, the validity period is extended another 24 hours or 90 days. If the SSO session token isn't used within its Max Inactive Time period, it's considered expired and will no longer be accepted. Any changes to this default period should be changed using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
You can use PowerShell to find the policies that will be affected by the retirement. Use the [PowerShell cmdlets](configure-token-lifetimes.md#get-started) to see all the policies created in your organization, or to find which apps and service principals are linked to a specific policy.
active-directory Active Directory Signing Key Rollover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-signing-key-rollover.md
Previously updated : 09/03/2021 Last updated : 03/16/2023 # Signing key rollover in the Microsoft identity platform
-This article discusses what you need to know about the public keys that are used by the Microsoft identity platform to sign security tokens. It is important to note that these keys roll over on a periodic basis and, in an emergency, could be rolled over immediately. All applications that use the Microsoft identity platform should be able to programmatically handle the key rollover process. Continue reading to understand how the keys work, how to assess the impact of the rollover to your application and how to update your application or establish a periodic manual rollover process to handle key rollover if necessary.
+This article discusses what you need to know about the public keys that are used by the Microsoft identity platform to sign security tokens. It's important to note that these keys roll over on a periodic basis and, in an emergency, could be rolled over immediately. All applications that use the Microsoft identity platform should be able to programmatically handle the key rollover process. Continue reading to understand how the keys work, how to assess the impact of the rollover to your application and how to update your application or establish a periodic manual rollover process to handle key rollover if necessary.
## Overview of signing keys in the Microsoft identity platform
-The Microsoft identity platform uses public-key cryptography built on industry standards to establish trust between itself and the applications that use it. In practical terms, this works in the following way: The Microsoft identity platform uses a signing key that consists of a public and private key pair. When a user signs in to an application that uses the Microsoft identity platform for authentication, the Microsoft identity platform creates a security token that contains information about the user. This token is signed by the Microsoft identity platform using its private key before it is sent back to the application. To verify that the token is valid and originated from Microsoft identity platform, the application must validate the token's signature using the public keys exposed by the Microsoft identity platform that is contained in the tenant's [OpenID Connect discovery document](https://openid.net/specs/openid-connect-discovery-1_0.html) or SAML/WS-Fed [federation metadata document](../azuread-dev/azure-ad-federation-metadata.md).
+The Microsoft identity platform uses public-key cryptography built on industry standards to establish trust between itself and the applications that use it. In practical terms, this works in the following way: The Microsoft identity platform uses a signing key that consists of a public and private key pair. When a user signs in to an application that uses the Microsoft identity platform for authentication, the Microsoft identity platform creates a security token that contains information about the user. This token is signed by the Microsoft identity platform using its private key before it's sent back to the application. To verify that the token is valid and originated from Microsoft identity platform, the application must validate the token's signature using the public keys exposed by the Microsoft identity platform that is contained in the tenant's [OpenID Connect discovery document](https://openid.net/specs/openid-connect-discovery-1_0.html) or SAML/WS-Fed [federation metadata document](../azuread-dev/azure-ad-federation-metadata.md).
-For security purposes, the Microsoft identity platform's signing key rolls on a periodic basis and, in the case of an emergency, could be rolled over immediately. There is no set or guaranteed time between these key rolls - any application that integrates with the Microsoft identity platform should be prepared to handle a key rollover event no matter how frequently it may occur. If your application doesn't handle sudden refreshes, and attempts to use an expired key to verify the signature on a token, your application will incorrectly reject the token. Checking every 24 hours for updates is a best practice, with throttled (once every five minutes at most) immediate refreshes of the key document if a token is encountered that doesn't validate with the keys in your application's cache.
+For security purposes, the Microsoft identity platform's signing key rolls on a periodic basis and, in the case of an emergency, could be rolled over immediately. There's no set or guaranteed time between these key rolls - any application that integrates with the Microsoft identity platform should be prepared to handle a key rollover event no matter how frequently it may occur. If your application doesn't handle sudden refreshes, and attempts to use an expired key to verify the signature on a token, your application will incorrectly reject the token. Checking every 24 hours for updates is a best practice, with throttled (once every five minutes at most) immediate refreshes of the key document if a token is encountered that doesn't validate with the keys in your application's cache.
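One way to follow that guidance in .NET is the `ConfigurationManager<OpenIdConnectConfiguration>` class from the Microsoft.IdentityModel.Protocols libraries, which caches the discovery document and throttles forced refreshes. The following is a minimal sketch; the tenant in the metadata URL is a placeholder, and the intervals simply mirror the recommendation above.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;

class SigningKeyRefreshSample
{
    static async Task Main()
    {
        var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            "https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration",
            new OpenIdConnectConfigurationRetriever());

        // Re-download the discovery document roughly once a day...
        configManager.AutomaticRefreshInterval = TimeSpan.FromHours(24);
        // ...but allow an on-demand refresh at most every five minutes.
        configManager.RefreshInterval = TimeSpan.FromMinutes(5);

        OpenIdConnectConfiguration config =
            await configManager.GetConfigurationAsync(CancellationToken.None);
        Console.WriteLine($"Signing keys currently published: {config.SigningKeys.Count}");

        // If a token fails signature validation with the cached keys,
        // request a refresh and retry validation once.
        configManager.RequestRefresh();
    }
}
```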
-There is always more than one valid key available in the OpenID Connect discovery document and the federation metadata document. Your application should be prepared to use any and all of the keys specified in the document, since one key may be rolled soon, another may be its replacement, and so forth. The number of keys present can change over time based on the internal architecture of the Microsoft identity platform as we support new platforms, new clouds, or new authentication protocols. Neither the order of the keys in the JSON response nor the order in which they were exposed should be considered meaningful to your app.
+There's always more than one valid key available in the OpenID Connect discovery document and the federation metadata document. Your application should be prepared to use any and all of the keys specified in the document, since one key may be rolled soon, another may be its replacement, and so forth. The number of keys present can change over time based on the internal architecture of the Microsoft identity platform as we support new platforms, new clouds, or new authentication protocols. Neither the order of the keys in the JSON response nor the order in which they were exposed should be considered meaningful to your app.
-Applications that support only a single signing key, or those that require manual updates to the signing keys, are inherently less secure and less reliable. They should be updated to use [standard libraries](reference-v2-libraries.md) to ensure that they are always using up-to-date signing keys, among other best practices.
+Applications that support only a single signing key, or those that require manual updates to the signing keys, are inherently less secure and less reliable. They should be updated to use [standard libraries](reference-v2-libraries.md) to ensure that they're always using up-to-date signing keys, among other best practices.
## How to assess if your application will be affected and what to do about it

How your application handles key rollover depends on variables such as the type of application or what identity protocol and library was used. The sections below assess whether the most common types of applications are impacted by the key rollover and provide guidance on how to update the application to support automatic rollover or manually update the key.
This guidance is **not** applicable for:
* On-premises applications published via application proxy don't have to worry about signing keys.

### <a name="nativeclient"></a>Native client applications accessing resources
-Applications that are only accessing resources (for example, Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) generally only obtain a token and pass it along to the resource owner. Given that they are not protecting any resources, they do not inspect the token and therefore do not need to ensure it is properly signed.
+Applications that are only accessing resources (for example, Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) only obtain a token and pass it along to the resource owner. Given that they aren't protecting any resources, they don't inspect the token and therefore don't need to ensure it's properly signed.
Native client applications, whether desktop or mobile, fall into this category and are thus not impacted by the rollover.

### <a name="webclient"></a>Web applications / APIs accessing resources
-Applications that are only accessing resources (such as Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) generally only obtain a token and pass it along to the resource owner. Given that they are not protecting any resources, they do not inspect the token and therefore do not need to ensure it is properly signed.
+Applications that are only accessing resources (such as Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) only obtain a token and pass it along to the resource owner. Given that they aren't protecting any resources, they don't inspect the token and therefore don't need to ensure it's properly signed.
Web applications and web APIs that are using the app-only flow (client credentials / client certificate) to request tokens fall into this category and are thus not impacted by the rollover.
passport.use(new OIDCStrategy({
### <a name="vs2015"></a>Web applications / APIs protecting resources and created with Visual Studio 2015 or later If your application was built using a web application template in Visual Studio 2015 or later and you selected **Work Or School Accounts** from the **Change Authentication** menu, it already has the necessary logic to handle key rollover automatically. This logic, embedded in the OWIN OpenID Connect middleware, retrieves and caches the keys from the OpenID Connect discovery document and periodically refreshes them.
-If you added authentication to your solution manually, your application might not have the necessary key rollover logic. You will need to write it yourself, or follow the steps in [Web applications / APIs using any other libraries or manually implementing any of the supported protocols](#other).
+If you added authentication to your solution manually, your application might not have the necessary key rollover logic. You'll need to write it yourself, or follow the steps in [Web applications / APIs using any other libraries or manually implementing any of the supported protocols](#other).
### <a name="vs2013"></a>Web applications protecting resources and created with Visual Studio 2013 If your application was built using a web application template in Visual Studio 2013 and you selected **Organizational Accounts** from the **Change Authentication** menu, it already has the necessary logic to handle key rollover automatically. This logic stores your organizationΓÇÖs unique identifier and the signing key information in two database tables associated with the project. You can find the connection string for the database in the projectΓÇÖs Web.config file.
-If you added authentication to your solution manually, your application might not have the necessary key rollover logic. You will need to write it yourself, or follow the steps in [Web applications / APIs using any other libraries or manually implementing any of the supported protocols.](#other).
+If you added authentication to your solution manually, your application might not have the necessary key rollover logic. You'll need to write it yourself, or follow the steps in [Web applications / APIs using any other libraries or manually implementing any of the supported protocols.](#other).
-The following steps will help you verify that the logic is working properly in your application.
+The following steps help you verify that the logic is working properly in your application.
-1. In Visual Studio 2013, open the solution, and then click on the **Server Explorer** tab on the right window.
-2. Expand **Data Connections**, **DefaultConnection**, and then **Tables**. Locate the **IssuingAuthorityKeys** table, right-click it, and then click **Show Table Data**.
+1. In Visual Studio 2013, open the solution, and then select the **Server Explorer** tab on the right window.
+2. Expand **Data Connections**, **DefaultConnection**, and then **Tables**. Locate the **IssuingAuthorityKeys** table, right-click it, and then select **Show Table Data**.
3. In the **IssuingAuthorityKeys** table, there will be at least one row, which corresponds to the thumbprint value for the key. Delete any rows in the table.
4. Right-click the **Tenants** table, and then click **Show Table Data**.
5. In the **Tenants** table, there will be at least one row, which corresponds to a unique directory tenant identifier. Delete any rows in the table. If you don't delete the rows in both the **Tenants** table and **IssuingAuthorityKeys** table, you will get an error at runtime.
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
Though we don't recommend that you use it, the [username/password flow](scenario
Using the username/password flow constrains your applications. For instance, applications can't sign in a user who needs to use multifactor authentication or the Conditional Access tool in Azure AD. Your applications also don't benefit from single sign-on. Authentication with the username/password flow goes against the principles of modern authentication and is provided only for legacy reasons.
-In desktop apps, if you want the token cache to persist, you can customize the [token cache serialization](msal-net-token-cache-serialization.md). By implementing [dual token cache serialization](msal-net-token-cache-serialization.md#dual-token-cache-serialization-msal-unified-cache-and-adal-v3), you can use backward-compatible and forward-compatible token caches. These tokens support previous generations of authentication libraries. Specific libraries include Azure AD Authentication Library for .NET (ADAL.NET) version 3 and version 4.
+In desktop apps, if you want the token cache to persist, you can customize the [token cache serialization](msal-net-token-cache-serialization.md). By implementing dual token cache serialization, you can use backward-compatible and forward-compatible token caches. These tokens support previous generations of authentication libraries. Specific libraries include Azure AD Authentication Library for .NET (ADAL.NET) version 3 and version 4.
For more information, see [Desktop app that calls web APIs](scenario-desktop-overview.md).
active-directory Howto Add Branding In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-branding-in-azure-ad-apps.md
Previously updated : 08/31/2020 Last updated : 03/16/2023
# Sign in with Microsoft: Branding guidelines for applications
-When developing applications with the Microsoft identity platform, you'll need to direct your customers when they want to use their work or school account (managed in Azure AD), or their personal account for sign-up and sign-in to your application.
+When developing applications with the Microsoft identity platform, you need to direct your customers when they want to use their work or school account (managed in Azure AD), or their personal account for sign-up and sign-in to your application.
In this article, you will:
Microsoft doesn't expose end users to the Azure or the Active Directory brand
## User account pictogram
-In an earlier version of these guidelines, we recommended using a "blue badge" pictogram. Based on user and developer feedback, we now recommend the use of the Microsoft logo instead. The Microsoft logo will help users understand that they can reuse the account they use with Microsoft 365 or other Microsoft business services to sign into your app.
+In an earlier version of these guidelines, we recommended using a "blue badge" pictogram. Based on user and developer feedback, we now recommend the use of the Microsoft logo instead. The Microsoft logo helps users understand that they can reuse the account they use with Microsoft 365 or other Microsoft business services to sign into your app.
## Signing up and signing in with Azure AD

Your app may present separate paths for sign-up and sign-in and the following sections provide visual guidance for both scenarios.
-**If your app supports end-user sign-up (for example, free to trial or freemium model)**: You can show a **sign-in** button that allows users to access your app with their work account or their personal account. Azure AD will show a consent prompt the first time they access your app.
+**If your app supports end-user sign-up (for example, free to trial or freemium model)**: You can show a **sign-in** button that allows users to access your app with their work account or their personal account. Azure AD shows a consent prompt the first time they access your app.
**If your app requires permissions that only admins can consent to, or if your app requires organizational licensing**: Separate admin acquisition from user sign-in. The **"get this app" button** will redirect admins to sign in and then ask them to grant consent on behalf of users in their organization, which has the added benefit of suppressing end-user consent prompts to your app.
Your app may present separate paths for sign-up and sign-in and the following se
Your "get the app" link must redirect the user to the Azure AD grant access (authorize) page, to allow an organization's administrator to authorize your app to have access to their organization's data, which is hosted by Microsoft. Details on how to request access are discussed in the [Integrating Applications with Azure Active Directory](./quickstart-register-app.md) article.
-After admins consent to your app, they can choose to add it to their users' Microsoft 365 app launcher experience (accessible from the waffle and from [https://portal.office.com/myapps](https://portal.office.com/myapps)). If you want to advertise this capability, you can use terms like "Add this app to your organization" and show a button like the following example:
+After admins consent to your app, they can choose to add it to their users' Microsoft 365 app launcher experience (accessible from the waffle and from [https://www.office.com/](https://www.office.com/)). If you want to advertise this capability, you can use terms like "Add this app to your organization" and show a button like the following example:
![Button showing the Microsoft logo and "Add to my organization" text](./media/howto-add-branding-in-azure-ad-apps/add-to-my-org.png)
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Previously updated : 11/12/2022 Last updated : 03/16/2023
If you are already enrolled in the Microsoft Partner Network (MPN) and have met
1. Sign into the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md)
-1. Choose an app and click **Branding**.
+1. Choose an app and click **Branding & properties**.
1. Click **Add MPN ID to verify publisher** and review the listed requirements.
If you are already enrolled in the Microsoft Partner Network (MPN) and have met
For more details on specific benefits, requirements, and frequently asked questions, see the [overview](publisher-verification-overview.md).

## Mark your app as publisher verified
-Make sure you have met the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified.
+Make sure you meet the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified.
-1. Ensure you are signed in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account that is authorized to make changes to the app(s) you want to mark as Publisher Verified and on the MPN Account in Partner Center.
+1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the MPN Account in Partner Center.
- - In Azure AD this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
+ - The Azure AD user must have one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
- - In Partner Center this user must have of the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Administrator (this is a shared role mastered in Azure AD).
+ - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD).
1. Navigate to the **App registrations** blade:
-1. Click on an app you would like to mark as Publisher Verified and open the **Branding** blade.
+1. Click on an app you would like to mark as Publisher Verified and open the **Branding & properties** blade.
1. Ensure the app's [publisher domain](howto-configure-publisher-domain.md) is set.
Make sure you have met the [pre-requisites](publisher-verification-overview.md#r
1. Click **Add MPN ID to verify publisher** near the bottom of the page.
-1. Enter your **MPN ID**. This MPN ID must be for:
+1. Enter the **MPN ID** for:
- A valid Microsoft Partner Network account that has completed the verification process.
Make sure you have met the [pre-requisites](publisher-verification-overview.md#r
1. Wait for the request to process; this may take a few minutes.
-1. If the verification was successful, the publisher verification window will close, returning you to the Branding blade. You will see a blue verified badge next to your verified **Publisher display name**.
+1. If the verification was successful, the publisher verification window closes, returning you to the **Branding & properties** blade. You see a blue verified badge next to your verified **Publisher display name**.
-1. Users who get prompted to consent to your app will start seeing the badge soon after you have gone through the process successfully, although it may take some time for this to replicate throughout the system.
+1. Users who get prompted to consent to your app start seeing the badge soon after you've gone through the process successfully, although it may take some time for updates to replicate throughout the system.
-1. Test this functionality by signing into your application and ensuring the verified badge shows up on the consent screen. If you are signed in as a user who has already granted consent to the app, you can use the *prompt=consent* query parameter to force a consent prompt. This parameter should be used for testing only, and never hard-coded into your app's requests.
+1. Test this functionality by signing into your application and ensuring the verified badge shows up on the consent screen. If you're signed in as a user who has already granted consent to the app, you can use the *prompt=consent* query parameter to force a consent prompt. This parameter should be used for testing only, and never hard-coded into your app's requests.
-1. Repeat this process as needed for any additional apps you would like the badge to be displayed for. You can use Microsoft Graph to do this more quickly in bulk, and PowerShell cmdlets will be available soon. See [Making Microsoft API Graph calls](troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) for more info.
+1. Repeat these steps as needed for any more apps you would like the badge to be displayed for. You can use Microsoft Graph to do this more quickly in bulk, and PowerShell cmdlets will be available soon. See [Making Microsoft API Graph calls](troubleshoot-publisher-verification.md#making-microsoft-graph-api-calls) for more info.
That's it! Let us know if you have any feedback about the process, the results, or the feature in general.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
Previously updated : 01/26/2023 Last updated : 03/02/2023
After Microsoft Authentication Library (MSAL) [acquires a token](msal-acquire-ca
## Quick summary

The recommendation is:
-- When you're writing a desktop application, use the cross-platform token cache as explained in [Desktop apps](msal-net-token-cache-serialization.md?tabs=desktop).
-- Do nothing for [mobile and Universal Windows Platform (UWP) apps](msal-net-token-cache-serialization.md?tabs=mobile). MSAL.NET provides secure storage for the cache.
-- In ASP.NET Core [web apps](scenario-web-app-call-api-overview.md) and [web APIs](scenario-web-api-call-api-overview.md), use [Microsoft.Identity.Web](microsoft-identity-web.md) as a higher-level API. You'll get token caches and much more. See [ASP.NET Core web apps and web APIs](msal-net-token-cache-serialization.md?tabs=aspnetcore).
+- When writing a desktop application, use the cross-platform token cache as explained in [desktop apps](msal-net-token-cache-serialization.md?tabs=desktop).
+- No action required for [mobile and Universal Windows Platform (UWP) apps](msal-net-token-cache-serialization.md?tabs=mobile). MSAL.NET provides secure storage for the cache.
+- In ASP.NET Core [web apps](scenario-web-app-call-api-overview.md) and [web APIs](scenario-web-api-call-api-overview.md), use [Microsoft.Identity.Web](microsoft-identity-web.md) as a higher-level API. `Microsoft.Identity.Web` provides token caches as explained in [ASP.NET Core web apps and web APIs](msal-net-token-cache-serialization.md?tabs=aspnetcore).
- In the other cases of [web apps](scenario-web-app-call-api-overview.md) and [web APIs](scenario-web-api-call-api-overview.md):
  - If you request tokens for users in a production application, use a [distributed token cache](msal-net-token-cache-serialization.md?tabs=aspnet#distributed-caches) (Redis, SQL Server, Azure Cosmos DB, distributed memory). Use token cache serializers available from [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache/).
- - Otherwise, if you want to use an in-memory cache:
- - If you're only using `AcquireTokenForClient`, either reuse the confidential client application instance and don't add a serializer, or create a new confidential client application and enable the [shared cache option](msal-net-token-cache-serialization.md?tabs=aspnet#no-token-cache-serialization).
+ - If you want to use an in-memory cache and you're only using [`AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.acquiretokenforclientparameterbuilder), either reuse the confidential client application instance and don't add a serializer, or create a new confidential client application and enable the [shared cache option](msal-net-token-cache-serialization.md?tabs=aspnet#no-token-cache-serialization).
- A shared cache is faster because it's not serialized. However, the memory will grow as tokens are cached. The number of tokens is equal to the number of tenants times the number of downstream APIs. An app token is about 2 KB in size, whereas tokens for a user are about 7 KB in size. It's great for development, or if you have few users.
- - If you want to use an in-memory token cache and control its size and eviction policies, use the [Microsoft.Identity.Web in-memory cache option](msal-net-token-cache-serialization.md?tabs=aspnet#in-memory-token-cache-1).
-- If you build an SDK and want to write your own token cache serializer for confidential client applications, inherit from [Microsoft.Identity.Web.MsalAbstractTokenCacheProvider](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache/MsalAbstractTokenCacheProvider.cs) and override the `WriteCacheBytesAsync` and `ReadCacheBytesAsync` methods.
+ A shared cache is faster because it's not serialized. However, the memory grows as tokens are cached. The number of tokens is equal to the number of tenants times the number of downstream APIs. An app token is about 2 KB in size, whereas tokens for a user are about 7 KB in size. It's great for development, or if you have few users.
+ - If you want to use an in-memory token cache and control its size and eviction policies, use the [Microsoft.Identity.Web in-memory cache option](msal-net-token-cache-serialization.md?tabs=aspnet#in-memory-token-cache-1).
+- If you build an SDK and want to write your own token cache serializer for confidential client applications, inherit from [Microsoft.Identity.Web.MsalAbstractTokenCacheProvider](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web.TokenCache/MsalAbstractTokenCacheProvider.cs) and override the [WriteCacheBytesAsync](/dotnet/api/microsoft.identity.web.tokencacheproviders.msalabstracttokencacheprovider.writecachebytesasync) and [ReadCacheBytesAsync](/dotnet/api/microsoft.identity.web.tokencacheproviders.msalabstracttokencacheprovider.readcachebytesasync) methods.
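As a rough sketch of that last pattern, a custom serializer stores the serialized cache blobs keyed by `cacheKey`. The dictionary-backed store below is purely illustrative, and the base-class method signatures can vary slightly across Microsoft.Identity.Web versions.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Identity.Web.TokenCacheProviders;

// Illustrative serializer: a real SDK would persist the blobs to its own store
// (file, database, distributed cache, and so on) instead of a dictionary.
public class InMemoryBlobCacheProvider : MsalAbstractTokenCacheProvider
{
    private readonly ConcurrentDictionary<string, byte[]> _store = new();

    protected override Task WriteCacheBytesAsync(string cacheKey, byte[] bytes)
    {
        _store[cacheKey] = bytes;
        return Task.CompletedTask;
    }

    protected override Task<byte[]> ReadCacheBytesAsync(string cacheKey)
    {
        _store.TryGetValue(cacheKey, out byte[] bytes);
        return Task.FromResult(bytes);
    }

    protected override Task RemoveKeyAsync(string cacheKey)
    {
        _store.TryRemove(cacheKey, out _);
        return Task.CompletedTask;
    }
}
```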
## [ASP.NET Core web apps and web APIs](#tab/aspnetcore)

The [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache) NuGet package provides token cache serialization within the [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web) library.
-If you're using the MSAL library directly in an ASP.NET Core app, consider moving to use [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web), which provides a simpler, higher-level API. Otherwise, see the [Non-ASP.NET Core web apps and web APIs](?tabs=aspnet#configuring-the-token-cache), which covers direct MSAL usage.
+If you're using the [MSAL library](/dotnet/api/microsoft.identity.client) directly in an ASP.NET Core app, consider using [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web), which provides a simpler, higher-level API. Otherwise, see [Non-ASP.NET Core web apps and web APIs](?tabs=aspnet#configuring-the-token-cache), which covers direct MSAL usage.
| Extension method | Description |
| - | - |
-| [AddInMemoryTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addinmemorytokencaches) | Creates a temporary cache in memory for token storage and retrieval. In-memory token caches are faster than the other cache types, but their tokens aren't persisted between application restarts, and you can't control the cache size. In-memory caches are good for applications that don't require tokens to persist between app restarts. Use an in-memory token cache in apps that participate in machine-to-machine auth scenarios like services, daemons, and others that use [AcquireTokenForClient](/dotnet/api/microsoft.identity.client.acquiretokenforclientparameterbuilder) (the client credentials grant). In-memory token caches are also good for sample applications and during local app development. Microsoft.Identity.Web versions 1.19.0+ share an in-memory token cache across all application instances.
-| `AddSessionTokenCaches` | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims, because the cookie will become too large.
+| [AddInMemoryTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addinmemorytokencaches) | Creates a temporary cache in memory for token storage and retrieval. In-memory token caches are faster than other cache types, but their tokens aren't persisted between application restarts, and you can't control the cache size. In-memory caches are good for applications that don't require tokens to persist between app restarts. Use an in-memory token cache in apps that participate in machine-to-machine auth scenarios like services, daemons, and others that use [AcquireTokenForClient](/dotnet/api/microsoft.identity.client.acquiretokenforclientparameterbuilder) (the client credentials grant). In-memory token caches are also good for sample applications and during local app development. Microsoft.Identity.Web versions 1.19.0+ share an in-memory token cache across all application instances.
+| [AddSessionTokenCaches](/dotnet/api/microsoft.identity.web.microsoftidentityappcallswebapiauthenticationbuilder.addsessiontokencaches) | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims, because the cookie becomes too large.
| `AddDistributedTokenCaches` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation. It enables you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed).
If you're using the MSAL library directly in an ASP.NET Core app, consider movin
Here's an example of code that uses the in-memory cache in the [ConfigureServices](/dotnet/api/microsoft.aspnetcore.hosting.startupbase.configureservices) method of the [Startup](/aspnet/core/fundamentals/startup) class in an ASP.NET Core application:
-```CSharp
-#using Microsoft.Identity.Web
-```
- ```CSharp using Microsoft.Identity.Web;
The usage of distributed cache is featured in the [ASP.NET Core web app tutorial
## [Non-ASP.NET Core web apps and web APIs](#tab/aspnet)
-Even when you use MSAL.NET, you can benefit from token cache serializers in Microsoft.Identity.Web.TokenCache.
+If you're using MSAL.NET in your web app or web API, you can benefit from token cache serializers in [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache).
### Referencing the NuGet package
Instead of `app.AddInMemoryTokenCache();`, you can use different caching seriali
<a id="no-token-cache-serialization"></a> #### Token cache without serialization
-You can specify that you don't want to have any token cache serialization and instead rely on the MSAL.NET internal cache:
-- Use `.WithCacheOptions(CacheOptions.EnableSharedCacheOptions)` when you build the application.
-- Don't add any serializer.
+You can specify that you don't want to have any token cache serialization and instead rely on the MSAL.NET internal cache. Use `.WithCacheOptions(CacheOptions.EnableSharedCacheOptions)` when building the application and don't add any serializer.
```CSharp
// Create the confidential client application
You can specify that you don't want to have any token cache serialization and in
.Build(); ```
-`WithCacheOptions(CacheOptions.EnableSharedCacheOptions)` makes the internal MSAL token cache shared between MSAL client application instances. Sharing a token cache is faster than using any token cache serialization, but the internal in-memory token cache doesn't have eviction policies. Existing tokens will be refreshed in place, but fetching tokens for different users, tenants, and resources makes the cache grow accordingly.
+`WithCacheOptions(CacheOptions.EnableSharedCacheOptions)` makes the internal MSAL token cache shared between MSAL client application instances. Sharing a token cache is faster than using any token cache serialization, but the internal in-memory token cache doesn't have eviction policies. Existing tokens are refreshed in place, but fetching tokens for different users, tenants, and resources makes the cache grow accordingly.
If you use this approach and have a large number of users or tenants, be sure to monitor the memory footprint. If the memory footprint becomes a problem, consider enabling token cache serialization, which might reduce the internal cache size. Currently, you can't use shared cache and cache serialization together.
Add the [Microsoft.Identity.Client.Extensions.Msal](https://www.nuget.org/packag
#### Configuring the token cache
-For details, see the [wiki page](https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/wiki/Cross-platform-Token-Cache). Here's an example of usage of the cross-platform token cache:
+For details, see the [Cross platform Token Cache](https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/wiki/Cross-platform-Token-Cache). Here's an example using the cross-platform token cache:
```csharp
var storageProperties =
For example, websites might choose to store tokens in a Redis cache, or desktop
The following classes and interfaces are used in token cache serialization:

-- `ITokenCache` defines events to subscribe to token cache serialization requests and methods to serialize or deserialize the cache at various formats (ADAL v3.0, MSAL 2.x, and MSAL 3.x = ADAL v5.0).
-- `TokenCacheCallback` is a callback passed to the events so that you can handle the serialization. They'll be called with arguments of type `TokenCacheNotificationArgs`.
+- `ITokenCache` defines events to subscribe to token cache serialization requests and methods to serialize or deserialize the cache at various formats (MSAL 2.x and MSAL 3.x).
+- `TokenCacheCallback` is a callback passed to the events so that you can handle the serialization. They are called with arguments of type `TokenCacheNotificationArgs`.
- `TokenCacheNotificationArgs` only provides the `ClientId` value of the application and a reference to the user for which the token is available.

![Diagram that shows the classes in token cache serialization.](media/msal-net-token-cache-serialization/class-diagram.png)
Examples of token cache serializers are provided in [Microsoft.Identity.Web/Toke
### Custom token cache for a desktop or mobile app (public client application)
-Since MSAL.NET v2.x, you have several options for serializing the token cache of a public client. You can serialize the cache only to the MSAL.NET format. (The unified format cache is common across MSAL and the platforms.) You can also support the [legacy](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization) token cache serialization of ADAL v3.
+MSAL.NET v2.x and later versions provide several options for serializing the token cache of a public client. You can serialize the cache only to the MSAL.NET format. (The unified format cache is common across MSAL and the platforms.) You can also support the [legacy](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization) token cache serialization of ADAL v3.
Customizing the token cache serialization to share the single sign-on state between ADAL.NET 3.x, ADAL.NET 5.x, and MSAL.NET is explained in part of the following sample: [active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2).
static class TokenCacheHelper
A product-quality, file-based token cache serializer for public client applications (for desktop applications running on Windows, Mac, and Linux) is available from the [Microsoft.Identity.Client.Extensions.Msal](https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/tree/master/src/Microsoft.Identity.Client.Extensions.Msal) open-source library. You can include it in your applications from the following NuGet package: [Microsoft.Identity.Client.Extensions.Msal](https://www.nuget.org/packages/Microsoft.Identity.Client.Extensions.Msal/).
-#### Dual token cache serialization (MSAL unified cache and ADAL v3)
+#### Dual token cache serialization (MSAL unified cache)
-If you want to implement token cache serialization with the unified cache format (common to ADAL.NET 4.x, MSAL.NET 2.x, and other MSALs of the same generation or older, on the same platform), take a look at the following sample: https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/TokenCacheMigration/ADAL2MSAL.
+If you want to implement token cache serialization with the unified cache format (common to MSAL.NET 2.x and other MSALs of the same generation or older, on the same platform), take a look at the following sample: https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/TokenCacheMigration/ADAL2MSAL.
MSAL exposes important metrics as part of [AuthenticationResult.AuthenticationRe
| Metric | Meaning | When to trigger an alarm? |
| :-: | :-: | :--: |
-| `DurationTotalInMs` | Total time spent in MSAL, including network calls and cache. | Alarm on overall high latency (> 1 second). Value depends on token source. From the cache: one cache access. From Azure Active Directory (Azure AD): two cache accesses plus one HTTP call. First ever call (per-process) will take longer because of one extra HTTP call. |
+| `DurationTotalInMs` | Total time spent in MSAL, including network calls and cache. | Alarm on overall high latency (> 1 second). Value depends on token source. From the cache: one cache access. From Azure Active Directory (Azure AD): two cache accesses plus one HTTP call. First ever call (per-process) takes longer because of one extra HTTP call. |
| `DurationInCacheInMs` | Time spent loading or saving the token cache, which is customized by the app developer (for example, save to Redis).| Alarm on spikes. |
| `DurationInHttpInMs`| Time spent making HTTP calls to Azure AD. | Alarm on spikes.|
| `TokenSource` | Source of the token. Tokens are retrieved from the cache much faster (for example, ~100 ms versus ~700 ms). Can be used to monitor and alarm the cache hit ratio. | Use with `DurationTotalInMs`. |
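These metrics are surfaced on `AuthenticationResult.AuthenticationResultMetadata` in MSAL.NET. The following sketch logs them after an `AcquireTokenForClient` call; the client ID, tenant, and secret values are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class CacheMetricsSample
{
    static async Task Main()
    {
        // Placeholder client ID, tenant, and secret; replace with your app registration values.
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create("11111111-1111-1111-1111-111111111111")
            .WithClientSecret(Environment.GetEnvironmentVariable("CLIENT_SECRET"))
            .WithAuthority("https://login.microsoftonline.com/contoso.onmicrosoft.com")
            .Build();

        AuthenticationResult result = await app
            .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
            .ExecuteAsync();

        // Emit the cache-related metrics described in the table above.
        AuthenticationResultMetadata metadata = result.AuthenticationResultMetadata;
        Console.WriteLine($"TokenSource: {metadata.TokenSource}");
        Console.WriteLine($"DurationTotalInMs: {metadata.DurationTotalInMs}");
        Console.WriteLine($"DurationInCacheInMs: {metadata.DurationInCacheInMs}");
        Console.WriteLine($"DurationInHttpInMs: {metadata.DurationInHttpInMs}");
    }
}
```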
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Previously updated : 10/21/2021 Last updated : 03/16/2023
Below are some common issues that may occur during the process.
- **I don't know my Microsoft Partner Network ID (MPN ID) or I don't know who the primary contact for the account is.**
1. Navigate to the [MPN enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new).
2. Sign in with a user account in the org's primary Azure AD tenant.
- 3. If an MPN account already exists, this will be recognized and you'll be added to the account.
+ 3. If an MPN account already exists, this is recognized and you are added to the account.
4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the MPN ID and primary account contact will be listed.

- **I don't know who my Azure AD Global Administrator (also known as company admin or tenant admin) is. How do I find them? What about the Application Administrator or Cloud Application Administrator?**
Most commonly caused by the signed-in user not being a member of the proper role
- The MPN ID is correct.
- There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success".
-2. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). Be aware that all Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account.
+2. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account.
3. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions).

### MPNGlobalAccountNotFound
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
The following Windows components play a key role in requesting and using a PRT:
A PRT contains claims found in most Azure AD refresh tokens. In addition, there are some device-specific claims included in the PRT. They are as follows:

* **Device ID**: A PRT is issued to a user on a specific device. The device ID claim `deviceID` determines the device the PRT was issued to the user on. This claim is later issued to tokens obtained via the PRT. The device ID claim is used to determine authorization for Conditional Access based on device state or compliance.
-* **Session key**: The session key is an encrypted symmetric key, generated by the Azure AD authentication service, issued as part of the PRT. The session key acts as the proof of possession when a PRT is used to obtain tokens for other applications.
+* **Session key**: The session key is an encrypted symmetric key, generated by the Azure AD authentication service, issued as part of the PRT. The session key acts as the proof of possession when a PRT is used to obtain tokens for other applications. The session key is rolled on Windows 10 or newer Azure AD joined or Hybrid Azure AD joined devices if it's older than 30 days.
### Can I see what's in a PRT?
A PRT can get a multifactor authentication (MFA) claim in specific scenarios. Wh
Windows 10 or newer maintains a partitioned list of PRTs for each credential. So, there's a PRT for each of Windows Hello for Business, password, or smartcard. This partitioning ensures that MFA claims are isolated based on the credential used, and not mixed up during token requests.
+> [!NOTE]
+> When you use a password to sign in to a Windows 10 or newer Azure AD joined or Hybrid Azure AD joined device, MFA may be required during WAM interactive sign-in after the session key associated with the PRT is rolled.
+
## How is a PRT invalidated?
A PRT is invalidated in the following scenarios:
active-directory Check Workflow Execution Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md
+
+ Title: 'Check execution user scope of a workflow - Azure Active Directory'
+description: Describes how to check the users who fall into the execution scope of a Lifecycle Workflow.
++++++ Last updated : 03/09/2023++++++
+# Check execution user scope of a workflow (Preview)
+
+Workflow scheduling automatically processes the workflow for users meeting the workflow's execution conditions. This article walks you through the steps to check the users who fall into the execution scope of a workflow. For more information about execution conditions, see: [workflow basics](../governance/understanding-lifecycle-workflows.md#workflow-basics).
+
+## Check execution user scope of a workflow using the Azure portal
+
+To check the users who fall under the execution scope of a workflow, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Type in **Identity Governance** on the search bar near the top of the page and select it.
+
+1. In the left menu, select **Lifecycle workflows (Preview)**.
+
+1. From the list of workflows, select the workflow you want to check the execution scope of.
+
+1. On the workflow overview page, select **Execution conditions (Preview)**.
+
+1. On the Execution conditions page, select the **Execution User Scope** tab.
+
+1. On this page, you're presented with a list of users who currently fall into the execution scope of the workflow.
+ :::image type="content" source="media/check-workflow-execution-scope/execution-user-scope-list.png" alt-text="Screenshot of users under scope of workflow execution." lightbox="media/check-workflow-execution-scope/execution-user-scope-list.png":::
+
+> [!NOTE]
+> The workflow engine routinely evaluates the users that meet the execution conditions. The results won't be up to date if the execution conditions or relevant attributes on the user have changed recently, or if the time-based trigger has recently passed.
+
+## Check execution user scope of a workflow using Microsoft Graph
+
+To check the execution user scope of a workflow using the Microsoft Graph API, see: [List executionScope](/graph/api/workflow-list-executionscope).
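The following is a minimal sketch of that call, assuming the Graph beta endpoint and a placeholder workflow ID; the response lists the users who currently meet the workflow's execution conditions. Confirm the exact request and response shape in the linked reference.

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/{workflowId}/executionScope
```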
+
+## Next steps
+
+- [Manage workflow properties](manage-workflow-properties.md)
+- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md)
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
Now that your Logic app is configured for use with Lifecycle Workflows, you can
## Next steps - [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md)-- [Manage Workflow Versions](manage-workflow-tasks.md)
+- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
Workflows can be created and customized for common scenarios using templates, or
## Prerequisites -- Azure AD Premium P2-
-For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements)
+The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements).
## Create a Lifecycle workflow using a template in the Azure portal
active-directory Customize Workflow Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md
+
+ Title: Customize emails sent out by workflow tasks
+description: A step by step guide for customizing emails sent out using tasks within Lifecycle Workflows
++++++ Last updated : 02/06/2023+++
+# Customize emails sent out by workflow tasks (Preview)
+
+Lifecycle Workflows provide several tasks that send out email notifications. Email notifications can be customized to suit the needs of a specific workflow. For a list of these tasks, see: [Lifecycle Workflows tasks and definitions (Preview)](lifecycle-workflow-tasks.md).
+
+Email tasks allow for the customization of the following aspects:
+
+- Additional CC recipients
+- Sender domain
+- Organizational branding of the email
+- Subject
+- Message body
+- Email language
+
+> [!NOTE]
+> To avoid additional security disclaimers, you should opt in to using a custom domain and organizational branding.
+
+For more information on these customizable parameters, see: [Common email task parameters](lifecycle-workflow-tasks.md#common-email-task-parameters).
+
+## Prerequisites
+
+- Azure AD Premium P2
+
+For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements)
+
+## Customize email using the Azure portal
+
+When customizing an email sent via Lifecycle Workflows, you can customize either a new or an existing task. The customization works the same way in both cases, but the following steps walk you through updating an existing task. To customize emails sent out from tasks within workflows using the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Type in **Identity Governance** in the search bar near the top of the page, and select it.
+
+1. In the left menu, select **Lifecycle workflows (Preview)**.
+
+1. In the left menu, select **Workflows (Preview)**.
+
+1. On the left side of the screen, select **Tasks (Preview)**.
+
+1. On the tasks screen, select the task for which you want to customize the email.
+
+1. On the specific task screen, you can set CC recipients to include others in the email beyond the default audience.
+
+1. Select the **Email Customization** tab.
+
+1. On the email customization screen, enter a custom subject and message body, and select the email language translation option used to translate the message body of the email.
+ :::image type="content" source="media/customize-workflow-email/customize-workflow-email-example.png" alt-text="Screenshot of an example of a customized email from a workflow.":::
+1. After making changes, select **Save** to capture the changes to the customized email.
++
+## Format attributes within customized emails
+
+To further personalize customized emails, you can take advantage of dynamic attributes. By placing dynamic attributes in the subject or message body, you can call out values such as a user's name, their generated Temporary Access Pass, or their manager's email.
+
+To use dynamic attributes within your customized emails, you must follow the formatting rules within the customized email. The proper format is:
+
+{{**dynamic attribute**}}
+
+The following screenshot is an example of the proper format for dynamic attributes within a customized email:
++
+When typed out, it's written the following way:
+
+```html
+Welcome to the team, {{userGivenName}}
+
+We're excited to have you join our growing team and look forward to a successful and memorable journey together.
+
+We've already set up a few things to help you get started quickly and make your onboarding process as smooth as possible.
+
+For more information and next steps, please contact your manager, {{managerDisplayName}}
+
+```
+
+For a full list of dynamic attributes that can be used with customized emails, see: [Dynamic attributes within email](lifecycle-workflow-tasks.md#dynamic-attributes-within-email).
+
+## Use custom branding and domain in emails sent out using workflows
+
+Emails sent out using Lifecycle Workflows can be customized to carry your own company branding and to be sent from your company domain. When you opt in to using custom branding and domain, every email sent out using Lifecycle Workflows reflects these settings. To enable these features, the following prerequisites are required:
+
+- A verified domain. To add a custom domain, see: [Managing custom domain names in your Azure Active Directory](../enterprise-users/domains-manage.md)
+- Custom Branding set within Azure AD if you want to have your custom branding used in emails. To set organizational branding within your Azure tenant, see: [Configure your company branding (preview)](../fundamentals/how-to-customize-branding.md).
+
+After these prerequisites are satisfied, follow these steps:
+
+1. On the Lifecycle workflows page, select **Workflow settings (Preview)**.
+
+1. On the settings page, under **Email domain**, select your domain from a drop-down list of your verified domains.
+ :::image type="content" source="media/customize-workflow-email/workflow-email-settings.png" alt-text="Screenshot of workflow domain settings.":::
+1. Use the **Use company branding banner logo** setting to choose whether company branding is used in emails.
+ :::image type="content" source="media/customize-workflow-email/customize-email-logo-setting.png" alt-text="Screenshot of email logo setting.":::
++
+## Customize email using Microsoft Graph
+
+To customize email using the Microsoft Graph API, see: [workflow: createNewVersion](/graph/api/identitygovernance-workflow-createnewversion).
+
+## Set custom branding and domain workflow settings in Lifecycle Workflows using Microsoft Graph
+
+To turn on the custom branding and domain feature settings in Lifecycle Workflows using the Microsoft Graph API, see: [lifecycleManagementSettings resource type](/graph/api/resources/identitygovernance-lifecyclemanagementsettings)
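As a hedged sketch, the settings can be updated with a single PATCH against the beta settings endpoint. The `emailSettings`, `senderDomain`, and `useCompanyBranding` property names are drawn from the linked resource type, and `contoso.com` is a placeholder verified domain; confirm both against the reference before use.

```http
PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
Content-Type: application/json

{
    "emailSettings": {
        "senderDomain": "contoso.com",
        "useCompanyBranding": true
    }
}
```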
+
+## Next steps
+
+- [Lifecycle Workflow tasks](lifecycle-workflow-tasks.md)
+- [Manage workflow versions](manage-workflow-tasks.md)
++
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
-# Customize the schedule of workflows
+# Customize the schedule of workflows (Preview)
Workflows created using Lifecycle Workflows can be fully customized to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours, but the interval can be set to run as frequently as every hour, or as infrequently as every 24 hours.
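As a rough sketch, the interval can also be changed through the Microsoft Graph beta settings endpoint; the `workflowScheduleIntervalInHours` property name is an assumption based on the Lifecycle Workflows settings resource, so verify it against the Graph reference.

```http
PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
Content-Type: application/json

{
    "workflowScheduleIntervalInHours": 6
}
```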
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
You can remove workflows that are no longer needed. Deleting these workflows all
## Prerequisites -- Azure AD Premium P2-
-For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements)
+The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements).
## Delete a workflow using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Type in **Identity Governance** on the search bar near the top of the page and select it.
1. In the left menu, select **Lifecycle Workflows (Preview)**.
After deleting workflows, you can view them on the **Deleted Workflows (Preview)
To delete a workflow using API via Microsoft Graph, see: [Delete workflow (lifecycle workflow)](/graph/api/identitygovernance-workflow-delete?view=graph-rest-beta&preserve-view=true). -
-To view
-
-Workflows can be deleted by running the following call:
-```http
-DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-```
## View deleted workflows using Microsoft Graph To view a list of deleted workflows using the Microsoft Graph API, see: [List deleted workflows](/graph/api/identitygovernance-lifecycleworkflowscontainer-list-deleteditems). - ## Permanently delete a workflow using Microsoft Graph To permanently delete a workflow using the Microsoft Graph API, see: [Permanently delete a deleted workflow](/graph/api/identitygovernance-deleteditemcontainer-delete)
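As a rough sketch, assuming the beta `deletedItems` container and a placeholder workflow ID, the two calls look like the following; confirm the exact paths in the linked references.

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows

DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/{workflowId}
```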
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
# Lifecycle Workflows Custom Task Extension (Preview)
-Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you'll be able to utilize the concept of custom task extensions to call-out to external systems as part of a workflow. By calling out to the external systems, you're able to accomplish things, which can extend the purpose of your workflows. When a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom tasks extensions to call-out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md).
+Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you can use custom task extensions to call out to external systems as part of a workflow, which extends the purpose of your workflows. When a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants access to an email account for a manager when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions to call out to [Azure Logic Apps](../../logic-apps/logic-apps-overview.md).
## Prerequisite Logic App roles required for integration with the custom task extension
The roles on the Azure Logic App, which allows it to be compatible with the cust
## Custom task extension deployment scenarios
-When creating custom task extensions, the scenarios for how it will interact with Lifecycle Workflows can be one of two ways:
+A custom task extension can interact with Lifecycle Workflows in one of two ways:
:::image type="content" source="media/lifecycle-workflow-extensibility/task-extension-deployment-scenarios.png" alt-text="Screenshot of custom task deployment scenarios."::: - **Launch and continue** - The Azure Logic App is started, and the following task execution immediately continues with no response expected from the Azure Logic App. This scenario is best suited if the Lifecycle workflow doesn't require any feedback (including status) from the Azure Logic App. With this scenario, as long as the workflow is started successfully, the workflow is viewed as a success.-- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within a customer defined duration window, the task will be considered failed.
- :::image type="content" source="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png" alt-text="Screenshot of custom task launch and wait task choice.":::
+- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within a customer defined duration window, the task is considered failed.
+ :::image type="content" source="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png" alt-text="Screenshot of custom task launch and wait task choice." lightbox="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png":::
+
+## Response authorization
+
+When creating a custom task extension that waits for a response from the Logic App, you can define which applications can send a response.
++
+Response authorization can be utilized in one of the following ways:
+
+- **System-assigned managed identity (Default)** - Enables and uses the Logic App's system-assigned managed identity. For more information, see: [Authenticate access to Azure resources with managed identities in Azure Logic Apps](/azure/logic-apps/create-managed-service-identity)
+- **No authorization** - Grants no authorization to the Logic App. You're responsible for assigning an application permission or role assignment.
+- **Existing application** - You can choose an existing application to respond.
++ ## Custom task extension integration with Azure Logic Apps high-level steps
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
Task detailed history information allows you to filter for specific information
- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran. - **Tasks**: You can filter based on specific task names.
-Separating processing of the workflow from the tasks is important because, in a workflow, processing a user certain tasks could be successful, while others could fail. Whether or not a task runs after a failed task in a workflow depends on parameters such as enabling continue On Error, and their placement within the workflow. For more information, see [Common task parameters (preview)](lifecycle-workflow-tasks.md#common-task-parameters-preview).
+Separating processing of the workflow from the tasks is important because, when a workflow processes a user, certain tasks could succeed while others fail. Whether a task runs after a failed task in a workflow depends on parameters such as enabling continue On Error, and their placement within the workflow. For more information, see [Common task parameters (preview)](lifecycle-workflow-tasks.md#common-task-parameters).
## Next steps
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Last updated 01/26/2023
-# Lifecycle Workflow built-in tasks (preview)
+# Lifecycle Workflow built-in tasks (Preview)
Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be utilized to make customized workflows to suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you'll get the complete list of tasks, information on common parameters each task has, and a list of unique parameters needed for each specific task.
-## Supported tasks (preview)
+## Supported tasks
Lifecycle Workflow's built-in tasks each include an identifier, known as **taskDefinitionID**, and can be used to create either new workflows from scratch, or inserted into workflow templates so that they fit the needs of your organization. For more information on templates available for use with Lifecycle Workflows, see: [Lifecycle Workflow Templates](lifecycle-workflow-templates.md). [!INCLUDE [Lifecylce Workflows tasks table](../../../includes/lifecycle-workflows-tasks-table.md)]
-## Common task parameters (preview)
+## Common task parameters
Common task parameters are the non-unique parameters contained in every task. When adding tasks to a new workflow, or a workflow template, you can customize and configure these parameters so that they match your requirements.
Common task parameters are the non-unique parameters contained in every task. Wh
|continueOnError | A boolean value that determines if the failure of this task stops the subsequent workflows from running. | |arguments | Contains unique parameters relevant for the given task. |
+## Common email task parameters
+Emails sent from tasks can be customized. If you choose to customize the email, you can set the following arguments:
-## Task details (preview)
+- **Subject:** Customizes the subject of emails.
+- **Message body:** Customizes the body of the emails being sent out.
+- **Email language translation:** Overrides the email recipient's language settings. Custom text isn't translated, so we recommend setting this language to the same language as the custom text.
-Below is each specific task, and detailed information such as parameters and prerequisites, required for them to run successfully. The parameters are noted as they appear both in the Azure portal, and within Microsoft Graph. For information about editing Lifecycle Workflow tasks in general, see: [Manage workflow Versions](manage-workflow-tasks.md).
+
+For a step-by-step guide on this, see: [Customize emails sent out by workflow tasks](customize-workflow-email.md).
+
+### Dynamic attributes within email
+
+With customized emails, you can include dynamic attributes within the subject and body to personalize these emails. The list of dynamic attributes that can be included is as follows:
++
+|Attribute |Definition |
+|||
+|userDisplayName | The user's display name. |
+|userEmployeeHireDate | The user's employee hire date. |
+|userEmployeeLeaveDateTime | The user's employee leave date time. |
+|managerDisplayName | The display name of the user's manager. |
+|temporaryAccessPass | The generated Temporary Access Pass. Only available with the **Generate TAP And Send Email** task. |
+|userPrincipalName | The user's userPrincipalName. |
+|managerEmail | The manager's email. |
+|userSurname | User's last name. |
+|userGivenName | User's first name. |
++
+> [!NOTE]
+> When adding these attributes to a customized email or subject, they must be properly formatted. For a step-by-step guide on doing this, see: [Format attributes within customized emails](customize-workflow-email.md).
+
+## Task details
+
+This section describes each specific task, along with detailed information such as the parameters and prerequisites required for it to run successfully. The parameters are noted as they appear both in the Azure portal, and within Microsoft Graph. For information about editing Lifecycle Workflow tasks in general, see: [Manage workflow Versions](manage-workflow-tasks.md).
### Send welcome email to new hire
For Microsoft Graph the parameters for the **Send welcome email to new hire** ta
"displayName": "Send Welcome Email", "isEnabled": true, "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "arguments": []
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2"
+ },
+ {
+ "name": "customSubject",
+ "value": "Welcome to the organization {{userDisplayName}}!"
+ },
+ {
+ "name": "customBody",
+ "value": "Welcome to our organization {{userGivenName}} {{userSurname}}. \nFor more information, reach out to your manager {{managerDisplayName}} at {{managerEmail}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph the parameters for the **Send onboarding reminder email** ta
"displayName": "Send onboarding reminder email", "isEnabled": true, "taskDefinitionId": "3C860712-2D37-42A4-928F-5C93935D26A1",
- "arguments": []
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2"
+ },
+ {
+ "name": "customSubject",
+ "value": "Reminder to onboard {{userDisplayName}}!"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}. \n This is a reminder to onboard {{userDisplayName}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ``` ### Generate Temporary Access Pass and send via email to user's manager
-When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Pass (TAP), and have it sent to the new user's manager.
+When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Pass (TAP), and have it sent to the new user's manager. You're also able to customize the email that is sent to the user's manager.
> [!NOTE] > The user's employee hire date is used as the start time for the Temporary Access Pass. Please make sure that the TAP lifetime task setting and the [time portion of your user's hire date](how-to-lifecycle-workflow-sync-attributes.md#importance-of-time) are set appropriately so that the TAP is still valid when the user starts their first day. If the hire date at the time of workflow execution is already in the past, the current time is used as the start time.
With this task in the Azure portal, you're able to give the task a name and desc
- **Activation duration**- How long the passcode is active. - **One time use**- If the passcode can only be used once. :::image type="content" source="media/lifecycle-workflow-task/tap-task.png" alt-text="Screenshot of Workflows task: TAP task.":::
-
The Azure AD prerequisites to run the **Generate Temporary Access Pass and send via email to user's manager** task are:
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
"taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d", "arguments": [ {
+ "name": "cc",
+ "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2"
+ },
+ {
+ "name": "customSubject",
+ "value": "Your new employees Temporary Access Pass {{managerDisplayName}}"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}. \nThe temporary Access Pass {{temporaryAccessPass}} has been generated for {{userDisplayName}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ },
+ {
"name": "tapLifetimeMinutes", "value": "60"
- },
- {
+ },
+ {
"name": "tapIsUsableOnce", "value": "true"
- }
+ }
] }
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
### Add user to groups + Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). + You're able to customize the task name and description for this task. :::image type="content" source="media/lifecycle-workflow-task/add-group-task.png" alt-text="Screenshot of Workflows task: Add user to group task.":::
For Microsoft Graph the parameters for the **Disable user account** task are as
Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). + You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-group-task.png" alt-text="Screenshot of Workflows task: Remove user from select groups.":::
For Microsoft Graph the parameters for the **Remove user from selected groups**
Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and role-assignable groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). + You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-all-groups-task.png" alt-text="Screenshot of Workflows task: remove user from all groups.":::
For Microsoft Graph the parameters for the **Delete User** task are as follows:
```
-## Send email to manager before user last day
+## Send email to manager before user's last day
Allows an email to be sent to a user's manager before their last day. You're able to customize the task name and the description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/send-email-before-last-day.png" alt-text="Screenshot of Workflows task: send email before user last day task.":::
-The Azure AD prerequisite to run the **Send email before user last day** task are:
+The Azure AD prerequisites to run the **Send email before user's last day** task are:
- A populated manager attribute for the user. - A populated manager's mail attribute for the user.
-For Microsoft Graph the parameters for the **Send email before user last day** task are as follows:
+For Microsoft Graph the parameters for the **Send email before user's last day** task are as follows:
|Parameter |Definition | |||
For Microsoft Graph the parameters for the **Send email before user last day** t
"description": "Send offboarding email to userΓÇÖs manager before the last day of work", "isEnabled": true, "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
- "arguments": []
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2"
+ },
+ {
+ "name": "customSubject",
+ "value": "Reminder that {{userDisplayName}}'s last day is coming up."
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}. \nThis is a reminder that {{userDisplayName}}'s last date is coming up."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
-## Send email on users last day
+## Send email on user's last day
Allows an email to be sent to a user's manager on their last day. You're able to customize the task name and the description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/send-email-last-day.png" alt-text="Screenshot of Workflows task: task to send email last day.":::
For Microsoft Graph the parameters for the **Send email on user last day** task
"description": "Send offboarding email to userΓÇÖs manager on the last day of work", "isEnabled": true, "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
- "arguments": []
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}}'s last day"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}. \nThis is a reminder that {{userDisplayName}}'s last day is today, {{userEmployeeLeaveDateTime}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
-## Send offboarding email to users manager after their last day
+## Send email to user's manager after their last day
-Allows an email containing offboarding information to be sent to the user's manager after their last day. You're able to customize the task name and description for this task in the Azure portal.
+Allows an email containing off-boarding information to be sent to the user's manager after their last day. You're able to customize the task name and description for this task in the Azure portal.
-The Azure AD prerequisite to run the **Send offboarding email to users manager after their last day** task are:
+The Azure AD prerequisites to run the **Send email to user's manager after their last day** task are:
- A populated manager attribute for the user. - A populated manager's mail attribute for the user.
-For Microsoft Graph the parameters for the **Send offboarding email to users manager after their last day** task are as follows:
+For Microsoft Graph the parameters for the **Send email to user's manager after their last day** task are as follows:
|Parameter |Definition | ||| |category | leaver |
-|displayName | Send offboarding email to userΓÇÖs manager after the last day of work (Customizable by user) |
-|description | Remove user from all Teams (Customizable by user) |
+|displayName | Send email to users manager after their last day |
+|description | Send offboarding email to user's manager after the last day of work (Customizable by user) |
|taskDefinitionId | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | ```Example for usage within the workflow
For Microsoft Graph the parameters for the **Send offboarding email to users man
"description": "Send email after userΓÇÖs last day", "isEnabled": true, "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
- "arguments": []
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "b47471b9-af8f-4a5a-bfa2-b78e82398f6e, a7a23ce0-909b-40b9-82cf-95d31f0aaca2"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}} left on {{userEmployeeLeaveDateTime}}"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}. This is a reminder that {{userDisplayName}} left on{{UserEmployeeLeaveDateTime}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
active-directory Lifecycle Workflows Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-deployment.md
For Lifecycle Workflows, you'll likely include representatives from the followin
* Ensures that programmatic Lifecycle Workflows, via GRAPH or extensibility, are governed and reviewed. -- **Security Owner** ensures that the plan will meet the security requirements of your organization. This team:
+- **Security Owner** ensures that the plan meets the security requirements of your organization. This team:
- Ensure Lifecycle Workflows meet organizational security policies - **Compliance manager** ensures that the organization follows internal policy and complies with regulations. This team:
For Lifecycle Workflows, you'll likely include representatives from the followin
* Assesses processes and procedures for reviewing Lifecycle Workflows, which include documentation and record keeping for compliance. * Reviews results of past reviews for most critical resources. - **HR Representative** - Assists with attribute mapping and population in HR provisioning scenarios. This team:
- * Helps determine attributes that will be used to populate employeeHireDate and employeeLeaveDateTime.
+ * Helps determine attributes that are used to populate employeeHireDate and employeeLeaveDateTime.
* Ensures source attributes are populated and have values * Identifies and suggests alternate attributes that could be mapped to employeeHireDate and employeeLeaveDateTime
The following information is important information about your organization and t
|Item|Description|Documentation| |--|--|--| |Inbound Provisioning|You have a process to create user accounts for employees in Azure AD such as HR inbound, SuccessFactors, or MIM.<br><br> Alternatively you have a process to create user accounts in Active Directory and those accounts are provisioned to Azure AD.|[Workday to Active Directory](../saas-apps/workday-inbound-tutorial.md)<br><br>[Workday to Azure AD](../saas-apps/workday-inbound-tutorial.md)<br><br>[SuccessFactors to Active Directory](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)</br></br>[SuccessFactors to Azure AD](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)<br><br>[Azure AD Connect](../hybrid/whatis-azure-ad-connect-v2.md)<br><br>[Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md)|
-|Attribute synchronization|The accounts in Azure AD have the employeeHireDate and employeeLeaveDateTime attributes populated. The values may be populated when the accounts are created from an HR system or synchronized from AD using Azure AD Connect or cloud sync. You have additional attributes that will be used to determine the scope such as department, populated or the ability to populate, with data.|[How to synchronize attributes for Lifecycle Workflows](how-to-lifecycle-workflow-sync-attributes.md)
+|Attribute synchronization|The accounts in Azure AD have the employeeHireDate and employeeLeaveDateTime attributes populated. The values may be populated when the accounts are created from an HR system or synchronized from AD using Azure AD Connect or cloud sync. You have extra attributes that are used to determine the scope, such as department, that are populated with data or can be populated.|[How to synchronize attributes for Lifecycle Workflows](how-to-lifecycle-workflow-sync-attributes.md)
## Understanding parts of a workflow
The following table provides information that you need to be aware of as you cre
|--|--| |Workflows|50 workflow limit per tenant| |Number of custom tasks|limit of 25 per workflow|
-|Value range for offsetInDays|Between -60 and 60 days|
+|Value range for offsetInDays|Between -180 and 180 days|
|Workflow execution schedule|Default every 3 hours - can be set to run anywhere from 1 to 24 hours| |Custom task extensions|Limit of 100| |On-demand user limit|You can run an on-demand workflow against a maximum of 10 users|
The following table provides a quick checklist of steps you can use when designi
|Step|Description| |--|--| |[Determine your scenario](#determine-your-scenario)|Determine what scenario you're addressing with a workflow|
-|[Determine the execution conditions](#determine-the-execution-conditions)|Determine who and when the workflow will run|
+|[Determine the execution conditions](#determine-the-execution-conditions)|Determine who the workflow runs against and when it runs|
|[Review the tasks](#review-the-tasks)|Review and add additional tasks to the workflow| |[Create your workflow](#create-your-workflow)|Create your workflow after planning and design.| |[Plan a pilot](#plan-a-pilot)|Plan to pilot, run, and test your workflow.| ## Determine your scenario
-Before building a Lifecycle Workflow in the portal, you should determine which scenario or scenarios you wish to deploy. You can use the table below to see a current list of the available scenarios. These are based on the templates that are available in the portal and list the task associated with each one.
+Before building a Lifecycle Workflow in the portal, you should determine which scenario or scenarios you wish to deploy. You can use the following table to see a current list of the available scenarios. The scenarios are based on the templates that are available in the portal, and the table lists the tasks associated with each one.
-|Scenario|Pre-defined Tasks|
+|Scenario|Predefined Tasks|
|--|--|
-|Onboard pre-hire employee| Generate TAP and Send Email|
+|Onboard prehire employee| Generate TAP and Send Email|
|Onboard new hire employee|Enable User Account</br>Send Welcome Email</br>Add User To Groups| |Real-time employee termination|Remove user from all groups</br>Remove user from all Teams</br>Delete User Account| |Pre-Offboarding of an employee|Remove user from selected groups</br>Remove user from selected Teams|
For more information on the built-in templates, see [Lifecycle Workflow template
## Determine the execution conditions
-Now that you've determined your scenarios, you need to look at what users in your organization the scenarios will apply to.
+Now that you've determined your scenarios, you need to look at which users in your organization the scenarios apply to.
An Execution condition is the part of a workflow that defines the scope of **who** and the trigger of **when** a workflow will be performed.
-The [scope](understanding-lifecycle-workflows.md#configure-scope) determines who the workflow runs against. This is defined by a rule that will filter users based on a condition. For example, the rule, `"rule": "(department eq 'sales')"` will run the task only on users who are members of the sales department.
+The [scope](understanding-lifecycle-workflows.md#configure-scope) determines who the workflow runs against. The scope is defined by a rule that filters users based on a condition. For example, the rule `"rule": "(department eq 'sales')"` runs the task only on users who are members of the sales department.
-The [trigger](understanding-lifecycle-workflows.md#trigger-details) determines when the workflow will run. This can either be, on-demand, which is immediate, or time based. Most of the pre-defined templates in the portal are time based.
+The [trigger](understanding-lifecycle-workflows.md#trigger-details) determines when the workflow runs. The trigger can be either on-demand, which is immediate, or time based. Most of the predefined templates in the portal are time based.
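The following is a minimal sketch of how scope and trigger come together when creating a workflow through the Microsoft Graph beta endpoint. The `@odata.type` values, the two-day offset, and the sales-department rule are illustrative assumptions; the task definition ID and arguments are taken from the Generate Temporary Access Pass example shown earlier in this digest.

```http
POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows
Content-Type: application/json

{
    "category": "joiner",
    "displayName": "Prehire sales workflow (sample)",
    "description": "Generate a TAP and email it to the manager two days before the hire date",
    "isEnabled": true,
    "executionConditions": {
        "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
        "scope": {
            "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
            "rule": "(department eq 'sales')"
        },
        "trigger": {
            "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
            "timeBasedAttribute": "employeeHireDate",
            "offsetInDays": -2
        }
    },
    "tasks": [
        {
            "isEnabled": true,
            "category": "joiner",
            "displayName": "Generate TAP and Send Email",
            "description": "Generate Temporary Access Pass and send via email to user's manager",
            "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
            "arguments": [
                { "name": "tapLifetimeMinutes", "value": "60" },
                { "name": "tapIsUsableOnce", "value": "true" }
            ]
        }
    ]
}
```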
### Attribute information The scope of a workflow uses attributes under the rule section. You can add the following extra conditionals to further refine **who** the tasks are applied to.
The following is some important information regarding time zones that you should
For more information, see [How to synchronize attributes for Lifecycle Workflows](../governance/how-to-lifecycle-workflow-sync-attributes.md) ## Review the tasks
-Now that we've determined the scenario and the who and when, you should consider whether the pre-defined tasks are sufficient or are you going to need additional tasks. The table below has a list of the pre-defined tasks that are currently in the portal. Use this table to determine if you want to add more tasks.
+Now that we've determined the scenario and the who and when, you should consider whether the predefined tasks are sufficient or whether you need extra tasks. The following table has a list of the predefined tasks that are currently in the portal. Use this table to determine if you want to add more tasks.
|Task|Description|Relevant Scenarios| |--|--|--|
Now that we've determined the scenario and the who and when, you should consider
For more information on tasks, see [Lifecycle Workflow tasks](lifecycle-workflow-tasks.md). ### Group and team tasks
-If you're using a group or team task, the workflow will need you to specify the group or groups. In the screenshot below, you'll see the yellow triangle on the task indicating that it's missing information.
+If you're using a group or team task, the workflow requires you to specify the group or groups. In the following screenshot, you can see the yellow triangle on the task indicating that it's missing information.
[![Screenshot of onboard new hire.](media/lifecycle-workflows-deployment/group-1.png)](media/lifecycle-workflows-deployment/group-1.png#lightbox)
-By clicking on the task, you'll be presented with a navigation bar to add or remove groups. Select the "x groups selected" link to add groups.
+By clicking on the task, you are presented with a navigation bar to add or remove groups. Select the "x groups selected" link to add groups.
[![Screenshot of add groups.](media/lifecycle-workflows-deployment/group-2.png)](media/lifecycle-workflows-deployment/group-2.png#lightbox) ### Custom task extensions
-Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you'll be able to utilize the concept of custom task extensions to call-out to external systems as part of a Lifecycle Workflow.
+Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you can use custom task extensions to call out to external systems as part of a Lifecycle Workflow.
-When creating custom task extensions, the scenarios for how it will interact with Lifecycle Workflows can be one of three ways:
+A custom task extension can interact with Lifecycle Workflows in one of three ways:
- **Fire-and-forget scenario**- The Logic App is started, and the sequential task execution immediately continues with no response expected from the Logic App. - **Sequential task execution waiting for response from the Logic App** - The Logic app is started, and the sequential task execution waits on the response from the Logic App.
For more information, see [Best practices for a pilot.](../fundamentals/active-d
#### Test and run the workflow Once you've created a workflow, you should test it by running the workflow [on-demand](on-demand-workflow.md)
-Using the on-demand feature will allow you to test and evaluate whether the Lifecycle Workflow is working as intended.
+Using the on-demand feature allows you to test and evaluate whether the Lifecycle Workflow is working as intended.
Once you have completed testing, you can either rework the Lifecycle Workflow or get ready for a broader distribution.
You can also get more information from the audit logs. These logs can be access
|Stage|Description| | - | - |
-|Determine the scenario| A pre-hire workflow that sends email to new manager. |
-|Determine the execution conditions|The workflow will run on new employees in the sales department, two(2) days before the employeeHireDate.|
-|Review the tasks.|We'll use the pre-defined tasks in the workflow. No extra tasks will be added.|
-|Create the workflow in the portal|Use the pre-defined template for new hire in the portal.|
+|Determine the scenario| A prehire workflow that sends email to new manager. |
+|Determine the execution conditions|The workflow runs on new employees in the sales department, two (2) days before the employeeHireDate.|
+|Review the tasks.|We use the predefined tasks in the workflow. No extra tasks are added.|
+|Create the workflow in the portal|Use the predefined template for new hire in the portal.|
|Enable and test the workflow| Use the on-demand feature to test the workflow on one user.| |Review the test results|Review the test results and ensure the Lifecycle Workflow is working as intended.| |Roll out the workflow to a broader audience|Communicate with stakeholders, letting them know that it's going live and that HR will no longer need to send an email to the hiring manager.
active-directory Manage Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-lifecycle-workflows.md
- Title: Manage lifecycle with Lifecycle workflows
-description: Learn how to manage user lifecycles with Lifecycle Workflows
------- Previously updated : 01/24/2021-----
-# Manage user lifecycle with Lifecycle Workflows (preview)
-With Lifecycle Workflows, you can easily ensure that users have the appropriate entitlements no matter where they fall under the Joiner-Mover-Leaver(JML) scenario. Before a new hire's start date you can add them to a group. You can generate a temporary password that is sent to their manager to help speed up the onboarding process. You can enable a user account when they join on their hire date, and send a welcome email to them. When a user is moving to a different group you can remove them from that group, and add them to a new one. When a user leaves, you can also delete user accounts.
-
-## Prerequisites
--
-The following **Delegated permissions** and **Application permissions** are required for access to Lifecycle Workflows:
-
-> [!IMPORTANT]
-> The Microsoft Graph API permissions shown below are currently hidden from user interfaces such as Graph Explorer and Azure ADΓÇÖs API permissions UI for app registrations. In such cases you can fall back to Entitlement Managements permissions which also work for Lifecycle Workflows (ΓÇ£EntitlementManagement.Read.AllΓÇ¥ and ΓÇ£EntitlementManagement.ReadWrite.AllΓÇ¥). The Entitlement Management permissions will stop working with Lifecycle Workflows in future versions of the preview.
-
-|Column1 |Display String |Description |Admin Consent Required |
-|||||
-|LifecycleManagement.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
-|LifecycleManagement.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read and delete all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
------
-## Language determination within email notifications
-
-When sending email notifications, Lifecycle Workflows can automatically set the language that is displayed. For language priority, Lifecycle Workflows follow the following hierarchy:
-- The user **preferredLanguage** property in the user object takes highest priority.-- The tenant **preferredLanguage** attribute takes next priority.
-If neither can be determined, Lifecycle Workflows will default the language to English.
-
-## Supported languages in Lifecycle Workflows
--
-|Culture Code |Language |
-|||
-|en-us | English (United States) |
-|ja-jp | Japanese (Japan) |
-|de-de | German (Germany) |
-|fr-fr | French (France) |
-|pt-br | Portuguese (Brazil) |
-|zh-cn | Chinese (Simplified, China) |
-|zh-tw | Chinese (Simplified, Taiwan) |
-|es-es | Spanish (Spain, International Sort) |
-|ko-kr | Korean (Korea) |
-|it-it | Italian (Italy) |
-|nl-nl | Dutch (Netherlands) |
-|ru-ru | Russian (Russia) |
-|cs-cz | Czech (Czech Republic) |
-|pl-pl | Polish (Poland) |
-|tr-tr | Turkish (Turkey) |
-|da-dk | Danish (Denmark) |
-|en-gb | English (United Kingdom) |
-|hu-hu | Hungarian (Hungary) |
-|nb-no | Norwegian Bokmål (Norway) |
-|pt-pt | Portuguese (Portugal) |
-|sv-se | Swedish (Sweden) |
-
-## Supported user and query parameters
-
-Lifecycle Workflows support a rich set of user properties that are available on the user profile in Azure AD. Lifecycle Workflows also support many of the advanced query capabilities available in Graph API. This allows you, for example, to filter on the user properties when managing user execution conditions and making API calls. For more information about currently supported user properties, and query parameters, see: [User properties](/graph/aad-advanced-queries?tabs=http#user-properties)
--
-## Limits and constraints
-
-|Item |Limit |
-|||
-|Custom Workflows | 50 |
-|Number of custom tasks | 25 per workflow |
-|Value range for offsetInDays | Between -60 and 60 days |
-|Default Workflow execution schedule | Every 3 hours |
--
-## Next steps
-- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)-- [Create Lifecycle workflows](create-lifecycle-workflow.md)
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
To edit the properties of a workflow using the Azure portal, you do the followin
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Type in **Identity Governance** on the search bar near the top of the page and select it.
1. On the left menu, select **Lifecycle workflows (Preview)**.
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
Tasks within workflows can be added, edited, reordered, and removed at will. To
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Type in **Identity Governance** on the search bar near the top of the page and select it.
1. In the left menu, select **Lifecycle workflows (Preview)**.
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
Use the following steps to run a workflow on-demand.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Azure Active Directory** and then select **Identity Governance**.
+1. Type in **Identity Governance** on the search bar near the top of the page and select it.
1. On the left menu, select **Lifecycle workflows (Preview)**.
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
Lifecycle Workflows can be used to trigger custom tasks via an extension to Azur
For more information about Lifecycle Workflows extensibility, see: [Workflow Extensibility](lifecycle-workflow-extensibility.md).
-## Create a custom task extension with a new Azure Logic App
+## Create a custom task extension using the Azure portal
-To use a custom task extension in your workflow, first a custom task extension must be created to be linked with an Azure Logic App. You're able to create a Logic App at the same time you're creating a custom task extension. To do this, you'll complete these steps:
+To use a custom task extension in your workflow, you must first create a custom task extension linked to an Azure Logic App. You can create a Logic App at the same time you're creating a custom task extension. To do this, complete these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
To use a custom task extension in your workflow, first a custom task extension m
1. In the left menu, select **Lifecycle Workflows (Preview)**.
-1. In the left menu, select **Workflows (Preview)**.
+1. On the Lifecycle workflows screen, select **Custom task extension**.
-1. On the workflows screen, select **Custom task extension**.
- :::image type="content" source="media/trigger-custom-task/custom-task-extension-select.png" alt-text="Screenshot of selecting a custom task extension from a workflow overview page.":::
1. On the custom task extensions page, select **Create custom task extension**. :::image type="content" source="media/trigger-custom-task/create-custom-task-extension.png" alt-text="Screenshot for creating a custom task extension selection."::: 1. On the basics page, enter a unique display name and description for the custom task extension, and select **Next**. :::image type="content" source="media/trigger-custom-task/custom-task-extension-basics.png" alt-text="Screenshot of the basics section for creating a custom task extension.":::
-1. On the **Task behavior** page, you specify how the custom task extension will behave after executing the Azure Logic App and select **Next**.
+1. On the **Task behavior** page, you specify how the custom task extension behaves after executing the Azure Logic App. If you choose **Launch and continue**, you can immediately select **Next: Details**.
:::image type="content" source="media/trigger-custom-task/custom-task-extension-behavior.png" alt-text="Screenshot for choose task behavior for custom task extension.":::
- > [!NOTE]
- > For more information about custom task extension behavior, see: [Lifecycle Workflow extensibility](lifecycle-workflow-extensibility.md)
+
+1. If you select **Launch and wait**, you're given an option for how long to wait for a response from the Logic App before the task is considered a failure, and also options to set **Response authorization**. After choosing these options, you can select **Next: Details**.
+ :::image type="content" source="media/trigger-custom-task/custom-task-extension-launch-wait.png" alt-text="Screenshot of launch and wait option for custom task extension." lightbox="media/trigger-custom-task/custom-task-extension-launch-wait.png":::
+ > [!NOTE]
+ > For more information about custom task extension behavior, see: [Lifecycle Workflow extensibility](lifecycle-workflow-extensibility.md)
1. On the **Logic App details** page, you select **Create new Logic App**, and specify the subscription and resource group where it will be located. You'll also give the new Azure Logic App a name. :::image type="content" source="media/trigger-custom-task/custom-task-extension-new-logic-app.png" alt-text="screen showing to create new logic app for custom task extension.":::
-1. If deployed successfully, you'll get confirmation on the **Logic App details** page immediately, and then you can select **Next**.
-
-1. On the **Review** page, you can review the details of the custom task extension and the Azure Logic App you've created. Select **Create** if the details match what you desire for the custom task extension.
-
-
-## Configure a custom task extension with an existing Azure Logic App
-
-You can also link a custom task extension to an existing Azure Logic App. To do this, you'd complete the following steps:
-
-> [!IMPORTANT]
-> A Logic App must be configured to be compatible with the custom task extension. For more information, see [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md)
-
-1. In the left menu, select **Lifecycle workflows (Preview)**.
-
-1. In the left menu, select **Workflows (Preview)**.
-
-1. On the workflows screen, select **custom task extension**.
+ > [!IMPORTANT]
+ > A Logic App must be configured to be compatible with the custom task extension. For more information, see [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md)
+1. If deployed successfully, you get confirmation on the **Logic App details** page immediately, and then you can select **Next**.
-1. On the **Logic App details** page, you select **Choose an existing Logic App**, and specify the subscription and resource group where the Azure Logic App is located and select **Next**.
- :::image type="content" source="media/trigger-custom-task/custom-task-extension-existing-logic-app.png" alt-text="Screenshot for selecting an existing logic app with custom task extension.":::
-1. You can Review information about the updated custom task extension and the existing Logic App linked to it. Select **Create** if the details match what you desire for the custom task extension.
+1. On the **Review** page, you can review the details of the custom task extension and the Azure Logic App you've created. Select **Create** if the details match what you desire for the custom task extension.
## Add your custom task extension to a workflow
To add a custom task extension to a workflow, do the following steps:
1. On the tasks screen, select **Add task**.
-1. In the **Select tasks** drop down, select **Run a Custom Task Extension**, and select **Add**.
+1. In the **Select tasks** side menu, select **Run a Custom Task Extension**, and select **Add**.
-1. On the custom task extension page, you can give the task a name and description. You can also choose from a list of configured custom task extensions to use.
+1. On the custom task extension page, you can give the task a name and description. You also choose from a list of configured custom task extensions to use (a sketch for listing these through the API follows these steps).
:::image type="content" source="media/trigger-custom-task/add-custom-task-extension.png" alt-text="Screenshot showing to add a custom task extension to workflow."::: 1. When finished, select **Save**.
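If you want to script this step, the configured custom task extensions can also be listed through Microsoft Graph. The following is a minimal sketch using `az rest` against the beta Lifecycle Workflows endpoint; the JMESPath projection and required permission are assumptions to verify against the Graph reference.

```azurecli-interactive
# Sketch: list the custom task extensions configured in the tenant (Graph beta, Lifecycle Workflows preview).
# Assumes the signed-in account has a Lifecycle Workflows read permission (for example, LifecycleWorkflows.Read.All).
az rest --method GET \
  --url "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/customTaskExtensions" \
  --query "value[].{id:id, displayName:displayName}" -o table
```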
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
You may learn more about running a workflow on-demand [here](on-demand-workflow.
## Prerequisites -- Azure AD Premium P2
+The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements).
-For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements)
## Before you begin
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
# Automate employee onboarding tasks before their first day of work with Azure portal (preview)
-This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the Azure portal.
+This tutorial provides a step-by-step guide on how to automate prehire tasks with Lifecycle workflows using the Azure portal.
-This pre-hire scenario will generate a temporary access pass for our new employee and send it via email to the user's new manager.
+This prehire scenario generates a temporary access pass for our new employee and sends it via email to the user's new manager.
:::image type="content" source="media/tutorial-lifecycle-workflows/arch-2.png" alt-text="Screenshot of the lifecycle workflow scenario." lightbox="media/tutorial-lifecycle-workflows/arch-2.png"::: ## Prerequisites -- Azure AD Premium P2-
-For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements)
+The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements).
## Before you begin Two accounts are required for this tutorial, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set: - employeeHireDate must be set to today-- department must be set to sales-- manager attribute must be set, and the manager account should have a mailbox to receive an email
+- Department must be set to sales
+- Manager attribute must be set, and the manager account should have a mailbox to receive an email
For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial.
Detailed breakdown of the relevant attributes:
|employeeHireDate|Used to trigger the workflow|Employee| |department|Used to provide the scope for the workflow|Employee|
-The pre-hire scenario can be broken down into the following:
+The prehire scenario can be broken down into the following:
- **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager - **Prerequisite:** Editing the attributes required for this scenario in the portal - **Prerequisite:** Edit the attributes for this scenario using Microsoft Graph Explorer
The pre-hire scenario can be broken down into the following:
- Verifying the workflow was successfully executed ## Create a workflow using pre-hire template
-Use the following steps to create a pre-hire workflow that will generate a TAP and send it via email to the user's manager using the Azure portal.
+Use the following steps to create a prehire workflow that will generate a TAP and send it via email to the user's manager using the Azure portal.
- 1. Sign in to Azure portal
+ 1. Sign in to the Azure portal.
2. On the right, select **Azure Active Directory**. 3. Select **Identity Governance**. 4. Select **Lifecycle workflows (Preview)**.
Use the following steps to create a pre-hire workflow that will generate a TAP a
6. From the templates, select **select** under **Onboard pre-hire employee**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
- 7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**.
+ 7. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard prehire employee screen, add the following settings and then select **Next: Configure Scope**.
:::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
+ 8. Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters).
:::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
- 9. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you are finished.
+ 9. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you're finished.
:::image type="content" source="media/tutorial-lifecycle-workflows/onboard-review-create.png" alt-text="Screenshot of reviewing an on-board workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-review-create.png"::: 10. On the review blade, verify the information is correct and select **Create**.
Use the following steps to create a pre-hire workflow that will generate a TAP a
## Run the workflow
-Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+Now that the workflow is created, it runs automatically every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
>[!NOTE] >Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
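Outside the portal, an on-demand run can also be triggered through Microsoft Graph. A minimal sketch, assuming placeholder IDs and the beta `activate` action on the workflow; verify the permission and payload against the Graph reference before relying on it.

```azurecli-interactive
# Sketch: run an enabled lifecycle workflow on demand for a single user (Graph beta).
# Replace <workflow-id> and <user-object-id> with real values from your tenant.
az rest --method POST \
  --url "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow-id>/activate" \
  --headers "Content-Type=application/json" \
  --body '{"subjects": [{"id": "<user-object-id>"}]}'
```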
To run a workflow on-demand, for users using the Azure portal, do the following
## Check tasks and workflow status
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user focused reports.
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user-focused reports.
1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history.png" alt-text="Screenshot of workflow History status." lightbox="media/tutorial-lifecycle-workflows/workflow-history.png":::
-1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown.
+1. Once the **Workflow history (Preview)** tab has been selected, you'll land on the workflow history page as shown.
:::image type="content" source="media/tutorial-lifecycle-workflows/user-summary.png" alt-text="Screenshot of workflow history overview" lightbox="media/tutorial-lifecycle-workflows/user-summary.png"::: 1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
After running your workflow on-demand and checking that everything is working fi
## Next steps - [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee onboarding tasks before their first day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-onboard-custom-workflow)
+- [Automate employee onboarding tasks before their first day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-onboard-custom-workflow)
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
-This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks:
+This post off-boarding scenario runs a scheduled workflow and accomplishes the following tasks:
1. Remove all licenses for user 2. Remove user from all Teams
This post off-boarding scenario will run a scheduled workflow and accomplish the
## Prerequisites -- Azure AD Premium P2-
-For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements)
+The Lifecycle Workflows preview requires Azure AD Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements).
## Before you begin
-As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
+As part of the prerequisites for completing this tutorial, you'll need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
The scheduled leaver scenario can be broken down into the following: - **Prerequisite:** Create a user account that represents an employee leaving your organization
The scheduled leaver scenario can be broken down into the following:
## Create a workflow using scheduled leaver template Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
- 1. Sign in to Azure portal
+ 1. Sign in to the Azure portal.
2. On the right, select **Azure Active Directory**. 3. Select **Identity Governance**. 4. Select **Lifecycle workflows (Preview)**.
Use the following steps to create a scheduled leaver workflow that will configur
6. From the templates, select **Select** under **Post-offboarding of an employee**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-leaver-template.png" alt-text="Screenshot of selecting a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-leaver-template.png":::
- 7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**.
+ 7. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**.
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
- 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see: [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters)
+ 8. Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Marketing department. On the configure scope screen, under **Rule**, add the following and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters).
:::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png":::
- 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished.
+ 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished.
:::image type="content" source="media/tutorial-lifecycle-workflows/review-leaver-tasks.png" alt-text="Screenshot of leaver workflow tasks." lightbox="media/tutorial-lifecycle-workflows/review-leaver-tasks.png"::: 10. On the review blade, verify the information is correct and select **Create**.
Use the following steps to create a scheduled leaver workflow that will configur
> Select **Create** with the **Enable schedule** box unchecked to run the workflow on-demand. You may enable this setting later after checking the tasks and workflow status. ## Run the workflow
-Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+Now that the workflow is created, it runs automatically every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
>[!NOTE] >Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
To run a workflow on-demand, for users using the Azure portal, do the following
## Check tasks and workflow status
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user focused reports.
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we'll look at the status using the user-focused reports.
1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png" alt-text="Screenshot of the workflow history summary." lightbox="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png":::
-1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown.
+1. Once the **Workflow history (Preview)** tab has been selected, you'll land on the workflow history page as shown.
:::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png" alt-text="Screenshot of the workflow history overview." lightbox="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png"::: 1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
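If you prefer to check status from a script rather than the portal, the run history is also exposed through Microsoft Graph. A minimal sketch, assuming the beta endpoints and placeholder IDs:

```azurecli-interactive
# Sketch: list recent runs for a workflow, then drill into the per-user processing results of one run (Graph beta).
az rest --method GET \
  --url "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow-id>/runs" \
  --query "value[].{id:id, status:processingStatus, started:startedDateTime}" -o table

az rest --method GET \
  --url "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow-id>/runs/<run-id>/userProcessingResults" -o jsonc
```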
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
For delegated scenarios, the admin needs one of the following [Azure AD roles](/
|Number of Workflows | 50 per tenant | |Number of Tasks | 25 per workflow | |Number of Custom Task Extensions | 100 per tenant |
-|offsetInDays range of triggerAndScopeBasedConditions executionConditions | 60 days |
+|offsetInDays range of triggerAndScopeBasedConditions executionConditions | 180 days |
|Workflow schedule interval in hours | 1-24 hours | |Number of users per on-demand selection | 10 | |durationBeforeTimeout range of custom task extensions | 5 minutes-3 hours | > [!NOTE]
-> If creating, or updating, a workflow via API the offsetInDays range will be between -60-60 days. The negative value will signal happening before the timeBasedAttribute, while the positive value will signal happening afterwards.
+> If creating, or updating, a workflow via API the offsetInDays range will be between -180-180 days. The negative value will signal happening before the timeBasedAttribute, while the positive value will signal happening afterwards.
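To see where `offsetInDays` sits in the API payload, here's a minimal read sketch against the Graph beta endpoint; when you create or update a workflow, the same `executionConditions.trigger` object accepts values from -180 to 180. The workflow ID is a placeholder.

```azurecli-interactive
# Sketch: inspect a workflow's trigger to see its timeBasedAttribute and offsetInDays (Graph beta).
az rest --method GET \
  --url "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow-id>" \
  --query "executionConditions.trigger.{attribute:timeBasedAttribute, offsetInDays:offsetInDays}"
```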
## Parts of a workflow
After selecting a template, on the basics screen:
## Trigger details
-The trigger of a workflow defines when a scheduled workflow will run for users in scope for the workflow. The trigger is a combination of a time-based attribute, and an offset value. For example, if the attribute is employeeHireDate and offsetInDays is -1, then the workflow should trigger one day before the employee hire date. The value can range between -60 and 60 days.
+The trigger of a workflow defines when a scheduled workflow will run for users in scope for the workflow. The trigger is a combination of a time-based attribute, and an offset value. For example, if the attribute is employeeHireDate and offsetInDays is -1, then the workflow should trigger one day before the employee hire date. The value can range between -180 and 180 days.
The time-based attribute can be either one of two values, which are automatically chosen based on the template you select during the creation of your workflow. The two values can be:
active-directory What Are Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md
You can use Lifecycle workflows to address any of the following conditions.
### How many licenses must you have?
-To utilize the Lifecycle Workflows feature, you must have at least one Azure AD Premium P2 license in your tenant. With one license, you're able to:
--- Create, manage, and delete workflows for any, or all, users in your tenant up to the total limit of 50 workflows.
+To preview the Lifecycle Workflows feature, you must have an Azure AD Premium P2 license in your tenant. During this preview, you're able to:
+
+- Create, manage, and delete workflows up to the total limit of 50 workflows.
- Trigger on-demand and scheduled workflow execution. - Manage and configure existing tasks to create workflows that are specific to your needs. - Create up to 100 custom task extensions to be used in your workflows.
+
## Next steps
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-provisioning.md
Previously updated : 08/01/2022 Last updated : 01/05/2023
For more information, see [What is HR driven provisioning?](../app-provisioning/
In Azure AD, the term **[app provisioning](../app-provisioning/user-provisioning.md)** refers to automatically creating copies of user identities in the applications that users need access to, for applications that have their own data store, distinct from Azure AD or Active Directory. In addition to creating user identities, app provisioning includes the maintenance and removal of user identities from those apps, as the user's status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md), as each of these applications has its own user repository distinct from Azure AD.
+Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-scim-provisioning) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) user store or a [SQL](https://learn.microsoft.com/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) database, Azure AD can support those as well.
+ For more information, see [What is app provisioning?](../app-provisioning/user-provisioning.md) ## Inter-directory provisioning
For more information, see [What is inter-directory provisioning?](../hybrid/what
- [What is identity lifecycle management?](what-is-identity-lifecycle-management.md) - [What is HR driven provisioning?](../app-provisioning/what-is-hr-driven-provisioning.md) - [What is app provisioning?](../app-provisioning/user-provisioning.md)-- [What is inter-directory provisioning?](../hybrid/what-is-inter-directory-provisioning.md)
+- [What is inter-directory provisioning?](../hybrid/what-is-inter-directory-provisioning.md)
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
na Previously updated : 01/12/2023 Last updated : 3/15/2023
In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privi
This article is for eligible members or owners who want to activate their group membership or ownership in PIM.
+>[!IMPORTANT]
+>When a group membership or ownership is activated, Azure AD PIM temporarily adds an active assignment. Azure AD PIM creates the active assignment (adds the user as a member or owner of the group) within seconds. When deactivation happens (manually or through activation time expiration), Azure AD PIM removes the user's group membership or ownership within seconds as well.
+>
+>An application may provide access to users based on their group membership. In some situations, application access may not immediately reflect that the user was added to or removed from the group. If the application previously cached that the user isn't a member of the group, the user may not get access when they try to access the application again. Similarly, if the application previously cached that the user is a member of the group, the user may still get access after the group membership is deactivated. The specific behavior depends on the application's architecture. For some applications, signing out and signing back in may help get access added or removed.
+ ## Activate a role When you need to take on a group membership or ownership, you can request activation by using the **My roles** navigation option in PIM.
You can view the status of your pending requests to activate. It is specifically
When you select **Cancel**, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
-## Troubleshoot
-
-### Permissions are not granted after activating a role
-
-When you activate a role in PIM, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
-
-1. Sign out of the Azure portal and then sign back in.
-1. In PIM, verify that you are listed as the member of the role.
- ## Next steps - [Approve activation requests for group members and owners (preview)](groups-approval-workflow.md)
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
documentationcenter: ''
editor: ''- Previously updated : 02/02/2022 Last updated : 3/15/2023 -+
If you have been made *eligible* for an administrative role, then you must *acti
This article is for administrators who need to activate their Azure AD role in Privileged Identity Management.
+>[!IMPORTANT]
+>When a role is activated, Azure AD PIM temporarily adds an active assignment for the role. Azure AD PIM creates the active assignment (assigns the user to the role) within seconds. When deactivation happens (manually or through activation time expiration), Azure AD PIM removes the active assignment within seconds as well.
+>
+>An application may provide access based on the role the user has. In some situations, application access may not immediately reflect that the role was assigned or removed. If the application previously cached that the user doesn't have a role, the user may not get access when they try to access the application again. Similarly, if the application previously cached that the user has a role, the user may still get access after the role is deactivated. The specific behavior depends on the application's architecture. For some applications, signing out and signing back in may help get access added or removed.
+ ## Activate a role When you need to assume an Azure AD role, you can request activation by opening **My roles** in Privileged Identity Management.
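Activation can also be requested programmatically. The following is a hedged sketch of a self-activation request through the Microsoft Graph role management API; the principal ID, role definition ID, start time, and eight-hour duration are placeholder assumptions, and the request is still subject to the role's activation policy (MFA, justification, approval).

```azurecli-interactive
# Sketch: self-activate an eligible Azure AD role assignment via PIM (Microsoft Graph v1.0).
# <principal-id> is your user object ID; <role-definition-id> is the directory role definition ID.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests" \
  --headers "Content-Type=application/json" \
  --body '{
    "action": "selfActivate",
    "principalId": "<principal-id>",
    "roleDefinitionId": "<role-definition-id>",
    "directoryScopeId": "/",
    "justification": "Activating for scheduled maintenance",
    "scheduleInfo": {
      "startDateTime": "2023-03-17T08:00:00Z",
      "expiration": { "type": "afterDuration", "duration": "PT8H" }
    }
  }'
```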
If you don't require activation of a role that requires approval, you can cancel
## Deactivate a role assignment
-When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
-
-## Troubleshoot portal delay
-
-### Permissions aren't granted after activating a role
-
-When you activate a role in Privileged Identity Management, the activation might not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may cause a delay before the change takes effect. If your activation is delayed, sign out of the portal you're trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. Also, you can't deactivate a role assignment within five minutes after activation.
## Next steps
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 3/1/2023 Last updated : 3/15/2023
This article is for members who need to activate their Azure resource role in Pr
>[!NOTE] >As of March 2023, you may now activate your assignments and view your access directly from blades outside of PIM in the Azure portal. Read more [here](pim-resource-roles-activate-your-roles.md#activate-with-azure-portal).
+>[!IMPORTANT]
+>When a role is activated, Azure AD PIM temporarily adds an active assignment for the role. Azure AD PIM creates the active assignment (assigns the user to the role) within seconds. When deactivation happens (manually or through activation time expiration), Azure AD PIM removes the active assignment within seconds as well.
+>
+>An application may provide access based on the role the user has. In some situations, application access may not immediately reflect that the role was assigned or removed. If the application previously cached that the user doesn't have a role, the user may not get access when they try to access the application again. Similarly, if the application previously cached that the user has a role, the user may still get access after the role is deactivated. The specific behavior depends on the application's architecture. For some applications, signing out and signing back in may help get access added or removed.
+ ## Activate a role When you need to take on an Azure resource role, you can request activation by using the **My roles** navigation option in Privileged Identity Management.
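For Azure resource roles, the equivalent self-activation request goes through the Azure Resource Manager PIM API rather than Microsoft Graph. A hedged sketch with `az rest`; the scope, IDs, API version, and duration are assumptions to confirm against the role assignment schedule request reference.

```azurecli-interactive
# Sketch: self-activate an eligible Azure resource role via PIM (ARM roleAssignmentScheduleRequests).
SCOPE="/subscriptions/<subscription-id>"   # or a resource group / resource scope
REQUEST_NAME=$(uuidgen)                    # each request needs a new GUID as its resource name
az rest --method PUT \
  --url "https://management.azure.com${SCOPE}/providers/Microsoft.Authorization/roleAssignmentScheduleRequests/${REQUEST_NAME}?api-version=2020-10-01" \
  --headers "Content-Type=application/json" \
  --body '{
    "properties": {
      "principalId": "<principal-object-id>",
      "roleDefinitionId": "<full-role-definition-resource-id>",
      "requestType": "SelfActivate",
      "justification": "Activating for scheduled maintenance",
      "scheduleInfo": {
        "startDateTime": "2023-03-17T08:00:00Z",
        "expiration": { "type": "AfterDuration", "duration": "PT8H" }
      }
    }
  }'
```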
If you do not require activation of a role that requires approval, you can cance
## Deactivate a role assignment
-When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. When you select **Deactivate**, there's a short time lag before the role is deactivated. Also, you can't deactivate a role assignment within five minutes after activation.
+When a role assignment is activated, you'll see a **Deactivate** option in the PIM portal for the role assignment. Also, you can't deactivate a role assignment within five minutes after activation.
## Activate with Azure portal
In Access control (IAM) for a resource, you can now select "View my access"
By integrating PIM capabilities into different Azure portal blades, this new feature allows you to gain temporary access to view or edit subscriptions and resources more easily.
-## Troubleshoot
-
-### Permissions are not granted after activating a role
-
-When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
-
-1. Sign out of the Azure portal and then sign back in.
-1. In Privileged Identity Management, verify that you are listed as the member of the role.
- ## Next steps - [Extend or renew Azure resource roles in Privileged Identity Management](pim-resource-roles-renew-extend.md)
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
Title: Vulnerability management for Azure Kubernetes Service
description: Learn how Microsoft manages security vulnerabilities for Azure Kubernetes Service (AKS) clusters. Previously updated : 03/02/2023 Last updated : 03/17/2023
Microsoft identifies and patches vulnerabilities and missing security updates fo
## AKS Container Images
-While the [Cloud Native Computing Foundation][cloud-native-computing-foundation] (CNCF) owns and maintains most of the code running in AKS, the Azure Container Upstream team takes responsibility for building the open-source packages that we deploy on AKS. With that responsibility, it includes having complete ownership of the build, scan, sign, validate, and hotfix process and control over the binaries in container images. By us having responsibility for building the open-source packages deployed on AKS, it enables us to both establish a software supply chain over the binary, and patch the software as needed.
+While the [Cloud Native Computing Foundation][cloud-native-computing-foundation] (CNCF) owns and maintains most of the code running in AKS, Microsoft takes responsibility for building the open-source packages that we deploy on AKS. That responsibility includes complete ownership of the build, scan, sign, validate, and hotfix process, and control over the binaries in container images. Because we build the open-source packages deployed on AKS, we can both establish a software supply chain over the binary and patch the software as needed.
-Microsoft has invested in engineers (the Azure Container Upstream team) and infrastructure in the broader Kubernetes ecosystem to help build the future of cloud-native compute in the wider CNCF community. A notable example of this is the donation of engineering time to help manage Kubernetes releases. This work not only ensures the quality of every Kubernetes release for the world, but also enables AKS quickly get new Kubernetes releases out into production for several years. In some cases, ahead of other cloud providers by several months. Microsoft collaborates with other industry partners in the Kubernetes security organization. For example, the Security Response Committee (SRC) receives, prioritizes, and patches embargoed security vulnerabilities before they're announced to the public. This commitment ensures Kubernetes is secure for everyone, and enables AKS to patch and respond to vulnerabilities faster to keep our customers safe. In addition to Kubernetes, Microsoft has signed up to receive pre-release notifications for software vulnerabilities for products such as Envoy, container runtimes, and many other open-source projects.
+Microsoft is active in the broader Kubernetes ecosystem to help build the future of cloud-native compute in the wider CNCF community. This work not only ensures the quality of every Kubernetes release for the world, but has also enabled AKS to quickly get new Kubernetes releases out into production for several years, in some cases months ahead of other cloud providers. Microsoft collaborates with other industry partners in the Kubernetes security organization. For example, the Security Response Committee (SRC) receives, prioritizes, and patches embargoed security vulnerabilities before they're announced to the public. This commitment ensures Kubernetes is secure for everyone, and enables AKS to patch and respond to vulnerabilities faster to keep our customers safe. In addition to Kubernetes, Microsoft has signed up to receive pre-release notifications for software vulnerabilities for products such as Envoy, container runtimes, and many other open-source projects.
Microsoft scans container images using static analysis to discover vulnerabilities and missing updates in Kubernetes and Microsoft-managed containers. If fixes are available, the scanner automatically begins the update and release process.
See the overview about [Upgrading Azure Kubernetes Service clusters and node poo
[mrc-create-report]: https://aka.ms/opensource/security/create-report [msrc-pgp-key-page]: https://aka.ms/opensource/security/pgpkey [microsoft-security-response-center]: https://aka.ms/opensource/security/msrc
-[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
+[azure-bounty-program-overview]: https://www.microsoft.com/msrc/bounty-microsoft-azure
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
The managed identity that will be assigned to the pod needs to be granted permis
To run the demo, the *IDENTITY_CLIENT_ID* managed identity must have Virtual Machine Contributor permissions in the resource group that contains the Virtual Machine Scale Set of your AKS cluster. ```azurecli-interactive
+# Obtain the name of the resource group containing the Virtual Machine Scale set of your AKS cluster, commonly called the node resource group
NODE_GROUP=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)+
+# Obtain the id of the node resource group
NODES_RESOURCE_ID=$(az group show -n $NODE_GROUP -o tsv --query "id")+
+# Create a role assignment granting your managed identity permissions on the node resource group
az role assignment create --role "Virtual Machine Contributor" --assignee "$IDENTITY_CLIENT_ID" --scope $NODES_RESOURCE_ID ```
api-management Api Management In Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-in-workspace.md
This article is an introduction to managing APIs, products, subscriptions, and o
* An API Management instance. If needed, ask an administrator to [create one](get-started-create-service-instance.md). * A workspace. If needed, ask an administrator of your API Management instance to [create one](how-to-create-workspace.md).
-* Permissions to collaborate in the workspace. If needed, ask a workspace owner to assign you appropriate [roles](api-management-role-based-access-control.md#built-in-workspace-roles) in the workspace.
+* Permissions to collaborate in the workspace. If needed, ask an administrator of your API Management instance to assign you appropriate [roles](api-management-role-based-access-control.md#built-in-workspace-roles) in the service and the workspace.
## Go to the workspace - portal
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
A workspace collaborator must be assigned both a workspace-scoped role and a ser
|Role |Scope |Description | ||||
-|API Management Workspace Owner | workspace | Can modify workspace details, manage members and their role assignments; has read and write access to all entities within the workspace. This role should be assigned on the workspace scope. |
|API Management Workspace Contributor | workspace | Can manage the workspace and view, but not modify its members. This role should be assigned on the workspace scope. | |API Management Workspace Reader | workspace | Has read-only access to entities in the workspace. This role should be assigned on the workspace scope. | |API Management Workspace API Developer | workspace | Has read access to entities in the workspace and read and write access to entities for editing APIs. This role should be assigned on the workspace scope. | |API Management Workspace API Product Manager | workspace | Has read access to entities in the workspace and read and write access to entities for publishing APIs. This role should be assigned on the workspace scope. |
-| API Management Workspace API Developer | service | Has read access to tags and products and write access to allow: <br/><br/> ▪️ Assigning APIs to products<br/> ▪️ Assigning tags to products and APIs<br/><br/> This role should be assigned on the service scope. |
+| API Management Service Workspace API Developer | service | Has read access to tags and products and write access to allow: <br/><br/> ▪️ Assigning APIs to products<br/> ▪️ Assigning tags to products and APIs<br/><br/> This role should be assigned on the service scope. |
| API Management Service Workspace API Product Manager | service | Has the same access as API Management Service Workspace API Developer as well as read access to users and write access to allow assigning users to groups. This role should be assigned on the service scope. |
api-management How To Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-create-workspace.md
The new workspace appears in the list on the **Workspaces** page. Select the wor
## Assign users to workspace - portal
-After creating a workspace, assign permissions to users to manage the workspace's resources. Each workspace user must be assigned a workspace-specific RBAC role at the service level and at the workspace level, or granted equivalent permissions using custom roles.
+After creating a workspace, assign permissions to users to manage the workspace's resources. Each workspace user must be assigned both a service-scoped workspace RBAC role and a workspace-scoped RBAC role, or granted equivalent permissions using custom roles.
-At minimum, assign an *owner* of the workspace. Optionally, assign permissions to other workspace collaborators.
+> [!NOTE]
+> For easier management, set up Azure AD groups to assign workspace permissions to multiple users.
+>
* For a list of built-in workspace roles, see [How to use role-based access control in API Management](api-management-role-based-access-control.md). * For steps to assign a role, see [Assign Azure roles using the portal](../role-based-access-control/role-assignments-portal.md?tabs=current).
-### Assign a service-level role
+### Assign a service-scoped role
1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance. 1. In the left menu, select **Access control (IAM)** > **+ Add**.
-1. Assign the owner the following role:
- * **API Management Service Workspace API Product Manager**
+1. Assign one of the following service-scoped roles to each member of the workspace:
-1. Assign one of the following roles to other members of the workspace:
- * **API Management Workspace API Developer**
+ * **API Management Service Workspace API Developer**
* **API Management Service Workspace API Product Manager**
-### Assign a workspace-level role
+### Assign a workspace-scoped role
1. In the menu for your API Management instance, select **Workspaces (preview)** > the name of the workspace that you created. 1. In the **Workspace** window, select **Access control (IAM)**> **+ Add**.
-1. Assign the owner the following role:
-
- * **API Management Workspace Owner**
-
-1. Optionally, assign one of the following workspace-level roles to other workspace members to manage workspace APIs and other resources. The owner of the workspace can also assign workspace-level roles.
+1. Assign one of the following workspace-scoped roles to the workspace members to manage workspace APIs and other resources (a CLI sketch of both assignments follows these steps).
* **API Management Workspace Reader** * **API Management Workspace Contributor**
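For scripted onboarding of workspace members, both role assignments can be made with the Azure CLI. A minimal sketch, assuming placeholder names; the workspace resource ID path reflects the preview and should be confirmed for your environment.

```azurecli-interactive
# Sketch: grant a workspace member the required service-scoped and workspace-scoped roles.
APIM_ID=$(az apim show --name <apim-name> --resource-group <resource-group> --query id -o tsv)

# Service-scoped role, assigned on the API Management instance
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "API Management Service Workspace API Developer" \
  --scope "$APIM_ID"

# Workspace-scoped role, assigned on the workspace resource
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "API Management Workspace API Developer" \
  --scope "$APIM_ID/workspaces/<workspace-name>"
```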
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
An organization that manages APIs using Azure API Management may have multiple d
The following is a sample workflow for creating and using a workspace.
-1. A central API platform team that manages the API Management instance creates a workspace and assigns its owners and workspace members.
+1. A central API platform team that manages the API Management instance creates a workspace and assigns permissions to workspace collaborators using RBAC roles - for example, permissions to create or read resources in the workspace.
1. A central API platform team uses DevOps tools to create a DevOps pipeline for APIs in that workspace.
-1. Workspace owners assign permissions to workspace members using RBAC roles - for example, permissions to create or read resources in the workspace.
- 1. Workspace members develop, publish, productize, and maintain APIs in the workspace. 1. The central API platform team manages the infrastructure of the service, such as network connectivity, monitoring, resiliency, and enforcement of all-APIs policies.
The following resources can be managed in the workspaces preview.
Azure RBAC is used to configure workspace collaborators' permissions to read and edit entities in the workspace. For a list of roles, see [How to use role-based access control in API Management](api-management-role-based-access-control.md).
-Workspace members must be assigned both a service-level role and a workspace-level role, or granted equivalent permissions using custom roles. The service-level role enables referencing service-level resources from workspace-level resources. For example, publish an API from a workspace with a service-level product, assign a service-level tag to an API, or organize a user into a workspace-level group to control API and product visibility.
+Workspace members must be assigned both a service-scoped role and a workspace-scoped role, or granted equivalent permissions using custom roles. The service-scoped role enables referencing service-level resources from workspace-level resources. For example, publish an API from a workspace with a service-level product, assign a service-level tag to an API, or organize a user into a workspace-level group to control API and product visibility.
+
+> [!NOTE]
+> For easier management, set up Azure AD groups to assign workspace permissions to multiple users.
+>
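Following that note, here's a hedged sketch of creating an Azure AD group and granting it a workspace-scoped role, so that workspace access is managed through group membership; all names, IDs, and the workspace scope path are placeholders.

```azurecli-interactive
# Sketch: manage workspace collaborators through an Azure AD group.
GROUP_ID=$(az ad group create \
  --display-name "APIM Workspace Collaborators" \
  --mail-nickname "apim-workspace-collaborators" \
  --query id -o tsv)

# Assign a workspace-scoped role to the group (repeat with the matching service-scoped role on the service).
az role assignment create \
  --assignee-object-id "$GROUP_ID" \
  --assignee-principal-type Group \
  --role "API Management Workspace API Developer" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/workspaces/<workspace-name>"

# Add collaborators to the group as needed
az ad group member add --group "$GROUP_ID" --member-id "<user-object-id>"
```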
## Workspaces and other API Management features
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 03/10/2023 Last updated : 03/16/2023
To demonstrate this scenario, you have an App Service Environment v2 with a sing
If you migrate this environment to App Service Environment v3, your monthly cost is:
-[1(I1v2) = **$281.78**](https://azure.com/e/4c247282128746898ef4cfe1ef0f1070)
+[1(I1v2) = **$281.78**](https://azure.com/e/c2cfb6f810374f31b563e2f8a2c877e7)
This change is a significant cost reduction, but you're over-provisioned since you have double the cores and RAM, which you may not need. This excess isn't an issue for this scenario since the new environment is cheaper. However, when you increase your I1 instances in a single App Service Environment, you see how migrating to App Service Environment v3 can increase your monthly cost.
For this scenario, your App Service Environment v2 has 14 I1 instances. Your mon
When you migrate this environment to App Service Environment v3, your monthly cost is:
-[14(I1v2) = **$3,944.92**](https://azure.com/e/750b78d9e34a43dc9c8c8c400d4628bf)
+[14(I1v2) = **$3,944.92**](https://azure.com/e/a7b6240644824273bebd358c5919ae4f)
Your App Service Environment v3 is now more expensive than your App Service Environment v2. As you add more I1 instances, and therefore need more I1v2 instances when you migrate, the difference in price becomes more significant. If this scenario is a requirement for your environment, you may need to plan for an increase in your monthly cost. The following graph visually depicts the point where App Service Environment v3 becomes more expensive than App Service Environment v2 for this specific scenario.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Azure App Service captures all messages output to the console to help you diagno
:::column-end::: :::row-end:::
-Learn more about logging in Python apps in the series on [setting up Azure Monitor for your Python application](/azure/azure-monitor/app/opencensus-python]).
+Learn more about logging in Python apps in the series on [setting up Azure Monitor for your Python application](/azure/azure-monitor/app/opencensus-python).
## 7. Clean up resources
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
Previously updated : 02/09/2023 Last updated : 03/17/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
recommendations: false
# Managed identities for Form Recognizer - [!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)] Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources: + * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials. * To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). * There's no added cost to use managed identities in Azure.
-> [!TIP]
-> Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens. Managed identities are a safer way to grant access to data without having credentials in your code.
-
+> [!IMPORTANT]
+>
+> * Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens.
+>
+> * Managed identities are a safer way to grant access to data without having credentials in your code.
## Private storage account access
- Private Azure storage account access and authentication are supported by [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account, protected by a Virtual Network (VNet) or firewall, Form Recognizer can't directly access your storage account data. However, once a managed identity is enabled, Form Recognizer can access your storage account using an assigned managed identity credential.
+ Private Azure storage account access and authentication support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account, protected by a Virtual Network (VNet) or firewall, Form Recognizer can't directly access your storage account data. However, once a managed identity is enabled, Form Recognizer can access your storage account using an assigned managed identity credential.
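As a hedged CLI sketch of that setup, you might enable a system-assigned managed identity on the Form Recognizer resource and grant it blob read access on the protected storage account; resource names are placeholders, and the identity command output shape is an assumption to verify.

```azurecli-interactive
# Sketch: enable a system-assigned managed identity on the Form Recognizer resource.
PRINCIPAL_ID=$(az cognitiveservices account identity assign \
  --name <form-recognizer-resource> --resource-group <resource-group> \
  --query principalId -o tsv)

# Grant the identity Storage Blob Data Reader on the protected storage account.
STORAGE_ID=$(az storage account show --name <storage-account> --resource-group <resource-group> --query id -o tsv)
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "$STORAGE_ID"
```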
> [!NOTE] >
You need to grant Form Recognizer access to your storage account before it can c
> > If you're unable to assign a role in the Azure portal because the Add > Add role assignment option is disabled or you get the permissions error, "you do not have permissions to add role assignment at this scope", check that you're currently signed in as a user with an assigned a role that has Microsoft.Authorization/roleAssignments/write permissions such as Owner or User Access Administrator at the Storage scope for the storage resource.
-1. Next, you're going to assign a **Storage Blob Data Reader** role to your Form Recognizer service resource. In the **Add role assignment** pop-up window complete the fields as follows and select **Save**:
+1. Next, you're going to assign a **Storage Blob Data Reader** role to your Form Recognizer service resource. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
| Field | Value| ||--|
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 03/02/2023 Last updated : 03/17/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
For more information, see [Introduction to Kubernetes compute target in AzureML]
## Flux (GitOps) -- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. Not currently supported for ARM 64.
+- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters.
[GitOps on AKS and Azure Arc-enabled Kubernetes](conceptual-gitops-flux2.md) uses [Flux v2](https://fluxcd.io/docs/), a popular open-source tool set, to help manage cluster configuration and application deployment. GitOps is enabled in the cluster as a `Microsoft.KubernetesConfiguration/extensions/microsoft.flux` cluster extension resource.
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
+### 1.7.0 (March 2023)
+
+Flux version: [Release v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)
+
+- source-controller: v0.34.0
+- kustomize-controller: v0.33.0
+- helm-controller: v0.29.0
+- notification-controller: v0.31.0
+- image-automation-controller: v0.29.0
+- image-reflector-controller: v0.24.0
+
+Changes made for this version:
+
+- Upgrades Flux to [v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)
+- Flux extension is now supported on ARM64-based clusters
+ ### 1.6.4 (February 2023) Changes made for this version:
Changes made for this version:
- Fixes bug where [deleting the extension may fail on AKS with Windows node pool](https://github.com/Azure/AKS/issues/3191) - Adds support for sasToken for Azure blob storage at account level as well as container level
-### 1.6.0 (September 2022)
-
-Flux version: [Release v0.33.0](https://github.com/fluxcd/flux2/releases/tag/v0.33.0)
--- source-controller: v0.28.0-- kustomize-controller: v0.27.1-- helm-controller: v0.23.1-- notification-controller: v0.25.2-- image-automation-controller: v0.24.2-- image-reflector-controller: v0.20.1-
-Changes made for this version:
--- Upgrades Flux to [v0.33.0](https://github.com/fluxcd/flux2/releases/tag/v0.33.0)-- Fixes Helm-related [security issue](https://github.com/fluxcd/flux2/security/advisories/GHSA-p2g7-xwvr-rrw3)- ## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes [Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. The Dapr extension eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
If a machine remains disconnected for 45 days, its status may change to **Expire
## Service limits
-Azure Arc-enabled servers has a limit for the number of instances that can be created in each resource group. It does not have any limits at the subscription or service level.
+There is no limit to how many Arc-enabled servers and VM extensions you can deploy in a resource group or subscription. The standard 800 resource limit per resource group applies to the Azure Arc Private Link Scope resource type.
To learn more about resource type limits, see the [Resource instance limit](../../azure-resource-manager/management/resources-without-resource-group-limit.md#microsofthybridcompute) article. ## Data residency
-Azure Arc-enabled servers doesn't store/process customer data outside the region the customer deploys the service instance in.
+Azure Arc-enabled servers stores customer data. By default, customer data stays within the region the customer deploys the service instance in. For regions with data residency requirements, customer data is always kept within the same region.
## Next steps
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Function apps are an essential part of solutions that can cause high volumes of
The generated telemetry can be consumed in real-time dashboards, alerting, detailed diagnostics, and so on. Depending on how the generated telemetry is going to be consumed, you'll need to define a strategy to reduce the volume of data generated. This strategy will allow you to properly monitor, operate, and diagnose your function apps in production. You can consider the following options:
-+ **Use sampling**: As mentioned [earlier](#configure-sampling), it will help to dramatically reduce the volume of telemetry events ingested while maintaining a statistically correct analysis. It could happen that even using sampling you still a get high volume of telemetry. Inspect the options that [adaptive sampling](../azure-monitor/app/sampling.md#configuring-adaptive-sampling-for-aspnet-applications) provides to you. For example, set the `maxTelemetryItemsPerSecond` to a value that balances the volume generated with your monitoring needs. Keep in mind that the telemetry sampling is applied per host executing your function app.
++ **Use sampling**: As mentioned [earlier](#configure-sampling), it will help to dramatically reduce the volume of telemetry events ingested while maintaining a statistically correct analysis. It could happen that even using sampling you still get a high volume of telemetry. Inspect the options that [adaptive sampling](../azure-monitor/app/sampling.md#configuring-adaptive-sampling-for-aspnet-applications) provides to you. For example, set the `maxTelemetryItemsPerSecond` to a value that balances the volume generated with your monitoring needs. Keep in mind that the telemetry sampling is applied per host executing your function app. + **Default log level**: Use `Warning` or `Error` as the default value for all telemetry categories. Now, you can decide which [categories](#configure-categories) you want to set at `Information` level so that you can monitor and diagnose your functions properly.
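To make these two levers concrete, here's a minimal *host.json* sketch that combines them; the specific values and the `Function.MyCriticalFunction` category name are illustrative placeholders, not recommendations from this article:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5,
        "excludedTypes": "Request;Exception"
      }
    },
    "logLevel": {
      "default": "Warning",
      "Function.MyCriticalFunction": "Information"
    }
  }
}
```

Because `maxTelemetryItemsPerSecond` is applied per host instance, the total ingested volume still scales with the number of instances running your function app.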
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
A function can have zero or more input bindings that can pass data to a function
### Output bindings
-To write to an output binding, you must apply an output binding attribute to the function method, which defined how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `myqueue-output` by using an output binding:
+To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following code writes a string value to a message queue named `output-queue` by using an output binding:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" :::
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is designed to work with all Azure Functions programming langu
| Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]
-> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to have a more idiomatic and intuitive. To learn more, see Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators).
+> The new programming model for authoring Functions in Python (V2) is currently in preview. Compared to the current model, the new experience is designed to be more idiomatic and intuitive for Python programmers. To learn more, see Azure Functions Python [developer guide](../functions-reference-python.md?pivots=python-mode-decorators).
> > In the following code snippets, Python (PM2) denotes programming model V2, the new experience.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Specifies the repository or provider to use for key storage. Keys are always enc
|AzureWebJobsSecretStorageType|`blob`|Keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. Blob storage is the default behavior when `AzureWebJobsSecretStorageType` isn't set.<br/>To specify a different storage account, use the `AzureWebJobsSecretStorageSas` setting to indicate the SAS URL of a second storage account. | |AzureWebJobsSecretStorageType | `files` | Keys are persisted on the file system. This is the default behavior for Functions v1.x.| |AzureWebJobsSecretStorageType |`keyvault` | Keys are stored in a key vault instance set by `AzureWebJobsSecretStorageKeyVaultName`. |
-|Kubernetes Secrets | `kubernetes` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
+|AzureWebJobsSecretStorageType | `kubernetes` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
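As a hedged illustration of how one of these options is wired up, the following *local.settings.json* sketch selects the key vault repository; the vault name and the other values are assumptions for the example, not values from this article:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage-connection-string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobsSecretStorageType": "keyvault",
    "AzureWebJobsSecretStorageKeyVaultName": "<your-key-vault-name>"
  }
}
```

In Azure, the same `AzureWebJobsSecretStorageType` and `AzureWebJobsSecretStorageKeyVaultName` settings are configured as application settings on the function app rather than in *local.settings.json*.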
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
The following example shows how to use the `IAsyncCollector` interface to send a
[FunctionName("EH2EH")] public static async Task Run( [EventHubTrigger("source", Connection = "EventHubConnectionAppSetting")] EventData[] events,
- [EventHub("dest", Connection = "EventHubConnectionAppSetting")]IAsyncCollector<string> outputEvents,
+ [EventHub("dest", Connection = "EventHubConnectionAppSetting")]IAsyncCollector<EventData> outputEvents,
ILogger log) { foreach (EventData eventData in events) {
- // do some processing:
- var myProcessedEvent = DoSomething(eventData);
-
- // then send the message
- await outputEvents.AddAsync(JsonConvert.SerializeObject(myProcessedEvent));
+ // Do some processing:
+ string newEventBody = DoSomething(eventData);
+
+ // Queue the message to be sent in the background by adding it to the collector.
+ // If only the event is passed, an Event Hub partition will be assigned via
+ // round-robin for each batch.
+ await outputEvents.AddAsync(new EventData(newEventBody));
+
+ // If your scenario requires that certain events are grouped together in an
+ // Event Hub partition, you can specify a partition key. Events added with
+ // the same key will always be assigned to the same partition.
+ await outputEvents.AddAsync(new EventData(newEventBody), "sample-key");
} } ```
In-process C# class library functions support the following types:
This version of [EventData](/dotnet/api/azure.messaging.eventhubs.eventdata) drops support for the legacy `Body` type in favor of [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody).
-Send messages by using a method parameter such as `out string paramName`. To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`.
+Send messages by using a method parameter such as `out string paramName`. To write multiple messages, you can use `ICollector<EventData>` or `IAsyncCollector<EventData>` in place of `out string`. Partition keys may only be used with `IAsyncCollector<EventData>`.
# [Extension v3.x+](#tab/extensionv3/in-process)
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
The `clientRetryOptions` settings only apply to interactions with the Service Bu
|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that should be initiated per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting is used only when the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) is set to `false`. This setting only applies for functions that receive a single message at a time.| |**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting is used only when the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) is set to `true`. This setting only applies for functions that receive a single message at a time.| |**maxMessageBatchSize**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
-|**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
+|**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the session will be closed and the function will attempt to process another session.|
|**enableCrossEntityTransactions**|`false`|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.| # [Functions 2.x+](#tab/functionsv2)
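For orientation, the settings in the preceding table live under the `serviceBus` section of *host.json*. A minimal sketch follows, assuming the current extension layout; the values shown are examples only:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 8,
      "maxMessageBatchSize": 1000,
      "sessionIdleTimeout": "00:01:00",
      "enableCrossEntityTransactions": false
    }
  }
}
```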
azure-functions Functions Create First Java Gradle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-java-gradle.md
ms.devlang: java -+ Last updated 04/08/2020
azure-functions Functions Create First Quarkus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md
Title: Deploy serverless Java apps with Quarkus on Azure Functions
description: Learn how to develop, build, and deploy a serverless Java app by using Quarkus on Azure Functions. -+ Last updated 01/10/2023 ms.devlang: java
To learn more about Azure Functions and Quarkus, see the following articles and
* [Azure Functions Java developer guide](./functions-reference-java.md) * [Quickstart: Create a Java function in Azure using Visual Studio Code](./create-first-function-vs-code-java.md) * [Azure Functions documentation](./index.yml)
-* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud)
+* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud)
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
Title: Create an Azure function app with Java and Eclipse description: How-to guide to create and publish a simple HTTP triggered serverless app using Java and Eclipse to Azure Functions.-+ Last updated 07/01/2018 ms.devlang: java
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
Title: Create a Java function in Azure Functions using IntelliJ description: Learn how to use IntelliJ to create an HTTP-triggered Java function and then run it in a serverless environment in Azure. -+ Last updated 03/28/2022 ms.devlang: java
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
For a more detailed testing scenario using Visual Studio, see [Testing functions
When you publish from Visual Studio, it uses one of the two deployment methods: * [Web Deploy](functions-deployment-technologies.md#web-deploy-msdeploy): Packages and deploys Windows apps to any IIS server.
-* [Zip Deploy with run-From-package enabled](functions-deployment-technologies.md#zip-deploy): Recommended for Azure Functions deployments.
+* [Zip Deploy with Run-From-Package enabled](functions-deployment-technologies.md#zip-deploy): Recommended for Azure Functions deployments.
Use the following steps to publish your project to a function app in Azure.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Azure Functions supports the following Python versions:
\* Official Python distributions
-To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created, and it can't be changed.
+To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created, and it can't be changed for apps running in a Consumption plan.
The runtime uses the available Python version when you run it locally.
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| | | | | **[Consumption plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | **Windows:** 200<br/>**Linux:** 100<sup>1</sup> | | **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-100<sup>2</sup>|
-| **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-20|
+| **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-30|
| **[ASE][Dedicated plan]**<sup>3</sup> | Manual/autoscale |100 | | **[Kubernetes]** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster&nbsp;&nbsp;|
azure-functions Performance Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md
Whenever possible, refactor large functions into smaller function sets that work
## Make sure background tasks complete
-When your function starts any tasks, callbacks, threads, processes, or tasks, they must complete before your function code returns. Because Functions doesn't track these background threads, site shutdown can occur regardless of background thread status, which can cause unintended behavior in your functions.
+When your function starts any tasks, callbacks, threads, or processes, they must complete before your function code returns. Because Functions doesn't track these background threads, site shutdown can occur regardless of background thread status, which can cause unintended behavior in your functions.
For example, if a function starts a background task and returns a successful response before the task completes, the Functions runtime considers the execution as having completed successfully, regardless of the result of the background task. If this background task is performing essential work, it may be preempted by site shutdown, leaving that work in an unknown state.
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
[routeset]: /rest/api/maps/v20220901preview/routeset [wayfinding]: /rest/api/maps/v20220901preview/wayfinding [wayfinding service]: /rest/api/maps/v20220901preview/wayfinding
-[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/path
+[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/get-path
[style-picker-control]: choose-map-style.md#add-the-style-picker-control [style-how-to]: how-to-create-custom-styles.md [map-config-api]: /rest/api/maps/v20220901preview/map-configuration
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based resources:
> [!div class="checklist"] > - Support full integration between Application Insights and [Log Analytics](../logs/log-analytics-overview.md). > - Send Application Insights telemetry to a common [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
-> - Allow you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
+>
> - Enable common [Azure role-based access control](../../role-based-access-control/overview.md) across your resources. > - Eliminate the need for cross-app/workspace queries. > - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml). > - Don't require changing instrumentation keys after migration from a classic resource. + > [!IMPORTANT] > * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation. > * When you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](export-telemetry.md#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
Legacy table: traces
|message|string|Message|string| |operation_Id|string|OperationId|string| |operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
+|operation_ParentId|string|ParentId|string|
|operation_SyntheticSource|string|OperationSyntheticSource|string| |sdkVersion|string|SDKVersion|string| |session_Id|string|SessionId|string|
Legacy table: traces
* [Explore metrics](../essentials/metrics-charts.md) * [Write Log Analytics queries](../logs/log-query-overview.md)+
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Azure Monitor collects these types of data:
|Data Type |Description | ||| |Application|Data about the performance and functionality of your application code on any platform.|
-|Infrastructure|**- Container.** Data about containers, such as [Azure Kubernetes Service](https://learn.microsoft.com/azure/aks/intro-kubernetes), [Prometheus](./essentials/prometheus-metrics-overview.md), and about the applications running inside containers.<br>**- Operating system.** Data about the guest operating system on which your application is running.|
+|Infrastructure|**- Container.** Data about containers, such as [Azure Kubernetes Service](../aks/intro-kubernetes.md), [Prometheus](./essentials/prometheus-metrics-overview.md), and about the applications running inside containers.<br>**- Operating system.** Data about the guest operating system on which your application is running.|
|Azure Platform|**- Azure resource**. The operation of an Azure resource.<br>**- Azure subscription.** The operation and management of an Azure subscription, and data about the health and operation of Azure itself.<br>**- Azure tenant.** Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br>**- Azure resource changes.** Data about changes within your Azure resources and how to address and triage incidents and issues. | |Custom Sources|Use the Azure Monitor REST API to send customer metric or log data to Azure Monitor and incorporate monitoring of resources that donΓÇÖt expose monitoring data through other methods.|
The Azure portal contains built in tools that allow you to analyze monitoring da
|Tool |Description | ||| |[Metrics explorer](essentials/metrics-getting-started.md)|Use the Azure Monitor metrics explorer user interface in the Azure portal to investigate the health and utilization of your resources. Metrics explorer helps you plot charts, visually correlate trends, and investigate spikes and dips in metric values. Metrics explorer contains features for applying dimensions and filtering, and for customizing charts. These features help you analyze exactly the data you need in a visually intuitive way.|
-|[Log Analytics](logs/log-analytics-overview.md)|The Log Analytics user interface in the Azure portal helps you query the log data collected by Azure Monitor so that you can quickly retrieve, consolidate, and analyze collected data. After creating test queries, you can then directly analyze the data with Azure Monitor tools, or you can save the queries for use with visualizations or alert rules. Log Analytics workspaces are based on Azure Data Explorer, using a powerful analysis engine and the rich Kusto query language (KQL).Azure Monitor Logs uses a version of the Kusto Query Language suitable for simple log queries, and advanced functionality such as aggregations, joins, and smart analytics. You can [get started with KQL](logs/get-started-queries.md) quickly and easily.|
+|[Log Analytics](logs/log-analytics-overview.md)|The Log Analytics user interface in the Azure portal helps you query the log data collected by Azure Monitor so that you can quickly retrieve, consolidate, and analyze collected data. After creating test queries, you can then directly analyze the data with Azure Monitor tools, or you can save the queries for use with visualizations or alert rules. Log Analytics workspaces are based on Azure Data Explorer, using a powerful analysis engine and the rich Kusto query language (KQL). Azure Monitor Logs uses a version of the Kusto Query Language suitable for simple log queries, and advanced functionality such as aggregations, joins, and smart analytics. You can [get started with KQL](logs/get-started-queries.md) quickly and easily.|
|[Change Analysis](change/change-analysis.md)| The Change Analysis user interface in the Azure portal gives you insight into the cause of live site issues, outages, or component failures. Change Analysis uses the power of [Azure Resource Graph](../governance/resource-graph/overview.md) to detect various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that checks resource changes in the subscription and provides data for diagnostic tools to help users understand what changes might have caused issues.|
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Alerts|[Create a new alert rule](alerts/alerts-create-new-alert-rule.md)|Add inf
Alerts|[Manage your alert instances](alerts/alerts-manage-alert-instances.md)|Removed option for managing alert instances using the CLI.| Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|The continuous export deprecation notice has been added to this article for more visibility. It's recommended to migrate to workspace-based Application Insights resources as soon as possible to take advantage of new features.| Application-Insights|[Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)|Client-side JavaScript SDK extensions have been consolidated into two new articles called "Framework extensions" and "Feature Extensions". We've additionally created new stand-alone Upgrade and Troubleshooting articles.|
-Application-Insights|[Create an Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource)|Classic workspace documentation has been moved to the Legacy and Retired Features section of our table of contents and we've made both the feature retirement and upgrade path clearer. It's recommended to migrate to workspace-based Application Insights resources as soon as possible to take advantage of new features.|
Application-Insights|[Monitor Azure Functions with Azure Monitor Application Insights](app/monitor-functions.md)|We've overhauled our documentation on Azure Functions integration with Application Insights.| Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](app/opentelemetry-enable.md)|Java OpenTelemetry examples have been updated.| Application-Insights|[Application Monitoring for Azure App Service and Java](app/azure-web-apps-java.md)|We updated and separated out the instructions to manually deploy the latest Application Insights Java version.|
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
For businesses looking to migrate their applications and workloads to Azure, Azu
In addition to migration, Azure NetApp Files provides a platform for running specialized workloads in High-Performance Computing (HPC) like Analytics, Oil and Gas, and Electronic Design Automation (EDA). These specialized workloads require high-performance computing resources, and Azure NetApp Files' scalable and high-performance file storage solution provides the ideal platform for running these workloads in Azure. Azure NetApp Files also supports running Virtual Desktop Infrastructure (VDI) with Azure Virtual Desktop and Citrix, as well as Azure VMware Solution with guest OS mounts and datastores.
-Azure NetApp Files' integration with Azure native services like Azure Kubernetes Service, Azure Batch, and ML provides users with a seamless experience and enables them to leverage the full power of Azure's cloud-native services. This integration allows businesses to run their workloads in a scalable, secure, and highly performant environment, providing them with the confidence they need to run mission-critical workloads in the cloud.
+Azure NetApp Files' integration with Azure native services like Azure Kubernetes Service, Azure Batch, and Azure Machine Learning provides users with a seamless experience and enables them to leverage the full power of Azure's cloud-native services. This integration allows businesses to run their workloads in a scalable, secure, and highly performant environment, providing them with the confidence they need to run mission-critical workloads in the cloud.
The following diagram depicts the categorization of reference architectures, blueprints and solutions on this page as laid out in the above introduction:
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
You can enable preview features by adding:
```json { "experimentalFeaturesEnabled": {
- "userDefineTypes": true,
+ "userDefinedTypes": true,
"extensibility": true } }
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/decompile.md
This article describes how to decompile Azure Resource Manager templates (ARM te
> [!NOTE] > From Visual Studio Code, you can directly create resource declarations by importing from existing resources. For more information, see [Bicep commands](./visual-studio-code.md#bicep-commands). >
-> Visual Studio Code enables you to paste JSON as Bicep. It automatically runs the decompile command. For more information, see [Paste JSON as Bicep](./visual-studio-code.md#paste-as-bicep-preview).
+> Visual Studio Code enables you to paste JSON as Bicep. It automatically runs the decompile command. For more information, see [Paste JSON as Bicep](./visual-studio-code.md#paste-as-bicep).
Decompiling an ARM template helps you get started with Bicep development. If you have a library of ARM templates and want to use Bicep for future development, you can decompile them to Bicep. However, the Bicep file might need revisions to implement best practices for Bicep.
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
The convert phase consists of two steps, which you complete in sequence:
> [!NOTE] > You can import a resource by opening the Visual Studio Code command palette. Use <kbd>Ctrl+Shift+P</kbd> on Windows and Linux and <kbd>⌘+Shift+P</kbd> on macOS. >
-> Visual Studio Code enables you to paste JSON as Bicep. For more information, see [Paste JSON as Bicep](./visual-studio-code.md#paste-as-bicep-preview).
+> Visual Studio Code enables you to paste JSON as Bicep. For more information, see [Paste JSON as Bicep](./visual-studio-code.md#paste-as-bicep).
## Phase 2: Migrate
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
You can define allowed values for a parameter. You provide the allowed values in
param demoEnum string ```
+If you define allowed values for an array parameter, the actual value can be any subset of the allowed values.
+ ### Length constraints You can specify minimum and maximum lengths for string and array parameters. You can set one or both constraints. For strings, the length indicates the number of characters. For arrays, the length indicates the number of items in the array.
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
From Visual Studio Code, you can easily open the template reference for the reso
:::image type="content" source="./media/visual-studio-code/visual-studio-code-bicep-view-type-document.png" alt-text="Screenshot of Visual Studio Code Bicep view type document.":::
-## Paste as Bicep (Preview)
+## Paste as Bicep
You can paste a JSON snippet from an ARM template into a Bicep file. Visual Studio Code automatically decompiles the JSON to Bicep. This feature is only available with the Bicep extension version 0.14.0 or newer.
azure-resource-manager Microsoft Storage Storageaccountselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-storage-storageaccountselector.md
Title: StorageAccountSelector UI element description: Describes the Microsoft.Storage.StorageAccountSelector UI element for Azure portal. -- Previously updated : 06/28/2018 -+ Last updated : 03/17/2023 + # Microsoft.Storage.StorageAccountSelector UI element
-A control for selecting a new or existing storage account.
+A control that's used to select a new or existing storage account.
+
+Storage account names must be globally unique across Azure with a length of 3-24 characters, and include only lowercase letters or numbers.
## UI sample
-The control shows the default value.
+The `StorageAccountSelector` control shows the default name for a storage account. The default is set in your code.
-![Microsoft.Storage.StorageAccountSelector](./media/managed-application-elements/microsoft-storage-storageaccountselector.png)
-The control enables the user to create a new storage account or select an existing storage account.
+The `StorageAccountSelector` control allows you to create a new storage account or select an existing storage account.
-![Microsoft.Storage.StorageAccountSelector new](./media/managed-application-elements/microsoft-storage-storageaccountselector-new.png)
## Schema
The control enables the user to create a new storage account or select an existi
{ "name": "element1", "type": "Microsoft.Storage.StorageAccountSelector",
- "label": "Storage account",
+ "label": "Storage account selector",
"toolTip": "", "defaultValue": { "name": "storageaccount01",
The control enables the user to create a new storage account or select an existi
```json { "name": "storageaccount01",
- "resourceGroup": "rg01",
- "type": "Premium_LRS",
- "newOrExisting": "new"
+ "resourceGroup": "demoRG",
+ "type": "Standard_LRS",
+ "newOrExisting": "new",
+ "kind": "StorageV2"
} ``` ## Remarks -- If specified, `defaultValue.name` is automatically validated for uniqueness. If the storage account name isn't unique, the user must specify a different name or choose an existing storage account.-- The default value for `defaultValue.type` is **Premium_LRS**.
+- The `defaultValue.name` is required and the value is automatically validated for uniqueness. If the storage account name isn't unique, the user must specify a different name or choose an existing storage account.
+- The default value for `defaultValue.type` is **Premium_LRS**. You can set any storage account type as the default value. For example, _Standard_LRS_ or _Standard_GRS_.
- Any type not specified in `constraints.allowedTypes` is hidden, and any type not specified in `constraints.excludedTypes` is shown. `constraints.allowedTypes` and `constraints.excludedTypes` are both optional, but can't be used simultaneously.-- If `options.hideExisting` is **true**, the user can't choose an existing storage account. The default value is **false**.
+- If `options.hideExisting` is **true**, the user can't choose an existing storage account. The default value is **false**. The control only shows storage accounts as _existing_ if they are in the same resource group and region as the selections made on the **Basics** tab.
+- The `kind` property displays the storage account kind, whether a new storage account was created or an existing storage account was selected.
+
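+As a hypothetical illustration of the `constraints` remark above, the following element fragment limits the selectable types; the list of types is only an example:
+
+```json
+{
+  "name": "storageSelectorElement",
+  "type": "Microsoft.Storage.StorageAccountSelector",
+  "label": "Storage account name",
+  "toolTip": "",
+  "defaultValue": {
+    "name": "storageaccount01",
+    "type": "Standard_LRS"
+  },
+  "constraints": {
+    "allowedTypes": [
+      "Premium_LRS",
+      "Standard_LRS",
+      "Standard_GRS"
+    ]
+  },
+  "options": {
+    "hideExisting": false
+  },
+  "visible": true
+}
+```
+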
+## Example
+
+The default values for the storage account name and type are examples. You can set your own default values for your environment.
+
+In the `outputs` section, the `storageSelector` output includes all the values for a storage account. The `storageKind` and `storageName` are examples of how to output specific values.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
+ "handler": "Microsoft.Azure.CreateUIDef",
+ "version": "0.1.2-preview",
+ "parameters": {
+ "basics": [
+ {}
+ ],
+ "steps": [
+ {
+ "name": "StorageAccountSelector",
+ "label": "Storage account selector",
+ "elements": [
+ {
+ "name": "storageSelectorElement",
+ "type": "Microsoft.Storage.StorageAccountSelector",
+ "label": "Storage account name",
+ "toolTip": "",
+ "defaultValue": {
+ "name": "storageaccount01",
+ "type": "Premium_LRS"
+ },
+ "options": {
+ "hideExisting": false
+ },
+ "visible": true
+ }
+ ]
+ }
+ ],
+ "outputs": {
+ "location": "[location()]",
+ "storageSelector": "[steps('StorageAccountSelector').storageSelectorElement]",
+ "storageKind": "[steps('StorageAccountSelector').storageSelectorElement.kind]",
+ "storageName": "[steps('StorageAccountSelector').storageSelectorElement.name]"
+ }
+ }
+}
+```
+
+## Example output
+
+The output for a _new_ storage account.
+
+```json
+{
+ "location": {
+ "value": "westus3"
+ },
+ "storageSelector": {
+ "value": {
+ "name": "demostorageaccount01",
+ "resourceGroup": "demoRG",
+ "type": "Standard_GRS",
+ "newOrExisting": "new",
+ "kind": "StorageV2"
+ }
+ },
+ "storageKind": {
+ "value": "StorageV2"
+ },
+ "storageName": {
+ "value": "demostorageaccount01"
+ }
+}
+```
+
+The output for an _existing_ storage account.
+
+```json
+{
+ "location": {
+ "value": "westus3"
+ },
+ "storageSelector": {
+ "value": {
+ "name": "demostorage99",
+ "resourceGroup": "demoRG",
+ "type": "Standard_LRS",
+ "newOrExisting": "existing",
+ "kind": "StorageV2"
+ }
+ },
+ "storageKind": {
+ "value": "StorageV2"
+ },
+ "storageName": {
+ "value": "demostorage99"
+ }
+}
+```
## Next steps
-* For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
-* For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
+- For an introduction to creating UI definitions, go to [CreateUiDefinition.json for Azure managed application's create experience](create-uidefinition-overview.md).
+- For a description of common properties in UI elements, go to [CreateUiDefinition elements](create-uidefinition-elements.md).
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
+
+ Title: Azure VMware Solution known issues
+description: This article provides details about the known issues of Azure VMware Solution.
+++ Last updated : 3/17/2023++
+# Known issues: Azure VMware Solution
+
+This article describes the currently known issues with Azure VMware Solution.
+
+Refer to the table below to find details about resolution dates or possible workarounds. For more information about the different feature enhancements and bug fixes in Azure VMware Solution, see [What's New](azure-vmware-solution-platform-updates.md).
+
+|Issue | Date discovered | Workaround | Date resolved |
+| :- | : | :- | :- |
+| [VMSA-2021-0002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) |
+
+In this article, you learned about the currently known issues with the Azure VMware Solution. For more information about the Azure VMware Solution, see:
+
+>[!div class="nextstepaction"]
+>[About Azure VMware Solution](introduction.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 2/27/2023 Last updated : 3/16/2023 # What's new in Azure VMware Solution
Microsoft will regularly apply important updates to the Azure VMware Solution fo
## February 2023
-All new Azure VMware Solution private clouds are being deployed with NSX-T Data Center version 3.2.2. NSX-T Data Center versions in existing private clouds will be upgraded to NSX-T Data Center version 3.2.2 through April 2023.
+All new Azure VMware Solution private clouds are being deployed with VMware NSX-T Data Center version 3.2.2. NSX-T Data Center versions in existing private clouds will be upgraded to NSX-T Data Center version 3.2.2 through April 2023.
-VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html) like, Replicated Assisted vMotion (RAV), Mobility Optimized Networking (MON). HCX Enterprise is now automatically installed for all new HCX add-on requests, and existing HCX Advanced customers can upgrade to HCX Enterprise using the Azure portal. Learn more on how to [Install and activate VMware HCX in Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/install-vmware-hcx).
+VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. VMware HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html) like Replication Assisted vMotion (RAV) and Mobility Optimized Networking (MON). VMware HCX Enterprise is now automatically installed for all new VMware HCX add-on requests, and existing VMware HCX Advanced customers can upgrade to VMware HCX Enterprise using the Azure portal. Learn more on how to [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
**Log analytics - monitor Azure VMware Solution**
Azure VMware Solution will apply the [VMware ESXi 6.7, Patch Release ESXi670-202
Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied **through March 15, 2021**. >[!NOTE]
- >This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
+ >This is non-disruptive and should not impact the Azure VMware Solution service or workloads. During maintenance, various VMware vSphere alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses.
## Post update Once complete, newer versions of VMware solution components will appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
azure-vmware Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md
Navigate to your **Azure Key Vault** and provide access to the SDDC on Azure Key
> [!IMPORTANT] > If you want to select a specific key version instead of the automatically selected latest version, you'll need to specify the key URI with key version. This will affect the CMK key version life cycle.
+ > [!NOTE]
+ > The Azure Key Vault Managed HSM option is only supported with the Key URI option.
+ 1. Select **Save** to grant access to the resource. # [Azure CLI](#tab/azure-cli)
azure-vmware Migrate Sql Server Always On Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-cluster.md
+
+ Title: Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution
+description: Learn how to migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution.
++ Last updated : 3/17/2023++
+# Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution
+
+In this article, you learn how to migrate Microsoft SQL Server Always-On Cluster to Azure VMware Solution. For VMware HCX, you can follow the VMware vMotion migration procedure.
++
+## Prerequisites
+
+These are the prerequisites for migrating your Microsoft SQL Server instance to Azure VMware Solution.
+
+- Review and record the storage and network configuration of every node in the cluster.
+- Back up the full database.
+- Back up the virtual machine running the Microsoft SQL Server instance.
+- Remove the virtual machine from any VMware vSphere Distributed Resource Scheduler (DRS) groups and rules.
+- VMware HCX must be configured between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more information on how to configure HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
+- Ensure that all the network segments in use by the Microsoft SQL Server are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+
+VMware HCX over VPN is supported in Azure VMware Solution for workload migration. However, due to the size of database workloads, VMware HCX over VPN is not recommended for Microsoft SQL Server Always-On migrations for production workloads. ExpressRoute connectivity is recommended as more performant and reliable. For Microsoft SQL Server standalone instances and non-production workloads, VMware HCX over VPN may be suitable for migration, depending upon the size of the database.
+
+Microsoft SQL Server (2019 and 2022) was tested with Windows Server (2019 and 2022) Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+
+## Downtime considerations
+
+Predicting downtime during a migration depends upon the size of the database to be migrated and the speed of the private network connection to Azure cloud. Always-On migrations are intended to be executed with low database downtime. However, plan to conduct the migration during off-peak hours within a pre-approved change window.
+
+The table below indicates the estimated downtime for each Microsoft SQL Server topology.
+
+| **Scenario** | **Downtime expected** | **Notes** |
+|:|:--|:--|
+| **Standalone instance** | LOW | Migrate with VMware vMotion. The database is available during the migration, but committing any critical data during it isn't recommended. |
+| **Always-On Availability Group** | LOW | The primary replica will always be available during the migration of the first secondary replica, and the secondary replica will become the primary after the initial failover to Azure. |
+| **Failover Cluster Instance** | HIGH | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to the Azure cloud. |
+
+## Windows Server Failover Cluster quorum considerations
+
+Microsoft SQL Server Always-On Availability Groups rely on Windows Server Failover Cluster, which requires a quorum voting mechanism to maintain the coherence of the cluster.
+
+An odd number of voting elements is required, which is achieved by an odd number of nodes in the cluster or by using a witness. A witness can be configured in three different ways:
+
+- Disk witness
+- File share witness
+- Cloud witness
+
+If the cluster uses **Disk witness**, then the disk must be migrated with the rest of the cluster's shared storage using the procedure described in this document.
+
+If the cluster uses a **File share witness** running on-premises, then the type of witness for your migrated cluster depends upon the Azure VMware Solution scenario. There are several options to consider:
+
+- **Datacenter Extension**: Maintain the file share witness on-premises. Your workloads are distributed across your datacenter and Azure. Therefore the connectivity between your datacenter and Azure should always be available. In any case, take into consideration bandwidth constraints and plan accordingly.
+- **Datacenter Exit**: For this scenario, there are two options. In both options, you can maintain the file share witness on-premises during the migration in case you need to do rollback during the process.
+ - Deploy a new **File share witness** in your Azure VMware Solution private cloud.
+ - Deploy a **Cloud witness** running in Azure Blob Storage in the same region as the Azure VMware Solution private cloud.
+- **Disaster Recovery and Business Continuity**: For a disaster recovery scenario, the best and most reliable option is to create a **Cloud Witness** running in Azure Storage.
+- **Application Modernization**: For this use case, the best option is to deploy a **Cloud Witness**.
+
+For details about configuring and managing the quorum, see [Failover Clustering documentation](https://learn.microsoft.com/windows-server/failover-clustering/manage-cluster-quorum). For information about deploying a Cloud witness in Azure Blob Storage, see [Deploy a Cloud Witness for a Failover Cluster](https://learn.microsoft.com/windows-server/failover-clustering/deploy-cloud-witness).
+
+## Migrate Microsoft SQL Server Always-On cluster
+
+1. Access your Always-On cluster with SQL Server Management Studio using administration credentials.
+ 1. Select your primary replica and open **Availability Group** **Properties**.
++
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-1.png" alt-text="Diagram showing Always On Availability Group properties." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-1.png":::
+
+ 1. Change **Availability Mode** to **Asynchronous commit** only for the replica to be migrated.
+ 1. Change **Failover Mode** to **Manual** for every member of the availability group.
+1. Access the on-premises vCenter Server and proceed to HCX area.
+1. Under **Services** select **Migration** > **Migrate**.
+ 1. Select one virtual machine running the secondary replica of the database that is going to be migrated.
+ 1. Set the vSphere cluster in the remote private cloud to run the migrated SQL cluster as the **Compute Container**.
+ 1. Select the **vSAN Datastore** as remote storage.
+ 1. Select a folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
+ 1. Keep **Same format as source**.
+ 1. Select **vMotion** as **Migration profile**.
+ 1. In **Extended Options** select **Migrate Custom Attributes**.
+ 1. Verify that on-premises network segments have the correct remote stretched segment in Azure.
+ 1. Select **Validate** and ensure that all checks are completed with pass status. The most common error is related to the storage configuration. Verify again that no virtual SCSI controllers have the physical sharing setting.
+ 1. Click **Go** to start the migration.
+1. Once the migration has been completed, access the migrated replica and verify connectivity with the rest of the members in the availability group.
+1. In SQL Server Management Studio, open the **Availability Group Dashboard** and verify that the replica appears as **Online**.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-2.png" alt-text="Diagram showing Always On Availability Group Dashboard." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-2.png":::
+
+ 1. **Data Loss** status in the **Failover Readiness** column is expected since the replica has been out-of-sync with the primary during the migration.
+1. Edit the **Availability Group** **Properties** again and set **Availability Mode** back to **Synchronous commit**.
+ 1. The secondary replica starts to synchronize back all the changes made to the primary replica during the migration. Wait until it appears in Synchronized state.
+1. From the **Availability Group Dashboard** in SSMS, click **Start Failover Wizard**.
+1. Select the migrated replica and click **Next**.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-3.png" alt-text="Diagram showing new primary replica selection for always on." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-3.png":::
+
+1. Connect to the replica in the next screen with your DB admin credentials.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-4.png" alt-text="Diagram showing new primary replica admin credentials connection." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-4.png":::
+
+1. Review the changes and click **Finish** to start the failover operation.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-5.png" alt-text="Diagram showing Availability Group always on operation review." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-5.png":::
+
+
+1. Monitor the progress of the failover in the next screen, and click **Close** when the operation is finished.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-6.png" alt-text="Diagram showing that always on SQL server cluster successfully finished." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-6.png":::
++
+1. Refresh the **Object Explorer** view in SQL Server Management Studio (SSMS), and verify that the migrated instance is now the primary replica.
+1. Repeat steps 1 to 6 for the rest of the replicas of the availability group.
+
+ >[!Note]
+ > Migrate one replica at a time and verify that all changes are synchronized back to the replica after each migration. Do not migrate all the replicas at the same time using **HCX Bulk Migration**.
+1. After the migration of all the replicas is completed, access your Always-On availability group with **SQL Server Management Studio**.
+ 1. Open the Dashboard and verify there is no data loss in any of the replicas and that all are in a **Synchronized** state.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-7.png" alt-text="Diagram showing availability Group Dashboard with new primary replica and all migrated secondary replicas in synchronized state." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-7.png":::
+ 1. Edit the **Properties** of the availability group and set **Failover Mode** to **Automatic** in all replicas.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-always-on-8.png" alt-text="Diagram showing a setting for failover back to Automatic for all replicas." border="false" lightbox="media/sql-server-hybrid-benefit/sql-always-on-8.png":::
+
+## Next steps
+
+[Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+
+[Create a placement policy in Azure VMware Solution](create-placement-policy.md)
+
+[Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
+
+[Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/)
+
+[Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/)
+
+[Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
+
+[Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
+
+[Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
+
+[VMware KB 1002951 - Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
+
+[Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)
+
+[Architecting Microsoft SQL Server on VMware vSphere - Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)
+
+[Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
azure-vmware Migrate Sql Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md
+
+ Title: Migrate SQL Server failover cluster to Azure VMware Solution
+description: Learn how to migrate SQL Server failover cluster to Azure VMware Solution
++ Last updated : 3/17/2023+++
+# Migrate SQL Server failover cluster to Azure VMware Solution
+
+In this article, you learn how to migrate a Microsoft SQL Server failover cluster instance to Azure VMware Solution. Currently, the Azure VMware Solution service doesn't support VMware Hybrid Linked Mode to connect an on-premises vCenter Server with one running in Azure VMware Solution. Due to this constraint, this process requires the use of VMware HCX for the migration. For more information about configuring HCX, see [Install and activate VMware HCX in Azure VMware Solution](install-vmware-hcx.md).
+
+VMware HCX doesn't support migrating virtual machines that have SCSI controllers in physical sharing mode attached. However, you can overcome this limitation by performing the steps shown in this procedure and by using VMware HCX Cold Migration to move the different virtual machines that make up the cluster.
++
+> [!NOTE]
+> This procedure requires a full shutdown of the cluster. Since the Microsoft SQL Server service will be unavailable during the migration, plan accordingly for the downtime period.
+
+## Prerequisites
+
+- Review and record the storage and network configuration of every node in the cluster.
+- Review and record the WSFC configuration.
+- Back up the database(s) running in the cluster.
+- Back up the cluster virtual machines.
+- Remove all cluster node VMs from any Distributed Resource Scheduler (DRS) groups and rules they're part of.
+- VMware HCX must be configured between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more details about installing VMware HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
+- Ensure that all the network segments in use by the Microsoft SQL Server are extended into your Azure VMware Solution private cloud. To verify this step, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+
+VMware HCX over VPN is supported in Azure VMware Solution for workload migration. However, due to the size of database workloads, it isn't recommended for Microsoft SQL Server Failover Cluster Instance and Microsoft SQL Server Always-On migrations, especially for production workloads. ExpressRoute connectivity is recommended as more performant and reliable. For Microsoft SQL Server Standalone instances and non-production workloads, HCX over VPN can be suitable for migration, depending on the size of the database.
+
+Microsoft SQL Server 2019 and 2022 were tested with Windows Server 2019 and 2022 Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+
+## Downtime considerations
+
+Predicted downtime during a migration depends on the size of the database to be migrated and the speed of the private network connection to the Azure cloud. Migrating a SQL Server Always On failover cluster instance to Azure VMware Solution requires full downtime of the database and all cluster nodes, so plan for the migration to be executed during off-peak hours with an approved change window.
+
+The table below indicates the downtime for each Microsoft SQL Server topology.
+
+| **Scenario** | **Downtime expected** | **Notes** |
+|:--|:--|:--|
+| **Standalone instance** | LOW | Migration is done using vMotion. The DB is available during the migration, but it isn't recommended to commit any critical data during it. |
+| **Always-On Availability Group** | LOW | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Failover Cluster Instance** | HIGH | All nodes of the cluster will be shut down and migrated using VMware HCX Cold Migration. Downtime duration will depend upon database size and private network speed to Azure cloud. |
+
+## Windows Server Failover Cluster quorum considerations
+
+Windows Server Failover Cluster requires a quorum mechanism to maintain the cluster.
+
+Achieve an odd number of voting elements by using an odd number of nodes in the cluster or by adding a witness (see the sketch after the following list). Witnesses can be configured in three different forms:
+
+- Disk witness
+- File share witness
+- Cloud witness
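The following minimal Python sketch is illustrative only (it isn't part of any migration tooling) and encodes the voting rule behind this recommendation: each node contributes one vote, a witness adds one more, and the total should be odd.

```python
def quorum_votes(node_count: int, has_witness: bool) -> int:
    """Total voting elements: one vote per cluster node, plus one if a witness is configured."""
    return node_count + (1 if has_witness else 0)


def witness_recommended(node_count: int) -> bool:
    """A witness is recommended when the node count alone would give an even number of votes."""
    return node_count % 2 == 0


# Example: a two-node cluster has an even vote count, so adding a witness restores an odd total.
for nodes in (2, 3, 4):
    witness = witness_recommended(nodes)
    print(f"{nodes} nodes -> add witness: {witness}, total votes: {quorum_votes(nodes, witness)}")
```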
+
+If the cluster uses a **Disk witness**, then the disk must be migrated along with the cluster shared storage as part of the [Migrate failover cluster](#migrate-failover-cluster) procedure.
+
+If the cluster uses a **File share witness** running on-premises, then the type of witness for your migrated cluster depends on the Azure VMware Solution scenario:
+
+- **Datacenter Extension**: Maintain the file share witness on-premises. Your workloads are distributed across your datacenter and Azure VMware Solution, so connectivity between them should always be available. In any case, take bandwidth constraints into consideration and plan accordingly.
+- **Datacenter Exit**: For this scenario, there are two options. In both cases, you can maintain the file share witness on-premises during the migration in case you need to roll back.
+ - Deploy a new **File share witness** in your Azure VMware Solution private cloud.
+ - Deploy a **Cloud witness** running in Azure Blob Storage in the same region as the Azure VMware Solution private cloud.
+- **Disaster Recovery and Business Continuity**: For a disaster recovery scenario, the best and most reliable option is to create a **Cloud Witness** running in Azure Storage.
+- **Application Modernization**: For this use case, the best option is to deploy a **Cloud Witness**.
+
+For more information about quorum configuration and management, see the [Failover Clustering documentation](https://learn.microsoft.com/windows-server/failover-clustering/manage-cluster-quorum). For more information about deploying a Cloud Witness in Azure Blob Storage, see [Deploy a Cloud Witness for a Failover Cluster](https://learn.microsoft.com/windows-server/failover-clustering/deploy-cloud-witness).
+
+## Migrate failover cluster
+
+For illustration purposes, in this document we're using a two-node cluster with Windows Server 2019 Datacenter and SQL Server 2019 Enterprise. Windows Server 2022 and SQL Server 2022 are also supported with this procedure.
+
+1. From the vSphere Client, shut down the second node of the cluster.
+1. Access the first node of the cluster and open **Failover Cluster Manager**.
+ 1. Verify that the second node is in **Offline** state and that all clustered services and storage are under the control of the first node.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-1.png" alt-text="Diagram showing Windows Server Failover Cluster Manager cluster storage verification." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-1.png":::
+
+ 1. Shut down the cluster.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-2.png" alt-text="Diagram showing a shut down cluster using Windows Server Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-2.png":::
+
+ 1. Check that all cluster services are successfully stopped without errors.
+1. Shut down the first node of the cluster.
+1. From the **vSphere Client**, edit the settings of the second node of the cluster.
+ 1. Remove all shared disks from the virtual machine configuration.
+   1. Ensure that the **Delete files from datastore** checkbox isn't selected. Selecting it permanently deletes the disks from the datastore, and you'd need to recover the cluster from a previous backup.
+ 1. Set **SCSI Bus Sharing** from **Physical** to **None** in the virtual SCSI controllers used for the shared storage. Usually, these controllers are of VMware Paravirtual type.
+1. Edit the first node virtual machine settings. Set **SCSI Bus Sharing** from **Physical** to **None** in the SCSI controllers.
+
+1. From the **vSphere Client**, go to the HCX plugin area. Under **Services**, select **Migration** > **Migrate**.
+ 1. Select the second node virtual machine.
+ 1. Set the vSphere cluster in the remote private cloud that will run the migrated SQL cluster as the **Compute Container**.
+ 1. Select the **vSAN Datastore** as remote storage.
+   1. Select a folder if you want to place the virtual machines in a specific folder. This isn't mandatory, but it's recommended to separate the different workloads in your Azure VMware Solution private cloud.
+ 1. Keep **Same format as source**.
+ 1. Select **Cold migration** as **Migration profile**.
+   1. In **Extended Options**, select **Migrate Custom Attributes**.
+ 1. Verify that on-premises network segments have the correct remote stretched segment in Azure.
+   1. Select **Validate** and ensure that all checks complete with a pass status. The most common error here is related to the storage configuration. Verify again that there are no SCSI controllers with a physical sharing setting.
+   1. Select **Go** to start the migration.
+1. Repeat the same process for the first node.
+1. Access the **Azure VMware Solution vSphere Client**, edit the first node settings, and set **SCSI Bus Sharing** back to **Physical** on the SCSI controller(s) that manage the shared disks.
+
+1. Edit node 2 settings in **vSphere Client**.
+   1. Set **SCSI Bus Sharing** back to **Physical** in the SCSI controller managing the shared storage.
+ 1. Add the cluster shared disks to the node as additional storage. Assign them to the second SCSI controller.
+ 1. Ensure that all the storage configuration is the same as the one recorded before the migration.
+1. Power on the first node virtual machine.
+1. Access the first node VM with **VMware Remote Console**.
+ 1. Verify virtual machine network configuration and ensure it can reach on-premises and Azure resources.
+ 1. Open **Failover Cluster Manager** and verify cluster services.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-3.png" alt-text="Diagram showing a cluster summary in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-3.png":::
+
+1. Power on the second node virtual machine.
+1. Access the second node VM from the **VMware Remote Console**.
+ 1. Verify that Windows Server can reach the storage.
+   1. In **Failover Cluster Manager**, verify that the second node appears with **Online** status.
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-failover-4.png" alt-text="Diagram showing a cluster node status in Failover Cluster Manager." border="false" lightbox="media/sql-server-hybrid-benefit/sql-failover-4.png":::
+
+1. Using **SQL Server Management Studio**, connect to the SQL Server cluster resource network name. Check that the database is online and accessible.
+
+
+Finally, check the connectivity to SQL from other systems and applications in your infrastructure and verify that all applications using the database(s) can still access them.
+
+## Next steps
+
+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+- [Create a placement policy in Azure VMware Solution](create-placement-policy.md)
+- [Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
+- [Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver15)
+- [Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver16)
+- [Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
+- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
+- [VMware KB 1002951 - Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
+- [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)
+- [Architecting Microsoft SQL Server on VMware vSphere - Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)
+- [Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
azure-vmware Migrate Sql Server Standalone Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md
+
+ Title: Migrate Microsoft SQL Server Standalone to Azure VMware Solution
+description: Learn how to migrate Microsoft SQL Server Standalone to Azure VMware Solution.
++ Last updated : 3/17/2023+++
+# Migrate Microsoft SQL Server Standalone to Azure VMware Solution
+
+In this article, you learn how to migrate Microsoft SQL Server standalone to Azure VMware Solution.
+
+When migrating Microsoft SQL Server Standalone to Azure VMware Solution, VMware HCX offers two migration profiles that can be used:
+
+- HCX vMotion
+- HCX Cold Migration
+
+In both cases, consider the size and criticality of the database being migrated. For this how-to procedure, we have validated VMware HCX vMotion. VMware HCX Cold Migration is also valid, but it requires a longer downtime period.
++
+## Prerequisites
+
+- Review and record the storage and network configuration of every node in the cluster.
+- Back up the full database.
+- Back up the virtual machine running the Microsoft SQL Server instance.
+- Remove all cluster node VMs from any Distributed Resource Scheduler (DRS) groups and rules.
+
+- Configure VMware HCX between your on-premises datacenter and the Azure VMware Solution private cloud that runs the migrated workloads. For more information about configuring VMware HCX, see [Azure VMware Solution documentation](install-vmware-hcx.md).
+- Ensure that all the network segments in use by the Microsoft SQL Server are extended into your Azure VMware Solution private cloud. To verify this step in the procedure, see [Configure VMware HCX network extension](configure-hcx-network-extension.md).
+
+VMware HCX over VPN is supported in Azure VMware Solution for workload migration. However, due to the size of database workloads, VMware HCX over VPN isn't recommended for Microsoft SQL Server Failover Cluster Instance and Microsoft SQL Server Always-On migrations, especially for production workloads. ExpressRoute connectivity is recommended as more performant and reliable. For Microsoft SQL Server Standalone instances and non-production workloads, HCX over VPN can be suitable for migration, depending on the size of the database.
+
+Microsoft SQL Server (2019 and 2022) were tested with Windows Server (2019 and 2022) Data Center edition with the virtual machines deployed in the on-premises environment. Windows Server and SQL Server have been configured following best practices and recommendations from Microsoft and VMware. The on-premises source infrastructure was VMware vSphere 7.0 Update 3 and VMware vSAN running on Dell PowerEdge servers and Intel Optane P4800X SSD NVMe devices.
+
+## Downtime considerations
+
+Predicted downtime during a migration depends on the size of the database to be migrated and the speed of the private network connection to the Azure cloud. Migrating a SQL Server standalone instance doesn't require database downtime because it's done using the VMware HCX vMotion mechanism. We recommend performing the migration during off-peak hours within a pre-approved change window.
+
+This table indicates the estimated downtime for each Microsoft SQL Server topology.
+
+| **Scenario** | **Downtime expected** | **Notes** |
+|:--|:--|:--|
+| **Standalone instance** | LOW | Migration is done using VMware vMotion. The DB is available during the migration, but it isn't recommended to commit any critical data during it. |
+| **Always-On Availability Group** | LOW | The primary replica will always be available during the migration of the first secondary replica and the secondary replica will become the primary after the initial failover to Azure. |
+| **Failover Cluster Instance** | HIGH | All nodes of the cluster are shut down and migrated using VMware HCX Cold Migration. Downtime duration depends upon database size and private network speed to the Azure cloud. |
+
+## Migrate Microsoft SQL Server standalone
+
+1. Log into your on-premises **vCenter Server** and access the VMware HCX plugin.
+1. Under **Services** select **Migration** > **Migrate**.
+ a. Select the Microsoft SQL Server virtual machine.
+ a. Set the vSphere cluster in the remote private cloud of the migrated SQL cluster as the **Compute Container**.
+ a. Select the vSAN Datastore as remote storage.
+   a. Select a folder. This isn't mandatory, but we recommend separating the different workloads in your Azure VMware Solution private cloud.
+ a. Keep **Same format as source**.
+ a. Select **vMotion** as Migration profile.
+ a. In **Extended Options** select **Migrate Custom Attributes**.
+ a. Verify that on-premises network segments have the correct remote stretched segment in Azure VMware Solution.
+ a. Select **Validate** and ensure that all checks are completed with pass status.
+ a. Select **Go** to start the migration.
+1. After the migration has completed, access the virtual machine using VMware Remote Console in the vSphere Client.
+ a. Verify the network configuration and check connectivity both with on-premises and Azure VMware Solution resources.
+   a. Using SQL Server Management Studio, verify that you can access the database.
+
+ :::image type="content" source="media/sql-server-hybrid-benefit/sql-standalone-1.png" alt-text="Diagram showing a SQL Server Management Studio connection to the migrated database." border="false" lightbox="media/sql-server-hybrid-benefit/sql-standalone-1.png":::
+
+## Next steps
+
+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md).
+- [Create a placement policy in Azure VMware Solution](create-placement-policy.md)
+- [Windows Server Failover Clustering Documentation](https://learn.microsoft.com/windows-server/failover-clustering/failover-clustering-overview)
+- [Microsoft SQL Server 2019 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver15)
+- [Microsoft SQL Server 2022 Documentation](https://learn.microsoft.com/sql/sql-server/?view=sql-server-ver16)
+- [Windows Server Technical Documentation](https://learn.microsoft.com/windows-server/)
+- [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)
+- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf)
+- [VMware KB 1002951 - Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951)
+- [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf)
+- [Architecting Microsoft SQL Server on VMware vSphere - Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf)
+- [Setup for Windows Server Failover Cluster in VMware vSphere 7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-vcenter-server-703-setup-wsfc.pdf)
azure-vmware Sql Server Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/sql-server-hybrid-benefit.md
+
+ Title: Azure Hybrid Benefit for Windows Server, SQL Server, or Linux subscriptions
+description: Learn about Azure Hybrid Benefit for Windows Server, SQL Server, or Linux subscriptions.
++ Last updated : 3/16/2023+++
+# Azure Hybrid Benefit for Windows Server, SQL Server, or Linux subscriptions
+
+Azure Hybrid Benefit is a cost-saving offer from Microsoft that you can use to reduce costs while optimizing your hybrid environment by applying your existing Windows Server and SQL Server licenses or Linux subscriptions.
+
+- Save up to 85% over the standard pay-as-you-go rate by applying your Windows Server and SQL Server licenses with Azure Hybrid Benefit.
+- Use Azure Hybrid Benefit in Azure SQL platform as a service (PaaS) environment.
+- Apply the SQL Server one-to-four vCPU exchange: for every one core of SQL Server Enterprise Edition, you get four vCPUs of SQL Managed Instance or Azure SQL Database general purpose and Hyperscale tiers, or four vCPUs of SQL Server Standard edition on Azure VMs. A worked example follows this list.
+- Use existing SQL licensing to adopt Azure Arc-enabled SQL Managed Instance.
+- Help meet compliance requirements with unlimited virtualization on Azure Dedicated Host and the Azure VMware Solution.
+- Get 180 days of dual-use rights between on-premises and Azure.
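As a worked example of the one-to-four vCPU exchange described in the list above, the following Python sketch (illustrative only; the function name is hypothetical) computes how many Azure SQL vCPUs a given number of SQL Server Enterprise Edition cores covers.

```python
def covered_vcpus(enterprise_cores: int) -> int:
    """Each SQL Server Enterprise Edition core covers four vCPUs of SQL Managed Instance,
    Azure SQL Database (general purpose or Hyperscale), or SQL Server Standard edition on Azure VMs."""
    return enterprise_cores * 4


# Example: 16 Enterprise Edition cores cover 64 vCPUs.
print(covered_vcpus(16))  # 64
```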
+
+## Microsoft SQL Server
+
+Microsoft SQL Server is a core component of many business-critical applications currently running on VMware vSphere. It's one of the most widely used database platforms in the market, with customers running hundreds of SQL Server instances on-premises with VMware vSphere.
+
+Azure VMware Solution is an ideal solution for customers looking to migrate and modernize their vSphere-based applications to the cloud, including their Microsoft SQL databases.
+
+## Next steps
+
+Now that you've covered Azure Hybrid benefit, you may want to learn about:
+
+- [Migrate Microsoft SQL Server Standalone to Azure VMware Solution](migrate-sql-server-standalone-cluster.md)
+- [Migrate SQL Server failover cluster to Azure VMware Solution](migrate-sql-server-failover-cluster.md)
+- [Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution](migrate-sql-server-always-on-cluster.md)
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 03/16/2023 Last updated : 03/17/2023
Archive tier supports the following clients:
### Supported regions
-| Workloads | Preview | Generally available |
+| Workloads | Generally available |
| | | | | SQL Server in Azure Virtual Machines/ SAP HANA in Azure Virtual Machines | All regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West. | | Azure Virtual Machines | All regions, except West US 3, West India, Switzerland North, Switzerland West, Sweden Central, Sweden South, Australia Central, Australia Central 2, Brazil Southeast, Norway West, Germany Central, Germany North, Germany Northeast, South Africa North, South Africa West, UAE North. |
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action| |Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
-### My privatelink.azure.com cannot resolve to management.privatelinke.azure.com
+### My privatelink.azure.com cannot resolve to management.privatelink.azure.com
This may be because the Private DNS zone for privatelink.azure.com that's linked to the Bastion virtual network causes management.azure.com CNAMEs to resolve to management.privatelink.azure.com behind the scenes. Create a CNAME record in your privatelink.azure.com zone that points management.privatelink.azure.com to arm-frontdoor-prod.trafficmanager.net to enable successful DNS resolution.
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
The following faults are available for use today. Visit the [Fault Providers](./
} ```
-### Notes
+### Limitations
Known issues on Linux: 1. Stress effect may not be terminated correctly if AzureChaosAgent is unexpectedly killed. 2. Linux CPU fault is only tested on Ubuntu 16.04-LTS and Ubuntu 18.04-LTS.
Known issues on Linux:
} ```
+### Limitations
+Currently, the Windows agent doesn't reduce memory pressure when other applications increase their memory usage. If the overall memory usage exceeds 100%, the Windows agent may crash.
+ ## Virtual memory pressure | Property | Value |
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to
* Speech-to-text and text-to-speech container versions were updated in March 2023. * Speech SDK 1.26.0 was released in March 2023.
+* Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription.
* Custom Speech-to-Text container disconnected mode was released in January 2023. * Text-to-speech Batch synthesis API is available in public preview. * Speech-to-text REST API version 3.1 is generally available.
cognitive-services Speech Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-overview.md
Explore, try out, and view sample code for some of common use cases.
* [Call Center](https://aka.ms/speechstudio/callcenter): View a demonstration on how to use the Language and Speech services to analyze call center conversations. Transcribe calls in real-time or process a batch of calls, redact personally identifying information, and extract insights such as sentiment to help with your call center use case. For more information, see the [call center quickstart](call-center-quickstart.md).
+For a demonstration of these scenarios in Speech Studio, view this [introductory video](https://youtu.be/mILVerU6DAw).
+> [!VIDEO https://www.youtube.com/embed/mILVerU6DAw]
+ ## Speech Studio features In Speech Studio, the following Speech service features are available as project types:
cognitive-services Speech Synthesis Markup Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-structure.md
Here's a subset of the basic structure and syntax of an SSML document:
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="string"> <mstts:backgroundaudio src="string" volume="string" fadein="string" fadeout="string"/>
- <voice name="string">
+ <voice name="string" effect="string">
<audio src="string"/></audio> <bookmark mark="string"/> <break strength="string" time="string" />
Here's a subset of the basic structure and syntax of an SSML document:
</speak> ```
-Some examples of contents that are allowed in each element are described in the following list:
+Some examples of contents that are allowed in each element are described in the following list:
- `audio`: The body of the `audio` element can contain plain text or SSML markup that's spoken if the audio file is unavailable or unplayable. The `audio` element can also contain text and the following elements: `audio`, `break`, `p`, `s`, `phoneme`, `prosody`, `say-as`, and `sub`. - `bookmark`: This element can't contain text or any other elements. - `break`: This element can't contain text or any other elements.
cognitive-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-voice.md
At least one `voice` element must be specified within each SSML [speak](speech-s
You can include multiple `voice` elements in a single SSML document. Each `voice` element can specify a different voice. You can also use the same voice multiple times with different settings, such as when you [change the silence duration](speech-synthesis-markup-structure.md#add-silence) between sentences.
+
Usage of the `voice` element's attributes is described in the following table.

| Attribute | Description | Required or optional |
| - | - | - |
| `name` | The voice used for text-to-speech output. For a complete list of supported prebuilt voices, see [Language support](language-support.md?tabs=tts).| Required|
+| `effect` | The audio effect processor that's used to optimize the quality of the synthesized speech output for specific scenarios on devices. <br/><br/>For some scenarios in production environments, the auditory experience may be degraded due to the playback distortion on certain devices. For example, the synthesized speech from a car speaker may sound dull and muffled due to environmental factors such as speaker response, room reverberation, and background noise. The passenger might have to turn up the volume to hear more clearly. To avoid manual operations in such a scenario, the audio effect processor can make the sound clearer by compensating for the distortion of playback.<br/><br/>The following values are supported:<br/><ul><li>`eq_car` - Optimize the auditory experience when providing high-fidelity speech in cars, buses, and other enclosed automobiles.</li><li>`eq_telecomhp8k` - Optimize the auditory experience for narrowband speech in telecom or telephone scenarios. We recommend a sampling rate of 8 kHz. If the sample rate isn't 8 kHz, the auditory quality of the output speech won't be optimized.</li></ul><br/>If the value is missing or invalid, this attribute will be ignored and no effect will be applied.| Optional |
### Voice examples
This example uses a custom voice named "my-custom-voice".
</speak> ```
+#### Audio effect example
+
+You use the `effect` attribute to optimize the auditory experience for scenarios such as cars and telecommunications. The following SSML example uses the `effect` attribute with the `eq_car` configuration for car scenarios.
+
+```xml
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+ <voice name="en-US-JennyNeural" effect="eq_car">
+ This is the text that is spoken.
+ </voice>
+</speak>
+```
+ ## Speaking styles and roles By default, neural voices have a neutral speaking style. You can adjust the speaking style, style degree, and role at the sentence level.
cognitive-services Create Use Glossaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-glossaries.md
Previously updated : 03/14/2023 Last updated : 03/16/2023 # Use glossaries with Document Translation
A glossary is a list of terms with definitions that you create for the Document
1. **Specify your glossary in the translation request.** Include the **`glossary URL`**, **`format`**, and **`version`** in your **`POST`** request:
- :::code language="json" source="../../../../../cognitive-services-rest-samples/curl/Translator/translate-with-glossary.json" range="1-23" highlight="13-15":::
+ :::code language="json" source="../../../../../cognitive-services-rest-samples/curl/Translator/translate-with-glossary.json" range="1-23" highlight="13-14":::
+
+ > [!NOTE]
+ > The example used an enabled [**system-assigned managed identity**](create-use-managed-identities.md#enable-a-system-assigned-managed-identity) with a [**Storage Blob Data Contributor**](create-use-managed-identities.md#grant-access-to-your-storage-account) role assignment for authorization. For more information, *see* [**Managed identities for Document Translation**](./create-use-managed-identities.md).
### Case sensitivity
cognitive-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md
Previously updated : 02/09/2023 Last updated : 03/16/2023 # Managed identities for Document Translation -
-> [!IMPORTANT]
->
-> * Currently, Document Translation doesn't support managed identity in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
->
-> * Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricingΓÇöTranslator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
->
- Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
+ :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+ * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests. * To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md). * There's no added cost to use managed identities in Azure.
-> [!TIP]
+++
+> [!IMPORTANT]
> > * When using managed identities, don't include a SAS token URL with your HTTP requestsΓÇöyour requests will fail. >
+> * Currently, Document Translation doesn't support managed identity in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
+>
+> * Document Translation is **only** available in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. _See_ [Cognitive Services pricingΓÇöTranslator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
> * Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests. - ## Prerequisites To get started, you need:
The **Storage Blob Data Contributor** role gives Translator (represented by the
* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
-* If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
+* If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request.
* The translated documents appear in your target container.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
Previously updated : 02/28/2023 Last updated : 03/14/2023 zone_pivot_groups: usage-custom-language-features
cognitive-services Use Autotagging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-autotagging.md
Title: How to use autotagging in custom named entity recognition
+ Title: How to use autolabeling in custom named entity recognition
-description: Learn how to use autotagging in custom named entity recognition.
+description: Learn how to use autolabeling in custom named entity recognition.
Previously updated : 05/09/2022 Last updated : 03/16/2023
-# How to use auto-labeling
+# How to use autolabeling for Custom Named Entity Recognition
-[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires a lot of time and effort, you can use the auto-labeling feature to automatically label your entities. With auto-labeling, you can start labeling a few of your documents, train a model, then create an auto-labeling job to produce labeling entities on your behalf, automatically. This feature can save you the time and effort of manually labeling your entities.
+[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires both time and effort, you can use the autolabeling feature to automatically label your entities. You can start autolabeling jobs based on a model you've previously trained or using GPT models. With autolabeling based on a model you've previously trained, you can start labeling a few of your documents, train a model, then create an autolabeling job to produce entity labels for other documents based on that model. With autolabeling with GPT, you may immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your entities.
## Prerequisites
-Before you can use auto-labeling, you must have a [trained model](train-model.md).
+### [Autolabel based on a model you've trained](#tab/autolabel-model)
+Before you can use autolabeling based on a model you've trained, you need:
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* [Labeled data](tag-data.md)
+* A [successfully trained model](train-model.md)
-## Trigger an auto-labeling job
-When you trigger an auto-labeling job, there's a monthly limit of 5,000 text records per month, per resource. This means the same limit will apply on all projects within the same resource.
+### [Autolabel with GPT](#tab/autolabel-gpt)
+Before you can use autolabeling with GPT, you need:
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* Entity names that are meaningful. The GPT models label entities in your documents based on the name of the entity you've provided.
+* [Labeled data](tag-data.md) isn't required.
+* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md).
+++
+## Trigger an autolabeling job
+
+### [Autolabel based on a model you've trained](#tab/autolabel-model)
+
+When you trigger an autolabeling job based on a model you've trained, there's a limit of 5,000 text records per month, per resource. This means the same limit applies on all projects within the same resource.
> [!TIP]
> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8921 characters, the number of text records is:
>
> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records.
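As a quick illustration of that calculation, here's a minimal Python sketch (not part of the service) that computes the number of text records for a document:

```python
import math


def text_records(document: str) -> int:
    """Number of text records: the ceiling of (character count / 1,000)."""
    return math.ceil(len(document) / 1000)


# Example from the tip: a document with 8,921 characters counts as 9 text records.
print(text_records("x" * 8921))  # 9
```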
-1. From the left navigation menu, select **Data auto-labeling**.
-2. Select **Trigger Auto-label** to start an auto-labeling job
+1. From the left navigation menu, select **Data labeling**.
+2. Select the **Autolabel** button under the Activity pane to the right of the page.
:::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png":::
+
+3. Choose **Autolabel based on a model you've trained** and select **Next**.
-3. Choose a trained model. It's recommended to check the model performance before using it for auto-labeling.
-
- :::image type="content" source="../media/choose-model.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model.png":::
+ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
+
+4. Choose a trained model. It's recommended to check the model performance before using it for autolabeling.
+ :::image type="content" source="../media/choose-model-trained.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model-trained.png":::
-4. Choose the entities you want to be included in the auto-labeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities.
+5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities.
:::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
-5. Choose the documents you want to be automatically labeled. You'll see the number of text records of each document. When you select one or more documents, you should see the number of texts records selected. It's recommended to choose the unlabeled documents from the filter.
+6. Choose the documents you want to be automatically labeled. The number of text records of each document is displayed. When you select one or more documents, you should see the number of text records selected. It's recommended to choose the unlabeled documents from the filter.
> [!NOTE]
- > * If an entity was automatically labeled, but has a user defined label, only the user defined label will be used and be visible.
+ > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible.
> * You can view the documents by clicking on the document name. :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
-6. Select **Autolabel** to trigger the auto-labeling job.
-You should see the model used, number of documents included in the auto-labeling job, number of text records and entities to be automatically labeled. Auto-labeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
+7. Select **Autolabel** to trigger the autolabeling job.
+You should see the model used, number of documents included in the autolabeling job, number of text records and entities to be automatically labeled. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
+
+ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
+
+### [Autolabel with GPT](#tab/autolabel-gpt)
+
+When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged based on an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models.
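To get a rough sense of what an autolabeling job might consume before you start it, you can estimate token counts locally. The following Python sketch uses the open-source `tiktoken` tokenizer; the `cl100k_base` encoding is an assumption about the deployed model's tokenizer, and the count billed by the service is what actually applies.

```python
# pip install tiktoken
import tiktoken


def estimate_tokens(documents: list[str], encoding_name: str = "cl100k_base") -> int:
    """Rough local estimate of the total tokens across the documents selected for autolabeling."""
    encoding = tiktoken.get_encoding(encoding_name)
    return sum(len(encoding.encode(doc)) for doc in documents)


# Example with placeholder text; replace with the contents of the files you plan to autolabel.
docs = [
    "Contoso received an invoice dated March 2023 for 1,200 USD.",
    "Meeting notes from the Q2 planning review with Fabrikam.",
]
print(f"Estimated tokens: {estimate_tokens(docs)}")
```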
+
+1. From the left navigation menu, select **Data labeling**.
+2. Select the **Autolabel** button under the Activity pane to the right of the page.
+
+ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png":::
+
+4. Choose **Autolabel with GPT** and select **Next**.
+ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
+
+5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed.
+
+ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png":::
+
+6. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. Having descriptive names for labels, and including examples for each label is recommended to achieve good quality labeling with GPT.
+
+ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png":::
+
+7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter.
+
+ > [!NOTE]
+ > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible.
+ > * You can view the documents by clicking on the document name.
+
+ :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
+
+8. Select **Start job** to trigger the autolabeling job.
+You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
:::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: +++ ## Review the auto labeled documents
-When the auto-labeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
+When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
-Entities that have been automatically labeled will appear with a dotted line. These entities will have two selectors (a checkmark and an "X") that will let you accept or reject the automatic label.
+Entities that have been automatically labeled appear with a dotted line. These entities have two selectors (a checkmark and an "X") that allow you to accept or reject the automatic label.
-Once an entity is accepted, the dotted line will change to solid line, and this label will be included in any further model training and be a user defined label.
+Once an entity is accepted, the dotted line changes to a solid one, and the label is included in any further model training becoming a user defined label.
Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen.
After you accept or reject the labeled entities, select **Save labels** to apply
> [!NOTE]
> * We recommend validating automatically labeled entities before accepting them.
-> * All labels that were not accepted will be deleted when you train your model.
+> * All labels that were not accepted are deleted when you train your model.
## Next steps
cognitive-services Use Autotagging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/use-autotagging.md
+
+ Title: How to use autolabeling in custom text classification
+
+description: Learn how to use autolabeling in custom text classification.
+++++++ Last updated : 3/15/2023+++
+# How to use autolabeling for Custom Text Classification
+
+[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires a lot of time and effort, you can use the autolabeling feature to automatically label your documents with the classes you want to categorize them into. Currently, you can start autolabeling jobs with GPT models, which let you trigger an autolabeling job immediately without any prior model training. This feature can save you the time and effort of manually labeling your documents.
+
+## Prerequisites
+
+Before you can use autolabeling with GPT, you need:
+* A successfully [created project](create-project.md) with a configured Azure blob storage account.
+* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account.
+* Class names that are meaningful. The GPT models label documents based on the names of the classes you've provided.
+* [Labeled data](tag-data.md) isn't required.
+* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md).
+++
+## Trigger an autolabeling job
+
+When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged based on an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models.
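If you want a ballpark figure for that charge before starting the job, the following Python sketch multiplies an estimated token count by a per-1,000-token price. The price value is a placeholder rather than an actual Azure OpenAI rate; check the pricing page linked above for current rates.

```python
def estimate_autolabel_cost(estimated_tokens: int, price_per_1k_tokens: float) -> float:
    """Ballpark cost: estimated tokens for the selected documents times the per-1,000-token price."""
    return (estimated_tokens / 1000) * price_per_1k_tokens


# Example: 250,000 estimated tokens at a hypothetical 0.002 USD per 1,000 tokens.
print(f"Estimated cost: ${estimate_autolabel_cost(250_000, 0.002):.2f}")  # Estimated cost: $0.50
```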
+
+1. From the left navigation menu, select **Data labeling**.
+2. Select the **Autolabel** button under the Activity pane to the right of the page.
+
+ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png":::
+
+4. Choose **Autolabel with GPT** and select **Next**.
+
+ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png":::
+
+5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed.
+
+ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png":::
+
+6. Select the classes you want to be included in the autolabeling job. By default, all classes are selected. Having descriptive names for classes, and including examples for each class is recommended to achieve good quality labeling with GPT.
+
+ :::image type="content" source="../media/choose-classes.png" alt-text="A screenshot showing which labels to be included in autotag job." lightbox="../media/choose-classes.png":::
+
+7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter.
+
+ > [!NOTE]
+ > * If a document was automatically labeled, but this label was already user defined, only the user defined label is used.
+ > * You can view the documents by clicking on the document name.
+
+ :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png":::
+
+8. Select **Start job** to trigger the autolabeling job.
+You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included.
+
+ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png":::
++++
+## Review the auto labeled documents
+
+When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied.
++
+Documents that have been automatically classified have suggested labels in the activity pane highlighted in purple. Each suggested label has two selectors (a checkmark and a cancel icon) that allow you to accept or reject the automatic label.
+
+Once a label is accepted, the purple color changes to the default blue one, and the label is included in any further model training becoming a user defined label.
+
+After you accept or reject the labels for the autolabeled documents, select **Save labels** to apply the changes.
+
+> [!NOTE]
+> * We recommend validating automatically labeled documents before accepting them.
+> * All labels that were not accepted are deleted when you train your model.
++
+## Next steps
+
+* Learn more about [labeling your data](tag-data.md).
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Title: Subscription Eligibility and Number Capabilities in Azure Communication Services
+ Title: Country availability of telephone numbers and subscription eligibility
-description: Learn about Subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Communication Services.
-
+description: Learn about Country Availability, Subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Communication Services.
+ -+ Last updated 03/04/2022
-# Subscription eligibility and number capabilities
+# Country availability of telephone numbers and subscription eligibility
Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them.
Numbers can be purchased on eligible Azure subscriptions and in geographies wher
To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired on trial accounts or by Azure free credits.
-Additional details on eligible subscription types are as follows:
+More details on eligible subscription types are as follows:
| Number Type | Eligible Azure Agreement Type | | :- | :-- | | Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | | Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
-\* In some countries, number purchases are only allowed for own use. Reselling or suballcoating to another parties is not allowed. Due to this purchases for CSP and LSP customers is not allowed.
+\* In some countries, number purchases are only allowed for own use. Reselling or suballocating to other parties isn't allowed. Because of this, purchases for CSP and LSP customers aren't allowed.
-\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Please reach out to acstns@microsoft.com for assistance with your application.
+\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
-## Number capabilities
+## Number capabilities and availability
-The capabilities that are available to you depend on the country that you're operating within (your Azure billing address location), your use case, and the phone number type that you've selected. These capabilities vary by country due to regulatory requirements.
+The capabilities and numbers that are available to you depend on the country that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country due to regulatory requirements.
-The tables below summarize current availability:
+The following tables summarize current availability:
## Customers with US Azure billing addresses
The tables below summarize current availability:
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-\** Phone numbers in Italy can only be purchased for own use. Re-selling or sub-allocating to another party is not allowed.
+\** Phone numbers in Italy can only be purchased for own use. Reselling or suballocating to another party is not allowed.
## Customers with Sweden Azure billing addresses
The tables below summarize current availability:
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+## Customers with France Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| France | Local** | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Phone numbers in France can only be purchased for own use. Reselling or suballocating to another party is not allowed.
+
+## Customers with Spain Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Spain | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Spain | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Customers with Switzerland Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Switzerland | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Customers with Belgium Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Belgium | Toll-Free | - | - | Public Preview | Public Preview\* |
+| Belgium | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Customers with Luxembourg Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Luxembourg | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Customers with Austria Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Austria | Toll-Free** | - | - | Public Preview | Public Preview\* |
+| Austria | Local** | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Phone numbers in Austria can only be purchased for own use. Reselling or suballocating to another party is not allowed.
+
+## Customers with Portugal Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Portugal | Toll-Free** | - | - | Public Preview | Public Preview\* |
+| Portugal | Local** | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Phone numbers in Portugal can only be purchased for own use. Reselling or suballocating to another party is not allowed.
+
+## Customers with Slovakia Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Slovakia | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Customers with Norway Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Norway | Local** | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Phone numbers in Norway can only be purchased for own use. Reselling or suballocating to another party is not allowed.
++
+## Customers with Netherlands Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Netherlands | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Customers with Germany Azure billing addresses
+
+| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :-- | :- | :- | :- | : |
+| Germany | Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+ ## Next steps
-For additional information about Azure Communication Services' telephony options please see the following pages:
+For more information about Azure Communication Services' telephony options, see the following pages:
- [Learn more about Telephony](../telephony/telephony-concept.md) - Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+## France telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0160/min |USD 0.0100/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
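+
+As a rough illustration of how these rates combine (assuming the starting outbound rate applies; actual outbound rates depend on the destination), a single France geographic number that receives 1,000 minutes and makes 500 minutes of calls in a month would cost about USD 1.00 + (1,000 × 0.0100) + (500 × 0.0160) = USD 19.00, before any applicable taxes and fees.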
+
+## Spain telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 5.00/mo |
+|Toll-Free |USD 20.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0165/min |USD 0.0072/min |
+|Toll-free |Starting at USD 0.0165/min | USD 0.2200/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Switzerland telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0234/min |USD 0.0100/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Belgium telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 0.70/mo |
+|Toll-Free |USD 25.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.1300/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.1300/min |Starting at USD 0.0505/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Luxembourg telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 3.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.2300/min |USD 0.0100/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Austria telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 25.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.1550/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.1550/min |Starting at USD 0.0897/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Portugal telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+|Toll-Free |USD 18.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0130/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0130/min | USD 0.0601/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Slovakia telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0270/min |USD 0.0100/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Norway telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 5.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0200/min |USD 0.0300/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Netherlands telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 1.50/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.3500/min |USD 0.0100/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+
+## Germany telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Geographic |USD 0.80/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|--|--||
+|Geographic |Starting at USD 0.0150/min |USD 0.0100/min |
+
+\* For destination-specific pricing for making outbound calls, please refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+ *** Note: Pricing for all countries is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees.
communication-services Outbound Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/outbound-calling.md
+
+ Title: Outbound calling with Toll-Free numbers - Azure Communication Services
+description: Information about outbound calling limitations with Toll-Free numbers
+++++ Last updated : 03/10/2023+++++
+# Toll-Free telephone numbers and outbound calling
+Outbound calling capability with Toll-Free telephone numbers is available in many countries where Azure Communication Services is available. However, there can be some limitations when trying to place outbound calls with toll-free telephone numbers.
+
+**Why might outbound calls from Toll-Free numbers not work?**
+
+Microsoft provides Toll-Free telephone numbers that have outbound calling capabilities, but it's important to note that this feature is only provided on a "best-effort" basis. In some countries and regions, toll-free numbers are considered an "inbound only" service from a regulatory perspective. This means that, in some scenarios, the receiving carrier may not allow incoming calls from toll-free telephone numbers. Because Microsoft and our carrier partners don't have control over other carrier networks, we can't guarantee that outbound calls reach all possible destinations.
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
> [!NOTE] > Call Recording is not enabled for [Teams interoperability](../teams-interop.md).
-Call Recording enables you to record multiple calling scenarios available in Azure Communication Services by providing you with a set of APIs to start, stop, pause and resume recording. Whether it's a PSTN, WebRTC, or SIP call, these APIs can be accessed from server-side business logic or via events triggered by user actions.
+Call Recording enables you to record multiple calling scenarios available in Azure Communication Services by providing you with a set of APIs to start, stop, pause and resume recording. Whether it's a PSTN, WebRTC, or SIP call, these APIs can be accessed from your server-side business logic. Also, recordings can be triggered by a user action that tells the server application to start recording.
Depending on your business needs, you can use Call Recording for different Azure Communication Services calling implementations. For example, you can record 1:1 or 1:N scenarios for audio and video calls enabled by [Calling Client SDK](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
For example, you can record 1:1 or 1:N scenarios for audio and video calls enabl
![Diagram showing a call that's being recorded.](../media/call-recording-client.png) You can also use Call Recording to record complex PSTN or VoIP inbound and outbound calling workflows managed by [Call Automation](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/call-automation).
-Regardless of how you established the call, Call Recording allows you to produce mixed or unmixed media files that are stored for 48 hours on a built-in temporary storage. You can retrieve the files and take them to the long-term storage solution of your choice. Call Recording supports all Azure Communication Services data regions.
+Regardless of how you establish the call, Call Recording allows you to produce mixed or unmixed media files that are stored for 48 hours on a built-in temporary storage. You can retrieve the files and take them to the long-term storage solution of your choice. Call Recording supports all Azure Communication Services data regions.
![Diagram showing call recording architecture using calling client sdk.](../media/call-recording-with-call-automation.png) ## Call Recording that supports your business needs
-Call Recording supports multiple media outputs and content types to address your business needs and use cases. You might use mixed formats for scenarios such as keeping records, meeting notes, coaching and training, or even compliance and adherence. Or, you can use unmixed formats to address quality assurance use cases or even more complex scenarios like advanced analytics or AI-based (Artificial Intelligence) sophisticated post-call processes.
+Call Recording supports multiple media outputs and content types to address your business needs and use cases. You might use mixed formats for scenarios such as keeping records, meeting notes, coaching and training, or even compliance and adherence. Or, you can use the unmixed audio format to address quality assurance use cases or even more complex scenarios like advanced analytics or sophisticated AI-based (Artificial Intelligence) post-call processes.
### Video | Channel Type | Content Format | Resolution | Sampling Rate | Output | Description | | :-- | :- | :-- | :- | : | : |
-| mixed | mp4 | 1920x1080, 16 FPS (frames per second) | 16 kHz | single file, single channel | mixed audio+video of all participants in a default tile arrangement |
+| mixed | mp4 | 1920x1080, 16 FPS (frames per second) | 16 kHz | single file, single channel | mixed video in a default 3x3 (most active speakers) tile arrangement with display name support |
### Audio
A `recordingId` is returned when recording is started, which is then used for fo
## Event Grid notifications
-Call Recording use [Azure Event Grid](https://learn.microsoft.com/azure/event-grid/event-schema-communication-services) to provide you with notifications related to media and metadata.
+Call Recording uses [Azure Event Grid](https://learn.microsoft.com/azure/event-grid/event-schema-communication-services) to provide you with notifications related to media and metadata.
> [!NOTE] > Azure Communication Services provides short term media storage for recordings. **Recordings will be available to download for 48 hours.** After 48 hours, recordings will no longer be available.
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
## Regulatory and privacy concerns
-Many countries and states have laws and regulations that apply to call recording. PSTN, voice, and video calls, often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
+Many countries and states have laws and regulations that apply to call recording. PSTN, voice, and video calls often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. ## Known Issues
-It's possible that when a call is created using Call Automation, you won't get a value in the `serverCallId`. If that's the case, get the `serverCallId` from the `CallConnected` event method described in [Get serverCallId](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions?pivots=programming-language-csharp#configure-programcs-to-answer-the-call).
+It's possible that when a call is created using Call Automation, you don't get a value in the `serverCallId`. If that's the case, get the `serverCallId` from the `CallConnected` event method described in [Get serverCallId](https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions?pivots=programming-language-csharp#configure-programcs-to-answer-the-call).
## Next steps For more information, see the following articles:
communication-services Media Quality Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-quality-sdk.md
To help understand media quality in VoIP and Video calls using Azure Communicati
## Media quality statistics for ongoing call > [!NOTE]
-> This API is provided as a preview ('beta') for developers and may change based on feedback that we receive. Do not use this API in a production environment.
+> This API is provided as a Public Preview ('beta') for developers and may change based on feedback that we receive. Do not use this API in a production environment.
> [!IMPORTANT] > There is also an API breaking change on MediaStats in the SDK beginning since version 1.8.0-beta.1
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
Previously updated : 03/16/2023 Last updated : 03/17/2023 # Deploy Azure Communications Gateway
In this step, you'll create the Azure Communications Gateway resource.
:::image type="content" source="media/deploy/create.png" alt-text="Screenshot of the Azure portal. Shows the existing Azure Communications Gateway. A Create button allows you to create more Azure Communications Gateways.":::
-1. Use the information you collected in [Collect Azure Communications Gateway resource values](prepare-to-deploy.md#6-collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration section and then select **Next: Service Regions**.
+1. Use the information you collected in [Collect Azure Communications Gateway resource values](prepare-to-deploy.md#4-collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration section and then select **Next: Service Regions**.
:::image type="content" source="media/deploy/basics.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing the Basics section.":::
-1. Use the information you collected in [Collect Service Regions configuration values](prepare-to-deploy.md#7-collect-service-regions-configuration-values) to fill out the fields in the **Service Regions** section and then select **Next: Tags**.
+1. Use the information you collected in [Collect Service Regions configuration values](prepare-to-deploy.md#5-collect-service-regions-configuration-values) to fill out the fields in the **Service Regions** section and then select **Next: Tags**.
1. (Optional) Configure tags for your Azure Communications Gateway resource: enter a **Name** and **Value** for each tag you want to create. 1. Select **Review + create**.
Once your resource has been provisioned, a message appears saying **Your deploym
:::image type="content" source="media/deploy/go-to-resource-group.png" alt-text="Screenshot of the Create an Azure Communications Gateway portal, showing a completed deployment screen.":::
-## 3. Complete the JSON onboarding file
+## 3. Provide additional information to your onboarding team
> [!NOTE] >This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip this step if you have already onboarded to TPM or OC.
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
Previously updated : 01/10/2022 Last updated : 03/13/2023 # Prepare to deploy Azure Communications Gateway
We strongly recommend that you have a support plan that includes technical suppo
## 1. Add the Project Synergy application to your Azure tenancy > [!NOTE]
->This step is required to set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. Skip steps 1 and 2 if you have already onboarded to TPM or OC.
+>This step and the next step ([2. Assign an Admin user to the Project Synergy application](#2-assign-an-admin-user-to-the-project-synergy-application)) set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. If you've already gone through onboarding, go to [3. Create a network design](#3-create-a-network-design).
The Operator Connect and Teams Phone Mobile programs require your Azure Active Directory tenant to contain a Microsoft application called Project Synergy. Operator Connect and Teams Phone Mobile inherit permissions and identities from your Azure Active Directory tenant through the Project Synergy application. The Project Synergy application also allows configuration of Operator Connect or Teams Phone Mobile and assigning users and groups to specific roles.
To add the Project Synergy application:
1. Select **Properties**. 1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID. 1. Open PowerShell.
-1. (If you don't have the Azure Active Directory module installed), run the cmdlet:
+1. If you don't have the Azure Active Directory module installed, install it:
```azurepowershell
- Install-Module Azure AD
+ Install-Module AzureAD
``` 1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4. ```azurepowershell
To add the Project Synergy application:
The user who sets up Azure Communications Gateway needs to have the Admin user role in the Project Synergy application.
-1. In your Azure portal, navigate to **Enterprise applications** using the left-hand side menu. Alternatively, you can search for it in the search bar, it will appear under the **Services** subheading.
+1. In your Azure portal, navigate to **Enterprise applications** using the left-hand side menu. Alternatively, you can search for it in the search bar; it's under the **Services** subheading.
1. Set the **Application type** filter to **All applications** using the drop-down menu. 1. Select **Apply**. 1. Search for **Project Synergy** using the search bar. The application should appear.
The user who sets up Azure Communications Gateway needs to have the Admin user r
1. Select **Add user/group**. 1. Specify the user you want to use for setting up Azure Communications Gateway and give them the **Admin** role.
-## 3. Create an App registration to provide Azure Communications Gateway access to the Operator Connect API
-
-You must create an App registration to enable Azure Communications Gateway to function correctly. The App registration provides Azure Communications Gateway with access to the Operator Connect API on your behalf. The App registration **must** be created in **your** tenant.
-
-### 3.1 Create an App registration
-
-Use the following steps to create an App registration for Azure Communications Gateway:
-
-1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading.
-1. Select **New registration**.
-1. Enter an appropriate **Name**. For example: **Azure Communications Gateway service**.
-1. Don't change any settings (leaving everything as default). This means:
- - **Supported account types** should be set as **Accounts in this organizational directory only**.
- - Leave the **Redirect URI** and **Service Tree ID** empty.
-1. Select **Register**.
-
-### 3.2 Configure permissions
-
-For the App registration that you created in [3.1 Create an App registration](#31-create-an-app-registration):
-
-1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading.
-1. Select the App registration.
-1. Select **API permissions**.
-1. Select **Add a permission**.
-1. Select **APIs my organization uses**.
-1. Enter **Project Synergy** in the filter box.
-1. Select **Project Synergy**.
-1. Select/deselect checkboxes until only the required permissions are selected. The required permissions are:
- - Data.Write
- - Data.Read
- - NumberManagement.Read
- - TrunkManagement.Read
-1. Select **Add permissions**.
-1. Select **Grant admin consent** for ***\<YourTenantName\>***.
-1. Select **Yes** to confirm.
--
-### 3.3 Add the application ID to the Operator Connect Portal
-
-You must add the application ID to your Operator Connect environment. This step allows Azure Communications Gateway to use the Operator Connect API.
-
-1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading.
-1. Copy the **Application (client) ID** from the Overview page of your new App registration.
-1. Log into the [Operator Connect Number Management Portal](https://operatorconnect.microsoft.com/operator/configuration) and add a new **Application Id**, pasting in the value you copied.
-
-## 4. Create and store secrets
-
-You must create an Azure secret and allow the App registration to access this secret. This integration allows Azure Communications Gateway to access the Operator Connect API.
-
-This step guides you through creating a Key Vault to store a secret for the App registration, creating the secret and allowing the App registration to use the secret.
-
-### 4.1 Create a Key Vault
-
-The App registration you created in [3. Create an App registration to provide Azure Communications Gateway access to the Operator Connect API](#3-create-an-app-registration-to-provide-azure-communications-gateway-access-to-the-operator-connect-api) requires a dedicated Key Vault. The Key Vault is used to store the secret name and secret value (created in the next steps) for the App registration.
-
-1. Create a Key Vault. Follow the steps in [Create a Vault](../key-vault/general/quick-create-portal.md).
-1. Provide your onboarding team with the ResourceID and the Vault URI of your Key Vault.
-1. Your onboarding team will use the ResourceID to request a Private-Endpoint. That request triggers two approval requests to appear in the Key Vault.
-1. Approve these requests.
-
-### 4.2 Create a secret
-
-You must create a secret for the App registration while preparing to deploy Azure Communications Gateway and then regularly rotate this secret.
-
-We recommend you rotate your secrets at least every 70 days for security. For instructions on how to rotate secrets, see [Rotate your Azure Communications Gateway secrets](rotate-secrets.md)
-
-1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading.
-1. Select **Certificates & secrets**.
-1. Select **New client secret**.
-1. Enter a name for the secret (we suggest that the name should include the date at which the secret is being created).
-1. Copy or note down the value of the new secret (you won't be able to retrieve it later).
--
-### 4.3 Grant Admin Consent to Azure Communications Gateway
-
-To enable the Azure Communications Gateway service to access the Key Vault, you must grant Admin Consent to the App registration.
-
-1. Request the Admin Consent URL from your onboarding team.
-1. Follow the link. A pop-up window displays the **Application Name** of the Registered Application. Note down this name.
-
-### 4.4 Grant your application Key Vault Access
-
-This step must be performed on your tenant. It gives Azure Communications Gateway the ability to read the Operator Connect secrets from your tenant.
-
-1. Navigate to the Key Vault in the Azure portal. If you can't locate it, search for Key Vault in the search bar, select **Key vaults** from the results, and select your Key Vault.
-1. Select **Access Policies** on the left hand side menu.
-1. Select **Create**.
-1. Select **Get** from the secret permissions column.
-1. Select **Next**.
-1. Search for the Application Name of the Registered Application created by the Admin Consent process (which you noted down in the previous step), and select the name.
-1. Select **Next**.
-1. Select **Next** again to skip the **Application** tab.
-1. Select **Create**.
-
-## 5. Create a network design
+## 3. Create a network design
Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* that you've been issued. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md). To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md). :::image type="content" source="media/azure-communications-gateway-redundancy.png" alt-text="Network diagram of an Azure Communications Gateway that uses MAPS as its peering service between Azure and an operators network.":::
-## 6. Collect basic information for deploying an Azure Communications Gateway
+## 4. Collect basic information for deploying an Azure Communications Gateway
Collect all of the values in the following table for the Azure Communications Gateway resource.
To configure MAPS, follow the instructions in [Azure Internet peering for Commun
|The name of the Azure subscription to use to create an Azure Communications Gateway resource. You must use the same subscription for all resources in your Azure Communications Gateway deployment. |**Project details: Subscription**| |The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**| |The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|
- |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or co-located with the two regions for handling call traffic. |**Instance details: Region**
+ |The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region**
|The voice codecs to use between Azure Communications Gateway and your network. |**Instance details: Supported Codecs**| |The Unified Communications as a Service (UCaaS) platform(s) Azure Communications Gateway should support. These platforms are Teams Phone Mobile and Operator Connect Mobile. |**Instance details: Supported Voice Platforms**| |Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Services Routing Proxy (US only). |**Instance details: Emergency call handling**|
- |The scope at which Azure Communications Gateway's autogenerated domain name label is unique. Communications Gateway resources get assigned an autogenerated domain name label that depends on the name of the resource. You'll need to register the domain name later when you deploy Azure Communications Gateway. Selecting **Tenant** will give a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** will give a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** will give a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**|
+ |The scope at which Azure Communications Gateway's autogenerated domain name label is unique. Communications Gateway resources get assigned an autogenerated domain name label that depends on the name of the resource. You'll need to register the domain name later when you deploy Azure Communications Gateway. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**Instance details: Auto-generated Domain Name Scope**|
|The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Instance details: Teams Voicemail Pilot Number**| |A list of dial strings used for emergency calling.|**Instance details: Emergency Dial Strings**| |Whether an on-premises Mobile Control Point is in use.|**Instance details: Enable on-premises MCP functionality**| --
-## 7. Collect Service Regions configuration values
+## 5. Collect Service Regions configuration values
Collect all of the values in the following table for both service regions in which you want to deploy Azure Communications Gateway. |**Value**|**Field name(s) in Azure portal**| |||
- |The Azure regions that will handle call traffic. |**Service Region One/Two: Region**|
+ |The Azure regions to use for call traffic. |**Service Region One/Two: Region**|
|The IPv4 address used by Microsoft Teams to contact your network from this region. |**Service Region One/Two: Operator IP address**| |The set of IP addresses/ranges that are permitted as sources for signaling traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Signaling Source IP Addresses/CIDR Ranges**| |The set of IP addresses/ranges that are permitted as sources for media traffic from your network. Provide an IPv4 address range using CIDR notation (for example, 192.0.2.0/24) or an IPv4 address (for example, 192.0.2.0). You can also provide a comma-separated list of IPv4 addresses and/or address ranges.|**Service Region One/Two: Allowed Media Source IP Address/CIDR Ranges**|
-## 8. Collect Test Lines configuration values
+## 6. Collect Test Lines configuration values
Collect all of the values in the following table for all test lines you want to configure for Azure Communications Gateway. You must configure at least one test line.
Collect all of the values in the following table for all test lines you want to
|The phone number of the test line. |**Phone Number**| |Whether the test line is manual or automated: **Manual** test lines will be used by you and Microsoft staff to make test calls during integration testing. **Automated** test lines will be assigned to Microsoft Teams test suites for validation testing. |**Testing purpose**|
-## 9. Decide if you want tags
+## 7. Decide if you want tags
Resource naming and tagging is useful for resource management. It enables your organization to locate and keep track of resources associated with specific teams or workloads and also enables you to more accurately track the consumption of cloud resources by business area and team. If you believe tagging would be useful for your organization, design your naming and tagging conventions following the information in the [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/).
-## 10. Get access to Azure Communications Gateway for your Azure subscription
+## 8. Get access to Azure Communications Gateway for your Azure subscription
+
+Access to Azure Communications Gateway is restricted. When you've completed the previous steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details.
+
+Wait for confirmation that Azure Communications Gateway is enabled before moving on to the next step.
+
+## 9. Register the Microsoft Voice Services resource provider
+
+If the **Microsoft.VoiceServices** resource provider isn't already registered in your subscription, register it by using the following steps or the Azure CLI sketch after the steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your Azure subscription.
+1. Select **Resource providers** under the **Settings** tab.
+1. Search for the **Microsoft.VoiceServices** resource provider.
+1. Check if the resource provider is already marked as registered. If it isn't, choose the resource provider and select **Register**.
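+
+If you prefer the Azure CLI, the following sketch shows an equivalent check and registration (run it against the subscription you plan to use for Azure Communications Gateway):
+
+```azurecli
+# Check the current registration state of the Microsoft.VoiceServices resource provider.
+az provider show --namespace Microsoft.VoiceServices --query registrationState --output tsv
+
+# Register the resource provider if it isn't already registered.
+az provider register --namespace Microsoft.VoiceServices
+```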
+
+## 10. Set up application roles for Azure Communications Gateway in your Project Synergy application
-Access to Azure Communications Gateway is restricted. When you've completed the other steps in this article, contact your onboarding team and ask them to enable your subscription. If you don't already have an onboarding team, contact azcog-enablement@microsoft.com with your Azure subscription ID and contact details.
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to an AzureCommunicationsGateway service principal under the Project Synergy Enterprise Application. You created the Project Synergy application in [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy). Microsoft created the Azure Communications Gateway service principal for you when you followed [9. Register the Microsoft Voice Services resource provider](#9-register-the-microsoft-voice-services-resource-provider).
+
+You need to do the following steps in the tenant that contains your Project Synergy application.
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as an Azure Active Directory Global Admin.
+1. Select **Azure Active Directory**.
+1. Select **Properties**.
+1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID.
+1. Open PowerShell.
+1. If you didn't install the Azure Active Directory module as part of [1. Add the Project Synergy application to your Azure tenancy](#1-add-the-project-synergy-application-to-your-azure-tenancy), install it:
+ ```azurepowershell
+ Install-Module AzureAD
+ ```
+1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 4.
+ ```azurepowershell
+ Connect-AzureAD -TenantId "<AADTenantID>"
+ ```
+1. Run the following PowerShell commands. These commands add the following roles for Azure Communications Gateway: `TrunkManagement.Read`, `NumberManagement.Read`, `NumberManagement.Write`, `Data.Read`, `Data.Write`, `TrunkManagement.Write`, `PartnerSettings.Read`.
+ ```azurepowershell
+ # Get the Service Principal ID for Azure Communications Gateway
+ $commGwayApplicationId = "8502a0ec-c76d-412f-836c-398018e2312b"
+ $commGwayEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$commGwayApplicationId'"
+ $commGwayObjectId = $commGwayEnterpriseApplication.ObjectId
+
+ # Get the Service Principal ID for Project Synergy (Operator Connect)
+ $projectSynergyApplicationId = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $projectSynergyEnterpriseApplication = Get-AzureADServicePrincipal -Filter "AppId eq '$projectSynergyApplicationId'"
+ $projectSynergyObjectId = $projectSynergyEnterpriseApplication.ObjectId
+
+ # Required Operator Connect - Project Synergy Roles
+ $trunkManagementRead = "72129ccd-8886-42db-a63c-2647b61635c1"
+ $trunkManagementWrite = "e907ba07-8ad0-40be-8d72-c18a0b3c156b"
+
+ $partnerSettingsRead = "d6b0de4a-aab5-4261-be1b-0e1800746fb2"
+
+ $numberManagementRead = "130ecbe2-d1e6-4bbd-9a8d-9a7a909b876e"
+ $numberManagementWrite = "752b4e79-4b85-4e33-a6ef-5949f0d7d553"
+
+ $dataRead = "eb63d611-525e-4a31-abd7-0cb33f679599"
+ $dataWrite = "98d32f93-eaa7-4657-b443-090c23e69f27"
+
+ $requiredRoles = $trunkManagementRead, $numberManagementRead, $numberManagementWrite, $dataRead, $dataWrite, $trunkManagementWrite, $partnerSettingsRead
+
+ foreach ($role in $requiredRoles) {
+ # Assign the relevant Role to the Azure Communications Gateway Service Principal
+ New-AzureADServiceAppRoleAssignment -ObjectId $commGwayObjectId -PrincipalId $commGwayObjectId -ResourceId $projectSynergyObjectId -Id $role
+ }
+ ```
+
+## 11. Add the application ID for Azure Communications Gateway to Operator Connect
+
+Before you can use the roles that you set up in [10. Set up application roles for Azure Communications Gateway in your Project Synergy application](#10-set-up-application-roles-for-azure-communications-gateway-in-your-project-synergy-application), you must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment.
+
+To enable the Azure Communications Gateway application and the roles, add the application ID of the AzureCommunicationsGateway service principal to your Operator Connect or Teams Phone Mobile environment:
+
+1. Optionally, check the application ID of the service principal to confirm that you're adding the right application. You can use the following portal steps or the CLI sketch after these steps.
+ 1. Search for `AzureCommunicationsGateway` with the search bar: it's under the **Azure Active Directory** subheading.
+   1. On the overview page, check that the value of **Application ID** is `8502a0ec-c76d-412f-836c-398018e2312b`.
+1. Log into the [Operator Connect Number Management Portal](https://operatorconnect.microsoft.com/operator/configuration).
+1. Add a new **Application Id**, pasting in the following value. This value is the application ID for Azure Communications Gateway.
+ ```
+ 8502a0ec-c76d-412f-836c-398018e2312b
+ ```
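+
+If you prefer the command line, the following Azure CLI sketch is a quick way to confirm that the service principal exists in your tenant (it assumes you have permission to read directory objects):
+
+```azurecli
+# Look up the AzureCommunicationsGateway service principal by its application ID.
+az ad sp show --id 8502a0ec-c76d-412f-836c-398018e2312b --query displayName --output tsv
+```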
## Next steps
communications-gateway Rotate Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/rotate-secrets.md
- Title: Rotate your Azure Communications Gateway secrets
-description: Learn how to rotate your secrets to keep your Azure Communications Gateway secure.
---- Previously updated : 01/12/2023--
-# Rotate your Azure Communications Gateway secrets
-
-This article will guide you through how to rotate secrets for your Azure Communications Gateway. It's important to ensure that secrets are rotated regularly, and that you're aware and familiar with the mechanism for rotating them. Being familiar with this procedure is important because you may sometimes be required to perform an immediate rotation, for example, if the secret was leaked. Our recommendation is that these secrets are rotated at least **every 70 days**.
-
-Azure Communication Gateway uses an App registration to manage access to the Operator Connect API. This App registration uses secrets stored and managed in your subscription. For more information, see [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
-
-## Prerequisites
-
-You must know the name of the App registration and the Key Vault you created in [Prepare to deploy Azure Communications Gateway](deploy.md). We recommended using **Azure Communications Gateway service** as the name of the App registration.
-
-## 1. Rotate your secret for the App registration.
-
-We store both the secret and its associated identity, but only the secret needs to be rotated.
-
-1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as a **Storage Account Key Operator**, **Contributor** or **Owner**.
-1. Navigate to **App registrations** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **App registrations**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading.
-1. In the App registrations search box, type **Azure Communications Gateway service** (or the name of the App registration if you chose a different name).
-1. Select the application.
-1. In the left hand menu, select **Certificates and secrets**.
-1. You should see the secret you created in [Prepare to deploy your Azure Communications Gateway](prepare-to-deploy.md).
- > [!NOTE]
- >If you need to immediately deactivate a secret and make it un-usable, select the bin icon to the right of the secret.
-1. Select **New client secret**.
-1. Enter a name for the secret (we suggest that the name should include the date at which the secret is being created).
-1. Enter an expiry date. The expiry date should sync with your rotation schedule.
-1. Select **Add**.
-1. Copy or note down the value of the new secret (you won't be able to retrieve it later). If you navigate away from the page or refresh without collecting the value of the secret, you'll need to create a new one.
-
-## 2. Update your Key Vault with the new secret value
-
-Azure Key Vault is a cloud service for securely storing and accessing secrets. When you create a new secret for your App registration, you must add the value to your corresponding Key Vault. Add the value as a new version of the existing secret in the Key Vault. Azure Communications Gateway starts using the new value as soon as it makes a request for the value of the secret.
-
-1. Sign in to the [Azure portal](https://ms.portal.azure.com/) as a **Storage Account Key Operator**, **Contributor** or **Owner**.
-1. Navigate to **Key Vaults** in the Azure portal (select **Azure Active Directory** and then in the left-hand menu, select **Key Vaults**). Alternatively, you can search for it with the search bar: it will appear under the **Services** subheading.
-1. Select the relevant Key Vault.
-1. In the left hand menu, select **Secrets**.
-1. Select the secret you're updating from the list.
-1. In the top navigation menu, select **New version**.
-1. In the **Secret value** textbox, enter the secret value you noted down in the previous procedure.
-1. (Optional) Enter an expiry date for your secret. The expiry date should sync with your rotation schedule.
-1. Select **Create**.
--
-## Next steps
--- Learn how [Azure Communications Gateway keeps your data secure](security.md).
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
+
+ Title: What's new in Azure Communications Gateway?
+description: Discover what's new in Azure Communications Gateway
++++ Last updated : 03/16/2023++
+# What's new in Azure Communications Gateway?
+
+This article covers new features and improvements for Azure Communications Gateway.
+
+## March 2023: Simpler authentication for Operator Connect APIs
+
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. Azure Communications Gateway therefore needs to authenticate with your Operator Connect or Teams Phone Mobile environment.
+
+From March 2023, Azure Communications Gateway automatically provides a service principal for this authentication. You must set up specific permissions for this service principal and then add the service principal to your Operator Connect or Teams Phone Mobile environment. For more information, see [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
+
+This new authentication model replaces an earlier model that required you to create an App registration and manage secrets for it. With the new model, you no longer need to create, manage or rotate secrets.
+
+## Next steps
+
+- [Learn more about Azure Communications Gateway](overview.md).
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md).
confidential-computing Concept Skr Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/concept-skr-attestation.md
+
+ Title: Secure Key Release with Azure Key Vault and Azure Confidential Computing
+description: Concept guide on what SKR is and its usage with Azure Confidential Computing Offerings
+++++ Last updated : 2/2/2023+++
+# Secure Key Release feature with AKV and Azure Confidential Computing (ACC)
+
+Secure Key Release (SKR) is a functionality of the Azure Key Vault (AKV) Managed HSM and Premium offerings. Secure key release enables the release of an HSM-protected key from AKV to an attested Trusted Execution Environment (TEE), such as a secure enclave or a VM-based TEE. SKR adds another layer of access protection to your data decryption/encryption keys: only an application and TEE runtime environment with a known configuration can get access to the key material. The SKR policies defined at the time of exportable key creation govern the access to these keys.
+
+## SKR support with AKV offerings
+
+- [Azure Key Vault Premium](../security/fundamentals/key-management.md)
+- [Azure Key Vault Managed HSM](../key-vault/managed-hsm/overview.md)
+
+## Overall Secure Key Release Flow with TEE
+
+SKR can only release keys based on claims generated by Microsoft Azure Attestation (MAA). The SKR policy definition is tightly integrated with MAA claims.
+
+![Diagram of Secure Key Release Flow.](media/skr-flow-confidential-vm-sev-snp-attestation/skr-e2e-flow.png)
+
+The following steps are for AKV Premium.
+
+### Step 1: Create an HSM-backed Key Vault Premium vault
+
+[Follow the steps here for Azure CLI-based AKV creation](../key-vault/general/quick-create-cli.md)
+
+Make sure to set the value of `--sku` to `premium`.
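+
+For example, a minimal Azure CLI command might look like the following sketch (the vault name, resource group, and region are placeholders you'd replace with your own values):
+
+```azurecli
+# Create a Premium (HSM-backed) key vault; replace the placeholder values.
+az keyvault create \
+  --name "<your-akv-premium-name>" \
+  --resource-group "<your-resource-group>" \
+  --location "<region>" \
+  --sku premium
+```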
+
+### Step 2: Create a Secure Key Release Policy
+
+A Secure Key Release policy is a JSON-format release policy, as defined [here](/rest/api/keyvault/keys/create-key/create-key?tabs=HTTP#keyreleasepolicy), that specifies a set of claims required, in addition to authorization, to release the key. The claims are MAA-based claims, as referenced [here for SGX](/azure/attestation/attestation-token-examples#sample-jwt-generated-for-sgx-attestation) and [here for AMD SEV-SNP CVMs](/azure/attestation/attestation-token-examples#sample-jwt-generated-for-sev-snp-attestation).
+
+Visit the TEE-specific [examples page for more details](skr-policy-examples.md).
+
+Before you set an SKR policy, make sure to run your TEE application through the remote attestation flow. Remote attestation isn't covered as part of this tutorial.
+
+Example
+
+```json
+{
+ "version": "1.0.0",
+  "anyOf": [ // Always starts with "anyOf", meaning you can have multiple, even varying, rules per authority.
+ {
+ "authority": "https://sharedweu.weu.attest.azure.net",
+ "allOf": [ // can be replaced by "anyOf", though you cannot nest or combine "anyOf" and "allOf" yet.
+ {
+ "claim": "x-ms-isolation-tee.x-ms-attestation-type", // These are the MAA claims.
+ "equals": "sevsnpvm"
+ },
+ {
+ "claim": "x-ms-isolation-tee.x-ms-compliance-status",
+ "equals": "azure-compliant-cvm"
+ }
+ ]
+ }
+ ]
+}
+
+```
+
+### Step 3: Create an exportable key in AKV with attached SKR policy
+
+Exact details of the key type and other associated attributes can be found [here](../key-vault/general/quick-create-cli.md).
+
+```azurecli
+az keyvault key create --exportable true --vault-name "<vault name from step 1>" --kty RSA-HSM --name "<key name>" --policy "<JSON policy from step 2, or a path to the JSON file>" --protection hsm
+```
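+To confirm that the key was created with the release policy attached, you can inspect it afterwards; this is a sketch with placeholder names:
+
+```azurecli
+# Show the key; the output includes its attributes and the attached release_policy.
+az keyvault key show --vault-name "<vault name from step 1>" --name "<key name>"
+```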
+
+### Step 4: The application running within a TEE performs a remote attestation
+
+This step is specific to the type of TEE in which you're running your application: Intel SGX enclaves, AMD SEV-SNP based Confidential Virtual Machines (CVMs), or confidential containers running in CVM enclaves with AMD SEV-SNP.
+
+Follow these reference examples for the various TEE types offered with Azure:
+
+- [Application within AMD SEV-SNP based CVMs performing Secure Key Release](skr-flow-confidential-vm-sev-snp.md)
+- [Confidential containers with Azure Container Instances (ACI) with SKR side-car containers](skr-flow-confidential-containers-azure-container-instance.md)
+- [Intel SGX based applications performing Secure Key Release - Open Source Solution Mystikos Implementation](https://github.com/deislabs/mystikos/tree/main/samples/confidential_ml#environment)
+
+## Frequently Asked Questions (FAQ)
+
+### Can I perform SKR with non-confidential computing offerings?
+
+No. The policy attached to SKR only understands MAA claims that are associated with hardware-based TEEs.
+
+### Can I bring my own attestation provider or service and use those claims for AKV to validate and release?
+
+No. AKV only understands and integrates with MAA today.
+
+### Can I use AKV SDKs to perform key RELEASE?
+
+Yes. The latest SDKs integrated with the 7.3 AKV API support the key release operation.
+
+### Can you share some examples of the key release policies?
+
+Yes, detailed examples by TEE type are listed [here.](./skr-policy-examples.md)
+
+### Can I attach an SKR type of policy to certificates and secrets?
+
+No. Not at this time.
+
+## References
+
+[SKR Policy Examples](skr-policy-examples.md)
+
+[Azure Container Instance with confidential containers Secure Key Release with container side-cars](skr-flow-confidential-containers-azure-container-instance.md)
+
+[CVM on AMD SEV-SNP Applications with Secure Key Release Example](skr-flow-confidential-vm-sev-snp.md)
+
+[AKV REST API With SKR Details](/rest/api/keyvault/keys/create-key/create-key?tabs=HTTP)
+
+[AKV SDKs](../key-vault/general/client-libraries.md)
confidential-computing Confidential Containers Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-enclaves.md
Title: Confidential containers with Intex SGX enclaves on Azure
+ Title: Confidential containers with Intel SGX enclaves on Azure
description: Learn about unmodified container support with confidential containers on Intel SGX through OSS and partner solutions
confidential-computing Skr Flow Confidential Containers Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-containers-azure-container-instance.md
+
+ Title: Secure Key Release with Azure Key Vault and Confidential Containers on Azure Container Instance
+description: Learn how to build an application that securely gets the key from AKV to an attested Azure Container Instances confidential container environment
+++++ Last updated : 3/9/2023+++
+# Secure Key Release with Confidential containers on Azure Container Instance (ACI)
+
+The Secure Key Release (SKR) flow with Azure Key Vault (AKV) can be implemented with the confidential container offerings in a couple of ways. Confidential containers run an enlightened guest that exposes the AMD SEV-SNP device through a Linux kernel that uses in-guest firmware with the necessary Hyper-V related patches, which we refer to as Direct Linux Boot (DLB). This platform doesn't use the vTPM and HCL based approach that Confidential VMs with AMD SEV-SNP support rely on. This concept document assumes you plan to run the containers in [Azure Container Instances using a confidential computing SKU](../container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md). The two implementation options are:
+
+- Side-Car Helper Container provided by Azure
+- Custom implementation with your container application
+
+## Side-Car helper container provided by Azure
+
+An [open-source GitHub project, "confidential sidecar containers",](https://github.com/microsoft/confidential-sidecar-containers) details how to build this container and what parameters/environment variables are required for you to prepare and run this side-car container. The current side-car implementation provides various HTTP REST APIs that your primary application container can use to fetch the key from AKV. The integration through Microsoft Azure Attestation (MAA) is already built in. The detailed preparation steps to run the side-car SKR container can be found [here](https://github.com/microsoft/confidential-sidecar-containers/tree/main/examples/skr).
+
+Your main application container can call the side-car web API endpoints as defined in the example below. The side-car runs within the same container group and is a local endpoint to your application container. Full details of the API can be found [here](https://github.com/microsoft/confidential-sidecar-containers/blob/main/cmd/skr/README.md).
+
+The `key/release` POST method expects a JSON of the following format:
+
+```json
+{
+ "maa_endpoint": "<maa endpoint>", //https://learn.microsoft.com/en-us/azure/attestation/quickstart-portal#attestation-provider
+ "akv_endpoint": "<akv endpoint>", //AKV URI
+    "kid": "<key identifier>", // key name
+ "access_token": "optional aad token if the command will run in a resource without proper managed identity assigned"
+}
+```
+
+Upon success, the `key/release` POST method response carries a `StatusOK` header and a payload of the following format:
+
+```json
+{
+ "key": "<key in JSON Web Key format>"
+}
+```
+
+Upon error, the `key/release` POST method response carries a `StatusForbidden` header and a payload of the following format:
+
+```json
+{
+ "error": "<error message>"
+}
+```
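+For illustration only, a call from the application container to the side-car might look like the following sketch. The port and all values are assumptions; check the side-car README for the endpoint your deployment actually exposes.
+
+```bash
+# Hypothetical local call to the side-car's key/release endpoint (port and values are placeholders).
+curl -X POST "http://localhost:8080/key/release" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "maa_endpoint": "sharedweu.weu.attest.azure.net",
+        "akv_endpoint": "mykeyvault.vault.azure.net",
+        "kid": "mykey"
+      }'
+```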
+
+## Custom implementation with your container application
+
+To build a custom container application that extends the capabilities of Azure Key Vault (AKV) Secure Key Release and Microsoft Azure Attestation (MAA), use the following as a high-level reference flow. An easy approach is to review the current side-car implementation code in this [side-car GitHub project](https://github.com/microsoft/confidential-sidecar-containers/tree/d933d0f4e3d5498f7ed9137189ab6a23ade15466/pkg/common).
+
+![Image of the aforementioned operations, which you should be performing.](media/skr-flow-azure-container-instance-sev-snp-attestation/skr-flow-custom-container.png)
+
+1. **Step 1:** Set up AKV with Exportable Key and attach the release policy. More [here](concept-skr-attestation.md)
+1. **Step 2:** Set up a managed identity with Azure Active Directory and attach that to AKV. More [here](../container-instances/container-instances-managed-identity.md)
+1. **Step 3:** Deploy your container application with required parameters within ACI by setting up a confidential computing enforcement policy. More [here](../container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md)
+1. **Step 4:** In this step, your application fetches a raw AMD SEV-SNP hardware report by making an IOCTL Linux socket call. You don't need any guest attestation library to perform this action. More on the existing side-car [implementation](https://github.com/microsoft/confidential-sidecar-containers/blob/d933d0f4e3d5498f7ed9137189ab6a23ade15466/pkg/attest/snp.go)
+1. **Step 5:** Fetch the AMD SEV-SNP cert chain for the container group. These certs are delivered from the Azure host IMDS endpoint. More [here](https://github.com/microsoft/confidential-sidecar-containers/blob/d933d0f4e3d5498f7ed9137189ab6a23ade15466/pkg/common/info.go)
+1. **Step 6:** Send the SNP RAW hardware report and cert details to MAA for verification and return claims. More [here](../attestation/basic-concepts.md)
+1. **Step 7:** Send the MAA token and the managed identity token generated by ACI to AKV for key release. More [here](../container-instances/container-instances-managed-identity.md)
+
+On success of the key fetch from AKV, you can consume the key for decrypting the data sets or encrypt the data going out of the confidential container environment.
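+For example, the managed identity token mentioned in step 7 can be requested from the instance metadata endpoint. This is a minimal sketch and assumes the IMDS-style endpoint is reachable from your container group:
+
+```bash
+# Request an Azure AD access token for Key Vault using the managed identity.
+curl -s -H "Metadata: true" \
+  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
+```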
+
+## References
+
+[ACI with Confidential container deployments](../container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md)
+
+[Side-Car Implementation with encrypted blob fetch and decrypt with SKR AKV key](https://github.com/microsoft/confidential-sidecar-containers/#encrypted-filesystem-sidecar)
+
+[AKV SKR with Confidential VM's AMD SEV-SNP](skr-flow-confidential-vm-sev-snp.md)
+
+[Microsoft Azure Attestation (MAA)](../attestation/overview.md)
+
+[SKR Policy Examples](skr-policy-examples.md)
confidential-computing Skr Flow Confidential Vm Sev Snp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-vm-sev-snp.md
+
+ Title: Secure Key Release with Azure Key Vault and application on Confidential VMs with AMD SEV-SNP
+description: Learn how to build an application that securely gets the key from AKV to a Confidential VM attested environment and in an Azure Kubernetes Service cluster
+++++ Last updated : 2/2/2023+++
+# Secure Key Release with Confidential VMs How To Guide
+
+This article describes how to perform a Secure Key Release from Azure Key Vault when your applications are running in an AMD SEV-SNP based confidential virtual machine. To learn more about Secure Key Release and Azure Confidential Computing, see [the concept article](./concept-skr-attestation.md).
+
+SKR requires that an application performing SKR go through a remote guest attestation flow using Microsoft Azure Attestation (MAA), as described [here](guest-attestation-confidential-vms.md).
+
+## Overall flow and architecture
+
+To allow Azure Key Vault to release a key to an attested confidential virtual machine, there are certain steps that need to be followed:
+
+1. Assign a managed identity to the confidential virtual machine. System-assigned managed identity or a user-assigned managed identity are allowed.
+1. Set a Key Vault access policy to grant the managed identity the "release" key permission. A policy allows the confidential virtual machine to access the Key Vault and perform the release operation. If using Key Vault Managed HSM, assign "Managed HSM Crypto Service Release User" role membership.
+1. Create a Key Vault key that is marked as exportable and has an associated release policy. The key release policy associates the key with an attested confidential virtual machine and ensures that the key can only be used for the desired purpose.
+1. To perform the release, send an HTTP request to the Key Vault from the confidential virtual machine. The HTTP request must include the Confidential VM's attested platform report in the request body. The attested platform report is used to verify the trustworthiness of the state of the Trusted Execution Environment-enabled platform, such as the Confidential VM. The Microsoft Azure Attestation service can be used to create the attested platform report and include it in the request.
+
+![Diagram of the aforementioned operations, which we'll be performing.](media/skr-flow-confidential-vm-sev-snp-attestation/overview.png)
+
+## Deploying an Azure Key Vault
+
+Set up AKV Premium or AKV Managed HSM with an exportable key. Follow the detailed instructions in [setting up SKR exportable keys](concept-skr-attestation.md).
+
+### Bicep
+
+```bicep
+@description('Required. Specifies the Azure location where the key vault should be created.')
+param location string = resourceGroup().location
+
+@description('Specifies the Azure Active Directory tenant ID that should be used for authenticating requests to the key vault. Get it by using Get-AzSubscription cmdlet.')
+param tenantId string = subscription().tenantId
+
+resource keyVault 'Microsoft.KeyVault/vaults@2021-11-01-preview' = {
+ name: 'mykeyvault'
+ location: location
+ properties: {
+ tenantId: tenantId
+ sku: {
+ name: 'premium'
+ family: 'A'
+ }
+ }
+}
+```
+
+### ARM template
+
+```json
+ {
+ "type": "Microsoft.KeyVault/vaults",
+ "apiVersion": "2021-11-01-preview",
+ "name": "mykeyvault",
+ "location": "[parameters('location')]",
+ "properties": {
+ "tenantId": "[parameters('tenantId')]",
+ "sku": {
+ "name": "premium",
+ "family": "A"
+ }
+ }
+ }
+```
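+Either template can then be deployed to a resource group with a command along these lines; the file and resource group names are placeholders:
+
+```azurecli
+# Deploy the Bicep (or JSON) template that defines the key vault.
+az deployment group create \
+  --resource-group "my-skr-rg" \
+  --template-file ./keyvault.bicep
+```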
+
+## Deploy a confidential virtual machine
+
+Follow the quickstart instructions on how to "[Deploy confidential VM with ARM template](quick-create-confidential-vm-arm-amd.md)"
+
+## Enable system-assigned managed identity
+
+[Managed identities](../active-directory/managed-identities-azure-resources/overview.md) for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+
+To enable system-assigned managed identity on a CVM, your account needs the [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role assignment. No other Azure AD directory role assignments are required.
+
+### [Bicep 1](#tab/bicep)
+
+1. Whether you sign in to Azure locally or via the Azure portal, use an account that is associated with the Azure subscription that contains the VM.
+
+2. To enable system-assigned managed identity, load the template into an editor, locate the `Microsoft.Compute/virtualMachines` resource of interest and add the `"identity"` property at the same level as the `name: vmName` property. Use the following syntax:
+
+ ```bicep
+ identity:{
+ type: 'SystemAssigned'
+ }
+ ```
+
+3. Add the `resource` details to the template.
+
+ ```bicep
+ resource confidentialVm 'Microsoft.Compute/virtualMachines@2021-11-01' = {
+ name: vmName
+ location: location
+ identity:{
+ type: 'SystemAssigned'
+ }
+ // other resource provider properties
+ }
+ ```
+
+### [ARM template 1](#tab/arm-template)
+
+1. Use an Azure account that is associated to the Azure subscription that contains the VM.
+
+2. To enable system-assigned managed identity, load the template into an editor, locate the `Microsoft.Compute/virtualMachines` resource of interest within the `resources` section and add the `"identity"` property at the same level as the `"type": "Microsoft.Compute/virtualMachines"` property. Use the following syntax:
+
+ ```json
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ ```
+
+3. The final template looks like the following example:
+
+ ```json
+ "resources": [
+ {
+ "apiVersion": "2021-11-01",
+ "type": "Microsoft.Compute/virtualMachines",
+ "name": "[parameters('vmName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+        "type": "SystemAssigned"
+ },
+ //other resource provider properties...
+ }
+ ]
+ ```
++
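+If you'd rather enable the identity on an existing CVM instead of redeploying a template, the Azure CLI provides an equivalent; the VM and resource group names below are placeholders:
+
+```azurecli
+# Enable a system-assigned managed identity on an existing confidential VM.
+az vm identity assign --resource-group "my-skr-rg" --name "my-cvm"
+```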
+## Add the access policy to Azure Key Vault
+
+Once you turn on a system-assigned managed identity for your CVM, you have to give it access to the Azure Key Vault data plane where key objects are stored. To ensure that only our confidential virtual machine can execute the release operation, we grant only the specific permission required for that operation.
+
+> [!NOTE]
+> You can find the managed identity object ID in the virtual machine identity options, in the Azure portal. Alternatively you can retrieve it with [PowerShell](../active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md), [Azure CLI](../active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-cli.md), Bicep or ARM templates.
+
+### [Bicep 2](#tab/bicep)
+
+```bicep
+@description('Required. Specifies the object ID of a user, service principal or security group in the Azure Active Directory tenant for the vault. The object ID must be unique for the list of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets.')
+param objectId string
+
+resource keyVaultCvmAccessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2022-07-01' = {
+ parent: keyVault
+ name: 'add'
+ properties: {
+ accessPolicies: [
+ {
+ objectId: objectId
+ tenantId: tenantId
+ permissions: {
+ keys: [
+ 'release'
+ ]
+ }
+ }
+ ]
+ }
+}
+```
+
+### [ARM template 2](#tab/arm-template)
+
+```json
+ {
+ "type": "Microsoft.KeyVault/vaults/accessPolicies",
+ "apiVersion": "2022-07-01",
+ "name": "[format('{0}/{1}', 'mykeyvault', 'add')]",
+ "properties": {
+ "accessPolicies": [
+ {
+ "objectId": "[parameters('objectId')]",
+ "tenantId": "[parameters('tenantId')]",
+ "permissions": {
+ "keys": [
+ "release"
+ ]
+ }
+ }
+ ]
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.KeyVault/vaults', 'mykeyvault')]"
+ ]
+ }
+```
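+For reference, a CLI sketch of the same access policy assignment, assuming a recent Azure CLI version that recognizes the `release` key permission; the names are placeholders:
+
+```azurecli
+# Look up the CVM's system-assigned identity and grant it only the 'release' key permission.
+principalId=$(az vm show --resource-group "my-skr-rg" --name "my-cvm" --query identity.principalId -o tsv)
+az keyvault set-policy --name "mykeyvault" --object-id "$principalId" --key-permissions release
+```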
+
+## Prepare the release policy
+
+Key Vault Secure Key Release policies are modeled after __Azure Policy__, with a [slightly different grammar](../key-vault/keys/policy-grammar.md).
+The idea is that we __pass the attested platform report__, in the form of a JSON Web Token (JWT), to Key Vault. Key Vault, in turn, looks at the JWT and checks whether the attested platform report claims match the claims in the policy.
+
+For example, let's say we want to release a key only when our attested platform report has properties like:
+
+- Attested by [Microsoft Azure Attestation (MAA)](../attestation/overview.md) service endpoint "https://sharedweu.weu.attest.azure.net".
+ - This `authority` value from the policy is compared to the `iss` (issuer) property, in the token.
+- And that it also contains an object called `x-ms-isolation-tee` with a property called `x-ms-attestation-type`, which holds value `sevsnpvm`.
+ - MAA as an Azure service has attested that the CVM is running in an AMD SEV-SNP genuine processor.
+- And that it also contains an object called `x-ms-isolation-tee` with a property called `x-ms-compliance-status`, which holds the value `azure-compliant-cvm`.
+ - MAA as an Azure service has the ability to attest that the CVM is a compliant Azure confidential virtual machine.
+
+Create a new folder called `assets` and add the following JSON content to a file named `cvm-release-policy.json`:
+
+```json
+{
+ "version": "1.0.0",
+ "anyOf": [
+ {
+ "authority": "https://sharedweu.weu.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-isolation-tee.x-ms-attestation-type",
+ "equals": "sevsnpvm"
+ },
+ {
+ "claim": "x-ms-isolation-tee.x-ms-compliance-status",
+ "equals": "azure-compliant-cvm"
+ }
+ ]
+ }
+ ]
+}
+```
+
+Release policy is an `anyOf` condition containing an array of key authorities. A `claim` condition is a JSON object that identifies a claim name, a condition for matching, and a value. The `AnyOf` and `AllOf` condition objects allow for the modeling of a logical `OR` and `AND`. Currently, we can only perform an `equals` comparison on a `claim`. Condition properties are placed together with `authority` properties.
+
+> [!IMPORTANT]
+> An environment assertion contains at least __a key encryption key and one or more claims about the target environment__ (for example, TEE type, publisher, version) that are matched against the Key Release Policy. The key-encryption key is a public RSA key owned and protected by the target execution environment that is used for key export. It must appear in the TEE keys claim (x-ms-runtime/keys). This claim is a JSON object representing a JSON Web Key Set. Within the JWKS, one of the keys must meet the requirements for use as an encryption key (key_use is "enc", or key_ops contains "encrypt"). The first suitable key is chosen.
+
+Key Vault picks the first suitable key from the "`keys`" array property in the "`x-ms-runtime`" object; it looks for a public RSA key with `"key_use": ["enc"]` or `"key_ops": ["encrypt"]`. An example of an attested platform report looks like this:
+
+```json
+{
+ //...
+ "x-ms-runtime": {
+ "client-payload": {
+ "nonce": "MTIzNA=="
+ },
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "TpmEphemeralEncryptionKey",
+ "kty": "RSA",
+ "n": "9v2XQgAA6y18CxV8dSGnh..."
+ }
+ ]
+ },
+ //...
+}
+```
+
+In this example, we have only one key under the `$.x-ms-runtime.keys` path. Key Vault uses the `TpmEphemeralEncryptionKey` key as the key-encryption key.
+
+> [!NOTE]
+> Notice that there may be a key under `$.x-ms-isolation-tee.x-ms-runtime.keys`; this is __not__ the key that Key Vault uses.
+
+## Create an exportable key with release policy
+
+We create a Key Vault access policy that lets an Azure Confidential Virtual Machine perform the `release` key operation. Finally, we must include our release policy as a base64 encoded string during the key creation. The key must be an __exportable__ key, backed by an HSM.
+
+> [!NOTE]
+> HSM-backed keys are available with Azure Key Vault Premium and Azure Key Vault Managed HSM.
+
+### [Bicep 3](#tab/bicep)
+
+```bicep
+@description('The type of the key. For valid values, see JsonWebKeyType. Must be backed by HSM, for secure key release.')
+@allowed([
+ 'EC-HSM'
+ 'RSA-HSM'
+])
+param keyType string = 'RSA-HSM'
+
+@description('Not before date in seconds since 1970-01-01T00:00:00Z.')
+param keyNotBefore int = -1
+
+@description('Expiry date in seconds since 1970-01-01T00:00:00Z.')
+param keyExpiration int = -1
+
+@description('The elliptic curve name. For valid values, see JsonWebKeyCurveName.')
+@allowed([
+ 'P-256'
+ 'P-256K'
+ 'P-384'
+ 'P-521'
+])
+param curveName string
+
+@description('The key size in bits. For example: 2048, 3072, or 4096 for RSA.')
+param keySize int = -1
+
+resource exportableKey 'Microsoft.KeyVault/vaults/keys@2022-07-01' = {
+ parent: keyVault
+ name: 'mykey'
+ properties: {
+ kty: keyType
+ attributes: {
+ exportable: true
+ enabled: true
+ nbf: keyNotBefore == -1 ? null : keyNotBefore
+ exp: keyExpiration == -1 ? null : keyExpiration
+ }
+ curveName: curveName // applicable when using key type (kty) 'EC'
+ keySize: keySize == -1 ? null : keySize
+ keyOps: ['encrypt','decrypt'] // encrypt and decrypt only work with RSA keys, not EC
+ release_policy: {
+ contentType: 'application/json; charset=utf-8'
+ data: loadFileAsBase64('assets/cvm-release-policy.json')
+ }
+ }
+}
+```
+
+### [ARM template 3](#tab/arm-template)
+
+```json
+ {
+ "type": "Microsoft.KeyVault/vaults/keys",
+ "apiVersion": "2022-07-01",
+ "name": "[format('{0}/{1}', 'mykeyvault', 'mykey')]",
+ "properties": {
+ "kty": "RSA-HSM",
+ "attributes": {
+ "exportable": true,
+ "enabled": true,
+ "nbf": "[if(equals(parameters('keyNotBefore'), -1), null(), parameters('keyNotBefore'))]",
+ "exp": "[if(equals(parameters('keyExpiration'), -1), null(), parameters('keyExpiration'))]"
+ },
+ "curveName": "[parameters('curveName')]",
+ "keySize": "[if(equals(parameters('keySize'), -1), null(), parameters('keySize'))]",
+ "keyOps": [
+ "encrypt",
+ "decrypt"
+ ],
+ "release_policy": {
+ "contentType": "application/json; charset=utf-8",
+ "data": "[variables('cvmReleasePolicyBase64EncodedString')]"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.KeyVault/vaults', 'mykeyvault')]"
+ ]
+ }
+```
+
+We can verify that Key Vault has created a new, __HSM-backed__ key and that it contains our secure key __release policy__ by navigating to the Azure portal and selecting the key. The intended key is marked as "__exportable__".
+
+![Screenshot of the Azure portal with the settings for key named 'my SKR key' visible. It shows another panel that shows the details of the secure key release policy.](media/skr-flow-confidential-vm-sev-snp-attestation/skr-onboard-key-with-policy.png)
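+If you prefer the CLI to a template deployment, a roughly equivalent command might look like this sketch; the vault, key, and policy file names are placeholders:
+
+```azurecli
+# Create an exportable, HSM-backed key with the release policy attached.
+az keyvault key create \
+  --vault-name "mykeyvault" \
+  --name "mykey" \
+  --kty RSA-HSM \
+  --exportable true \
+  --policy ./assets/cvm-release-policy.json
+```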
+
+## Guest attestation client
+
+Attestation helps us to _cryptographically assess_ that something is running in the intended operating state. It is the process by which one party, the verifier, assesses the trustworthiness of a potentially untrusted peer, the attester. With remote guest attestation, the trusted execution environment offers a platform that allows you to run an entire operating system inside of it.
+
+> [!IMPORTANT]
+> Microsoft offers a C/C++ library, for both [Windows](https://www.nuget.org/packages/Microsoft.Azure.Security.GuestAttestation) and [Linux](https://packages.microsoft.com/repos/azurecore/pool/main/a/azguestattestation1/), that can help your development efforts. The library makes it easy to acquire __an SEV-SNP platform report__ from the hardware and to also have it attested by an instance of the Azure Attestation service. The Azure Attestation service can either be one hosted by Microsoft (shared) or your own private instance.
+
+An [open-source](https://github.com/Azure/confidential-computing-cvm-guest-attestation) Windows and Linux client binary that utilizes the guest attestation library can be used to make the guest attestation process easy with CVMs. The client binary returns the attested platform report as a JSON Web Token, which is what Key Vault's `release` key operation needs.
+
+> [!NOTE]
+> A token from the Azure Attestation service is valid for [8 hours](../attestation/faq.yml).
+
+### [Linux](#tab/linux)
+
+1. Sign in to your VM.
+
+1. Clone the [sample Linux application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-platform-checker-exe/Linux).
+
+1. Install the `build-essential` package. This package installs everything required for compiling the sample application.
+
+ ```bash
+ sudo apt-get install build-essential
+ ```
+
+1. Install the `libcurl4-openssl-dev` and `libjsoncpp-dev` packages.
+
+ ```bash
+ sudo apt-get install libcurl4-openssl-dev
+ ```
+
+ ```bash
+ sudo apt-get install libjsoncpp-dev
+ ```
+
+1. [Download](https://packages.microsoft.com/repos/azurecore/pool/main/a/azguestattestation1/) the attestation package.
+
+1. Install the attestation package. Make sure to replace `<version>` with the version that you downloaded.
+
+ ```bash
+ sudo dpkg -i azguestattestation1_<latest-version>_amd64.deb
+ ```
+
+1. To run the sample client, navigate inside the unzipped folder and run the following command:
+
+ ```sh
+ sudo ./AttestationClient -a <attestation-url> -n <nonce-value> -o token
+ ```
+
+> [!NOTE]
+> If `-o` isn't specified as `token`, the executable prints a binary result of true or false, depending on the attestation result and whether the platform is `sevsnp`.
+
+### [Windows](#tab/windows)
+
+1. Sign in to your VM.
+1. Clone the [sample Windows application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-platform-checker-exe/Windows).
+1. Navigate inside the unzipped folder and run `VC_redist.x64.exe`. VC_redist will install Microsoft C and C++ (MSVC) runtime libraries on the machine.
+1. To run the sample client, navigate inside the unzipped folder and run the following command:
+
+    ```sh
+    .\AttestationClientApp.exe -a <attestation-url> -n <nonce-value> -o token
+ ```
+
+> [!NOTE]
+> If `-o` isn't specified as `token`, the executable prints a binary result of true or false, depending on the attestation result and whether the platform is `sevsnp`.
+
+### Guest Attestation result
+
+The result from the Guest Attestation client is simply a base64-encoded string. This encoded string value is a signed JSON Web Token (__JWT__), with a header, body, and signature. You can split the string by the `.` (dot) character and base64 decode the results.
+
+```text
+eyJhbGciO...
+```
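+For a quick local look at the claims, you can decode the body segment yourself; this is a convenience sketch, not part of the attestation flow:
+
+```bash
+# Decode the body (second segment) of the JWT returned by the attestation client.
+JWT="eyJhbGciO..."              # replace with the full token
+BODY=$(echo "$JWT" | cut -d '.' -f2)
+# Pad to a multiple of 4 characters so base64 can decode it.
+while [ $(( ${#BODY} % 4 )) -ne 0 ]; do BODY="${BODY}="; done
+echo "$BODY" | base64 --decode
+```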
+
+The header contains a `jku`, also known as a [JWK Set URI](https://www.rfc-editor.org/rfc/rfc7515#section-4.1.2), which links to a set of JSON-encoded public keys, one of which corresponds to the key used to digitally sign the JWS. The `kid` indicates which key was used to sign the JWS.
+
+```json
+{
+ "alg": "RS256",
+ "jku": "https://sharedweu.weu.attest.azure.net/certs",
+ "kid": "dRKh+hBcWUfQimSl3Iv6ZhStW3TSOt0ThwiTgUUqZAo=",
+ "typ": "JWT"
+}
+```
+
+The body of the guest attestation response is validated by Azure Key Vault as input to test against the key release policy. As previously noted, Azure Key Vault uses the "`TpmEphemeralEncryptionKey`" as the key-encryption key.
+
+```json
+{
+ "exp": 1671865218,
+ "iat": 1671836418,
+ "iss": "https://sharedweu.weu.attest.azure.net",
+ "jti": "ce395e5de9c638d384cd3bd06041e674edee820305596bba3029175af2018da0",
+ "nbf": 1671836418,
+ "secureboot": true,
+ "x-ms-attestation-type": "azurevm",
+ "x-ms-azurevm-attestation-protocol-ver": "2.0",
+ "x-ms-azurevm-attested-pcrs": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4,
+ 5,
+ 6,
+ 7
+ ],
+ "x-ms-azurevm-bootdebug-enabled": false,
+ "x-ms-azurevm-dbvalidated": true,
+ "x-ms-azurevm-dbxvalidated": true,
+ "x-ms-azurevm-debuggersdisabled": true,
+ "x-ms-azurevm-default-securebootkeysvalidated": true,
+ "x-ms-azurevm-elam-enabled": false,
+ "x-ms-azurevm-flightsigning-enabled": false,
+ "x-ms-azurevm-hvci-policy": 0,
+ "x-ms-azurevm-hypervisordebug-enabled": false,
+ "x-ms-azurevm-is-windows": false,
+ "x-ms-azurevm-kerneldebug-enabled": false,
+ "x-ms-azurevm-osbuild": "NotApplication",
+ "x-ms-azurevm-osdistro": "Ubuntu",
+ "x-ms-azurevm-ostype": "Linux",
+ "x-ms-azurevm-osversion-major": 20,
+ "x-ms-azurevm-osversion-minor": 4,
+ "x-ms-azurevm-signingdisabled": true,
+ "x-ms-azurevm-testsigning-enabled": false,
+ "x-ms-azurevm-vmid": "6506B531-1634-431E-99D2-42B7D3414AD0",
+ "x-ms-isolation-tee": {
+ "x-ms-attestation-type": "sevsnpvm",
+ "x-ms-compliance-status": "azure-compliant-cvm",
+ "x-ms-runtime": {
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "HCLAkPub",
+ "kty": "RSA",
+ "n": "tXkRLAABQ7vgX96..1OQ"
+ }
+ ],
+ "vm-configuration": {
+ "console-enabled": true,
+ "current-time": 1671835548,
+ "secure-boot": true,
+ "tpm-enabled": true,
+ "vmUniqueId": "6506B531-1634-431E-99D2-42B7D3414AD0"
+ }
+ },
+ "x-ms-sevsnpvm-authorkeydigest": "0000000000000..00",
+ "x-ms-sevsnpvm-bootloader-svn": 3,
+ "x-ms-sevsnpvm-familyId": "01000000000000000000000000000000",
+ "x-ms-sevsnpvm-guestsvn": 2,
+ "x-ms-sevsnpvm-hostdata": "0000000000000000000000000000000000000000000000000000000000000000",
+ "x-ms-sevsnpvm-idkeydigest": "57486a44..96",
+ "x-ms-sevsnpvm-imageId": "02000000000000000000000000000000",
+ "x-ms-sevsnpvm-is-debuggable": false,
+ "x-ms-sevsnpvm-launchmeasurement": "ad6de16..23",
+ "x-ms-sevsnpvm-microcode-svn": 115,
+ "x-ms-sevsnpvm-migration-allowed": false,
+ "x-ms-sevsnpvm-reportdata": "c6500..0000000",
+ "x-ms-sevsnpvm-reportid": "cf5ea742f08cb45240e8ad4..7eb7c6c86da6493",
+ "x-ms-sevsnpvm-smt-allowed": true,
+ "x-ms-sevsnpvm-snpfw-svn": 8,
+ "x-ms-sevsnpvm-tee-svn": 0,
+ "x-ms-sevsnpvm-vmpl": 0
+ },
+ "x-ms-policy-hash": "wm9mHlvTU82e8UqoOy1..RSNkfe99-69IYDq9eWs",
+ "x-ms-runtime": {
+ "client-payload": {
+ "nonce": ""
+ },
+ "keys": [
+ {
+ "e": "AQAB",
+ "key_ops": [
+ "encrypt"
+ ],
+ "kid": "TpmEphemeralEncryptionKey", // key-encryption key candidate!
+ "kty": "RSA",
+ "n": "kVTLSwAAQpg..Q"
+ }
+ ]
+ },
+ "x-ms-ver": "1.0"
+}
+```
+
+The documentation for Microsoft Azure Attestation service has an extensive list containing descriptions of all of these [SEV-SNP-related claims](../attestation/claim-sets.md#sev-snp-attestation).
+
+## Performing the key release operation
+
+We can use any scripting or programming language to receive an attested platform report using the AttestationClient binary. Since the virtual machine we deployed in a previous step has managed identity enabled, we should get an __Azure AD token for Key Vault__ from the instance metadata service (__IMDS__).
+
+By configuring the attested platform report as the body payload and the Azure AD token in our __authorization header__, you have everything needed to perform the key `release` operation.
+
+```powershell
+#Requires -Version 7
+#Requires -RunAsAdministrator
+#Requires -PSEdition Core
+
+<#
+.SYNOPSIS
+ Perform Secure Key Release operation in Azure Key Vault, provided this script is running inside an Azure Confidential Virtual Machine.
+.DESCRIPTION
+ Perform Secure Key Release operation in Azure Key Vault, provided this script is running inside an Azure Confidential Virtual Machine.
+ The release key operation is applicable to all key types. The target key must be marked exportable. This operation requires the keys/release permission.
+.PARAMETER -AttestationTenant
+ Provide the attestation instance base URI, for example https://mytenant.attest.azure.net.
+.PARAMETER -VaultBaseUrl
+ Provide the vault name, for example https://myvault.vault.azure.net.
+.PARAMETER -KeyName
+ Provide the name of the key to get.
+.PARAMETER -KeyVersion
+ Provide the version parameter to retrieve a specific version of a key.
+.INPUTS
+ None.
+.OUTPUTS
+ System.Management.Automation.PSObject
+.EXAMPLE
+ PS C:\> .\Invoke-SecureKeyRelease.ps1 -AttestationTenant "https://sharedweu.weu.attest.azure.net" -VaultBaseUrl "https://mykeyvault.vault.azure.net/" -KeyName "mykey" -KeyVersion "e473cd4c66224d16870bbe2eb4c58078"
+#>
+
+param (
+ [Parameter(Mandatory = $true)]
+ [string]
+ $AttestationTenant,
+ [Parameter(Mandatory = $true)]
+ [string]
+ $VaultBaseUrl,
+ [Parameter(Mandatory = $true)]
+ [string]
+ $KeyName,
+ [Parameter(Mandatory = $false)]
+ [string]
+ $KeyVersion
+)
+# Check if AttestationClient* exists.
+$fileExists = Test-Path -Path "AttestationClient*"
+if (!$fileExists) {
+ throw "AttestationClient binary not found. Please download it from 'https://github.com/Azure/confidential-computing-cvm-guest-attestation'."
+}
+
+$cmd = $null
+if ($isLinux) {
+ $cmd = "sudo ./AttestationClient -a $attestationTenant -o token"
+}
+elseif ($isWindows) {
+ $cmd = "./AttestationClientApp.exe -a $attestationTenant -o token"
+}
+
+$attestedPlatformReportJwt = Invoke-Expression -Command $cmd
+if (!$attestedPlatformReportJwt.StartsWith("eyJ")) {
+ throw "AttestationClient failed to get an attested platform report."
+}
+
+## Get access token from IMDS for Key Vault
+$imdsUrl = 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net'
+$kvTokenResponse = Invoke-WebRequest -Uri $imdsUrl -Headers @{Metadata = "true" }
+if ($kvTokenResponse.StatusCode -ne 200) {
+ throw "Unable to get access token. Ensure Azure Managed Identity is enabled."
+}
+$kvAccessToken = ($kvTokenResponse.Content | ConvertFrom-Json).access_token
+
+# Perform release key operation
+if ([string]::IsNullOrEmpty($keyVersion)) {
+ $kvReleaseKeyUrl = "{0}/keys/{1}/release?api-version=7.3" -f $vaultBaseUrl, $keyName
+}
+else {
+ $kvReleaseKeyUrl = "{0}/keys/{1}/{2}/release?api-version=7.3" -f $vaultBaseUrl, $keyName, $keyVersion
+}
+
+$kvReleaseKeyHeaders = @{
+ Authorization = "Bearer $kvAccessToken"
+ 'Content-Type' = 'application/json'
+}
+
+$kvReleaseKeyBody = @{
+ target = $attestedPlatformReportJwt
+}
+
+$kvReleaseKeyResponse = Invoke-WebRequest -Method POST -Uri $kvReleaseKeyUrl -Headers $kvReleaseKeyHeaders -Body ($kvReleaseKeyBody | ConvertTo-Json)
+if ($kvReleaseKeyResponse.StatusCode -ne 200) {
+ Write-Error -Message "Unable to perform release key operation."
+ Write-Error -Message $kvReleaseKeyResponse.Content
+}
+else {
+ $kvReleaseKeyResponse.Content | ConvertFrom-Json
+}
+```
+
+### Key Release Response
+
+The secure key release operation only returns a single property inside of its JSON payload. The contents, however, have been base64 encoded as well.
+
+```json
+{
+ "value": "eyJhbGciOiJSUzI1NiIsImtpZCI6Ijg4RUFDM.."
+}
+```
+
+Here we have another header, though this one has an [X.509 certificate chain](https://www.rfc-editor.org/rfc/rfc7515#section-4.1.6) as a property.
+
+```json
+{
+ "alg": "RS256",
+ "kid": "88EAC2DB6BE4E051B0E05AEAF6CB79E675296121",
+ "x5t": "iOrC22vk4FGw4Frq9st55nUpYSE",
+ "typ": "JWT",
+ "x5t#S256": "BO7jbeU3BG0FEjetF8rSisRbkMfcdy0olhcnmYEwApA",
+ "x5c": [
+ "MIIIfDCCBmSgA..XQ==",
+ "MII..8ZZ8m",
+ "MII..lMrY="
+ ]
+}
+```
+
+You can read the "`x5c`" array in PowerShell if you want to; this can help you verify that it's a valid certificate. Below is an example:
+
+```powershell
+$certBase64 = "MIIIfDCCBmSgA..XQ=="
+$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]([System.Convert]::FromBase64String($certBase64))
+$cert | Format-List *
+
+# NotAfter : 9/18/2023 6:14:06 PM
+# NotBefore : 9/23/2022 6:14:06 PM
+# ...
+# Issuer : CN=Microsoft Azure TLS Issuing CA 06, O=Microsoft Corporation, C=US
+# Subject : CN=vault.azure.net, O=Microsoft Corporation, L=Redmond, S=WA, C=US
+```
+
+The response's JWT token body looks incredibly similar to the response that you get when invoking the `get` key operation. However, the `release` operation includes the `key_hsm` property, amongst other things.
+
+```json
+{
+ "request": {
+ "api-version": "7.3",
+ "enc": "CKM_RSA_AES_KEY_WRAP",
+ "kid": "https://mykeyvault.vault.azure.net/keys/mykey"
+ },
+ "response": {
+ "key": {
+ "key": {
+ "kid": "https://mykeyvault.vault.azure.net/keys/mykey/e473cd4c66224d16870bbe2eb4c58078",
+ "kty": "RSA-HSM",
+ "key_ops": [
+ "encrypt",
+ "decrypt"
+ ],
+ "n": "nwFQ8p..20M",
+ "e": "AQAB",
+ "key_hsm": "eyJzY2hlbW..GIifQ"
+ },
+ "attributes": {
+ "enabled": true,
+ "nbf": 1671577355,
+ "exp": 1703113355,
+ "created": 1671577377,
+ "updated": 1671827011,
+ "recoveryLevel": "Recoverable+Purgeable",
+ "recoverableDays": 90,
+ "exportable": true
+ },
+ "tags": {},
+ "release_policy": {
+ "data": "eyJ2ZXJzaW9uIjoiMS4wLjAiLCJhbnlPZiI6W3siYXV0aG9yaXR5IjoiaHR0cHM6Ly9zaGFyZWR3ZXUud2V1LmF0dGVzdC5henVyZS5uZXQiLCJhbGxPZiI6W3siY2xhaW0iOiJ4LW1zLWlzb2xhdGlvbi10ZWUueC1tcy1hdHRlc3RhdGlvbi10eXBlIiwiZXF1YWxzIjoic2V2c25wdm0ifSx7ImNsYWltIjoieC1tcy1pc29sYXRpb24tdGVlLngtbXMtY29tcGxpYW5jZS1zdGF0dXMiLCJlcXVhbHMiOiJhenVyZS1jb21wbGlhbnQtY3ZtIn1dfV19",
+ "immutable": false
+ }
+ }
+ }
+}
+```
+
+If you base64 decode the value under `$.response.key.release_policy.data`, you get the JSON representation of the Key Vault key release policy that we defined in an earlier step.
+
+The `key_hsm` property base64 decoded value looks like this:
+
+```json
+{
+ "schema_version": "1.0",
+ "header": {
+ "kid": "TpmEphemeralEncryptionKey", // (key identifier of KEK)
+ "alg": "dir", // Direct mode, i.e. the referenced 'kid' is used to directly protect the ciphertext
+ "enc": "CKM_RSA_AES_KEY_WRAP"
+ },
+ "ciphertext": "Rftxvr..lb"
+}
+```
+
+## Next steps
+
+- [SKR Policy Examples](skr-policy-examples.md)
+- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
+- [Learn more about the guest attestation feature](guest-attestation-confidential-vms.md)
+- [Learn about Azure confidential VMs](confidential-vm-overview.md)
confidential-computing Skr Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-policy-examples.md
+
+ Title: Secure Key Release Policy with Azure Key Vault and Azure Confidential Computing
+description: Examples of AKV SKR policies across offered Azure Confidential Computing Trusted Execution Environments
+++++ Last updated : 3/5/2023+++
+# Secure Key Release Policy (SKR) Examples for Confidential Computing (ACC)
+
+SKR can only release keys marked as exportable, based on the Microsoft Azure Attestation (MAA) generated claims. The SKR policy definition is tightly integrated with MAA claims. MAA claims by trusted execution environment (TEE) can be found [here.](../attestation/attestation-token-examples.md)
+
+Follow the policy [grammar](../key-vault/keys/policy-grammar.md) for more examples of how you can customize SKR policies.
+
+## Intel SGX Application Enclaves SKR policy examples
+
+**Example 1:** Intel SGX based SKR policy validating the MR Signer (SGX enclave signer) details as part of the MAA claims
+
+```json
+
+{
+ "anyOf": [
+ {
+ "authority": "https://sharedeus2.eus2.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-sgx-mrsigner",
+ "equals": "9fa48b1629bd246a1de3d38fb7df97f6554cd65d6b3b72e85b86848ae6b578ba"
+ }
+ ]
+ }
+ ],
+ "version": "1.0.0"
+}
+
+```
+
+**Example 2:** Intel SGX based SKR policy validating the MR Signer (SGX enclave signer) or MR Enclave details as part of the MAA claims
+
+```json
+
+{
+ "anyOf": [
+ {
+ "authority": "https://sharedeus2.eus2.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-sgx-mrsigner",
+ "equals": "9fa48b1629bd246a1de3d38fb7df97f6554cd65d6b3b72e85b86848ae6b578ba"
+ },
+ {
+ "claim": "x-ms-sgx-mrenclave",
+ "equals": "9fa48b1629bg677jfsaawed7772e85b86848ae6b578ba"
+ }
+ ]
+ }
+ ],
+ "version": "1.0.0"
+}
+
+```
+
+**Example 3:** Intel SGX based SKR policy validating the MR Signer (SGX enclave signer) and MR Enclave details, with a minimum SVN number, as part of the MAA claims
+
+```json
+{
+ "anyOf": [
+ {
+ "authority": "https://sharedeus2.eus2.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-sgx-mrsigner",
+ "equals": "9fa48b1629bd246a1de3d38fb7df97f6554cd65d6b3b72e85b86848ae6b578ba"
+ },
+ {
+ "claim": "x-ms-sgx-mrenclave",
+ "equals": "9fa48b1629bg677jfsaawed7772e85b86848ae6b578ba"
+ },
+ {
+ "claim": "x-ms-sgx-svn",
+ "greater": 1
+ }
+ ]
+ }
+ ],
+ "version": "1.0.0"
+}
+
+```
+
+## Confidential VM AMD SEV-SNP based VM TEE SKR policy examples
+
+**Example 1:** An SKR policy that validates that this is an Azure compliant CVM running on genuine AMD SEV-SNP hardware, with the MAA URL authority spread across multiple regions.
+
+```json
+{
+ "version": "1.0.0",
+ "anyOf": [
+ {
+ "authority": "https://sharedweu.weu.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-attestation-type",
+ "equals": "sevsnpvm"
+ },
+ {
+ "claim": "x-ms-compliance-status",
+ "equals": "azure-compliant-cvm"
+ }
+ ]
+ },
+ {
+ "authority": "https://sharedeus2.weu2.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-attestation-type",
+ "equals": "sevsnpvm"
+ },
+ {
+ "claim": "x-ms-compliance-status",
+ "equals": "azure-compliant-cvm"
+ }
+ ]
+ }
+ ]
+}
+
+```
+
+**Example 2:** An SKR policy that validates that the CVM is an Azure compliant CVM, is running on genuine AMD SEV-SNP hardware, and has a known Virtual Machine ID. (VMIDs are unique across Azure.)
+
+```json
+{
+ "version": "1.0.0",
+ "allOf": [
+ {
+ "authority": "https://sharedweu.weu.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-isolation-tee.x-ms-attestation-type",
+ "equals": "sevsnpvm"
+ },
+ {
+ "claim": "x-ms-isolation-tee.x-ms-compliance-status",
+ "equals": "azure-compliant-cvm"
+ },
+ {
+ "claim": "x-ms-azurevm-vmid",
+ "equals": "B958DC88-E41D-47F1-8D20-E57B6B7E9825"
+ }
+ ]
+ }
+ ]
+}
+
+```
+
+## Confidential containers on Azure Container Instances (ACI) SKR policy examples
+
+**Example 1:** Confidential containers on ACI validating the initiated containers and container configuration metadata as part of the container group launch, with added validation that the platform is AMD SEV-SNP hardware.
+
+> [!NOTE]
+> The container metadata is a Rego-based policy hash, reflected as in this [example](https://github.com/microsoft/confidential-sidecar-containers/tree/main).
+
+```json
+{
+ "version": "1.0.0",
+ "anyOf": [
+ {
+ "authority": "https://fabrikam1.wus.attest.azure.net",
+ "allOf": [
+ {
+ "claim": "x-ms-attestation-type",
+ "equals": "sevsnpvm"
+ },
+ {
+ "claim": "x-ms-compliance-status",
+ "equals": "azure-compliant-uvm"
+ },
+ {
+ "claim": "x-ms-sevsnpvm-hostdata",
+ "equals": "532eaabd9574880dbf76b9b8cc00832c20a6ec113d682299550d7a6e0f345e25"
+ }
+ ]
+ }
+ ]
+}
+
+```
+
+## References
+
+[Microsoft Azure Attestation (MAA)](../attestation/overview.md)
+
+[Secure Key Release Concept and Basic Steps](concept-skr-attestation.md)
cosmos-db Burst Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md
Title: Burst capacity in Azure Cosmos DB (preview)
-description: Learn more about burst capacity in Azure Cosmos DB
+ Title: Burst capacity (preview)
+
+description: Use your database or container's idle throughput capacity to handle spikes of traffic with burst capacity in Azure Cosmos DB.
Previously updated : 10/26/2022 Last updated : 03/16/2023 # Burst capacity in Azure Cosmos DB (preview)
Last updated 10/26/2022
Azure Cosmos DB burst capacity (preview) allows you to take advantage of your database or container's idle throughput capacity to handle spikes of traffic. With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity, which can be consumed at a rate up to 3000 RU/s. With burst capacity, requests that would have otherwise been rate limited can now be served with burst capacity while it's available.
-Burst capacity applies only to Azure Cosmos DB accounts using provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. The feature is configured at the Azure Cosmos DB account level and will automatically apply to all databases and containers in the account that have physical partitions with less than 3000 RU/s of provisioned throughput. Resources that have greater than or equal to 3000 RU/s per physical partition won't benefit from or be able to use burst capacity.
+Burst capacity applies only to Azure Cosmos DB accounts using provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. The feature is configured at the Azure Cosmos DB account level and automatically applies to all databases and containers in the account that have physical partitions with less than 3000 RU/s of provisioned throughput. Resources that have greater than or equal to 3000 RU/s per physical partition can't benefit from or use burst capacity.
## How burst capacity works
After the 10 seconds is over, the burst capacity has been used up. If the worklo
To get started using burst capacity, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Burst Capacity (preview)** feature.
-Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria). Once you've enabled the feature, it will take 15-20 minutes to take effect.
+Before enabling the feature, verify that your Azure Cosmos DB account(s) meet all the [preview eligibility criteria](#limitations-preview-eligibility-criteria). Once you've enabled the feature, it takes 15-20 minutes to take effect.
## Limitations (preview eligibility criteria)
To enroll in the preview, your Azure Cosmos DB account must meet all the followi
- See the FAQ on [burst capacity.](burst-capacity-faq.yml) - Learn more about [provisioned throughput.](set-throughput.md) - Learn more about [request units.](request-units.md)-- Trying to decide between provisioned throughput and serverless? See [choose between provisioned throughput and serverless.](throughput-serverless.md)-- Want to learn the best practices? See [best practices for scaling provisioned throughput.](scaling-provisioned-throughput-best-practices.md)
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md
Title: Configure Azure Cosmos DB account with periodic backup
-description: This article describes how to configure Azure Cosmos DB accounts with periodic backup with backup interval. and retention. Also how to contacts support to restore your data.
+ Title: Configure periodic backup
+
+description: Configure Azure Cosmos DB accounts with periodic backup and retention at a specified interval through the portal or a support ticket.
- Previously updated : 12/09/2021+ Last updated : 03/16/2023 # Configure Azure Cosmos DB account with periodic backup+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters. With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The following steps show how Azure Cosmos DB performs data backup:
-* Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos DB account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.
+- Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time, only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you can change the backup interval and the retention period from the Azure portal. You can change the backup configuration during or after the Azure Cosmos DB account is created. If the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given provisioned throughput container or shared throughput database for 30 days. If throughput is provisioned at the database level, the backup and restore process happens across the entire database scope.
-* Azure Cosmos DB stores these backups in Azure Blob storage whereas the actual data resides locally within Azure Cosmos DB.
+- Azure Cosmos DB stores these backups in Azure Blob storage whereas the actual data resides locally within Azure Cosmos DB.
-* To guarantee low latency, the snapshot of your backup is stored in Azure Blob storage in the same region as the current write region (or **one** of the write regions, in case you have a multi-region write configuration). For resiliency against regional disaster, each snapshot of the backup data in Azure Blob storage is again replicated to another region through geo-redundant storage (GRS). The region to which the backup is replicated is based on your source region and the regional pair associated with the source region. To learn more, see the [list of geo-redundant pairs of Azure regions](../availability-zones/cross-region-replication-azure.md) article. You cannot access this backup directly. Azure Cosmos DB team will restore your backup when you request through a support request.
+- To guarantee low latency, the snapshot of your backup is stored in Azure Blob storage in the same region as the current write region (or **one** of the write regions, in case you have a multi-region write configuration). For resiliency against regional disaster, each snapshot of the backup data in Azure Blob storage is again replicated to another region through geo-redundant storage (GRS). The region to which the backup is replicated is based on your source region and the regional pair associated with the source region. To learn more, see the [list of geo-redundant pairs of Azure regions](../availability-zones/cross-region-replication-azure.md) article. You can't access this backup directly. Azure Cosmos DB team restores your backup when you request through a support request.
- The following image shows how an Azure Cosmos DB container with all the three primary physical partitions in West US is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
+ The following image shows an Azure Cosmos DB container with all three primary physical partitions in West US. The container is backed up in a remote Azure Blob Storage account in West US and then replicated to East US:
- :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Periodic full backups of all Azure Cosmos DB entities in GRS Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
+ :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="Diagram of periodic full backups taken of multiple Azure Cosmos DB entities in geo-redundant Azure Storage." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false":::
-* The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
+- The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database.
-> [!Note]
+> [!NOTE]
> For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time. ## Backup storage redundancy By default, Azure Cosmos DB stores periodic mode backup data in geo-redundant [blob storage](../storage/common/storage-redundancy.md) that is replicated to a [paired region](../availability-zones/cross-region-replication-azure.md). You can update this default value using Azure PowerShell or CLI and define an Azure policy to enforce a specific storage redundancy option. To learn more, see [update backup storage redundancy](update-backup-storage-redundancy.md) article.
-To ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned, you can change the default geo-redundant backup storage and configure either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that it is protected from planned and unplanned events, including transient hardware failure, network or power outages, or massive natural disasters.
+Change the default geo-redundant backup storage to ensure that your backup data stays within the same region where your Azure Cosmos DB account is provisioned. You can configure the geo-redundant backup to use either locally redundant or zone-redundant storage. Storage redundancy mechanisms store multiple copies of your backups so that it's protected from planned and unplanned events. These events can include transient hardware failure, network or power outages, or massive natural disasters.
You can configure storage redundancy for periodic backup mode at the time of account creation or update it for an existing account. You can use the following three data redundancy options in periodic backup mode:
-* **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.
+- **Geo-redundant backup storage:** This option copies your data asynchronously across the paired region.
-* **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
+- **Zone-redundant backup storage:** This option copies your data synchronously across three Azure availability zones in the primary region. For more information, see [Zone-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
-* **Locally-redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally-redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
+- **Locally-redundant backup storage:** This option copies your data synchronously three times within a single physical location in the primary region. For more information, see [locally redundant storage.](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region)
> [!NOTE] > Zone-redundant storage is currently available only in [specific regions](../availability-zones/az-region.md). Depending on the region you select for a new account or the region you have for an existing account; the zone-redundant option will not be available.
You can configure storage redundancy for periodic backup mode at the time of acc
## Modify the backup interval and retention period
-Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and itΓÇÖs offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. After you configure the backup options for an account, itΓÇÖs applied to all the containers within that account. You can modify these settings using the Azure portal as described below, or via [PowerShell](configure-periodic-backup-restore.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](configure-periodic-backup-restore.md#modify-backup-options-using-azure-cli).
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point in time, the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos DB account creation or after the account is created. The backup configuration is set at the Azure Cosmos DB account level and you need to configure it on each account. After you configure the backup options for an account, they're applied to all the containers within that account. You can modify these settings using the Azure portal as described later in this article, or via [PowerShell](configure-periodic-backup-restore.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](configure-periodic-backup-restore.md#modify-backup-options-using-azure-cli).
-If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
+If you've accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
### Modify backup options using Azure portal - Existing account

Use the following steps to change the default backup options for an existing Azure Cosmos DB account:

1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to your Azure Cosmos DB account and open the **Backup & Restore** pane. Update the backup interval and the backup retention period as required.
- * **Backup Interval** - ItΓÇÖs the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a non-zero amount of time and in some case it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval, however, it doesnΓÇÖt guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. Backup Interval cannot be less than 1 hour and greater than 24 hours. When you change this interval, the new interval takes into effect starting from the time when the last backup was taken.
+ - **Backup Interval** - ItΓÇÖs the interval at which Azure Cosmos DB attempts to take a backup of your data. Backup takes a nonzero amount of time and in some case it could potentially fail due to downstream dependencies. Azure Cosmos DB tries its best to take a backup at the configured interval, however, it doesnΓÇÖt guarantee that the backup completes within that time interval. You can configure this value in hours or minutes. Backup Interval can't be less than 1 hour and greater than 24 hours. When you change this interval, the new interval takes into effect starting from the time when the last backup was taken.
- * **Backup Retention** - It represents the period where each backup is retained. You can configure it in hours or days. The minimum retention period canΓÇÖt be less than two times the backup interval (in hours) and it canΓÇÖt be greater than 720 hours.
+ - **Backup Retention** - It represents the period where each backup is retained. You can configure it in hours or days. The minimum retention period canΓÇÖt be less than two times the backup interval (in hours) and it canΓÇÖt be greater than 720 hours.
- * **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There is an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies.
+ - **Copies of data retained** - By default, two backup copies of your data are offered at free of charge. There's an extra charge if you need more than two copies. See the Consumed Storage section in the [Pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for extra copies.
- * **Backup storage redundancy** - Choose the required storage redundancy option, see the [Backup storage redundancy](#backup-storage-redundancy) section for available options. By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is being provisioned supports it. Otherwise, the account fallback to the highest redundancy option available. You can choose other storage such as locally redundant to ensure the backup is not replicated to another region. The changes made to an existing account are applied to only future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect and **you will lose access to restore the older backups immediately.**
+ - **Backup storage redundancy** - Choose the required storage redundancy option, see the [Backup storage redundancy](#backup-storage-redundancy) section for available options. By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the account is being provisioned supports it. Otherwise, the account fallback to the highest redundancy option available. You can choose other storage such as locally redundant to ensure the backup isn't replicated to another region. The changes made to an existing account are applied to only future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, and **you will lose access to restore the older backups immediately.**
- > [!NOTE]
- > You must have the Azure [Azure Cosmos DB Operator role](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy.
+ > [!NOTE]
+ > You must have the Azure [Azure Cosmos DB Operator role](../role-based-access-control/built-in-roles.md#cosmos-db-operator) role assigned at the subscription level to configure backup storage redundancy.
- :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Configure backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account." border="true":::
+ :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Screenshot of configuration options including backup interval, retention, and storage redundancy for an existing Azure Cosmos DB account." border="true":::
### Modify backup options using Azure portal - New account

When provisioning a new account, from the **Backup Policy** tab, select the **Periodic** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose the **locally redundant backup storage** or **Zone redundant backup storage** options to prevent backup data replication outside your region.

### Modify backup options using Azure PowerShell
When deploying the Resource Manager template, change the periodic backup options
## Request data restore from a backup
-If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call the Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available for selected plans only such as **Standard**, **Developer**, and plans higher than those. Azure support is not available with **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
+If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available only for selected plans, such as **Standard**, **Developer**, and plans higher than those tiers. Azure support isn't available with the **Basic** plan. To learn about different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.
To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot. You should have the following details before requesting a restore:
-* Have your subscription ID ready.
-
-* Based on how your data was accidentally deleted or modified, you should prepare to have additional information. It is advised that you have the information available ahead to minimize the back-and-forth that can be detrimental in some time sensitive cases.
-
-* If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file different support tickets for each deleted account because it minimizes the confusion for the state of restore.
-
-* If one or more databases are deleted, you should provide the Azure Cosmos DB account, and the Azure Cosmos DB database names and specify if a new database with the same name exists.
-
-* If one or more containers are deleted, you should provide the Azure Cosmos DB account name, database names, and the container names. And specify if a container with the same name exists.
+- Have your subscription ID ready.
+- Based on how your data was accidentally deleted or modified, you might need to provide additional information. Have this information available ahead of time to minimize the back-and-forth, which can be detrimental in some time-sensitive cases.
+- If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file separate support tickets for each deleted account because it minimizes confusion about the state of the restore.
+- If one or more databases are deleted, you should provide the Azure Cosmos DB account name and the database names, and specify whether a new database with the same name exists.
+- If one or more containers are deleted, you should provide the Azure Cosmos DB account name, the database names, and the container names, and specify whether a container with the same name exists.
+- If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#modify-the-backup-interval-and-retention-period) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team has enough time to restore your account.
-* If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#modify-the-backup-interval-and-retention-period) for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team will have enough time to restore your account.
-
-In addition to Azure Cosmos DB account name, database names, container names, you should specify the point in time to which the data can be restored to. It is important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
+In addition to the Azure Cosmos DB account name, database names, and container names, you should specify the point in time to use for data restoration. It's important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**
The following screenshot illustrates how to create a support request for a container (collection/graph/table) to restore data by using the Azure portal. Provide other details such as the type of data, the purpose of the restore, and the time when the data was deleted to help us prioritize the request.

## Considerations for restoring the data from a backup

You may accidentally delete or modify your data in one of the following scenarios:
-* Delete the entire Azure Cosmos DB account.
+- Delete the entire Azure Cosmos DB account.
-* Delete one or more Azure Cosmos DB databases.
+- Delete one or more Azure Cosmos DB databases.
-* Delete one or more Azure Cosmos DB containers.
+- Delete one or more Azure Cosmos DB containers.
-* Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.
+- Delete or modify the Azure Cosmos DB items (for example, documents) within a container. This specific case is typically referred to as data corruption.
-* A shared offer database or containers within a shared offer database are deleted or corrupted.
+- A shared offer database or containers within a shared offer database are deleted or corrupted.
-Azure Cosmos DB can restore data in all the above scenarios. When restoring, a new Azure Cosmos DB account is created to hold the restored data. The name of the new account, if it's not specified, will have the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a pre-created Azure Cosmos DB account.
+Azure Cosmos DB can restore data in all the above scenarios. A new Azure Cosmos DB account is created to hold the restored data when restoring from a backup. The name of the new account, if it's not specified, has the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when multiple restores are attempted. You can't restore data to a precreated Azure Cosmos DB account.
-When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name is not in use. So, we recommend that you don't re-create the account after deleting it. Because it not only prevents the restored data to use the same name, but also makes discovering the right account to restore from difficult.
+When you accidentally delete an Azure Cosmos DB account, we can restore the data into a new account with the same name, if the account name isn't in use. So, we recommend that you don't re-create the account after deleting it. Re-creating the account not only prevents the restored data from using the same name, but also makes it harder to discover the right account to restore from.
-When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It is also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account.
+When you accidentally delete an Azure Cosmos DB database, we can restore the whole database or a subset of the containers within that database. It's also possible to select specific containers across databases and restore them to a new Azure Cosmos DB account.
-When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there is data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours) the backups would be overwritten. In order to prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. ItΓÇÖs best to increase your retention within 8 hours from the data corruption.
+When you accidentally delete or modify one or more items within a container (the data corruption case), you need to specify the time to restore to. Time is important if there's data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours), the backups are overwritten. To prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption.
-If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team will have enough time to restore your account.
+If you've accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team has enough time to restore your account.
> [!NOTE] > After you restore the data, not all the source capabilities or settings are carried over to the restored account. The following settings are not carried over to the new account:
-> * VNET access control lists
-> * Stored procedures, triggers and user-defined functions
-> * Multi-region settings
-> * Managed identity settings
-
+>
+> - VNET access control lists
+> - Stored procedures, triggers and user-defined functions
+> - Multi-region settings
+> - Managed identity settings
+>
-If you provision throughput at the database level, the backup and restore process in this case happen at the entire database level, and not at the individual containers level. In such cases, you can't select a subset of containers to restore.
+If you assign throughput at the database level, the backup and restore process happens at the entire database level, not at the individual container level. In such cases, you can't select a subset of containers to restore.
## Required permissions to change retention or restore from the portal

Principals who are part of the role [CosmosBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period.

## Understanding costs of extra backups
-Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example if Backup Retention is configured to 240 hrs that is, 10 days and Backup Interval to 24 hrs. This implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the cost would be will be 0.12 * 1000 * 8 for backup storage in given month.
+
+Two backups are provided free and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example, consider a scenario where Backup Retention is configured to **240 hrs** (that is, 10 days) and Backup Interval is configured to **24 hrs**. This configuration implies that there are 10 copies of the backup data, of which two are free. If you have **1 TB** of data in the West US 2 region, the cost would be `0.12 * 1000 * 8` (price per GB * data size in GB * extra copies) for backup storage in a given month.
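To make that arithmetic concrete, the following C# sketch reproduces the example calculation. The retention, interval, data size, and per-GB rate are the illustrative numbers from the example above, not current prices; check the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the rate in your region.

```csharp
using System;

// Illustrative estimate of extra backup storage cost in periodic backup mode.
// All values mirror the example above; confirm current per-GB pricing for your region.
class BackupCostEstimate
{
    static void Main()
    {
        double backupRetentionHours = 240;  // 10 days
        double backupIntervalHours = 24;
        double dataSizeInGb = 1000;         // 1 TB
        double pricePerGbPerMonth = 0.12;   // example rate used above; see the pricing page

        double totalCopies = backupRetentionHours / backupIntervalHours; // 10 copies
        double billableCopies = totalCopies - 2;                         // the first two copies are free

        double monthlyCost = pricePerGbPerMonth * dataSizeInGb * billableCopies;
        Console.WriteLine($"Billable copies: {billableCopies}, estimated monthly cost: {monthlyCost} USD");
        // Prints: Billable copies: 8, estimated monthly cost: 960 USD
    }
}
```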
## Get the restore details from the restored account
Use the following steps to get the restore details from Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to the restored account.
-1. Open the **Tags** blade. This blade should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore.
+1. Open the **Tags** page. This page should have the tags **restoredAtTimestamp** and **restoredSourceDatabaseAccountName**. These tags describe the timestamp and the source account name that were used for the periodic restore.
### Use Azure CLI
-Run the following command to get the restore details. The `restoreSourceAccountName` and the `restoreTimestamp` will be under the `tags` property:
+Run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` fields are within the `tags` field:
```azurecli-interactive
az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup
az cosmosdb show --name MyCosmosDBDatabaseAccount --resource-group MyResourceGro
### Use PowerShell
-Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and the `restoreTimestamp` will be under the `tags` property:
+Import the Az.CosmosDB module and run the following command to get the restore details. The `restoreSourceAccountName` and `restoreTimestamp` are within the `tags` field:
```powershell-interactive
Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount
Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabas
With Azure Cosmos DB API for NoSQL accounts, you can also maintain your own backups by using one of the following approaches:
-* Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage of your choice.
+- Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice.
-* Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.
+- Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.
## Post-restore actions
-The primary goal of the data restore is to recover the data that you have accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it is possible to use the restored account as your new active account, it's not a recommended option if you have production workloads.
+The primary goal of the data restore is to recover the data that you've accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account. Although it's possible to use the restored account as your new active account, it's not a recommended option if you have production workloads.
-After you restore the data, you get a notification about the name of the new account (itΓÇÖs typically in the format `<original-name>-restored1`) and the time when the account was restored to. The restored account will have the same provisioned throughput, indexing policies and it is in same region as the original account. A user who is the subscription admin or a coadmin can see the restored account.
+After you restore the data, you get a notification about the name of the new account (it's typically in the format `<original-name>-restored1`) and the time that the account was restored to. The restored account has the same provisioned throughput and indexing policies, and it's in the same region as the original account. A user who is the subscription admin or a coadmin can see the restored account.
### Migrate data to the original account

The following are different ways to migrate data back to the original account:
-* Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
-* Use the [change feed](change-feed.md) in Azure Cosmos DB.
-* You can write your own custom code.
+- Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
+- Use the [change feed](change-feed.md) in Azure Cosmos DB.
+- You can write your own custom code, as sketched in the example after this list.
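If you take the custom-code route, the following is a minimal sketch using the .NET SDK v3. The endpoints, keys, and database and container names are placeholders; the restored account typically has its own endpoint and keys under the `<original-name>-restored1` name. The sketch reads every item from the restored container and upserts it into the original container; for large containers, prefer the Azure Data Factory or change feed options above.

```csharp
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json.Linq;

// Placeholder endpoints, keys, and names; the restored account is created by the
// Azure Cosmos DB team and is typically named <original-name>-restored1.
CosmosClient restoredClient = new CosmosClient("https://<account-name>-restored1.documents.azure.com:443/", "<restored-account-key>");
CosmosClient originalClient = new CosmosClient("https://<account-name>.documents.azure.com:443/", "<original-account-key>");

Container source = restoredClient.GetContainer("myDatabase", "myContainer");
Container target = originalClient.GetContainer("myDatabase", "myContainer");

FeedIterator<JObject> feed = source.GetItemQueryIterator<JObject>("SELECT * FROM c");
while (feed.HasMoreResults)
{
    foreach (JObject item in await feed.ReadNextAsync())
    {
        // Upsert so the copy is safe to re-run; the SDK extracts the partition key from the item.
        await target.UpsertItemAsync(item);
    }
}
```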
-It is advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they will incur cost for request units, storage, and egress.
+We recommend that you delete the restored container or database immediately after migrating the data. If you don't delete the restored databases or containers, they incur costs for request units, storage, and egress.
## Next steps
-* To make a restore request, contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-* Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
-* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
-* [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md).
-* [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
+- To make a restore request, contact Azure Support by [filing a ticket in the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- [Create account with continuous backup](provision-account-continuous-backup.md).
+- [Restore continuous backup account](restore-account-continuous-backup.md).
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is available in the following sizes. The integrated cache
| **D8s** | **8** | **32 GB** |
| **D16s** | **16** | **64 GB** |
-> [!NOTE]
+> [!TIP]
> Once created, you can add or remove dedicated gateway nodes, but you can't modify the size of the nodes. To change the size of your dedicated gateway nodes, deprovision the cluster and provision it again in a different size. This results in a short period of downtime unless you change the connection string in your application to use the standard gateway during reprovisioning.

There are many different ways to provision a dedicated gateway:
-- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)
+- [Provision a dedicated gateway using the Azure portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)
- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2022-05-15/service/create#sqldedicatedgatewayservicecreate)
- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create)
- [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep) - Note: You cannot deprovision a dedicated gateway using ARM templates
+> [!NOTE]
+> You can provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](../availability-zones/az-region.md) by request. Reach out to cosmoscachefeedback@microsoft.com for more information.
+ ## Dedicated gateway in multi-region accounts

When you provision a dedicated gateway cluster in multi-region accounts, identical dedicated gateway clusters are provisioned in each region. For example, consider an Azure Cosmos DB account in East US and North Europe. If you provision a dedicated gateway cluster with two D8 nodes in this account, you'd have four D8 nodes in total - two in East US and two in North Europe. You don't need to explicitly configure dedicated gateways in each region and your connection string remains the same. There are also no changes to best practices for performing failovers.
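As a sketch of what connecting through the dedicated gateway looks like with the .NET SDK v3: the account name and key below are placeholders, and the host name with the `sqlx` segment reflects the typical dedicated gateway endpoint format, so copy the exact dedicated gateway connection string from the **Keys** page of your account. Gateway connection mode is required for requests to be routed through the dedicated gateway.

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder account name and key; use the dedicated gateway connection string from
// the Keys page in the Azure portal. The dedicated gateway endpoint is separate from
// the standard gateway endpoint.
CosmosClient client = new CosmosClient(
    "https://<account-name>.sqlx.cosmos.azure.com/",
    "<account-key>",
    new CosmosClientOptions
    {
        // Requests must use gateway connection mode to be routed through the dedicated gateway.
        ConnectionMode = ConnectionMode.Gateway
    });

Container container = client.GetContainer("myDatabase", "myContainer");
```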
Like nodes within a cluster, dedicated gateway nodes across regions are independ
The dedicated gateway has the following limitations:

- Dedicated gateways are only supported on API for NoSQL accounts
-- You can't provision a dedicated gateway in Azure Cosmos DB accounts with [availability zones](../availability-zones/az-region.md).
- You can't use [role-based access control (RBAC)](how-to-setup-rbac.md) to authenticate data plane requests routed through the dedicated gateway
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
Previously updated : 08/29/2022 Last updated : 03/15/2023
# Azure Cosmos DB integrated cache - Overview [!INCLUDE[NoSQL](includes/appliesto-nosql.md)]
-The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. The integrated cache is easy to set up and you donΓÇÖt need to spend time writing custom code for cache invalidation or managing backend infrastructure. Your integrated cache uses a [dedicated gateway](dedicated-gateway.md) within your Azure Cosmos DB account. The integrated cache is the first of many Azure Cosmos DB features that will utilize a dedicated gateway for improved performance. You can choose from three possible dedicated gateway sizes based on the number of cores and memory needed for your workload.
+The Azure Cosmos DB integrated cache is an in-memory cache that helps you ensure manageable costs and low latency as your request volume grows. The integrated cache is easy to set up and you don't need to spend time writing custom code for cache invalidation or managing backend infrastructure. The integrated cache uses the [dedicated gateway](dedicated-gateway.md) within your Azure Cosmos DB account. When provisioning your dedicated gateway, you can choose the number of nodes and the node size based on the number of cores and memory needed for your workload. Each dedicated gateway node has a separate integrated cache from the others.
An integrated cache is automatically configured within the dedicated gateway. The integrated cache has two parts:

* An item cache for point reads
* A query cache for queries
-The integrated cache is a read-through, write-through cache with a Least Recently Used (LRU) eviction policy. The item cache and query cache share the same capacity within the integrated cache and the LRU eviction policy applies to both. In other words, data is evicted from the cache strictly based on when it was least recently used, regardless of whether it's a point read or query.
+The integrated cache is a read-through, write-through cache with a Least Recently Used (LRU) eviction policy. The item cache and query cache share the same capacity within the integrated cache and the LRU eviction policy applies to both. Data is evicted from the cache strictly based on when it was least recently used, regardless of whether it's a point read or query. The cached data within each node depends on the data that was recently [written or read](integrated-cache.md#item-cache) through that specific node. If an item or query is cached on one node, it isn't necessarily cached on the others.
> [!NOTE] > Do you have any feedback about the integrated cache? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team:
cosmoscachefeedback@microsoft.com
The main goal of the integrated cache is to reduce costs for read-heavy workloads. Low latency, while helpful, isn't the main benefit of the integrated cache because Azure Cosmos DB is already fast without caching.
-Point reads and queries that hit the integrated cache will have an RU charge of 0. Cache hits will have a much lower per-operation cost than reads from the backend database.
+Point reads and queries that hit the integrated cache have an RU charge of 0. Cache hits have a much lower per-operation cost than reads from the backend database.
-Workloads that fit the following characteristics should evaluate if the integrated cache will help lower costs:
+Workloads that fit the following characteristics should evaluate if the integrated cache helps lower costs:
- Read-heavy workloads
- Many repeated point reads on large items
- Many repeated high RU queries
- Hot partition key for reads
-The biggest factor in expected savings is the degree to which reads repeat themselves. If your workload consistently executes the same point reads or queries within a short period of time, it's a great candidate for the integrated cache. When using the integrated cache for repeated reads, you only use RUs for the first read. Subsequent reads routed through the same dedicated gateway node (within the `MaxIntegratedCacheStaleness` window and if the data hasn't been evicted) won't use throughput.
+The biggest factor in expected savings is the degree to which reads repeat themselves. If your workload consistently executes the same point reads or queries within a short period of time, it's a great candidate for the integrated cache. When using the integrated cache for repeated reads, you only use RUs for the first read. Subsequent reads routed through the same dedicated gateway node (within the `MaxIntegratedCacheStaleness` window and if the data hasn't been evicted) don't use throughput.
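One way to observe this is to compare request charges on repeated reads issued through the dedicated gateway. The following .NET SDK v3 sketch uses placeholder account, item, and partition key values; the second read reports a charge of 0 RUs when it's served from the integrated cache on the same node.

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Placeholder dedicated gateway endpoint, key, and names. The account (or the requests)
// must use session or eventual consistency for reads to be served from the cache.
CosmosClient client = new CosmosClient(
    "https://<account-name>.sqlx.cosmos.azure.com/", "<account-key>",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });
Container container = client.GetContainer("myDatabase", "myContainer");

ItemResponse<dynamic> first = await container.ReadItemAsync<dynamic>("item-id", new PartitionKey("partition-key-value"));
Console.WriteLine($"First read: {first.RequestCharge} RUs");   // served from the backend

ItemResponse<dynamic> second = await container.ReadItemAsync<dynamic>("item-id", new PartitionKey("partition-key-value"));
Console.WriteLine($"Second read: {second.RequestCharge} RUs"); // 0 RUs on a cache hit
```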
Some workloads shouldn't consider the integrated cache, including:
Some workloads shouldn't consider the integrated cache, including:
## Item cache
-You can use the item cache for point reads (in other words, key/value look ups based on the Item ID and partition key).
+The item cache is used for point reads (key/value lookups based on the item ID and partition key).
### Populating the item cache -- New writes, updates, and deletes are automatically populated in the item cache-- If your app tries to read a specific item that wasnΓÇÖt previously in the cache (cache miss), the item would now be stored in the item cache
+- New writes, updates, and deletes are automatically populated in the item cache of the node that the request is routed through
+- Items from point read requests are added to the item cache of the node the request is routed through when the item isn't already in that node's cache (a cache miss)
+- Requests that are part of a [transactional batch](./nosql/transactional-batch.md) or written in [bulk mode](./nosql/how-to-migrate-from-bulk-executor-library.md#enable-bulk-support) don't populate the item cache
### Item cache invalidation and eviction
+Because each node has an independent cache, it's possible for items to be invalidated or evicted in the cache of one node and not the others. Items in the cache of a given node are invalidated and evicted based on the following criteria:
- Item update or delete
- Least recently used (LRU)
- Cache retention time (in other words, the `MaxIntegratedCacheStaleness`)

## Query cache
-The query cache can be used to cache queries. The query cache transforms a query into a key/value lookup where the key is the query text and the value is query results. The integrated cache doesn't have a query engine, it only stores the key/value lookup for each query.
+The query cache is used to cache queries. The query cache transforms a query into a key/value lookup where the key is the query text and the value is the query results. The integrated cache doesn't have a query engine, it only stores the key/value lookup for each query. Query results are stored as a set, and the cache doesn't keep track of individual items. A given item can be stored in the query cache multiple times if it appears in the result set of multiple queries. Updates to the underlying items won't be reflected in query results unless the [max integrated cache staleness](#maxintegratedcachestaleness) for the query is reached and the query is served from the backend database.
### Populating the query cache -- If the cache doesn't have a result for that query (cache miss), the query is sent to the backend. After the query is run, the cache will store the results for that query
+- If the cache doesn't have a result for that query (cache miss) on the node it was routed through, the query is sent to the backend. After the query is run, the cache stores the results for that query
+- Queries with the same shape but different parameters or request options that affect the results (for example, max item count) are stored as their own key/value pairs
### Query cache eviction
+Query cache eviction is based on the node the request was routed through. It's possible queries could be evicted or refreshed on one node and not the others.
+ - Least recently used (LRU) - Cache retention time (in other words, the `MaxIntegratedCacheStaleness`)
The query cache can be used to cache queries. The query cache transforms a query
You don't need special code when working with the query cache, even if your queries have multiple pages of results. The best practices and code for query pagination are the same whether your query hits the integrated cache or is executed on the backend query engine.
-The query cache will automatically cache query continuation tokens where applicable. If you have a query with multiple pages of results, any pages that are stored in the integrated cache will have an RU charge of 0. If your subsequent pages of query results require backend execution, they'll have a continuation token from the previous page so they can avoid duplicating previous work.
+The query cache automatically caches query continuation tokens where applicable. If you have a query with multiple pages of results, any pages that are stored in the integrated cache have an RU charge of 0. If subsequent pages of query results require backend execution, they'll have a continuation token from the previous page so they can avoid duplicating previous work.
-> [!NOTE]
+> [!IMPORTANT]
> Integrated cache instances within different dedicated gateway nodes have independent caches from one another. If data is cached within one node, it is not necessarily cached in the others. Multiple pages of the same query are not guaranteed to be routed to the same dedicated gateway node.

## Integrated cache consistency
-The integrated cache supports read requests with session and eventual [consistency](consistency-levels.md) only. If a read has consistent prefix, bounded staleness, or strong consistency, it will always bypass the integrated cache and be served from the backend.
+The integrated cache supports read requests with session and eventual [consistency](consistency-levels.md) only. If a read has consistent prefix, bounded staleness, or strong consistency, it bypasses the integrated cache and is served from the backend.
The easiest way to configure either session or eventual consistency for all reads is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). However, if you would only like some of your reads to have a specific consistency, you can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level). > [!NOTE]
-> Write requests with other consistencies will still populate the cache, but in order to read from the cache the request must have either session or eventual consistency.
+> Write requests with other consistencies still populate the cache, but in order to read from the cache the request must have either session or eventual consistency.
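For example, a single read can opt into eventual consistency, and therefore cache eligibility, through the request options. This is a minimal .NET SDK v3 sketch; the account, item, and partition key values are placeholders.

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder dedicated gateway endpoint, key, and names.
CosmosClient client = new CosmosClient(
    "https://<account-name>.sqlx.cosmos.azure.com/", "<account-key>",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });
Container container = client.GetContainer("myDatabase", "myContainer");

// Relax consistency for just this read so it can be served from the integrated cache.
ItemRequestOptions options = new ItemRequestOptions
{
    ConsistencyLevel = ConsistencyLevel.Eventual
};

ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
    "item-id", new PartitionKey("partition-key-value"), options);
```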
### Session consistency
-[Session consistency](consistency-levels.md#session-consistency) is the most widely used consistency level for both single region as well as globally distributed Azure Cosmos DB accounts. When using session consistency, single client sessions can read their own writes. When using the integrated cache, clients outside of the session performing writes will see eventual consistency.
+[Session consistency](consistency-levels.md#session-consistency) is the most widely used consistency level for both single region and globally distributed Azure Cosmos DB accounts. With session consistency, single client sessions can read their own writes. Clients outside of the session that performed the writes see eventual consistency when they use the integrated cache.
## MaxIntegratedCacheStaleness

The `MaxIntegratedCacheStaleness` is the maximum acceptable staleness for cached point reads and queries, regardless of the selected consistency. The `MaxIntegratedCacheStaleness` is configurable at the request-level. For example, if you set a `MaxIntegratedCacheStaleness` of 2 hours, your request will only return cached data if the data is less than 2 hours old. To increase the likelihood of repeated reads utilizing the integrated cache, you should set the `MaxIntegratedCacheStaleness` as high as your business requirements allow.
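As a request-level sketch with the .NET SDK v3, the dedicated gateway request options expose the staleness window on both point read and query options. The account values, query text, and two-hour window below are illustrative.

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Placeholder dedicated gateway endpoint, key, and names.
CosmosClient client = new CosmosClient(
    "https://<account-name>.sqlx.cosmos.azure.com/", "<account-key>",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });
Container container = client.GetContainer("myDatabase", "myContainer");

// Accept cached results up to 2 hours old for this query only.
QueryRequestOptions queryOptions = new QueryRequestOptions
{
    ConsistencyLevel = ConsistencyLevel.Eventual,
    DedicatedGatewayRequestOptions = new DedicatedGatewayRequestOptions
    {
        MaxIntegratedCacheStaleness = TimeSpan.FromHours(2)
    }
};

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
    "SELECT * FROM c WHERE c.category = 'sales'", requestOptions: queryOptions);

while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();
    // page.RequestCharge is 0 when this page is served from the query cache.
}
```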
-It's important to understand that the `MaxIntegratedCacheStaleness`, when configured on a request that ends up populating the cache, doesn't impact how long that request will be cached. `MaxIntegratedCacheStaleness` enforces consistency when you try to use cached data. There's no global TTL or cache retention setting, so data will only be evicted from the cache if either the integrated cache is full or a new read is run with a lower `MaxIntegratedCacheStaleness` than the age of the current cached entry.
+It's important to understand that the `MaxIntegratedCacheStaleness`, when configured on a request that ends up populating the cache, doesn't affect how long that request is cached. `MaxIntegratedCacheStaleness` enforces consistency when you try to use cached data. There's no global TTL or cache retention setting, so data is only evicted from the cache if either the integrated cache is full or a new read is run with a lower `MaxIntegratedCacheStaleness` than the age of the current cached entry.
-This is an improvement from how most caches work and allows the following additional customization:
+This is an improvement over how most caches work and allows for the following other customizations:
- You can set different staleness requirements for each point read or query
- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values
-- If you wanted to modify read consistency for cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency
+- If you want to modify read consistency for cached data, changing `MaxIntegratedCacheStaleness` has an immediate effect on read consistency
> [!NOTE] > The minimum `MaxIntegratedCacheStaleness` value is 0 and the maximum value is 10 years. When not explicitly configured, the `MaxIntegratedCacheStaleness` defaults to 5 minutes.
It's helpful to monitor some key metrics for the integrated cache. These metrics
- `IntegratedCacheItemHitRate` – The proportion of point reads that used the integrated cache (out of all point reads routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.
- `IntegratedCacheQueryHitRate` – The proportion of queries that used the integrated cache (out of all queries routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.
-All existing metrics are available, by default, from the **Metrics** blade (not Metrics classic):
+All existing metrics are available, by default, from **Metrics** in the Azure portal (not Metrics classic):
:::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="Screenshot of the Azure portal that shows the location of integrated cache metrics." border="false":::
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md
Title: Register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB SDKs
+ Title: Use stored procedures, triggers, and UDFs in SDKs
+ description: Learn how to register and call stored procedures, triggers, and user-defined functions using the Azure Cosmos DB SDKs. - Previously updated : 03/09/2023+ Last updated : 03/16/2023 ms.devlang: csharp, java, javascript, python
# How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB

[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]

The API for NoSQL in Azure Cosmos DB supports registering and invoking stored procedures, triggers, and user-defined functions (UDFs) written in JavaScript. After you define one or more stored procedures, triggers, or user-defined functions, you can load and view them in the [Azure portal](https://portal.azure.com/) by using Data Explorer.
You can use the API for NoSQL SDK across multiple platforms including [.NET v2 (
| SDK | Getting started |
| : | : |
| .NET v3 | [Quickstart: Azure Cosmos DB for NoSQL client library for .NET](quickstart-dotnet.md) |
-| Java | [Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data](quickstart-java.md)
+| Java | [Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data](quickstart-java.md) |
| JavaScript | [Quickstart: Azure Cosmos DB for NoSQL client library for Node.js](quickstart-nodejs.md) |
| Python | [Quickstart: Azure Cosmos DB for NoSQL client library for Python](quickstart-python.md) |
+> [!IMPORTANT]
+> The following code samples assume that you already have `client` and `container` variables. If you need to create those variables, refer to the appropriate quickstart for your platform.
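For reference, a minimal .NET SDK v3 setup that produces those variables might look like the following sketch. The endpoint, key, and database and container names are placeholders; the other SDKs have equivalent setup shown in their quickstarts.

```csharp
using Microsoft.Azure.Cosmos;

// Placeholder endpoint, key, and resource names; see the .NET quickstart for details.
CosmosClient client = new CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    "<account-key>");

Container container = client.GetContainer("myDatabase", "myContainer");
```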
+ ## How to run stored procedures

Stored procedures are written using JavaScript. They can create, update, read, query, and delete items within an Azure Cosmos DB container. For more information, see [How to write stored procedures](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures).
result = container.scripts.execute_stored_procedure(sproc=created_sproc,params=[
-## How to run pre-triggers
+## <a id="how-to-run-pre-triggers"></a>How to run pretriggers
-The following examples show how to register and call a pre-trigger by using the Azure Cosmos DB SDKs. For the source of this pre-trigger example, saved as *trgPreValidateToDoItemTimestamp.js*, see [Pre-triggers](how-to-write-stored-procedures-triggers-udfs.md#pre-triggers).
+The following examples show how to register and call a pretrigger by using the Azure Cosmos DB SDKs. For the source of this pretrigger example, saved as *trgPreValidateToDoItemTimestamp.js*, see [Pretriggers](how-to-write-stored-procedures-triggers-udfs.md#pre-triggers).
-When you run an operation by specifying `PreTriggerInclude` and then passing the name of the trigger in a `List` object, pre-triggers are passed in the `RequestOptions` object.
+When you run an operation by specifying `PreTriggerInclude` and then passing the name of the trigger in a `List` object, pretriggers are passed in the `RequestOptions` object.
> [!NOTE] > Even though the name of the trigger is passed as a `List`, you can still run only one trigger per operation. ### [.NET SDK v2](#tab/dotnet-sdk-v2)
-The following code shows how to register a pre-trigger using the .NET SDK v2:
+The following code shows how to register a pretrigger using the .NET SDK v2:
```csharp string triggerId = "trgPreValidateToDoItemTimestamp";
Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myConta
await client.CreateTriggerAsync(containerUri, trigger); ```
-The following code shows how to call a pre-trigger using the .NET SDK v2:
+The following code shows how to call a pretrigger using the .NET SDK v2:
```csharp dynamic newItem = new
await client.CreateDocumentAsync(containerUri, newItem, requestOptions);
### [.NET SDK v3](#tab/dotnet-sdk-v3)
-The following code shows how to register a pre-trigger using the .NET SDK v3:
+The following code shows how to register a pretrigger using the .NET SDK v3:
```csharp await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(ne
}); ```
-The following code shows how to call a pre-trigger using the .NET SDK v3:
+The following code shows how to call a pretrigger using the .NET SDK v3:
```csharp dynamic newItem = new
await client.GetContainer("database", "container").CreateItemAsync(newItem, null
### [Java SDK](#tab/java-sdk)
-The following code shows how to register a pre-trigger using the Java SDK:
+The following code shows how to register a pretrigger using the Java SDK:
```java CosmosTriggerProperties definition = new CosmosTriggerProperties(
CosmosTriggerResponse response = container
.createTrigger(definition); ```
-The following code shows how to call a pre-trigger using the Java SDK:
+The following code shows how to call a pretrigger using the Java SDK:
```java ToDoItem item = new ToDoItem();
CosmosItemResponse<ToDoItem> response = container.createItem(item, options);
### [JavaScript SDK](#tab/javascript-sdk)
-The following code shows how to register a pre-trigger using the JavaScript SDK:
+The following code shows how to register a pretrigger using the JavaScript SDK:
```javascript const container = client.database("myDatabase").container("myContainer");
await container.scripts.triggers.create({
}); ```
-The following code shows how to call a pre-trigger using the JavaScript SDK:
+The following code shows how to call a pretrigger using the JavaScript SDK:
```javascript const container = client.database("myDatabase").container("myContainer");
await container.items.create({
### [Python SDK](#tab/python-sdk)
-The following code shows how to register a pre-trigger using the Python SDK:
+The following code shows how to register a pretrigger using the Python SDK:
```python import azure.cosmos.cosmos_client as cosmos_client
container = database.get_container_client(container_name)
trigger = container.scripts.create_trigger(trigger_definition) ```
-The following code shows how to call a pre-trigger using the Python SDK:
+The following code shows how to call a pretrigger using the Python SDK:
```python item = {'category': 'Personal', 'name': 'Groceries',
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/odbc-driver.md
Title: Use Azure Cosmos DB ODBC driver to connect to BI and analytics tools
-description: Use the Azure Cosmos DB ODBC driver to create normalized data tables and views for SQL queries, analytics, BI, and visualizations.
+ Title: ODBC driver for BI and analytics
+
+description: Use the ODBC driver for Azure Cosmos DB to create normalized data tables and views for SQL queries, analytics, BI, and visualizations.
- Previously updated : 06/21/2022+ Last updated : 03/16/2023
This article walks you through installing and using the Azure Cosmos DB ODBC dri
Azure Cosmos DB is a schemaless database, which enables rapid application development and lets you iterate on data models without being confined to a strict schema. A single Azure Cosmos DB database can contain JSON documents of various structures. To analyze or report on this data, you might need to flatten the data to fit into a schema.
-The ODBC driver normalizes Azure Cosmos DB data into tables and views that fit your data analytics and reporting needs. The normalized schemas let you use ODBC-compliant tools to access the data. The schemas have no impact on the underlying data, and don't require developers to adhere to them. The ODBC driver helps make Azure Cosmos DB databases useful for data analysts as well as development teams.
+The ODBC driver normalizes Azure Cosmos DB data into tables and views that fit your data analytics and reporting needs. The normalized schemas let you use ODBC-compliant tools to access the data. The schemas have no effect on the underlying data, and don't require developers to adhere to them. The ODBC driver helps make Azure Cosmos DB databases useful for data analysts and development teams.
You can do SQL operations against the normalized tables and views, including group by queries, inserts, updates, and deletes. The driver is ODBC 3.8 compliant and supports ANSI SQL-92 syntax.
-You can also connect the normalized Azure Cosmos DB data to other software solutions, such as SQL Server Integration Services (SSIS), Alteryx, QlikSense, Tableau and other analytics software, BI, and data integration tools. You can use those solutions to analyze, move, transform, and create visualizations with your Azure Cosmos DB data.
+> [!IMPORTANT]
+> Consider using [Azure Synapse Link for Azure Cosmos DB](../synapse-link.md) to create tables and views for your data. Synapse Link has distinct performance benefits for large datasets over the ODBC driver. You can also connect the normalized Azure Cosmos DB data to other software solutions, such as SQL Server Integration Services (SSIS), QlikSense, Tableau, and other analytics software, BI, and data integration tools. You can use those solutions to analyze, move, transform, and create visualizations with your Azure Cosmos DB data.
> [!IMPORTANT]
+>
> - Connecting to Azure Cosmos DB with the ODBC driver is currently supported for Azure Cosmos DB for NoSQL only.
> - The current ODBC driver doesn't support aggregate pushdowns, and has known issues with some analytics tools. Until a new version is released, you can use one of the following alternatives:
>   - [Azure Synapse Link](../synapse-link.md) is the preferred analytics solution for Azure Cosmos DB. With Azure Synapse Link and Azure Synapse SQL serverless pools, you can use any BI tool to extract near real-time insights from Azure Cosmos DB SQL or API for MongoDB data.
>   - For Power BI, you can use the [Azure Cosmos DB connector for Power BI](powerbi-visualize.md).
>   - For Qlik Sense, see [Connect Qlik Sense to Azure Cosmos DB](../visualize-qlik-sense.md).
+>
-<a id="install"></a>
## Install the ODBC driver and connect to your database

1. Download the drivers for your environment:
- | Installer | Supported operating systems|
- |||
- |[Microsoft Azure Cosmos DB ODBC 64-bit.msi](https://aka.ms/cosmos-odbc-64x64) for 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2.|
- |[Microsoft Azure Cosmos DB ODBC 32x64-bit.msi](https://aka.ms/cosmos-odbc-32x64) for 32-bit on 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, Windows Vista, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2003.|
- |[Microsoft Azure Cosmos DB ODBC 32-bit.msi](https://aka.ms/cosmos-odbc-32x32) for 32-bit Windows|32-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, and Windows Vista.|
+ | Installer | Supported operating systems |
+ | | |
+ | [Microsoft Azure Cosmos DB ODBC 64-bit.msi](https://aka.ms/cosmos-odbc-64x64) for 64-bit Windows | 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7. 64-bit versions of Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2. |
+ | [Microsoft Azure Cosmos DB ODBC 32x64-bit.msi](https://aka.ms/cosmos-odbc-32x64) for 32-bit on 64-bit Windows | 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, Windows Vista. 64-bit versions of Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2003. |
+ | [Microsoft Azure Cosmos DB ODBC 32-bit.msi](https://aka.ms/cosmos-odbc-32x32) for 32-bit Windows | 32-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, and Windows Vista. |
1. Run the *.msi* file locally, which starts the **Microsoft Azure Cosmos DB ODBC Driver Installation Wizard**.
You can also connect the normalized Azure Cosmos DB data to other software solut
1. Make sure that the **Microsoft Azure DocumentDB ODBC Driver** is listed on the **Drivers** tab.
- :::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Screenshot of the ODBC Data Source Administrator window.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Screenshot of the ODBC Data Source Administrator window.":::
- <a id="connect"></a>
1. Select the **User DSN** tab, and then select **Add** to create a new data source name (DSN). You can also create a System DSN. 1. In the **Create New Data Source** window, select **Microsoft Azure DocumentDB ODBC Driver**, and then select **Finish**.
-1. In the **DocumentDB ODBC Driver DSN Setup** window, fill in the following information:
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Screenshot of the D S N Setup window.":::
-
- - **Data Source Name**: A friendly name for the ODBC DSN. This name is unique to this Azure Cosmos DB account.
- - **Description**: A brief description of the data source.
- - **Host**: The URI for your Azure Cosmos DB account. You can get this information from the **Keys** page in your Azure Cosmos DB account in the Azure portal.
- - **Access Key**: The primary or secondary, read-write or read-only key from the Azure Cosmos DB **Keys** page in the Azure portal. It's best to use the read-only keys, if you use the DSN for read-only data processing and reporting.
-
- To avoid an authentication error, use the copy buttons to copy the URI and key from the Azure portal.
-
- :::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Screenshot of the Azure Cosmos DB DB Keys page.":::
-
- - **Encrypt Access Key for**: Select the best choice, based on who uses the machine.
-
+1. In the **DocumentDB ODBC Driver DSN Setup** window, fill in the following information:
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Screenshot of the domain name server (DNS) setup window.":::
+
+ - **Data Source Name**: A friendly name for the ODBC DSN. This name is unique to this Azure Cosmos DB account.
+ - **Description**: A brief description of the data source.
+ - **Host**: The URI for your Azure Cosmos DB account. You can get this information from the **Keys** page in your Azure Cosmos DB account in the Azure portal.
+ - **Access Key**: The primary or secondary, read-write or read-only key from the Azure Cosmos DB **Keys** page in the Azure portal. It's best to use the read-only keys, if you use the DSN for read-only data processing and reporting.
+
+ To avoid an authentication error, use the copy buttons to copy the URI and key from the Azure portal.
+
+ :::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Screenshot of the Azure Cosmos DB Keys page.":::
+
+ - **Encrypt Access Key for**: Select the best choice, based on who uses the machine.
+ 1. Select **Test** to make sure you can connect to your Azure Cosmos DB account. 1. Select **Advanced Options** and set the following values:
- - **REST API Version**: Select the [REST API version](/rest/api/cosmos-db) for your operations. The default is **2015-12-16**.
+ - **REST API Version**: Select the [REST API version](/rest/api/cosmos-db) for your operations. The default is **2015-12-16**.
+
+ If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version `2018-12-31`, type `2018-12-31`, and then [follow the steps at the end of this procedure](#edit-the-windows-registry-to-support-rest-api-version-2018-12-31).
- If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, type *2018-12-31*, and then [follow the steps at the end of this procedure](#edit-the-windows-registry-to-support-rest-api-version-2018-12-31).
+ - **Query Consistency**: Select the [consistency level](../consistency-levels.md) for your operations. The default is **Session**.
- - **Query Consistency**: Select the [consistency level](../consistency-levels.md) for your operations. The default is **Session**.
- - **Number of Retries**: Enter the number of times to retry an operation if the initial request doesn't complete due to service rate limiting.
- - **Schema File**: If you don't select a schema file, the driver scans the first page of data for each container to determine its schema, called container mapping, for each session. This process can cause long startup time for applications that use the DSN. It's best to associate a schema file to the DSN.
+ - **Number of Retries**: Enter the number of times to retry an operation if the initial request doesn't complete due to service rate limiting.
- - If you already have a schema file, select **Browse**, navigate to the file, select **Save**, and then select **OK**.
- - If you don't have a schema file yet, select **OK**, and then follow the steps in the next section to [create a schema definition](#create-a-schema-definition). After you create the schema, come back to this **Advanced Options** window to add the schema file.
+   - **Schema File**: If you don't select a schema file, the driver scans the first page of data for each container to determine its schema, called container mapping, for each session. This process can cause long startup times for applications that use the DSN. It's best to associate a schema file with the DSN.
+
+ - If you already have a schema file, select **Browse**, navigate to the file, select **Save**, and then select **OK**.
+
+ - If you don't have a schema file yet, select **OK**, and then follow the steps in the next section to [create a schema definition](#create-a-schema-definition). After you create the schema, come back to this **Advanced Options** window to add the schema file.
After you select **OK** to complete and close the **DocumentDB ODBC Driver DSN Setup** window, the new User DSN appears on the **User DSN** tab of the **ODBC Data Source Administrator** window.
- :::image type="content" source="./media/odbc-driver/odbc-driver-user-dsn.png" alt-text="Screenshot that shows the new User D S N on the User D S N tab.":::
### Edit the Windows registry to support REST API version 2018-12-31 If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, follow these steps to update the Windows registry to support this version. 1. In the Windows **Start** menu, type *regedit* to find and open the **Registry Editor**.+ 1. In the Registry Editor, navigate to the path **Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI**.+ 1. Create a new subkey with the same name as your DSN, such as *Contoso Account ODBC DSN*.+ 1. Navigate to the new **Contoso Account ODBC DSN** subkey, and right-click to add a new **String** value:
- - Value Name: **IgnoreSessionToken**
- - Value data: **1**
- :::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Screenshot that shows the Windows Registry Editor settings.":::
-<a id="#container-mapping"></a><a id="table-mapping"></a>
+ - Value Name: **IgnoreSessionToken**
+
+ - Value data: **1**
+
+ :::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Screenshot that shows the Windows Registry Editor settings.":::
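If you'd rather script this change than click through the Registry Editor, here's a minimal sketch that sets the same value with Python's standard `winreg` module. It assumes an elevated (administrator) process, and the DSN name is a placeholder for your own DSN.

```python
import winreg

# Placeholder: use the exact name of the DSN you created.
dsn_name = "Contoso Account ODBC DSN"
key_path = r"SOFTWARE\ODBC\ODBC.INI" + "\\" + dsn_name

# Create (or open) the DSN subkey under HKEY_LOCAL_MACHINE and add the string value
# described in the steps above.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "IgnoreSessionToken", 0, winreg.REG_SZ, "1")
```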
+ ## Create a schema definition There are two types of sampling methods you can use to create a schema: *container mapping* or *table-delimiter mapping*. A sampling session can use both sampling methods, but each container can use only one of the sampling methods. Which method to use depends on your data's characteristics.
There are two types of sampling methods you can use to create a schema: *contain
- **Table-delimiter mapping** provides more robust sampling for heterogeneous data. This method scopes the sampling to a set of attributes and corresponding values.
- For example, if a document contains a **Type** property, you can scope the sampling to the values of this property. The end result of the sampling is a set of tables for each of the **Type** values you specified. **Type = Car** produces a **Car** table, while **Type = Plane** produces a **Plane** table.
+ For example, if a document contains a **Type** property, you can scope the sampling to the values of this property. The end result of the sampling is a set of tables for each of the **Type** values you specified. **Type = Car** produces a **Car** table, while **Type = Plane** produces a **Plane** table.
To define a schema, follow these steps. For the table-delimiter mapping method, you take extra steps to define attributes and values for the schema.
To define a schema, follow these steps. For the table-delimiter mapping method,
1. In the **DocumentDB ODBC Driver DSN Setup** window, select **Schema Editor**.
- :::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Screenshot that shows the Schema Editor button in the D S N Setup window.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Screenshot that shows the Schema Editor button in the D S N Setup window.":::
-1. In the **Schema Editor** window, select **Create New**.
+1. In the **Schema Editor** window, select **Create New**.
1. The **Generate Schema** window displays all the collections in the Azure Cosmos DB account. Select the checkboxes next to the containers you want to sample. 1. To use the *container mapping* method, select **Sample**.
- Or, to use *table-delimiter* mapping, take the following steps to define attributes and values for scoping the sample.
+ Or, to use *table-delimiter* mapping, take the following steps to define attributes and values for scoping the sample.
- 1. Select **Edit** in the **Mapping Definition** column for your DSN.
+ 1. Select **Edit** in the **Mapping Definition** column for your DSN.
- 1. In the **Mapping Definition** window, under **Mapping Method**, select **Table Delimiters**.
+ 1. In the **Mapping Definition** window, under **Mapping Method**, select **Table Delimiters**.
- 1. In the **Attributes** box, type the name of a delimiter property in your document that you want to scope the sampling to, for instance, *City*. Press Enter.
+ 1. In the **Attributes** box, type the name of a delimiter property in your document that you want to scope the sampling to, for instance, *City*. Press Enter.
- 1. If you want to scope the sampling to certain values for the attribute you entered, select the attribute, and then enter a value in the **Value** box, such as *Seattle*, and press Enter. You can add multiple values for attributes. Just make sure that the correct attribute is selected when you enter values.
+ 1. If you want to scope the sampling to certain values for the attribute you entered, select the attribute, and then enter a value in the **Value** box, such as *Seattle*, and press Enter. You can add multiple values for attributes. Just make sure that the correct attribute is selected when you enter values.
- 1. When you're done entering attributes and values, select **OK**.
+ 1. When you're done entering attributes and values, select **OK**.
- 1. In the **Generate Schema** window, select **Sample**.
+ 1. In the **Generate Schema** window, select **Sample**.
1. In the **Design View** tab, refine your schema. The **Design View** represents the database, schema, and table. The table view displays the set of properties associated with the column names, such as **SQL Name** and **Source Name**.
- For each column, you can modify the **SQL name**, the **SQL type**, **SQL length**, **Scale**, **Precision**, and **Nullable** as applicable.
+ For each column, you can modify the **SQL name**, the **SQL type**, **SQL length**, **Scale**, **Precision**, and **Nullable** as applicable.
- You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked **Hide Column = true** aren't returned for selection and projection, although they're still part of the schema. For example, you can hide all of the Azure Cosmos DB system required properties that start with **_**. The **id** column is the only field you can't hide, because it's the primary key in the normalized schema.
+ You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked **Hide Column = true** aren't returned for selection and projection, although they're still part of the schema. For example, you can hide all of the Azure Cosmos DB system required properties that start with **_**. The **id** column is the only field you can't hide, because it's the primary key in the normalized schema.
1. Once you finish defining the schema, select **File** > **Save**, navigate to the directory to save in, and select **Save**.
Follow these steps to create a view for your data:
1. On the **Sample View** tab of the **Schema Editor** window, select the containers you want to sample, and then select **Add** in the **View Definition** column.
- :::image type="content" source="./media/odbc-driver/odbc-driver-create-view.png" alt-text="Screenshot that shows creating a view.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver-create-view.png" alt-text="Screenshot of creating a view within the driver.":::
1. In the **View Definitions** window, select **New**. Enter a name for the view, for example *EmployeesfromSeattleView*, and then select **OK**.
You can create as many views as you like. Once you're done defining the views, s
> [!IMPORTANT] > The query text in the view definition should not contain line breaks. Otherwise, you'll get a generic error when previewing the view. - ## Query with SQL Server Management Studio Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cos
GO ```
-
+ To see the new linked server name, refresh the linked servers list. :::image type="content" source="./media/odbc-driver/odbc-driver-linked-server-ssms.png" alt-text="Screenshot showing a linked server in S S M S.":::
You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tools
1. In Power BI Desktop, select **Get Data**.
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Screenshot showing Get Data in Power B I Desktop.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Screenshot showing Get Data in Power B I Desktop.":::
1. In the **Get Data** window, select **Other** > **ODBC**, and then select **Connect**.
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Screenshot that shows choosing ODBC data source in Power B I Get Data.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Screenshot that shows choosing ODBC data source in Power B I Get Data.":::
1. In the **From ODBC** window, select the DSN you created, and then select **OK**.
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Screenshot that shows choosing the D S N in Power B I Get Data.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Screenshot that shows choosing the D S N in Power B I Get Data.":::
1. In the **Access a data source using an ODBC driver** window, select **Default or Custom** and then select **Connect**. 1. In the **Navigator** window, in the left pane, expand the database and schema, and select the table. The results pane includes the data that uses the schema you created.
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Screenshot of selecting the table in Power B I Get Data.":::
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Screenshot of selecting the table in Power B I Get Data.":::
1. To visualize the data in Power BI desktop, select the checkbox next to the table name, and then select **Load**.
You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tools
- **Problem**: You get the following error when trying to connect:
- ```output
- [HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
- ```
+ ```output
+ [HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
+ ```
- **Solution:** Make sure the **Host** and **Access Key** values you copied from the Azure portal are correct, and retry.
+ **Solution:** Make sure the **Host** and **Access Key** values you copied from the Azure portal are correct, and retry.
- **Problem**: You get the following error in SSMS when trying to create a linked Azure Cosmos DB server:
- ```output
- Msg 7312, Level 16, State 1, Line 44
-
- Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
- ```
+ ```output
+ Msg 7312, Level 16, State 1, Line 44
+
+ Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
+ ```
- **Solution**: A linked Azure Cosmos DB server doesn't support four-part naming.
+ **Solution**: A linked Azure Cosmos DB server doesn't support four-part naming.
## Next steps
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-query.md
Title: 'Tutorial: How to query with SQL in Azure Cosmos DB?'
-description: 'Tutorial: Learn how to query with SQL queries in Azure Cosmos DB using the query playground'
+ Title: |
+ Tutorial: Query data
+
+description: In this tutorial, learn how to query data in Azure Cosmos DB for NoSQL with the built-in query syntax using the Data Explorer.
Previously updated : 08/26/2021 Last updated : 03/16/2023
-# Tutorial: Query Azure Cosmos DB by using the API for NoSQL
+# Tutorial: Query data in Azure Cosmos DB for NoSQL
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-The Azure Cosmos DB [API for NoSQL](../introduction.md) supports querying documents using SQL. This article provides a sample document and two sample SQL queries and results.
+[Azure Cosmos DB for NoSQL](../introduction.md) supports querying documents using the built-in query syntax. This article provides a sample document and two sample queries and results.
-This article covers the following tasks:
+This article covers the following tasks:
> [!div class="checklist"]
-> * Querying data with SQL
+>
+> - Query NoSQL data with the built-in query syntax
+>
+
+## Prerequisites
+
+This tutorial assumes you have an Azure Cosmos DB account, database, and container.
+
+Don't have any of those resources? Complete this quickstart: [Create an Azure Cosmos DB account, database, container, and items from the Azure portal](quickstart-portal.md).
+
+You can run the queries using the [Azure Cosmos DB Explorer](../data-explorer.md) in the Azure portal. You can also run queries by using the [REST API](/rest/api/cosmos-db/) or [various SDKs](sdk-dotnet-v3.md).
+
+For more information about queries, see [getting started with queries](query/getting-started.md).
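If you prefer running the queries from code rather than the Data Explorer, here's a minimal sketch using the Azure Cosmos DB Python SDK (`azure-cosmos`). The endpoint, key, database, and container names are placeholders for your own resources.

```python
from azure.cosmos import CosmosClient

# Placeholders: copy the URI and key from your account's Keys page in the Azure portal.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Run the same kind of query shown later in this tutorial from code.
query = "SELECT * FROM Families f WHERE f.id = 'WakefieldFamily'"
for item in container.query_items(query=query, enable_cross_partition_query=True):
    print(item["id"])
```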
## Sample document
-The SQL queries in this article use the following sample document.
+The queries in this article use the following sample document.
```json { "id": "WakefieldFamily", "parents": [
- { "familyName": "Wakefield", "givenName": "Robin" },
- { "familyName": "Miller", "givenName": "Ben" }
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
], "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female", "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8 }
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female", "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
], "address": { "state": "NY", "county": "Manhattan", "city": "NY" }, "creationDate": 1431620462,
The SQL queries in this article use the following sample document.
} ```
-## Where can I run SQL queries?
-
-You can run queries using the Data Explorer in the Azure portal and via the [REST API and SDKs](sdk-dotnet-v2.md).
-
-For more information about SQL queries, see:
-* [SQL query and SQL syntax](query/getting-started.md)
-
-## Prerequisites
+## Select all fields and apply a filter
-This tutorial assumes you have an Azure Cosmos DB account and collection. Don't have any of those resources? Complete the [5-minute quickstart](quickstart-portal.md).
+Given the sample family document, the following query returns the documents where the ID field matches `WakefieldFamily`. Since it's a `SELECT *` statement, the output of the query is the complete JSON document:
-## Example query 1
-
-Given the sample family document above, following SQL query returns the documents where the ID field matches `WakefieldFamily`. Since it's a `SELECT *` statement, the output of the query is the complete JSON document:
-
-**Query**
+Query:
```sql
- SELECT *
- FROM Families f
- WHERE f.id = "WakefieldFamily"
+SELECT *
+FROM Families f
+WHERE f.id = "WakefieldFamily"
```
-**Results**
+Results:
```json { "id": "WakefieldFamily", "parents": [
- { "familyName": "Wakefield", "givenName": "Robin" },
- { "familyName": "Miller", "givenName": "Ben" }
+ { "familyName": "Wakefield", "givenName": "Robin" },
+ { "familyName": "Miller", "givenName": "Ben" }
], "children": [
- {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female", "grade": 1,
- "pets": [
- { "givenName": "Goofy" },
- { "givenName": "Shadow" }
- ]
- },
- {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8 }
+ {
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female", "grade": 1,
+ "pets": [
+ { "givenName": "Goofy" },
+ { "givenName": "Shadow" }
+ ]
+ },
+ {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }
], "address": { "state": "NY", "county": "Manhattan", "city": "NY" }, "creationDate": 1431620462,
Given the sample family document above, following SQL query returns the document
} ```
-## Example query 2
+## Select a cross-product of a child collection field
The next query returns all the given names of children in the family whose ID matches `WakefieldFamily`.
-**Query**
+Query:
```sql
- SELECT c.givenName
- FROM Families f
- JOIN c IN f.children
- WHERE f.id = 'WakefieldFamily'
+SELECT c.givenName
+FROM Families f
+JOIN c IN f.children
+WHERE f.id = 'WakefieldFamily'
```
-**Results**
+Results:
-```
+```json
[
- {
- "givenName": "Jesse"
- },
- {
- "givenName": "Lisa"
- }
+ {
+ "givenName": "Jesse"
+ },
+ {
+ "givenName": "Lisa"
+ }
] ``` - ## Next steps In this tutorial, you've done the following tasks: > [!div class="checklist"]
-> * Learned how to query using SQL
+>
+> - Learned how to query using the built-in query syntax
+>
You can now proceed to the next tutorial to learn how to distribute your data globally. > [!div class="nextstepaction"] > [Distribute your data globally](tutorial-global-distribution.md)-
-Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Title: Getting started with Azure Cosmos DB Partial Document Update
-description: This article provides example for how to use Partial Document Update with .NET, Java, Node SDKs
+ Title: Get started with Azure Cosmos DB Partial Document Update
+description: Learn how to use Partial Document Update with .NET, Java, and Node SDKs for Azure Cosmos DB with these examples.
Previously updated : 12/09/2021 Last updated : 03/06/2023
-# Azure Cosmos DB Partial Document Update: Getting Started
+# Get started with Azure Cosmos DB Partial Document Update
[!INCLUDE[NoSQL](includes/appliesto-nosql.md)]
-This article provides examples illustrating for how to use Partial Document Update with .NET, Java, and Node SDKs. This article also details common errors that you may encounter. Code samples for the following scenarios have been provided:
+This article provides examples that illustrate how to use Partial Document Update with .NET, Java, and Node SDKs. It also describes common errors that you might encounter.
-- Executing a single patch operation-- Combining multiple patch operations-- Conditional patch syntax based on filter predicate-- Executing patch operation as part of a Transaction
+This article links to code samples for the following scenarios:
+
+- Run a single patch operation
+- Combine multiple patch operations
+- Use conditional patch syntax based on filter predicate
+- Run a patch operation as part of a transaction
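The tabs that follow show these scenarios in the .NET, Java, and Node.js SDKs. For comparison only, here's a minimal sketch of a single patch operation using the Python SDK (`azure-cosmos`), assuming a version that includes `patch_item`; the account details, item ID, and partition key are placeholders.

```python
from azure.cosmos import CosmosClient

# Placeholders for the account, database, container, item ID, and partition key value.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# A single patch operation: set (or add) the /color property on one item.
operations = [{"op": "add", "path": "/color", "value": "silver"}]
updated_item = container.patch_item(
    item="<item-id>",
    partition_key="<partition-key-value>",
    patch_operations=operations,
)
print(updated_item["color"])
```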
## [.NET](#tab/dotnet)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3 SDK](nosql/sdk-dotnet-v3.md) is available from version *3.23.0* onwards. You can download it from the [NuGet Gallery](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.23.0)
+Support for Partial Document Update (Patch API) in the [Azure Cosmos DB .NET v3 SDK](nosql/sdk-dotnet-v3.md) is available starting with version *3.23.0*. You can download it from the [NuGet Gallery](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.23.0).
> [!NOTE]
-> A complete partial document update sample can be found in the [.NET v3 samples repository](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs) on GitHub.
+> Find a complete Partial Document Update sample in the [.NET v3 samples repository](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs) on GitHub.
-- Executing a single patch operation
+- Run a single patch operation:
```csharp ItemResponse<Product> response = await container.PatchItemAsync<Product>(
Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3
Product updated = response.Resource; ``` -- Combining multiple patch operations
+- Combine multiple patch operations:
```csharp List<PatchOperation> operations = new ()
Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3
); ``` -- Conditional patch syntax based on filter predicate
+- Use conditional patch syntax based on filter predicate:
```csharp PatchItemRequestOptions options = new()
Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3
); ``` -- Executing patch operation as a part of a Transaction
+- Run a patch operation as part of a transaction:
```csharp TransactionalBatchPatchItemRequestOptions options = new()
Support for Partial document update (Patch API) in the [Azure Cosmos DB .NET v3
## [Java](#tab/java)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4 SDK](nosql/sdk-java-v4.md) is available from version *4.21.0* onwards. You can either add it to the list of dependencies in your `pom.xml` or download it directly from [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos).
+Support for Partial Document Update (Patch API) in the [Azure Cosmos DB Java v4 SDK](nosql/sdk-java-v4.md) is available starting with version *4.21.0*. You can either add it to the list of dependencies in your `pom.xml` or download it directly from [Maven](https://mvnrepository.com/artifact/com.azure/azure-cosmos).
```xml <dependency>
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
``` > [!NOTE]
-> The full sample can be found in the [Java SDK v4 samples repository](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/patch/sync) on GitHub
+> Find the full sample in the [Java SDK v4 samples repository](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/patch/sync) on GitHub.
-- Executing a single patch operation
+- Run a single patch operation:
```java CosmosItemResponse<Product> response = container.patchItem(
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
Product updated = response.getItem(); ``` -- Combining multiple patch operations
+- Combine multiple patch operations:
```java CosmosPatchOperations operations = CosmosPatchOperations
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
); ``` -- Conditional patch syntax based on filter predicate
+- Use conditional patch syntax based on filter predicate:
```java CosmosPatchItemRequestOptions options = new CosmosPatchItemRequestOptions();
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
); ``` -- Executing patch operation as a part of a Transaction
+- Run a patch operation as part of a transaction:
```java CosmosBatchPatchItemRequestOptions options = new CosmosBatchPatchItemRequestOptions();
Support for Partial document update (Patch API) in the [Azure Cosmos DB Java v4
## [Node.js](#tab/nodejs)
-Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](nosql/sdk-nodejs.md) is available from version *3.15.0* onwards. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos)
+Support for Partial Document Update (Patch API) in the [Azure Cosmos DB JavaScript SDK](nosql/sdk-nodejs.md) is available starting with version *3.15.0*. You can download it from the [npm Registry](https://www.npmjs.com/package/@azure/cosmos).
> [!NOTE]
-> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK
-resolves the partition key values from the items through the container's partition
-key definition.
+> Find a complete Partial Document Update sample in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK resolves the partition key values from the items through the container's partition key definition.
-- Executing a single patch operation
+- Run a single patch operation:
```javascript const operations =
key definition.
.patch(operations); ``` -- Combining multiple patch operations
+- Combine multiple patch operations:
```javascript const operations =
key definition.
.patch(operations); ``` -- Conditional patch syntax based on filter predicate
+- Use conditional patch syntax based on filter predicate:
```javascript const filter = 'FROM products p WHERE p.used = false'
key definition.
-## Support for Server-Side programming
+## Support for server-side programming
-Partial Document Update operations can also be [executed on the server-side](stored-procedures-triggers-udfs.md) using Stored procedures, triggers, and user-defined functions.
+Partial Document Update operations can also be [executed on the server-side](stored-procedures-triggers-udfs.md) using stored procedures, triggers, and user-defined functions.
```javascript this.patchDocument = function (documentLink, patchSpec, options, callback) {
this.patchDocument = function (documentLink, patchSpec, options, callback) {
); }; ```+ > [!NOTE]
-> Definition of validateOptionsAndCallback can be found in the [.js DocDbWrapperScript](https://github.com/Azure/azure-cosmosdb-js-server/blob/1dbe69893d09a5da29328c14ec087ef168038009/utils/DocDbWrapperScript.js#L289) on GitHub.
+> Find the definition of `validateOptionsAndCallback` in the [.js DocDbWrapperScript](https://github.com/Azure/azure-cosmosdb-js-server/blob/1dbe69893d09a5da29328c14ec087ef168038009/utils/DocDbWrapperScript.js#L289) on GitHub.
-- Sample parameter for patch operation
+Sample parameter for patch operation:
- ```javascript
- function () {
- var doc = {
- "id": "exampleDoc",
- "field1": {
- "field2": 10,
- "field3": 20
- }
- };
- var isAccepted = __.createDocument(__.getSelfLink(), doc, (err, doc) => {
- if (err) throw err;
- var patchSpec = [
- {"op": "add", "path": "/field1/field2", "value": 20},
- {"op": "remove", "path": "/field1/field3"}
- ];
- isAccepted = __.patchDocument(doc._self, patchSpec, (err, doc) => {
- if (err) throw err;
- else {
- getContext().getResponse().setBody(docPatched);
- }
- }
- }
- if(!isAccepted) throw new Error("patch was't accepted")
- }
- }
- if(!isAccepted) throw new Error("create wasn't accepted")
- }
- ```
+```javascript
+function () {
+  var doc = {
+    "id": "exampleDoc",
+    "field1": {
+      "field2": 10,
+      "field3": 20
+    }
+  };
+  // Create the document first, then patch it from inside the create callback.
+  var isAccepted = __.createDocument(__.getSelfLink(), doc, (err, createdDoc) => {
+    if (err) throw err;
+    var patchSpec = [
+      {"op": "add", "path": "/field1/field2", "value": 20},
+      {"op": "remove", "path": "/field1/field3"}
+    ];
+    var isPatchAccepted = __.patchDocument(createdDoc._self, patchSpec, (err, patchedDoc) => {
+      if (err) throw err;
+      // Return the patched document in the response body.
+      getContext().getResponse().setBody(patchedDoc);
+    });
+    if (!isPatchAccepted) throw new Error("patch wasn't accepted");
+  });
+  if (!isAccepted) throw new Error("create wasn't accepted");
+}
+```
## Troubleshooting
-Here's a list of common errors that you might encounter while using this feature:
+Here are some common errors that you might encounter while using this feature:
| **Error Message** | **Description** | | | -- |
-| Invalid patch request: check syntax of patch specification| The Patch operation syntax is invalid. For more information, see [the partial document update specification](partial-document-update.md#rest-api-reference-for-partial-document-update)
-| Invalid patch request: Can't patch system property `SYSTEM_PROPERTY`. | System-generated properties likeΓÇ»`_id`,ΓÇ»`_ts`,ΓÇ»`_etag`,ΓÇ»`_rid` aren't modifiable using a Patch operation. For more information, see: [Partial Document Update FAQs](partial-document-update-faq.yml#is-partial-document-update-supported-for-system-generated-properties-)
-| The number of patch operations can't exceed 10 | There's a limit of 10 patch operations that can be added in a single patch specification. For more information, see: [Partial Document Update FAQs](partial-document-update-faq.yml#is-there-a-limit-to-the-number-of-partial-document-update-operations-)
-| For Operation(`PATCH_OPERATION_INDEX`): Index(`ARRAY_INDEX`) to operate on is out of array bounds | The index of array element to be patched is out of bounds
-| For Operation(`PATCH_OPERATION_INDEX`)): Node(`PATH`) to be replaced has been removed earlier in the transaction.| The path you're trying to patch doesn't exist.
-| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be removed is absent. Note: it may also have been removed earlier in the transaction.ΓÇ» | The path you're trying to patch doesn't exist.
-| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be replaced is absent. | The path you're trying to patch doesn't exist.
-| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) isn't a number.| Increment operation can only work on integer and float. For more information, see: [Supported Operations](partial-document-update.md#supported-operations)
-| For Operation(`PATCH_OPERATION_INDEX`): Add Operation can only create a child object of an existing node(array or object) and can't create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present
-| For Operation(`PATCH_OPERATION_INDEX`): Given Operation can only create a child object of an existing node(array or object) and can't create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present
+| Invalid patch request: check syntax of patch specification. | The patch operation syntax is invalid. For more information, see [the Partial Document Update specification](partial-document-update.md#rest-api-reference-for-partial-document-update). |
+| Invalid patch request: Can't patch system property `SYSTEM_PROPERTY`. | System-generated properties like `_id`, `_ts`, `_etag`, `_rid` aren't modifiable using a patch operation. For more information, see [Partial Document Update FAQs](partial-document-update-faq.yml#is-partial-document-update-supported-for-system-generated-properties-). |
+| The number of patch operations can't exceed 10. | There's a limit of 10 patch operations that can be added in a single patch specification. For more information, see [Partial Document Update FAQs](partial-document-update-faq.yml#is-there-a-limit-to-the-number-of-partial-document-update-operations-). |
+| For Operation(`PATCH_OPERATION_INDEX`): Index(`ARRAY_INDEX`) to operate on is out of array bounds. | The index of array element to be patched is out of bounds. |
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be replaced has been removed earlier in the transaction. | The path you're trying to patch doesn't exist. |
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be removed is absent. Note: it might also have been removed earlier in the transaction. | The path you're trying to patch doesn't exist. |
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) to be replaced is absent. | The path you're trying to patch doesn't exist. |
+| For Operation(`PATCH_OPERATION_INDEX`): Node(`PATH`) isn't a number. | Increment operation can only work on integer and float. For more information, see: [Supported Operations](partial-document-update.md#supported-operations). |
+| For Operation(`PATCH_OPERATION_INDEX`): Add Operation can only create a child object of an existing node (array or object) and can't create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present. |
+| For Operation(`PATCH_OPERATION_INDEX`): Given Operation can only create a child object of an existing node(array or object) and can't create path recursively, no path found beyond: `PATH`. | Child paths can be added to an object or array node type. Also, to create `n`th child, `n-1`th child should be present. |
## Next steps -- Review the conceptual overview of [Partial Document Update](partial-document-update.md)
+- [Partial Document Update in Azure Cosmos DB](partial-document-update.md)
+- [Frequently asked questions about Partial Document Update in Azure Cosmos DB](partial-document-update-faq.yml)
cost-management-billing Add Change Subscription Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md
To manage access to Azure resources, you must have the appropriate administrator role. Azure has an authorization system called [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) with several built-in roles you can choose from. You can assign these roles at different scopes, such as management group, subscription, or resource group. By default, the person who creates a new Azure subscription can assign other users administrative access to a subscription.
-This article describes how add or change the administrator role for a user using Azure RBAC at the subscription scope.
+This article describes how to add or change the administrator role for a user using Azure RBAC at the subscription scope.
This article applies to a Microsoft Online Service Program (pay-as-you-go) account or a Visual Studio account. If you have a Microsoft Customer Agreement (Azure plan) account, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md). If you have an Azure Enterprise Agreement, see [Manage Azure Enterprise Agreement roles](understand-ea-roles.md).
If you still need help, [contact support](https://portal.azure.com/?#blade/Micro
* [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) * [Understand the different roles in Azure](../../role-based-access-control/rbac-and-directory-admin-roles.md) * [Associate or add an Azure subscription to your Azure Active Directory tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md)
-* [Administrator role permissions in Azure Active Directory](../../active-directory/roles/permissions-reference.md)
+* [Administrator role permissions in Azure Active Directory](../../active-directory/roles/permissions-reference.md)
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
With Managed Airflow, Azure Data Factory now offers multi-orchestration capabili
- **Azure integration** – Azure Data Factory Managed Airflow supports open-source integrations with Azure Data Factory pipelines, Azure Batch, Azure Cosmos DB, Azure Key Vault, ACI, ADLS Gen2, Azure Kusto, as well as hundreds of built-in and community-created operators and sensors. ## Architecture
- :::image type="content" source="media/concept-managed-airflow/architecture.png" alt-text="Screenshot shows architecture in Managed Airflow.":::
+ :::image type="content" source="media/concept-managed-airflow/architecture.png" lightbox="media/concept-managed-airflow/architecture.png" alt-text="Screenshot shows architecture in Managed Airflow.":::
## Region availability (public preview)
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
Data flows distribute the data processing over different nodes in a Spark cluste
The default cluster size is four driver nodes and four worker nodes (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
-| Worker cores | Driver cores | Total cores | Notes |
+| Worker Nodes | Driver Nodes | Total Nodes | Notes |
| | | -- | -- | | 4 | 4 | 8 | Small | | 8 | 8 | 16 | Medium |
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
Previously updated : 08/23/2022 Last updated : 03/16/2023
If you're using Git integration with your data factory and have a CI/CD pipeline
- **Key Vault**. When you use linked services whose connection information is stored in Azure Key Vault, it is recommended to keep separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you to keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter. - **Resource naming**. Due to ARM template constraints, issues in deployment may arise if your resources contain spaces in the name. The Azure Data Factory team recommends using '_' or '-' characters instead of spaces for resources. For example, 'Pipeline_1' would be a preferable name over 'Pipeline 1'.
+
+- **Altering the repository**. ADF manages Git repository content automatically. Manually altering files, or adding unrelated files or folders, anywhere in the ADF Git repository data folder can cause resource loading errors. For example, the presence of *.bak* files can cause ADF CI/CD errors, so remove them so that ADF can load the repository content (see the sketch after this list).
- **Exposure control and feature flags**. When working in a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
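For the repository-hygiene point above, here's a minimal sketch that lists stray *.bak* files so you can remove them before ADF loads the branch; the repository path is a placeholder.

```python
from pathlib import Path

# Placeholder: the local clone of the Git repository that backs your data factory.
repo_root = Path(r"C:\source\adf-repo")

# List stray *.bak files anywhere under the repository so they can be removed
# before ADF loads the collaboration branch.
for bak_file in repo_root.rglob("*.bak"):
    print(bak_file)
```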
data-factory Solution Template Replicate Multiple Objects Sap Cdc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-replicate-multiple-objects-sap-cdc.md
A sample control file is as below:
```json [ {
- "checkPointKey":"cba2acf0-d5e2-4d84-a552-e0a059b6d320",
+ "checkPointKey":"CheckPointFor_ZPERFCDPOS$F",
"sapContext": "ABAP_CDS", "sapObjectName": "ZPERFCDPOS$F", "sapRunMode": "fullAndIncrementalLoad",
A sample control file is as below:
"stagingStorageFolder":"stagingcontainer/stagingfolder" }, {
- "checkPointKey":"fgaeca7f-d3d4-406f-bb48-a17faa83f76c",
+ "checkPointKey":"CheckPointFor_Z0131",
"sapContext": "SAPI", "sapObjectName": "Z0131", "sapRunMode": "incrementalLoad",
data-factory Tutorial Run Existing Pipeline With Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-run-existing-pipeline-with-airflow.md
Data Factory pipelines provide 100+ data source connectors that provide scalable
1. Create a new Python file **adf.py** with the below contents: ```python
- from airflow import DAG
- from airflow.operators.python_operator import PythonOperator
- from azure.common.credentials import ServicePrincipalCredentials
- from azure.mgmt.datafactory import DataFactoryManagementClient
from datetime import datetime, timedelta
-
- # Default arguments for the DAG
- default_args = {
- 'owner': 'me',
- 'start_date': datetime(2022, 1, 1),
- 'depends_on_past': False,
- 'retries': 1,
- 'retry_delay': timedelta(minutes=5),
- }
-
- # Create the DAG
- dag = DAG(
- 'run_azure_data_factory_pipeline',
- default_args=default_args,
- schedule_interval=timedelta(hours=1),
- )
-
- # Define a function to run the pipeline
-
- def run_pipeline(**kwargs):
- # Create the client
- credentials = ServicePrincipalCredentials(
- client_id='your_client_id',
- secret='your_client_secret',
- tenant='your_tenant_id',
+
+ from airflow.models import DAG, BaseOperator
+
+ try:
+ from airflow.operators.empty import EmptyOperator
+ except ModuleNotFoundError:
+ from airflow.operators.dummy import DummyOperator as EmptyOperator # type: ignore
+ from airflow.providers.microsoft.azure.operators.data_factory import AzureDataFactoryRunPipelineOperator
+ from airflow.providers.microsoft.azure.sensors.data_factory import AzureDataFactoryPipelineRunStatusSensor
+ from airflow.utils.edgemodifier import Label
+
+ with DAG(
+ dag_id="example_adf_run_pipeline",
+ start_date=datetime(2022, 5, 14),
+ schedule_interval="@daily",
+ catchup=False,
+ default_args={
+ "retries": 1,
+ "retry_delay": timedelta(minutes=3),
+ "azure_data_factory_conn_id": "<connection_id>", #This is a connection created on Airflow UI
+ "factory_name": "<FactoryName>", # This can also be specified in the ADF connection.
+ "resource_group_name": "<ResourceGroupName>", # This can also be specified in the ADF connection.
+ },
+ default_view="graph",
+ ) as dag:
+ begin = EmptyOperator(task_id="begin")
+ end = EmptyOperator(task_id="end")
+
+ # [START howto_operator_adf_run_pipeline]
+ run_pipeline1: BaseOperator = AzureDataFactoryRunPipelineOperator(
+ task_id="run_pipeline1",
+ pipeline_name="<PipelineName>",
+ parameters={"myParam": "value"},
+ )
+ # [END howto_operator_adf_run_pipeline]
+
+ # [START howto_operator_adf_run_pipeline_async]
+ run_pipeline2: BaseOperator = AzureDataFactoryRunPipelineOperator(
+ task_id="run_pipeline2",
+ pipeline_name="<PipelineName>",
+ wait_for_termination=False,
)
- client = DataFactoryManagementClient(credentials, 'your_subscription_id')
-
- # Run the pipeline
- pipeline_name = 'your_pipeline_name'
- run_response = client.pipelines.create_run(
- 'your_resource_group_name',
- 'your_data_factory_name',
- pipeline_name,
- )
- run_id = run_response.run_id
-
- # Print the run ID
- print(f'Pipeline run ID: {run_id}')
-
- # Create a PythonOperator to run the pipeline
- run_pipeline_operator = PythonOperator(
- task_id='run_pipeline',
- python_callable=run_pipeline,
- provide_context=True,
- dag=dag,
- )
-
- # Set the dependencies
- run_pipeline_operator
+
+ pipeline_run_sensor: BaseOperator = AzureDataFactoryPipelineRunStatusSensor(
+ task_id="pipeline_run_sensor",
+ run_id=run_pipeline2.output["run_id"],
+ )
+ # [END howto_operator_adf_run_pipeline_async]
+
+ begin >> Label("No async wait") >> run_pipeline1
+ begin >> Label("Do async wait with sensor") >> run_pipeline2
+ [run_pipeline1, pipeline_run_sensor] >> end
+
+ # Task dependency created via `XComArgs`:
+ # run_pipeline2 >> pipeline_run_sensor
```
- You will have to fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**.
+    You'll have to create the connection using the Airflow UI (**Admin** > **Connections** > **'+'** > select **Azure Data Factory** as the connection type), and then fill in your **client_id**, **client_secret**, **tenant_id**, **subscription_id**, **resource_group_name**, **data_factory_name**, and **pipeline_name**.
1. Upload the **adf.py** file to your blob storage within a folder called **DAGS**. 1. [Import the **DAGS** folder into your Managed Airflow environment](./how-does-managed-airflow-work.md#import-dags). If you do not have one, [create a new one](./how-does-managed-airflow-work.md#create-a-managed-airflow-environment)
data-lake-analytics Data Lake Analytics Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-add-users.md
Grant "R-X" or "RWX", as needed, on folders containing input data and output dat
The sample command to give user access to submit jobs, view new job metadata, and view old metadata is:
-`Add-AdlaJobUser.ps1 -Account myadlsaccount -EntityToAdd 546e153e-0ecf-417b-ab7f-aa01ce4a7bff -EntityType User -FullReplication`
+`.\Add-AdlaJobUser.ps1 -Account myadlsaccount -EntityIdToAdd 546e153e-0ecf-417b-ab7f-aa01ce4a7bff -EntityType User -FullReplication`
## Next steps
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
Previously updated : 02/01/2023 Last updated : 03/16/2023 adobe-target: true # What is Microsoft Dev Box Preview?
-Microsoft Dev Box Preview gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations called dev boxes. You can set up dev boxes with tools, source code, and prebuilt binaries that are specific to your project, so you can immediately start work. If you're a developer, you can use dev boxes in your day-to-day workflows.
+Microsoft Dev Box Preview gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations called dev boxes. You can set up dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. If you're a developer, you can use dev boxes in your day-to-day workflows.
-The Dev Box service was designed with three organizational roles in mind: dev infrastructure (infra) admins, project admins, and dev box users.
+The Dev Box service was designed with three organizational roles in mind: dev infrastructure (infra) admins, developer team leads, and developers.
-Dev infra admins provide developer infrastructure and tools to the dev teams. Dev infra admins set and manage security settings, network configurations, and organizational policies to help ensure that dev boxes can access resources securely.
+Dev infra admins and IT admins work together to provide developer infrastructure and tools to the developer teams. Dev infra admins set and manage security settings, network configurations, and organizational policies to ensure that dev boxes can access resources securely.
-Project admins are experienced developers who have in-depth knowledge of their projects and can assist with creating and managing the developer experience. Project admins create and manage pools of dev boxes.
+Developer team leads are experienced developers who have in-depth knowledge of their projects. They can be assigned the DevCenter Project Admin role and assist with creating and managing the developer experience. Project admins create and manage pools of dev boxes.
-Dev box users are members of a development team. They can self-serve one or more dev boxes on demand from the dev box pools that have been enabled for a project. Dev box users can work on multiple projects or tasks by creating multiple dev boxes.
+Members of a development team are assigned the DevCenter Dev Box User role. They can then self-serve one or more dev boxes on demand from the dev box pools that have been enabled for a project. Dev box users can work on multiple projects or tasks by creating multiple dev boxes.
Microsoft Dev Box bridges the gap between development teams and IT, by bringing control of project resources closer to the development team. ## Scenarios for Microsoft Dev Box Organizations can use Microsoft Dev Box Preview in a range of scenarios.-
-### Developer scenarios
-
-An organization that has globally distributed development teams can configure Dev Box to enable developers to create their own dev boxes in their closest region. Developers can create dev boxes as needed, without waiting for the IT admin team. Users can access dev boxes from any device and from any operating system.
-
-Dev Box supports developers who are working on multiple projects. Developers can create and use separate dev boxes for separate workloads, projects, or tasks. Developers can create multiple dev boxes from a predefined pool whenever they need them, and then delete those dev boxes when they're done.
-
-Organizations can even define dev boxes for various roles on a team. You might configure standard dev boxes with admin rights to give full-time developers greater control, while applying more restricted permissions for contractors.
- ### Dev infra scenarios Dev Box helps dev infra teams provide the appropriate dev boxes for each user's workload. Dev infra admins can:
Dev Box helps dev infra teams provide the appropriate dev boxes for each user's
Dev Box has the following benefits for IT admins: -- You can manage dev boxes like any other device on your network:
+- Manage dev boxes like any other device on your network:
- Dev boxes automatically enroll in Intune. Use the [Microsoft Intune admin center](https://go.microsoft.com/fwlink/?linkid=2109431) to manage dev boxes. - Keep all Windows devices up to date by using expedited quality updates in Intune to deploy zero-day patches across your organization. - If a dev box is compromised, isolate it while helping users get back up and running on a new dev box.
Dev Box has the following benefits for IT admins:
- Require multifactor authentication at sign-in. - Configure risk-based sign-in policies for dev boxes that access sensitive source code and customer data.
+### Developer team lead scenarios
+
+After a developer team lead is assigned the DevCenter Project Admin role, they can help manage the project. Project Admins can:
+
+- Create dev box pools and add appropriate dev box definitions.
+- Control costs by using auto-stop schedules.
+
+### Developer scenarios
+
+An organization that has globally distributed development teams can configure Dev Box to enable developers to create their own dev boxes in their closest region. Developers can create dev boxes as needed, without waiting for the IT admin team. Users can access dev boxes from any device and from any operating system.
+
+Dev Box supports developers who are working on multiple projects. Developers can create and use separate dev boxes for separate workloads, projects, or tasks. Developers can create multiple dev boxes from a predefined pool whenever they need them, and then delete those dev boxes when they're done.
+
+Organizations can even define dev boxes for various roles on a team. You might configure standard dev boxes with admin rights to give full-time developers greater control, while applying more restricted permissions for contractors.
+ ## How does Dev Box work? This diagram shows the components of the Dev Box Preview service and the relationships between them.
This diagram shows the components of the Dev Box Preview service and the relatio
Dev Box service configuration begins with the creation of a dev center, which represents the units of organization in the enterprise. Dev centers are logical containers to help organize dev box resources. There's no limit on the number of dev centers that you can create, but most organizations need only one.
-Azure network connections enable dev boxes to communicate with your organization's network. The network connection provides a link between the dev center and your organization's virtual networks. In the network connection, you define how a dev box will join Azure AD. Use an Azure AD join to connect exclusively to cloud-based resources, or use a hybrid Azure AD join to connect to on-premises resources and cloud-based resources.
+Azure network connections enable dev boxes to communicate with your organization's network. The network connection provides a link between the dev center and your organization's virtual networks. In the network connection, you define how a dev box joins Azure AD. Use an Azure AD join to connect exclusively to cloud-based resources, or use a hybrid Azure AD join to connect to on-premises resources and cloud-based resources.
Dev box definitions define the configuration of the dev boxes that are available to users. You can use an image from Azure Marketplace, like the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. Or you can create your own custom image and store it in Azure Compute Gallery. Specify a SKU with compute and storage to complete the dev box definition.
dns Dns Get Started Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-terraform.md
+
+ Title: 'Quickstart: Create an Azure DNS zone and record using Terraform'
+description: 'In this article, you create an Azure DNS zone and record using Terraform'
++ Last updated : 3/16/2023+++++
+# Quickstart: Create an Azure DNS zone and record using Terraform
+
+This article shows how to use [Terraform](/azure/terraform) to create an [Azure DNS zone](/azure/dns/dns-zones-records) and an [A record](/azure/dns/dns-alias) in that zone.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random string using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create an Azure DNS zone using [azurerm_dns_zone](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/dns_zone)
+> * Create an Azure DNS A record using [azurerm_dns_a_record](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/dns_a_record)
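For orientation, the zone and record resources named in the checklist above look roughly like the following minimal sketch. This isn't the quickstart's actual code (that lives in the linked sample repo); the resource group, zone name, and IP address are placeholder assumptions:

```terraform
# Minimal illustrative sketch: a resource group, a DNS zone, and an A record in that zone.
resource "azurerm_resource_group" "example" {
  name     = "rg-dns-example"   # placeholder name
  location = "eastus"
}

resource "azurerm_dns_zone" "example" {
  name                = "example-quickstart.com"   # placeholder zone name
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_dns_a_record" "example" {
  name                = "www"
  zone_name           = azurerm_dns_zone.example.name
  resource_group_name = azurerm_resource_group.example.name
  ttl                 = 300
  records             = ["10.0.180.17"]   # placeholder IP address
}
```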
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The example code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-dns_zone). See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform).
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-dns_zone/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-dns_zone/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-dns_zone/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-dns_zone/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the DNS zone name.
+
+ ```console
+ dns_zone_name=$(terraform output -raw dns_zone_name)
+ ```
+
+1. Run [az network dns zone show](/cli/azure/network/dns/zone#az-network-dns-zone-show) to display information about the new DNS zone.
+
+ ```azurecli
+ az network dns zone show \
+ --resource-group $resource_group_name \
+ --name $dns_zone_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the DNS zone name.
+
+ ```console
+ $dns_zone_name=$(terraform output -raw dns_zone_name)
+ ```
+
+1. Run [Get-AzDnsZone](/powershell/module/az.dns/get-azdnszone) to display information about the new DNS zone.
+
+ ```azurepowershell
+ Get-AzDnsZone -ResourceGroupName $resource_group_name `
+ -Name $dns_zone_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Azure DNS](/azure/dns)
event-grid Subscription Creation Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscription-creation-schema.md
The Event Subscription name must be 3-64 characters in length and can only conta
}, "filter": { "includedEventTypes": [ "Microsoft.Storage.BlobCreated", "Microsoft.Storage.BlobDeleted" ],
- "subjectBeginsWith": "/blobServices/default/containers/mycontainer/log",
+ "subjectBeginsWith": "/blobServices/default/containers/mycontainer/blobs/log",
"subjectEndsWith": ".jpg", "isSubjectCaseSensitive ": "true" }
event-hubs Event Hubs Management Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-management-libraries.md
namespace event_hub_dotnet_management
{ using System; using System.Threading.Tasks;
- using Microsoft.Azure.Management.EventHub;
- using Microsoft.Azure.Management.EventHub.Models;
+ using Azure.ResourceManager.EventHubs;
+ using Azure.ResourceManager.EventHubs.Models;
using Microsoft.Identity.Client; using Microsoft.Rest;
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
However, if you load balance traffic across geo-redundant parallel paths, regard
When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is that the chances of a natural disaster causing an outage to both links are much lower, but at the cost of increased end-to-end latency. >[!NOTE]
->Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to redundant site may take up to 180 seconds under some failure conditions and you may experience increased laterncy or performance degradation during this time.
+>Enabling BFD on the ExpressRoute circuits will help with faster link failure detection between Microsoft Enterprise Edge (MSEE) devices and the Customer/Partner Edge routers. However, the overall failover and convergence to redundant site may take up to 180 seconds under some failure conditions and you may experience increased latency or performance degradation during this time.
In this article, let's discuss how to address challenges you may face when configuring geo-redundant paths.
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
When a NAT gateway resource is associated with an Azure Firewall subnet, all out
ThereΓÇÖs no double NAT with this architecture. Azure Firewall instances send the traffic to NAT gateway using their private IP address rather than Azure Firewall public IP address. > [!NOTE]
-> Deploying NAT gateway with a [zone redundant firewall]((deploy-availability-zone-powershell.md) is not recommended deployment option, as the NAT gateway does not support zonal deployment at this time. In order to use NAT gateway with Azure Firewall, a zonal Firewall deployment is required.
+> Deploying NAT gateway with a [zone redundant firewall](deploy-availability-zone-powershell.md) is not a recommended deployment option, as the NAT gateway does not support zonal deployment at this time. In order to use NAT gateway with Azure Firewall, a zonal Firewall deployment is required.
> > In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture, refer to the [NAT gateway and Azure Firewall integration tutorial](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
hdinsight Hdinsight 50 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-50-component-versioning.md
- Title: Open-source components and versions - Azure HDInsight 5.0
-description: Learn about the open-source components and versions in Azure HDInsight 5.0.
-- Previously updated : 02/22/2023--
-# HDInsight 5.0 component versions
-
-In this article, you learn about the open-source components and their versions in Azure HDInsight 5.0.
-
-Starting June 1, 2022, we have started rolling out a new version of HDInsight 5.0, this version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0.
-
-## Open-source components available with HDInsight version 5.0
-
-The Open-source component versions associated with HDInsight 5.0 are listed in the following table.
-
-| Component | HDInsight 5.0 | HDInsight 4.0 |
-||||
-| Apache Spark | 3.1.3 | 2.4.4 |
-| Apache Hive | 3.1.2 | 3.1.2 |
-| Apache Kafka | 2.4.1 | 2.1.1 |
-| Apache Hadoop | 3.1.1 | 3.1.1 |
-| Apache Tez | 0.9.1 | 0.9.1 |
-| Apache Pig | 0.16.1 | 0.16.1 |
-| Apache Ranger | 1.1.0 | 1.1.0 |
-| Apache HBase | - | 2.1.6 |
-| Apache Sqoop | 1.5.0 | 1.5.0 |
-| Apache Oozie | 4.3.1 | 4.3.1 |
-| Apache Zookeeper | 3.4.6 | 3.4.6 |
-| Apache Livy | 0.5 | 0.5 |
-| Apache Ambari | 2.7.0 | 2.7.0 |
-| Apache Zeppelin | 0.8.0 | 0.8.0 |
-
-This table lists certain HDInsight 4.0 cluster types that have retired or will be retired soon.
-
-| Cluster Type | Framework version | Support expiration date | Retirement date |
-||-||--|
-| HDInsight 4.0 Kafka | 2.1.0 | Sep 30, 2022 | Oct 1, 2022 |
-
-## Spark versions supported in Azure HDInsight
-
-Apache Spark versions supported in Azure HDIinsight
-
-|Apache Spark version on HDInsight|Release date|Release stage|End of life announcement date|[End of standard support]()|[End of basic support]()|
-|--|--|--|--|--|--|
-|2.4|July 8, 2019|End of Life Announced (EOLA)| Feb10,2023| Aug 10,2023|Feb 10,2024|
-|3.1|March 11,2022|GA |-|-|-|
-|3.3|To be announced for Public Preview|-|-|-|-|
-
-## Apache Spark 2.4 to Spark 3.x Migration Guides
-
-Spark 2.4 to Spark 3.x Migration Guides see [here](https://spark.apache.org/docs/latest/migration-guide.html).
-
-## Spark
--
-> [!NOTE]
-> * If you are using Azure User Interface to create a Spark Cluster for HDInsight, you will see from the dropdown list an additional version Spark 3.1.(HDI 5.0) along with the older versions. This version is a renamed version of Spark 3.1.(HDI 4.0) and it is backward compatible.
-> * This is only a UI level change, which doesnΓÇÖt impact anything for the existing users and users who are already using the ARM template to build their clusters.
-> * For backward compatibility, ARM supports creating Spark 3.1 with HDI 4.0 and 5.0 versions which maps to same versions Spark 3.1 (HDI 5.0)
-> * Spark 3.1 (HDI 5.0) cluster comes with HWC 2.0 which works well together with Interactive Query (HDI 5.0) cluster.
-
-## Interactive Query
--
-> [!NOTE]
-> If you are creating an Interactive Query Cluster, you will see from the dropdown list another version as Interactive Query 3.1 (HDI 5.0).
-> * If you are going to use Spark 3.1 version along with Hive which require ACID support via Hive Warehouse Connector (HWC).
--
-you need to select this version Interactive Query 3.1 (HDI 5.0).
-
-## Kafka
-
-Current ARM template supports HDI 5.0 for Kafka 2.4.1
-
-`HDI Version '5.0' is supported for clusterType "Kafka" and component Version '2.4'.`
-
-We have fixed the arm templated issue.
-
-### Upcoming version upgrades.
-HDInsight team is working on upgrading other open-source components.
-
-1. Spark 3.2.0
-1. Kafka 3.2.1
-1. HBase 2.4.11
-
-## Next steps
--- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)-- [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight 51 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-51-component-versioning.md
- Title: Open-source components and versions - Azure HDInsight 5.1
-description: Learn about the open-source components and versions in Azure HDInsight 5.1
-- Previously updated : 03/15/2023--
-# HDInsight 5.1 component versions
-
-In this article, you learn about the open-source components and their versions in Azure HDInsight 5.1.
-
-## Public preview
-
-From February 27, 2023 we have started rolling out a new version of HDInsight 5.1, this version is backward compatible with HDInsight 4.0. and 5.0. All new open-source releases added as incremental releases on HDInsight 5.1.
-
-**Only Kafka and HBase clusters are supported right now.**
-
-## Open-source components available with HDInsight version 5.1
-
-The Open-source component versions associated with HDInsight 5.1 listed in the following table.
--
-| Component | HDInsight 5.1 | HDInsight 5.0 |
-||||
-| Apache Spark | 3.3 * | 3.1.3 |
-| Apache Hive | 3.1.2 * | 3.1.2 |
-| Apache Kafka | 3.2.0 ** | 2.4.1 |
-| Apache Hadoop with YARN | 3.3.4 * | 3.1.1 |
-| Apache Tez | 0.9.1 * | 0.9.1 |
-| Apache Pig | 0.17.0 * | 0.16.1 |
-| Apache Ranger | 2.1.0 * | 1.1.0 |
-| Apache HBase | 2.4.11 ** | - |
-| Apache Sqoop | 1.5.0 * | 1.5.0 |
-| Apache Oozie | 5.2.1 * | 4.3.1 |
-| Apache Zookeeper | 3.6.3 * | 3.4.6 |
-| Apache Livy | 0.7.1 * | 0.5 |
-| Apache Ambari | 2.7.0 ** | 2.7.0 |
-| Apache Zeppelin | 0.10.0 * | 0.8.0 |
-| Apache Phoenix | 5.1.2 ** | - |
-
-\* Under development/Planned
-
-** Public Preview
-
-> [!NOTE]
-> ESP isn't supported for Kafka and HBase in this release.
-
-## Spark versions supported in Azure HDInsight
-
-Apache Spark versions supported in Azure HDIinsight
-
-|Apache Spark version on HDInsight|Release date|Release stage|End of life announcement date|[End of standard support]()|[End of basic support]()|
-|--|--|--|--|--|--|
-|2.4|July 8, 2019|End of Life Announced (EOLA)| Feb10,2023| Aug 10,2023|Feb 10,2024|
-|3.1|March 11,2022|GA |-|-|-|
-|3.3|To be announced for Public Preview|-|-|-|-|
-
-## Apache Spark 2.4 to Spark 3.x Migration Guides
-
-Spark 2.4 to Spark 3.x Migration Guides see [here](https://spark.apache.org/docs/latest/migration-guide.html).
-
-## Next steps
--- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)-- [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight 5X Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md
+
+ Title: Open-source components and versions - Azure HDInsight 5.x
+description: Learn about the open-source components and versions in Azure HDInsight 5.x
++ Last updated : 03/16/2023++
+# HDInsight 5.x component versions
+
+In this article, you learn about the open-source components and their versions in Azure HDInsight 5.x.
+
+## Public preview
+
+From February 27, 2023, we started rolling out a new version of HDInsight, version 5.1. This version is backward compatible with HDInsight 4.0 and 5.0. All new open-source releases are added as incremental releases on HDInsight 5.1.
+
+**Currently, only Kafka and HBase clusters are supported.**
+
+## Open-source components available with HDInsight version 5.x
+
+The open-source component versions associated with HDInsight 5.1 are listed in the following table.
+
+| Component | HDInsight 5.1 | HDInsight 5.0 |
+||||
+| Apache Spark | 3.3 * | 3.1.3 |
+| Apache Hive | 3.1.2 * | 3.1.2 |
+| Apache Kafka | 3.2.0 ** | 2.4.1 |
+| Apache Hadoop with YARN | 3.3.4 * | 3.1.1 |
+| Apache Tez | 0.9.1 * | 0.9.1 |
+| Apache Pig | 0.17.0 * | 0.16.1 |
+| Apache Ranger | 2.1.0 * | 1.1.0 |
+| Apache HBase | 2.4.11 ** | - |
+| Apache Sqoop | 1.5.0 * | 1.5.0 |
+| Apache Oozie | 5.2.1 * | 4.3.1 |
+| Apache Zookeeper | 3.6.3 * | 3.4.6 |
+| Apache Livy | 0.7.1 * | 0.5 |
+| Apache Ambari | 2.7.0 ** | 2.7.0 |
+| Apache Zeppelin | 0.10.0 * | 0.8.0 |
+| Apache Phoenix | 5.1.2 ** | - |
+
+\* Under development/Planned
+
+** Public Preview
+
+> [!NOTE]
+> ESP isn't supported for Kafka and HBase in this release.
+
+### Spark versions supported in Azure HDInsight
+
+The following table lists the Apache Spark versions supported in Azure HDInsight.
+
+|Apache Spark version on HDInsight|Release date|Release stage|End of life announcement date|End of standard support|End of basic support|
+|--|--|--|--|--|--|
+|2.4|July 8, 2019|End of Life Announced (EOLA)|Feb 10, 2023|Aug 10, 2023|Feb 10, 2024|
+|3.1|March 11, 2022|GA|-|-|-|
+|3.3|To be announced for Public Preview|-|-|-|-|
+
+### Apache Spark 2.4 to Spark 3.x Migration Guides
+
+For Spark 2.4 to Spark 3.x migration guidance, see the [Apache Spark migration guide](https://spark.apache.org/docs/latest/migration-guide.html).
+
+## HDInsight version 5.0
+
+Starting June 1, 2022, we began rolling out HDInsight 5.0. This version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0.
++
+### Spark
++
+> [!NOTE]
+> * If you use the Azure portal user interface to create a Spark cluster for HDInsight, the dropdown list shows an additional version, Spark 3.1 (HDI 5.0), along with the older versions. This version is a renamed version of Spark 3.1 (HDI 4.0) and is backward compatible.
+> * This is only a UI-level change, which doesn't impact existing users or users who already use the ARM template to build their clusters.
+> * For backward compatibility, ARM supports creating Spark 3.1 with both HDI 4.0 and 5.0 versions, which map to the same version, Spark 3.1 (HDI 5.0).
+> * The Spark 3.1 (HDI 5.0) cluster comes with HWC 2.0, which works well together with the Interactive Query (HDI 5.0) cluster.
+
+### Interactive Query
++
+> [!NOTE]
+> * If you create an Interactive Query cluster, the dropdown list shows another version, Interactive Query 3.1 (HDI 5.0).
+> * If you plan to use Spark 3.1 together with Hive, which requires ACID support via the Hive Warehouse Connector (HWC), select Interactive Query 3.1 (HDI 5.0).
+
+### Kafka
+
+The current ARM template supports HDI 5.0 for Kafka 2.4.1:
+
+`HDI Version '5.0' is supported for clusterType "Kafka" and component Version '2.4'.`
+
+We have fixed the ARM template issue.
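For context, that version pairing corresponds to properties like the following in a `Microsoft.HDInsight/clusters` ARM resource. This is a trimmed, hypothetical fragment (compute, storage, and credential settings omitted), not a deployable template:

```json
{
  "type": "Microsoft.HDInsight/clusters",
  "apiVersion": "2021-06-01",
  "name": "[parameters('clusterName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "clusterVersion": "5.0",
    "osType": "Linux",
    "clusterDefinition": {
      "kind": "Kafka",
      "componentVersion": {
        "Kafka": "2.4"
      }
    }
  }
}
```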
+
+### Upcoming version upgrades
+
+The HDInsight team is working on upgrading other open-source components:
+
+1. Spark 3.2.0
+1. Kafka 3.2.1
+1. HBase 2.4.11
++
+## Next steps
+
+- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)
+- [Enterprise Security Package](./enterprise-security-package.md)
+- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
Title: Open-source components and versions - Azure HDInsight
description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 02/25/2023 Last updated : 03/16/2023 # Azure HDInsight versions
This table lists the versions of HDInsight that are available in the Azure porta
| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability | | | | | | | | |
-| [HDInsight 5.1](hdinsight-51-component-versioning.md) |Ubuntu 18.0.4 LTS |Feb 27, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
-| [HDInsight 5.0](hdinsight-50-component-versioning.md) |Ubuntu 18.0.4 LTS |July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced |Not announced| Yes |
+| HDInsight 5.1 | Ubuntu 18.04 LTS | Feb 27, 2023 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced | Yes |
+| HDInsight 5.0 | Ubuntu 18.04 LTS | July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced | Yes |
| [HDInsight 4.0](hdinsight-40-component-versioning.md) | Ubuntu 18.04 LTS | September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Not announced | Not announced | Yes | **Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You may not be able to create clusters from the Azure portal.
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see
-* [HDInsight 4.0 component versions](./hdinsight-40-component-versioning.md)
-* [HDInsight 5.0 component versions](./hdinsight-50-component-versioning.md)
-* [HDInsight 5.1 component versions](./hdinsight-51-component-versioning.md)
+* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
+* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
> [!IMPORTANT] > Microsoft has issued [CVE-2023-23408](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-23408), which is fixed in the current release, and customers are advised to upgrade their clusters to the latest image. 
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md
The FHIR specification defines a set of search parameters that apply to all reso
> [!NOTE] > Each time you create, update, or delete a search parameter, you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter for live production. Below we will outline how you can test search parameters before reindexing the entire FHIR service database.
-## Create new search parameter
+## Create new search parameter
-To create a new search parameter, you need to `POST` a `SearchParameter` resource to the FHIR service database. The code example below shows how to add the [US Core Race search parameter](http://hl7.org/fhir/us/core/STU3.1.1/SearchParameter-us-core-race.html) to the `Patient` resource type in your FHIR service database.
+To create a new search parameter, you need to `POST` a `SearchParameter` resource to the FHIR service database.
```rest POST {{FHIR_URL}}/SearchParameter
+```
+
+The following examples demonstrate how to create new custom search parameters.
+
+### Create new search parameter per definition in Implementation Guide
+
+The code example below shows how to add the [US Core Race search parameter](http://hl7.org/fhir/us/core/STU3.1.1/SearchParameter-us-core-race.html) to the `Patient` resource type in your FHIR service database.
+
+```rest
{ "resourceType" : "SearchParameter", "id" : "us-core-race",
POST {{FHIR_URL}}/SearchParameter
} ```
+### Create new search parameter for resource attributes with reference type
+
+The following code example shows how to create a custom search parameter to search MedicationDispense resources based on the location where they were dispensed. This is an example of adding a custom search parameter for a reference type.
+```rest
+{
+ "resourceType": "SearchParameter",
+ "id": "a3c28d46-fd06-49ca-aea7-5f9314ef0497",
+ "url": "{{An absolute URI that is used to identify this search parameter}}",
+ "version": "1.0",
+ "name": "MedicationDispenseLocationSearchParameter",
+ "status": "active",
+ "description": "Search parameter for MedicationDispense by location",
+ "code": "location",
+ "base": ["MedicationDispense"],
+ "target": ["Location"],
+ "type": "reference",
+ "expression": "MedicationDispense.location"
+}
+```
> [!NOTE] > The new search parameter will appear in the capability statement of the FHIR service after you `POST` the search parameter to the database **and** reindex your database. Viewing the `SearchParameter` in the capability statement is the only way to tell if a search parameter is supported in your FHIR service. If you cannot find the `SearchParameter` in the capability statement, then you still need to reindex your database to activate the search parameter. You can `POST` multiple search parameters before triggering a reindex operation.
Important elements of a `SearchParameter` resource:
* `url`: A unique key to describe the search parameter. Organizations such as HL7 use a standard URL format for the search parameters that they define, as shown above in the US Core Race search parameter.
-* `code`: The value stored in the **code** element is the name used for the search parameter when it is included in an API call. For the example above, you would search with `GET {{FHIR_URL}}/Patient?race=<code>` where `<code>` is in the value set from the specified coding system. This call would retrieve all patients of a certain race.
+* `code`: The value stored in the **code** element is the name used for the search parameter when it's included in an API call. For the example above with the "US Core Race" extension, you would search with `GET {{FHIR_URL}}/Patient?race=<code>` where `<code>` is in the value set from the specified coding system. This call would retrieve all patients of a certain race.
* `base`: Describes which resource type(s) the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resource types.+
+* `target`: For reference-type search parameters, describes which resource type(s) the reference can point to.
* `type`: Describes the data type for the search parameter. Type is limited by the support for data types in the FHIR service. This means that you can't define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it's a supported combination.
Important elements of a `SearchParameter` resource:
While you can't use the new search parameters in production until you run a reindex job, there are a few ways to test your custom search parameters before reindexing the entire database.
-First, you can test a new search parameter to see what values will be returned. By running the command below against a specific resource instance (by supplying the resource ID), you'll get back a list of value pairs with the search parameter name and the value stored in the corresponding element. This will include all of the search parameters for the resource. You can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR service.
+First, you can test a new search parameter to see what values will be returned. By running the command below against a specific resource instance (by supplying the resource ID), you get back a list of value pairs with the search parameter name and the value stored in the corresponding element. This list includes all of the search parameters for the resource. You can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR service.
```rest
-GET https://{{FHIR_URL}}/{{RESOURCE}}/{{RESOUCE_ID}}/$reindex
+GET https://{{FHIR_URL}}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex
``` For example, to find all search parameters for a patient:
GET https://{{FHIR_URL}}/Patient/{{PATIENT_ID}}/$reindex
```
-The result will look like this:
+The result looks like this:
```json {
POST https://{{FHIR_URL}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex
Running this `POST` call sets the indices for any search parameters defined for the resource instance specified in the request. This call does make a change to the FHIR service database. Now you can search and set the `x-ms-use-partial-indices` header to `true`, which causes the FHIR service to return results for any of the resources that have the search parameter indexed, even if not all resource instances of that type have it indexed.
-Continuing with our example above, you could index one patient to enable the US Core Race `SearchParameter`:
+Continuing with our example, you could index one patient to enable the new `SearchParameter`:
```rest POST {{FHIR_URL}}/Patient/{{PATIENT_ID}}/$reindex ```
-And then do a test search for the patient by race:
+And then do a test search:
+1. For the patient by race:
```rest GET {{FHIR_URL}}/Patient?race=2028-9 x-ms-use-partial-indices: true ```
+1. For Location (reference type):
+```rest
+GET {{FHIR_URL}}/MedicationDispense?location=<locationid referenced in MedicationDispense Resource>
+x-ms-use-partial-indices: true
+```
+After you've tested your new search parameter and confirmed that it's working as expected, run or schedule your reindex job so the new search parameter(s) can be used in live production.
-After you have tested your new search parameter and confirmed that it is working as expected, run or schedule your reindex job so the new search parameter(s) can be used in live production.
-
-See [Running a reindex job](../fhir/how-to-run-a-reindex.md) for information on how to re-index your FHIR service database.
+See [Running a reindex job](../fhir/how-to-run-a-reindex.md) for information on how to reindex your FHIR service database.
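As a quick orientation only (the linked article is the authoritative reference, and the request body shown here is an assumption), a full-database reindex job is started with a single call:

```rest
POST {{FHIR_URL}}/$reindex

{
  "resourceType": "Parameters",
  "parameter": []
}
```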
## Update a search parameter
The result of the above request will be an updated `SearchParameter` resource.
> [!Warning] > Be careful when updating search parameters. Changing an existing search parameter could have impacts on the expected behavior. We recommend running a reindex job immediately. ++ ## Delete a search parameter If you need to delete a search parameter, use the following:
DELETE {{FHIR_URL}}/SearchParameter/{{SearchParameter_ID}}
> [!Warning] > Be careful when deleting search parameters. Changing an existing search parameter could have impacts on the expected behavior. We recommend running a reindex job immediately. ++ ## Next steps
-In this article, youΓÇÖve learned how to create a custom search parameter. Next you can learn how to reindex your FHIR service database. For more information, see
+In this article, youΓÇÖve learned how to create a custom search parameter. Next you can learn how to reindex your FHIR service database.
+For more information, see
>[!div class="nextstepaction"] >[How to run a reindex job](how-to-run-a-reindex.md)
iot-dps Concepts Control Access Dps Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md
Title: Access control and security for DPS by using Azure Active Directory | Microsoft Docs
-description: Concepts - how to control access to Azure IoT Hub Device Provisioning Service (DPS) (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC.
+ Title: Access control and security for DPS with Azure AD
+
+description: Control access to Azure IoT Hub Device Provisioning Service (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC.
+ --+ Last updated 02/07/2022
iot-dps Concepts Control Access Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps.md
Title: Access control and security for Azure IoT Hub Device Provisioning Service | Microsoft Docs
-description: Overview on how to control access to Azure IoT Hub Device Provisioning Service (DPS), includes links to in-depth articles on Azure Active Directory integration (Public Preview) and SAS options.
+ Title: Access control and security for Azure DPS
+
+description: Overview on controlling access to Azure IoT Hub Device Provisioning Service, links to articles on Azure Active Directory integration and SAS options.
+ --+ Last updated 04/20/2022
iot-dps Concepts Custom Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-custom-allocation.md
Title: Using custom allocation policies with Azure IoT Hub Device Provisioning Service
-description: Understand custom allocation policies with the Azure IoT Hub Device Provisioning Service (DPS)
+ Title: Using custom allocation policies with Azure DPS
+
+description: Understand how custom allocation policies enable provisioning to multiple IoT hubs with the Azure IoT Hub Device Provisioning Service (DPS)
+ Last updated 09/09/2022-+ -
iot-dps Concepts Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-deploy-at-scale.md
Title: Best practices for large-scale Microsoft Azure IoT device deployments
-description: This article describes best practices, patterns, and sample code you can use to help with large-scale deployments.
+ Title: Best practices for large-scale IoT deployments
+
+description: Best practices, patterns, and sample code you can use to help with large-scale deployments of Azure IoT Hub and Device Provisioning Service.
+ -+ Last updated 06/27/2022- # Best practices for large-scale IoT device deployments
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
Title: Azure IoT Hub Device Provisioning Service - Device concepts
-description: Describes device reprovisioning concepts for the Azure IoT Hub Device Provisioning Service (DPS)
+ Title: Device lifecycle and reprovisioning concepts
+
+description: Describes device reprovisioning concepts and policies for the Azure IoT Hub Device Provisioning Service (DPS)
+ Last updated 04/16/2021-+ - # IoT Hub Device reprovisioning concepts
iot-dps Concepts Roles Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-roles-operations.md
Title: IoT Hub Device Provisioning Service - Roles and operations
-description: This article provides a conceptual overview of the roles and operations involved when developing and IoT solution using the IoT Device Provisioning Service (DPS).
+ Title: Roles and operations for Azure DPS
+
+description: Conceptual overview of the roles and operations involved when developing an IoT solution using the IoT Device Provisioning Service (DPS).
+ Last updated 09/14/2020-+ -- # Roles and operations
iot-dps Concepts Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-service.md
Title: Terminology used with Azure IoT Hub Device Provisioning Service | Microsoft Docs
-description: Describes common terminology used with the Device Provisioning Service (DPS) and IoT Hub
+ Title: Terminology and glossary for Azure DPS
+
+description: This article describes common terminology used with the Device Provisioning Service (DPS) and IoT Hub
+ Last updated 09/18/2019-+ -- # IoT Hub Device Provisioning Service (DPS) terminology
iot-dps Concepts Symmetric Key Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-symmetric-key-attestation.md
Title: Azure IoT Hub Device Provisioning Service - Symmetric key attestation
+ Title: Symmetric key attestation with Azure DPS
+ description: This article provides a conceptual overview of symmetric key attestation using IoT Device Provisioning Service (DPS). + Last updated 04/23/2021-+ --
iot-dps Concepts Tpm Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-tpm-attestation.md
Title: Azure IoT Hub Device Provisioning Service - TPM Attestation
+ Title: TPM Attestation with Azure DPS
+ description: This article provides a conceptual overview of the TPM attestation flow using IoT Device Provisioning Service (DPS). + Last updated 09/22/2021-+ -- # TPM attestation
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
Title: Azure IoT Hub Device Provisioning Service - X.509 certificate attestation
+ Title: X.509 certificate attestation with Azure DPS
+ description: Describes concepts specific to using X.509 certificate attestation with Device Provisioning Service (DPS) and IoT Hub + Last updated 09/14/2020-+ - # X.509 certificate attestation
iot-dps How To Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-control-access.md
Title: Access control and security for DPS by using shared access signatures | Microsoft Docs
-description: Concepts - how to control access to Azure IoT Hub Device Provisioning Service (DPS) for backend apps. Includes information about security tokens.
+ Title: Access control and security for DPS with security tokens
+
+description: Control access to Azure IoT Hub Device Provisioning Service (DPS) for backend apps by using shared access signatures and security tokens.
+ --+ Last updated 09/22/2021
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
Title: Tutorial - Provision devices using a symmetric key enrollment group in Azure IoT Hub Device Provisioning Service
+ Title: Tutorial - Provision devices using a symmetric key enrollment group in DPS
+ description: This tutorial shows how to use symmetric keys to provision devices through an enrollment group in your Device Provisioning Service (DPS) instance + Last updated 10/14/2022 -- zone_pivot_groups: iot-dps-set1
iot-dps How To Provision Multitenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-provision-multitenant.md
Title: Tutorial - Provision devices for geo latency in Azure IoT Hub Device Provisioning Service
+ Title: Tutorial - Provision devices for geo latency in DPS
+ description: This tutorial shows how to provision devices for geolocation/geolatency with your Device Provisioning Service (DPS) instance + Last updated 08/24/2022 - # Tutorial: Provision for geo latency
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
Title: Reprovision devices in Azure IoT Hub Device Provisioning Service
+ Title: Reprovision devices with DPS
+ description: Learn how to reprovision devices with your Device Provisioning Service (DPS) instance, and why you might need to do this. + Last updated 01/25/2021-+ - # How to reprovision devices
iot-dps How To Revoke Device Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-revoke-device-access-portal.md
Title: Disenroll or revoke device from Azure IoT Hub Device Provisioning Service
+ Title: Disenroll or revoke device from DPS
+ description: How to disenroll a device to prevent provisioning through Azure IoT Hub Device Provisioning Service (DPS) + Last updated 01/24/2022-+ - # How to disenroll or revoke a device from Azure IoT Hub Device Provisioning Service
iot-dps How To Roll Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-roll-certificates.md
Title: Roll X.509 certificates in Azure IoT Hub Device Provisioning Service
-description: How to roll X.509 certificates with your Device Provisioning Service (DPS) instance
+ Title: Roll X.509 certificates in DPS
+
+description: How to update or replace X.509 certificates with your Azure IoT Hub Device Provisioning Service (DPS) instance
+ Last updated 03/08/2022-+ - # How to roll X.509 device certificates
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md
Title: How to transfer a payload between device and Azure Device Provisioning Service
+ Title: How to transfer a payload between devices and DPS
+ description: This document describes how to transfer a payload between device and Device Provisioning Service (DPS) + Last updated 09/21/2022-+ - # How to transfer payloads between devices and DPS
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Title: Diagnose and troubleshoot provisioning errors with Azure IoT Hub DPS
+ Title: Diagnose and troubleshoot provisioning errors with DPS
+ description: Learn to diagnose and troubleshoot common errors for Azure IoT Hub Device Provisioning Service (DPS) ++ --+ Last updated 05/25/2022-
-#Customer intent: As an operator for Azure IoT Hub DPS, I need to know how to find out when devices are not being provisioned and troubleshoot and resolve those issues right away.
# Troubleshooting with Azure IoT Hub Device Provisioning Service
iot-dps How To Unprovision Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-unprovision-devices.md
Title: Deprovision devices that were provisioned with Azure IoT Hub Device Provisioning Service
+ Title: Deprovision devices that were provisioned with DPS
+ description: How to deprovision devices that have been provisioned with Azure IoT Hub Device Provisioning Service (DPS) + Last updated 01/24/2022-+ - # How to deprovision devices that were previously auto-provisioned
iot-dps How To Use Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-allocation-policies.md
Title: How to use allocation policies with Device Provisioning Service (DPS)
+ Title: How to use allocation policies with DPS
 description: This article shows how to use the Device Provisioning Service (DPS) allocation policies to automatically provision devices across one or more IoT hubs. + Last updated 10/24/2022 -- # How to use allocation policies to provision devices across IoT hubs
iot-dps How To Verify Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-verify-certificates.md
Title: Verify X.509 CA certificates with Azure IoT Hub Device Provisioning Service
+ Title: Verify X.509 CA certificates with DPS
+ description: How to do proof-of-possession for X.509 CA certificates with Azure IoT Hub Device Provisioning Service (DPS) + Last updated 06/29/2021-+ - # How to do proof-of-possession for X.509 CA certificates with your Device Provisioning Service
iot-dps Iot Dps Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-ha-dr.md
Title: Azure IoT Hub Device Provisioning Service high availability and disaster recovery | Microsoft Docs
+ Title: High availability and disaster recovery with DPS
+ description: Describes the Azure and Device Provisioning Service features that help you to build highly available Azure IoT solutions with disaster recovery capabilities. ++ --+ Last updated 02/04/2022-
iot-dps Iot Dps Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-mqtt-support.md
Title: Understand Azure IoT Device Provisioning Service MQTT support | Microsoft Docs
+ Title: Understand DPS MQTT support
+ description: Developer guide - support for devices connecting to the Azure IoT Device Provisioning Service (DPS) device-facing endpoint using the MQTT protocol. ++ --+ Last updated 02/25/2022-
iot-dps Iot Dps Understand Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-understand-ip-address.md
Title: Understanding the IP address of your IoT Device Provisioning Service (DPS) instance | Microsoft Docs
-description: Understand how to query your IoT Device Provisioning Service (DPS) address and its properties. The IP address of your DPS instance can change during certain scenarios such as disaster recovery or regional failover.
+ Title: Understanding the IP address of your DPS instance
+
+description: Query your DPS IP address and its properties. The IP address of your DPS instance can change during scenarios like disaster recovery or regional failover.
+ --+ Last updated 02/22/2022
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
Title: Monitoring Azure IoT Hub Device Provisioning Service data reference #Required; *your official service name*
-description: Important reference material needed when you monitor Azure IoT Hub Device Provisioning Service
+ Title: Monitoring DPS data reference
+
+description: Important reference material needed when you monitor Azure IoT Hub Device Provisioning Service using Azure Monitor
+
iot-dps Monitor Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps.md
Title: Monitoring Azure IoT Hub Device Provisioning Service
-description: Start here to learn how to monitor Azure IoT Hub Device Provisioning Service
+ Title: Monitor DPS using Azure Monitor
+
+description: Start here to learn how to monitor metrics and logs from the Azure IoT Hub Device Provisioning Service by using Azure Monitor
+ -+ Last updated 04/15/2022
iot-dps Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/public-network-access.md
Title: Manage public network access for Azure IoT Device Provisioning Service (DPS)
+ Title: Manage public network access for DPS
+ description: Documentation on how to disable and enable public network access for Azure IoT Device Provisioning Service (DPS) + --+ Last updated 03/21/2022
iot-dps Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tls-support.md
- Title: Azure IoT Device Provisioning Service (DPS) TLS support
- description: Best practices in using secure TLS connections for devices and services communicating with the IoT Device Provisioning Service (DPS)
-
-
-
-
- Last updated 09/15/2022
-
+ Title: TLS support with DPS
+
+description: Best practices in using secure TLS connections for devices and services communicating with the IoT Device Provisioning Service (DPS)
+++++ Last updated : 09/15/2022 # TLS support in Azure IoT Hub Device Provisioning Service (DPS)
iot-dps Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/virtual-network-support.md
Title: Virtual network connections for DPS description: How to use the virtual networks connectivity pattern with Azure IoT Device Provisioning Service (DPS)- ++ --+ Last updated 03/21/2022- # Azure IoT Hub Device Provisioning Service (DPS) support for virtual networks
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
[!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)]
-This article describes how to configure the Azure IoT Edge for Linux (EFLOW) VM to support multiple network interface cards (NICs) and connect to multiple networks. By enabling multiple NIC support, applications running on the EFLOW VM can communicate with devices connected to the offline network, and at the same time, use IoT Edge to send data to the cloud.
+This article describes how to configure the Azure IoT Edge for Linux (EFLOW) virtual machine (VM) to support multiple network interface cards (NICs) and connect to multiple networks. By enabling multiple NIC support, applications running on the EFLOW VM can communicate with devices connected to the offline network, while using IoT Edge to send data to the cloud.
## Prerequisites
This article describes how to configure the Azure IoT Edge for Linux (EFLOW) VM
## Industrial scenario
-Industrial IoT is transcurring the era of IT and OT convergence. However, making traditional OT assets more intelligent with IT technologies also means a larger exposure to cyberattacks. This is one of the main reasons why multiple environments are designed using demilitarized zones or also known as DMZs.
+Industrial IoT is entering the era of information technology (IT) and operational technology (OT) convergence. However, making traditional OT assets more intelligent with IT technologies also means a larger exposure to cyber attacks. This exposure is one of the main reasons why many environments are designed using demilitarized zones, also known as DMZs.
Imagine a workflow scenario where you have a networking configuration divided into two different networks or zones. In the first zone, you may have a secure network defined as the offline network. The offline network has no internet connectivity and is limited to internal access. In the second zone, you may have a demilitarized zone (DMZ), in which you may have a couple of devices that have limited internet connectivity. When moving the workflow to run on the EFLOW VM, you may have problems accessing the different networks since the EFLOW VM by default has only one NIC attached.
-Suppose you have an environment with some devices like PLCs or OPC UA compatible devices connected to the offline network, and you want to upload all the device's information to Azure using the OPC Publisher module running on the EFLOW VM.
+In this scenario, you have an environment with some devices like programmable logic controllers (PLCs) or open platform communications unified architecture (OPC UA)-compatible devices connected to the offline network, and you want to upload all the devices' information to Azure using the OPC Publisher module running on the EFLOW VM.
Since the EFLOW host device and the PLC or OPC UA devices are physically connected to the offline network, you can use the [Azure IoT Edge for Linux on Windows virtual multiple NIC configurations](./how-to-configure-multiple-nics.md) to connect the EFLOW VM to the offline network. By using an *external virtual switch*, you can connect the EFLOW VM to the offline network and directly communicate with other offline devices. For the other network, the EFLOW host device is physically connected to the DMZ (online network) with internet and Azure connectivity. Using an *internal or external switch*, you can connect the EFLOW VM to Azure IoT Hub using IoT Edge modules and upload the information sent by the offline devices through the offline NIC. ### Scenario summary Secure network: -- No internet connectivity, access restricted.-- PLCs or UPC UA compatible devices connected.
+- No internet connectivity - access restricted.
+- PLCs or OPC UA-compatible devices connected.
- EFLOW VM connected using an External virtual switch. DMZ:
DMZ:
The following steps are specific for the networking described in the example scenario. Ensure that the virtual switches used and the configurations used align with your networking environment. > [!NOTE]
-> The steps in this article assume that the EFLOW VM was deployed with an *external virtual switch* connected to the *secure network (offline)*. You can change the following steps to your specific network configuration you want to achieve. For more information about EFLOW multiple NIcs support, see [Azure IoT Edge for Linux on Windows virtual multiple NIC configurations](./how-to-configure-multiple-nics.md).
+> The steps in this article assume that the EFLOW VM was deployed with an *external virtual switch* connected to the *secure network (offline)*. You can change the following steps to the specific network configuration you want to achieve. For more information about EFLOW multiple NICs support, see [Azure IoT Edge for Linux on Windows virtual multiple NIC configurations](./how-to-configure-multiple-nics.md).
To finish the provisioning of the EFLOW VM and communicate with Azure, you need to assign another NIC that is connected to the DMZ network (online).
-For this scenario, you'll assign an *external virtual switch* connected to the DMZ network. For more information, review [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
+For this scenario, you assign an *external virtual switch* connected to the DMZ network. For more information, review [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
To create an external virtual switch, follow these steps:
To create an external virtual switch, follow these steps:
6. Under **Connection Type**, select **External Network** then choose the *network adapter* connected to your DMZ network. 7. Select **Apply**.
-Once the external virtual switch is created, you need to attach it to the EFLOW VM using the following steps. For more information about attaching multiple NICs, see [EFLOW Multiple NICs](https://github.com/Azure/iotedge-eflow/wiki/Multiple-NICs).
+Once the external virtual switch is created, you need to attach it to the EFLOW VM using the following steps. If you need to attach multiple NICs, see [EFLOW Multiple NICs](https://github.com/Azure/iotedge-eflow/wiki/Multiple-NICs).
-For the custom new *external virtual switch* you created, use the following PowerShell commands to attach it your EFLOW VM and set a static IP:
+For the custom new *external virtual switch* you created, use the following PowerShell commands to:
-1. `Add-EflowNetwork -vswitchName "OnlineOPCUA" -vswitchType "External"`
+1. Attach the switch to your EFLOW VM.
- :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-network.png" alt-text="Screenshot of a successful creation of the external network named OnlineOPCUA.":::
+ ```powershell
+ Add-EflowNetwork -vswitchName "OnlineOPCUA" -vswitchType "External"
+ ```
-2. `Add-EflowVmEndpoint -vswitchName "OnlineOPCUA" -vEndpointName "OnlineEndpoint" -ip4Address 192.168.0.103 -ip4PrefixLength 24 -ip4GatewayAddress 192.168.0.1`
+ :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-network.png" alt-text="Screenshot of a successful creation of the external network named OnlineOPCUA." lightbox="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-network.png":::
- :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-vm-endpoint.png" alt-text="Screenshot of a successful configuration of the OnlineOPCUA switch..":::
+1. Set a static IP.
-Once complete, you'll have the *OnlineOPCUA* switch assigned to the EFLOW VM. To check the multiple NIC attachment, use the following steps:
+ ```powershell
+ Add-EflowVmEndpoint -vswitchName "OnlineOPCUA" -vEndpointName "OnlineEndpoint" -ip4Address 192.168.0.103 -ip4PrefixLength 24 -ip4GatewayAddress 192.168.0.1
+ ```
+
+ :::image type="content" source="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-vm-endpoint.png" alt-text="Screenshot of a successful configuration of the OnlineOPCUA switch." lightbox="./media/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz/add-eflow-vm-endpoint.png":::
+
+Once complete, you have the *OnlineOPCUA* switch assigned to the EFLOW VM. To check the multiple NIC attachment, use the following steps:
1. Open an elevated PowerShell session by starting with **Run as Administrator**.
Once complete, you'll have the *OnlineOPCUA* switch assigned to the EFLOW VM. To
```powershell
Connect-EflowVm
```
-1. List all the network interfaces assigned to the EFLOW virtual machine.
+1. Once you're in your VM, list all the network interfaces assigned to the EFLOW virtual machine.
```bash
ifconfig
```
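You can also check the attachment from the Windows host. The following is a sketch that assumes your installed AzureEFLOW PowerShell module provides the `Get-EflowVmEndpoint` cmdlet, which lists the virtual network endpoints assigned to the EFLOW VM.

```powershell
# List the virtual network endpoints assigned to the EFLOW VM.
# The output should include both the original switch and the new "OnlineOPCUA" endpoint.
Get-EflowVmEndpoint
```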
EFLOW uses the [route](https://man7.org/linux/man-pages/man8/route.8.html) servi
```powershell
Connect-EflowVm
```
-1. List all the network routes configured in the EFLOW virtual machine.
+1. Once you're in your VM, list all the network routes configured in the EFLOW virtual machine.
```bash
sudo route
```
sudo route add -net default gw yyy.yyy.yyy.yyy netmask 0.0.0.0 dev eth1 metric <
You can use the previous script to create your own custom script specific to your networking scenario. Once the script is defined, save it and assign it execute permission. For example, if the script name is *route-setup.sh*, you can assign execute permission by using the command `sudo chmod +x route-setup.sh`. You can test that the script works correctly by running it manually with the command `sudo sh ./route-setup.sh` and then checking the routing table with the `sudo route` command.
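For reference, the following is a minimal sketch of such a script. The gateway address, interface name, and metric are placeholders; replace them with the values for your online (DMZ) NIC.

```bash
#!/bin/bash
# route-setup.sh - restores a custom default route for the online NIC at startup.
# All values below are placeholders; substitute your own gateway, interface, and metric.

GATEWAY="192.168.0.1"   # DMZ gateway address
INTERFACE="eth1"        # NIC connected to the online (DMZ) network
METRIC="100"            # route metric

# Add a default route through the DMZ gateway on the online interface.
route add -net default gw "$GATEWAY" netmask 0.0.0.0 dev "$INTERFACE" metric "$METRIC"
```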
-The final step is to create a Linux service that runs on startup, and executes the bash script to set the routes. You'll have to create a *systemd* unit file to load the service. The following is an example of that file.
+The final step is to create a Linux service that runs on startup, and executes the bash script to set the routes. You have to create a *systemd* unit file to load the service. The following is an example of that file.
```systemd
[Unit]
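# NOTE: The lines below are a hedged sketch of the rest of the unit file; the
# description, script path, and targets are placeholders. Adjust them to match
# the script you created earlier (for example, route-setup.sh).
Description=Configure custom network routes for the EFLOW VM
After=network.target

[Service]
Type=oneshot
ExecStart=/bin/sh /home/iotedge-user/route-setup.sh

[Install]
WantedBy=multi-user.target
```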
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Title: Control access to IoT Hub by using Azure Active Directory description: This article describes how to control access to IoT Hub for back-end apps by using Azure AD and Azure RBAC. -+ --+ Last updated 01/18/2023
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
Title: Control access to IoT Hub using SAS tokens | Microsoft Docs
+ Title: Control access to IoT Hub using SAS tokens
description: How to control access to IoT Hub for device apps and back-end apps using shared access signature tokens. --+ Last updated 04/28/2022
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Title: Azure IoT Hub cloud-to-device options | Microsoft Docs
+ Title: Azure IoT Hub cloud-to-device options
description: This article provides guidance on when to use direct methods, device twin's desired properties, or cloud-to-device messages for cloud-to-device communications. --+ Last updated 01/29/2018
iot-hub Iot Hub Devguide D2c Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-d2c-guidance.md
Title: Azure IoT Hub device-to-cloud options | Microsoft Docs
+ Title: Azure IoT Hub device-to-cloud options
description: This article provides guidance on when to use device-to-cloud messages, reported properties, or file upload for cloud-to-device communications. --+ Last updated 12/27/2022
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
Title: Understand Azure IoT Hub device twins | Microsoft Docs
+ Title: Understand Azure IoT Hub device twins
description: This article describes how to use device twins to synchronize state and configuration data between IoT Hub and your devices --+ Last updated 04/27/2022
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
 Title: Understand Azure IoT Hub direct methods | Microsoft Docs
+ Title: Understand Azure IoT Hub direct methods
description: This article describes how use direct methods to invoke code on your devices from a service app. ++ --+ Last updated 07/15/2022-
iot-hub Iot Hub Devguide Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-endpoints.md
Title: Understand Azure IoT Hub endpoints | Microsoft Docs
+ Title: Understand Azure IoT Hub endpoints
description: This article provides information about IoT Hub device-facing and service-facing endpoints. --+ Last updated 12/21/2022
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-file-upload.md
Title: Understand Azure IoT Hub file upload | Microsoft Docs
+ Title: Understand Azure IoT Hub file upload
description: This article shows how to use the file upload feature of IoT Hub to manage uploading files from a device to an Azure storage blob container. --+ Last updated 12/30/2022
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Title: Understand the Azure IoT Hub identity registry description: This article provides a description of the IoT Hub identity registry and how to use it to manage your devices. Includes information about the import and export of device identities in bulk. + --+ Last updated 06/29/2021
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-jobs.md
Title: Understand Azure IoT Hub jobs | Microsoft Docs
+ Title: Understand Azure IoT Hub jobs
description: This article describes scheduling jobs to run on multiple devices connected to your IoT hub. Jobs can update tags and desired properties and invoke direct methods on multiple devices. --+ Last updated 05/06/2019
iot-hub Iot Hub Devguide Messages C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-c2d.md
Title: Understand Azure IoT Hub cloud-to-device messaging | Microsoft Docs
+ Title: Understand Azure IoT Hub cloud-to-device messaging
description: This developer guide discusses how to use cloud-to-device messaging with your IoT hub. It includes information about the message life cycle and configuration options. --+ Last updated 12/20/2022
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md
Title: Understand Azure IoT Hub message format | Microsoft Docs
+ Title: Understand Azure IoT Hub message format
description: This article describes the format and expected content of IoT Hub messages.-++ --+ Last updated 2/7/2022-+
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Title: Understand Azure IoT Hub message routing
description: This article describes how to use message routing to send device-to-cloud messages. Includes information about sending both telemetry and non-telemetry data. ++ --+ Last updated 02/22/2023-
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
Title: Understand the Azure IoT Hub built-in endpoint | Microsoft Docs
+ Title: Understand the Azure IoT Hub built-in endpoint
description: This article describes how to use the built-in, Event Hub-compatible endpoint to read device-to-cloud messages. + --+ Last updated 12/19/2022
iot-hub Iot Hub Devguide Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messaging.md
Title: Understand Azure IoT Hub messaging | Microsoft Docs
+ Title: Understand Azure IoT Hub messaging
description: This article describes device-to-cloud and cloud-to-device messaging with IoT Hub. Includes information about message formats and supported communications protocols. --+ Last updated 12/20/2022
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md
Title: Understand Azure IoT Hub module twins | Microsoft Docs
+ Title: Understand Azure IoT Hub module twins
description: This article describes how to use module twins to synchronize state and configuration data between IoT Hub and your devices ++ --+ Last updated 04/27/2022-
iot-hub Iot Hub Devguide No Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-no-sdk.md
Title: Develop without an Azure IoT SDK | Microsoft Docs
+ Title: Develop without an Azure IoT SDK
description: This article provides information about and links to topics that you can use to build device apps and back-end apps without using an Azure IoT SDK. --+ Last updated 10/12/2020
iot-hub Iot Hub Devguide Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-pricing.md
--+ Last updated 02/09/2023
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-protocols.md
Title: Azure IoT Hub communication protocols and ports | Microsoft Docs
+ Title: Azure IoT Hub communication protocols and ports
description: This article describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open for those protocols. + --+ Last updated 11/21/2022
iot-hub Iot Hub Devguide Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-query-language.md
Title: Understand the Azure IoT Hub query language description: This article provides a description of the SQL-like IoT Hub query language used to retrieve information about device/module twins and jobs from your IoT hub. + --+ Last updated 09/29/2022
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Title: Understand Azure IoT Hub quotas and throttling- description: This article provides a description of the quotas that apply to IoT Hub and the expected throttling behavior. + --+ Last updated 02/09/2023
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
Title: Query on Azure IoT Hub message routing description: Learn about the IoT Hub message routing query language that you can use to apply rich queries to messages to receive the data that matters to you. ++ --+ Last updated 02/22/2023-
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Azure IoT Hub SDKs | Microsoft Docs
+ Title: Azure IoT Hub device and service SDKs
description: Links to the Azure IoT Hub SDKs that you can use to build device apps and back-end apps. + --+ Last updated 11/18/2022
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-security.md
Title: Access control and security for IoT Hub | Microsoft Docs
+ Title: Access control and security for IoT Hub
description: Overview on how to control access to IoT Hub, includes links to depth articles on AAD integration and SAS options. --+ Last updated 04/15/2021
iot-hub Iot Hub Devguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide.md
Title: Concepts overview for Azure IoT Hub | Microsoft Docs
-description: The Azure IoT Hub conceptual documentation includes discussions of endpoints, security, the identity registry, device management, direct methods, device twins, file uploads, jobs, the IoT Hub query language, messaging and many other features. This article helps get you to the right articles to learn about a particular feature.
+ Title: Concepts overview for Azure IoT Hub
+description: The Azure IoT Hub conceptual documentation includes discussions of endpoints, security, the identity registry, device management, direct methods, device twins, file uploads, jobs, the IoT Hub query language, messaging and many other features.
+ --+ Last updated 11/03/2022
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md
Title: Azure IoT Hub scaling description: How to choose the correct IoT hub tier and size to support your anticipated message throughput and desired features. -++ --+ Last updated 02/09/2023-
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
No, only the [global Azure cloud](https://azure.microsoft.com/global-infrastruct
Yes, IoT Central uses both IoT Hub and DPS in the backend. The TLS migration will affect your solution, and you need to update your devices to maintain their connections.
+You can migrate your application from the Baltimore CyberTrust Root to the DigiCert Global G2 Root on your own schedule. We recommend the following process: 
+1. **Keep the Baltimore CyberTrust Root on your device until the transition period is completed on 15 February 2024** (necessary to prevent connection interruption).
+2. **In addition** to the Baltimore Root, ensure the DigiCert Global G2 Root is added to your trusted root store.
+3. Make sure you aren't pinning any intermediate or leaf certificates and are using the public roots to perform TLS server validation.
+4. In your IoT Central application, you can find the Root Certification settings under **Settings** > **Application** > **Baltimore Cybertrust Migration**.
+ 1. Select **DigiCert Global G2 Root** to migrate to the new certificate root.
+ 2. Select **Save** to initiate the migration.
+ 3. If needed, you can migrate back to the Baltimore root by selecting **Baltimore CyberTrust Root** and saving the changes. This option is available until 15 May 2023, after which it's disabled as Microsoft starts initiating the migration.
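To check which root the service presents during the TLS handshake, you can inspect the server certificate chain from any machine with OpenSSL installed. The following is a minimal bash sketch; the host name **contoso-hub.azure-devices.net** is a placeholder for your own IoT hub or IoT Central endpoint.

```bash
# Print the certificate chain presented by the endpoint (port 8883 is MQTT over TLS).
# The "i:" lines show each issuer; after migration, the chain should terminate in the
# DigiCert Global G2 root instead of the Baltimore CyberTrust Root.
openssl s_client -connect contoso-hub.azure-devices.net:8883 \
  -servername contoso-hub.azure-devices.net -showcerts </dev/null 2>/dev/null \
  | grep -E " (s|i):"
```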
+ ### How long will it take my devices to reconnect?

Several factors can affect device reconnection behavior.
iot-hub Query Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-jobs.md
Title: Run queries on Azure IoT Hub jobs description: This article describes how to retrieve information about device jobs from your Azure IoT hub using the query language. ++ --+ Last updated 09/29/2022- # Queries for IoT Hub jobs
iot-hub Query Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-twins.md
Title: Query Azure IoT Hub device twins and module twins description: This article describes how to retrieve information about device/module twins from your IoT hub using the query language. ++ --+ Last updated 09/29/2022- # Queries for IoT Hub device and module twins
iot-hub Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/virtual-network-support.md
- Title: Azure IoT Hub support for virtual networks
- description: How to use virtual networks connectivity pattern with IoT Hub
-
-
-
-
- Last updated 01/13/2023
-
+ Title: Azure IoT Hub support for virtual networks
+description: How to use virtual networks connectivity pattern with IoT Hub
+++++ Last updated : 01/13/2023 # IoT Hub support for virtual networks with Azure Private Link
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
Title: Create Standard workflows in single-tenant Azure Logic Apps with the Azure portal
-description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps to automate integration tasks across apps, data, services, and systems using the Azure portal.
+ Title: Create example Standard logic app workflow in the Azure portal
+description: Create your first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
ms.suite: integration Previously updated : 02/06/2023 Last updated : 03/16/2023
-# Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
+# Customer intent: As a developer, I want to create my first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
-# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in the Azure portal
+# Create an example Standard workflow in single-tenant Azure Logic Apps with the Azure portal
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using the **Logic App (Standard)** resource type and the Azure portal. This resource type can host multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless). Also, workflows in the same logic app and tenant run in the same process as the redesigned Azure Logic Apps runtime, so they share the same resources and provide better performance. For more information about the single-tenant Azure Logic Apps offering, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+This guide shows how to create an example automated workflow that waits for an inbound web request and then sends a message to an email account. More specifically, you'll create a [Standard logic app resource](logic-apps-overview.md#resource-environment-differences), which can include multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless) that run in single-tenant Azure Logic Apps.
-While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
-
-> [!TIP]
-> If you don't have an Office 365 account, you can use any other available action
-> that can send messages from your email account, for example, Outlook.com.
+> [!NOTE]
> > To create this example workflow in Visual Studio Code instead, follow the steps in
-> [Create integration workflows using single-tenant Azure Logic Apps and Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
+> [Create Standard workflows in single-tenant Azure Logic Apps with Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
> Both options provide the capability to develop, run, and deploy logic app workflows in the same kinds of environments. > However, with Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
-![Screenshot that shows the Azure portal with the workflow designer for the "Logic App (Standard)" resource.](./media/create-single-tenant-workflows-azure-portal/azure-portal-logic-apps-overview.png)
+While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the Request built-in trigger, which is followed by an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
+
+![Screenshot showing the Azure portal with the designer for Standard logic app workflow.](./media/create-single-tenant-workflows-azure-portal/azure-portal-logic-apps-overview.png)
As you progress, you'll complete these high-level tasks:
-* Create the logic app resource and add a blank [*stateful*](single-tenant-overview-compare.md#stateful-stateless) workflow.
+* Create a Standard logic app resource and add a blank [*stateful* workflow](single-tenant-overview-compare.md#stateful-stateless).
* Add a trigger and action.
* Trigger a workflow run.
* View the workflow's run and trigger history.
* Enable or open Application Insights after deployment.
* Enable run history for stateless workflows.
+In single-tenant Azure Logic Apps, workflows in the same logic app resource and tenant run in the same process as the runtime, so they share the same resources and provide better performance. For more information about single-tenant Azure Logic Apps, see [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
As you progress, you'll complete these high-level tasks:
* An [Azure Storage account](../storage/common/storage-account-overview.md). If you don't have one, you can create a storage account either in advance or during logic app creation. A CLI sketch for creating the account in advance appears after this list.

> [!NOTE]
- > The **Logic App (Standard)** resource type is powered by Azure Functions and has [storage requirements similar to function apps](../azure-functions/storage-considerations.md).
+ >
+ > The Standard logic app resource type is powered by Azure Functions and has [storage requirements similar to function apps](../azure-functions/storage-considerations.md).
> [Stateful workflows](single-tenant-overview-compare.md#stateful-stateless) perform storage transactions, such as > using queues for scheduling and storing workflow states in tables and blobs. These transactions incur > [storage charges](https://azure.microsoft.com/pricing/details/storage/). For more information about > how stateful workflows store data in external storage, review [Stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless).
-* To create the same example workflow in this article, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
+* To create the same example workflow in this guide, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
- If you choose a [different email connector](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Outlook.com, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
+ If you don't have an Office 365 account, you can use [any other available email connector](/connectors/connector-reference/connector-reference-logicapps-connectors) that can send messages from your email account, for example, Outlook.com. If you use a different email connector, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
-* To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
+* To test the example workflow in this guide, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
-* If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+* If you create your logic app resource and enable [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
-* To deploy your **Logic App (Standard)** resource to an [App Service Environment v3 (ASEv3)](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
+* To deploy your Standard logic app resource to an [App Service Environment v3 (ASEv3) - Windows plan only](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
* Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you nor have a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v4 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
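If you prefer to create the Azure Storage account in advance, as mentioned in the storage prerequisite, you can do so with the Azure CLI. The following is only a sketch; the account name, resource group, and region are the example values used later in this guide, so substitute your own.

```bash
# Create the storage account that the Standard logic app uses for its artifacts.
# All names and the region are example values; replace them with your own.
az storage account create \
  --name fabrikamstorageacct \
  --resource-group Fabrikam-Workflows-RG \
  --location westus \
  --sku Standard_LRS
```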
-## Best practices and recommendations
-
-For optimal designer responsiveness and performance, review and follow these guidelines:
--- Use no more than 50 actions per workflow. Exceeding this number of actions raises the possibility for slower designer performance.
--- Consider splitting business logic into multiple workflows where necessary.
--- Have no more than 10-15 workflows per logic app resource.
-
<a name="create-logic-app-resource"></a>

## Create a Standard logic app resource
-1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
-1. In the Azure portal search box, enter `logic apps`, and select **Logic apps**.
+1. In the Azure portal search box, enter **logic apps**, and select **Logic apps**.
![Screenshot that shows the Azure portal search box with the "logic apps" search term and the "Logic apps" group selected.](./media/create-single-tenant-workflows-azure-portal/find-logic-app-resource-template.png)
For optimal designer responsiveness and performance, review and follow these gui
|-|-|-|-|
| **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
| **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **Fabrikam-Workflows-RG**. |
- | **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app named **Fabrikam-Workflows**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, **.azurewebsites.net**, because the Standard logic app resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app named **Fabrikam-Workflows**. |
1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Standard** so that you view only the settings that apply to the Standard plan-based logic app type. The **Plan type** property specifies the hosting plan and billing model to use for your logic app. For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md).
For optimal designer responsiveness and performance, review and follow these gui
| Property | Required | Value | Description |
|-|-|-|-|
- | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <br><br>This example uses the name `Fabrikam-Service-Plan`. <br><br>**Note**: Only the Windows-based App Service plan is supported. Don't use a Linux-based App Service plan. |
- | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <p><p>To change the default pricing tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
+ | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <br><br>This example uses the name **Fabrikam-Service-Plan**. <br><br>**Note**: Only the Windows-based App Service plan is supported. Don't use a Linux-based App Service plan. |
+ | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app and workflows. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <br><br>To change the default pricing tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <br><br>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
1. Now continue making the following selections:

| Property | Required | Value | Description |
|-|-|-|-|
- | **Publish** | Yes | **Workflow** | This option appears and applies only when **Plan type** is set to the **Standard** logic app type. By default, this option is set to **Workflow** and creates an empty logic app resource where you add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
+ | **Publish** | Yes | **Workflow** | This option appears and applies only when **Plan type** is set to the **Standard** logic app type. By default, this option is set to **Workflow** and creates an empty logic app resource where you add your first workflow. <br><br>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
| **Region** | Yes | <*Azure-region*> | The Azure datacenter region to use for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>- If you previously chose **Docker Container**, select your custom location from the **Region** list. <br><br>- If you want to deploy your app to an existing [App Service Environment v3 resource](../app-service/environment/overview.md), you can select that environment from the **Region** list. |

> [!NOTE]
For optimal designer responsiveness and performance, review and follow these gui
| Property | Required | Value | Description |
|-|-|-|-|
- | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <p><p>- To deploy only to Azure, select **Azure Storage**. <p><p>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <p><p>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The ongoing workflow state, run history, and other runtime artifacts are stored in your SQL database. <p><p>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
- | **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <p><p>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
+ | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <br><br>- To deploy only to Azure, select **Azure Storage**. <br><br>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <br><br>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The ongoing workflow state, run history, and other runtime artifacts are stored in your SQL database. <br><br>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
+ | **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <br><br>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <br><br>This example creates a storage account named **fabrikamstorageacct**. |
+
+1. On the **Networking** tab, you can leave the default options for this example.
+
+ For your specific, real-world scenarios, make sure to review and select the appropriate options. You can also change this configuration after you deploy your logic app. For more information, see [Secure traffic between Standard logic apps and Azure virtual networks using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
+
+ | Enable public access | Behavior |
+ |-|-|
+ | **On** | Your logic app has a public endpoint with an inbound address that's open to the internet and can't access an Azure virtual network. |
+ | **Off** | Your logic app has no public endpoint, but has a private endpoint instead for communication within an Azure virtual network, and is isolated to that virtual network. The private endpoint can communicate with endpoints in the virtual network, but only from clients within that network. This configuration also means that logic app traffic can be governed by network security groups or affected by virtual network routes. |
-1. Next, if your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app.
+ To enable your logic app to access endpoints in a virtual network, make sure to select the appropriate option:
+
+ | Enable network injection | Behavior |
+ |--|-|
+ | **On** | Your logic app workflows can privately and securely communicate with endpoints in the virtual network. |
+ | **Off** | Your logic app workflows can't communicate with endpoints in the virtual network. |
+
+1. If your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app workflows.
1. On the **Monitoring** tab, under **Application Insights**, set **Enable Application Insights** to **Yes** if not already selected.
After you create your empty logic app resource, you have to add your first workf
1. After the **New workflow** pane opens, provide a name for your workflow, and choose the state type, either [**Stateful** or **Stateless**](single-tenant-overview-compare.md#stateful-stateless). When you're done, select **Create**.
- This example adds a blank stateful workflow named `Fabrikam-Stateful-Workflow`. By default, the workflow is enabled but doesn't do anything until you add a trigger and actions.
+ This example adds a blank stateful workflow named **Fabrikam-Stateful-Workflow**. By default, the workflow is enabled but doesn't do anything until you add a trigger and actions.
![Screenshot that shows the newly added blank stateful workflow "Fabrikam-Stateful-Workflow".](./media/create-single-tenant-workflows-azure-portal/logic-app-blank-workflow-created.png)
Before you can add a trigger to a blank workflow, make sure that the workflow de
1. Next to the designer surface, in the **Add a trigger** pane, under the **Choose an operation** search box, check that the **Built-in** tab is selected. This tab shows triggers that run natively in Azure Logic Apps.
-1. In the **Choose an operation** search box, enter `when a http request`, and select the built-in Request trigger that's named **When an HTTP request is received**.
+1. In the **Choose an operation** search box, enter **when a http request**, and select the built-in Request trigger that's named **When an HTTP request is received**.
![Screenshot that shows the designer and **Add a trigger** pane with "When an HTTP request is received" trigger selected.](./media/create-single-tenant-workflows-azure-portal/find-request-trigger.png)
- When the trigger appears on the designer, the trigger's details pane opens to show the trigger's properties, settings, and other actions.
+ When the trigger appears on the designer, the trigger's information pane opens to show the trigger's properties, settings, and other actions.
- ![Screenshot that shows the designer with the "When an HTTP request is received" trigger selected and trigger details pane open.](./media/create-single-tenant-workflows-azure-portal/request-trigger-added-to-designer.png)
+ ![Screenshot that shows the designer with the "When an HTTP request is received" trigger selected and trigger information pane open.](./media/create-single-tenant-workflows-azure-portal/request-trigger-added-to-designer.png)
> [!TIP]
- > If the details pane doesn't appear, makes sure that the trigger is selected on the designer.
+ > If the information pane doesn't appear, make sure that the trigger is selected on the designer.
1. If you need to delete an item from the designer, [follow these steps for deleting items from the designer](#delete-from-designer).
Before you can add a trigger to a blank workflow, make sure that the workflow de
The **Choose an operation** prompt appears on the designer, and the **Add an action** pane reopens so that you can select the next action.

> [!NOTE]
- > If the **Add an action** pane shows the error message, 'Cannot read property 'filter' of undefined`,
+ >
+ > If the **Add an action** pane shows the error message, **"Cannot read property 'filter' of undefined"**,
> save your workflow, reload the page, reopen your workflow, and try again.

1. In the **Add an action** pane, under the **Choose an operation** search box, select **Azure**. This tab shows the managed connectors that are available and hosted in Azure.

> [!NOTE]
- > If the **Add an action** pane shows the error message, `The access token expiry UTC time '{token-expiration-date-time}' is earlier than current UTC time '{current-date-time}'`,
+ > If the **Add an action** pane shows the error message,
+ > **"The access token expiry UTC time {*token-expiration-date-time*} is earlier than current UTC time {*current-date-time*}"**,
> save your workflow, reload the page, reopen your workflow, and try adding the action again. This example uses the Office 365 Outlook action that's named **Send an email (V2)**.
- ![Screenshot that shows the designer and the **Add an action** pane with the Office 365 Outlook "Send an email" action selected.](./media/create-single-tenant-workflows-azure-portal/find-send-email-action.png)
+ ![Screenshot showing the designer, the pane named Add an action, and the selected Office 365 Outlook named Send an email.](./media/create-single-tenant-workflows-azure-portal/find-send-email-action.png)
-1. In the action's details pane, on the **Create Connection** tab, select **Sign in** so that you can create a connection to your email account.
+1. In the action's information pane, on the **Create Connection** tab, select **Sign in** so that you can create a connection to your email account.
- ![Screenshot that shows the designer and the "Send an email (V2)" details pane with "Sign in" selected.](./media/create-single-tenant-workflows-azure-portal/send-email-action-sign-in.png)
+ ![Screenshot showing the designer, the pane named Send an email (V2) with Sign in button selected.](./media/create-single-tenant-workflows-azure-portal/send-email-action-sign-in.png)
1. When you're prompted for access to your email account, sign in with your account credentials.

> [!NOTE]
- > If you get the error message, `Failed with error: 'The browser is closed.'. Please sign in again`,
+ > If you get the error message, **"Failed with error: 'The browser is closed.'. Please sign in again"**,
> check whether your browser blocks third-party cookies. If these cookies are blocked,
- > try adding `https://portal.azure.com` to the list of sites that can use cookies.
+ > try adding **https://portal.azure.com** to the list of sites that can use cookies.
> If you're using incognito mode, make sure that third-party cookies aren't blocked while working in that mode. > > If necessary, reload the page, open your workflow, add the email action again, and try creating the connection.
- After Azure creates the connection, the **Send an email** action appears on the designer and is selected by default. If the action isn't selected, select the action so that its details pane is also open.
+ After Azure creates the connection, the **Send an email** action appears on the designer and is selected by default. If the action isn't selected, select the action so that its information pane is also open.
-1. In the action details pane, on the **Parameters** tab, provide the required information for the action, for example:
+1. In the action information pane, on the **Parameters** tab, provide the required information for the action, for example:
- ![Screenshot that shows the designer and the "Send an email" details pane with the "Parameters" tab selected.](./media/create-single-tenant-workflows-azure-portal/send-email-action-details.png)
+ ![Screenshot that shows the designer and the "Send an email" information pane with the "Parameters" tab selected.](./media/create-single-tenant-workflows-azure-portal/send-email-action-details.png)
| Property | Required | Value | Description |
|-|-|-|-|
- | **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, `sophiaowen@fabrikam.com`. |
- | **Subject** | Yes | `An email from your example workflow` | The email subject |
- | **Body** | Yes | `Hello from your example workflow!` | The email body content |
+ | **To** | Yes | <*your-email-address*> | The email recipient, which can be your email address for test purposes. This example uses the fictitious email, **sophiaowen@fabrikam.com**. |
+ | **Subject** | Yes | **An email from your example workflow** | The email subject |
+ | **Body** | Yes | **Hello from your example workflow!** | The email body content |
> [!NOTE]
- > When making any changes in the details pane on the **Settings**, **Static Result**, or **Run After** tabs,
+ > When making any changes in the information pane on the **Settings**, **Static Result**, or **Run After** tabs,
> make sure that you select **Done** to commit those changes before you switch tabs or change focus to the designer. > Otherwise, the designer won't keep your changes.
To find the fully qualified domain names (FQDNs) for connections, follow these s
![Screenshot that shows the Azure portal and API Connection pane with "JSON View" selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-connection-view-json.png)
-1. Copy and save the `connectionRuntimeUrl` property value somewhere safe so that you can set up your firewall with this information.
+1. Copy and save the **connectionRuntimeUrl** property value somewhere safe so that you can set up your firewall with this information.
- ![Screenshot that shows the "connectionRuntimeUrl" property value selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-connection-runtime-url.png)
+ ![Screenshot showing the selected property value named connectionRuntimeUrl.](./media/create-single-tenant-workflows-azure-portal/logic-app-connection-runtime-url.png)
1. For each connection, repeat the relevant steps.
In this example, the workflow runs when the Request trigger receives an inbound
1. On the workflow designer, select the Request trigger that's named **When an HTTP request is received**.
-1. After the details pane opens, on the **Parameters** tab, find the **HTTP POST URL** property. To copy the generated URL, select the **Copy Url** (copy file icon), and save the URL somewhere else for now. The URL follows this format:
+1. After the information pane opens, on the **Parameters** tab, find the **HTTP POST URL** property. To copy the generated URL, select the **Copy Url** (copy file icon), and save the URL somewhere else for now. The URL follows this format:
- `http://<logic-app-name>.azurewebsites.net:443/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
+ **`https://<*logic-app-name*>.azurewebsites.net:443/api/<*workflow-name*>/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<*shared-access-signature*>`**
![Screenshot that shows the designer with the Request trigger and endpoint URL in the "HTTP POST URL" property.](./media/create-single-tenant-workflows-azure-portal/find-request-trigger-url.png)

For this example, the URL looks like this:
- `https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX`
+ **`https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX`**
> [!TIP]
> You can also find the endpoint URL on your logic app's **Overview** pane in the **Workflow URL** property.
In this example, the workflow runs when the Request trigger receives an inbound
1. On the **Create New** pane, under **Building Blocks**, select **Request**.
- 1. In the **Save Request** window, under **Request name**, provide a name for the request, for example, `Test workflow trigger`.
+ 1. In the **Save Request** window, under **Request name**, provide a name for the request, for example, **Test workflow trigger**.
1. Under **Select a collection or folder to save to**, select **Create Collection**.
- 1. Under **All Collections**, provide a name for the collection to create for organizing your requests, press Enter, and select **Save to <*collection-name*>**. This example uses `Logic Apps requests` as the collection name.
+ 1. Under **All Collections**, provide a name for the collection to create for organizing your requests, press Enter, and select **Save to <*collection-name*>**. This example uses **Logic Apps requests** as the collection name.
In the Postman app, the request pane opens so that you can send a request to the endpoint URL for the Request trigger.
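If you prefer a command-line client instead of Postman, you can send the same request with a tool such as curl. The following is only a sketch; replace the URL with the **HTTP POST URL** value that you copied from your own Request trigger.

```bash
# Send an empty JSON payload to the callable endpoint created by the Request trigger.
curl -X POST \
  "https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>" \
  -H "Content-Type: application/json" \
  -d '{}'
```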
For a stateful workflow, after each workflow run, you can view the run history,
| **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
| **Cancelled** | The run was triggered and started but received a cancel request. |
| **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
- | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
+ | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
- | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
+ | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
| **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |

1. To review the status for each step in a run, select the run that you want to review.
For a stateful workflow, after each workflow run, you can view the run history,
| **Cancelled** | The action was running but received a cancel request. |
| **Failed** | The action failed. |
| **Running** | The action is currently running. |
- | **Skipped** | The action was skipped because its `runAfter` conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
+ | **Skipped** | The action was skipped because its **runAfter** conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
| **Succeeded** | The action succeeded. |
| **Succeeded with retries** | The action succeeded but only after a single or multiple retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
| **Timed out** | The action stopped due to the timeout limit specified by that action's settings. |
For a stateful workflow, you can review the trigger history for each run, includ
1. To review a specific trigger history, select the ID for that run.
+## Best practices and recommendations
+
+For optimal designer responsiveness and performance, review and follow these guidelines:
+
+- Use no more than 50 actions per workflow. Exceeding this number of actions can slow designer performance.
+
+- Consider splitting business logic into multiple workflows where necessary.
+
+- Have no more than 10-15 workflows per logic app resource.
+ <a name="enable-open-application-insights"></a> ## Enable or open Application Insights after deployment
After Application Insights opens, you can review various metrics for your logic
To debug a stateless workflow more easily, you can enable the run history for that workflow, and then disable the run history when you're done. Follow these steps for the Azure portal, or if you're working in Visual Studio Code, see [Create stateful and stateless workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless).
-1. In the [Azure portal](https://portal.azure.com), open your **Logic App (Standard)** resource.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
1. On the logic app's menu, under **Settings**, select **Configuration**.
To debug a stateless workflow more easily, you can enable the run history for th
1. On the **Add/Edit application setting** pane, in the **Name** box, enter this operation option name:
- `Workflows.{yourWorkflowName}.OperationOptions`
-
-1. In the **Value** box, enter the following value: `WithStatelessRunHistory`
+ **Workflows.{*yourWorkflowName*}.OperationOptions**
- For example:
+1. In the **Value** box, enter the following value: **WithStatelessRunHistory**
- ![Screenshot that shows the Azure portal and Logic App (Standard) resource with the "Configuration" > "New application setting" < "Add/Edit application setting" pane open and the "Workflows.{yourWorkflowName}.OperationOptions" option set to "WithStatelessRunHistory".](./media/create-single-tenant-workflows-azure-portal/stateless-operation-options-run-history.png)
+ ![Screenshot showing Standard logic app and pane named Add/Edit application setting with Workflows.{yourWorkflowName}.OperationOptions set to WithStatelessRunHistory.](./media/create-single-tenant-workflows-azure-portal/stateless-operation-options-run-history.png)
1. To finish this task, select **OK**. On the **Configuration** pane toolbar, select **Save**.
-1. To disable the run history when you're done, either set the `Workflows.{yourWorkflowName}.OperationOptions`property to `None`, or delete the property and its value.
+1. To disable the run history when you're done, either set the property named **Workflows.{*yourWorkflowName*}.OperationOptions** to **None**, or delete the property and its value.
<a name="view-connections"></a>
When you create connections within a workflow using [managed connectors](../conn
| **API Connections** | Connections created by managed connectors | | **Service Provider Connections** | Connections created by built-in connectors based on the service provider interface implementation. a specific connection instance, which shows more information about that connection. To view the selected connection's underlying resource definition, select **JSON View**. | | **JSON View** | The underlying resource definitions for all connections in the logic app |
- |||
<a name="delete-from-designer"></a>
To delete an item in your workflow from the designer, follow any of these steps:
* Select the item, and press the delete key. To confirm, select **OK**.
-* Select the item so that details pane opens for that item. In the pane's upper right corner, open the ellipses (**...**) menu, and select **Delete**. To confirm, select **OK**.
+* Select the item so that the information pane opens for that item. In the pane's upper right corner, open the ellipses (**...**) menu, and select **Delete**. To confirm, select **OK**.
- ![Screenshot that shows a selected item on designer with the opened details pane plus the selected ellipses button and "Delete" command.](./media/create-single-tenant-workflows-azure-portal/delete-item-from-designer.png)
+ ![Screenshot that shows a selected item on designer with the opened information pane plus the selected ellipses button and "Delete" command.](./media/create-single-tenant-workflows-azure-portal/delete-item-from-designer.png)
> [!TIP] > If the ellipses menu isn't visible, expand your browser window wide enough so that
- > the details pane shows the ellipses (**...**) button in the upper right corner.
+ > the information pane shows the ellipses (**...**) button in the upper right corner.
<a name="restart-stop-start"></a>
Stopping a logic app affects workflow instances in the following ways:
You can stop or start multiple logic apps at the same time, but you can't restart multiple logic apps without stopping them first.
-1. In the Azure portal's main search box, enter `logic apps`, and select **Logic apps**.
+1. In the Azure portal's main search box, enter **logic apps**, and select **Logic apps**.
1. On the **Logic apps** page, review the logic app's **Status** column.
You can [delete a single or multiple logic apps at the same time](#delete-logic-
Deleting a logic app cancels in-progress and pending runs immediately, but doesn't run cleanup tasks on the storage used by the app.
-1. In the Azure portal's main search box, enter `logic apps`, and select **Logic apps**.
+1. In the Azure portal's main search box, enter **logic apps**, and select **Logic apps**.
1. From the **Logic apps** list, in the checkbox column, select a single or multiple logic apps to delete. On the toolbar, select **Delete**.
-1. When the confirmation box appears, enter `yes`, and select **Delete**.
+1. When the confirmation box appears, enter **yes**, and select **Delete**.
1. To confirm whether your operation succeeded or failed, on the main Azure toolbar, open the **Notifications** list (bell icon).
Deleting a workflow affects workflow instances in the following ways:
* Azure Logic Apps doesn't create or run new workflow instances.
-* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. To refresh the metadata, you have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
+* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. To refresh the metadata, you have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an **Unauthorized** error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions.
1. In the Azure portal, open your logic app.
Deleting a workflow affects workflow instances in the following ways:
## Recover deleted logic apps
-If you use source control, you can seamlessly redeploy a deleted **Logic App (Standard)** resource to single-tenant Azure Logic Apps. However, if you're not using source control, try the following steps to recover your deleted logic app.
+If you use source control, you can seamlessly redeploy a deleted Standard logic app resource to single-tenant Azure Logic Apps. However, if you're not using source control, try the following steps to recover your deleted logic app.
> [!NOTE]
+>
> Before you try to recover your deleted logic app, review these considerations: >
-> * You can recover only deleted **Logic App (Standard)** resources that use the **Workflow Standard** hosting plan.
-> You can't recover deleted **Logic App (Consumption)** resources.
+> * You can recover only deleted Standard logic app resources that use the **Workflow Standard**
+> hosting plan. You can't recover deleted Consumption logic app resources.
> > * If your workflow starts with the Request trigger, the callback URL for the recovered logic app differs from the URL for the deleted logic app. >
If you use source control, you can seamlessly redeploy a deleted **Logic App (St
1. On the **Access keys** page, copy the account's primary connection string, and save for later use, for example:
- `DefaultEndpointsProtocol=https;AccountName=<storageaccountname>;AccountKey=<accesskey>;EndpointSuffix=core.windows.net`
+ **DefaultEndpointsProtocol=https;AccountName=<*storage-account-name*>;AccountKey=<*access-key*>;EndpointSuffix=core.windows.net**
1. On the storage account menu, under **Data storage**, select **File shares**, copy the name for the file share associated with your logic app, and save for later use.
-1. Create a new **Logic App (Standard)** resource using the same hosting plan and pricing tier. You can either use a new name or reuse the name from the deleted logic app.
+1. Create a new Standard logic app resource using the same hosting plan and pricing tier. You can either use a new name or reuse the name from the deleted logic app.
1. Before you continue, stop the logic app. From the logic app menu, select **Overview**. On the **Overview** page toolbar, select **Stop**.
If you use source control, you can seamlessly redeploy a deleted **Logic App (St
| App setting | Replacement value | |-|-|
- | `AzureWebJobsStorage` | Replace the existing value with the previously copied connection string from your storage account. |
- | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Replace the existing value with the previously copied string from your storage account. |
- | `WEBSITE_CONTENTSHARE` | Replace the existing value with the previously copied file share name. |
+ | **AzureWebJobsStorage** | Replace the existing value with the previously copied connection string from your storage account. |
+ | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | Replace the existing value with the previously copied string from your storage account. |
+ | **WEBSITE_CONTENTSHARE** | Replace the existing value with the previously copied file share name. |
1. On your logic app menu, under **Workflows**, select **Connections**.
If you use source control, you can seamlessly redeploy a deleted **Logic App (St
### New triggers and actions are missing from the designer picker for previously created workflows
-Single-tenant Azure Logic Apps supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer for you to select if your logic app uses an outdated version of the extension bundle, `Microsoft.Azure.Functions.ExtensionBundle.Workflows`.
+Single-tenant Azure Logic Apps supports built-in actions for Azure Function Operations, Liquid Operations, and XML Operations, such as **XML Validation** and **Transform XML**. However, for previously created logic apps, these actions might not appear in the designer for you to select if your logic app uses an outdated version of the extension bundle, **Microsoft.Azure.Functions.ExtensionBundle.Workflows**.
To fix this problem, follow these steps to delete the outdated version so that the extension bundle can automatically update to the latest version. > [!NOTE]
-> This specific solution applies only to **Logic App (Standard)** resources that you create using
+>
+> This specific solution applies only to Standard logic app resources that you create using
> the Azure portal, not the logic apps that you create and deploy using Visual Studio Code and the > Azure Logic Apps (Standard) extension. See [Supported triggers and actions are missing from the designer in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#missing-triggers-actions).
To fix this problem, follow these steps to delete the outdated version so that t
1. Browse to the following folder, which contains versioned folders for the existing bundle:
- `...\home\data\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows`
+ **...\home\data\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle.Workflows**
-1. Delete the version folder for the existing bundle. In the console window, you can run this command where you replace `{bundle-version}` with the existing version:
+1. Delete the version folder for the existing bundle. In the console window, you can run this command where you replace **{*bundle-version*}** with the existing version:
`rm -rf {bundle-version}` For example: `rm -rf 1.1.3` > [!TIP]
- > If you get an error such as "permission denied" or "file in use", refresh the page in your browser,
- > and try the previous steps again until the folder is deleted.
+ >
+ > If you get an error such as **"permission denied"** or **"file in use"**, refresh the
+ > page in your browser, and try the previous steps again until the folder is deleted.
1. In the Azure portal, return to your logic app's **Overview** page, and select **Restart**.
To fix this problem, follow these steps to delete the outdated version so that t
We'd like to hear from you about your experiences with this scenario! * For bugs or problems, [create your issues in GitHub](https://github.com/Azure/logicapps/issues).
-* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
+* For questions, requests, comments, and other feedback, [use this feedback form](https://aka.ms/lafeedback).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
* Secure your compute instance with **[No public IP](./how-to-secure-training-vnet.md)**. * The compute instance is also a secure training compute target similar to [compute clusters](how-to-create-attach-compute-cluster.md), but it's single node. * You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of-preview)**.
-* You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance as per your needs.
+* You can also **[use a setup script](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance as per your needs.
* To save on costs, **[create a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop)** to automatically start and stop the compute instance, or [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview)
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
In this article, you learn how to:
* [Create a schedule](#schedule-automatic-start-and-stop) to automatically start and stop the compute instance * [Enable idle shutdown](#enable-idle-shutdown-preview)
-You can also [use a setup script (preview)](how-to-customize-compute-instance.md) to create the compute instance with your own custom environment.
+You can also [use a setup script](how-to-customize-compute-instance.md) to create the compute instance with your own custom environment.
Compute instances can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
machine-learning How To Customize Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-customize-compute-instance.md
Title: Customize compute instance with a script (preview)
+ Title: Customize compute instance with a script
description: Create a customized compute instance, using a startup script. Use the compute instance as your development environment, or as compute target for dev/test purposes.
Some examples of what you can do in a setup script:
* Set environment variables * Install JupyterLab extensions - ## Create the setup script The setup script is a shell script, which runs as `rootuser`. Create or upload the script into your **Notebooks** files:
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
Azure Machine Learning can deploy a trained model as a web service. The web serv
> The information in this article is specific to the deployment of the model. It provides information on the supported deployment configurations that allow the model to be used by Cognitive Search. > > For information on how to configure Cognitive Search to use the deployed model, see the [Build and deploy a custom skill with Azure Machine Learning](../search/cognitive-search-tutorial-aml-custom-skill.md) tutorial.
->
-> For the sample that the tutorial is based on, see [https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill).
When deploying a model for use with Azure Cognitive Search, the deployment must meet the following requirements:
When deploying a model for use with Azure Cognitive Search, the deployment must
* A Python development environment with the Azure Machine Learning SDK installed. For more information, see [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
-* A registered model. If you do not have a model, use the example notebook at [https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill).
+* A registered model.
* A general understanding of [How and where to deploy models](v1/how-to-deploy-and-where.md).
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
az ml workspace update --name myworkspace --resource-group myresourcegroup --ima
* [Enable Azure Container Registry (ACR)](https://aka.ms/azureml/environment/acr-private-endpoint) * [How To Use Environments](https://aka.ms/azureml/environment/how-to-use-environments)
+### Unexpected Dockerfile Format
+<!--issueDescription-->
+This issue can happen when your Dockerfile is formatted incorrectly.
+
+**Potential causes:**
+* Your Dockerfile contains invalid syntax
+* Your Dockerfile contains characters that aren't compatible with UTF-8
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+* Ensure that your Dockerfile is formatted correctly and encoded in UTF-8
+
+**Resources**
+* [Dockerfile format](https://docs.docker.com/engine/reference/builder/#format)
+ ## *Docker pull issues* ### Failed to pull Docker image <!--issueDescription-->
This issue can happen when you haven't specified any targets and no makefile is
**Affected areas (symptoms):** * Failure in building environments from UI, SDK, and CLI. * Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
**Troubleshooting steps** * Ensure that you've spelled the makefile correctly
This issue can happen when you haven't specified any targets and no makefile is
**Resources** * [GNU Make](https://www.gnu.org/software/make/manual/make.html)+
+## *Copy issues*
+### File not found
+<!--issueDescription-->
+This issue can happen when Docker fails to find and copy a file.
+
+**Potential causes:**
+* Source file not found in Docker build context
+* Source file excluded by `.dockerignore`
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
<!--/issueDescription-->
+**Troubleshooting steps**
+* Ensure that the source file exists in the Docker build context
+* Ensure that the source and destination paths exist and are spelled correctly
+* Ensure that the source file isn't listed in the `.dockerignore` of the current and parent directories
+* Remove any trailing comments from the same line as the `COPY` command
+
+**Resources**
+* [Docker COPY](https://docs.docker.com/engine/reference/builder/#copy)
+* [Docker Build Context](https://docs.docker.com/engine/context/working-with-contexts/)
+ ## *Docker push issues* ### Failed to store Docker image <!--issueDescription-->
If you aren't using a virtual network, or if you've configured it correctly, tes
* For an image "helloworld", test pushing to your ACR by running `docker push helloworld` * See [Quickstart: Build and run a container image using Azure Container Registry Tasks](../container-registry/container-registry-quickstart-task-cli.md)
+## *Unknown Docker command*
+### Unknown Docker instruction
+<!--issueDescription-->
+This issue can happen when Docker doesn't recognize an instruction in the Dockerfile.
+
+**Potential causes:**
+* Unknown Docker instruction being used in Dockerfile
+* Your Dockerfile contains invalid syntax
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+* Ensure that the Docker command is valid and spelled correctly
+* Ensure there's a space between the Docker command and arguments
+* Ensure there's no unnecessary whitespace in the Dockerfile
+* Ensure that your Dockerfile is formatted correctly and encoded in UTF-8
+
+**Resources**
+* [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)
+ ## *Miscellaneous build issues* ### Build log unavailable <!--issueDescription-->
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md
The following steps prepare and configure the MySQL server hosted on-premises, i
All Data-in Replication functions are done by stored procedures. You can find all procedures at [Data-in Replication Stored Procedures](./reference-stored-procedures.md). The stored procedures can be run in the MySQL shell or MySQL Workbench.
- To link two servers and start replication, login to the target replica server in the Azure DB for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure DB for MySQL server.
+ To link two servers and start replication, log in to the target replica server in the Azure Database for MySQL service and set the external instance as the source server. This is done by using the `mysql.az_replication_change_master` stored procedure on the Azure Database for MySQL server.
```sql CALL mysql.az_replication_change_master('<master_host>', '<master_user>', '<master_password>', <master_port>, '<master_log_file>', <master_log_pos>, '<master_ssl_ca>');
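-- Sketch of the usual next step (see the Data-in Replication stored procedures reference):
-- once the source server is set, start replication on the replica.
CALL mysql.az_replication_start;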
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
> [!Important] > Packet capture is now also available for **virtual machine scale sets**. To check it out, visit [Manage packet captures in virtual machine scale sets with Azure Network Watcher using the Azure portal](network-watcher-packet-capture-manage-portal-vmss.md).
-Network Watcher variable packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communications and much more.
+Network Watcher packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, debugging client-server communications, and much more.
-Packet capture is an extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine or Virtual Machine Scale Sets instance/(s), which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored in the local disk or a storage blob.
+Packet capture is an extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine or virtual machine scale set instance(s), which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored in the local disk or a storage blob.
> [!IMPORTANT] > Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
To reduce the information in order to capture only required information, followi
|Property|Description| |||
-|**Maximum bytes per packet (bytes)** | The number of bytes from each packet that are captured, all bytes are captured if left blank. The number of bytes from each packet that are captured, all bytes are captured if left blank. If you need only the IPv4 header – indicate 34 here |
-|**Maximum bytes per session (bytes)** | Total number of bytes in that are captured, once the value is reached the session ends.|
+|**Maximum bytes per packet (bytes)** | The number of bytes captured from each packet; all bytes are captured if left blank. If you need only the IPv4 header, indicate 34 here. |
+|**Maximum bytes per session (bytes)** | Total number of bytes that are captured. Once the value is reached, the session ends.|
|**Time limit (seconds)** | Sets a time constraint on the packet capture session. The default value is 18000 seconds or 5 hours.| **Filtering (optional)**
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
Title: Create an Azure Red Hat OpenShift 4 private cluster
description: Learn how to create an Azure Red Hat OpenShift private cluster running OpenShift 4 Previously updated : 03/12/2020 Last updated : 03/17/2023 keywords: aro, openshift, az aro, red hat, cli
To create a private cluster without a public IP address, register for the featur
``` az feature register --namespace Microsoft.RedHatOpenShift --name UserDefinedRouting ```
-After you've registered the feature flag, [create the private ARO cluster](#create-the-cluster).
+After you've registered the feature flag, create the cluster [using the command above](#create-the-cluster).
Enabling this User Defined Routing option prevents a public IP address from being provisioned. User Defined Routing (UDR) allows you to create custom routes in Azure to override the default system routes or to add more routes to a subnet's route table. See [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md) to learn more.
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
These backup files can't be exported or used to create servers outside Azure Dat
## Backup frequency
-Backups on flexible servers are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily.
+Backups on flexible servers are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily. **The first snapshot is a full backup and consecutive snapshots are differential backups.**
Transaction log backups happen at varied frequencies, depending on the workload and when the WAL file is filled and ready to be archived. In general, the delay (recovery point objective, or RPO) can be up to 15 minutes.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
[Azure Database for PostgreSQL](../overview.md) powered by the PostgreSQL community edition is available in two deployment modes: -- [Single Server](../overview-single-server.md) - [Flexible Server](./overview.md) -
+- [Single Server](../overview-single-server.md)
+
In this article, we provide an overview and introduction to the core concepts of the flexible server deployment model.
One advantage of running your workload in Azure is global reach. The flexible se
| Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| China East 3 | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
-| China North 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| China East 3 | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| China North 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
In addition, consider the following points of contact as appropriate:
## Next steps Now that you've read an introduction to Azure Database for PostgreSQL flexible server deployment mode, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)+
purview How To Policies Data Owner Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-arc-sql-server.md
Previously updated : 11/23/2022 Last updated : 03/17/2023 # Provision access by data owner for Azure Arc-enabled SQL Server (preview)
Follow this link for the steps to [update or delete a data owner policy in Micro
## Test the policy
-The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
+After creating the policy, any of the Azure AD users in the Subject should now be able to connect to the data sources in the scope of the policy. To test, use SSMS or any SQL client and try to query a SQL table to which you've provided read access.
-### Force policy download
-It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in ##MS_ServerStateManager##-server role.
+If you require additional troubleshooting, see the [Next steps](#next-steps) section in this guide.
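For example, a minimal test from SSMS for a *Read* policy might look like the following sketch; the schema and table names are hypothetical, so substitute a table that's in the policy's scope.

```sql
-- Sign in with an Azure AD account listed in the policy's Subject, then try a read.
SELECT TOP (10) * FROM dbo.InventoryTable;  -- hypothetical table covered by the Read policy
```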
-```sql
Force immediate download of latest published policies
-exec sp_external_policy_refresh reload
-```
+## Role definition detail
+This section contains a reference of how relevant Microsoft Purview data policy roles map to specific actions in SQL data sources.
-### Analyze downloaded policy state from SQL
-The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*.
-
-```sql
- Lists generally supported actions
-SELECT * FROM sys.dm_server_external_policy_actions
- Lists the roles that are part of a policy published to this server
-SELECT * FROM sys.dm_server_external_policy_roles
- Lists the links between the roles and actions, could be used to join the two
-SELECT * FROM sys.dm_server_external_policy_role_actions
- Lists all Azure AD principals that were given connect permissions
-SELECT * FROM sys.dm_server_external_policy_principals
- Lists Azure AD principals assigned to a given role on a given resource scope
-SELECT * FROM sys.dm_server_external_policy_role_members
- Lists Azure AD principals, joined with roles, joined with their data actions
-SELECT * FROM sys.dm_server_external_policy_principal_assigned_actions
-```
---
-## Additional information
-
-### Policy action mapping
-
-This section contains a reference of how actions in Microsoft Purview data policies map to specific actions in Azure Arc-enabled SQL Server.
-
-| **Microsoft Purview policy action** | **Data source specific actions** |
+| **Microsoft Purview policy role definition** | **Data source specific actions** |
|-|--| ||| | *Read* |Microsoft.Sql/sqlservers/Connect |
This section contains a reference of how actions in Microsoft Purview data polic
Check blog, demo and related how-to guides * [Concepts for Microsoft Purview data owner policies](./concept-policies-data-owner.md) * [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md)
-* [Enable Microsoft Purview data owner policies on an Azure SQL DB](./how-to-policies-data-owner-azure-sql-db.md)
+* [Enable Microsoft Purview data owner policies on an Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md)
+* Doc: [Troubleshoot Microsoft Purview policies for SQL data sources](./troubleshoot-policy-sql.md)
purview How To Policies Data Owner Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-azure-sql-db.md
Previously updated : 10/31/2022 Last updated : 03/17/2023 # Provision access by data owner for Azure SQL Database (preview)
After you've registered your resources, you'll need to enable Data Use Managemen
Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this screenshot. This will enable the access policies to be used with the given Azure SQL server and all its contained databases. ![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png) ## Create and publish a data owner policy
Follow this link for the steps to [unpublish a data owner policy in Microsoft Pu
Follow this link for the steps to [update or delete a data owner policy in Microsoft Purview](how-to-policies-data-owner-authoring-generic.md#update-or-delete-a-policy). ## Test the policy
+After creating the policy, any of the Azure AD users in the Subject should now be able to connect to the data sources in the scope of the policy. To test, use SSMS or any SQL client and try to query a SQL table to which you've provided read access.
-The Azure AD Accounts referenced in the access policies should now be able to connect to any database in the server to which the policies are published.
+If you require additional troubleshooting, see the [Next steps](#next-steps) section in this guide.
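As a quick sketch, an Azure AD user covered by a *Read* policy could verify access with a simple count; `SalesLT.Customer` here is only the AdventureWorksLT sample table, so use any table in the policy's scope.

```sql
-- Connect to the database with the Azure AD account from the policy's Subject.
SELECT COUNT(*) FROM SalesLT.Customer;  -- sample/hypothetical table; replace with one in scope
```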
-### Force policy download
-It is possible to force an immediate download of the latest published policies to the current SQL database by running the following command. The minimal permission required to run it is membership in ##MS_ServerStateManager##-server role.
+## Role definition detail
+This section contains a reference of how relevant Microsoft Purview data policy roles map to specific actions in SQL data sources.
-```sql
Force immediate download of latest published policies
-exec sp_external_policy_refresh reload
-```
-
-### Analyze downloaded policy state from SQL
-The following DMVs can be used to analyze which policies have been downloaded and are currently assigned to Azure AD accounts. The minimal permission required to run them is VIEW DATABASE SECURITY STATE - or assigned Action Group *SQL Security Auditor*.
-
-```sql
- Lists generally supported actions
-SELECT * FROM sys.dm_server_external_policy_actions
- Lists the roles that are part of a policy published to this server
-SELECT * FROM sys.dm_server_external_policy_roles
- Lists the links between the roles and actions, could be used to join the two
-SELECT * FROM sys.dm_server_external_policy_role_actions
- Lists all Azure AD principals that were given connect permissions
-SELECT * FROM sys.dm_server_external_policy_principals
- Lists Azure AD principals assigned to a given role on a given resource scope
-SELECT * FROM sys.dm_server_external_policy_role_members
- Lists Azure AD principals, joined with roles, joined with their data actions
-SELECT * FROM sys.dm_server_external_policy_principal_assigned_actions
-```
-
-## Additional information
-
-### Policy action mapping
-
-This section contains a reference of how actions in Microsoft Purview data policies map to specific actions in Azure SQL DB.
-
-| **Microsoft Purview policy action** | **Data source specific actions** |
+| **Microsoft Purview policy role definition** | **Data source specific actions** |
|-|--| ||| | *Read* |Microsoft.Sql/sqlservers/Connect |
Check blog, demo and related how-to guides
* [Concepts for Microsoft Purview data owner policies](./concept-policies-data-owner.md) * [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-policies-data-owner-resource-group.md) * [Enable Microsoft Purview data owner policies on an Azure Arc-enabled SQL Server](./how-to-policies-data-owner-arc-sql-server.md)
+* Doc: [Troubleshoot Microsoft Purview policies for SQL data sources](./troubleshoot-policy-sql.md)
purview How To Policies Devops Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md
To delete a DevOps policy, ensure first that you have the Microsoft Purview Poli
## Test the DevOps policy After creating the policy, any of the Azure AD users in the Subject should now be able to connect to the data sources in the scope of the policy. To test, use SSMS or any SQL client and try to query some DMVs/DMFs. We list here a few examples. For more, you can consult the mapping of popular DMVs/DMFs in the [Microsoft Purview DevOps policies concept guide](./concept-policies-devops.md#mapping-of-popular-dmvs-and-dmfs)
+If you require additional troubleshooting, see the [Next steps](#next-steps) section in this guide.
+ ### Testing SQL Performance Monitor access If you assigned the SQL Performance Monitor role to the Subject(s) of the policy, you can issue the following commands ```sql
SELECT * FROM [databaseName].schemaName.tableName
## Role definition detail
-This section contains a reference of how actions in Microsoft Purview data policies map to specific actions in Azure SQL MI.
+This section contains a reference of how relevant Microsoft Purview data policy roles map to specific actions in SQL data sources.
-| **DevOps role definition** | **Data source specific actions** |
+| **Microsoft Purview policy role definition** | **Data source specific actions** |
|-|--| | | | | *SQL Performance Monitor* |Microsoft.Sql/sqlservers/Connect |
purview How To Policies Devops Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md
Previously updated : 03/10/2023 Last updated : 03/17/2023 # Provision access to system metadata in Azure SQL Database (preview)
After you've registered your resources, you'll need to enable Data Use Managemen
Once your data source has the **Data Use Management** toggle *Enabled*, it will look like this screenshot. This will enable the access policies to be used with the given data source ![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png) + ## Create a new DevOps policy Follow this link for the steps to [create a new DevOps policy in Microsoft Purview](how-to-policies-devops-authoring-generic.md#create-a-new-devops-policy).
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
Previously updated : 02/16/2023 Last updated : 03/17/2023 # Connect to Azure Data Lake Storage in Microsoft Purview
Once your data source has the **Data Use Management** option set to **Enabled**
### Create a policy To create an access policy for Azure Data Lake Storage Gen2, follow this guide:
-* [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy)
+* [Provision read/modify access on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy)
To create policies that cover all data sources inside a resource group or Azure subscription you can refer to [this section](register-scan-azure-multiple-sources.md#access-policy).
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
Before you can create policies, you must register the Azure Arc-enabled SQL Serv
To create an access policy for Azure Arc-enabled SQL Server, follow these guides:
-* [DevOps policy on a single Azure Arc-enabled SQL Server instance - GA](./how-to-policies-devops-arc-sql-server.md#create-a-new-devops-policy)
-* [Data owner policy on a single Azure Arc-enabled SQL Server instance - Public Preview](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy)
+* [Provision access to system health, performance and audit information in SQL Server 2022](./how-to-policies-devops-arc-sql-server.md#create-a-new-devops-policy)
+* [Provision read/modify access on a single SQL Server 2022](./how-to-policies-data-owner-arc-sql-server.md#create-and-publish-a-data-owner-policy)
To create policies that cover all data sources inside a resource group or Azure subscription, see [Discover and govern multiple Azure sources in Microsoft Purview](register-scan-azure-multiple-sources.md#access-policy).
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
Once your data source has the **Data Use Management** option set to **Enabled**
![Screenshot shows how to register a data source for policy with the option Data use management set to enable](./media/how-to-policies-data-owner-storage/register-data-source-for-policy-storage.png) ### Create a policy
-To create an access policy for Azure Blob Storage, follow this guide: [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy).
+To create an access policy for Azure Blob Storage, follow this guide: [Provision read/modify access on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy).
To create policies that cover all data sources inside a resource group or Azure subscription you can refer to [this section](register-scan-azure-multiple-sources.md#access-policy).
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
Previously updated : 02/27/2023 Last updated : 03/17/2023
Once your data source has the **Data Use Management** option set to **Enabled**
### Create a policy To create an access policy on an entire Azure subscription or resource group, follow these guides: * [DevOps policy covering all sources in a subscription or resource group](./how-to-policies-devops-resource-group.md#create-a-new-devops-policy)
-* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md#create-and-publish-a-data-owner-policy)
+* [Provision read/modify access to all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md#create-and-publish-a-data-owner-policy)
## Next steps
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Previously updated : 01/10/2023 Last updated : 03/17/2023 # Discover and govern Azure SQL Database in Microsoft Purview
After your data source has the **Data use management** option set to **Enabled**
![Screenshot that shows the panel for registering a data source for a policy, including areas for name, server name, and data use management.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png) + ### Create a policy To create an access policy for Azure SQL Database, follow these guides:
-* [Provision access to system metadata in Azure SQL Database](./how-to-policies-devops-azure-sql-db.md#create-a-new-devops-policy). Use this guide to apply a DevOps policy on a single SQL database.
-* [Provision access by data owner for Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#create-and-publish-a-data-owner-policy). Use this guide to provision access on a single SQL database account in your subscription.
-* [Resource group and subscription access provisioning by data owner](./how-to-policies-data-owner-resource-group.md). Use this guide to provision access on all enabled data sources in a resource group or across an Azure subscription. The prerequisite is that the subscription or resource group must be registered with the **Data use management** option enabled.
-* [Self-service policies for Azure SQL Database](./how-to-policies-self-service-azure-sql-db.md). Use this guide to allow data consumers to request access to data assets by using a self-service workflow.
+* [Provision access to system health, performance and audit information in Azure SQL Database](./how-to-policies-devops-azure-sql-db.md#create-a-new-devops-policy). Use this guide to apply a DevOps policy on a single SQL database.
+* [Provision read/modify access on a single Azure SQL Database](./how-to-policies-data-owner-azure-sql-db.md#create-and-publish-a-data-owner-policy). Use this guide to provision access on a single SQL database account in your subscription.
+* [Self-service access policies for Azure SQL Database](./how-to-policies-self-service-azure-sql-db.md). Use this guide to allow data consumers to request access to data assets by using a self-service workflow.
+
+To create policies that cover all data sources inside a resource group or Azure subscription, see [Discover and govern multiple Azure sources in Microsoft Purview](register-scan-azure-multiple-sources.md#access-policy).
++ ## Extract lineage (preview) <a id="lineagepreview"></a>
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Previously updated : 11/01/2022 Last updated : 03/15/2023
Currently, the Oracle service name isn't captured in the metadata or hierarchy.
### Required permissions for scan
-Microsoft Purview supports basic authentication (username and password) for scanning Oracle. The Oracle user must have read access to system tables in order to access advanced metadata. For classification, user also needs to have read permission on the tables/views to retrieve sample data.
+Microsoft Purview supports basic authentication (username and password) for scanning Oracle. The Oracle user must have read access to system tables in order to access advanced metadata.
+
+For classification, the user also needs to be the owner of the table.
+
+>[!IMPORTANT]
+>If the user is not the owner of the table, the scan will run successfully and ingest metadata, but will not identify any classifications.
The user should have permission to create a session and have the SELECT\_CATALOG\_ROLE role assigned. Alternatively, the user may have SELECT permission granted for every individual system table that this connector queries metadata from:
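As a minimal sketch of the first option, the grants could look like the following; the user name and password placeholder are hypothetical.

```sql
-- Grant session creation and catalog read access to a hypothetical scan user.
CREATE USER purview_scan IDENTIFIED BY <password>;
GRANT CREATE SESSION TO purview_scan;
GRANT SELECT_CATALOG_ROLE TO purview_scan;
```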
remote-rendering Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/models.md
The following code snippets show how to load models with either function. To loa
async void LoadModel(RenderingSession session, Entity modelParent, string storageAccount, string containerName, string assetFilePath) { // load a model that will be parented to modelParent
- var modelOptions = new LoadModelOptions(
+ var modelOptions = LoadModelOptions.CreateForBlobStorage(
storageAccount, // storage account name + '.blob.core.windows.net', e.g., 'mystorageaccount.blob.core.windows.net' containerName, // name of the container in your storage account, e.g., 'mytestcontainer' assetFilePath, // the file path to the asset within the container, e.g., 'path/to/file/myAsset.arrAsset'
remote-rendering Spatial Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/spatial-queries.md
Spatial queries are operations with which you can ask the remote rendering service which objects are located in an area. Spatial queries are frequently used to implement interactions, such as figuring out which object a user is pointing at.
-All spatial queries are evaluated on the server. Accordingly, the queries are asynchronous operations and results will arrive with a delay that depends on your network latency.
+All spatial queries are evaluated on the server. Accordingly, the queries are asynchronous operations and results arrive with a delay that depends on your network latency.
## Ray casts
-A *ray cast* is a spatial query where the runtime checks which objects are intersected by a ray, starting at a given position and pointing into a certain direction. As an optimization, a maximum ray distance is also given, to not search for objects that are too far away.
+A *ray cast* is a spatial query where the runtime checks which objects intersect a ray, starting at a given position and pointing into a certain direction. As an optimization, a maximum ray distance is also given, to not search for objects that are too far away.
Although doing hundreds of ray casts each frame is computationally feasible on the server side, each query also generates network traffic, so the number of queries per frame should be kept as low as possible.
void CastRay(ApiHandle<RenderingSession> session)
There are three hit collection modes:
-* **`Closest`:** In this mode, only the closest hit will be reported.
+* **`Closest`:** In this mode, only the closest hit is reported.
* **`Any`:** Prefer this mode when all you want to know is *whether* a ray would hit anything, but don't care what was hit exactly. This query can be considerably cheaper to evaluate, but also has only few applications. * **`All`:** In this mode, all hits along the ray are reported, sorted by distance. Don't use this mode unless you really need more than the first hit. Limit the number of reported hits with the `MaxHits` option.
A Hit has the following properties:
* **`HitPosition`:** The world space position where the ray intersected the object. * **`HitNormal`:** The world space surface normal of the mesh at the position of the intersection. * **`DistanceToHit`:** The distance from the ray starting position to the hit.
-* **`HitType`:** What was hit by the ray: `TriangleFrontFace`, `TriangleBackFace` or `Point`. By default, [ARR renders double sided](single-sided-rendering.md#prerequisites) so the triangles the user sees aren't necessarily front facing. If you want to differentiate between `TriangleFrontFace` and `TriangleBackFace` in your code, make sure your models are authored with correct face directions first.
+* **`HitType`:** What is hit by the ray: `TriangleFrontFace`, `TriangleBackFace` or `Point`. By default, [ARR renders double sided](single-sided-rendering.md#prerequisites) so the triangles the user sees aren't necessarily front facing. If you want to differentiate between `TriangleFrontFace` and `TriangleBackFace` in your code, make sure your models are authored with correct face directions first.
## Spatial queries
void QueryAABB(ApiHandle<RenderingSession> session)
## API documentation
-* [C# RenderingConnection.RayCastQueryAabbAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastqueryaabbasync)
-* [C# RenderingConnection.RayCastQueryObbAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastqueryobbasync)
-* [C# RenderingConnection.RayCastQuerySphereAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastquerysphereasync)
-* [C++ RenderingConnection::RayCastQueryAabbAsync()](/cpp/api/remote-rendering/renderingconnection#raycastqueryaabbasync)
-* [C++ RenderingConnection::RayCastQueryObbAsync()](/cpp/api/remote-rendering/renderingconnection#raycastqueryobbasync)
-* [C++ RenderingConnection::RayCastQuerySphereAsync()](/cpp/api/remote-rendering/renderingconnection#raycastquerysphereasync)
+* [C# RenderingConnection.RayCastQueryAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastqueryasync)
+* [C# RenderingConnection.SpatialQueryAabbAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.spatialqueryaabbasync)
+* [C# RenderingConnection.SpatialQuerySphereAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.spatialquerysphereasync)
+* [C# RenderingConnection.SpatialQueryObbAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.spatialqueryobbasync)
+* [C++ RenderingConnection::RayCastQueryAsync()](/cpp/api/remote-rendering/renderingconnection#raycastqueryasync)
+* [C++ RenderingConnection::SpatialQueryAabbAsync()](/cpp/api/remote-rendering/renderingconnection#spatialqueryaabbasync)
+* [C++ RenderingConnection::SpatialQuerySphereAsync()](/cpp/api/remote-rendering/renderingconnection#spatialquerysphereasync)
+* [C++ RenderingConnection::SpatialQueryObbAsync()](/cpp/api/remote-rendering/renderingconnection#spatialqueryobbasync)
## Next steps
remote-rendering Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/security/security.md
Both the "AccountID + AccountKey" and the "URL + SAS Token" are both essentially
Azure Remote Rendering can securely access the contents of your Azure Blob Storage with the correct configuration. See [How-to: Link storage accounts](../../../how-tos/create-an-account.md#link-storage-accounts) to configure your Azure Remote Rendering instance with your blob storage accounts.
-When using a linked blob storage, you'll use slightly different methods for loading models:
+When using linked blob storage, you use slightly different methods for loading models:
```cs var loadModelParams = new LoadModelFromSasOptions(modelPath, modelEntity);
var task = ARRSessionService.CurrentActiveSession.Connection.LoadModelFromSasAsy
The above lines use the `FromSas` version of the params and session action. They must be converted to the non-SAS versions: ```cs
-var loadModelParams = new LoadModelOptions(storageAccountPath, blobName, modelPath, modelEntity);
+var loadModelParams = LoadModelOptions.CreateForBlobStorage(storageAccountPath, blobName, modelPath, modelEntity);
var task = ARRSessionService.CurrentActiveSession.Connection.LoadModelAsync(loadModelParams); ```
Let's modify **RemoteRenderingCoordinator** to load a custom model, from a linke
} //Load a model that will be parented to the entity
- var loadModelParams = new LoadModelOptions($"{storageAccountName}.blob.core.windows.net", blobName, modelPath, modelEntity);
+ var loadModelParams = LoadModelOptions.CreateForBlobStorage($"{storageAccountName}.blob.core.windows.net", blobName, modelPath, modelEntity);
var loadModelAsync = ARRSessionService.CurrentActiveSession.Connection.LoadModelAsync(loadModelParams, progress); var result = await loadModelAsync; return modelEntity;
Let's modify **RemoteRenderingCoordinator** to load a custom model, from a linke
For the most part, this code is identical to the original `LoadModel` method; however, we've replaced the SAS version of the method calls with the non-SAS versions.
- The additional inputs `storageAccountName` and `blobName` have also been added to the arguments. We'll call this new **LoadModel** method from another method similar to the very first **LoadTestModel** method we created in the first tutorial.
+ The additional inputs `storageAccountName` and `blobName` have also been added to the arguments. We call this new **LoadModel** method from another method, similar to the **LoadTestModel** method we created in the first tutorial.
1. Add the following method to **RemoteRenderingCoordinator** just after **LoadTestModel**
Let's modify **RemoteRenderingCoordinator** to load a custom model, from a linke
} ```
- This code adds three additional string variables to your **RemoteRenderingCoordinator** component.
+ This code adds three extra string variables to your **RemoteRenderingCoordinator** component.
![Screenshot that highlights the Storage Account Name, Blob Container Name, and Model Path of the RemoteRenderingCoordinator component.](./media/storage-account-linked-model.png) 1. Add your values to the **RemoteRenderingCoordinator** component. Having followed the [Quickstart for model conversion](../../../quickstarts/convert-model.md), your values should be:
- * **Storage Account Name**: Your storage account name, the globally unique name you choose for your storage account. In the quickstart this was *arrtutorialstorage*, your value will be different.
+ * **Storage Account Name**: Your storage account name, the globally unique name you chose for your storage account. In the quickstart, this was *arrtutorialstorage*; your value is different.
* **Blob Container Name**: arroutput, the Blob Storage Container
- * **Model Path**: The combination of the "outputFolderPath" and the "outputAssetFileName" defined in the *arrconfig.json* file. In the quickstart this was "outputFolderPath":"converted/robot", "outputAssetFileName": "robot.arrAsset". Which would result in a Model Path value of "converted/robot/robot.arrAsset", your value will be different.
+ * **Model Path**: The combination of the "outputFolderPath" and the "outputAssetFileName" defined in the *arrconfig.json* file. In the quickstart, this was "outputFolderPath":"converted/robot", "outputAssetFileName": "robot.arrAsset", which results in a Model Path value of "converted/robot/robot.arrAsset"; your value is different.
>[!TIP] > If you [run the **Conversion.ps1**](../../../quickstarts/convert-model.md#run-the-conversion) script without the "-UseContainerSas" argument, the script will output all of the above values for you instead of the SAS token. ![Linked Model](./media/converted-output.png)
We have one more "password", the AccountKey, to remove from the local applicatio
## Azure Active Directory (Azure AD) authentication
-AAD authentication will allow you to determine which individuals or groups are using ARR in a more controlled way. ARR has built in support for accepting [Access Tokens](../../../../active-directory/develop/access-tokens.md) instead of using an Account Key. You can think of Access Tokens as a time-limited, user-specific key, that only unlocks certain parts of the specific resource it was requested for.
+AAD authentication allows you to determine which individuals or groups are using ARR in a more controlled way. ARR has built-in support for accepting [Access Tokens](../../../../active-directory/develop/access-tokens.md) instead of using an Account Key. You can think of Access Tokens as a time-limited, user-specific key that only unlocks certain parts of the specific resource it was requested for.
The **RemoteRenderingCoordinator** script has a delegate named **ARRCredentialGetter**, which holds a method that returns a **SessionConfiguration** object used to configure remote session management. We can assign a different method to **ARRCredentialGetter**, allowing us to use an Azure sign-in flow to generate a **SessionConfiguration** object that contains an Azure Access Token. This Access Token is specific to the user who signs in.
The **RemoteRenderingCoordinator** script has a delegate named **ARRCredentialGe
>[!NOTE] > An *Owner* role is not sufficient to manage sessions via the client application. For every user you want to grant the ability to manage sessions, you must provide the **Remote Rendering Client** role. For every user you want to be able to manage sessions and convert models, you must provide the **Remote Rendering Administrator** role.
-With the Azure side of things in place, we now need to modify how your code connects to the AAR service. We do that by implementing an instance of **BaseARRAuthentication**, which will return a new **SessionConfiguration** object. In this case, the account info will be configured with the Azure Access Token.
+With the Azure side of things in place, we now need to modify how your code connects to the ARR service. We do that by implementing an instance of **BaseARRAuthentication**, which returns a new **SessionConfiguration** object. In this case, the account info is configured with the Azure Access Token.
1. Create a new script named **AADAuthentication** and replace its code with the following:
With the Azure side of things in place, we now need to modify how your code conn
>[!NOTE] > This code is by no means complete and is not ready for a commercial application. For example, at a minimum you'll likely want to add the ability to sign out too. This can be done using the `Task RemoveAsync(IAccount account)` method provided by the client application. This code is only intended for tutorial use; your implementation will be specific to your application.
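As a rough sketch of the sign-out capability mentioned in the note, the following MSAL.NET helper removes all cached accounts. Iterating over every account is an assumption about the desired behavior, and the helper name is hypothetical.

```cs
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class SignOutExample
{
    // Remove every cached account from the client application, effectively signing the user out.
    public static async Task SignOutAsync(IPublicClientApplication app)
    {
        foreach (var account in await app.GetAccountsAsync())
        {
            await app.RemoveAsync(account);
        }
    }
}
```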
-The code first tries to get the token silently using **AquireTokenSilent**. This will be successful if the user has previously authenticated this application. If it's not successful, move on to a more user-involved strategy.
+The code first tries to get the token silently using **AcquireTokenSilent**. This succeeds if the user has previously authenticated this application. If it's not successful, the code moves on to a more user-involved strategy.
For this code, we're using the [device code flow](../../../../active-directory/develop/v2-oauth2-device-code.md) to obtain an Access Token. This flow allows the user to sign in to their Azure account on a computer or mobile device and have the resulting token sent back to the HoloLens application.
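To make the silent-then-device-code pattern concrete, here's a hedged MSAL.NET sketch. It isn't the tutorial's **AADAuthentication** class; the client ID, tenant ID, and the Mixed Reality token scope shown are placeholders or assumptions you should verify against your app registration.

```cs
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class DeviceCodeTokenExample
{
    // Assumed scope for the Mixed Reality security token service; verify for your setup.
    private static readonly string[] Scopes = { "https://sts.mixedreality.azure.com/.default" };

    public static async Task<AuthenticationResult> GetTokenAsync(string clientId, string tenantId)
    {
        IPublicClientApplication app = PublicClientApplicationBuilder
            .Create(clientId)
            .WithAuthority(AzureCloudInstance.AzurePublic, tenantId)
            .Build();

        var accounts = await app.GetAccountsAsync();
        try
        {
            // Succeeds silently if the user has previously authenticated this application.
            return await app.AcquireTokenSilent(Scopes, accounts.FirstOrDefault()).ExecuteAsync();
        }
        catch (MsalUiRequiredException)
        {
            // Fall back to the device code flow: the user signs in on another device or browser.
            return await app.AcquireTokenWithDeviceCode(Scopes, deviceCode =>
            {
                Console.WriteLine(deviceCode.Message); // Displays the verification URL and code.
                return Task.CompletedTask;
            }).ExecuteAsync();
        }
    }
}
```

The `AuthenticationResult.AccessToken` returned here is what would feed into the **SessionConfiguration** described earlier.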
With this change, the current state of the application and its access to your Az
![Even better security](./media/security-three.png)
-Since the User Credentials aren't stored on the device (or in this case even entered on the device), their exposure risk is very low. Now the device is using a user-specific, time-limited Access Token to access ARR, which uses access control (IAM) to access the Blob Storage. These two steps have completely removed the "passwords" from the source code and increased security considerably. However, this isn't the most security available, moving the model and session management to a web service will improve security further. Additional security considerations are discussed in the [Commercial Readiness](../commercial-ready/commercial-ready.md) chapter.
+Since the User Credentials aren't stored on the device (or in this case even entered on the device), their exposure risk is low. Now the device is using a user-specific, time-limited Access Token to access ARR, which uses access control (IAM) to access the Blob Storage. These two steps have removed the "passwords" from the source code and increased security considerably. However, this isn't the most secure setup available; moving the model and session management to a web service will improve security further. Additional security considerations are discussed in the [Commercial Readiness](../commercial-ready/commercial-ready.md) chapter.
### Testing AAD Auth
-In the Unity Editor, when AAD Auth is active, you will need to authenticate every time you launch the application. On device, the authentication step will happen the first time and only be required again when the token expires or is invalidated.
+In the Unity Editor, when AAD Auth is active, you'll need to authenticate every time you launch the application. On device, the authentication step happens the first time and is only required again when the token expires or is invalidated.
1. Add the **AAD Authentication** component to the **RemoteRenderingCoordinator** GameObject.
In the Unity Editor, when AAD Auth is active, you will need to authenticate ever
1. Fill in your values for the Client ID and the Tenant ID. These values can be found in your App Registration's Overview Page: * **Active Directory Application Client ID** is the *Application (client) ID* found in your AAD app registration (see image below).
- * **Azure Tenant ID** is the *Directory (tenant) ID* found in your AAD app registration ( see image below).
+ * **Azure Tenant ID** is the *Directory (tenant) ID* found in your AAD app registration (see image below).
* **Azure Remote Rendering Domain** is the same domain you've been using in the **RemoteRenderingCoordinator**'s Remote Rendering Domain. * **Azure Remote Rendering Account ID** is the same **Account ID** you've been using for **RemoteRenderingCoordinator**. * **Azure Remote Rendering Account Domain** is the same **Account Domain** you've been using in the **RemoteRenderingCoordinator**.
In the Unity Editor, when AAD Auth is active, you will need to authenticate ever
:::image type="content" source="./media/azure-active-directory-app-overview.png" alt-text="Screenshot that highlights the Application (client) ID and Directory (tenant) ID."::: 1. Press Play in the Unity Editor and consent to running a session.
- Since the **AAD Authentication** component has a view controller, its automatically hooked up to display a prompt after the session authorization modal panel.
+ Since the **AAD Authentication** component has a view controller, it's automatically hooked up to display a prompt after the session authorization modal panel.
1. Follow the instructions found in the panel to the right of the **AppMenu**. You should see something similar to this: ![Illustration that shows the instruction panel that appears to the right of the AppMenu.](./media/device-flow-instructions.png)
After this point, everything in the application should proceed normally. Check t
## Build to device
-If you're building an application using MSAL to device, you'll need to include a file in your project's **Assets** folder. This will help the compiler build the application correctly using the *Microsoft.Identity.Client.dll* included in the **Tutorial Assets**.
+If you're building an application that uses MSAL for deployment to a device, you need to include a file in your project's **Assets** folder. This file helps the compiler build the application correctly using the *Microsoft.Identity.Client.dll* included in the **Tutorial Assets**.
1. Add a new file in **Assets** named **link.xml** 1. Add the following to the file:
Follow the steps found in [Quickstart: Deploy Unity sample to HoloLens - Build t
## Next steps
-The remainder of this tutorial set contains conceptual topics for creating a production-ready application that uses Azure Remote Rendering.
+The remainder of this tutorial set contains conceptual articles for creating a production-ready application that uses Azure Remote Rendering.
> [!div class="nextstepaction"] > [Next: Commercial Readiness](../commercial-ready/commercial-ready.md)
resource-mover About Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/about-move-process.md
Title: About the move process in Azure Resource Mover
description: Learn about the process for moving resources across regions with Azure Resource Mover -+ Last updated 02/02/2023
sap Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-sap-parameters.md
description: Define SAP parameters for Ansible
Previously updated : 10/19/2022 Last updated : 03/17/2023
-# Configure sap-parameters file
+# Configure SAP installation parameters
+
+The Ansible playbooks use a combination of default parameters and parameters defined by the Terraform deployment for the SAP installation.
++
+## Default Parameters
+
+This table contains the default parameters defined by the framework.
+
+### User IDs
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Parameter | Description | Default Value | Type |
+> | - | -- | - | - |
+> | `sapadm_uid` | The UID for the sapadm account. | 2100 | Required |
+> | `sidadm_uid` | The UID for the sidadm account. | 2003 | Required |
+> | `hdbadm_uid` | The UID for the hdbadm account. | 2200 | Required |
+> | `sapinst_gid` | The GID for the sapinst group. | 2001 | Required |
+> | `sapsys_gid` | The GID for the sapsys group. | 2000 | Required |
+> | `hdbshm_gid` | The GID for the hdbshm group. | 2002 | Required |
+> | | | | |
+> | `db2sidadm_uid` | The UID for the db2sidadm account. | 3004 | Required |
+> | `db2sapsid_uid` | The UID for the db2sapsid account. | 3005 | Required |
+> | `db2sysadm_gid` | The GID for the db2sysadm group. | 3000 | Required |
+> | `db2sysctrl_gid` | The GID for the db2sysctrl group. | 3001 | Required |
+> | `db2sysmaint_gid` | The GID for the db2sysmaint group. | 3002 | Required |
+> | `db2sysmon_gid` | The GID for the db2sysmon group. | 2003 | Required |
+> | | | | |
+> | `orasid_uid` | The UID for the orasid account. | 3100 | Required |
+> | `oracle_uid` | The UID for the oracle account. | 3101 | Required |
+> | `observer_uid` | The UID for the observer account. | 4000 | Required |
+> | `dba_gid` | The GID for the dba group. | 3100 | Required |
+> | `oper_gid` | The GID for the oper group. | 3101 | Required |
+> | `asmoper_gid` | The GID for the asmoper group. | 3102 | Required |
+> | `asmadmin_gid` | The GID for the asmadmin group. | 3103 | Required |
+> | `asmdba_gid` | The GID for the asmdba group. | 3104 | Required |
+> | `oinstall_gid` | The GID for the oinstall group. | 3105 | Required |
+> | `backupdba_gid` | The GID for the backupdba group. | 3106 | Required |
+> | `dgdba_gid` | The GID for the dgdba group. | 3107 | Required |
+> | `kmdba_gid` | The GID for the kmdba group. | 3108 | Required |
+> | `racdba_gid` | The GID for the racdba group. | 3108 | Required |
-Ansible will use a file called sap-parameters.yaml that will contain the parameters required for the Ansible playbooks. The file is a .yaml file.
## Parameters
-The table below contains the parameters stored in the sap-parameters.yaml file, most of the values are pre-populated via the Terraform deployment.
+This table contains the parameters stored in the sap-parameters.yaml file; most of the values are prepopulated via the Terraform deployment.
### Infrastructure
The table below contains the parameters stored in the sap-parameters.yaml file,
### Disks
-Disks is a dictionary defining the disks of all the virtual machines in the SID.
+Disks is a dictionary that describes the disks of all the virtual machines in the SAP system.
> [!div class="mx-tdCol2BreakAll "] > | attribute | Description | Type |
Disks is a dictionary defining the disks of all the virtual machines in the SID.
> | `type` | This attribute is used to group the disks; each disk of the same type is added to the LVM on the virtual machine | Required |
-See sample below
+Example of the disks dictionary:
```yaml disks:
disks:
### Oracle support
-From the v3.4 release, it is possible to deploy SAP on Azure systems in a Shared Home configuration using an Oracle database backend. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](../workloads/dbms-guide-oracle.md).
+From the v3.4 release, it's possible to deploy SAP on Azure systems in a Shared Home configuration using an Oracle database backend. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](../workloads/dbms-guide-oracle.md).
To install the Oracle backend using the SAP on Azure Deployment Automation Framework, you need to provide the following parameters:
sap Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deployment-framework.md
# SAP on Azure Deployment Automation Framework
-The [SAP on Azure Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB. The framework uses [Terraform](https://www.terraform.io/) for infrastructure deployment, and [Ansible](https://www.ansible.com/) for the operating system and application configuration. The systems can be deployed on any of the SAP-supported operating system versions and deployed into any Azure region.
+The [SAP on Azure Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing, and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB by using [Terraform](https://www.terraform.io/) for infrastructure deployment and [Ansible](https://www.ansible.com/) for the operating system and application configuration. The systems can be deployed on any of the SAP-supported operating system versions and into any Azure region.
-Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure.
+Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure.
[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can automate deployment and configuration of resources in your environment. The [automation framework](https://github.com/Azure/sap-automation) has two main components:-- Deployment infrastructure (control plane) -- SAP Infrastructure (SAP Workload)
+- Deployment infrastructure (control plane, hub component)
+- SAP Infrastructure (SAP Workload, spoke component)
-You'll use the control plane of the SAP on Azure Deployment Automation Framework to deploy the SAP Infrastructure and the SAP application infrastructure. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
+You'll use the control plane of the SAP on Azure Deployment Automation Framework to deploy the SAP Infrastructure and the SAP application. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
> [!NOTE] > This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance.
->
+>
> This automation framework also follows the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/).
-The automation framework can be used to deploy the following SAP architectures:
+The automation framework can be used to deploy the following SAP architectures:
- Standalone - Distributed
The dependency between the control plane and the application plane is illustrate
## About the control plane
-The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
+The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
The control plane provides the following services-- Terraform Deployment Infrastructure-- Ansible Controller-- Persistent storage for the Terraform state files-- Persistent storage for the Downloaded SAP Software-- Secure storage for deployment credentials-- Private DNS zone (optional)-
-The control plane is typically a regional resource deployed in to the hub subscription in a [hub and spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
-
-The key components of the control plane are:
-- Deployment virtual machine -- Storage account for Terraform state files-- Storage account for SAP installation media-- Azure Key Vault for deployment credentials
+- Deployment agents for running:
+ - Terraform Deployment
+ - Ansible configuration
+- Persistent storage for the Terraform state files
+- Persistent storage for the Downloaded SAP Software
+- Azure Key Vault for secure storage of deployment credentials
+- Private DNS zone (optional)
- Configuration Web Application
+The control plane is typically a regional resource deployed into the hub subscription in a [hub and spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
++ The following diagram shows the key components of the control plane and workload zone. :::image type="content" source="./media/deployment-framework/automation-diagram-full.png" alt-text="Diagram showing the SAP on Azure Deployment Automation Framework environment.":::
-The application configuration will be performed from the Ansible Controller in the Control plane using a set of pre-defined playbooks. These playbooks will:
+The application configuration is performed from the deployment agents in the control plane using a set of predefined playbooks. These playbooks:
- Configure base operating system settings - Configure SAP-specific operating system settings - Make the installation media available in the system-- Install the SAP system
+- Install the SAP system components
- Install the SAP database (SAP HANA, AnyDB) - Configure high availability (HA) using Pacemaker - Configure high availability (HA) for your SAP database
For more information of how to configure and deploy the control plane, see [Conf
## Software acquisition process
-The framework also provides an Ansible playbook that can be used to download the software from SAP and persist it in the storage accounts in the SAP Library resource group.
+The framework also provides an Ansible playbook that can be used to download the software from SAP and persist it in the storage accounts in the Control Plane's SAP Library resource group.
The software acquisition process uses an SAP Application manifest file that contains the list of SAP software to be downloaded. The manifest file is a YAML file that contains the following information:
The software acquisition is using an SAP Application manifest file that contains
The SAP Software download playbook processes the manifest file and the dependent manifest files, and downloads the SAP software from SAP using the specified SAP user account. The software is downloaded to the SAP Library storage account and is available for the installation process. As part of the download process, the application manifest and the supporting templates are also persisted in the storage account. The application manifest and the dependent manifests are aggregated into a single manifest file that is used by the installation process.
-### Deployer Virtual Machine
+### Deployer Virtual Machines
-This virtual machine is used to run the orchestration scripts that will deploy the Azure resources using Terraform. It's also the Ansible Controller and is used to execute the Ansible playbooks on all the managed nodes, i.e the virtual machines of an SAP deployment.
+These virtual machines are used to run the orchestration scripts that deploy the Azure resources using Terraform. They're also Ansible Controllers and are used to execute the Ansible playbooks on all the managed nodes, that is, the virtual machines of an SAP deployment.
## About the SAP Workload
-The SAP Workload contains all the Azure infrastructure resources for the SAP Deployments. These resources are deployed from the control plane.
+The SAP Workload contains all the Azure infrastructure resources for the SAP Deployments. These resources are deployed from the control plane.
The SAP Workload has two main components: - SAP Workload Zone-- SAP System
+- SAP System(s)
## About the SAP Workload Zone
-The workload zone allows for partitioning of the deployments into different environments (Development, Test, Production). The Workload zone will provide the shared services (networking, credentials management) to the SAP systems.
+The workload zone allows for partitioning of the deployments into different environments (Development, Test, Production). The Workload zone will provide the shared services (networking, credentials management) to the SAP systems.
The SAP Workload Zone provides the following services to the SAP Systems - Virtual Networking infrastructure
The following terms are important concepts for understanding the automation fram
### SAP concepts > [!div class="mx-tdCol2BreakAll "]
-> | Term | Description |
-> | - | -- |
+> | Term | Description |
+> | - | -- |
> | System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the **SID**. > | Landscape | A collection of systems in different environments within an SAP application. For example, SAP ERP Central Component (ECC), SAP customer relationship management (CRM), and SAP Business Warehouse (BW). | > | Workload zone | Partitions the SAP applications to environments, such as non-production and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vault, to all systems within. |
The following diagram shows the relationships between SAP systems, workload zone
> [!div class="nextstepaction"] > [Get started with the deployment automation framework](get-started.md)
+> [Planning for the automation framework](plan-deployment.md)
> [Configuring Azure DevOps for the automation framework](configure-devops.md) > [Configuring the control plane](configure-control-plane.md) > [Configuring the workload zone](configure-workload-zone.md)
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
Your service must be deployed in a region that supports availability zones. Azur
| Canada Central | January 30, 2021 or later | | Central India | January 20, 2022 or later | | Central US | December 4, 2020 or later |
+| China North 3 | September 7, 2022 or later |
| East Asia | January 13, 2022 or later | | East US | January 27, 2021 or later | | East US 2 | January 30, 2021 or later |
Your service must be deployed in a region that supports availability zones. Azur
| North Europe | January 28, 2021 or later | | Norway East | January 20, 2022 or later | | Qatar Central | August 25, 2022 or later |
+| South Africa North | September 7, 2022 or later |
| South Central US | April 30, 2021 or later | | South East Asia | January 31, 2021 or later | | Sweden Central | January 21, 2022 or later |
+| Switzerland North | September 7, 2022 or later |
| UAE North | September 9, 2022 or later | | UK South | January 30, 2021 or later | | US Gov Virginia | April 30, 2021 or later |
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md
The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction)
| [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) | A virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet and to send encrypted traffic between Azure virtual networks over the Microsoft network. | | [Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md) | Provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. | | [Azure Front Door](../../frontdoor/front-door-overview.md) | A global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. |
-| [Azure Firewall](../../firewall/overview.md) | A cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Azure Firewall is offered in two SKUs: [Standard](../../firewall/features.md) and [Premium](../../firewall/premium-features.md). |
+| [Azure Firewall](../../firewall/overview.md) | A cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Azure Firewall is offered in three SKUs: [Standard](../../firewall/features.md), [Premium](../../firewall/premium-features.md), and [Basic](../../firewall/overview.md#azure-firewall-basic). |
| [Azure Key Vault](../../key-vault/general/overview.md) | A secure secrets store for tokens, passwords, certificates, API keys, and other secrets. Key Vault can also be used to create and control the encryption keys used to encrypt your data. | | [Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md) | A fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. | | [Azure Private Link](../../private-link/private-link-overview.md) | Enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network. |
security Network Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/network-best-practices.md
To find available Azure virtual network security appliances, go to the [Azure Ma
## Deploy perimeter networks for security zones A [perimeter network](/azure/architecture/vdc/networking-virtual-datacenter) (also known as a DMZ) is a physical or logical network segment that provides an extra layer of security between your assets and the internet. Specialized network access control devices on the edge of a perimeter network allow only desired traffic into your virtual network.
-Perimeter networks are useful because you can focus your network access control management, monitoring, logging, and reporting on the devices at the edge of your Azure virtual network. A perimeter network is where you typically enable distributed denial of service (DDoS) prevention, intrusion detection/intrusion prevention systems (IDS/IPS), firewall rules and policies, web filtering, network antimalware, and more. The network security devices sit between the internet and your Azure virtual network and have an interface on both networks.
+Perimeter networks are useful because you can focus your network access control management, monitoring, logging, and reporting on the devices at the edge of your Azure virtual network. A perimeter network is where you typically enable [distributed denial of service (DDoS) protection](../../ddos-protection/ddos-protection-overview.md), intrusion detection/intrusion prevention systems (IDS/IPS), firewall rules and policies, web filtering, network antimalware, and more. The network security devices sit between the internet and your Azure virtual network and have an interface on both networks.
Although this is the basic design of a perimeter network, there are many different designs, like back-to-back, tri-homed, and multi-homed. Based on the Zero Trust concept mentioned earlier, we recommend that you consider using a perimeter network for all high security deployments to enhance the level of network security and access control for your Azure resources. You can use Azure or a third-party solution to provide an extra layer of security between your assets and the internet: -- Azure native controls. [Azure Firewall](../../firewall/overview.md) and the [web application firewall in Application Gateway](../../application-gateway/features.md#web-application-firewall) offer basic security advantages. Advantages are a fully stateful firewall as a service, built-in high availability, unrestricted cloud scalability, FQDN filtering, support for OWASP core rule sets, and simple setup and configuration.
+- Azure native controls. [Azure Firewall](../../firewall/overview.md) and [Azure Web Application Firewall](../../web-application-firewall/overview.md) offer basic security advantages. Advantages are a fully stateful firewall as a service, built-in high availability, unrestricted cloud scalability, FQDN filtering, support for OWASP core rule sets, and simple setup and configuration.
- Third-party offerings. Search the [Azure Marketplace](https://azuremarketplace.microsoft.com/) for next-generation firewall (NGFW) and other third-party offerings that provide familiar security tools and enhanced levels of network security. Configuration might be more complex, but a third-party offering might allow you to use existing capabilities and skillsets. ## Avoid exposure to the internet with dedicated WAN links
sentinel Sentinel Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solution.md
This article describes how to use the Microsoft Sentinel **Zero Trust (TIC 3.0)*
Zero Trust and TIC 3.0 are not the same, but they share many common themes and together provide a common story. The Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** offers detailed crosswalks between Microsoft Sentinel and the Zero Trust model with the TIC 3.0 framework. These crosswalks help users to better understand the overlaps between the two.
-While the Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** provides best practice guidance, Microsoft does not guarantee nor imply compliance. All Trusted Internet Connection (TIC) requirements, validations, and controls are governed by the [Cybersecurity & Infrastructure Security Agency](https://www.cisa.gov/trusted-internet-connections).
+While the Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** provides best practice guidance, Microsoft does not guarantee nor imply compliance. All Trusted Internet Connection (TIC) requirements, validations, and controls are governed by the [Cybersecurity & Infrastructure Security Agency](https://www.cisa.gov/resources-tools/programs/trusted-internet-connections-tic).
The **Zero Trust (TIC 3.0)** solution provides visibility and situational awareness for control requirements delivered with Microsoft technologies in predominantly cloud-based environments. Customer experience will vary by user, and some panes may require additional configurations and query modification for operation.
spring-apps Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-azure-cli.md
To deploy the Azure Spring Apps cluster using the Azure CLI script, follow these
az group create --name <your-resource-group-name> --location <location-name> ```
-1. Save the script for Azure Spring Apps [Standard tier](https://raw.githubusercontent.com/Azure/azure-spring-apps-reference-architecture/main/CLI/brownfield-deployment/azuredeploySpringStandard.sh) or [Enterprise tier](https://raw.githubusercontent.com/Azure/azure-spring-apps-reference-architecture/main/CLI/brownfield-deployment/azuredeploySpringEnterprise.sh) locally, then run it from the Bash prompt.
+1. Save the script for Azure Spring Apps [Standard tier](https://raw.githubusercontent.com/Azure/azure-spring-apps-landing-zone-accelerator/reference-architecture/CLI/brownfield-deployment/azuredeploySpringStandard.sh) or [Enterprise tier](https://raw.githubusercontent.com/Azure/azure-spring-apps-landing-zone-accelerator/reference-architecture/CLI/brownfield-deployment/azuredeploySpringEnterprise.sh) locally, then run it from the Bash prompt.
**Standard tier:**
spring-apps Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md
The configuration file used in this quickstart is from the [Azure Spring Apps re
To apply the Terraform plan, follow these steps:
-1. Save the *variables.tf* file for [Standard ti