Updates from: 06/23/2022 00:58:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 04/25/2022 Last updated : 06/22/2022 zone_pivot_groups: b2c-policy-type
Custom email verification requires the use of a third-party email provider like Mailjet.
If you don't already have one, start by setting up a Mailjet account (Azure customers can unlock 6,000 emails with a limit of 200 emails/day).
-1. Follow the setup instructions at [Create a Mailjet Account](https://www.mailjet.com/guides/azure-mailjet-developer-resource-user-guide/enabling-mailjet/).
-1. To be able to send email, [register and validate](https://www.mailjet.com/guides/azure-mailjet-developer-resource-user-guide/enabling-mailjet/#how-to-configure-mailjet-for-use) your Sender email address or domain.
+1. Follow the setup instructions at [Create a Mailjet Account](https://dev.mailjet.com/email/guides/getting-started/).
+1. To be able to send email, [register and validate](https://dev.mailjet.com/email/guides/verify-your-domain) your Sender email address or domain.
2. Navigate to the [API Key Management page](https://dev.mailjet.com/email/guides/senders-and-domains/#use-a-sender-on-all-api-keys-(metasender)). Record the **API Key** and **Secret Key** for use in a later step. Both keys are generated automatically when your account is created.

> [!IMPORTANT]
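To make the keys' role concrete, the following is a hypothetical TypeScript sketch (Node.js 18+) of sending a message through Mailjet's v3.1 Send API with those keys over HTTP basic authentication. The addresses and code value are placeholders, and the From address must be a sender you validated in the previous step; this isn't the B2C custom-policy integration itself.

```typescript
// Hypothetical sketch: send mail via Mailjet's v3.1 Send API using the
// API Key (username) and Secret Key (password) recorded above.
async function sendVerificationMail(apiKey: string, secretKey: string): Promise<number> {
  const response = await fetch("https://api.mailjet.com/v3.1/send", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic " + Buffer.from(`${apiKey}:${secretKey}`).toString("base64"),
    },
    body: JSON.stringify({
      Messages: [
        {
          From: { Email: "no-reply@contoso.com" }, // must be a validated sender
          To: [{ Email: "user@example.com" }],
          Subject: "Your verification code",
          TextPart: "Your code is 123456",
        },
      ],
    }),
  });
  return response.status; // 200 indicates the message batch was accepted
}
```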
active-directory-b2c Direct Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/direct-signin.md
Title: Set up direct sign in using Azure Active Directory B2C
-description: Learn how to prepopulate the sign in name or redirect straight to a social identity provider.
+ Title: Set up direct sign-in using Azure Active Directory B2C
+description: Learn how to prepopulate the sign-in name or redirect straight to a social identity provider.
Previously updated : 03/31/2022 Last updated : 06/21/2022 zone_pivot_groups: b2c-policy-type
-# Set up direct sign in using Azure Active Directory B2C
+# Set up direct sign-in using Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
active-directory-b2c Implicit Flow Single Page Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md
Title: Single-page application sign in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
+ Title: Single-page application sign-in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
description: Learn how to add single-page sign in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
Previously updated : 03/31/2022 Last updated : 06/21/2022
-# Single-page application sign in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
+# Single-page application sign-in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
Many modern applications have a single-page app (SPA) front end that is written primarily in JavaScript. Often, the app is written by using a framework like React, Angular, or Vue.js. SPAs and other JavaScript apps that run primarily in a browser have some additional challenges for authentication:
- The security characteristics of these apps are different from traditional server-based web applications.
-- Many authorization servers and identity providers do not support cross-origin resource sharing (CORS) requests.
+- Many authorization servers and identity providers don't support cross-origin resource sharing (CORS) requests.
- Full-page browser redirects away from the app can be invasive to the user experience.

The recommended way of supporting SPAs is [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md).
-Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure AD B2C authorize endpoint, without any server-to-server exchange. All authentication logic and session handling is done entirely in the JavaScript client with either a page redirect or a pop-up box.
+Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure AD B2C authorize endpoint, without any server-to-server exchange. All authentication logic and session handling are done entirely in the JavaScript client with either a page redirect or a pop-up box.
-Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign up, sign in, and profile management user flows. In the example HTTP requests in this article, we use **{tenant}.onmicrosoft.com** for illustration. Replace `{tenant}` with [the name of your tenant](tenant-management.md#get-your-tenant-name) if you have one. Also, you need to have [created a user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow).
+Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In the example HTTP requests in this article, we use **{tenant}.onmicrosoft.com** for illustration. Replace `{tenant}` with [the name of your tenant](tenant-management.md#get-your-tenant-name) if you have one. Also, you need to have [created a user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow).
We use the following figure to illustrate the implicit sign-in flow. Each step is described in detail later in the article.
The parameters in the HTTP GET request are explained in the table below.
| Parameter | Required | Description |
| --------- | -------- | ----------- |
| {tenant} | Yes | Name of your Azure AD B2C tenant |
-|{policy}| Yes| The user flow to be run. Specify the name of a user flow you've created in your Azure AD B2C tenant. For example: `b2c_1_sign_in`, `b2c_1_sign_up`, or `b2c_1_edit_profile`. |
+|{policy}| Yes| The name of the user flow you want to run. Specify the name of a user flow you've created in your Azure AD B2C tenant. For example: `b2c_1_sign_in`, `b2c_1_sign_up`, or `b2c_1_edit_profile`. |
| client_id | Yes | The application ID that the [Azure portal](https://portal.azure.com/) assigned to your application. |
| response_type | Yes | Must include `id_token` for OpenID Connect sign-in. It can also include the response type `token`. If you use `token`, your app can immediately receive an access token from the authorize endpoint, without making a second request to the authorize endpoint. If you use the `token` response type, the `scope` parameter must contain a scope that indicates which resource to issue the token for. |
| redirect_uri | No | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs that you added to a registered application in the portal, except that it must be URL-encoded. |
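As a worked illustration of the table above, the following TypeScript sketch assembles and issues the authorize request. The tenant name, policy name, client ID, scope, and redirect URI are placeholder values only:

```typescript
// Compose the implicit-flow authorize request from the documented parameters.
const authorizeUrl = new URL(
  "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/authorize"
);
authorizeUrl.search = new URLSearchParams({
  client_id: "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
  response_type: "id_token token",
  redirect_uri: "https://localhost:5000/redirect", // must match a registered redirect URI
  response_mode: "fragment",
  scope: "openid https://contoso.onmicrosoft.com/api/demo.read", // placeholder API scope
  state: crypto.randomUUID(), // echoed back in the response; guards against CSRF
  nonce: crypto.randomUUID(), // returned as an ID token claim; guards against replay
}).toString();

window.location.assign(authorizeUrl.toString()); // full-page redirect to Azure AD B2C
```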
error=access_denied
## Validate the ID token
-Receiving an ID token is not enough to authenticate the user. Validate the ID token's signature, and verify the claims in the token per your app's requirements. Azure AD B2C uses [JSON Web Tokens (JWTs)](https://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) and public key cryptography to sign tokens and verify that they are valid.
+Receiving an ID token is not enough to authenticate the user. Validate the ID token's signature, and verify the claims in the token per your app's requirements. Azure AD B2C uses [JSON Web Tokens (JWTs)](https://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) and public key cryptography to sign tokens and verify that they're valid.
Many open-source libraries are available for validating JWTs, depending on the language you prefer to use. Consider exploring available open-source libraries rather than implementing your own validation logic. You can use the information in this article to help you learn how to properly use those libraries.
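As an example, here's a minimal TypeScript sketch built on the open-source `jose` package. The tenant, policy, and issuer values are placeholders; read the real `jwks_uri` and `issuer` values from your policy's OpenID Connect metadata document rather than hard-coding them:

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder jwks_uri; take the real value from the policy's metadata document.
const jwks = createRemoteJWKSet(
  new URL("https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_sign_in/discovery/v2.0/keys")
);

export async function validateIdToken(idToken: string, clientId: string) {
  // Verifies the signature against the published keys, plus issuer, audience,
  // and expiry. App-specific claim checks (for example, nonce) still apply.
  const { payload } = await jwtVerify(idToken, jwks, {
    issuer: "https://contoso.b2clogin.com/11111111-1111-1111-1111-111111111111/v2.0/", // placeholder
    audience: clientId,
  });
  return payload;
}
```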
Several more validations that you should perform are described in detail in the
For more information about the claims in an ID token, see the [Azure AD B2C token reference](tokens-overview.md).
-After you have validated the ID token, you can begin a session with the user. In your app, use the claims in the ID token to obtain information about the user. This information can be used for display, records, authorization, and so on.
+After you've validated the ID token, you can begin a session with the user. In your app, use the claims in the ID token to obtain information about the user. This information can be used for display, records, authorization, and so on.
## Get access tokens

If the only thing your web app needs to do is execute user flows, you can skip the next few sections. The information in the following sections is applicable only to web apps that need to make authenticated calls to a web API that is protected by Azure AD B2C itself.
-Now that you've signed the user into your SPA, you can get access tokens for calling web APIs that are secured by Azure AD. Even if you have already received a token by using the `token` response type, you can use this method to acquire tokens for additional resources without redirecting the user to sign in again.
+Now that you've signed the user into your SPA, you can get access tokens for calling web APIs that are secured by Azure AD. Even if you've already received a token by using the `token` response type, you can use this method to acquire tokens for additional resources without redirecting the user to sign in again.
-In a typical web app flow, you would make a request to the `/token` endpoint. However, the endpoint does not support CORS requests, so making AJAX calls to get a refresh token is not an option. Instead, you can use the implicit flow in a hidden HTML iframe element to get new tokens for other web APIs. Here's an example, with line breaks for legibility:
+In a typical web app flow, you would make a request to the `/token` endpoint. However, the endpoint doesn't support CORS requests, so making AJAX calls to get a refresh token isn't an option. Instead, you can use the implicit flow in a hidden HTML iframe element to get new tokens for other web APIs. Here's an example, with line breaks for legibility:
```http
https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize?
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
```
| response_type |Required |Must include `id_token` for OpenID Connect sign-in. It might also include the response type `token`. If you use `token` here, your app can immediately receive an access token from the authorize endpoint, without making a second request to the authorize endpoint. If you use the `token` response type, the `scope` parameter must contain a scope that indicates which resource to issue the token for. |
| redirect_uri |Recommended |The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except that it must be URL-encoded. |
| scope |Required |A space-separated list of scopes. For getting tokens, include all scopes that you require for the intended resource. |
-| response_mode |Recommended |Specifies the method that is used to send the resulting token back to your app. For implicit flow, use `fragment`. Two other modes can be specified, `query` and `form_post`, but do not work in the implicit flow. |
+| response_mode |Recommended |Specifies the method that is used to send the resulting token back to your app. For implicit flow, use `fragment`. Two other modes can be specified, `query` and `form_post`, but don't work in the implicit flow. |
| state |Recommended |A value included in the request that is returned in the token response. It can be a string of any content that you want to use. Usually, a randomly generated, unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page or view the user was on. |
-| nonce |Required |A value included in the request, generated by the app, that is included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. Usually, the value is a randomized, unique string that identifies the origin of the request. |
-| prompt |Required |To refresh and get tokens in a hidden iframe, use `prompt=none` to ensure that the iframe does not get stuck on the sign-in page, and returns immediately. |
+| nonce |Required |A value included in the request, generated by the app, that's included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. Usually, the value is a randomized, unique string that identifies the origin of the request. |
+| prompt |Required |To refresh and get tokens in a hidden iframe, use `prompt=none` to ensure that the iframe doesn't get stuck on the sign-in page, and returns immediately. |
| login_hint |Required |To refresh and get tokens in a hidden iframe, include the username of the user in this hint to distinguish between multiple sessions the user might have at a given time. You can extract the username from an earlier sign-in by using the `preferred_username` claim (the `profile` scope is required in order to receive the `preferred_username` claim). |
| domain_hint |Required |Can be `consumers` or `organizations`. For refreshing and getting tokens in a hidden iframe, include the `domain_hint` value in the request. Extract the `tid` claim from the ID token of an earlier sign-in to determine which value to use (the `profile` scope is required in order to receive the `tid` claim). If the `tid` claim value is `9188040d-6c67-4c5b-b112-36a304b66dad`, use `domain_hint=consumers`. Otherwise, use `domain_hint=organizations`. |
error=user_authentication_required
If you receive this error in the iframe request, the user must interactively sign in again to retrieve a new token.

## Refresh tokens
-ID tokens and access tokens both expire after a short period of time. Your app must be prepared to refresh these tokens periodically. Implicit flows do not allow you to obtain a refresh token due to security reasons. To refresh either type of token, use the implicit flow in a hidden HTML iframe element. In the authorization request include the `prompt=none` parameter. To receive a new id_token value, be sure to use `response_type=id_token` and `scope=openid`, and a `nonce` parameter.
+ID tokens and access tokens both expire after a short period of time. Your app must be prepared to refresh these tokens periodically. Implicit flows don't allow you to obtain a refresh token for security reasons. To refresh either type of token, use the implicit flow in a hidden HTML iframe element, as shown in the sketch below. In the authorization request, include the `prompt=none` parameter. To receive a new id_token value, be sure to use `response_type=id_token` and `scope=openid`, and a `nonce` parameter.
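A hedged TypeScript sketch of that hidden-iframe renewal follows; production apps typically delegate this to a library such as MSAL.js. It assumes the authorize URL already carries `prompt=none`, `response_mode=fragment`, a `nonce`, and a `login_hint`, and that the redirect URI points back to this app's origin:

```typescript
// Silently renew a token by loading the authorize URL in a hidden iframe
// and reading the token out of the fragment after the redirect completes.
function renewTokenSilently(authorizeUrl: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const frame = document.createElement("iframe");
    frame.style.display = "none";
    frame.src = authorizeUrl;
    frame.onload = () => {
      try {
        // Readable only once B2C has redirected back to our own origin.
        const hash = frame.contentWindow!.location.hash.slice(1);
        const params = new URLSearchParams(hash);
        const error = params.get("error");
        if (error) {
          reject(new Error(error)); // e.g. user_authentication_required
        } else {
          resolve(params.get("id_token") ?? params.get("access_token") ?? "");
        }
      } catch (e) {
        reject(e); // still on the B2C page: cross-origin access throws
      } finally {
        frame.remove();
      }
    };
    document.body.appendChild(frame);
  });
}
```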
-## Send a sign out request
+## Send a sign-out request
-When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid single sign-on session with Azure AD B2C.
+When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign-out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid single sign-on session with Azure AD B2C.
You can simply redirect the user to the `end_session_endpoint` that is listed in the same OpenID Connect metadata document described in [Validate the ID token](#validate-the-id-token). For example:
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/logout
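A brief TypeScript sketch of that redirect, with placeholder tenant, policy, and URL values; `post_logout_redirect_uri` must point to a URI registered for your application:

```typescript
// Build the end_session_endpoint URL and send the user there to sign out.
const logoutUrl = new URL(
  "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/logout"
);
logoutUrl.searchParams.set("post_logout_redirect_uri", "https://localhost:5000/signed-out");

sessionStorage.clear();                       // clear the app's own session state first
window.location.assign(logoutUrl.toString()); // then end the Azure AD B2C session
```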
> [!NOTE]
-> Directing the user to the `end_session_endpoint` clears some of the user's single sign-on state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it does not necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
+> Directing the user to the `end_session_endpoint` clears some of the user's single sign-on state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign-in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it doesn't necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
>

## Next steps
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md
Previously updated : 03/31/2022 Last updated : 06/21/2022
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for MFA and Passwordless authentication.
| ![Screenshot of a twilio logo.](./medi) provides multiple solutions to enable MFA through SMS one-time password (OTP), time-based one-time password (TOTP), and push notifications, and to comply with SCA requirements for PSD2. |
| ![Screenshot of a typingDNA logo](./medi) enables strong customer authentication by analyzing a user's typing pattern. It helps companies enable a silent MFA and comply with SCA requirements for PSD2. |
| ![Screenshot of a whoiam logo](./medi) is a Branded Identity Management System (BRIMS) application that enables organizations to verify their user base by voice, SMS, and email. |
-| ![Screenshot of a xid logo](./medi) is a digital ID solution that provides users with passwordless, secure, multifactor authentication. xID-authenticated users obtain their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users verified Personal Identification Information (PII) through the xID API. |
+| ![Screenshot of a xid logo](./medi) is a digital ID solution that provides users with passwordless, secure, multifactor authentication. xID-authenticated users obtain their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users verified personal information through the xID API. |
## Role-based access control
Microsoft partners with the following ISVs for role-based access control.
|:-|:--|
| ![Screenshot of a n8identity logo](./medi) is an Identity-as-a-Service governance platform that provides a solution to address customer account migration and Customer Service Requests (CSR) administration running on Microsoft Azure. |
| ![Screenshot of a Saviynt logo](./medi) cloud-native platform promotes better security, compliance, and governance through intelligent analytics and cross-application integration for streamlining IT modernization. |
+| ![Screenshot of a WhoIAM Rampart logo](./medi) provides a fully integrated helpdesk and invitation-gated user registration experience. It allows support specialists to efficiently perform tasks like resetting passwords and multi-factor authentication without using Azure. It also enables apps and role-based access control (RBAC) for end-users of Azure AD B2C. |
## Secure hybrid access to on-premises application
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
+
+ Title: Configure Azure Active Directory B2C with WhoIAM Rampart
+
+description: Learn how to integrate Azure AD B2C authentication with WhoIAM Rampart
++++++ Last updated : 06/20/2022+++++
+# Configure WhoIAM Rampart with Azure Active Directory B2C
+
+In this sample tutorial, you'll learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with Rampart by WhoIAM. Rampart provides features for a fully integrated helpdesk and invitation-gated user registration experience. It allows support specialists to perform tasks like resetting passwords and multi-factor authentication without using Azure. It also enables apps and role-based access control (RBAC) for end-users of Azure AD B2C.
++
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure AD subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/)
+
+- An [Azure AD B2C tenant](tutorial-create-tenant.md) linked to your Azure subscription.
+
+- An Azure DevOps Server instance
+
+- A [SendGrid account](https://sendgrid.com/)
+
+- A WhoIAM [trial account](https://www.whoiam.ai/contact-us/)
+
+## Scenario description
+
+WhoIAM Rampart is built entirely in Azure and runs in your Azure environment. The following components comprise the Rampart solution with Azure AD B2C:
+
+- **An Azure AD tenant**: Your Azure AD B2C tenant stores your users and manages who has access (and at what scope) to Rampart itself.
+
+- **Custom B2C policies**: To integrate with Rampart.
+
+- **A resource group**: It hosts Rampart functionality.
++
+## Step 1 - Onboard with Rampart
+
+Contact [WhoIAM](https://www.whoiam.ai/contact-us/) to start the onboarding process. Automated templates will deploy all necessary Azure resources, and they'll configure your DevOps instance with the required code and configuration according to your needs.
+
+## Step 2 - Configure and integrate Rampart with Azure AD B2C
+
+The tight integration of this solution with Azure AD B2C requires custom policies. WhoIAM provides these policies and assists with integrating them with your applications or existing policies, or both.
+
+Follow the steps mentioned in [Authorization policy execution](https://docs.gatekeeper.whoiamdemos.com/#/setup-guide?id=authorization-policy-execution) for details on the custom policies provided by WhoIAM.
+
+## Step 3 - Test the solution
+
+The image shows an example of how WhoIAM Rampart displays a list of app registrations in your Azure AD B2C tenant. WhoIAM validates the implementation by testing all features and health check status endpoints.
++
+The applications screen should display a list of all user-created applications in your Azure AD B2C tenant.
+
+Likewise, the user's screen should display a list of all users in your Azure AD B2C directory and user management functions such as invitations, approvals, and RBAC management.
++
+## Next steps
+
+For more information, review the following articles:
+
+- [WhoIAM Rampart documentation](https://docs.gatekeeper.whoiamdemos.com/#/setup-guide?id=authorization-policy-execution)
+
+- [Custom policies in Azure AD B2C overview](custom-policy-overview.md)
++
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+
active-directory-b2c Protocols Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/protocols-overview.md
Previously updated : 03/31/2022 Last updated : 06/21/2022
In nearly all OAuth and OpenID Connect flows, four parties are involved in the exchange:
:::image type="content" source="./media/protocols-overview/protocols_roles.png" alt-text="Diagram showing the four OAuth 2.0 Roles.":::
-* The **authorization server** is the Azure AD B2C endpoint. It securely handles anything related to user information and access. It also handles the trust relationships between the parties in a flow. It is responsible for verifying the user's identity, granting and revoking access to resources, and issuing tokens. It is also known as the identity provider.
+* The **authorization server** is the Azure AD B2C endpoint. It securely handles anything related to user information and access. It also handles the trust relationships between the parties in a flow. It's responsible for verifying the user's identity, granting and revoking access to resources, and issuing tokens. It's also known as the identity provider.
-* The **resource owner** is typically the end user. It is the party that owns the data, and it has the power to allow third parties to access that data or resource.
+* The **resource owner** is typically the end user. It's the party that owns the data, and it has the power to allow third parties to access that data or resource.
* The **OAuth client** is your app. It's identified by its Application ID. It's usually the party that end users interact with. It also requests tokens from the authorization server. The resource owner must grant the client permission to access the resource.
In nearly all OAuth and OpenID Connect flows, four parties are involved in the exchange:
Azure AD B2C extends the standard OAuth 2.0 and OpenID Connect protocols by introducing policies. These allow Azure AD B2C to perform much more than simple authentication and authorization.
-To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called **user flows**. User flows fully describe consumer identity experiences, including sign up, sign in, and profile editing. User flows can be defined in an administrative UI. They can be executed by using a special query parameter in HTTP authentication requests.
+To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called **user flows**. User flows fully describe consumer identity experiences, including signing up, signing in, and profile editing. User flows can be defined in an administrative UI. They can be executed by using a special query parameter in HTTP authentication requests.
-Policies and user flows are not standard features of OAuth 2.0 and OpenID Connect, so you should take the time to understand them. For more information, see the [Azure AD B2C user flow reference guide](user-flow-overview.md).
+Policies and user flows aren't standard features of OAuth 2.0 and OpenID Connect, so you should take the time to understand them. For more information, see the [Azure AD B2C user flow reference guide](user-flow-overview.md).
## Tokens

The Azure AD B2C implementation of OAuth 2.0 and OpenID Connect makes extensive use of bearer tokens, including bearer tokens that are represented as JSON web tokens (JWTs). A bearer token is a lightweight security token that grants the "bearer" access to a protected resource.
-The bearer is any party that can present the token. Azure AD B2C must first authenticate a party before it can receive a bearer token. But if the required steps are not taken to secure the token in transmission and storage, it can be intercepted and used by an unintended party.
+The bearer is any party that can present the token. Azure AD B2C must first authenticate a party before it can receive a bearer token. But if the required steps aren't taken to secure the token in transmission and storage, it can be intercepted and used by an unintended party.
-Some security tokens have built-in mechanisms that prevent unauthorized parties from using them, but bearer tokens do not have this mechanism. They must be transported in a secure channel, such as a transport layer security (HTTPS).
+Some security tokens have built-in mechanisms that prevent unauthorized parties from using them, but bearer tokens don't have this mechanism. They must be transported in a secure channel, such as transport layer security (HTTPS).
If a bearer token is transmitted outside a secure channel, a malicious party can use a man-in-the-middle attack to acquire the token and use it to gain unauthorized access to a protected resource. The same security principles apply when bearer tokens are stored or cached for later use. Always ensure that your app transmits and stores bearer tokens in a secure manner.
-For additional bearer token security considerations, see [RFC 6750 Section 5](https://tools.ietf.org/html/rfc6750).
+For extra bearer token security considerations, see [RFC 6750 Section 5](https://tools.ietf.org/html/rfc6750).
More information about the different types of tokens that are used in Azure AD B2C is available in [the Azure AD B2C token reference](tokens-overview.md).

## Protocols
-When you're ready to review some example requests, you can start with one of the following tutorials. Each corresponds to a particular authentication scenario. If you need help determining which flow is right for you, check out [the types of apps you can build by using Azure AD B2C](application-types.md).
+When you're ready to review some example requests, you can start with one of the following tutorials. Each corresponds to a particular authentication scenario. If you need help with determining which flow is right for you, check out [the types of apps you can build by using Azure AD B2C](application-types.md).
* [Build mobile and native applications by using OAuth 2.0](authorization-code-flow.md)
* [Build web apps by using OpenID Connect](openid-connect.md)
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Keep these limitations in mind:
- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass. Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration.
- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.
+- After a Temporary Access Pass is added to an account or expires, it can take a few minutes for the changes to replicate. Users may still see a prompt for Temporary Access Pass during this time.
## Troubleshooting
active-directory Howto Mfa Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-adfs.md
The first thing we need to do is to configure the AD FS claims. Create two claim
12. In the Custom rule box, enter:

```ad-fs-claim-rule
- c:[Type == "http://schemas.microsoft.com/2014/03/psso"]
+ c:[Type == "https://schemas.microsoft.com/2014/03/psso"]
=> issue(claim = c);
```
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md
Previously updated : 06/05/2020 Last updated : 06/20/2022
-# Enable and use Azure AD Multi-Factor Authentication with legacy applications using app passwords
+# Enforce Azure AD Multi-Factor Authentication with legacy applications using app passwords
-Some older, non-browser apps like Office 2010 or earlier and Apple Mail before iOS 11 don't understand pauses or breaks in the authentication process. If a user is enabled for Azure AD Multi-Factor Authentication and attempts to use one of these older, non-browser apps, they can't successfully authenticate. To use these applications in a secure way with Azure AD Multi-Factor Authentication enabled for user accounts, you can use app passwords. These app passwords replaced your traditional password to allow an app to bypass multi-factor authentication and work correctly.
+Some older, non-browser apps like Office 2010 or earlier and Apple Mail before iOS 11 don't understand pauses or breaks in the authentication process. An Azure AD Multi-Factor Authentication (Azure AD MFA) user who attempts to sign in to one of these older, non-browser apps can't successfully authenticate. To use these applications in a secure way with Azure AD Multi-Factor Authentication enforced for user accounts, you can use app passwords. App passwords replace your traditional password to allow an app to bypass multi-factor authentication and work correctly.
-Modern authentication is supported for the Microsoft Office 2013 clients and later. Office 2013 clients, including Outlook, support modern authentication protocols and can be enabled to work with two-step verification. After the client is enabled, app passwords aren't required for the client.
+Modern authentication is supported for the Microsoft Office 2013 clients and later. Office 2013 clients, including Outlook, support modern authentication protocols and can work with two-step verification. After Azure AD MFA is enforced, app passwords aren't required for the client.
-This article shows you how to enable and use app passwords for legacy applications that don't support multi-factor authentication prompts.
+This article shows you how to use app passwords for legacy applications that don't support multi-factor authentication prompts.
>[!NOTE]
> App passwords don't work with Conditional Access based multi-factor authentication policies and modern authentication.

## Overview and considerations
-When a user account is enabled for Azure AD Multi-Factor Authentication, the regular sign-in prompt is interrupted by a request for additional verification. Some older applications don't understand this break in the sign-in process, so authentication fails. To maintain user account security and leave Azure AD Multi-Factor Authentication enabled, app passwords can be used instead of the user's regular username and password. When an app password used during sign-in, there's no additional verification prompt, so authentication is successful.
+When a user account is enforced for Azure AD Multi-Factor Authentication, the regular sign-in prompt is interrupted by a request for additional verification. Some older applications don't understand this break in the sign-in process, so authentication fails. To maintain user account security and leave Azure AD Multi-Factor Authentication enforced, app passwords can be used instead of the user's regular username and password. When an app password is used during sign-in, there's no additional verification prompt, so authentication is successful.
App passwords are automatically generated, not specified by the user. This automatically generated password makes it harder for an attacker to guess, so is more secure. Users don't have to keep track of the passwords or enter them every time as app passwords are only entered once per application.
When you use app passwords, the following considerations apply:
* There's a limit of 40 app passwords per user.
* Applications that cache passwords and use them in on-premises scenarios can fail because the app password isn't known outside the work or school account. An example of this scenario is Exchange emails that are on-premises, but the archived mail is in the cloud. In this scenario, the same password doesn't work.
-* After Azure AD Multi-Factor Authentication is enabled on a user's account, app passwords can be used with most non-browser clients like Outlook and Microsoft Skype for Business. However, administrative actions can't be performed by using app passwords through non-browser applications, such as Windows PowerShell. The actions can't be performed even when the user has an administrative account.
- * To run PowerShell scripts, create a service account with a strong password and don't enable the account for two-step verification.
+* After Azure AD Multi-Factor Authentication is enforced on a user's account, app passwords can be used with most non-browser clients like Outlook and Microsoft Skype for Business. However, administrative actions can't be performed by using app passwords through non-browser applications, such as Windows PowerShell. The actions can't be performed even when the user has an administrative account.
+ * To run PowerShell scripts, create a service account with a strong password and don't enforce two-step verification for the account.
* If you suspect that a user account is compromised and revoke / reset the account password, app passwords should also be updated. App passwords aren't automatically revoked when a user account password is revoked / reset. The user should delete existing app passwords and create new ones.
  * For more information, see [Create and delete app passwords from the Additional security verification page](https://support.microsoft.com/account-billing/manage-app-passwords-for-two-step-verification-d6dc8c6d-4bf7-4851-ad95-6d07799387e9#create-and-delete-app-passwords-from-the-additional-security-verification-page).
Users can also create app passwords after registration. For more information and
## Next steps
-For more information on how to allow users to quickly register for Azure AD Multi-Factor Authentication, see [Combined security information registration overview](concept-registration-mfa-sspr-combined.md).
+- For more information on how to allow users to quickly register for Azure AD Multi-Factor Authentication, see [Combined security information registration overview](concept-registration-mfa-sspr-combined.md).
+- For more information about enabled and enforced user states for Azure AD MFA, see [Enable per-user Azure AD Multi-Factor Authentication to secure sign-in events](howto-mfa-userstates.md).
active-directory Howto Mfa Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md
Previously updated : 06/14/2021 Last updated : 06/20/2022
Identify users and output methods registered:
```powershell
Get-MsolUser -All | Select-Object @{N='UserPrincipalName';E={$_.UserPrincipalName}},@{N='MFA Status';E={if ($_.StrongAuthenticationRequirements.State){$_.StrongAuthenticationRequirements.State} else {"Disabled"}}},@{N='MFA Methods';E={$_.StrongAuthenticationMethods.methodtype}} | Export-Csv -Path c:\MFA_Report.csv -NoTypeInformation
```
-## Downloaded activity reports result codes
-
-The following table can help troubleshoot events using the downloaded version of the activity report from the previous portal steps or PowerShell commands. These result codes don't appear directly in the Azure portal.
-
-| Call Result | Description | Broad description |
-| | | |
-| SUCCESS_WITH_PIN | PIN Entered | The user entered a PIN.  If authentication succeeded then they entered the correct PIN.  If authentication is denied, then they entered an incorrect PIN or the user is set to Standard mode. |
-| SUCCESS_NO_PIN | Only # Entered | If the user is set to PIN mode and the authentication is denied, this means the user did not enter their PIN and only entered #. If the user is set to Standard mode and the authentication succeeds this means the user only entered # which is the correct thing to do in Standard mode. |
-| SUCCESS_WITH_PIN_BUT_TIMEOUT | # Not Pressed After Entry | The user did not send any DTMF digits since # was not entered.  Other digits entered are not sent unless # is entered indicating the completion of the entry. |
-|SUCCESS_NO_PIN_BUT_TIMEOUT | No Phone Input - Timed Out | The call was answered, but there was no response.  This typically indicates the call was picked up by voicemail. |
-| SUCCESS_PIN_EXPIRED | PIN Expired and Not Changed | The user's PIN is expired and they were prompted to change it, but the PIN change was not successfully completed. |
-| SUCCESS_USED_CACHE | Used Cache | Authentication succeeded without a Multi-Factor Authentication call since a previous successful authentication for the same username occurred within the configured cache timeframe. |
-| SUCCESS_BYPASSED_AUTH | Bypassed Auth | Authentication succeeded using a One-Time Bypass initiated for the user.  See the Bypassed User History Report for more details on the bypass. |
-| SUCCESS_USED_IP_BASED_CACHE | Used IP-based Cache | Authentication succeeded without a Multi-Factor Authentication call since a previous successful authentication for the same username, authentication type, application name, and IP occurred within the configured cache timeframe. |
-| SUCCESS_USED_APP_BASED_CACHE | Used App-based Cache | Authentication succeeded without a Multi-Factor Authentication call since a previous successful authentication for the same username, authentication type, and application name within the configured cache timeframe. |
-| SUCCESS_INVALID_INPUT | Invalid Phone Input | The response sent from the phone is not valid.  This could be from a fax machine or modem or the user may have entered * as part of their PIN. |
-| SUCCESS_USER_BLOCKED | User is Blocked | The user's phone number is blocked.  A blocked number can be initiated by the user during an authentication call or by an administrator using the Azure portal. <br> NOTE:  A blocked number is also a byproduct of a Fraud Alert. |
-| SUCCESS_SMS_AUTHENTICATED | Text Message Authenticated | For two-way test message, the user correctly replied with their one-time passcode (OTP) or OTP + PIN. |
-| SUCCESS_SMS_SENT | Text Message Sent | For Text Message, the text message containing the one-time passcode (OTP) was successfully sent.  The user will enter the OTP or OTP + PIN in the application to complete the authentication. |
-| SUCCESS_PHONE_APP_AUTHENTICATED | Mobile App Authenticated | The user successfully authenticated via the mobile app. |
-| SUCCESS_OATH_CODE_PENDING | OATH Code Pending | The user was prompted for their OATH code but didn't respond. |
-| SUCCESS_OATH_CODE_VERIFIED | OATH Code Verified | The user entered a valid OATH code when prompted. |
-| SUCCESS_FALLBACK_OATH_CODE_VERIFIED | Fallback OATH Code Verified | The user was denied authentication using their primary Multi-Factor Authentication method and then provided a valid OATH code for fallback. |
-| SUCCESS_FALLBACK_SECURITY_QUESTIONS_ANSWERED | Fallback Security Questions Answered | The user was denied authentication using their primary Multi-Factor Authentication method and then answered their security questions correctly for fallback. |
-| FAILED_PHONE_BUSY | Auth Already In Progress | Multi-Factor Authentication is already processing an authentication for this user.  This is often caused by RADIUS clients that send multiple authentication requests during the same sign-on. |
-| CONFIG_ISSUE | Phone Unreachable | Call was attempted, but either could not be placed or was not answered.  This includes busy signal, fast busy signal (disconnected), tri-tones (number no longer in service), timed out while ringing, etc. |
-| FAILED_INVALID_PHONENUMBER | Invalid Phone Number Format | The phone number has an invalid format.  Phone numbers must be numeric and must be 10 digits for country code +1 (United States & Canada). |
-| FAILED_USER_HUNGUP_ON_US | User Hung Up the Phone | The user answered the phone, but then hung up without pressing any buttons. |
-| FAILED_INVALID_EXTENSION | Invalid Extension | The extension contains invalid characters.  Only digits, commas, *, and # are allowed.  An @ prefix may also be used. |
-| FAILED_FRAUD_CODE_ENTERED | Fraud Code Entered | The user elected to report fraud during the call resulting in a denied authentication and a blocked phone number.|
-| FAILED_SERVER_ERROR | Unable to Place Call | The Multi-Factor Authentication service was unable to place the call. |
-| FAILED_SMS_NOT_SENT | Text Message Could Not Be Sent | The text message could not be sent.  The authentication is denied. |
-| FAILED_SMS_OTP_INCORRECT | Text Message OTP Incorrect | The user entered an incorrect one-time passcode (OTP) from the text message they received.  The authentication is denied. |
-| FAILED_SMS_OTP_PIN_INCORRECT | Text Message OTP + PIN Incorrect | The user entered an incorrect one-time passcode (OTP) and/or an incorrect user PIN.  The authentication is denied. |
-| FAILED_SMS_MAX_OTP_RETRY_REACHED | Exceeded Maximum Text Message OTP Attempts | The user has exceeded the maximum number of one-time passcode (OTP) attempts. |
-| FAILED_PHONE_APP_DENIED | Mobile App Denied | The user denied the authentication in the mobile app by pressing the Deny button. |
-| FAILED_PHONE_APP_INVALID_PIN | Mobile App Invalid PIN | The user entered an invalid PIN when authenticating in the mobile app. |
-| FAILED_PHONE_APP_PIN_NOT_CHANGED | Mobile App PIN Not Changed | The user did not successfully complete a required PIN change in the mobile app. |
-| FAILED_FRAUD_REPORTED | Fraud Reported | The user reported fraud in the mobile app. |
-| FAILED_PHONE_APP_NO_RESPONSE | Mobile App No Response | The user did not respond to the mobile app authentication request. |
-| FAILED_PHONE_APP_ALL_DEVICES_BLOCKED | Mobile App All Devices Blocked | The mobile app devices for this user are no longer responding to notifications and have been blocked. |
-| FAILED_PHONE_APP_NOTIFICATION_FAILED | Mobile App Notification Failed | A failure occurred when attempting to send a notification to the mobile app on the user's device. |
-| FAILED_PHONE_APP_INVALID_RESULT | Mobile App Invalid Result | The mobile app returned an invalid result. |
-| FAILED_OATH_CODE_INCORRECT | OATH Code Incorrect | The user entered an incorrect OATH code. The authentication is denied. |
-| FAILED_OATH_CODE_PIN_INCORRECT | OATH Code + PIN Incorrect | The user entered an incorrect OATH code and/or an incorrect user PIN. The authentication is denied. |
-| FAILED_OATH_CODE_DUPLICATE | Duplicate OATH Code | The user entered an OATH code that was previously used. The authentication is denied. |
-| FAILED_OATH_CODE_OLD | OATH Code Out of Date | The user entered an OATH code that precedes an OATH code that was previously used. The authentication is denied. |
-| FAILED_OATH_TOKEN_TIMEOUT | OATH Code Result Timeout | The user took too long to enter the OATH code and the Multi-Factor Authentication attempt had already timed out. |
-| FAILED_SECURITY_QUESTIONS_TIMEOUT | Security Questions Result Timeout | The user took too long to enter answer to security questions and the Multi-Factor Authentication attempt had already timed out. |
-| FAILED_AUTH_RESULT_TIMEOUT | Auth Result Timeout | The user took too long to complete the Multi-Factor Authentication attempt. |
-| FAILED_AUTHENTICATION_THROTTLED | Authentication Throttled | The Multi-Factor Authentication attempt was throttled by the service. |
- ## Additional MFA reports
The following additional information and reports are available for MFA events, i
## Next steps
-This article provided an overview of the sign-ins activity report. For more detailed information on what this report contains and understand the data, see [sign-in activity reports in Azure AD](../reports-monitoring/concept-sign-ins.md).
+This article provided an overview of the sign-ins activity report. For more detailed information on what this report contains, see [sign-in activity reports in Azure AD](../reports-monitoring/concept-sign-ins.md).
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authorization-basics.md
Title: Authorization basics description: Learn about the basics of authorization in the Microsoft identity platform. -+ Previously updated : 07/23/2021 Last updated : 06/16/2022 -+ #Customer intent: As an application developer, I want to understand the basic concepts of authorization in the Microsoft identity platform.
# Authorization basics
-**Authorization** (sometimes abbreviated as *AuthZ*) is used to set permissions that are used to evaluate access to resources or functionality. In contrast, **authentication** (sometimes abbreviated as *AuthN*) is focused on proving that an entity like a user or service is indeed who they claim to be.
+**Authorization** (sometimes abbreviated as *AuthZ*) is used to set permissions that enable evaluation of access to resources or functionality. In contrast, **authentication** (sometimes abbreviated as *AuthN*) is focused on proving that an entity like a user or service is indeed who they claim to be.
-Authorization can include specifying what functionality (or resources) an entity is allowed to access or what data that entity can access and what they can do with that data. This is often referred to as *access control*.
-
-> [!NOTE]
-> Authentication and authorization are concepts that are not limited to only users. Services or daemon applications are often built to make requests for resources as themselves rather than on behalf of a specific user. When discussing these topics, the term "entity" is used to refer to either a user or an application.
+Authorization can include specifying the functionality, resources, or data an entity is allowed to access. Authorization also specifies what can be done with the data. This authorization action is often referred to as *access control*.
+Authentication and authorization are concepts that aren't limited to only users. Services or daemon applications are often built to make requests for resources as themselves rather than on behalf of a specific user. In this article, the term "entity" is used to refer to either a user or an application.
## Authorization approaches

There are several common approaches to handle authorization. [Role-based access control](./custom-rbac-for-developers.md) is currently the most common approach using the Microsoft identity platform.
-### Authentication as authorization
+### Authentication as authorization
Possibly the simplest form of authorization is to grant or deny access based on whether the entity making a request has been authenticated. If the requestor can prove they're who they claim to be, they can access the protected resources or functionality.

### Access control lists
-Authorization via access control lists (ACLs) involves maintaining explicit lists of specific entities who do or don't have access to a resource or functionality. ACLs offer finer control over authentication-as-authorization but become difficult to manage as the number of entities increases.
+Authorization by using access control lists (ACLs) involves maintaining explicit lists of specific entities who do or don't have access to a resource or functionality. ACLs offer finer control over authentication-as-authorization but become difficult to manage as the number of entities increases.
-### Role-based access control
+### Role-based access control
Role-based access control (RBAC) is possibly the most common approach to enforcing authorization in applications. When using RBAC, roles are defined to describe the kinds of activities an entity may perform. An application developer grants access to roles rather than to individual entities. An administrator can then assign roles to different entities to control which ones have access to what resources and functionality.
-In advanced RBAC implementations, roles may be mapped to collections of permissions, where a permission describes a granular action or activity that can be performed. Roles are then configured as combinations of permissions. You compute the entities' overall permission set for an application by intersecting the permissions granted to the various roles the entity is assigned. A good example of this approach is the RBAC implementation that governs access to resources in Azure subscriptions.
+In advanced RBAC implementations, roles may be mapped to collections of permissions, where a permission describes a granular action or activity that can be performed. Roles are then configured as combinations of permissions. Compute the overall permission set for an entity by intersecting the permissions granted to the various roles the entity is assigned. A good example of this approach is the RBAC implementation that governs access to resources in Azure subscriptions.
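To make the roles-to-permissions mapping concrete, here's a small TypeScript sketch; the role names and permission strings are invented for illustration:

```typescript
type Role = "reader" | "author" | "admin";

// Hypothetical role-to-permission mapping for illustration only.
const rolePermissions: Record<Role, string[]> = {
  reader: ["articles.read"],
  author: ["articles.read", "articles.write"],
  admin: ["articles.read", "articles.write", "users.manage"],
};

// The entity's effective permission set combines the permissions
// granted by every role assigned to it.
function effectivePermissions(assignedRoles: Role[]): Set<string> {
  return new Set(assignedRoles.flatMap((role) => rolePermissions[role]));
}

// Example: an entity assigned both "reader" and "author".
console.log([...effectivePermissions(["reader", "author"])]);
// ["articles.read", "articles.write"]
```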
> [!NOTE]
-> [Application RBAC](./custom-rbac-for-developers.md) differs from [Azure RBAC](../../role-based-access-control/overview.md) and [Azure AD RBAC](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps you manage Azure resources. Azure AD RBAC allows you to manage Azure AD resources.
+> [Application RBAC](./custom-rbac-for-developers.md) differs from [Azure RBAC](../../role-based-access-control/overview.md) and [Azure AD RBAC](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps manage Azure resources. Azure AD RBAC allows management of Azure AD resources.
-### Attribute-based access control
+### Attribute-based access control
-Attribute-based access control (ABAC) is a more fine-grained access control mechanism. In this approach, rules are applied to attributes of the entity, the resources being accessed, and the current environment to determine whether access to some resources or functionality is permitted. An example might be only allowing users who are managers to access files identified with a metadata tag of "managers during working hours only" during the hours of 9AM - 5PM on working days. In this case, access is determined by examining the user's attribute (status as manager), the resource's attribute (metadata tag on a file), and also an environment attribute (the current time).
+Attribute-based access control (ABAC) is a more fine-grained access control mechanism. In this approach, rules are applied to the entity, the resources being accessed, and the current environment. The rules determine the level of access to resources and functionality. An example might be only allowing users who are managers to access files identified with a metadata tag of "managers during working hours only" during the hours of 9AM - 5PM on working days. In this case, access is determined by examining the attribute (status as manager) of the user, the attribute (metadata tag on a file) of the resource, and also an environment attribute (the current time).
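A hedged TypeScript sketch of that example rule, with all attribute names hypothetical:

```typescript
interface User { jobTitle: string; }
interface Resource { tags: string[]; }
interface Environment { hour: number; isWorkingDay: boolean; }

// ABAC rule mirroring the example above: managers may access files tagged
// "managers during working hours only" between 9AM and 5PM on working days.
function canAccess(user: User, resource: Resource, env: Environment): boolean {
  const restricted = resource.tags.includes("managers during working hours only");
  if (!restricted) return true; // unrestricted resources are open in this sketch

  const isManager = user.jobTitle === "Manager";
  const duringWorkingHours = env.isWorkingDay && env.hour >= 9 && env.hour < 17;
  return isManager && duringWorkingHours;
}
```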
-One advantage of ABAC is that more granular and dynamic access control can be achieved through rule and condition evaluations without the need to create large numbers of very specific roles and RBAC assignments.
+One advantage of ABAC is that more granular and dynamic access control can be achieved through rule and condition evaluations without the need to create large numbers of specific roles and RBAC assignments.
-One method for achieving ABAC with Azure Active Directory is using [dynamic groups](../enterprise-users/groups-create-rule.md). Dynamic groups allow administrators to dynamically assign users to groups based on specific user attributes with desired values. For example, an Authors group could be created where all users with the job title Author are dynamically assigned to the Authors group. Dynamic groups can be used in combination with RBAC for authorization where you map roles to groups and dynamically assign users to groups.
+One method for achieving ABAC with Azure Active Directory is using [dynamic groups](../enterprise-users/groups-create-rule.md). Dynamic groups allow administrators to dynamically assign users to groups based on specific user attributes with desired values. For example, an Authors group could be created where all users with the job title Author are dynamically assigned to the Authors group. Dynamic groups can be used in combination with RBAC for authorization where you map roles to groups and dynamically assign users to groups.
-[Azure ABAC](../../role-based-access-control/conditions-overview.md) is an example of an ABAC solution that is available today. Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions.
+[Azure ABAC](../../role-based-access-control/conditions-overview.md) is an example of an ABAC solution that is available today. Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions.
## Implementing authorization

Authorization logic is often implemented within the applications or solutions where access control is required. In many cases, application development platforms offer middleware or other API solutions that simplify the implementation of authorization. Examples include use of the [AuthorizeAttribute](/aspnet/core/security/authorization/simple?view=aspnetcore-5.0&preserve-view=true) in ASP.NET or [Route Guards](./scenario-spa-sign-in.md?tabs=angular2#sign-in-with-a-pop-up-window) in Angular.
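As an illustration of the ASP.NET Core option, the sketch below shows both the declarative and the imperative styles. The controller, the role names, and the assumption that the token's role claims are mapped to ASP.NET Core roles (for example, by Microsoft.Identity.Web) are all placeholders for this example.

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class OrdersController : Controller
{
    // Declarative check: the authorization middleware rejects callers
    // whose identity lacks the role. "Orders.Approver" is a placeholder.
    [Authorize(Roles = "Orders.Approver")]
    public IActionResult Approve(int id) => Ok();

    // Imperative check: branch on the authenticated identity inside the action.
    public IActionResult Details(int id) =>
        User.IsInRole("Orders.Reader") ? Ok() : Forbid();
}
```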
-For authorization approaches that rely on information about the authenticated entity, an application will evaluate information exchanged during authentication. For example, by using the information that was provided within a [security token](./security-tokens.md)). For information not contained in a security token, an application might make extra calls to external resources.
+For authorization approaches that rely on information about the authenticated entity, an application evaluates information exchanged during authentication, such as the information provided within a [security token](./security-tokens.md). For information not contained in a security token, an application might make extra calls to external resources.
It's not strictly necessary for developers to embed authorization logic entirely within their applications. Instead, dedicated authorization services can be used to centralize authorization implementation and management.
active-directory Custom Rbac For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-rbac-for-developers.md
Title: Custom role-based access control (RBAC) for application developers - Microsoft identity platform
-description: Learn about what custom RBAC is and why it's important to implement in your applications.
+ Title: Custom role-based access control for application developers
+description: Learn about what custom RBAC is and why it's important to implement in applications.
-+ Previously updated : 11/15/2021 Last updated : 06/16/2022 -+ #Customer intent: As a developer, I want to learn about custom RBAC and why I need to use it in my application.
# Role-based access control for application developers
-Role-based access control (RBAC) allows certain users or groups to have specific permissions regarding which resources they have access to, what they can do with those resources, and who manages which resources. Application role-based access control differs from [Azure role-based access control](../../role-based-access-control/overview.md) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which helps you manage Azure resources. Azure AD RBAC allows you to manage Azure AD resources. This article explains application-specific role-based access control.
+Role-based access control (RBAC) allows certain users or groups to have specific permissions to access and manage resources. Application RBAC differs from [Azure role-based access control](../../role-based-access-control/overview.md) and [Azure AD role-based access control](../roles/custom-overview.md#understand-azure-ad-role-based-access-control). Azure custom roles and built-in roles are both part of Azure RBAC, which is used to help manage Azure resources. Azure AD RBAC is used to manage Azure AD resources. This article explains application-specific RBAC.
+## Role definitions
-## What are roles?
+RBAC is a popular mechanism to enforce authorization in applications. When an organization uses RBAC, an application developer defines roles rather than authorizing individual users or groups. An administrator can then assign roles to different users and groups to control who has access to content and functionality.
-Role-based access control (RBAC) is a popular mechanism to enforce authorization in applications. When using RBAC, an application developer defines roles rather than authorizing individual users or groups. An administrator can then assign roles to different users and groups to control who has access to what content and functionality.
+RBAC helps an application developer manage resources and their usage. RBAC also allows an application developer to control the areas of an application that users can access. Administrators can control which users have access to an application by using the *User assignment required* property. Developers need to account for specific users within the application and what those users can do.
-RBAC helps you, as an app developer, manage resources and what users can do with those resources. RBAC also allows an app developer to control what areas of an app users have access to. While admins can control which users have access to an app using the *User assignment required* property, developers need to account for specific users within the app and what users can do within the app.
+An application developer first creates a role definition within the registration section of the application in the Azure AD administration center. The role definition includes a value that is returned for users who are assigned to that role. A developer can then use this value to implement application logic to determine what those users can or can't do in an application.
-As an app developer, you need to first create a role definition within the app's registration section in the Azure AD admin center. The role definition includes a value that is returned for users who are assigned to that role. A developer can then use this value to implement application logic to determine what those users can or can't do in an application.
+## RBAC options
-## Options for adding RBAC to apps
+The following guidance applies when including role-based access control authorization in an application:
-There are several considerations that must be managed when including role-based access control authorization in an application. These include:
-- Defining the roles that are required by an application's authorization needs.
-- Applying, storing, and retrieving the pertinent roles for authenticated users.
-- Affecting the desired application behavior based on the roles assigned to the current user.
+- Define the roles that are required for the authorization needs of the application.
+- Apply, store, and retrieve the pertinent roles for authenticated users.
+- Determine how the application behaves based on the roles assigned to the current user.
-Once you define the roles, the Microsoft identity platform supports several different solutions that can be used to apply, store, and retrieve role information for authenticated users. These solutions include app roles, Azure AD groups, and the use of custom datastores for user role information.
+After the roles are defined, the Microsoft identity platform supports several different solutions that can be used to apply, store, and retrieve role information for authenticated users. These solutions include app roles, Azure AD groups, and the use of custom datastores for user role information.
-Developers have the flexibility to provide their own implementation for how role assignments are to be interpreted as application permissions. This can involve leveraging middleware or other functionality provided by their applications' platform or related libraries. Apps will typically receive user role information as claims and will decide user permissions based on those claims.
+Developers have the flexibility to provide their own implementation for how role assignments are to be interpreted as application permissions. This interpretation of permissions can involve using middleware or other options provided by the application platform or related libraries. Applications typically receive user role information as claims and then decide user permissions based on those claims.
### App roles
-Azure AD supports declaring app roles for an application registration. When a user signs into an application, Azure AD will include a [roles claim](./access-tokens.md#payload-claims) for each role that the user has been granted for that application. Applications that receive tokens that contain these claims can then use this information to determine what permissions the user may exercise based on the roles they're assigned.
+Azure AD supports declaring app roles for an application. When a user signs into an application, Azure AD includes a [roles claim](./access-tokens.md#payload-claims) for each role that the user has been granted for that application. Applications receive the tokens that contain the role claims and then can use the information for permission assignments. The roles assigned to the user determine the level of access to resources and functionality.
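A minimal sketch of consuming the roles claim in C#, assuming the token has already been validated by your authentication middleware. Depending on the library, the claim type may be the raw `roles` or the mapped standard role claim type, so the helper checks both; the `Survey.Create` value in the usage note is a placeholder.

```csharp
using System.Linq;
using System.Security.Claims;

public static class RoleClaimHelper
{
    // Collects every app role present in the validated token. Depending on the
    // authentication library, the claim type may be the raw "roles" or the
    // mapped ClaimTypes.Role, so both are checked here.
    public static string[] GetAppRoles(ClaimsPrincipal user) =>
        user.Claims
            .Where(c => c.Type == "roles" || c.Type == ClaimTypes.Role)
            .Select(c => c.Value)
            .Distinct()
            .ToArray();
}
```

Application logic can then branch on the result, for example `bool canCreate = RoleClaimHelper.GetAppRoles(User).Contains("Survey.Create");`.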
### Groups
-Developers can also use [Azure AD groups](../fundamentals/active-directory-manage-groups.md) to implement RBAC in their applications, where the users' memberships in specific groups are interpreted as their role memberships. When using Azure AD groups, Azure AD will include a [groups claim](./access-tokens.md#payload-claims) that will include the identifiers of all of the groups to which the user is assigned within the current Azure AD tenant. Applications that receive tokens that contain these claims can then use this information to determine what permissions the user may exercise based on the roles they're assigned.
+Developers can also use [Azure AD groups](../fundamentals/active-directory-manage-groups.md) to implement RBAC in their applications, where the memberships of the user in specific groups are interpreted as their role memberships. When an organization uses Azure AD groups, a [groups claim](./access-tokens.md#payload-claims) is included in the token that specifies the identifiers of all of the groups to which the user is assigned within the current Azure AD tenant.
> [!IMPORTANT]
-> When working with groups, developers need to be aware of the concept of an [overage claim](./access-tokens.md#payload-claims). By default, if a user is a member of more than the overage limit (150 for SAML tokens, 200 for JWT tokens, 6 if using the implicit flow), Azure AD will not emit a groups claim in the token. Instead, it will include an "overage claim" in the token that indicates the token's consumer will need to query the Graph API to retrieve the user's group memberships. For more information about working with overage claims, see [Claims in access tokens](./access-tokens.md#claims-in-access-tokens). It is possible to only emit groups that are assigned to an application, though [group-based assignment](../manage-apps/assign-user-or-group-access-portal.md) does require Azure Active Directory Premium P1 or P2 edition.
+> When working with groups, developers need to be aware of the concept of an [overage claim](./access-tokens.md#payload-claims). By default, if a user is a member of more than the overage limit (150 for SAML tokens, 200 for JWT tokens, 6 if using the implicit flow), Azure AD doesn't emit a groups claim in the token. Instead, it includes an "overage claim" in the token that indicates the consumer of the token needs to query the Microsoft Graph API to retrieve the group memberships of the user. For more information about working with overage claims, see [Claims in access tokens](./access-tokens.md#claims-in-access-tokens). It's possible to only emit groups that are assigned to an application, though [group-based assignment](../manage-apps/assign-user-or-group-access-portal.md) does require Azure Active Directory Premium P1 or P2 edition.
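The following sketch shows one way to detect the overage indicator in C#. It assumes a validated `ClaimsPrincipal` and relies on the `_claim_names` claim that Azure AD emits in place of the groups claim when the limit is exceeded.

```csharp
using System.Security.Claims;

public static class GroupOverageHelper
{
    // Azure AD signals a group overage by emitting `_claim_names`/`_claim_sources`
    // instead of a `groups` claim. When that happens, the app must call
    // Microsoft Graph (for example, /me/memberOf) to resolve the user's groups.
    public static bool HasGroupOverage(ClaimsPrincipal user) =>
        !user.HasClaim(c => c.Type == "groups") &&
        user.HasClaim(c => c.Type == "_claim_names");
}
```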
### Custom data store

App roles and groups both store information about user assignments in the Azure AD directory. Another option for managing user role information that is available to developers is to maintain the information outside of the directory in a custom data store. For example, in a SQL database, Azure Table storage, or Azure Cosmos DB Table API.
-Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. If developers maintain role information in a custom data store, they'll need to have the apps retrieve the roles. This is typically done using extensibility points defined in the middleware available to the platform that is being used to develop the application. Furthermore, developers are responsible for properly securing the custom data store.
+Using custom storage allows developers extra customization and control over how to assign roles to users and how to represent them. However, the extra flexibility also introduces more responsibility. For example, there's no mechanism currently available to include this information in tokens returned from Azure AD. If developers maintain role information in a custom data store, they'll need to have the applications retrieve the roles. Retrieving the roles is typically done using extensibility points defined in the middleware available to the platform that's being used to develop the application. Developers are responsible for properly securing the custom data store.
-Using [Azure AD B2C Custom policies](../../active-directory-b2c/custom-policy-overview.md) it is possible to interact with custom data stores and to include custom claims within a token.
+Using [Azure AD B2C Custom policies](../../active-directory-b2c/custom-policy-overview.md), it's possible to interact with custom data stores and to include custom claims within a token.
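For ASP.NET Core specifically, one such extensibility point is `IClaimsTransformation`. The sketch below is one possible shape, assuming a hypothetical `IUserRoleStore` abstraction over the custom data store; it is not the only way to load external roles.

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;

// Hypothetical abstraction over the custom role store (SQL, Table storage, Cosmos DB).
public interface IUserRoleStore
{
    Task<string[]> GetRolesAsync(string userObjectId);
}

// Runs after authentication and enriches the principal with roles
// loaded from the custom data store.
public class RoleClaimsTransformation : IClaimsTransformation
{
    private readonly IUserRoleStore _store;

    public RoleClaimsTransformation(IUserRoleStore store) => _store = store;

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        // The transformation can run more than once per request; skip if roles were added.
        if (principal.HasClaim(c => c.Type == ClaimTypes.Role))
        {
            return principal;
        }

        var oidClaim = principal.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier");
        if (oidClaim is null)
        {
            return principal;
        }

        var identity = new ClaimsIdentity();
        foreach (string role in await _store.GetRolesAsync(oidClaim.Value))
        {
            identity.AddClaim(new Claim(ClaimTypes.Role, role));
        }
        principal.AddIdentity(identity);
        return principal;
    }
}
```

The transformation is registered with `services.AddScoped<IClaimsTransformation, RoleClaimsTransformation>();`. Because ASP.NET Core may invoke it more than once per request, implementations should stay idempotent, as the guard above does.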
-## Choosing an approach
+## Choose an approach
-In general, app roles are the recommended solution. App roles provide the simplest programming model and are purpose made for RBAC implementations. However, specific application requirements may indicate that a different approach would be better solution.
+In general, app roles are the recommended solution. App roles provide the simplest programming model and are purpose made for RBAC implementations. However, specific application requirements may indicate that a different approach would be a better solution.
-Developers can use app roles to control whether a user can sign into an app, or an app can obtain an access token for a web API. App roles are preferred over Azure AD groups by developers when they want to describe and control the parameters of authorization in their app themselves. For example, an app using groups for authorization will break in the next tenant as both the group ID and name could be different. An app using app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the same reasons.
+Developers can use app roles to control whether a user can sign into an application, or an application can obtain an access token for a web API. App roles are preferred over Azure AD groups by developers when they want to describe and control the parameters of authorization in their applications. For example, an application using groups for authorization breaks in the next tenant as both the group identifier and name could be different. An application using app roles remains safe.
Although either app roles or groups can be used for authorization, key differences between them can influence which is the best solution for a given scenario.

| |App Roles |Azure AD Groups |Custom Data Store|
|--|--|--|--|
-|**Programming model** |**Simplest**. They are specific to an application and are defined in the app registration. They move with the application.|**More complex**. Group IDs vary between tenants and overage claims may need to be considered. Groups aren't specific to an app, but to an Azure AD tenant.|**Most complex**. Developers must implement means by which role information is both stored and retrieved.|
+|**Programming model** |**Simplest**. They're specific to an application and are defined in the application registration. They move with the application.|**More complex**. Group identifiers vary between tenants and overage claims may need to be considered. Groups aren't specific to an application, but to an Azure AD tenant.|**Most complex**. Developers must implement means by which role information is both stored and retrieved.|
|**Role values are static between Azure AD tenants**|Yes |No |Depends on the implementation.|
-|**Role values can be used in multiple applications**|No. Unless role configuration is duplicated in each app registration.|Yes |Yes |
+|**Role values can be used in multiple applications**|No (Unless role configuration is duplicated in each application registration.)|Yes |Yes |
|**Information stored within directory**|Yes |Yes |No |
-|**Information is delivered via tokens**|Yes (roles claim) |Yes (In the case of an overage, *groups claims* may need to be retrieved at runtime) |No. Retrieved at runtime via custom code. |
-|**Lifetime**|Lives in app registration in directory. Removed when the app registration is removed.|Lives in directory. Remain intact even if the app registration is removed. |Lives in custom data store. Not tied to app registration.|
-
+|**Information is delivered via tokens**|Yes (roles claim) |Yes (If an overage occurs, *groups claims* may need to be retrieved at runtime.) |No (Retrieved at runtime via custom code.) |
+|**Lifetime**|Lives in application registration in directory. Removed when the application registration is removed.|Lives in directory. Remain intact even if the application registration is removed. |Lives in custom data store. Not tied to application registration.|
## Next steps

- [How to add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md).
-- [Register an application with the Microsoft identity platform](./quickstart-register-app.md).
- [Azure Identity Management and access control security best practices](../../security/fundamentals/identity-management-best-practices.md).
active-directory Howto Add App Roles In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
Title: Add app roles and get them from a token
-description: Learn how to add app roles to an application registered in Azure Active Directory, assign users and groups to these roles, and receive them in the 'roles' claim in the token.
+description: Learn how to add app roles to an application registered in Azure Active Directory. Assign users and groups to these roles, and receive them in the 'roles' claim in the token.
Previously updated : 05/06/2021 Last updated : 06/13/2022
# Add app roles to your application and receive them in the token
-Role-based access control (RBAC) is a popular mechanism to enforce authorization in applications. When using RBAC, an administrator grants permissions to roles, and not to individual users or groups. The administrator can then assign roles to different users and groups to control who has access to what content and functionality.
+Role-based access control (RBAC) is a popular mechanism to enforce authorization in applications. RBAC allows administrators to grant permissions to roles rather than to specific users or groups. The administrator can then assign roles to different users and groups to control who has access to what content and functionality.
-Using RBAC with Application Roles and Role Claims, developers can securely enforce authorization in their apps with less effort.
+By using RBAC with application roles and role claims, developers can securely enforce authorization in their apps with less effort.
-Another approach is to use Azure AD Groups and Group Claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD Groups and Application Roles are not mutually exclusive; they can be used in tandem to provide even finer-grained access control.
+Another approach is to use Azure Active Directory (Azure AD) groups and group claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD groups and application roles aren't mutually exclusive; they can be used in tandem to provide even finer-grained access control.
## Declare roles for an application

You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted, both individually and through the user's group memberships. This claim can be used to implement claims-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
-> [!IMPORTANT]
-> Currently if you add a service principal to a group, and then assign an app role to that group, Azure AD does not add the `roles` claim to tokens it issues.
+Currently, if you add a service principal to a group, and then assign an app role to that group, Azure AD doesn't add the `roles` claim to tokens it issues.
-App roles are declared using the app roles by using[App roles UI](#app-roles-ui) in the Azure portal:
+App roles are declared by using the [App roles UI](#app-roles-ui) in the Azure portal:
-The number of roles you add counts toward application manifest limits enforced by Azure Active Directory. For information about these limits, see the [Manifest limits](./reference-app-manifest.md#manifest-limits) section of [Azure Active Directory app manifest reference](reference-app-manifest.md).
+The number of roles you add counts toward application manifest limits enforced by Azure AD. For information about these limits, see the [Manifest limits](./reference-app-manifest.md#manifest-limits) section of [Azure Active Directory app manifest reference](reference-app-manifest.md).
### App roles UI

To create an app role by using the Azure portal's user interface:

1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. Select the **Directory + subscription** filter in top menu, and then choose the Azure Active Directory tenant that contains the app registration to which you want to add an app role.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant that contains the app registration to which you want to add an app role.
1. Search for and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations**, and then select the application you want to define app roles in.
1. Select **App roles**, and then select **Create app role**.
To create an app role by using the Azure portal's user interface:
| - | -- | -- |
| **Display name** | Display name for the app role that appears in the admin consent and app assignment experiences. This value may contain spaces. | `Survey Writer` |
| **Allowed member types** | Specifies whether this app role can be assigned to users, applications, or both.<br/><br/>When available to `applications`, app roles appear as application permissions in an app registration's **Manage** section > **API permissions > Add a permission > My APIs > Choose an API > Application permissions**. | `Users/Groups` |
- | **Value** | Specifies the value of the roles claim that the application should expect in the token. The value should exactly match the string referenced in the application's code. The value cannot contain spaces. | `Survey.Create` |
+ | **Value** | Specifies the value of the roles claim that the application should expect in the token. The value should exactly match the string referenced in the application's code (see the sketch after this table). The value can't contain spaces. | `Survey.Create` |
| **Description** | A more detailed description of the app role displayed during admin app assignment and consent experiences. | `Writers can create surveys.` |
| **Do you want to enable this app role?** | Specifies whether the app role is enabled. To delete an app role, deselect this checkbox and apply the change before attempting the delete operation. | _Checked_ |
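As a hedged sketch of how the **Value** from the table above (`Survey.Create`) might be matched in application code, assuming the role claim has been mapped so that ASP.NET Core role checks see app role values:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class SurveyController : Controller
{
    // "Survey.Create" must match the app role's Value exactly; the comparison
    // is case-sensitive and the value can't contain spaces.
    [Authorize(Roles = "Survey.Create")]
    public IActionResult Create() => View();
}
```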
To assign users and groups to roles by using the Azure portal:
1. Select the application in which you want to assign users or security group to roles.
1. Under **Manage**, select **Users and groups**.
1. Select **Add user** to open the **Add Assignment** pane.
-1. Select the **Users and groups** selector from the **Add Assignment** pane. A list of users and security groups is displayed. You can search for a certain user or group as well as select multiple users and groups that appear in the list.
+1. Select the **Users and groups** selector from the **Add Assignment** pane. A list of users and security groups is displayed. You can search for a certain user or group and select multiple users and groups that appear in the list.
1. Once you've selected users and groups, select the **Select** button to proceed.
1. Select **Select a role** in the **Add assignment** pane. All the roles that you've defined for the application are displayed.
1. Choose a role and select the **Select** button.
Confirm that the users and groups you added appear in the **Users and groups** list.
Once you've added app roles in your application, you can assign an app role to a client app by using the Azure portal or programmatically by using [Microsoft Graph](/graph/api/user-post-approleassignments).
-When you assign app roles to an application, you create _application permissions_. Application permissions are typically used by daemon apps or back-end services that need to authenticate and make authorized API calls as themselves, without the interaction of a user.
+When you assign app roles to an application, you create _application permissions_. Application permissions are typically used by daemon apps or back-end services that need to authenticate and make authorized API calls as themselves, without the interaction of a user.
To assign app roles to an application by using the Azure portal:
Because these are _application permissions_, not delegated permissions, an admin must grant consent to use the app roles assigned to the application.
The **Status** column should reflect that consent has been **Granted for \<tenant name\>**.

<a name="use-app-roles-in-your-web-api"></a>
+
## Usage scenario of app roles
-If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registration**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user.
+If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registrations**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user.
If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the access token. Your next step is to add code to your web API to check for those roles when the API is called.
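As a sketch of that last step, the helper below checks the roles claim in the access token received by the web API; the claim types covered and the role names are assumptions for this example, not a prescribed API.

```csharp
using System;
using System.Linq;
using System.Security.Claims;

public static class ApiRoleCheck
{
    // Returns true if the validated access token carries at least one of the
    // accepted app roles, whether surfaced as "roles" or as ClaimTypes.Role.
    public static bool HasAnyRole(ClaimsPrincipal user, params string[] acceptedRoles) =>
        user.Claims
            .Where(c => c.Type == "roles" || c.Type == ClaimTypes.Role)
            .Select(c => c.Value)
            .Intersect(acceptedRoles, StringComparer.Ordinal)
            .Any();
}
```

An API action can then return `Forbid()` when, for example, `ApiRoleCheck.HasAnyRole(User, "Survey.Create")` is false.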
To learn how to add authorization to your web API, see [Protected web API: Verif
Though you can use app roles or groups for authorization, key differences between them can influence which you decide to use for your scenario.
-| App roles | Groups |
-| | -- |
-| They are specific to an application and are defined in the app registration. They move with the application. | They are not specific to an app, but to an Azure AD tenant. |
-| App roles are removed when their app registration is removed. | Groups remain intact even if the app is removed. |
-| Provided in the `roles` claim. | Provided in `groups` claim. |
+| App roles | Groups |
+| -- | -- |
+| They're specific to an application and are defined in the app registration. They move with the application. | They aren't specific to an app, but to an Azure AD tenant. |
+| App roles are removed when their app registration is removed. | Groups remain intact even if the app is removed. |
+| Provided in the `roles` claim. | Provided in `groups` claim. |
Developers can use app roles to control whether a user can sign in to an app or an app can obtain an access token for a web API. To extend this security control to groups, developers and admins can also assign security groups to app roles.
-App roles are preferred by developers when they want to describe and control the parameters of authorization in their app themselves. For example, an app using groups for authorization will break in the next tenant as both the group ID and name could be different. An app using app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the very same reasons as it allows the SaaS app to be provisioned in multiple tenants.
+App roles are preferred by developers when they want to describe and control the parameters of authorization in their app themselves. For example, an app using groups for authorization will break in the next tenant as both the group ID and name could be different. An app using app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the same reasons as it allows the SaaS app to be provisioned in multiple tenants.
## Next steps
Learn more about app roles with the following resources.
- Code samples on GitHub
  - [Add authorization using app roles & roles claims to an ASP\.NET Core web app](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md)
- - [Add authorization using groups and group claims to an ASP.NET Core web app](https://aka.ms/groupssample)
- - [Angular single-page application (SPA) calling a .NET Core web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl)
- - [React single-page application (SPA) calling a Node.js web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl)
- Reference documentation
  - [Azure AD app manifest](./reference-app-manifest.md)
- - [Azure AD access tokens](access-tokens.md)
- - [Azure AD ID tokens](id-tokens.md)
- - [Provide optional claims to your app](active-directory-optional-claims.md)
- Video: [Implement authorization in your applications with Microsoft identity platform](https://www.youtube.com/watch?v=LRoc-na27l0) (1:01:15)
active-directory Howto Implement Rbac For Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-implement-rbac-for-apps.md
Title: Implement role-based access control in apps
+ Title: Implement role-based access control in applications
description: Learn how to implement role-based access control in your applications. -+ Previously updated : 09/17/2021- Last updated : 06/16/2022+
-#Customer intent: As an application developer, I want to learn how to implement role-based access control in my apps so I can ensure that only those users with the right access privileges can access my app's functionality.
+#Customer intent: As an application developer, I want to learn how to implement role-based access control in my applications so I can make sure that only users with the right access privileges can access their functionality.
-# Implement role-based access control in apps
+# Implement role-based access control
-Role-based access control (RBAC) allows users or groups to have specific permissions regarding which resources they have access to, what they can do with those resources, and who manages which resources.
-
-Typically, when we talk about implementing RBAC to protect a resource, we're looking to protect either a web application, a single-page application (SPA), or an API. This could be either for the entire application or API, or specific areas, features, or API methods.
-
-This article explains how to implement application-specific role-based access control. For more information about the basics of authorization, see [Authorization basics](./authorization-basics.md).
-
-## Implementing RBAC using the Microsoft identity platform
+Role-based access control (RBAC) allows users or groups to have specific permissions to access and manage resources. Typically, implementing RBAC to protect a resource includes protecting either a web application, a single-page application (SPA), or an API. This protection could be for the entire application or API, specific areas and features, or API methods. For more information about the basics of authorization, see [Authorization basics](./authorization-basics.md).
As discussed in [Role-based access control for application developers](./custom-rbac-for-developers.md), there are three ways to implement RBAC using the Microsoft identity platform:

-- **App Roles** – using the [App Roles feature in an application registration](./howto-add-app-roles-in-azure-ad-apps.md#declare-roles-for-an-application) in conjunction with logic within your application to interpret incoming App Role assignments.
-- **Groups** – using an incoming identity's group assignments in conjunction with logic within your application to interpret the group assignments.
-- **Custom Data Store** – retrieve and interpret role assignments using logic within your application.
+- **App Roles** – using the [App Roles feature in an application](./howto-add-app-roles-in-azure-ad-apps.md#declare-roles-for-an-application) using logic within the application to interpret incoming app role assignments.
+- **Groups** – using group assignments of an incoming identity using logic within the application to interpret the group assignments.
+- **Custom Data Store** – retrieve and interpret role assignments using logic within the application.
-The preferred approach is to use *App Roles* as it is the easiest to implement. This approach is supported directly by the SDKs that are used in building apps utilizing the Microsoft identity platform. For more information on how to choose an approach, see [Choosing an approach](./custom-rbac-for-developers.md#choosing-an-approach).
+The preferred approach is to use *App Roles* as it is the easiest to implement. This approach is supported directly by the SDKs that are used in building apps utilizing the Microsoft identity platform. For more information on how to choose an approach, see [Choose an approach](./custom-rbac-for-developers.md#choose-an-approach).
-The rest of this article will show you how to define app roles and implement RBAC within your application using the app roles.
+## Define app roles
-## Defining roles for your application
+The first step for implementing RBAC for an application is to define the app roles for it and assign users or groups to them. This process is outlined in [How to: Add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md). After defining the app roles and assigning users or groups to them, access the role assignments in the tokens coming into the application and act on them accordingly.
-The first step for implementing RBAC for your application is to define the roles your application needs and assign users or groups to those roles. This process is outlined in [How to: Add app roles to your application and receive them in the token](./howto-add-app-roles-in-azure-ad-apps.md). Once you have defined your roles and assigned users or groups, you can access the role assignments in the tokens coming into your application and act on them accordingly.
+## Implement RBAC in ASP.NET Core
-## Implementing RBAC in ASP.NET Core
+ASP.NET Core supports adding RBAC to an ASP.NET Core web application or web API. Adding RBAC allows for easy implementation by using [role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#adding-role-checks) with the ASP.NET Core *Authorize* attribute. It's also possible to use ASP.NET Core's support for [policy-based role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#policy-based-role-checks).
-ASP.NET Core supports adding RBAC to an ASP.NET Core web application or web API. This allows for easy implementation of RBAC using [role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#adding-role-checks) with the ASP.NET Core *Authorize* attribute. It is also possible to use ASP.NET Core's support for [policy-based role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#policy-based-role-checks).
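As an illustration of the policy-based option, here's a minimal sketch; the policy name and role names are placeholders for this example.

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.DependencyInjection;

public static class AuthorizationPolicySetup
{
    // Registers a named policy that succeeds when the signed-in user holds
    // either of the listed app roles. Names are placeholders for this sketch.
    public static void ConfigurePolicies(IServiceCollection services)
    {
        services.AddAuthorization(options =>
        {
            options.AddPolicy("SurveyAdminsOnly",
                policy => policy.RequireRole("Survey.Admin", "Survey.Create"));
        });
    }
}
```

Controllers or actions can then opt in with `[Authorize(Policy = "SurveyAdminsOnly")]`.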
+### ASP.NET Core MVC web application
-### ASP.NET Core MVC web application
+Implementing RBAC in an ASP.NET Core MVC web application is straightforward. It mainly involves using the *Authorize* attribute to specify which roles should be allowed to access specific controllers or actions in the controllers. Follow these steps to implement RBAC in an ASP.NET Core MVC application:
-Implementing RBAC in an ASP.NET Core MVC web application is straightforward. It mainly involves using the *Authorize* attribute to specify which roles should be allowed to access specific controllers or actions in the controllers. Follow these steps to implement RBAC in your ASP.NET Core MVC application:
-1. Create an app registration with app roles and assignments as outlined in *Defining roles for your application* above.
+1. Create an application registration with app roles and assignments as outlined in *Define app roles* above.
1. Do one of the following steps:
- - Create a new ASP.NET Core MVC web app project using the **dotnet cli**. Specify the *--auth* flag with either *SingleOrg* for single tenant authentication or *MultiOrg* for multi-tenant authentication, the *--client-id* flag with the client if from your app registration, and the *--tenant-id* flag with your tenant if from your Azure AD tenant:
-
- ```bash
-
- dotnet new mvc --auth SingleOrg --client-id <YOUR-APPLICATION-CLIENT-ID> --tenant-id <YOUR-TENANT-ID>
-
- ```
-
- - Add the Microsoft.Identity.Web and Microsoft.Identity.Web.UI libraries to an existing ASP.NET Core MVC project:
-
- ```bash
- dotnet add package Microsoft.Identity.Web
+ - Create a new ASP.NET Core MVC web application project using the **dotnet cli**. Specify the `--auth` flag with either `SingleOrg` for single tenant authentication or `MultiOrg` for multi-tenant authentication, the `--client-id` flag with the client ID from the application registration, and the `--tenant-id` flag with the tenant ID from the Azure AD tenant:
+
+ ```bash
+ dotnet new mvc --auth SingleOrg --client-id <YOUR-APPLICATION-CLIENT-ID> --tenant-id <TENANT-ID>
+ ```
- dotnet add package Microsoft.Identity.Web.UI
+ - Add the Microsoft.Identity.Web and Microsoft.Identity.Web.UI libraries to an existing ASP.NET Core MVC project:
- ```
+ ```bash
+ dotnet add package Microsoft.Identity.Web
+ dotnet add package Microsoft.Identity.Web.UI
+ ```
- And then follow the instructions specified in [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](./quickstart-v2-aspnet-core-webapp.md?view=aspnetcore-5.0&preserve-view=true) to add authentication to your application.
-1. Add role checks on your controller actions as outlined in [Adding role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#adding-role-checks).
+1. Follow the instructions specified in [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](./quickstart-v2-aspnet-core-webapp.md?view=aspnetcore-5.0&preserve-view=true) to add authentication to the application.
+1. Add role checks on the controller actions as outlined in [Adding role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#adding-role-checks).
1. Test the application by trying to access one of the protected MVC routes.

### ASP.NET Core web API
-Implementing RBAC in an ASP.NET Core web API mainly involves utilizing the *Authorize* attribute to specify which roles should be allowed to access specific controllers or actions in the controllers. Follow these steps to implement RBAC in your ASP.NET Core web API:
-1. Create an app registration with app roles and assignments as outlined in *Defining roles for your application* above.
-1. Do one of the following steps:
- - Create a new ASP.NET Core MVC web API project using the **dotnet cli**. Specify the *--auth* flag with either *SingleOrg* for single tenant authentication or *MultiOrg* for multi-tenant authentication, the *--client-id* flag with the client if from your app registration, and the *--tenant-id* flag with your tenant if from your Azure AD tenant:
+Implementing RBAC in an ASP.NET Core web API mainly involves utilizing the *Authorize* attribute to specify which roles should be allowed to access specific controllers or actions in the controllers. Follow these steps to implement RBAC in the ASP.NET Core web API:
- ```bash
-
- dotnet new webapi --auth SingleOrg --client-id <YOUR-APPLICATION-CLIENT-ID> --tenant-id <YOUR-TENANT-ID>
-
- ```
+1. Create an application registration with app roles and assignments as outlined in *Define app roles* above.
+1. Do one of the following steps:
+
+ - Create a new ASP.NET Core web API project using the **dotnet cli**. Specify the `--auth` flag with either `SingleOrg` for single tenant authentication or `MultiOrg` for multi-tenant authentication, the `--client-id` flag with the client ID from the application registration, and the `--tenant-id` flag with the tenant ID from the Azure AD tenant:
+
+ ```bash
+ dotnet new webapi --auth SingleOrg --client-id <YOUR-APPLICATION-CLIENT-ID> --tenant-id <TENANT-ID>
+ ```
- Add the Microsoft.Identity.Web and Swashbuckle.AspNetCore libraries to an existing ASP.NET Core web API project:
-
- ```bash
-
- dotnet add package Microsoft.Identity.Web
- dotnet add package Swashbuckle.AspNetCore
+ ```bash
+ dotnet add package Microsoft.Identity.Web
+ dotnet add package Swashbuckle.AspNetCore
+ ```
- ```
-
- And then follow the instructions specified in [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](./quickstart-v2-aspnet-core-webapp.md?view=aspnetcore-5.0&preserve-view=true) to add authentication to your application.
-1. Add role checks on your controller actions as outlined in [Adding role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#adding-role-checks).
-1. Call the API from a client app. See [Angular single-page application calling .NET Core web API and using App Roles to implement Role-Based Access Control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles) for an end to end sample.
+1. Follow the instructions specified in [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](./quickstart-v2-aspnet-core-webapp.md?view=aspnetcore-5.0&preserve-view=true) to add authentication to the application.
+1. Add role checks on the controller actions as outlined in [Adding role checks](/aspnet/core/security/authorization/roles?view=aspnetcore-5.0&preserve-view=true#adding-role-checks); a minimal sketch follows this list.
+1. Call the API from a client application. See [Angular single-page application calling .NET Core web API and using App Roles to implement Role-Based Access Control](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl/1-call-api-roles) for an end-to-end sample.
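The role-check step above might look like the following in a web API controller; the route, the `TaskAdmin` role name, and the payload are placeholders for this sketch rather than the linked sample's exact code.

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class TodoListController : ControllerBase
{
    // Callers whose access token lacks the "TaskAdmin" app role receive 403.
    [HttpGet]
    [Authorize(Roles = "TaskAdmin")]
    public IActionResult Get() => Ok(new[] { "task1", "task2" });
}
```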
-
-## Implementing RBAC in other platforms
+## Implement RBAC in other platforms
### Angular SPA using MsalGuard
-Implementing RBAC in an Angular SPA involves the use of [msal-angular](https://www.npmjs.com/package/@azure/msal-angular) to authorize access to the Angular routes contained within the application. This is shown in the [Enable your Angular single-page application to sign-in users and call APIs with the Microsoft identity platform](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial#chapter-5-control-access-to-your-protected-api-using-app-roles-and-security-groups) sample.
+
+Implementing RBAC in an Angular SPA involves the use of [msal-angular](https://www.npmjs.com/package/@azure/msal-angular) to authorize access to the Angular routes contained within the application. An example is shown in the [Enable your Angular single-page application to sign-in users and call APIs with the Microsoft identity platform](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial#chapter-5-control-access-to-your-protected-api-using-app-roles-and-security-groups) sample.
> [!NOTE]
> Client-side RBAC implementations should be paired with server-side RBAC to prevent unauthorized applications from accessing sensitive resources.

### Node.js with Express application
-Implementing RBAC in a Node.js with express application involves the use of MSAL to authorize access to the Express routes contained within the application. This is shown in the [Enable your Node.js web app to sign-in users and call APIs with the Microsoft identity platform](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial#chapter-4-control-access-to-your-app-using-app-roles-and-security-groups) sample.
+
+Implementing RBAC in a Node.js with Express application involves the use of MSAL to authorize access to the Express routes contained within the application. An example is shown in the [Enable your Node.js web app to sign-in users and call APIs with the Microsoft identity platform](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial#chapter-4-control-access-to-your-app-using-app-roles-and-security-groups) sample.
## Next steps
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md
The following tables show Microsoft Authentication Library support for several a
The Microsoft identity platform has been certified by the OpenID Foundation as a [certified OpenID provider](https://openid.net/certification/). If you prefer to use a library other than the Microsoft Authentication Library (MSAL) or another Microsoft-supported library, choose one with a [certified OpenID Connect implementation](https://openid.net/developers/certified/).
-If you choose to hand-code your own protocol-level implementation of [OAuth 2.0 or OpenID Connect 1.0](active-directory-v2-protocols.md), pay close attention to the security considerations in each standard's specification and follow a software development lifecycle (SDL) methodology like the [Microsoft SDL][Microsoft-SDL].
+If you choose to hand-code your own protocol-level implementation of [OAuth 2.0 or OpenID Connect 1.0](active-directory-v2-protocols.md), pay close attention to the security considerations in each standard's specification and follow secure software design and development practices like those in the [Microsoft SDL][Microsoft-SDL].
## Single-page application (SPA)
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
The code for ASP.NET is similar to the code shown for ASP.NET Core:
- Finally, it calls the `AcquireTokenSilent` method of the confidential client application.
- If interaction is required, the web app needs to challenge the user (re-sign in) and ask for more claims.
+> [!NOTE]
+> The scope should be the fully qualified scope name. For example, `{api_uri}/scope`.
+ The following code snippet is extracted from [HomeController.cs#L157-L192](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/257c8f96ec3ff875c351d1377b36403eed942a18/WebApp/Controllers/HomeController.cs#L157-L192) in the [ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) ASP.NET MVC code sample: ```C#
active-directory Secure Group Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-group-access-control.md
Title: Secure access control using groups in Azure AD description: Learn about how groups are used to securely control access to resources in Azure AD. -+ Previously updated : 2/21/2022 Last updated : 6/16/2022 -+ # Customer intent: As a developer, I want to learn how to most securely use Azure AD groups to control access to resources.
# Secure access control using groups in Azure AD
-Azure Active Directory (Azure AD) allows the use of groups to manage access to resources in an organization. You should use groups for access control when you want to manage and minimize access to applications. When groups are used, only members of those groups can access the resource. Using groups also allows you to benefit from several Azure AD group management features, such as attribute-based dynamic groups, external groups synced from on-premises Active Directory, and Administrator managed or self-service managed groups. To learn more about the benefits of groups for access control, see [manage access to an application](../manage-apps/what-is-access-management.md).
+Azure Active Directory (Azure AD) allows the use of groups to manage access to resources in an organization. Use groups for access control to manage and minimize access to applications. When groups are used, only members of those groups can access the resource. Using groups also enables the following management features:
-While developing an application, you can authorize access with the [groups claim](/graph/api/resources/application?view=graph-rest-1.0#properties&preserve-view=true). To learn more, see how to [configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+- Attribute-based dynamic groups
+- External groups synced from on-premises Active Directory
+- Administrator managed or self-service managed groups
-Today, many applications select a subset of groups with the *securityEnabled* flag set to *true* to avoid scale challenges, that is, to reduce the number of groups returned in the token. Setting the *securityEnabled* flag to be true for a group doesn't guarantee that the group is securely managed. Therefore, we suggest following the best practices described below:
+To learn more about the benefits of groups for access control, see [manage access to an application](../manage-apps/what-is-access-management.md).
+While developing an application, authorize access with the [groups claim](/graph/api/resources/application?view=graph-rest-1.0#properties&preserve-view=true). To learn more, see how to [configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+
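As a sketch of group-based authorization in ASP.NET Core, the policy below requires a specific group object ID to be present in the token's groups claim; the policy name and the GUID are placeholders for this example.

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.DependencyInjection;

public static class GroupPolicySetup
{
    // The GUID below is a placeholder; use the object ID of your security group.
    private const string SecurityAdminsGroupId = "00000000-0000-0000-0000-000000000000";

    public static void ConfigureGroupPolicy(IServiceCollection services)
    {
        services.AddAuthorization(options =>
        {
            // Succeeds only if the token's groups claim contains the group's object ID.
            options.AddPolicy("SecurityAdmins",
                policy => policy.RequireClaim("groups", SecurityAdminsGroupId));
        });
    }
}
```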
+Today, many applications select a subset of groups with the `securityEnabled` flag set to `true` to avoid scale challenges, that is, to reduce the number of groups returned in the token. Setting the `securityEnabled` flag to be true for a group doesn't guarantee that the group is securely managed.
## Best practices to mitigate risk
-This table presents several security best practices for security groups and the potential security risks each practice mitigates.
+The following table presents several security best practices for security groups and the potential security risks each practice mitigates.
|Security best practice |Security risk mitigated |
|--|--|
-|**Ensure resource owner and group owner are the same principal**. Applications should build their own group management experience and create new groups to manage access. For example, an application can create groups with *Group. Create* permission and add itself as the owner of the group. This way the application has control over its groups without being over privileged to modify other groups in the tenant.|When group owners and resource owners are different users or entities, group owners can add users to the group who aren't supposed to get access to the resource and thus give access to the resource unintentionally.|
-|**Build an implicit contract between resource owner(s) and group owner(s)**. The resource owner and the group owner should align on the group purpose, policies, and members that can be added to the group to get access to the resource. This level of trust is non-technical and relies on human or business contract.|When group owners and resource owners have different intentions, the group owner may add users to the group the resource owner didn't intend on giving access to. This can result in unnecessary and potentially risky access.|
-|**Use private groups for access control**. Microsoft 365 groups are managed by the [visibility concept](/graph/api/resources/group?view=graph-rest-1.0#group-visibility-options&preserve-view=true). This property controls the join policy of the group and visibility of group resources. Security groups have join policies that either allow anyone to join or require owner approval. On-premises-synced groups can also be public or private. When they're used to give access to a resource in the cloud, users joining this group on-premises can get access to the cloud resource as well.|When you use a *Public* group for access control, any member can join the group and get access to the resource. When a *Public* group is used to give access to an external resource, the risk of elevation of privilege exists.|
-|**Group nesting**. When you use a group for access control and it has other groups as its members, members of the subgroups can get access to the resource. In this case, there are multiple group owners - owners of the parent group and the subgroups.|Aligning with multiple group owners on the purpose of each group and how to add the right members to these groups is more complex and more prone to accidental grant of access. Therefore, you should limit the number of nested groups or don't use them at all if possible.|
+|**Ensure resource owner and group owner are the same principal**. Applications should build their own group management experience and create new groups to manage access. For example, an application can create groups with the `Group.Create` permission and add itself as the owner of the group (a sketch follows this table). This way the application has control over its groups without being overprivileged to modify other groups in the tenant.|When group owners and resource owners are different entities, group owners can add users to the group who aren't supposed to access the resource but can then access it unintentionally.|
+|**Build an implicit contract between the resource owner and group owner**. The resource owner and the group owner should align on the group purpose, policies, and members that can be added to the group to get access to the resource. This level of trust is non-technical and relies on human or business contract.|When group owners and resource owners have different intentions, the group owner may add users to the group the resource owner didn't intend on giving access to. This action can result in unnecessary and potentially risky access.|
+|**Use private groups for access control**. Microsoft 365 groups are managed by the [visibility concept](/graph/api/resources/group?view=graph-rest-1.0#group-visibility-options&preserve-view=true). This property controls the join policy of the group and visibility of group resources. Security groups have join policies that either allow anyone to join or require owner approval. On-premises-synced groups can also be public or private. Users joining an on-premises-synced group can get access to the cloud resource as well.|When you use a public group for access control, any member can join the group and get access to the resource. The risk of elevation of privilege exists when a public group is used to give access to an external resource.|
+|**Group nesting**. When you use a group for access control and it has other groups as its members, members of the subgroups can get access to the resource. In this case, there are multiple group owners of the parent group and the subgroups.|Aligning with multiple group owners on the purpose of each group and how to add the right members to these groups is more complex and more prone to accidental grant of access. Limit the number of nested groups or don't use them at all if possible.|
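As a minimal sketch of the first row's pattern, the following Microsoft Graph PowerShell commands create a dedicated security group and add a service principal as its owner. The group name, mail nickname, and `$appServicePrincipalId` value are hypothetical placeholders, and the exact scopes and parameter shapes may vary by SDK version.

```powershell
# Minimal sketch; assumes the Microsoft Graph PowerShell SDK is installed.
# All names and IDs below are placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# Create a security group dedicated to a single resource.
$group = New-MgGroup -DisplayName "Contoso Invoice App Users" `
    -MailEnabled:$false -MailNickname "contosoinvoiceappusers" -SecurityEnabled `
    -Description "Grants access to the Contoso invoice application only."

# Add the application's service principal as an owner of the group it manages.
$appServicePrincipalId = "00000000-0000-0000-0000-000000000000"  # placeholder
New-MgGroupOwnerByRef -GroupId $group.Id -BodyParameter @{
    "@odata.id" = "https://graph.microsoft.com/v1.0/servicePrincipals/$appServicePrincipalId"
}
```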
## Next steps
-For more information about groups in Azure AD, see the following:
- [Manage app and resource access using Azure Active Directory groups](../fundamentals/active-directory-manage-groups.md)
-- [Access with Azure Active Directory groups](/azure/devops/organizations/accounts/manage-azure-active-directory-groups)
- [Restrict your Azure AD app to a set of users in an Azure AD tenant](./howto-restrict-your-app-to-a-set-of-users.md)
active-directory Secure Least Privileged Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-least-privileged-access.md
Title: "Increase app security with the principle of least privilege"
-description: Learn how the principle of least privilege can help increase the security of your application, its data, and which features of the Microsoft identity platform you can use to implement least privileged access.
+ Title: "Increase application security with the principle of least privilege"
+description: Learn how the principle of least privilege can help increase the security of an application and its data.
-+ Previously updated : 09/09/2021 Last updated : 06/16/2022 -+
-# Customer intent: As a developer, I want to learn about the principle of least privilege and the features of the Microsoft identity platform that I can use to ensure my application and its users are restricted to actions and have access to only the data they need perform their tasks.
+# Customer intent: As a developer, I want to learn about the principle of least privilege and the features of the Microsoft identity platform that I can use to make sure my application and its users are restricted to actions and have access to only the data they need to perform their tasks.
# Enhance security with the principle of least privilege
-The information security principle of least privilege asserts that users and applications should be granted access only to the data and operations they require to perform their jobs.
-
-Follow the guidance here to help reduce your application's attack surface and the impact of a security breach (the *blast radius*) should one occur in your Microsoft identity platform-integrated application.
+The information security principle of least privilege asserts that users and applications should be granted access only to the data and operations they require to perform their jobs. Follow the guidance here to help reduce the attack surface of an application and the impact of a security breach (the *blast radius*) should one occur in a Microsoft identity platform-integrated application.
## Recommendations at a glance

- Prevent **overprivileged** applications by revoking *unused* and *reducible* permissions.
-- Use the identity platform's **consent** framework to require that a human consents to the app's request to access protected data.
+- Use the identity platform's **consent** framework to require that a human consent to the request from the application to access protected data.
- **Build** applications with least privilege in mind during all stages of development.
-- **Audit** your deployed applications periodically to identify overprivileged apps.
+- **Audit** the deployed applications periodically to identify the ones that are overprivileged.
-## What's an *overprivileged* application?
+## Overprivileged applications
-Any application that's been granted an **unused** or **reducible** permission is considered "overprivileged." Unused and reducible permissions have the potential to provide unauthorized or unintended access to data or operations not required by the app or its users to perform their jobs.
+Any application that's been granted an **unused** or **reducible** permission is considered overprivileged. Unused and reducible permissions have the potential to provide unauthorized or unintended access to data or operations not required by the application or its users to perform their jobs. Avoid security risks posed by unused and reducible permissions by granting only the appropriate permissions. The appropriate permissions are the ones with the least-permissive access required by an application or user to perform their required tasks.
- :::column span="":::
- ### Unused permissions
+### Unused permissions
- An unused permission is a permission that's been granted to an application but whose API or operation exposed by that permission isn't called by the app when used as intended.
+An unused permission is a permission that's been granted to an application but whose API or operation exposed by that permission isn't called by the application when used as intended.
- - **Example**: An application displays a list of files stored in the signed-in user's OneDrive by calling the Microsoft Graph API and leveraging the [Files.Read](/graph/permissions-reference) permission. However, the app has also been granted the [Calendars.Read](/graph/permissions-reference#calendars-permissions) permission, yet it provides no calendar features and doesn't call the Calendars API.
+- **Example**: An application displays a list of files stored in the signed-in user's OneDrive by calling the Microsoft Graph API using the [Files.Read](/graph/permissions-reference) permission. However, the application has also been granted the [Calendars.Read](/graph/permissions-reference#calendars-permissions) permission, yet it provides no calendar features and doesn't call the Calendars API.
- - **Security risk**: Unused permissions pose a *horizontal privilege escalation* security risk. An entity that exploits a security vulnerability in your application could use an unused permission to gain access to an API or operation not normally supported or allowed by the application when it's used as intended.
+- **Security risk**: Unused permissions pose a *horizontal privilege escalation* security risk. An entity that exploits a security vulnerability in the application could use an unused permission to gain access to an API or operation not normally supported or allowed by the application when it's used as intended.
- - **Mitigation**: Remove any permission that isn't used in API calls made by your application.
- :::column-end:::
- :::column span="":::
- ### Reducible permissions
+- **Mitigation**: Remove any permission that isn't used in API calls made by the application.
- A reducible permission is a permission that has a lower-privileged counterpart that would still provide the application and its users the access they need to perform their required tasks.
+### Reducible permissions
- - **Example**: An application displays the signed-in user's profile information by calling the Microsoft Graph API, but doesn't support profile editing. However, the app has been granted the [User.ReadWrite.All](/graph/permissions-reference#user-permissions) permission. The *User.ReadWrite.All* permission is considered reducible here because the less permissive *User.Read.All* permission grants sufficient read-only access to user profile data.
+A reducible permission is a permission that has a lower-privileged counterpart that would still provide the application and its users the access they need to perform their required tasks.
- - **Security risk**: Reducible permissions pose a *vertical privilege escalation* security risk. An entity that exploits a security vulnerability in your application could use the reducible permission for unauthorized access to data or to perform operations not normally allowed by that entity's role.
+- **Example**: An application displays the signed-in user's profile information by calling the Microsoft Graph API, but doesn't support profile editing. However, the application has been granted the [User.ReadWrite.All](/graph/permissions-reference#user-permissions) permission. The *User.ReadWrite.All* permission is considered reducible here because the less permissive *User.Read.All* permission grants sufficient read-only access to user profile data.
- - **Mitigation**: Replace each reducible permission in your application with its least-permissive counterpart still enabling the application's intended functionality.
- :::column-end:::
+- **Security risk**: Reducible permissions pose a *vertical privilege escalation* security risk. An entity that exploits a security vulnerability in the application could use the reducible permission for unauthorized access to data or to perform operations not normally allowed by the entity's role.
-Avoid security risks posed by unused and reducible permissions by granting *just enough* permission: the permission with the least-permissive access required by an application or user to perform their required tasks.
+- **Mitigation**: Replace each reducible permission in the application with its least-permissive counterpart still enabling the intended functionality of the application.
## Use consent to control access to data
-Most applications you build will require access to protected data, and the owner of that data needs to [consent](application-consent-experience.md#consent-and-permissions) that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves who can grant access
-
-Whenever an application that runs in your user's device requests access to protected data, the app should ask for the user's consent before granting access to the protected data. The end user is required to grant (or deny) consent for the requested permission before the application can progress.
+Most applications require access to protected data, and the owner of that data needs to [consent](application-consent-experience.md#consent-and-permissions) to that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves who can grant access.
+Whenever an application that runs on a device requests access to protected data, the application should ask for the consent of the user before granting access to the protected data. The user is required to grant (or deny) consent for the requested permission before the application can progress.
-## Least privilege during app development
+## Least privilege during application development
-As a developer building an application, consider the security of your app and its users' data to be *your* responsibility.
+The security of an application and the user data that it accesses is the responsibility of the developer.
-Adhere to these guidelines during application development to help avoid building an overprivileged app:
+Adhere to these guidelines during application development to help avoid building an overprivileged application:
-- Fully understand the permissions required for the API calls that your application needs to make.
-- Understand the least privileged permission for each API call that your app needs to make using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+- Fully understand the permissions required for the API calls that the application needs to make.
+- Understand the least privileged permission for each API call that the application needs to make using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). A hedged command-line alternative is sketched after this list.
- Find the corresponding [permissions](/graph/permissions-reference) from least to most privileged.
-- Remove any duplicate sets of permissions in cases where your app makes API calls that have overlapping permissions.
-- Apply only the least privileged set of permissions to your application by choosing the least privileged permission in the permission list.
+- Remove any duplicate sets of permissions in cases where the application makes API calls that have overlapping permissions.
+- Apply only the least privileged set of permissions to the application by choosing the least privileged permission in the permission list.
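To make the Graph Explorer step concrete from the command line, here's a hedged sketch that uses the Microsoft Graph PowerShell SDK's `Find-MgGraphPermission` helper to compare candidate permissions. The search string is an arbitrary example, not a recommendation.

```powershell
# Hedged sketch; assumes the Microsoft.Graph.Authentication module is installed.
# List delegated Microsoft Graph permissions whose names mention "user",
# then compare them to pick the least privileged one that fits the API call.
Find-MgGraphPermission user -PermissionType Delegated |
    Sort-Object Name |
    Select-Object Name, Description
```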
-## Least privilege for deployed apps
+## Least privilege for deployed applications
-Organizations often hesitate to modify running applications to avoid impacting their normal business operations. However, your organization should consider mitigating the risk of a security incident made possible or more severe by your app's overprivileged permissions to be worthy of a scheduled application update.
+Organizations often hesitate to modify running applications to avoid impacting their normal business operations. However, an organization should consider that mitigating the risk of a security incident, made possible or more severe by overprivileged permissions, is worth a scheduled application update.
-Make these standard practices in your organization to help ensure your deployed apps aren't overprivileged and don't become overprivileged over time:
+Make these standard practices in an organization to help make sure that deployed applications aren't overprivileged and don't become overprivileged over time:
-- Evaluate the API calls being made from your applications.
+- Evaluate the API calls being made from the applications.
- Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) and the [Microsoft Graph](/graph/overview) documentation for the required and least privileged permissions.
- Audit privileges that are granted to users or applications. A hedged audit sketch follows this list.
-- Update your applications with the least privileged permission set.
-- Conduct permissions reviews regularly to make sure all authorized permissions are still relevant.
+- Update the applications with the least privileged permission set.
+- Review permissions regularly to make sure all authorized permissions are still relevant.
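The following Graph PowerShell sketch supports the audit step: it lists the delegated grants and application role assignments held by one service principal. The display name is a placeholder, and the scopes shown are one plausible choice rather than a fixed requirement.

```powershell
# Hedged audit sketch; the display name below is a placeholder.
Connect-MgGraph -Scopes "Application.Read.All", "Directory.Read.All"

$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso Invoice App'"

# Delegated (OAuth2) permission grants held by the service principal.
Get-MgServicePrincipalOauth2PermissionGrant -ServicePrincipalId $sp.Id |
    Select-Object ConsentType, Scope

# Application permissions (app role assignments) held by the service principal.
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $sp.Id |
    Select-Object AppRoleId, ResourceDisplayName
```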
## Next steps
-**Protected resource access and consent**
-
-For more information about configuring access to protected resources and the user experience of providing consent to access those protected resources, see the following articles:
- [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
- [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md)
-
-**Zero Trust**

- Consider employing the least-privilege measures described here as part of your organization's proactive [Zero Trust security strategy](/security/zero-trust/).
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
Title: Best practices for Azure AD application registration configuration
-description: Learn about a set of best practices and general guidance on Azure AD application registration configuration.
+ Title: Security best practices for application properties
+description: Learn about the best practices and general guidance for security related application properties in Azure Active Directory.
-+ Previously updated : 07/8/2021 Last updated : 06/17/2022 -+
-# Azure AD application registration security best practices
+# Security best practices for application properties in Azure Active Directory
-An Azure Active Directory (Azure AD) application registration is a critical part of your business application. Any misconfiguration or lapse in hygiene of your application can result in downtime or compromise.
+Security is an important concept when registering an application in Azure Active Directory (Azure AD) and is a critical part of its business use in the organization. Any misconfiguration of an application can result in downtime or compromise. Depending on the permissions added to an application, there can be organization-wide effects.
-It's important to understand that your application registration has a wider impact than the business application because of its surface area. Depending on the permissions added to your application, a compromised app can have an organization-wide effect.
-Since an application registration is essential to getting your users logged in, any downtime to it can affect your business or some critical service that your business depends upon. So, it's important to allocate time and resources to ensure your application registration stays in a healthy state always. We recommend that you conduct a periodical security and health assessment of your applications much like a Security Threat Model assessment for your code. For a broader perspective on security for organizations, check the [security development lifecycle](https://www.microsoft.com/securityengineering/sdl) (SDL).
+Because secure applications are essential to the organization, any downtime to them because of security issues can affect the business or some critical service that the business depends upon. So, it's important to allocate time and resources to ensure that applications always stay in a healthy and secure state. Conduct a periodic security and health assessment of applications, much like a Security Threat Model assessment for code. For a broader perspective on security for organizations, see the [security development lifecycle](https://www.microsoft.com/securityengineering/sdl) (SDL).
-This article describes security best practices for the following application registration properties.
+This article describes security best practices for the following application properties:
- Redirect URI
-- Implicit grant flow for access token
-- Credentials
-- AppId URI
+- Access tokens (used for implicit flows)
+- Certificates and secrets
+- Application ID URI
- Application ownership
-- Checklist
-## Redirect URI configuration
+## Redirect URI
-It's important to keep Redirect URIs of your application up to date. A lapse in the ownership of one of the redirect URIs can lead to an application compromise. Ensure that all DNS records are updated and monitored periodically for changes. Along with maintaining ownership of all URIs, don't use wildcard reply URLs or insecure URI schemes such as http, or URN.
+It's important to keep Redirect URIs of your application up to date. Under **Authentication** for the application in the Azure portal, a platform must be selected for the application and then the **Redirect URI** property can be defined.
-![redirect URI](media/active-directory-application-registration-best-practices/redirect-uri.png)
-### Redirect URI summary
+Consider the following guidance for redirect URIs:
-| Do | Don't |
-| - | -- |
-| Maintain ownership of all URIs | Use wildcards |
-| Keep DNS up to date | Use URN scheme |
-| Keep the list small | -- |
-| Trim any unnecessary URIs | -- |
-| Update URLs from Http to Https scheme | -- |
+- Maintain ownership of all URIs. A lapse in the ownership of one of the redirect URIs can lead to an application compromise.
+- Make sure that all DNS records are updated and monitored periodically for changes.
+- Don't use wildcard reply URLs or insecure URI schemes such as http or URN.
+- Keep the list small. Trim any unnecessary URIs. If possible, update URLs from Http to Https. A hedged audit sketch follows this list.
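One way to act on this guidance at scale is a periodic script. The following Graph PowerShell sketch flags applications with `http://` redirect URIs or unusually long URI lists; the threshold of 10 is an arbitrary assumption, not a documented limit.

```powershell
# Hedged sketch: flag insecure or oversized redirect URI lists.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgApplication -All | ForEach-Object {
    $insecure = $_.Web.RedirectUris | Where-Object { $_ -like "http://*" }
    # The count threshold below is an arbitrary example, not a documented limit.
    if ($insecure -or $_.Web.RedirectUris.Count -gt 10) {
        [pscustomobject]@{
            App          = $_.DisplayName
            InsecureUris = $insecure -join "; "
            UriCount     = $_.Web.RedirectUris.Count
        }
    }
}
```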
-## Implicit flow token configuration
+## Access tokens (used for implicit flows)
-Scenarios that required **implicit flow** can now use **Auth code flow** to reduce the risk of compromise associated with implicit grant flow misuse. If you configured your application registration to get Access tokens using implicit flow, but don't actively use it, we recommend you turn off the setting to protect from misuse.
+Scenarios that required **implicit flow** can now use **Auth code flow** to reduce the risk of compromise associated with implicit flow misuse. Under **Authentication** for the application in the Azure portal, a platform must be selected for the application and then the **Access tokens (used for implicit flows)** property can be set.
-![access tokens used for implicit flows](media/active-directory-application-registration-best-practices/implict-grant-flow.png)
-### Implicit grant flow summary
+Consider the following guidance related to implicit flow:
-| Do | Don't |
-| | - |
-| Understand if [implicit flow is required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant) | Use implicit flow unless [explicitly required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant) |
-| Separate app registration for (valid) implicit flow scenarios | -- |
-| Turn off unused implicit flow | -- |
+- Understand if [implicit flow is required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant). Don't use implicit flow unless [explicitly required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant).
+- If the application was configured to receive access tokens using implicit flow, but doesn't actively use them, turn off the setting to protect from misuse. A hedged sketch for turning it off programmatically follows this list.
+- Use separate applications for valid implicit flow scenarios.
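Besides the portal toggle, the setting can be changed programmatically. Here's a hedged Graph PowerShell sketch that turns off access token issuance through the implicit flow for one application; `$appObjectId` is a placeholder for the application's object ID.

```powershell
# Hedged sketch: disable implicit-flow access tokens for one application.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$appObjectId = "00000000-0000-0000-0000-000000000000"  # placeholder object ID
Update-MgApplication -ApplicationId $appObjectId -Web @{
    ImplicitGrantSettings = @{ EnableAccessTokenIssuance = $false }
}
```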
-## Credential configuration
+## Certificates and secrets
-Credentials are a vital part of an application registration when your application is used as a confidential client. If your app registration is used only as a Public Client App (allows users to sign in using a public endpoint), ensure that you don't have any credentials on your application object. Review the credentials used in your applications for freshness of use and their expiration. An unused credential on an application can result in security breach.
-While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application. Monitor your production pipelines to ensure credentials of any kind are never committed into code repositories. If using Azure, we strongly recommend using Managed Identity so application credentials are automatically managed. Refer to the [managed identities documentation](../managed-identities-azure-resources/overview.md) for more details. [Credential Scanner](../../security/develop/security-code-analysis-overview.md#credential-scanner) is a static analysis tool that you can use to detect credentials (and other sensitive content) in your source code and build output.
+Certificates and secrets, also known as credentials, are a vital part of an application when it's used as a confidential client. Under **Certificates and secrets** for the application in the Azure portal, certificates and secrets can be added or removed.
-![certificates and secrets on Azure portal](media/active-directory-application-registration-best-practices/credentials.png)
-| Do | Don't |
-| - | |
-| Use [certificate credentials](./active-directory-certificate-credentials.md) | Use Password credentials |
-| Use Key Vault with [Managed identities](../managed-identities-azure-resources/overview.md) | Share credentials across apps |
-| Rollover frequently | Have many credentials on one app |
-| -- | Leave stale credentials available |
-| -- | Commit credentials in code |
+Consider the following guidance related to certificates and secrets:
-## AppId URI configuration
+- Always use [certificate credentials](./active-directory-certificate-credentials.md) whenever possible and don't use password credentials, also known as *secrets*. While it's convenient to use password secrets as a credential, when possible use x509 certificates as the only credential type for getting tokens for an application.
+- Use Key Vault with [Managed identities](../managed-identities-azure-resources/overview.md) to manage credentials for an application.
+- If an application is used only as a Public Client App (allows users to sign in using a public endpoint), make sure that there are no credentials specified on the application object.
+- Review the credentials used in applications for freshness of use and their expiration. An unused credential on an application can result in a security breach. Roll over credentials frequently and don't share credentials across applications. Don't keep many credentials on one application. A hedged expiry-audit sketch follows this list.
+- Monitor your production pipelines to prevent credentials of any kind from being committed into code repositories.
+- [Credential Scanner](../../security/develop/security-code-analysis-overview.md#credential-scanner) is a static analysis tool that can be used to detect credentials (and other sensitive content) in source code and build output.
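To support the freshness review, here's a hedged Graph PowerShell sketch that lists password secrets already expired or expiring soon; the 30-day window is an arbitrary example.

```powershell
# Hedged sketch: find password secrets expiring within an example 30-day window.
Connect-MgGraph -Scopes "Application.Read.All"

$cutoff = (Get-Date).AddDays(30)  # arbitrary review window
Get-MgApplication -All | ForEach-Object {
    foreach ($secret in $_.PasswordCredentials) {
        if ($secret.EndDateTime -lt $cutoff) {
            [pscustomobject]@{
                App     = $_.DisplayName
                KeyId   = $secret.KeyId
                Expires = $secret.EndDateTime
            }
        }
    }
}
```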
-Certain applications can expose resources (via WebAPI) and as such need to define an AppId URI that uniquely identifies the resource in a tenant. We recommend using either of the following URI schemes: api or https, and set the AppId URI in the following formats to avoid URI collisions in your organization.
-The AppId URI acts as the prefix for the scopes referenced in the API's code, and it must use a verified customer owned domain. For multi-tenant applications the value must also be globally unique.
+## Application ID URI
-
-![Application ID URI](media/active-directory-application-registration-best-practices/app-id-uri.png)
+The **Application ID URI** property of the application specifies the globally unique URI used to identify the web API. It's the prefix for scopes; in access tokens, it's also the value of the audience claim. The URI must use a verified, customer-owned domain, and for multi-tenant applications the value must also be globally unique. The Application ID URI is also referred to as an identifier URI. Under **Expose an API** for the application in the Azure portal, the **Application ID URI** property can be defined.
-### AppId URI summary
-| Do | Don't |
-| -- | - |
-| Avoid collisions by using valid URI formats. | Use wildcard AppId URI |
-| Use verified domain in Line of Business (LoB) apps | Malformed URI |
-| Inventory your AppId URIs | -- |
-| Use AppId URI to expose WebApi in your organization| Use AppId URI to identify the application, instead use the appId property|
-
-## App ownership configuration
+Consider the following guidance related to defining the Application ID URI:
-Ensure app ownership is kept to a minimal set of people within the organization. It's recommended to run through the owners list once every few months to ensure owners are still part of the organization and their charter accounts for ownership of the application registration. Check out [Azure AD access reviews](../governance/access-reviews-overview.md) for more details.
+- The api or https URI schemes are recommended. Set the property in the supported formats to avoid URI collisions in your organization. Don't use wildcards.
+- Use a verified domain in Line of Business (LoB) applications.
+- Keep an inventory of the URIs in your organization to help maintain security. A hedged inventory sketch follows this list.
+- Use the Application ID URI to expose the WebApi in the organization. Don't use the Application ID URI to identify the application; instead, use the Application (client) ID property.
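For the inventory point, here's a hedged Graph PowerShell sketch that lists every Application ID URI defined in the tenant.

```powershell
# Hedged sketch: inventory Application ID URIs (identifier URIs) in the tenant.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgApplication -All |
    Where-Object { $_.IdentifierUris.Count -gt 0 } |
    Select-Object DisplayName, AppId,
        @{ Name = "IdentifierUris"; Expression = { $_.IdentifierUris -join "; " } }
```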
-![users provisioning service - owners](media/active-directory-application-registration-best-practices/app-ownership.png)
-### App ownership summary
+## App ownership configuration
-| Do | Don't |
-| - | -- |
-| Keep it small | -- |
-| Monitor owners list | -- |
+Owners can manage all aspects of a registered application. It's important to regularly review the ownership of all applications in the organization. For more information, see [Azure AD access reviews](../governance/access-reviews-overview.md). Under **Owners** for the application in the Azure portal, the owners of the application can be managed.
-## Checklist
-App developers can use the _Checklist_ available in Azure portal to ensure their app registration meets a high quality bar and provides guidance to integrate securely. The integration assistant highlights best practices and recommendation that help avoid common oversights when integrating with Microsoft identity platform.
+Consider the following guidance related to specifying application owners:
-![Integration assistant checklist on Azure portal](media/active-directory-application-registration-best-practices/checklist.png)
+- Application ownership should be kept to a minimal set of people within the organization.
+- An administrator should review the owners list once every few months to make sure that owners are still part of the organization and should still own an application. A hedged review sketch follows this list.
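A hedged Graph PowerShell sketch for the periodic review: it lists the owners of one registered application. `$appObjectId` is a placeholder, and the scope shown is one plausible choice.

```powershell
# Hedged sketch: list the owners of one registered application.
Connect-MgGraph -Scopes "Application.Read.All"

$appObjectId = "00000000-0000-0000-0000-000000000000"  # placeholder object ID
Get-MgApplicationOwner -ApplicationId $appObjectId |
    ForEach-Object { $_.AdditionalProperties.displayName }
```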
-### Checklist summary
+## Integration assistant
-| Do | Don't |
-| -- | -- |
-| Use checklist to get scenario-based recommendation | -- |
-| Deep link into app registration blades | -- |
+The **Integration assistant** in the Azure portal can be used to make sure that an application meets a high quality bar and to provide secure integration. The integration assistant highlights best practices and recommendations that help avoid common oversights when integrating with the Microsoft identity platform.
## Next steps
-For more information on Auth code flow, see the [OAuth 2.0 authorization code flow](./v2-oauth2-auth-code-flow.md).
+
+- For more information about the Auth code flow, see the [OAuth 2.0 authorization code flow](./v2-oauth2-auth-code-flow.md).
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Make sure you have [Node.js](https://nodejs.org/en/download/) installed, and the
// Set the front-end folder to serve public assets.
app.use(express.static('JavaScriptSPA'))
- // Set up a route for index.html.
app.get('*', function (req, res) {
- res.sendFile(path.join(__dirname + '/index.html'));
+ res.sendFile(path.join(__dirname + '/JavaScriptSPA/index.html'));
});

// Start the server.
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password
> * If users need to use [multi-factor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to log in to the application, they will be blocked instead.
> * ROPC is not supported in [hybrid identity federation](../hybrid/whatis-fed.md) scenarios (for example, Azure AD and ADFS used to authenticate on-premises accounts). If users are full-page redirected to an on-premises identity provider, Azure AD is not able to test the username and password against that identity provider. [Pass-through authentication](../hybrid/how-to-connect-pta.md) is supported with ROPC, however.
> * An exception to a hybrid identity federation scenario would be the following: Home Realm Discovery policy with AllowCloudPasswordValidation set to TRUE will enable ROPC flow to work for federated users when the on-premises password is synced to the cloud. For more information, see [Enable direct ROPC authentication of federated users for legacy applications](../manage-apps/home-realm-discovery-policy.md#enable-direct-ropc-authentication-of-federated-users-for-legacy-applications).
+> * Passwords with leading or trailing whitespaces are not supported by the ROPC flow.
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)]
active-directory Zero Trust For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/zero-trust-for-developers.md
Title: "Increase app security by following Zero Trust principles"
-description: Learn how following the Zero Trust principles can help increase the security of your application, its data, and which features of the Microsoft identity platform you can use to build Zero Trust-ready apps.
+ Title: "Increase application security using Zero Trust principles"
+description: Learn how using Zero Trust principles can help increase the security of your application and its data.
-+ Previously updated : 12/02/2021 Last updated : 06/16/2022 -+ # Customer intent: As a developer, I want to learn about the Zero Trust principles and the features of the Microsoft identity platform that I can use to build applications that are Zero Trust-ready.
-# Build Zero Trust-ready apps using Microsoft identity platform features and tools
+# Increase application security using Zero Trust principles
-You can no longer assume a secure network perimeter around the applications you build. Nearly every app you build will, by design, be accessed from outside the network perimeter. You also can't guarantee every app you build is secure or will remain so after it's deployed.
+A secure network perimeter around the applications that are developed can't be assumed. Nearly every developed application, by design, will be accessed from outside the network perimeter. Applications can't be guaranteed to be secure when they're developed or will remain so after they're deployed. It's the responsibility of the application developer to not only maximize the security of the application, but also minimize the damage the application can cause if it's compromised.
-Knowing this as the app developer, it's your responsibility to not only maximize your app's security, but also minimize the damage your app can cause if it's compromised.
-
-Additionally, you are responsible for supporting the evolving needs of your customers and users, who will expect that your application meets their Zero Trust security requirements.
-
-By learning the principles of the [Zero Trust model](https://www.microsoft.com/security/business/zero-trust?rtc=1) and adopting their practices, you can:
-- Build more secure apps
-- Minimize the damage your apps could cause if there is a breach
-
-## Zero Trust principles
+Additionally, the responsibility includes supporting the evolving needs of the customers and users, who expect that the application meets Zero Trust security requirements. Learn the principles of the [Zero Trust model](https://www.microsoft.com/security/business/zero-trust?rtc=1) and adopt the practices. By learning and adopting the principles, applications can be developed that are more secure and that minimize the damage they could cause if there's a break in security.
The Zero Trust model prescribes a culture of explicit verification rather than implicit trust. The model is anchored on three key [guiding principles](/security/zero-trust/#guiding-principles-of-zero-trust):
+
- Verify explicitly
- Use least privileged access
- Assume breach
-## Best practices for building Zero Trust-ready apps with the Microsoft identity platform
+## Zero Trust best practices
-Follow these best practices to build Zero Trust-ready apps with the [Microsoft identity platform](./v2-overview.md) and its tools.
+Follow these best practices to build Zero Trust-ready applications with the [Microsoft identity platform](./v2-overview.md) and its tools.
### Verify explicitly
-The Microsoft identity platform offers authentication mechanisms for verifying the identity of the person or service accessing a resource. Apply the best practices described below to ensure that you *verify explicitly* any entities that need to access data or resources.
+The Microsoft identity platform offers authentication mechanisms for verifying the identity of the person or service accessing a resource. Apply the best practices described below to *verify explicitly* any entities that need to access data or resources.
-|Best practice |Benefits to app security |
-|-||
-|Use the [Microsoft Authentication Libraries](./reference-v2-libraries.md) (MSAL).|MSAL is a set of Microsoft's authentication libraries for developers. With MSAL, you can authenticate users and applications, and acquire tokens to access corporate resources using just a few lines of code. MSAL uses modern protocols ([OpenID Connect and OAuth 2.0](./active-directory-v2-protocols.md)) that remove the need for apps to ever handle a user's credentials directly. This vastly improves the security for both users and applications as the identity provider becomes the security perimeter. Also, these protocols continuously evolve to address new paradigms, opportunities, and challenges in identity security.|
-|Adopt enhanced security extensions like [Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) and Conditional Access authentication context when appropriate.|In Azure AD, some of the most used extensions include [Conditional Access](../conditional-access/overview.md), [Conditional Access authentication context](./developer-guide-conditional-access-authentication-context.md) and CAE. Applications that use enhanced security features like CAE and Conditional Access authentication context must be coded to handle claims challenges. Open protocols enable you to use the [claims challenges and claims requests](./claims-challenge.md) to invoke extra client capabilities. This might be to indicate to apps that they need to re-interact with Azure AD, like if there was an anomaly or if the user no longer satisfies the conditions under which they authenticated earlier. As a developer you can code for these extensions without disturbing their primary code flows for authentication.|
-|Use the correct **authentication flow** by [app type](./v2-app-types.md) for authentication. For web applications, you should always aim to use [confidential client flows](./authentication-flows-app-scenarios.md#single-page-public-client-and-confidential-client-applications). For mobile applications, you should use [brokers](./msal-android-single-sign-on.md#sso-through-brokered-authentication) or the [system browser](./msal-android-single-sign-on.md#sso-through-system-browser) for authentication. |The flows for web applications that can hold a secret (confidential clients) are considered more secure than public clients (for example: Desktop and Console apps). When you use the system web browser to authenticate your mobile apps, you get a secure [single sign-on](../manage-apps/what-is-single-sign-on.md) (SSO) experience that supports app protection policies.|
+| Best practice | Benefits to application security |
+| - | -- |
+| Use the [Microsoft Authentication Libraries](./reference-v2-libraries.md) (MSAL). | MSAL is a set of Microsoft Authentication Libraries for developers. With MSAL, users and applications can be authenticated, and tokens can be acquired to access corporate resources using just a few lines of code. MSAL uses modern protocols ([OpenID Connect and OAuth 2.0](./active-directory-v2-protocols.md)) that remove the need for applications to ever handle a user's credentials directly. This handling of credentials vastly improves the security for both users and applications as the identity provider becomes the security perimeter. Also, these protocols continuously evolve to address new paradigms, opportunities, and challenges in identity security. |
+| Adopt enhanced security extensions like [Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) and Conditional Access authentication context when appropriate. | In Azure AD, some of the most used extensions include [Conditional Access](../conditional-access/overview.md) (CA), [Conditional Access authentication context](./developer-guide-conditional-access-authentication-context.md) and CAE. Applications that use enhanced security features like CAE and Conditional Access authentication context must be coded to handle claims challenges. Open protocols enable the [claims challenges and claims requests](./claims-challenge.md) to be used to invoke extra client capabilities. The capabilities might be to continue interaction with Azure AD, such as when there was an anomaly or if the user authentication conditions change. These extensions can be coded into an application without disturbing the primary code flows for authentication. |
+| Use the correct **authentication flow** by [application type](./v2-app-types.md). For web applications, always try to use [confidential client flows](./authentication-flows-app-scenarios.md#single-page-public-client-and-confidential-client-applications). For mobile applications, try to use [brokers](./msal-android-single-sign-on.md#sso-through-brokered-authentication) or the [system browser](./msal-android-single-sign-on.md#sso-through-system-browser) for authentication. | The flows for web applications that can hold a secret (confidential clients) are considered more secure than public clients (for example: Desktop and Console applications). When the system web browser is used to authenticate a mobile application, a secure [Single Sign-On](../manage-apps/what-is-single-sign-on.md) (SSO) experience enables the use of application protection policies. |
### Use least privileged access
-Using the Microsoft identity platform, you can grant permissions (scopes) and verify that a caller has been granted proper permission before allowing access. You can enforce least privileged access in your apps by enabling fine-grained permissions that allow you to grant the smallest amount of access necessary. Follow the practices described below to ensure you adhere to the [principle of least privilege](./secure-least-privileged-access.md).
-
-| Do | Don't |
-| - | -- |
-| Evaluate the permissions you request to ensure that you seek the absolute least privileged set to get the job done. | Create "catch-all" permissions with access to the entire API surface. |
-| When designing APIs, provide granular permissions to allow least-privileged access. Start with dividing the functionality and data access into sections that can be controlled via [scopes](./v2-permissions-and-consent.md#scopes-and-permissions) and [App roles](./howto-add-app-roles-in-azure-ad-apps.md). | Add your APIs to existing permissions in a way that changes the semantics of the permission. |
-| Offer **read-only** permissions. "*Write* access", includes privileges for create, update, and delete operations. A client should never require write access to only read data | -- |
-| Offer both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Skipping application permissions can create hard requirement for your clients to achieve common scenarios like automation, microservices and more. | -- |
-| Consider "standard" and "full" access permissions if working with sensitive data. Restrict the sensitive properties so that they cannot be accessed using a "standard" access permission, for example *Resource.Read*. And then implement a "full" access permission, for example *Resource.ReadFull*, that returns all available properties including sensitive information.|-- |
+A developer uses the Microsoft identity platform to grant permissions (scopes) and verify that a caller has been granted proper permission before allowing access. Enforce least privileged access in applications by enabling fine-grained permissions that allow the smallest amount of access necessary to be granted. Consider the following practices to make sure of adherence to the [principle of least privilege](./secure-least-privileged-access.md):
+- Evaluate the permissions that are requested to make sure that the absolute least privileged set is requested to get the job done. Don't create "catch-all" permissions with access to the entire API surface.
+- When designing APIs, provide granular permissions to allow least-privileged access. Start with dividing the functionality and data access into sections that can be controlled by using [scopes](./v2-permissions-and-consent.md#scopes-and-permissions) and [App roles](./howto-add-app-roles-in-azure-ad-apps.md). Don't add APIs to existing permissions in a way that changes the semantics of the permission.
+- Offer **read-only** permissions. `Write` access includes privileges for create, update, and delete operations. A client should never require write access to only read data.
+- Offer both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Skipping application permissions can create a hard requirement for clients to achieve common scenarios like automation, microservices, and more.
+- Consider "standard" and "full" access permissions if working with sensitive data. Restrict the sensitive properties so that they can't be accessed using a "standard" access permission, for example `Resource.Read`. And then implement a "full" access permission, for example `Resource.ReadFull`, that returns all available properties including sensitive information.
### Assume breach
-The Microsoft identity platform app registration portal is the primary entry point for applications intending to use the platform for their authentication and associated needs. When registering and configuring your apps, follow the practices described below to minimize the damage your apps could cause if there is a security breach. For more guidance, check [Azure AD application registration security best practices](./security-best-practices-for-app-registration.md).
-
+The Microsoft identity platform application registration portal is the primary entry point for applications intending to use the platform for their authentication and associated needs. When registering and configuring applications, follow the practices described below to minimize the damage they could cause if there's a security breach. For more information, see [Azure AD application registration security best practices](./security-best-practices-for-app-registration.md).
-| Do | Don't |
-| - | -- |
-| Properly define your redirect URLs | Use the same app registration for multiple apps |
-| Check redirect URIs used in your app registration for ownership and to avoid domain takeovers | Create your application as a multi-tenant unless you really intended to|
-| Ensure app and service principal owners are always defined and maintained for your apps registered in your tenant | -- |
+Consider the following actions to prevent security breaches:
+- Properly define the redirect URIs for the application. Don't use the same application registration for multiple applications.
+- Verify redirect URIs used in the application registration for ownership and to avoid domain takeovers. Don't create the application as a multi-tenant unless it's intended to be.
+- Make sure application and service principal owners are always defined and maintained for the applications registered in the tenant.
## Next steps
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
You can enforce Conditional Access policies such as require multi-factor authent
> [!NOTE] > Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Az CLI running on Windows and macOS. It is not supported when using Az CLI on Linux or Azure Cloud Shell.
+### Missing application
+
+If the Azure Linux VM Sign-In application is missing from Conditional Access, use the following steps to remediate the issue:
+
+1. Check whether the application is present in the tenant:
+ 1. Sign in to the **Azure portal**.
+ 1. Browse to **Azure Active Directory** > **Enterprise applications**
+ 1. Remove the filters to see all applications, and search for "VM". If you don't see Azure Linux VM Sign-In as a result, the service principal is missing from the tenant.
+
+Another way to verify it is via Graph PowerShell:
+
+1. [Install the Graph PowerShell SDK](/powershell/microsoftgraph/installation) if you haven't already done so.
+1. `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`
+1. Sign in with a Global Admin account
+1. Consent to the permission prompt
+1. `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Linux VM Sign-In"'`
+ 1. If this command results in no output and returns you to the PowerShell prompt, you can create the Service Principal with the following Graph PowerShell command:
+ 1. `New-MgServicePrincipal -AppId ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0`
+ 1. Successful output will show that the AppId and the application name Azure Linux VM Sign-In were created.
+1. Sign out of Graph PowerShell when complete with the following command: `Disconnect-MgGraph`. The steps above are collected into a single script after this list.
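Collected into a single script, the Graph PowerShell steps above look roughly like the following sketch (the AppId value comes from the steps above; everything else is the same commands in order):

```powershell
# The numbered Graph PowerShell steps above, collected into one sketch.
Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"

$sp = Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Linux VM Sign-In"'
if (-not $sp) {
    # Create the missing service principal for Azure Linux VM Sign-In.
    New-MgServicePrincipal -AppId ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0
}

Disconnect-MgGraph
```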
## Login using Azure AD user account to SSH into the Linux VM

### Using Az CLI
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 06/08/2022 Last updated : 06/22/2022
The following device attributes can be used.
Device attribute | Values | Example
 -- | -- | -
 accountEnabled | true false | device.accountEnabled -eq true
- displayName | any string value | device.displayName -eq "Rob iPhone"
- deviceOSType | any string value | (device.deviceOSType -eq "iPad") -or (device.deviceOSType -eq "iPhone")<br>device.deviceOSType -contains "AndroidEnterprise"<br>device.deviceOSType -eq "AndroidForWork"<br>device.deviceOSType -eq "Windows"
- deviceOSVersion | any string value | device.deviceOSVersion -eq "9.1"<br>device.deviceOSVersion -startsWith "10.0.1"
deviceCategory | a valid device category name | device.deviceCategory -eq "BYOD"
+ deviceId | a valid Azure AD device ID | device.deviceId -eq "d4fe7726-5966-431c-b3b8-cddc8fdb717d"
+ deviceManagementAppId | a valid MDM application ID in Azure AD | device.deviceManagementAppId -eq "0000000a-0000-0000-c000-000000000000" for Intune MDM app ID
deviceManufacturer | any string value | device.deviceManufacturer -eq "Samsung" deviceModel | any string value | device.deviceModel -eq "iPad Air"
+ displayName | any string value | device.displayName -eq "Rob iPhone"
+ deviceOSType | any string value | (device.deviceOSType -eq "iPad") -or (device.deviceOSType -eq "iPhone")<br>device.deviceOSType -contains "AndroidEnterprise" <br>device.deviceOSType -eq "AndroidForWork"<br>device.deviceOSType -eq "Windows"
+ deviceOSVersion | any string value | device.deviceOSVersion -eq "9.1"<br>device.deviceOSVersion -startsWith "10.0.1"
deviceOwnership | Personal, Company, Unknown | device.deviceOwnership -eq "Company"
+ devicePhysicalIds | any string value used by Autopilot, such as all Autopilot devices, OrderID, or PurchaseOrderID | device.devicePhysicalIds -any _ -contains "[ZTDId]"<br>device.devicePhysicalIds -any _ -eq "[OrderID]:179887111881"<br>device.devicePhysicalIds -any _ -eq "[PurchaseOrderId]:76222342342"
+ deviceTrustType | AzureAD, ServerAD, Workplace | device.deviceTrustType -eq "AzureAD"
enrollmentProfileName | Apple Device Enrollment Profile name, Android Enterprise Corporate-owned dedicated device Enrollment Profile name, or Windows Autopilot profile name | device.enrollmentProfileName -eq "DEP iPhones"
+ extensionAttribute1 | any string value | device.extensionAttribute1 -eq "some string value"
+ extensionAttribute2 | any string value | device.extensionAttribute2 -eq "some string value"
+ extensionAttribute3 | any string value | device.extensionAttribute3 -eq "some string value"
+ extensionAttribute4 | any string value | device.extensionAttribute4 -eq "some string value"
+ extensionAttribute5 | any string value | device.extensionAttribute5 -eq "some string value"
+ extensionAttribute6 | any string value | device.extensionAttribute6 -eq "some string value"
+ extensionAttribute7 | any string value | device.extensionAttribute7 -eq "some string value"
+ extensionAttribute8 | any string value | device.extensionAttribute8 -eq "some string value"
+ extensionAttribute9 | any string value | device.extensionAttribute9 -eq "some string value"
+ extensionAttribute10 | any string value | device.extensionAttribute10 -eq "some string value"
+ extensionAttribute11 | any string value | device.extensionAttribute11 -eq "some string value"
+ extensionAttribute12 | any string value | device.extensionAttribute12 -eq "some string value"
+ extensionAttribute13 | any string value | device.extensionAttribute13 -eq "some string value"
+ extensionAttribute14 | any string value | device.extensionAttribute14 -eq "some string value"
+ extensionAttribute15 | any string value | device.extensionAttribute15 -eq "some string value"
 isRooted | true false | device.isRooted -eq true
 managementType | MDM (for mobile devices) | device.managementType -eq "MDM"
 memberOf | Any string value (valid group object ID) | device.memberof -any (group.objectId -in ['value'])
- deviceId | a valid Azure AD device ID | device.deviceId -eq "d4fe7726-5966-431c-b3b8-cddc8fdb717d"
objectId | a valid Azure AD object ID | device.objectId -eq "76ad43c9-32c5-45e8-a272-7b58b58f596d"
- devicePhysicalIds | any string value used by Autopilot, such as all Autopilot devices, OrderID, or PurchaseOrderID | device.devicePhysicalIDs -any _ -contains "[ZTDId]"<br>(device.devicePhysicalIds -any _ -eq "[OrderID]:179887111881"<br>(device.devicePhysicalIds -any _ -eq "[PurchaseOrderId]:76222342342"
+ profileType | a valid [profile type](https://docs.microsoft.com/graph/api/resources/device?view=graph-rest-1.0#properties) in Azure AD | device.profileType -eq "RegisteredDevice"
 systemLabels | any string matching the Intune device property for tagging Modern Workplace devices | device.systemLabels -contains "M365Managed"

> [!NOTE]
-> For the deviceOwnership when creating Dynamic Groups for devices you need to set the value equal to "Company". On Intune the device ownership is represented instead as Corporate. Refer to [OwnerTypes](/intune/reports-ref-devices#ownertypes) for more details.
+> When using deviceOwnership to create Dynamic Groups for devices, you need to set the value equal to "Company". On Intune the device ownership is represented instead as Corporate. Refer to [OwnerTypes](/intune/reports-ref-devices#ownertypes) for more details.
+> When using deviceTrustType to create Dynamic Groups for devices, you need to set the value equal to "AzureAD" to represent Azure AD joined devices, "ServerAD" to represent Hybrid Azure AD joined devices or "Workplace" to represent Azure AD registered devices.
+> When using extensionAttribute1-15 to create Dynamic Groups for devices, you need to set the value for extensionAttribute1-15 on the device. Learn more about [how to write extensionAttributes on an Azure AD device object](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http#example-2--write-extensionattributes-on-a-device). A hedged rule-in-action sketch follows this note.
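As a hedged example of putting a device rule to work, the following Graph PowerShell sketch creates a dynamic device group from the deviceOwnership attribute described above. The display name and mail nickname are placeholders, and dynamic groups require the appropriate Azure AD Premium licensing.

```powershell
# Hedged sketch: create a dynamic device group keyed on deviceOwnership.
# Display name and mail nickname are placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "All corporate devices" `
    -MailEnabled:$false -MailNickname "allcorporatedevices" -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(device.deviceOwnership -eq "Company")' `
    -MembershipRuleProcessingState "On"
```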
## Next steps
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 06/20/2022 Last updated : 06/22/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on June 20th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on June 22nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 A3 FOR STUDENTS | M365EDU_A3_STUDENT | 7cfd9a2b-e110-4c39-bf20-c6a3f36a3121 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU 
(9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>PowerApps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Microsoft 365 A3 for students use benefit | M365EDU_A3_STUUSEBNFT | 18250162-5d87-4436-a834-d795c15c80f3 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 
(4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Microsoft 365 A3 - Unattended License for students use benefit | M365EDU_A3_STUUSEBNFT_RPA1 | 1aa94593-ca12-4254-a738-81a5972958e8 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 
(57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (New) 
(e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
-| Microsoft 365 A5 for Faculty | M365EDU_A5_FACULTY | e97c048c-37a4-45fb-ab50-922fbf07a370 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 
(70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) 
(8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Microsoft 365 A5 for Faculty | M365EDU_A5_FACULTY | e97c048c-37a4-45fb-ab50-922fbf07a370 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 
(4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
| MICROSOFT 365 A5 FOR STUDENTS | M365EDU_A5_STUDENT | 46c119d4-0379-4a9d-85e4-97c66d3f909e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 
(6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Microsoft 365 A5 for students use benefit | M365EDU_A5_STUUSEBNFT | 31d57bc7-3a05-4867-ab53-97a17835a411 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM 
(41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication 
(8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Microsoft 365 A5 without Audio Conferencing for students use benefit | M365EDU_A5_NOPSTNCONF_STUUSEBNFT | 81441ae1-0b31-4185-a6c0-32b6b84d419f | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INTUNE_EDU (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 
(9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MINECRAFT_EDUCATION_EDITION (4c246bbc-f513-4311-beff-eba54c353256)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_NO_SEEDING (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Intune for Education (da24caf9-af8e-485c-b7c8-e73336da2693)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Minecraft Education Edition (4c246bbc-f513-4311-beff-eba54c353256)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E5) 
(b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print Without Seeding (b67adbaf-a096-42c9-967e-5a84edbe0086)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
|Microsoft 365 E3 - Unattended License | SPE_E3_RPA1 | c2ac2ee4-9bb1-47e4-8541-d689c7e83371 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft 
Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/> To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 E3_USGOV_DOD | SPE_E3_USGOV_DOD | d61d61cc-f992-433f-a577-5bd016037eeb | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Microsoft 365 E3_USGOV_GCCHIGH | SPE_E3_USGOV_GCCHIGH | ca9d1dd9-dfe9-4fef-b97c-9bc1ea3c3658 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A 
(c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_GCCHIGH (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for GCCHigh (AR) (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
-| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard 
(5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| Microsoft 365 E5 Developer (without Windows and Audio Conferencing) | DEVELOPERPACK_E5 | c42b9cae-ea4f-4ab7-9717-81576235ccac | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE 
(2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) 
(5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
-| Microsoft 365 E5 Compliance | INFORMATION_PROTECTION_COMPLIANCE | 184efa21-98c3-4e5d-95ab-d07053a96e67 | RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34) | Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34) |
-| Microsoft 365 E5 Security | IDENTITY_THREAT_PROTECTION | 26124093-3d78-432b-b5dc-48bf992543d5 | AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) |
-| Microsoft 365 E5 Security for EMS E5 | IDENTITY_THREAT_PROTECTION_FOR_EMS_E5 | 44ac31e7-2999-4304-ad94-c948886741d4 | WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) |
-| Microsoft 365 E5 without Audio Conferencing | SPE_E5_NOPSTNCONF | cd2925a3-5076-4233-8931-638a8c94f773 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PREMIUM_ENCRYPTION 
(617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) 
(e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>WINDEFATP 
(871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search 
(94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
+| Microsoft 365 E5 Developer (without Windows and Audio Conferencing) | DEVELOPERPACK_E5 | c42b9cae-ea4f-4ab7-9717-81576235ccac | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE 
(2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype 
for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
+| Microsoft 365 E5 Compliance | INFORMATION_PROTECTION_COMPLIANCE | 184efa21-98c3-4e5d-95ab-d07053a96e67 | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) |
+| Microsoft 365 E5 Security | IDENTITY_THREAT_PROTECTION | 26124093-3d78-432b-b5dc-48bf992543d5 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) |
+| Microsoft 365 E5 Security for EMS E5 | IDENTITY_THREAT_PROTECTION_FOR_EMS_E5 | 44ac31e7-2999-4304-ad94-c948886741d4 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
+| Microsoft 365 E5 without Audio Conferencing | SPE_E5_NOPSTNCONF | cd2925a3-5076-4233-8931-638a8c94f773 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP 
(64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams 
(57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
| Microsoft 365 F1 | M365_F1 | 44575883-256e-4a79-9da4-ebe9acabe2b2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 K SKU (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SharePoint Online Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 F3 | SPE_F1 | 66b55226-6b4f-492c-910c-a3b7a3c9d993 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>WIN10_ENT_LOC_F1 (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM 
(6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (90db65a7-bf11-4904-a79f-ef657605145b)<br/>Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Kaizala Pro (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Project for Office (Plan F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Windows 10 Enterprise E3 (Local Only) (e041597c-9c7f-4ed9-99b0-2663301576f7)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for Office 365 F3 (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Power Apps for Office 365 F3 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>Power Automate for Office 365 F3 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Power Virtual Agents for Office 365 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a) | | Microsoft 365 F5 Security Add-on | SPE_F5_SEC | 67ffe999-d9ca-49e1-9d2c-03fb28aa7a48 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) 
(f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | | Microsoft 365 F5 Security + Compliance Add-on | SPE_F5_SECCOMP | 32b47245-eb31-44fc-b945-a8b1576c439f | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery 
(4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | | MICROSOFT FLOW FREE | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) | | MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
-| Microsoft 365 E5 Suite features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-based classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) |
+| Microsoft 365 E5 Suite Features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e) | Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e) |
| Microsoft 365 F1 | M365_F1_COMM | 50f60901-3181-4b75-8a2c-4c8e4c1d5a72 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure
Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) | | MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS 
FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | MICROSOFT DEFENDER FOR ENDPOINT | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Endpoint P1 | DEFENDER_ENDPOINT_P1 | 16a55f2f-ff35-4cd5-9146-fb784e3761a5 | Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4) | MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4) |
+| Microsoft Defender for Endpoint P2_XPLAT | MDATP_XPLAT | b126b073-72db-4a9d-87a4-b17afe41d4ab | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Intune_Defender (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MDE_SecurityManagement (1689aade-3d6a-4bfc-b017-46d2672df5ad)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
| Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | | Microsoft Defender for Office 365 (Plan 1) Faculty | ATP_ENTERPRISE_FACULTY | 26ad4b5c-b686-462e-84b9-d7c22b46837f | ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | | MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
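For reference, the SKU IDs (GUID column) and service plan IDs in these tables are the same identifiers Microsoft Graph uses when licenses are managed programmatically. A minimal sketch, assuming an access token with the Organization.Read.All and User.ReadWrite.All permissions and a placeholder user:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with Organization.Read.All
# (to list SKUs) and User.ReadWrite.All (to assign licenses).
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# List the SKUs the tenant has purchased; skuId and servicePlans carry
# the same GUIDs shown in the tables above.
skus = requests.get(f"{GRAPH}/subscribedSkus", headers=HEADERS).json()
for sku in skus.get("value", []):
    print(sku["skuPartNumber"], sku["skuId"])

# Assign Microsoft 365 F1 to a (placeholder) user, with the
# YAMMER_ENTERPRISE service plan disabled.
payload = {
    "addLicenses": [{
        "skuId": "44575883-256e-4a79-9da4-ebe9acabe2b2",  # M365_F1
        "disabledPlans": ["7547a3fe-08ee-4ccb-b430-5077c5041653"],  # YAMMER_ENTERPRISE
    }],
    "removeLicenses": [],
}
resp = requests.post(f"{GRAPH}/users/user@contoso.com/assignLicense",
                     headers=HEADERS, json=payload)
resp.raise_for_status()
```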
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
This policy applies to all users who are accessing Azure Resource Manager servic
### Authentication methods
-Security defaults allow registration and use of Azure AD Multi-Factor Authentication **using only the Microsoft Authenticator app using notifications**. Conditional Access allows the use of any authentication method the administrator chooses to enable.
+With security defaults, users are required to register for and use Azure AD Multi-Factor Authentication **with notifications through the Microsoft Authenticator app**. Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.
-| Method | Security defaults | Conditional Access |
-| | | |
-| Notification through mobile app | X | X |
-| Verification code from mobile app or hardware token | X** | X |
-| Text message to phone | | X |
-| Call to phone | | X |
-| App passwords | | X*** |
--- ** Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option.-- *** App passwords are only available in per-user MFA with legacy authentication scenarios only if enabled by administrators.
+Starting in July 2022, anyone assigned the global administrator role will be required to register a phone-based method, such as call or text, as a backup method.
> [!WARNING] > Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
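For reference, security defaults are exposed as a single Microsoft Graph policy object that can be read and toggled. A minimal sketch, assuming an access token with a suitable policy permission (for example, Policy.Read.All to read and Policy.ReadWrite.ConditionalAccess to update; verify against the current Graph reference):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with Policy.Read.All (read) or
# Policy.ReadWrite.ConditionalAccess (update); verify in the Graph docs.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

URL = f"{GRAPH}/policies/identitySecurityDefaultsEnforcementPolicy"

# Check whether security defaults are currently enforced.
policy = requests.get(URL, headers=HEADERS).json()
print("Security defaults enabled:", policy["isEnabled"])

# Enable security defaults. Note the mutual exclusivity described below:
# tenants using Conditional Access policies can't also use security defaults.
requests.patch(URL, headers=HEADERS, json={"isEnabled": True}).raise_for_status()
```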
Security defaults allow registration and use of Azure AD Multi-Factor Authentica
Every organization should have at least two backup administrator accounts configured. We call these emergency access accounts.
-These accounts may be used in scenarios where your normal administrator accounts can't be used. For example: The person with the most recent Global Administrator access has left the organization. Azure AD prevents the last Global Administrator account from being deleted, but it doesn't prevent the account from being deleted or disabled on-premises. Either situation might make the organization unable to recover the account.
+These accounts may be used in scenarios where your normal administrator accounts can't be used. For example, the person with the most recent global administrator access has left the organization. Azure AD prevents the last global administrator account from being deleted, but it doesn't prevent the account from being deleted or disabled on-premises. Either situation might make the organization unable to recover the account.
Emergency access accounts are: -- Assigned Global Administrator rights in Azure AD.
+- Assigned global administrator rights in Azure AD.
- Aren't used on a daily basis. - Are protected with a long complex password.
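Such an emergency access account can also be provisioned programmatically. A minimal sketch using Microsoft Graph, assuming an access token with the User.ReadWrite.All and RoleManagement.ReadWrite.Directory permissions; the user principal name and password are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with User.ReadWrite.All and
# RoleManagement.ReadWrite.Directory; UPN and password are placeholders.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Create a cloud-only emergency access ("break glass") account.
user = requests.post(f"{GRAPH}/users", headers=HEADERS, json={
    "accountEnabled": True,
    "displayName": "Emergency Access 01",
    "mailNickname": "emergency01",
    "userPrincipalName": "emergency01@contoso.onmicrosoft.com",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": False,
        "password": "<long-complex-generated-password>",
    },
}).json()

# Assign the Global Administrator role tenant-wide; the roleDefinitionId is
# the well-known Global Administrator role template ID.
requests.post(f"{GRAPH}/roleManagement/directory/roleAssignments",
              headers=HEADERS, json={
    "principalId": user["id"],
    "roleDefinitionId": "62e90394-69f5-4237-9190-012177145e10",
    "directoryScopeId": "/",
}).raise_for_status()
```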
If your organization is a previous user of per-user based Azure AD Multi-Factor
### Conditional Access
-You can use Conditional Access to configure policies similar to security defaults, but with more granularity including user exclusions, which aren't available in security defaults. If you're using Conditional Access in your environment today, security defaults won't be available to you.
+You can use Conditional Access to configure policies similar to security defaults, but with more granularity, including the ability to select other authentication methods and to exclude users, neither of which is available in security defaults. If you're using Conditional Access in your environment today, security defaults won't be available to you.
![Warning message that you can have security defaults or Conditional Access, not both](./media/concept-fundamentals-security-defaults/security-defaults-conditional-access.png)
active-directory Frontline Worker Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/frontline-worker-management.md
Title: Frontline worker management - Azure Active Directory
+ Title: Frontline worker management
description: Learn about frontline worker management capabilities that are provided through the My Staff portal. Previously updated : 03/16/2021--- Last updated : 06/16/2022++ #Customer intent: As a manager of frontline workers, I want an intuitive portal so that I can easily onboard new workers and provision shared devices. + # Frontline worker management Frontline workers account for over 80 percent of the global workforce. Yet because of high scale, rapid turnover, and fragmented processes, frontline workers often lack the tools to make their demanding jobs a little easier. Frontline worker management brings digital transformation to the entire frontline workforce. The workforce may include managers, frontline workers, operations, and IT. Frontline worker management empowers the frontline workforce by making the following activities easier to accomplish:+ - Streamlining common IT tasks with My Staff - Easy onboarding of frontline workers through simplified authentication - Seamless provisioning of shared devices and secure sign-out of frontline workers ## Delegated user management through My Staff
-Azure Active Directory (Azure AD) provides the ability to delegate user management to frontline managers through the [My Staff portal](../roles/my-staff-configure.md), helping save valuable time and reduce risks. By enabling simplified password resets and phone management directly from the store or factory floor, managers can grant access to employees without routing the request through the help-desk, IT, or operations.
+Azure Active Directory (Azure AD) enables delegated user management through the [My Staff portal](../roles/my-staff-configure.md), helping frontline managers save valuable time and reduce risks. When an administrator enables simplified password resets and phone management directly from the store or factory floor, managers can grant access to employees without routing the request through the help desk, IT, or operations.
![Delegated user management in the My Staff portal](media/concept-fundamentals-frontline-worker/delegated-user-management.png) ## Accelerated onboarding with simplified authentication
-My Staff also enables frontline managers to register their team members' phone numbers for [SMS sign-in](../authentication/howto-authentication-sms-signin.md). In many verticals, frontline workers maintain a local username and password combination, a solution that is often cumbersome, expensive, and error-prone. When IT enables authentication using SMS sign-in, frontline workers can log in with [single sign-on (SSO)](../manage-apps/what-is-single-sign-on.md) for Microsoft Teams and other apps using just their phone number and a one-time passcode (OTP) sent via SMS. This makes signing in for frontline workers simple and secure, delivering quick access to the apps they need most.
+My Staff also enables frontline managers to register their team members' phone numbers for [SMS sign-in](../authentication/howto-authentication-sms-signin.md). In many verticals, frontline workers maintain a local username and password combination, a solution that is often cumbersome, expensive, and error-prone. When IT enables authentication using SMS sign-in, frontline workers can log in with [Single Sign-On (SSO)](../manage-apps/what-is-single-sign-on.md) for Microsoft Teams and other applications using just their phone number and a one-time passcode (OTP) sent via SMS. Single Sign-On makes signing in for frontline workers simple and secure, delivering quick access to the apps they need most.
![SMS sign-in](media/concept-fundamentals-frontline-worker/sms-signin.png)
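For context on how SMS sign-in is switched on, the SMS method is managed through the Azure AD authentication methods policy in Microsoft Graph. A minimal sketch, assuming an access token with the Policy.ReadWrite.AuthenticationMethod permission and a placeholder group ID; the exact resource shape should be verified against the current Graph reference:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with
# Policy.ReadWrite.AuthenticationMethod; the group ID is a placeholder for
# the frontline workers' group. Verify the resource shape in the Graph docs.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

URL = (f"{GRAPH}/policies/authenticationMethodsPolicy/"
       "authenticationMethodConfigurations/sms")

# Enable the SMS method and make it usable for sign-in for one group.
payload = {
    "@odata.type": "#microsoft.graph.smsAuthenticationMethodConfiguration",
    "state": "enabled",
    "includeTargets": [{
        "targetType": "group",
        "id": "<frontline-workers-group-id>",
        "isUsableForSignIn": True,
    }],
}
requests.patch(URL, headers=HEADERS, json=payload).raise_for_status()
```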
Frontline managers can also use Managed Home Screen (MHS) application to allow w
## Secure sign-out of frontline workers from shared devices
-Many companies use shared devices so frontline workers can do inventory management and point-of-sale transactions, without the IT burden of provisioning and tracking individual devices. With shared device sign-out, it's easy for a frontline worker to securely sign out of all apps on any shared device before handing it back to a hub or passing it off to a teammate on the next shift. Microsoft Teams is one of the apps that is currently supported on shared devices and it allows frontline workers to view tasks that are assigned to them. Once a worker signs out of a shared device, Intune and Azure AD clear all of the company data so the device can safely be handed off to the next associate. You can choose to integrate this capability into all your line-of-business [iOS](../develop/msal-ios-shared-devices.md) and [Android](../develop/msal-android-shared-devices.md) apps using the [Microsoft Authentication Library](../develop/msal-overview.md).
+Frontline workers in many companies use shared devices to do inventory management and sales transactions. Sharing devices reduces the IT burden of provisioning and tracking them individually. With shared device sign-out, it's easy for a frontline worker to securely sign out of all apps on any shared device before handing it back to a hub or passing it off to a teammate on the next shift. Frontline workers can use Microsoft Teams to view their assigned tasks. Once a worker signs out of a shared device, Intune and Azure AD clear all of the company data so the device can safely be handed off to the next associate. You can choose to integrate this capability into all your line-of-business [iOS](../develop/msal-ios-shared-devices.md) and [Android](../develop/msal-android-shared-devices.md) apps using the [Microsoft Authentication Library](../develop/msal-overview.md).
![Shared device sign-out](media/concept-fundamentals-frontline-worker/shared-device-signout.png) ## Next steps -- For more information on delegated user management, see [My Staff user documentation](https://support.microsoft.com/account-billing/manage-front-line-users-with-my-staff-c65b9673-7e1c-4ad6-812b-1a31ce4460bd).-- For inbound user provisioning from SAP SuccessFactors, see the tutorial on [configuring SAP SuccessFactors to Active Directory user provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md).-- For inbound user provisioning from Workday, see the tutorial on [configuring Workday for automatic user provisioning](../saas-apps/workday-inbound-tutorial.md).
+- For more information on delegated user management, see [My Staff user documentation](https://support.microsoft.com/account-billing/manage-front-line-users-with-my-staff-c65b9673-7e1c-4ad6-812b-1a31ce4460bd).
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
In this article, you'll learn how to configure the way users consent to applicat
Before an application can access your organization's data, a user must grant the application permissions to do so. Different permissions allow different levels of access. By default, all users are allowed to consent to applications for permissions that don't require administrator consent. For example, by default, a user can consent to allow an app to access their mailbox but can't consent to allow an app unfettered access to read and write to all files in your organization.
-> [!IMPORTANT]
->To reduce the risk of malicious applications attempting to trick users into granting them access to your organization's data, we recommend that you allow user consent only for applications that have been published by a [verified publisher](../develop/publisher-verification-overview.md).
+To reduce the risk of malicious applications attempting to trick users into granting them access to your organization's data, we recommend that you allow user consent only for applications that have been published by a [verified publisher](../develop/publisher-verification-overview.md).
+
+>[!IMPORTANT]
+>Starting September 30, 2022, the default consent setting for new tenants will be to follow Microsoft's recommendation. Microsoft's initial recommendation at that time will be that end users can't consent to multi-tenant applications without publisher verification if the application requests basic permissions, such as sign-in and reading the user's profile.
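Beyond the portal procedure below, the same consent setting can be read and updated through the Microsoft Graph authorization policy. A minimal sketch, assuming an access token with the Policy.ReadWrite.Authorization permission (the policy ID shown is the built-in option that limits user consent to low-risk permissions from verified publishers):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with Policy.ReadWrite.Authorization.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

URL = f"{GRAPH}/policies/authorizationPolicy"

# Inspect which permission grant policies apply to the default user role.
policy = requests.get(URL, headers=HEADERS).json()
print(policy["defaultUserRolePermissions"]["permissionGrantPoliciesAssigned"])

# Restrict user consent to low-risk permissions requested by apps from
# verified publishers (the built-in "microsoft-user-default-low" policy).
requests.patch(URL, headers=HEADERS, json={
    "defaultUserRolePermissions": {
        "permissionGrantPoliciesAssigned": [
            "ManagePermissionGrantsForSelf.microsoft-user-default-low"
        ]
    }
}).raise_for_status()
```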
## Prerequisites
To configure user consent, you need:
## Configure user consent settings
-To configure user consent settings through the Azure portal, do the following:
+To configure user consent settings through the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Refer to the following guided configuration tutorials using Easy Button template
- [F5-BIG-IP Easy Button for SSO to Oracle JD Edwards](f5-big-ip-oracle-jde-easy-button.md)
+- [F5-BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)
+ - [F5-BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md) ## Azure AD B2B guest access
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Title: Protecting against consent phishing
-description: Learn ways of mitigating against app-based consent phishing attacks using Azure AD.
+ Title: Protect against consent phishing
+description: Learn ways of mitigating against application-based consent phishing attacks using Azure Active Directory.
-+ Previously updated : 08/09/2021 Last updated : 06/17/2022 -+
-#Customer intent: As a developer, I want to learn how to protect against app-based consent phishing attacks so I can protect my users from malicious threat actors.
+#Customer intent: As a developer, I want to learn how to protect against application-based consent phishing attacks so I can protect my users from malicious threat actors.
-# Protecting against consent phishing
+# Protect against consent phishing
-Productivity is no longer confined to private networks, and work has shifted dramatically toward cloud services. While cloud applications enable employees to be productive remotely, attackers can also use application-based attacks to gain access to valuable organization data. You may be familiar with attacks focused on users, such as email phishing or credential compromise. ***Consent phishing*** is another threat vector to be aware of.
-This article explores what consent phishing is, what Microsoft does to protect you, and what steps organizations can take to stay safe.
+Productivity is no longer confined to private networks, and work has shifted dramatically toward cloud services. While cloud applications enable employees to be productive remotely, attackers can also use application-based attacks to gain access to valuable organization data. You may be familiar with attacks focused on users, such as email phishing or credential compromise. ***Consent phishing*** is another threat vector to be aware of.
+
+This article explores what consent phishing is, what Microsoft does to protect an organization, and what steps organizations can take to stay safe.
## What is consent phishing?
-Consent phishing attacks trick users into granting permissions to malicious cloud apps. These malicious apps can then gain access to users' legitimate cloud services and data. Unlike credential compromise, *threat actors* who perform consent phishing will target users who can grant access to their personal or organizational data directly. The consent screen displays all permissions the app receives. Because the application is hosted by a legitimate provider (such as Microsoft's identity platform), unsuspecting users accept the terms or hit '*Accept*', which grants a malicious application the requested permissions to the user's or organization's data.
+Consent phishing attacks trick users into granting permissions to malicious cloud applications. These malicious applications can then gain access to users' legitimate cloud services and data. Unlike credential compromise, *threat actors* who perform consent phishing target users who can grant access to their personal or organizational data directly. The consent screen displays all permissions the application receives. Because the application is hosted by a legitimate provider (such as the Microsoft identity platform), unsuspecting users accept the terms, which grant a malicious application the requested permissions to the data. The following image shows an example of an OAuth app that is requesting access to a wide variety of permissions.
:::image type="content" source="./media/protect-consent-phishing/permissions-requested.png" alt-text="Screenshot showing permissions requested window requiring user consent.":::
-*An example of an OAuth app that is requesting access to a wide variety of permissions.*
+## Mitigating consent phishing attacks
-## Mitigating consent phishing attacks using Azure AD
+Administrators, users, or Microsoft security researchers may flag OAuth applications that appear to behave suspiciously. A flagged application is reviewed by Microsoft to determine whether it violates the terms of service. If a violation is confirmed, Azure AD disables the application and prevents further use across all Microsoft services.
-Admins, users, or Microsoft security researchers may flag OAuth applications that appear to behave suspiciously. A flagged application will be reviewed by Microsoft to determine whether the app violates the terms of service. If a violation is confirmed, Azure AD will disable the application and prevent further use across all Microsoft services.
+When Azure AD disables an OAuth application, the following actions occur:
-When Azure AD disables an OAuth application, a few things happen:
-- The malicious application and related service principals are placed into a fully disabled state. Any new token requests or requests for refresh tokens will be denied, but existing access tokens will still be valid until their expiration.-- We surface the disabled state through an exposed property called *disabledByMicrosoftStatus* on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph.-- Global admins who may have had a user in their organization that consented to an application before disablement by Microsoft should receive an email reflecting the action taken and recommended steps they can take to investigate and improve their security posture.
+- The malicious application and related service principals are placed into a fully disabled state. Any new token requests or requests for refresh tokens are denied, but existing access tokens are still valid until their expiration.
+- The disabled state is surfaced through an exposed property called *disabledByMicrosoftStatus* on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph (see the query sketch after this list).
+- An email is sent to a global administrator when a user in an organization consented to an application before it was disabled. The email specifies the action taken and the recommended steps they can take to investigate and improve their security posture.
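To check whether a tenant contains applications that Microsoft has disabled, the *disabledByMicrosoftStatus* property mentioned above can be queried. A minimal sketch, assuming an access token with the Application.Read.All permission; the filter uses Graph's advanced-query pattern, so the ConsistencyLevel header and $count are included:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with Application.Read.All.
TOKEN = "<access-token>"
# Filtering on disabledByMicrosoftStatus uses the advanced-query pattern,
# which requires the ConsistencyLevel header together with $count.
HEADERS = {"Authorization": f"Bearer {TOKEN}", "ConsistencyLevel": "eventual"}

params = {
    "$filter": "disabledByMicrosoftStatus eq 'DisabledDueToViolationOfServicesAgreement'",
    "$select": "appId,displayName,disabledByMicrosoftStatus",
    "$count": "true",
}
resp = requests.get(f"{GRAPH}/servicePrincipals", headers=HEADERS, params=params)
for sp in resp.json().get("value", []):
    print(sp["displayName"], sp["appId"])
```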
## Recommended response and remediation
-If your organization has been impacted by an application disabled by Microsoft, we recommend these immediate steps to keep your environment secure:
+If the organization has been impacted by an application disabled by Microsoft, the following immediate steps should be taken to keep the environment secure:
-1. Investigate the application activity for the disabled application, including:
+1. Investigate the application activity for the disabled application, including:
- The delegated permissions or application permissions requested by the application. - The Azure AD audit logs for activity by the application and sign-in activity for users authorized to use the application.
-1. Review and implement the [guidance on defending against illicit consent grants](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants) in Microsoft cloud products, including auditing permissions and consent for the disabled application or any other suspicious apps found during review.
+1. Review and use the [guidance for defending against illicit consent grants](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants). The guidance includes auditing permissions and consent for disabled and suspicious applications found during review.
1. Implement best practices for hardening against consent phishing, described below. - ## Best practices for hardening against consent phishing attacks
-At Microsoft, we want to put admins in control by providing the right insights and capabilities to control how applications are allowed and used within organizations. While attackers will never rest, there are steps organizations can take to improve their security posture. Some best practices to follow include:
-
-* Educate your organization on how our permissions and consent framework works
- Understand the data and the permissions an application is asking for and understand how [permissions and consent](../develop/v2-permissions-and-consent.md) work within our platform.
- Ensure administrators know how to [manage and evaluate consent requests](./manage-consent-requests.md).
- - Routinely [audit apps and consented permissions](../../security/fundamentals/steps-secure-identity.md#audit-apps-and-consented-permissions) in your organization to ensure applications that are used are accessing only the data they need and adhering to the principles of least privilege.
-* Know how to spot and block common consent phishing tactics
- Check for poor spelling and grammar. If an email message or the application's consent screen has spelling and grammatical errors, it's likely a suspicious application. In that case, you can report it directly on the [consent prompt](../develop/application-consent-experience.md#building-blocks-of-the-consent-prompt) with the "*Report it here*" link and Microsoft will investigate if it is a malicious application and disable it, if confirmed.
- Don't rely on app names and domain URLs as a source of authenticity. Attackers like to spoof app names and domains that make it appear to come from a legitimate service or company to drive consent to a malicious app. Instead validate the source of the domain URL and use applications from [verified publishers](../develop/publisher-verification-overview.md) when possible.
- - Block [consent phishing emails with Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/set-up-anti-phishing-policies#impersonation-settings-in-anti-phishing-policies-in-microsoft-defender-for-office-365) by protecting against phishing campaigns where an attacker is impersonating a known user in your organization.
- - Configure Microsoft Defender for Cloud Apps policies such as [activity policies](/cloud-app-security/user-activity-policies), [anomaly detection](/cloud-app-security/anomaly-detection-policy), and [OAuth app policies](/cloud-app-security/app-permission-policy) to help manage and take action on abnormal application activity in to your organization.
- - Investigate and hunt for consent phishing attacks by following the guidance on [advanced hunting with Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview).
-* Allow access to apps you trust and protect against those you don't trust
- - Use applications that have been publisher verified. [Publisher verification](../develop/publisher-verification-overview.md) helps admins and end users understand the authenticity of application developers through a Microsoft supported vetting process.
- - [Configure user consent settings](./configure-user-consent.md?tabs=azure-portal) to allow users to only consent to specific applications you trust, such as application developed by your organization or from verified publishers.
- - Create proactive [app governance](/microsoft-365/compliance/app-governance-manage-app-governance) policies to monitor third-party app behavior on the Microsoft 365 platform to address common suspicious app behaviors.
+Administrators should be in control of application use, with the right insights and capabilities to govern how applications are allowed and used within organizations. While attackers never rest, there are steps organizations can take to improve their security posture. Some best practices to follow include:
+
+- Educate your organization on how our permissions and consent framework works:
+ - Understand the data and the permissions an application is asking for and understand how [permissions and consent](../develop/v2-permissions-and-consent.md) work within the platform.
+ - Make sure that administrators know how to [manage and evaluate consent requests](./manage-consent-requests.md).
+ - Routinely [audit applications and consented permissions](../../security/fundamentals/steps-secure-identity.md#audit-apps-and-consented-permissions) in the organization to make sure that applications are accessing only the data they need and are adhering to the principles of least privilege (a query sketch follows this list).
+- Know how to spot and block common consent phishing tactics:
+ - Check for poor spelling and grammar. If an email message or the consent screen of the application has spelling and grammatical errors, it's likely a suspicious application. In that case, report it directly on the [consent prompt](../develop/application-consent-experience.md#building-blocks-of-the-consent-prompt) with the **Report it here** link and Microsoft will investigate if it's a malicious application and disable it, if confirmed.
+ - Don't rely on application names and domain URLs as a source of authenticity. Attackers like to spoof application names and domains that make it appear to come from a legitimate service or company to drive consent to a malicious application. Instead, validate the source of the domain URL and use applications from [verified publishers](../develop/publisher-verification-overview.md) when possible.
+ - Block [consent phishing emails with Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/set-up-anti-phishing-policies#impersonation-settings-in-anti-phishing-policies-in-microsoft-defender-for-office-365) by protecting against phishing campaigns where an attacker is impersonating a known user in the organization.
+ - Configure Microsoft Defender for Cloud Apps policies to help manage abnormal application activity in the organization. For example, [activity policies](/cloud-app-security/user-activity-policies), [anomaly detection](/cloud-app-security/anomaly-detection-policy), and [OAuth app policies](/cloud-app-security/app-permission-policy).
+ - Investigate and hunt for consent phishing attacks by following the guidance on [advanced hunting with Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview).
+- Allow access to trusted applications and protect against those applications that aren't:
+ - Use applications that have been publisher verified. [Publisher verification](../develop/publisher-verification-overview.md) helps administrators and users understand the authenticity of application developers through a Microsoft supported vetting process.
+ - [Configure user consent settings](./configure-user-consent.md?tabs=azure-portal) to allow users to only consent to specific trusted applications, such as applications developed by the organization or from verified publishers.
+ - Create proactive [application governance](/microsoft-365/compliance/app-governance-manage-app-governance) policies to monitor third-party application behavior on the Microsoft 365 platform to address common suspicious application behaviors.
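For the routine audit of consented permissions suggested above, delegated permission grants can be enumerated through Microsoft Graph. A minimal sketch, assuming an access token with the Directory.Read.All permission:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an access token with Directory.Read.All.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Page through all delegated permission grants; tenant-wide grants
# (consentType == "AllPrincipals") with broad scopes deserve review first.
url = f"{GRAPH}/oauth2PermissionGrants"
while url:
    page = requests.get(url, headers=HEADERS).json()
    for grant in page.get("value", []):
        print(grant["clientId"], grant["consentType"], grant["scope"])
    url = page.get("@odata.nextLink")
```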
## Next steps
-* [App consent grant investigation](/security/compass/incident-response-playbook-app-consent)
-* [Managing access to apps](./what-is-access-management.md)
-* [Restrict user consent operations in Azure AD](../../security/fundamentals/steps-secure-identity.md#restrict-user-consent-operations)
+- [Application consent grant investigation](/security/compass/incident-response-playbook-app-consent)
+- [Managing access to applications](./what-is-access-management.md)
+- [Restrict user consent operations in Azure AD](../../security/fundamentals/steps-secure-identity.md#restrict-user-consent-operations)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 04/04/2022 Last updated : 06/22/2022
Welcome to what's new in Azure Active Directory application management documenta
### Updated articles -- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
- [Grant tenant-wide admin consent to an application](grant-admin-consent.md) - [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)
+- [Manage app consent policies](manage-app-consent-policies.md)
+- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)
- [Quickstart: View enterprise applications](view-applications-portal.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md) - [Review admin consent requests](review-admin-consent-requests.md)-- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)-- [Manage app consent policies](manage-app-consent-policies.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)
- [Tutorial: Configure F5’s BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md) - [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)-
+- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)
+- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)
+- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
## February 2022 ### New articles -- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)
+- [Properties of an enterprise application](application-properties.md)
- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md) - [Tutorial: Govern and monitor applications](tutorial-govern-monitor.md)-- [Properties of an enterprise application](application-properties.md)
+- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)
### Updated articles -- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)
+- [Configure enterprise application properties](add-application-portal-configure.md)
+- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md)
- [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md)-- [Home Realm Discovery for an application](home-realm-discovery-policy.md) - [Disable auto-acceleration sign-in](prevent-domain-hints-with-home-realm-discovery.md)-- [Configure enterprise application properties](add-application-portal-configure.md)-- [What is application management in Azure Active Directory?](what-is-application-management.md)
+- [Home Realm Discovery for an application](home-realm-discovery-policy.md)
+- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)
- [Overview of enterprise application ownership in Azure Active Directory](overview-assign-app-owners.md)-- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)-- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md) - [Tutorial: Configure F5 BIG-IP’s Access Policy Manager for header-based SSO](f5-big-ip-header-advanced.md) - [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)-- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)-
-## January 2022
-
-### New articles
- [Tutorial: Configure F5’s BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)-- [Your sign-in was blocked](troubleshoot-app-publishing.md)-- [Publish your application in the Azure Active Directory application gallery](v2-howto-app-gallery-listing.md)-
-### Updated articles
-- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)-- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md)-- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)-- [Tutorial: Configure F5 BIG-IP’s Access Policy Manager for header-based SSO](f5-big-ip-header-advanced.md)-- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)-- [Tutorial: Configure F5’s BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)-- [Assign users and groups to an application](assign-user-or-group-access-portal.md)-- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)-- [Restrict access to a tenant](tenant-restrictions.md)-- [Configure how users consent to applications](configure-user-consent.md)-- [Troubleshoot password-based single sign-on](troubleshoot-password-based-sso.md)-- [Understand how users are assigned to apps](ways-users-get-assigned-to-applications.md)-- [Manage app consent policies](manage-app-consent-policies.md) - [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Quickstart: Add an enterprise application](add-application-portal.md)-- [Integrate F5 BIG-IP with Azure Active Directory](f5-aad-integration.md)-- [Configure risk-based step-up consent using PowerShell](configure-risk-based-step-up-consent.md)-- [An app page shows an error message after the user signs in](application-sign-in-problem-application-error.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)-- [Disable how a user signs in for an application](disable-user-sign-in-portal.md)-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)-- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)-- [Manage access to an application](what-is-access-management.md)--
-## December 2021
-
-### New articles
-
+- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)
+- [Tutorial: Configure F5’s BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)-- [Configure risk-based step-up consent using PowerShell](configure-risk-based-step-up-consent.md)-- [Grant consent on behalf of a single user by using PowerShell](grant-consent-single-user.md)-- [Overview of enterprise application ownership in Azure Active Directory](overview-assign-app-owners.md)-- [Azure Active Directory admin consent workflow frequently asked questions](admin-consent-workflow-faq.md)-- [Review and take action on admin consent requests](review-admin-consent-requests.md)-- [Overview of the Azure Active Directory application gallery](overview-application-gallery.md)-
-### Updated articles
--- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)-- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)-- [Applications listed in Enterprise applications](application-list.md)-- [Quickstart: View enterprise applications](view-applications-portal.md)-- [Secure hybrid access: Secure legacy apps with Azure Active Directory](secure-hybrid-access.md)-- [Secure hybrid access with Azure Active Directory partner integrations](secure-hybrid-access-integrations.md)-- [Create collections on the My Apps portal](access-panel-collections.md)-- [Restrict access to a tenant](tenant-restrictions.md)-- [Reasons why applications appear in my all applications list](application-types.md)-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)-- [Quickstart: Enable single sign-on for an enterprise application](add-application-portal-setup-sso.md)-- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)-- [Configure how users consent to applications](configure-user-consent.md)-- [Consent and permissions overview](consent-and-permissions-overview.md)-- [Manage consent to applications and evaluate consent requests](manage-consent-requests.md)-- [Remove user access to applications](methods-for-removing-user-access.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Assign enterprise application owners](assign-app-owners.md)-- [Integrate Azure AD with F5 BIG-IP for form-based authentication single sign-on](f5-big-ip-forms-advanced.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)-
-## November 2021
-
-### New articles
--- [Consent and permissions overview](consent-and-permissions-overview.md)-
-### Updated articles
--- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)-- [Unexpected error when performing consent to an application](application-sign-in-unexpected-user-consent-error.md)-- [Tutorial: Integrate Azure Active Directory with F5 BIG-IP for forms-based authentication Single sign-on](f5-big-ip-forms-advanced.md)-- [Enable self-service application assignment in Azure Active Directory](manage-self-service-access.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Grant tenant-wide admin consent to an application in Azure Active Directory](grant-admin-consent.md)-- [Assign users and groups to an application in Azure Active Directory](assign-user-or-group-access-portal.md)-- [Configure permission classifications in Azure Active Directory](configure-permission-classifications.md)-- [Review permissions granted to applications in Azure Active Directory](manage-application-permissions.md)-- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)-- [Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication](migrate-okta-federation-to-azure-active-directory.md)-- [Tutorial: Migrate Okta sign-on policies to Azure Active Directory Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)-- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)-
-## October 2021
-
-### Updated articles
--- [Manage consent to applications and evaluate consent requests in Azure Active Directory](manage-consent-requests.md)
+- [Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)
+- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)
- [What is application management in Azure Active Directory?](what-is-application-management.md)-- [Configure how end-users consent to applications using Azure Active Directory](configure-user-consent.md)-- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)-- [Assign enterprise application owners](assign-app-owners.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)-- [Secure hybrid access: Secure legacy apps with Azure Active Directory](secure-hybrid-access.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Tutorial: Migrate Okta sign on policies to Azure Active Directory Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)-- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)-- [Manage certificates for federated single sign-on in Azure Active Directory](manage-certificates-for-federated-single-sign-on.md)--
-## September 2021
-
-### New articles
--- [Home Realm Discovery for an application in Azure Active Directory](home-realm-discovery-policy.md)-
-### Updated articles
--- [Assign users and groups to an application in Azure Active Directory](assign-user-or-group-access-portal.md)-- [Configure sign in behavior for an application by using a Home Realm Discovery policy](configure-authentication-for-federated-users-portal.md)-- [Disable how a user signs in for an application in Azure Active Directory](disable-user-sign-in-portal.md)-- [Hide an Enterprise application in Azure Active Directory](hide-application-from-user-portal.md)-- [Enable self-service application assignment in Azure Active Directory](manage-self-service-access.md)-- [Disable auto-acceleration to a federated IDP during user sign-in with Home Realm Discovery policy](prevent-domain-hints-with-home-realm-discovery.md)-- [Manage access to apps in Azure Active Directory](what-is-access-management.md)-- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md)-- [Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication](migrate-okta-federation-to-azure-active-directory.md)-- [Tutorial: Migrate Okta sign-on policies to Azure AD Conditional Access](migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md)-- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)-- [Secure hybrid access with Azure Active Directory partner integrations](secure-hybrid-access-integrations.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Quickstart: Create and assign a user account in Azure Active Directory](add-application-portal-assign-users.md)-- [Quickstart: Configure enterprise application properties in Azure Active Directory](add-application-portal-configure.md)-- [Add an OpenID Connect-based single sign-on application in Azure Active Directory](add-application-portal-setup-oidc-sso.md)-- [Quickstart: Enable single sign-on for an enterprise application in Azure Active Directory](add-application-portal-setup-sso.md)-- [Quickstart: Add an enterprise application in Azure Active Directory](add-application-portal.md)-- [Quickstart: Delete an enterprise application in Azure Active Directory](delete-application-portal.md)-- [Quickstart: View enterprise applications in Azure Active Directory](view-applications-portal.md)-- [Create collections on the My Apps portal](access-panel-collections.md)-- [Manage app consent policies](manage-app-consent-policies.md)-- [Add linked single sign-on to an application in Azure Active Directory](configure-linked-sign-on.md)-- [Add password-based single sign-on to an application in Azure Active Directory](configure-password-single-sign-on-non-gallery-applications.md)-- [Plan a single sign-on deployment in Azure Active Directory](plan-sso-deployment.md)-- [What is single sign-on in Azure Active Directory?](what-is-single-sign-on.md)--
-## August 2021
-
-### New articles
--- [Protecting against consent phishing](protect-against-consent-phishing.md)-
-### Updated articles
--- [Configure permission classifications](configure-permission-classifications.md)-- [Configure group owner consent to apps accessing group data](configure-user-consent-groups.md)-- [Take action on over privileged or suspicious applications in Azure Active Directory](manage-application-permissions.md)-- [Managing consent to applications and evaluating consent requests](manage-consent-requests.md)-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)-- [Quickstart: Add an application to your tenant](add-application-portal.md)-- [Assign users and groups to an enterprise application](assign-user-or-group-access-portal.md)-- [Managing access to apps](what-is-access-management.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)-- [Advanced certificate signing options in a SAML token](certificate-signing-options.md)-- [Create collections on the My Apps portal](access-panel-collections.md)--
-## July 2021
-
-### Updated articles
--- [Create collections on the My Apps portal](access-panel-collections.md)-- [Quickstart: Assign users to an application](add-application-portal-assign-users.md)-- [Quickstart: Configure properties for an application](add-application-portal-configure.md)-- [Quickstart: Set up OIDC-based single sign-on for an application](add-application-portal-setup-oidc-sso.md)-- [Quickstart: Set up SAML-based single sign-on for an application](add-application-portal-setup-sso.md)-- [Quickstart: Add an application to your tenant](add-application-portal.md)-- [Quickstart: Delete an application from your tenant](delete-application-portal.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Quickstart: View the list of applications that are using your Azure Active Directory (Azure AD) tenant for identity management](view-applications-portal.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)--
-## June 2021
-
-### Updated articles
--- [Quickstart: Add an application to your Azure Active Directory (Azure AD) tenant](add-application-portal.md)-- [Configure group owner consent to apps accessing group data](configure-user-consent-groups.md)-- [Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-configure.md)-- [Manage user assignment for an app in Azure Active Directory](assign-user-or-group-access-portal.md)-- [Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md)-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)-- [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)--
-## May 2021
-
-### Updated articles
--- [Azure Active Directory application management: What's new](whats-new-docs.md)-
-## April 2021
-
-### New articles
--- [Active Directory (Azure AD) Application Proxy frequently asked questions](../app-proxy/application-proxy-faq.yml)-
-### Updated articles
--- [Application management best practices](application-management-fundamentals.md)-- [Application management documentation](index.yml)-- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md)-- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)-- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)-- [Single sign-on options in Azure AD](sso-options.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Header-based authentication for single sign-on with Application Proxy and PingAccess](../app-proxy/application-proxy-ping-access-publishing-guide.md)-- [Managing consent to applications and evaluating consent requests](manage-consent-requests.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)-- [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md)-- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)-
-## March 2021
-
-### New articles
--- [Azure Active Directory (Azure AD) Application Management certificates frequently asked questions](application-management-certs-faq.md)-- [Azure Active Directory PowerShell examples for Application Management](app-management-powershell-samples.md)-- [Disable auto-acceleration to a federated IDP during user sign-in with Home Realm Discovery policy](prevent-domain-hints-with-home-realm-discovery.md)-
-### Updated articles
--- [Plan Azure Active Directory My Apps configuration](my-apps-deployment-plan.md)-- [Integrating Azure Active Directory with applications getting started guide](plan-an-application-integration.md)-- [Integrate with SharePoint (SAML)](../app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md)-- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)-- [Use the AD FS application activity report to migrate applications to Azure AD](migrate-adfs-application-activity.md)-- [Plan a single sign-on deployment](plan-sso-deployment.md)-- [Azure Active Directory PowerShell examples for Application Management](app-management-powershell-samples.md)-- [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md)-- [Quickstart: Set up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant](add-application-portal-setup-sso.md)-- [Azure Active Directory application management: What's new](whats-new-docs.md)-- [Active Directory (Azure AD) Application Proxy frequently asked questions](../app-proxy/application-proxy-faq.yml)-- [Troubleshoot problems signing in to an application from Azure AD My Apps](application-sign-in-other-problem-access-panel.md)-- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)-- [Optimize traffic flow with Azure Active Directory Application Proxy](../app-proxy/application-proxy-network-topology.md)-- [Azure AD Application Proxy: Version release history](../app-proxy/application-proxy-release-version-history.md)-- [Configure Azure Active Directory sign in behavior for an application by using a Home Realm Discovery policy](configure-authentication-for-federated-users-portal.md)-- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md)-
-## February 2021
-
-### New articles
--- [Integrate with SharePoint (SAML)](../app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md)-- [Migrate application authentication to Azure Active Directory](migrate-application-authentication-to-azure-active-directory.md)-
-### Updated articles
--- [Integrate with SharePoint (SAML)](../app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md)-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)-- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](migrate-adfs-apps-to-azure.md)-- [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)-- [Use tenant restrictions to manage access to SaaS cloud applications](tenant-restrictions.md)
active-directory How To Assign Managed Identity Via Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md
For example, if the policy in this document is updating the managed identities o
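As a rough, hedged sketch of how such a policy might be assigned with a system-assigned identity (required for its deployIfNotExists remediation to run) using Az PowerShell: the policy display name below is a placeholder, and the `-AssignIdentity`/`-Location` parameters should be checked against your Az.Resources version.

```powershell
# Sketch: assign a deployIfNotExists policy with a system-assigned managed
# identity so remediation tasks can run. The display name is hypothetical.
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq "<display name of the managed identity policy>"
}

New-AzPolicyAssignment -Name "assign-managed-identity" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition `
    -AssignIdentity `
    -Location "eastus"  # the identity must be created in a region
```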
## Next steps -- [Deploy Azure Monitoring Agent](../../azure-monitor/overview.md)
+- [Deploy Azure Monitor Agent](../../azure-monitor/agents/azure-monitor-agent-manage.md#using-azure-policy)
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md
Previously updated : 03/22/2022 Last updated : 06/20/2022
For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr
## Azure portal
-You can add users, groups, or devices to administrative units using the Azure portal. You can also add users in a bulk operation.
+You can add users, groups, or devices to administrative units using the Azure portal. You can also add users in a bulk operation or create a new group in an administrative unit.
### Add a single user, group, or device to administrative units
You can add users, groups, or devices to administrative units using the Azure po
1. Select **Azure Active Directory**.
-1. Select **Administrative units** and then select the administrative unit that you want to add users, groups, or devices to.
+1. Select **Administrative units** and then select the administrative unit you want to add users, groups, or devices to.
1. Select one of the following:
You can add users, groups, or devices to administrative units using the Azure po
1. Select **Azure Active Directory**.
-1. Select **Administrative units** and then select the administrative unit that you want to add users to.
+1. Select **Administrative units** and then select the administrative unit you want to add users to.
1. Select the administrative unit to which you want to add users.
You can add users, groups, or devices to administrative units using the Azure po
1. Select **Submit**.
+### Create a new group in an administrative unit
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory**.
+
+1. Select **Administrative units** and then select the administrative unit you want to create a new group in.
+
+1. Select **Groups**.
+
+1. Select **New group** and complete the steps to create a new group.
+
+ ![Screenshot of the Administrative units page for creating a new group in an administrative unit.](./media/admin-units-members-add/admin-unit-create-group.png)
+ ## PowerShell Use the [Add-AzureADMSAdministrativeUnitMember](/powershell/module/azuread/add-azureadmsadministrativeunitmember) command to add users or groups to an administrative unit. Use the [Add-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/add-azureadmsadministrativeunitmember?view=azureadps-2.0-preview&preserve-view=true) command to add devices to an administrative unit.
+Use the [New-AzureADMSAdministrativeUnitMember (Preview)](/powershell/module/azuread/new-azureadmsadministrativeunitmember) command to create a new group in an administrative unit. Currently, only group creation is supported with this command.
+ ### Add users to an administrative unit ```powershell
$deviceObj = Get-AzureADDevice -Filter "displayname eq 'TestDevice'"
Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.Id -RefObjectId $deviceObj.ObjectId ```
+### Create a new group in an administrative unit
+
+```powershell
+$exampleGroup = New-AzureADMSAdministrativeUnitMember -Id "<admin unit object id>" -OdataType "Microsoft.Graph.Group" -DisplayName "<Example group name>" -Description "<Example group description>" -MailEnabled $True -MailNickname "<examplegroup>" -SecurityEnabled $False -GroupTypes @("Unified")
+```
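To confirm that the group was created inside the administrative unit, listing the unit's members is one quick check; this is a sketch that assumes the `Get-AzureADMSAdministrativeUnitMember` cmdlet from the same module:

```powershell
# Sketch: list the members of the administrative unit; the new group's
# object ID should appear in the output.
Get-AzureADMSAdministrativeUnitMember -Id "<admin unit object id>"
```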
+ ## Microsoft Graph API Use the [Add a member](/graph/api/administrativeunit-post-members) API to add users or groups to an administrative unit.
-Use the [Add a member (Beta)](/graph/api/administrativeunit-post-members?view=graph-rest-beta&preserve-view=true) API to add devices to an administrative unit.
-
+Use the [Add a member (Beta)](/graph/api/administrativeunit-post-members?view=graph-rest-beta&preserve-view=true) API to add devices to an administrative unit or create a new group in an administrative unit.
### Add users to an administrative unit
Body
} ```
+### Create a new group in an administrative unit
+
+Request
+
+```http
+POST https://graph.microsoft.com/beta/administrativeUnits/{admin-unit-id}/members/
+```
+
+Body
+
+```http
+{
+ "@odata.type": "#Microsoft.Graph.Group",
+ "description": "{Example group description}",
+ "displayName": "{Example group name}",
+ "groupTypes": [
+ "Unified"
+ ],
+ "mailEnabled": true,
+ "mailNickname": "{examplegroup}",
+ "securityEnabled": false
+}
+```
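If the request succeeds, the service typically responds with `201 Created` and the new group object; unlike the add-member call, the group is created directly inside the administrative unit rather than added after the fact.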
+ ## Next steps - [Administrative units in Azure Active Directory](administrative-units.md)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Previously updated : 05/24/2022 Last updated : 06/21/2022
The following sections describe current support for administrative unit scenario
| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center | | | :: | :: | :: | | Administrative unit-scoped management of user properties, passwords | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Administrative unit-scoped management of user licenses | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Administrative unit-scoped management of user licenses | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Administrative unit-scoped blocking and unblocking of user sign-ins | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Administrative unit-scoped management of user multi-factor authentication credentials | :heavy_check_mark: | :heavy_check_mark: | :x: |
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
Previously updated : 04/26/2022 Last updated : 06/20/2022
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Configure application proxy app | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Configure connector group properties | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Create application registration when ability is disabled for all users | [Application Developer](../roles/permissions-reference.md#application-developer) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Create connector group | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Delete connector group | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Disable application proxy | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Download connector service | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Read all configuration | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
+> | Configure application proxy app | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Configure connector group properties | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Create application registration when ability is disabled for all users | [Application Developer](permissions-reference.md#application-developer) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator) |
+> | Create connector group | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Delete connector group | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Disable application proxy | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Download connector service | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Read all configuration | [Application Administrator](permissions-reference.md#application-administrator) | |
## External Identities/B2C
You can further restrict permissions by assigning roles at smaller scopes or by
> | Task | Least privileged role | Additional roles | > | - | | - | > | Create Azure AD B2C directories | [All non-guest users](../fundamentals/users-default-permissions.md) | |
-> | Create B2C applications | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Create enterprise applications | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) | [Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Create, read, update, and delete B2C policies | [B2C IEF Policy Administrator](../roles/permissions-reference.md#b2c-ief-policy-administrator) | |
-> | Create, read, update, and delete identity providers | [External Identity Provider Administrator](../roles/permissions-reference.md#external-identity-provider-administrator) | |
-> | Create, read, update, and delete password reset user flows | [External ID User Flow Administrator](../roles/permissions-reference.md#external-id-user-flow-administrator) | |
-> | Create, read, update, and delete profile editing user flows | [External ID User Flow Administrator](../roles/permissions-reference.md#external-id-user-flow-administrator) | |
-> | Create, read, update, and delete sign-in user flows | [External ID User Flow Administrator](../roles/permissions-reference.md#external-id-user-flow-administrator) | |
-> | Create, read, update, and delete sign-up user flow | [External ID User Flow Administrator](../roles/permissions-reference.md#external-id-user-flow-administrator) | |
-> | Create, read, update, and delete user attributes | [External ID User Flow Attribute Administrator](../roles/permissions-reference.md#external-id-user-flow-attribute-administrator) | |
-> | Create, read, update, and delete users | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Configure B2B external collaboration settings | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Read all configuration | [Global Reader](../roles/permissions-reference.md#global-reader) | |
-> | [Read B2C audit logs](../../active-directory-b2c/faq.yml) | [Global Reader](../roles/permissions-reference.md#global-reader) | |
+> | Create B2C applications | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Create enterprise applications | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator) | [Application Administrator](permissions-reference.md#application-administrator) |
+> | Create, read, update, and delete B2C policies | [B2C IEF Policy Administrator](permissions-reference.md#b2c-ief-policy-administrator) | |
+> | Create, read, update, and delete identity providers | [External Identity Provider Administrator](permissions-reference.md#external-identity-provider-administrator) | |
+> | Create, read, update, and delete password reset user flows | [External ID User Flow Administrator](permissions-reference.md#external-id-user-flow-administrator) | |
+> | Create, read, update, and delete profile editing user flows | [External ID User Flow Administrator](permissions-reference.md#external-id-user-flow-administrator) | |
+> | Create, read, update, and delete sign-in user flows | [External ID User Flow Administrator](permissions-reference.md#external-id-user-flow-administrator) | |
+> | Create, read, update, and delete sign-up user flow | [External ID User Flow Administrator](permissions-reference.md#external-id-user-flow-administrator) | |
+> | Create, read, update, and delete user attributes | [External ID User Flow Attribute Administrator](permissions-reference.md#external-id-user-flow-attribute-administrator) | |
+> | Create, read, update, and delete users | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Configure B2B external collaboration settings | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | |
+> | [Read B2C audit logs](../../active-directory-b2c/faq.yml) | [Global Reader](permissions-reference.md#global-reader) | |
> [!NOTE] > Azure AD B2C Global Administrators do not have the same permissions as Azure AD Global Administrators. If you have Azure AD B2C Global Administrator privileges, make sure that you are in an Azure AD B2C directory and not an Azure AD directory.
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Configure company branding | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Read all configuration | [Directory Readers](../roles/permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
+> | Configure company branding | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Read all configuration | [Directory Readers](permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
## Company properties > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Configure company properties | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
+> | Configure company properties | [Global Administrator](permissions-reference.md#global-administrator) | |
## Connect > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Passthrough authentication | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Read all configuration | [Global Reader](../roles/permissions-reference.md#global-reader) | [Global Administrator](../roles/permissions-reference.md#global-administrator) |
-> | Seamless single sign-on | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
+> | Passthrough authentication | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | [Global Administrator](permissions-reference.md#global-administrator) |
+> | Seamless single sign-on | [Global Administrator](permissions-reference.md#global-administrator) | |
## Cloud Provisioning > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Passthrough authentication | [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) | |
-> | Read all configuration | [Global Reader](../roles/permissions-reference.md#global-reader) | [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) |
-> | Seamless single sign-on | [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) | |
+> | Passthrough authentication | [Hybrid Identity Administrator](permissions-reference.md#hybrid-identity-administrator) | |
+> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | [Hybrid Identity Administrator](permissions-reference.md#hybrid-identity-administrator) |
+> | Seamless single sign-on | [Hybrid Identity Administrator](permissions-reference.md#hybrid-identity-administrator) | |
## Connect Health
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Manage domains | [Domain Name Administrator](../roles/permissions-reference.md#domain-name-administrator) | |
-> | Read all configuration | [Directory Readers](../roles/permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
+> | Manage domains | [Domain Name Administrator](permissions-reference.md#domain-name-administrator) | |
+> | Read all configuration | [Directory Readers](permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
## Domain Services > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Create Azure AD Domain Services instance | [Application Administrator](../roles/permissions-reference.md#application-administrator)<br>[Groups Administrator](../roles/permissions-reference.md#groups-administrator)<br> [Domain Services Contributor](../../role-based-access-control/built-in-roles.md#domain-services-contributor)| |
+> | Create Azure AD Domain Services instance | [Application Administrator](permissions-reference.md#application-administrator)<br>[Groups Administrator](permissions-reference.md#groups-administrator)<br> [Domain Services Contributor](../../role-based-access-control/built-in-roles.md#domain-services-contributor)| |
> | Perform all Azure AD Domain Services tasks | [AAD DC Administrators group](../../active-directory-domain-services/tutorial-create-management-vm.md#administrative-tasks-you-can-perform-on-a-managed-domain) | | > | Read all configuration | Reader on Azure subscription containing AD DS service | |
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Disable device | [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator) | |
-> | Enable device | [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator) | |
+> | Disable device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | |
+> | Enable device | [Cloud Device Administrator](permissions-reference.md#cloud-device-administrator) | |
> | Read basic configuration | [Default user role](../fundamentals/users-default-permissions.md) | |
-> | Read BitLocker keys | [Security Reader](../roles/permissions-reference.md#security-reader) | [Password Administrator](../roles/permissions-reference.md#password-administrator)<br/>[Security Administrator](../roles/permissions-reference.md#security-administrator) |
+> | Read BitLocker keys | [Security Reader](permissions-reference.md#security-reader) | [Password Administrator](permissions-reference.md#password-administrator)<br/>[Security Administrator](permissions-reference.md#security-administrator) |
## Enterprise applications > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Consent to any delegated permissions | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) | [Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Consent to application permissions not including Microsoft Graph | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) | [Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Consent to application permissions to Microsoft Graph | [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) | |
+> | Consent to any delegated permissions | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator) | [Application Administrator](permissions-reference.md#application-administrator) |
+> | Consent to application permissions not including Microsoft Graph | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator) | [Application Administrator](permissions-reference.md#application-administrator) |
+> | Consent to application permissions to Microsoft Graph | [Privileged Role Administrator](permissions-reference.md#privileged-role-administrator) | |
> | Consent to applications accessing own data | [Default user role](../fundamentals/users-default-permissions.md) | |
-> | Create enterprise application | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator) | [Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Manage Application Proxy | [Application Administrator](../roles/permissions-reference.md#application-administrator) | |
-> | Manage user settings | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Read access review of a group or of an app | [Security Reader](../roles/permissions-reference.md#security-reader) | [Security Administrator](../roles/permissions-reference.md#security-administrator)<br/>[User Administrator](../roles/permissions-reference.md#user-administrator) |
+> | Create enterprise application | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator) | [Application Administrator](permissions-reference.md#application-administrator) |
+> | Manage Application Proxy | [Application Administrator](permissions-reference.md#application-administrator) | |
+> | Manage user settings | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Read access review of a group or of an app | [Security Reader](permissions-reference.md#security-reader) | [Security Administrator](permissions-reference.md#security-administrator)<br/>[User Administrator](permissions-reference.md#user-administrator) |
> | Read all configuration | [Default user role](../fundamentals/users-default-permissions.md) | |
-> | Update enterprise application assignments | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator)<br/>[User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Update enterprise application owners | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Update enterprise application properties | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Update enterprise application provisioning | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Update enterprise application self-service | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
-> | Update single sign-on properties | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](../roles/permissions-reference.md#application-administrator) |
+> | Update enterprise application assignments | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator)<br/>[User Administrator](permissions-reference.md#user-administrator) |
+> | Update enterprise application owners | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator) |
+> | Update enterprise application properties | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator) |
+> | Update enterprise application provisioning | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator) |
+> | Update enterprise application self-service | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator) |
+> | Update single sign-on properties | [Enterprise application owner](../fundamentals/users-default-permissions.md#object-ownership) | [Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Application Administrator](permissions-reference.md#application-administrator) |
## Entitlement management > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Add resources to a catalog | [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator) | With entitlement management, you can delegate this task to the [catalog owner](../governance/entitlement-management-catalog-create.md#add-more-catalog-owners) |
-> | Add SharePoint Online sites to catalog | [SharePoint Administrator](../roles/permissions-reference.md#sharepoint-administrator) | |
+> | Add resources to a catalog | [Identity Governance Administrator](permissions-reference.md#identity-governance-administrator) | With entitlement management, you can delegate this task to the [catalog owner](../governance/entitlement-management-catalog-create.md#add-more-catalog-owners) |
+> | Add SharePoint Online sites to catalog | [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) | |
## Groups > [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Assign license | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Create group | [Groups Administrator](../roles/permissions-reference.md#groups-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Create, update, or delete access review of a group or of an app | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Manage group expiration | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Manage group settings | [Groups Administrator](../roles/permissions-reference.md#groups-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Read all configuration (except hidden membership) | [Directory Readers](../roles/permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
-> | Read hidden membership | Group member | [Group owner](../fundamentals/users-default-permissions.md#object-ownership)<br/>[Password Administrator](../roles/permissions-reference.md#password-administrator)<br/>[Exchange Administrator](../roles/permissions-reference.md#exchange-administrator)<br/>[SharePoint Administrator](../roles/permissions-reference.md#sharepoint-administrator)<br/>[Teams Administrator](../roles/permissions-reference.md#teams-administrator)<br/>[User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Read membership of groups with hidden membership | [Helpdesk Administrator](../roles/permissions-reference.md#helpdesk-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator)<br/>[Teams Administrator](../roles/permissions-reference.md#teams-administrator) |
-> | Revoke license | [License Administrator](../roles/permissions-reference.md#license-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Update group membership | [Group owner](../fundamentals/users-default-permissions.md#object-ownership) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Update group owners | [Group owner](../fundamentals/users-default-permissions.md#object-ownership) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Update group properties | [Group owner](../fundamentals/users-default-permissions.md#object-ownership) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Delete group | [Groups Administrator](../roles/permissions-reference.md#groups-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
+> | Assign license | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Create group | [Groups Administrator](permissions-reference.md#groups-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Create, update, or delete access review of a group or of an app | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Manage group expiration | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Manage group settings | [Groups Administrator](permissions-reference.md#groups-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Read all configuration (except hidden membership) | [Directory Readers](permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
+> | Read hidden membership | Group member | [Group owner](../fundamentals/users-default-permissions.md#object-ownership)<br/>[Password Administrator](permissions-reference.md#password-administrator)<br/>[Exchange Administrator](permissions-reference.md#exchange-administrator)<br/>[SharePoint Administrator](permissions-reference.md#sharepoint-administrator)<br/>[Teams Administrator](permissions-reference.md#teams-administrator)<br/>[User Administrator](permissions-reference.md#user-administrator) |
+> | Read membership of groups with hidden membership | [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | [User Administrator](permissions-reference.md#user-administrator)<br/>[Teams Administrator](permissions-reference.md#teams-administrator) |
+> | Revoke license | [License Administrator](permissions-reference.md#license-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Update group membership | [Group owner](../fundamentals/users-default-permissions.md#object-ownership) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Update group owners | [Group owner](../fundamentals/users-default-permissions.md#object-ownership) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Update group properties | [Group owner](../fundamentals/users-default-permissions.md#object-ownership) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Delete group | [Groups Administrator](permissions-reference.md#groups-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
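
The "Create group" task above, for example, maps to a single call in the AzureAD PowerShell module. A minimal sketch, assuming the signed-in account holds the Groups Administrator role; the display name and mail nickname are placeholder values:

```powershell
# Requires the AzureAD module: Install-Module AzureAD
Connect-AzureAD

# Create a security group; Groups Administrator is the least privileged role.
New-AzureADGroup -DisplayName "Marketing Team" `
    -MailEnabled $false `
    -SecurityEnabled $true `
    -MailNickName "marketingteam"
```
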
## Identity Protection

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Configure alert notifications| [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Configure and enable or disable MFA policy| [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Configure and enable or disable sign-in risk policy| [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Configure and enable or disable user risk policy | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Configure weekly digests | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Dismiss all risk detections | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Fix or dismiss vulnerability | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Read all configuration | [Security Reader](../roles/permissions-reference.md#security-reader) | |
-> | Read all risk detections | [Security Reader](../roles/permissions-reference.md#security-reader) | |
-> | Read vulnerabilities | [Security Reader](../roles/permissions-reference.md#security-reader) | |
+> | Configure alert notifications| [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Configure and enable or disable MFA policy| [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Configure and enable or disable sign-in risk policy| [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Configure and enable or disable user risk policy | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Configure weekly digests | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Dismiss all risk detections | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Fix or dismiss vulnerability | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | |
+> | Read all risk detections | [Security Reader](permissions-reference.md#security-reader) | |
+> | Read vulnerabilities | [Security Reader](permissions-reference.md#security-reader) | |
## Licenses

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Assign license | [License Administrator](../roles/permissions-reference.md#license-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Read all configuration | [Directory Readers](../roles/permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
-> | Revoke license | [License Administrator](../roles/permissions-reference.md#license-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Try or buy subscription | [Billing Administrator](../roles/permissions-reference.md#billing-administrator) | |
+> | Assign license | [License Administrator](permissions-reference.md#license-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Read all configuration | [Directory Readers](permissions-reference.md#directory-readers) | [Default user role](../fundamentals/users-default-permissions.md) |
+> | Revoke license | [License Administrator](permissions-reference.md#license-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Try or buy subscription | [Billing Administrator](permissions-reference.md#billing-administrator) | |
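
As an illustration of the "Assign license" task, here is a minimal sketch using the AzureAD PowerShell module; the UPN and SKU part number are placeholders, and the signed-in account is assumed to hold the License Administrator role:

```powershell
Connect-AzureAD

# A usage location must be set before a license can be assigned.
Set-AzureADUser -ObjectId "user@contoso.com" -UsageLocation "US"

# Look up the SKU to assign; "ENTERPRISEPREMIUM" is a placeholder part number.
$sku = Get-AzureADSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPREMIUM" }

$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = $sku.SkuId
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license

Set-AzureADUserLicense -ObjectId "user@contoso.com" -AssignedLicenses $licenses
```
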
## Monitoring - Audit logs

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Read audit logs | [Reports Reader](../roles/permissions-reference.md#reports-reader) | [Security Reader](../roles/permissions-reference.md#security-reader)<br/>[Security Administrator](../roles/permissions-reference.md#security-administrator) |
+> | Read audit logs | [Reports Reader](permissions-reference.md#reports-reader) | [Security Reader](permissions-reference.md#security-reader)<br/>[Security Administrator](permissions-reference.md#security-administrator) |
## Monitoring - Sign-ins

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Read sign-in logs | [Reports Reader](../roles/permissions-reference.md#reports-reader) | [Security Reader](../roles/permissions-reference.md#security-reader)<br/>[Security Administrator](../roles/permissions-reference.md#security-administrator)<br/> [Global Reader](../roles/permissions-reference.md#global-reader) |
+> | Read sign-in logs | [Reports Reader](permissions-reference.md#reports-reader) | [Security Reader](permissions-reference.md#security-reader)<br/>[Security Administrator](permissions-reference.md#security-administrator)<br/> [Global Reader](permissions-reference.md#global-reader) |
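
Both log types can also be pulled programmatically. A minimal sketch using the Microsoft Graph PowerShell SDK, assuming the signed-in account holds at least the Reports Reader role:

```powershell
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All"

# Ten most recent directory audit events
Get-MgAuditLogDirectoryAudit -Top 10

# Ten most recent sign-in events
Get-MgAuditLogSignIn -Top 10
```
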
## Multi-factor authentication

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Delete all existing app passwords generated by the selected users | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | [Disable per-user MFA](../authentication/howto-mfa-userstates.md) | [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator) (via PowerShell) | [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) (via PowerShell) |
-> | [Enable per-user MFA](../authentication/howto-mfa-userstates.md) | [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator) (via PowerShell) | [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) (via PowerShell) |
-> | Manage MFA service settings | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Require selected users to provide contact methods again | [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator) | |
-> | Restore multi-factor authentication on all remembered devices  | [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator) | |
+> | Delete all existing app passwords generated by the selected users | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | [Disable per-user MFA](../authentication/howto-mfa-userstates.md) | [Authentication Administrator](permissions-reference.md#authentication-administrator) (via PowerShell) | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) (via PowerShell) |
+> | [Enable per-user MFA](../authentication/howto-mfa-userstates.md) | [Authentication Administrator](permissions-reference.md#authentication-administrator) (via PowerShell) | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) (via PowerShell) |
+> | Manage MFA service settings | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Require selected users to provide contact methods again | [Authentication Administrator](permissions-reference.md#authentication-administrator) | |
+> | Restore multi-factor authentication on all remembered devices  | [Authentication Administrator](permissions-reference.md#authentication-administrator) | |
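
The per-user MFA rows above are PowerShell-only tasks. A minimal sketch using the MSOnline module, assuming the signed-in account holds the Authentication Administrator role (Privileged Authentication Administrator for admin targets); the UPN is a placeholder:

```powershell
Connect-MsolService

# Enable per-user MFA for one user
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"
Set-MsolUser -UserPrincipalName "user@contoso.com" -StrongAuthenticationRequirements @($mfa)

# Disable per-user MFA by clearing the requirements
Set-MsolUser -UserPrincipalName "user@contoso.com" -StrongAuthenticationRequirements @()
```
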
## MFA Server

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Block/unblock users | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure account lockout | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure caching rules | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure fraud alert | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure notifications | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure one-time bypass | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure phone call settings | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure providers | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Configure server settings | [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) | |
-> | Read activity report | [Global Reader](../roles/permissions-reference.md#global-reader) | |
-> | Read all configuration | [Global Reader](../roles/permissions-reference.md#global-reader) | |
-> | Read server status | [Global Reader](../roles/permissions-reference.md#global-reader) | |
+> | Block/unblock users | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure account lockout | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure caching rules | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure fraud alert | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure notifications | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure one-time bypass | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure phone call settings | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure providers | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Configure server settings | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
+> | Read activity report | [Global Reader](permissions-reference.md#global-reader) | |
+> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | |
+> | Read server status | [Global Reader](permissions-reference.md#global-reader) | |
## Organizational relationships

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Manage identity providers | [External Identity Provider Administrator](../roles/permissions-reference.md#external-identity-provider-administrator) | |
-> | Manage settings | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Manage terms of use | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Read all configuration | [Global Reader](../roles/permissions-reference.md#global-reader) | |
+> | Manage identity providers | [External Identity Provider Administrator](permissions-reference.md#external-identity-provider-administrator) | |
+> | Manage settings | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Manage terms of use | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | |
## Password reset

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Configure authentication methods | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Configure customization | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Configure notification | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Configure on-premises integration | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Configure password reset properties | [User Administrator](../roles/permissions-reference.md#user-administrator) | [Global Administrator](../roles/permissions-reference.md#global-administrator) |
-> | Configure registration | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Read all configuration | [Security Administrator](../roles/permissions-reference.md#security-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
+> | Configure authentication methods | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Configure customization | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Configure notification | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Configure on-premises integration | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Configure password reset properties | [User Administrator](permissions-reference.md#user-administrator) | [Global Administrator](permissions-reference.md#global-administrator) |
+> | Configure registration | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Read all configuration | [Security Administrator](permissions-reference.md#security-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
## Privileged identity management

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Assign users to roles | [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) | |
-> | Configure role settings | [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) | |
-> | View audit activity | [Security Reader](../roles/permissions-reference.md#security-reader) | |
-> | View role memberships | [Security Reader](../roles/permissions-reference.md#security-reader) | |
+> | Assign users to roles | [Privileged Role Administrator](permissions-reference.md#privileged-role-administrator) | |
+> | Configure role settings | [Privileged Role Administrator](permissions-reference.md#privileged-role-administrator) | |
+> | View audit activity | [Security Reader](permissions-reference.md#security-reader) | |
+> | View role memberships | [Security Reader](permissions-reference.md#security-reader) | |
## Roles and administrators

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Manage role assignments | [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) | |
-> | Read access review of an Azure AD role | [Security Reader](../roles/permissions-reference.md#security-reader) | [Security Administrator](../roles/permissions-reference.md#security-administrator)<br/>[Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) |
+> | Manage role assignments | [Privileged Role Administrator](permissions-reference.md#privileged-role-administrator) | |
+> | Read access review of an Azure AD role | [Security Reader](permissions-reference.md#security-reader) | [Security Administrator](permissions-reference.md#security-administrator)<br/>[Privileged Role Administrator](permissions-reference.md#privileged-role-administrator) |
> | Read all configuration | [Default user role](../fundamentals/users-default-permissions.md) | |

## Security - Authentication methods
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Configure authentication methods | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Configure password protection | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Configure smart lockout | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Read all configuration | [Global Reader](../roles/permissions-reference.md#global-reader) | |
+> | Configure authentication methods | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Configure password protection | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Configure smart lockout | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Read all configuration | [Global Reader](permissions-reference.md#global-reader) | |
## Security - Conditional Access

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Configure MFA trusted IP addresses | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | |
-> | Create custom controls | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Create named locations | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Create policies | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Create terms of use | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Create VPN connectivity certificate | [Global Administrator](../roles/permissions-reference.md#global-administrator) | &nbsp; |
-> | Delete classic policy | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Delete terms of use | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Delete VPN connectivity certificate | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Disable classic policy | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Manage custom controls | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Manage named locations | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Manage terms of use | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Read all configuration | [Security Reader](../roles/permissions-reference.md#security-reader) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Read named locations | [Security Reader](../roles/permissions-reference.md#security-reader) | [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator)<br/>[Security Administrator](../roles/permissions-reference.md#security-administrator) |
+> | Configure MFA trusted IP addresses | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | |
+> | Create custom controls | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Create named locations | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Create policies | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Create terms of use | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Create VPN connectivity certificate | [Global Administrator](permissions-reference.md#global-administrator) | &nbsp; |
+> | Delete classic policy | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Delete terms of use | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Delete VPN connectivity certificate | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Disable classic policy | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Manage custom controls | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Manage named locations | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Manage terms of use | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Read named locations | [Security Reader](permissions-reference.md#security-reader) | [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator)<br/>[Security Administrator](permissions-reference.md#security-administrator) |
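
The "Create named locations" task, for example, maps to a single Microsoft Graph call. A minimal sketch using the Microsoft Graph PowerShell SDK; the location name and CIDR range are placeholders, and the signed-in account is assumed to hold the Conditional Access Administrator role:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Define a trusted IP-based named location
$params = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Corporate HQ"
    isTrusted     = $true
    ipRanges      = @(
        @{
            "@odata.type" = "#microsoft.graph.iPv4CidrRange"
            cidrAddress   = "203.0.113.0/24"
        }
    )
}
New-MgIdentityConditionalAccessNamedLocation -BodyParameter $params
```
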
## Security - Identity security score

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Read all configuration | [Security Reader](../roles/permissions-reference.md#security-reader) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Read security score | [Security Reader](../roles/permissions-reference.md#security-reader) | [Security Administrator](../roles/permissions-reference.md#security-administrator) |
-> | Update event status | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
+> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Read security score | [Security Reader](permissions-reference.md#security-reader) | [Security Administrator](permissions-reference.md#security-administrator) |
+> | Update event status | [Security Administrator](permissions-reference.md#security-administrator) | |
## Security - Risky sign-ins

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Read all configuration | [Security Reader](../roles/permissions-reference.md#security-reader) | |
-> | Read risky sign-ins | [Security Reader](../roles/permissions-reference.md#security-reader) | |
+> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | |
+> | Read risky sign-ins | [Security Reader](permissions-reference.md#security-reader) | |
## Security - Users flagged for risk

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Dismiss all events | [Security Administrator](../roles/permissions-reference.md#security-administrator) | |
-> | Read all configuration | [Security Reader](../roles/permissions-reference.md#security-reader) | |
-> | Read users flagged for risk | [Security Reader](../roles/permissions-reference.md#security-reader) | |
+> | Dismiss all events | [Security Administrator](permissions-reference.md#security-administrator) | |
+> | Read all configuration | [Security Reader](permissions-reference.md#security-reader) | |
+> | Read users flagged for risk | [Security Reader](permissions-reference.md#security-reader) | |
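
The read tasks in the two risk tables above can also be performed through Microsoft Graph. A minimal sketch using the Microsoft Graph PowerShell SDK, assuming the signed-in account holds at least the Security Reader role:

```powershell
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All","IdentityRiskyUser.Read.All"

# Recent risk detections, which include risky sign-ins
Get-MgRiskDetection -Top 10

# Users currently flagged as high risk
Get-MgRiskyUser -Filter "riskLevel eq 'high'"
```
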
+
+## Temporary Access Pass (Preview)
+
+> [!div class="mx-tableFixed"]
+> | Task | Least privileged role | Additional roles |
+> | - | - | - |
+> | Create, delete, or view a Temporary Access Pass for any user (except themselves), and configure and manage the authentication method policy | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Create, delete, or view a Temporary Access Pass for admins or members (except themselves) | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) | |
+> | Create, delete, or view a Temporary Access Pass for members (except themselves) | [Authentication Administrator](permissions-reference.md#authentication-administrator) | |
+> | View Temporary Access Pass details for a user (without reading the code itself) | [Global Reader](permissions-reference.md#global-reader) | |
+> | Configure or update the Temporary Access Pass authentication method policy | [Authentication Policy Administrator](permissions-reference.md#authentication-policy-administrator) | |
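
A minimal sketch of creating a Temporary Access Pass with the Microsoft Graph PowerShell SDK, assuming the signed-in account holds the Authentication Administrator role for the target member; the UPN is a placeholder:

```powershell
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

# Create a one-time-use pass that is valid for 60 minutes
New-MgUserAuthenticationTemporaryAccessPassMethod `
    -UserId "user@contoso.com" `
    -LifetimeInMinutes 60 `
    -IsUsableOnce:$true
```
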
## Users

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Add user to directory role | [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) | |
-> | Add user to group | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Assign license | [License Administrator](../roles/permissions-reference.md#license-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Create guest user | [Guest Inviter](../roles/permissions-reference.md#guest-inviter) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Reset guest user invite | [User Administrator](../roles/permissions-reference.md#user-administrator) | [Global Administrator](../roles/permissions-reference.md#global-administrator) |
-> | Create user | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Delete users | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Invalidate refresh tokens of limited admins | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Invalidate refresh tokens of non-admins | [Password Administrator](../roles/permissions-reference.md#password-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Invalidate refresh tokens of privileged admins | [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) | |
+> | Add user to directory role | [Privileged Role Administrator](permissions-reference.md#privileged-role-administrator) | |
+> | Add user to group | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Assign license | [License Administrator](permissions-reference.md#license-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Create guest user | [Guest Inviter](permissions-reference.md#guest-inviter) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Reset guest user invite | [User Administrator](permissions-reference.md#user-administrator) | [Global Administrator](permissions-reference.md#global-administrator) |
+> | Create user | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Delete users | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Invalidate refresh tokens of limited admins | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Invalidate refresh tokens of non-admins | [Password Administrator](permissions-reference.md#password-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Invalidate refresh tokens of privileged admins | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) | |
> | Read basic configuration | [Default user role](../fundamentals/users-default-permissions.md) | |
-> | Reset password for limited admins | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Reset password of non-admins | [Password Administrator](../roles/permissions-reference.md#password-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Reset password of privileged admins | [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) | |
-> | Revoke license | [License Administrator](../roles/permissions-reference.md#license-administrator) | [User Administrator](../roles/permissions-reference.md#user-administrator) |
-> | Update all properties except User Principal Name | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Update User Principal Name for limited admins | [User Administrator](../roles/permissions-reference.md#user-administrator) | |
-> | Update User Principal Name property on privileged admins | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Update user settings | [Global Administrator](../roles/permissions-reference.md#global-administrator) | |
-> | Update Authentication methods | [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator) | [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator)<br/>[Global Administrator](../roles/permissions-reference.md#global-administrator) |
+> | Reset password for limited admins | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Reset password of non-admins | [Password Administrator](permissions-reference.md#password-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Reset password of privileged admins | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) | |
+> | Revoke license | [License Administrator](permissions-reference.md#license-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Update all properties except User Principal Name | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Update User Principal Name for limited admins | [User Administrator](permissions-reference.md#user-administrator) | |
+> | Update User Principal Name property on privileged admins | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Update user settings | [Global Administrator](permissions-reference.md#global-administrator) | |
+> | Update Authentication methods | [Authentication Administrator](permissions-reference.md#authentication-administrator) | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator)<br/>[Global Administrator](permissions-reference.md#global-administrator) |
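
As an example, the "Invalidate refresh tokens" tasks above can be scripted. A minimal sketch using the AzureAD PowerShell module; for a non-admin target, Password Administrator is sufficient per the table, and the UPN is a placeholder:

```powershell
Connect-AzureAD

$user = Get-AzureADUser -ObjectId "user@contoso.com"

# Invalidate all refresh tokens issued to the user, forcing re-authentication
Revoke-AzureADUserAllRefreshToken -ObjectId $user.ObjectId
```
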
## Support

> [!div class="mx-tableFixed"]
> | Task | Least privileged role | Additional roles |
> | - | - | - |
-> | Submit support ticket | [Service Support Administrator](../roles/permissions-reference.md#service-support-administrator) | [Application Administrator](../roles/permissions-reference.md#application-administrator)<br/>[Azure Information Protection Administrator](../roles/permissions-reference.md#azure-information-protection-administrator)<br/>[Billing Administrator](../roles/permissions-reference.md#billing-administrator)<br/>[Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator)<br/>[Compliance Administrator](../roles/permissions-reference.md#compliance-administrator)<br/>[Dynamics 365 Administrator](../roles/permissions-reference.md#dynamics-365-administrator)<br/>[Desktop Analytics Administrator](../roles/permissions-reference.md#desktop-analytics-administrator)<br/>[Exchange Administrator](../roles/permissions-reference.md#exchange-administrator)<br/>[Intune Administrator](../roles/permissions-reference.md#intune-administrator)<br/>[Password Administrator](../roles/permissions-reference.md#password-administrator)<br/>[Power BI Administrator](../roles/permissions-reference.md#power-bi-administrator)<br/>[Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator)<br/>[SharePoint Administrator](../roles/permissions-reference.md#sharepoint-administrator)<br/>[Skype for Business Administrator](../roles/permissions-reference.md#skype-for-business-administrator)<br/>[Teams Administrator](../roles/permissions-reference.md#teams-administrator)<br/>[Teams Communications Administrator](../roles/permissions-reference.md#teams-communications-administrator)<br/>[User Administrator](../roles/permissions-reference.md#user-administrator) |
+> | Submit support ticket | [Service Support Administrator](permissions-reference.md#service-support-administrator) | [Application Administrator](permissions-reference.md#application-administrator)<br/>[Azure Information Protection Administrator](permissions-reference.md#azure-information-protection-administrator)<br/>[Billing Administrator](permissions-reference.md#billing-administrator)<br/>[Cloud Application Administrator](permissions-reference.md#cloud-application-administrator)<br/>[Compliance Administrator](permissions-reference.md#compliance-administrator)<br/>[Dynamics 365 Administrator](permissions-reference.md#dynamics-365-administrator)<br/>[Desktop Analytics Administrator](permissions-reference.md#desktop-analytics-administrator)<br/>[Exchange Administrator](permissions-reference.md#exchange-administrator)<br/>[Intune Administrator](permissions-reference.md#intune-administrator)<br/>[Password Administrator](permissions-reference.md#password-administrator)<br/>[Power BI Administrator](permissions-reference.md#power-bi-administrator)<br/>[Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator)<br/>[SharePoint Administrator](permissions-reference.md#sharepoint-administrator)<br/>[Skype for Business Administrator](permissions-reference.md#skype-for-business-administrator)<br/>[Teams Administrator](permissions-reference.md#teams-administrator)<br/>[Teams Communications Administrator](permissions-reference.md#teams-communications-administrator)<br/>[User Administrator](permissions-reference.md#user-administrator) |
## Next steps
active-directory Aclp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aclp-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ACLP | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ACLP'
description: Learn how to configure single sign-on between Azure Active Directory and ACLP.
Previously updated : 05/07/2019 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory integration with ACLP
+# Tutorial: Azure AD SSO integration with ACLP
-In this tutorial, you learn how to integrate ACLP with Azure Active Directory (Azure AD).
-Integrating ACLP with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ACLP with Azure Active Directory (Azure AD). When you integrate ACLP with Azure AD, you can:
-* You can control in Azure AD who has access to ACLP.
-* You can enable your users to be automatically signed-in to ACLP (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ACLP.
+* Enable your users to be automatically signed-in to ACLP with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with ACLP, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* ACLP single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* ACLP single sign-on enabled subscription.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ACLP supports **SP** initiated SSO
-
-## Adding ACLP from the gallery
-
-To configure the integration of ACLP into Azure AD, you need to add ACLP from the gallery to your list of managed SaaS apps.
-
-**To add ACLP from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
+* ACLP supports **SP** initiated SSO.
-3. To add new application, click **New application** button on the top of dialog.
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![The New application button](common/add-new-app.png)
+## Add ACLP from the gallery
-4. In the search box, type **ACLP**, select **ACLP** from result panel then click **Add** button to add the application.
-
- ![ACLP in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ACLP based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ACLP needs to be established.
-
-To configure and test Azure AD single sign-on with ACLP, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ACLP Single Sign-On](#configure-aclp-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ACLP test user](#create-aclp-test-user)** - to have a counterpart of Britta Simon in ACLP that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of ACLP into Azure AD, you need to add ACLP from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ACLP** in the search box.
+1. Select **ACLP** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with ACLP, perform the following steps:
+## Configure and test Azure AD SSO for ACLP
-1. In the [Azure portal](https://portal.azure.com/), on the **ACLP** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with ACLP using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ACLP.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with ACLP, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ACLP SSO](#configure-aclp-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ACLP test user](#create-aclp-test-user)** - to have a counterpart of B.Simon in ACLP that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **ACLP** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![ACLP Domain and URLs single sign-on information](common/sp-signonurl.png)
+1. On the **Basic SAML Configuration** section, perform the following step:
    In the **Sign-on URL** text box, type a URL using the following pattern:
    `https://access.sans.org/go/<COMPANYNAME>`
To configure Azure AD single sign-on with ACLP, perform the following steps:
> [!NOTE]
> The value is not real. Update the value with the actual Sign-On URL. Contact [ACLP Client support team](mailto:mrichards@sans.org) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
-
- ![The Certificate download link](common/copy-metadataurl.png)
-
-### Configure ACLP Single Sign-On
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
-To configure single sign-on on **ACLP** side, you need to send the **App Federation Metadata Url** to [ACLP support team](mailto:mrichards@sans.org). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon. A scripted equivalent is sketched after these steps.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
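
A minimal sketch of the same user creation with the AzureAD PowerShell module; the UPN and password are placeholder values:

```powershell
Connect-AzureAD

$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = "Str0ng!Passw0rd"   # placeholder; use a generated value

New-AzureADUser -DisplayName "B.Simon" `
    -UserPrincipalName "B.Simon@contoso.com" `
    -MailNickName "B.Simon" `
    -AccountEnabled $true `
    -PasswordProfile $passwordProfile
```
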
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ACLP.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ACLP. A scripted equivalent is sketched after these steps.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ACLP**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ACLP**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
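
A minimal sketch of the same assignment with the AzureAD PowerShell module, assuming ACLP already appears in your enterprise applications; `[Guid]::Empty` assigns the app's default role:

```powershell
Connect-AzureAD

$user = Get-AzureADUser -ObjectId "B.Simon@contoso.com"
$sp   = Get-AzureADServicePrincipal -Filter "displayName eq 'ACLP'"

# Assign the default app role to the test user
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId `
    -ResourceId $sp.ObjectId `
    -Id ([Guid]::Empty)
```
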
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure ACLP SSO
-2. In the applications list, select **ACLP**.
-
- ![The ACLP link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **ACLP** side, you need to send the **App Federation Metadata Url** to the [ACLP support team](mailto:mrichards@sans.org). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create ACLP test user

In this section, you create a user called Britta Simon in ACLP. Work with the [ACLP support team](mailto:mrichards@sans.org) to add the users in the ACLP platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
-When you click the ACLP tile in the Access Panel, you should be automatically signed in to the ACLP for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the ACLP Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the ACLP Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ACLP tile in My Apps, you'll be redirected to the ACLP Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ACLP, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Ekincare Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ekincare-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with eKincare | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with eKincare'
description: Learn how to configure single sign-on between Azure Active Directory and eKincare.
Previously updated : 02/05/2019 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory integration with eKincare
+# Tutorial: Azure AD SSO integration with eKincare
-In this tutorial, you learn how to integrate eKincare with Azure Active Directory (Azure AD).
-Integrating eKincare with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate eKincare with Azure Active Directory (Azure AD). When you integrate eKincare with Azure AD, you can:
-* You can control in Azure AD who has access to eKincare.
-* You can enable your users to be automatically signed-in to eKincare (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to eKincare.
+* Enable your users to be automatically signed-in to eKincare with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with eKincare, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* eKincare single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* eKincare single sign-on enabled subscription.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* eKincare supports **IDP** initiated SSO
+* eKincare supports **IDP** initiated SSO.
-* eKincare supports **Just In Time** user provisioning
+* eKincare supports **Just In Time** user provisioning.
-## Adding eKincare from the gallery
+## Add eKincare from the gallery
To configure the integration of eKincare into Azure AD, you need to add eKincare from the gallery to your list of managed SaaS apps.
-**To add eKincare from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **eKincare**, select **eKincare** from result panel then click **Add** button to add the application.
-
- ![eKincare in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **eKincare** in the search box.
+1. Select **eKincare** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with eKincare based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in eKincare needs to be established.
+## Configure and test Azure AD SSO for eKincare
-To configure and test Azure AD single sign-on with eKincare, you need to complete the following building blocks:
+Configure and test Azure AD SSO with eKincare using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in eKincare.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure eKincare Single Sign-On](#configure-ekincare-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create eKincare test user](#create-ekincare-test-user)** - to have a counterpart of Britta Simon in eKincare that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with eKincare, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure eKincare SSO](#configure-ekincare-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create eKincare test user](#create-ekincare-test-user)** - to have a counterpart of B.Simon in eKincare that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with eKincare, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **eKincare** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **eKincare** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
-
- ![eKincare Domain and URLs single sign-on information](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<instancename>.ekincare.com/`
To configure Azure AD single sign-on with eKincare, perform the following steps:
> [!NOTE]
> These values are not real. Update these values with the actual Identifier and Reply URL. Contact [eKincare Client support team](mailto:tech@ekincare.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. eKincare application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
+1. eKincare application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
- ![Screenshot that shows the "User Attributes" dialog with the "Edit" button selected.](common/edit-attribute.png)
+ ![Screenshot that shows the User Attributes dialog with the edit button selected.](common/edit-attribute.png "Attributes")
-6. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
+1. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
| Name | Source Attribute |
| --- | --- |
To configure Azure AD single sign-on with eKincare, perform the following steps:
a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot that shows the "User claims" dialog with the "Add new claim" and "Save" buttons selected.](common/new-save-attribute.png)
+ ![Screenshot that shows the "User claims" dialog with the "Add new claim" and "Save" buttons selected.](common/new-save-attribute.png "Claims")
- ![image](common/new-attribute-details.png)
+ ![Screenshot that shows the image of eKincare application.](common/new-attribute-details.png "Details")
b. In the **Name** textbox, type the attribute name shown for that row.
To configure Azure AD single sign-on with eKincare, perform the following steps:
g. Click **Save**.
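The claims configured above surface as `Attribute` elements in the assertion that Azure AD issues. If you want to sanity-check them, the following is a minimal sketch, not part of the official tutorial, that decodes a `SAMLResponse` form value captured during a test sign-in (for example, from your browser's developer tools) and prints the claim names and values:

```python
# Minimal sketch: print the claims carried in a captured SAMLResponse.
# "encoded_response" is the base64 SAMLResponse form field from a test sign-in.
import base64
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def print_claims(encoded_response: str) -> None:
    root = ET.fromstring(base64.b64decode(encoded_response))
    for attribute in root.iter(f"{{{SAML}}}Attribute"):
        values = [v.text for v in attribute.findall(f"{{{SAML}}}AttributeValue")]
        print(attribute.get("Name"), "=", values)
```

Each printed name should match an attribute name you entered in the **User Claims** table.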
-7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot that shows the Certificate download link.](common/metadataxml.png "Certificate")
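Before forwarding the file, you can confirm the download is intact. A small sketch, assuming the metadata was saved locally as `federation-metadata.xml` (a hypothetical file name), that reads the entity ID and the base64-encoded signing certificate out of the Federation Metadata XML:

```python
# Minimal sketch: inspect the downloaded Federation Metadata XML.
import xml.etree.ElementTree as ET

root = ET.parse("federation-metadata.xml").getroot()  # hypothetical file name
print("entityID:", root.get("entityID"))

CERT_TAG = "{http://www.w3.org/2000/09/xmldsig#}X509Certificate"
for cert in root.iter(CERT_TAG):
    print("signing certificate (base64, first 60 chars):", cert.text.strip()[:60])
```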
-8. On the **Set up eKincare** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up eKincare** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure eKincare Single Sign-On
-
-To configure single sign-on on **eKincare** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [eKincare support team](mailto:tech@ekincare.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to eKincare.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **eKincare**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to eKincare.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **eKincare**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **eKincare**.
+## Configure eKincare SSO
- ![The eKincare link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **eKincare** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [eKincare support team](mailto:tech@ekincare.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create eKincare test user

In this section, a user called Britta Simon is created in eKincare. eKincare supports **just-in-time user provisioning**, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in eKincare, a new one is created after authentication.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the eKincare tile in the Access Panel, you should be automatically signed in to the eKincare for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the eKincare for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the eKincare tile in the My Apps, you should be automatically signed in to the eKincare for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure eKincare, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Excelityglobal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/excelityglobal-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ExcelityGlobal | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ExcelityGlobal'
description: Learn how to configure single sign-on between Azure Active Directory and ExcelityGlobal.
Previously updated : 03/04/2019 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory integration with ExcelityGlobal
+# Tutorial: Azure AD SSO integration with ExcelityGlobal
-In this tutorial, you learn how to integrate ExcelityGlobal with Azure Active Directory (Azure AD).
-Integrating ExcelityGlobal with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ExcelityGlobal with Azure Active Directory (Azure AD). When you integrate ExcelityGlobal with Azure AD, you can:
-* You can control in Azure AD who has access to ExcelityGlobal.
-* You can enable your users to be automatically signed-in to ExcelityGlobal (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ExcelityGlobal.
+* Enable your users to be automatically signed-in to ExcelityGlobal with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with ExcelityGlobal, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ExcelityGlobal single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* ExcelityGlobal single sign-on enabled subscription.
+* In addition to Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ExcelityGlobal supports **IDP** initiated SSO
-
-## Adding ExcelityGlobal from the gallery
-
-To configure the integration of ExcelityGlobal into Azure AD, you need to add ExcelityGlobal from the gallery to your list of managed SaaS apps.
-
-**To add ExcelityGlobal from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* ExcelityGlobal supports **IDP** initiated SSO.
-4. In the search box, type **ExcelityGlobal**, select **ExcelityGlobal** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![ExcelityGlobal in the results list](common/search-new-app.png)
+## Add ExcelityGlobal from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ExcelityGlobal based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ExcelityGlobal needs to be established.
-
-To configure and test Azure AD single sign-on with ExcelityGlobal, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ExcelityGlobal Single Sign-On](#configure-excelityglobal-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ExcelityGlobal test user](#create-excelityglobal-test-user)** - to have a counterpart of Britta Simon in ExcelityGlobal that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with ExcelityGlobal, perform the following steps:
+To configure the integration of ExcelityGlobal into Azure AD, you need to add ExcelityGlobal from the gallery to your list of managed SaaS apps.
-1. In the [Azure portal](https://portal.azure.com/), on the **ExcelityGlobal** application integration page, select **Single sign-on**.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ExcelityGlobal** in the search box.
+1. Select **ExcelityGlobal** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- ![Configure single sign-on link](common/select-sso.png)
+## Configure and test Azure AD SSO for ExcelityGlobal
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+Configure and test Azure AD SSO with ExcelityGlobal using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ExcelityGlobal.
- ![Single sign-on select mode](common/select-saml-option.png)
+To configure and test Azure AD SSO with ExcelityGlobal, perform the following steps:
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ExcelityGlobal SSO](#configure-excelityglobal-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ExcelityGlobal test user](#create-excelityglobal-test-user)** - to have a counterpart of B.Simon in ExcelityGlobal that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+## Configure Azure AD SSO
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![ExcelityGlobal Domain and URLs single sign-on information](common/idp-intiated.png)
+1. In the Azure portal, on the **ExcelityGlobal** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- a. In the **Identifier** text box, type a URL using the following pattern:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- **For Production Environment** : `https://ess.excelityglobal.com`
+1. On the **Basic SAML Configuration** page, perform the following steps:
- **For Sandbox Environment** : `https://s6.excelityglobal.com`
+ a. In the **Identifier** text box, type one of the following URLs:
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ | **Identifier** |
+ |-|
+ | **For Production Environment** : `https://ess.excelityglobal.com` |
+ | **For Sandbox Environment** : `https://s6.excelityglobal.com` |
- **For Production Environment** : `https://ess.excelityglobal.com/ACS`
+ b. In the **Reply URL** text box, type one of the following URLs:
- **For Sandbox Environment** : `https://s6.excelityglobal.com/ACS`
+ | **Reply URL** |
+ |-|
+ | **For Production Environment** : `https://ess.excelityglobal.com/ACS` |
+ | **For Sandbox Environment** : `https://s6.excelityglobal.com/ACS` |
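The production and sandbox values differ only in the host name, so it can help to keep them side by side and pick one deliberately. A purely illustrative sketch (the dictionary below is not part of the product):

```python
# Illustrative only: the two ExcelityGlobal environments side by side.
EXCELITY_ENVIRONMENTS = {
    "production": {"identifier": "https://ess.excelityglobal.com",
                   "reply_url": "https://ess.excelityglobal.com/ACS"},
    "sandbox": {"identifier": "https://s6.excelityglobal.com",
                "reply_url": "https://s6.excelityglobal.com/ACS"},
}

env = EXCELITY_ENVIRONMENTS["production"]
print(env["identifier"], env["reply_url"])
```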
-5. Your ExcelityGlobal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where as **nameidentifier** is mapped with **user.userprincipalname**. ExcelityGlobal application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking on **Edit** icon and change the attribute mapping.
+1. Your ExcelityGlobal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, whereas **nameidentifier** is mapped with **user.userprincipalname**. ExcelityGlobal expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the mapping.
- ![image](common/edit-attribute.png)
-
-6. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog.
-
- ![Edit SAML Signing Certificate](common/edit-certificate.png)
-
-7. In the **SAML Signing Certificate** section, copy the **Thumbprint** and save it on your computer.
+ ![Screenshot shows the image of ExcelityGlobal application.](common/edit-attribute.png "Image")
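After the mapping change, the subject of the assertion should carry the user's email address. As a minimal sketch (assuming a `SAMLResponse` captured from a test sign-in; this is not an official verification step), you can pull the `NameID` out and confirm it is the email rather than the UPN:

```python
# Minimal sketch: extract the NameID from a captured SAMLResponse.
import base64
import xml.etree.ElementTree as ET

def name_id(encoded_response: str) -> str:
    root = ET.fromstring(base64.b64decode(encoded_response))
    node = root.find(".//{urn:oasis:names:tc:SAML:2.0:assertion}NameID")
    return node.text if node is not None else "<no NameID found>"
```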
- ![Copy Thumbprint value](common/copy-thumbprint.png)
+1. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog.
-8. On the **Set up ExcelityGlobal** section, copy the appropriate URL(s) as per your requirement.
+ ![Screenshot shows to edit SAML Signing Certificate.](common/edit-certificate.png "Certificate")
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. In the **SAML Signing Certificate** section, copy the **Thumbprint** and save it on your computer.
- a. Login URL
+ ![Screenshot shows to Copy Thumbprint value.](common/copy-thumbprint.png "Thumbprint")
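By X.509 convention, the **Thumbprint** shown in the portal is the SHA-1 hash of the DER-encoded certificate. If you also download the certificate as Base64 (.cer), a minimal sketch like the following can recompute the value locally as a cross-check (the file name is hypothetical):

```python
# Minimal sketch: recompute a certificate thumbprint (SHA-1 over DER bytes).
import base64
import hashlib

def thumbprint(cer_path: str) -> str:
    with open(cer_path) as f:
        b64 = "".join(line.strip() for line in f if "CERTIFICATE" not in line)
    return hashlib.sha1(base64.b64decode(b64)).hexdigest().upper()

print(thumbprint("excelityglobal.cer"))  # hypothetical file name
```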
- b. Azure AD Identifier
+1. On the **Set up ExcelityGlobal** section, copy the appropriate URL(s) as per your requirement.
- c. Logout URL
-
-### Configure ExcelityGlobal Single Sign-On
-
-To configure single sign-on on **ExcelityGlobal** side, you need to send the **Thumbprint value** and appropriate copied URLs from Azure portal to [ExcelityGlobal support team](https://www.excelityglobal.com/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ExcelityGlobal.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ExcelityGlobal.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ExcelityGlobal**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ExcelityGlobal**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure ExcelityGlobal SSO
-2. In the applications list, select **ExcelityGlobal**.
-
- ![The ExcelityGlobal link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **ExcelityGlobal** side, you need to send the **Thumbprint value** and appropriate copied URLs from Azure portal to [ExcelityGlobal support team](https://www.excelityglobal.com/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
### Create ExcelityGlobal test user

In this section, you create a user called Britta Simon in ExcelityGlobal. Work with [ExcelityGlobal support team](https://www.excelityglobal.com/contact-us) to add the users in the ExcelityGlobal platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the ExcelityGlobal tile in the Access Panel, you should be automatically signed in to the ExcelityGlobal for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the ExcelityGlobal for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ExcelityGlobal tile in the My Apps, you should be automatically signed in to the ExcelityGlobal for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ExcelityGlobal, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal:
e. Select **Save**.
+ > [!NOTE]
+ > **User Attributes & Claims** allows only one group claim. To add a new group claim, first delete the existing **user.groups [SecurityGroup]** claim, or edit the existing claim to **All groups**.
+ f. Select **Add a group claim**.
+ g. Select **All groups**.
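On the wire, the group claim arrives as a single `Attribute` whose values are the user's group object IDs. A minimal sketch for inspecting a captured test response; the claim name below is Azure AD's default group claim URI and is an assumption if you renamed the claim:

```python
# Minimal sketch: list the group claim values in a captured SAMLResponse.
import base64
import xml.etree.ElementTree as ET

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
GROUP_CLAIM = "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups"

def group_values(encoded_response: str) -> list:
    root = ET.fromstring(base64.b64decode(encoded_response))
    for attr in root.iter(f"{{{SAML}}}Attribute"):
        if attr.get("Name") == GROUP_CLAIM:
            return [v.text for v in attr.findall(f"{{{SAML}}}AttributeValue")]
    return []
```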
active-directory Klaxoon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
-* An existing [Klaxoon contract](https://klaxoon.com/enterprise).
+* An existing [Klaxoon contract](https://klaxoon.com/solutions-enterprise-excellence).
## Step 1. Plan your provisioning deployment

1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
active-directory Klaxoon Saml Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-saml-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
-* An existing [Klaxoon contract](https://klaxoon.com/enterprise).
+* An existing [Klaxoon contract](https://klaxoon.com/solutions-enterprise-excellence).
## Step 1. Plan your provisioning deployment

1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
active-directory Oracle Fusion Erp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-fusion-erp-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Oracle Fusion ERP | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Oracle Fusion ERP'
description: Learn how to configure single sign-on between Azure Active Directory and Oracle Fusion ERP.
Previously updated : 02/09/2021 Last updated : 06/17/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Oracle Fusion ERP
+# Tutorial: Azure AD SSO integration with Oracle Fusion ERP
In this tutorial, you'll learn how to integrate Oracle Fusion ERP with Azure Active Directory (Azure AD). When you integrate Oracle Fusion ERP with Azure AD, you can:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Oracle Fusion ERP supports **SP** initiated SSO.
+* Oracle Fusion ERP supports **SP and IDP** initiated SSO.
* Oracle Fusion ERP supports [**Automated** user provisioning and deprovisioning](oracle-fusion-erp-provisioning-tutorial.md) (recommended).

## Add Oracle Fusion ERP from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.fa.em2.oraclecloud.com/fscmUI/faces/AtkHomePageWelcome`
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.login.em2.oraclecloud.com:443/oam/fed`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.login.em2.oraclecloud.com:443/oam/fed`
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.fa.em2.oraclecloud.com/fscmUI/faces/AtkHomePageWelcome`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Oracle Fusion ERP Client support team](https://www.oracle.com/applications/erp/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle Fusion ERP Client support team](https://www.oracle.com/applications/erp/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
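All three values derive from the same subdomain, so a tiny sketch can make the substitution explicit (the subdomain below is a placeholder, not a real tenant):

```python
# Illustrative only: fill the documented URL patterns from one subdomain.
SUBDOMAIN = "contoso"  # placeholder; use your Oracle Cloud subdomain

identifier = f"https://{SUBDOMAIN}.login.em2.oraclecloud.com:443/oam/fed"
reply_url = f"https://{SUBDOMAIN}.login.em2.oraclecloud.com:443/oam/fed"
sign_on_url = f"https://{SUBDOMAIN}.fa.em2.oraclecloud.com/fscmUI/faces/AtkHomePageWelcome"

print(identifier, reply_url, sign_on_url, sep="\n")
```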
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you create a user called Britta Simon in Oracle Fusion ERP. Work with [Oracle Fusion ERP Client support team](https://www.oracle.com/applications/erp/) to add the users in the Oracle Fusion ERP platform. Users must be created and activated before you use single sign-on.
In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Oracle Fusion ERP Sign-on URL where you can initiate the login flow.
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Oracle Fusion ERP Sign-on URL where you can initiate the login flow.
+
+* Go to Oracle Fusion ERP Sign on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
-* Go to Oracle Fusion ERP Sign-on URL directly and initiate the login flow from there.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Oracle Fusion ERP for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the Oracle Fusion ERP tile in the My Apps, this will redirect to Oracle Fusion ERP Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Oracle Fusion ERP tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Oracle Fusion ERP for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Pymetrics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pymetrics-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with pymetrics | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with pymetrics'
description: Learn how to configure single sign-on between Azure Active Directory and pymetrics.
Previously updated : 06/10/2020 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with pymetrics
+# Tutorial: Azure AD SSO integration with pymetrics
In this tutorial, you'll learn how to integrate pymetrics with Azure Active Directory (Azure AD). When you integrate pymetrics with Azure AD, you can:
In this tutorial, you'll learn how to integrate pymetrics with Azure Active Directory (Azure AD). When you integrate pymetrics with Azure AD, you can:
* Enable your users to be automatically signed-in to pymetrics with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* pymetrics single sign-on (SSO) enabled subscription.
+* In addition to Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* pymetrics supports **SP and IDP** initiated SSO
-* pymetrics supports **Just In Time** user provisioning
-
-* Once you configure pymetrics you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* pymetrics supports **SP and IDP** initiated SSO.
+* pymetrics supports **Just In Time** user provisioning.
-## Adding pymetrics from the gallery
+## Add pymetrics from the gallery
To configure the integration of pymetrics into Azure AD, you need to add pymetrics from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **pymetrics** in the search box.
1. Select **pymetrics** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for pymetrics
+## Configure and test Azure AD SSO for pymetrics
Configure and test Azure AD SSO with pymetrics using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in pymetrics.
-To configure and test Azure AD SSO with pymetrics, complete the following building blocks:
+To configure and test Azure AD SSO with pymetrics, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with pymetrics, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **pymetrics** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **pymetrics** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://www.pymetrics.com/saml2-sp/<CUSTOMERNAME>/<CUSTOMERNAME>/metadata/`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up pymetrics** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
### Create an Azure AD test user

In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to pymetrics.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **pymetrics**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called Britta Simon is created in pymetrics. pymetrics supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in pymetrics, a new one is created after authentication.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the pymetrics tile in the Access Panel, you should be automatically signed in to the pymetrics for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to pymetrics Sign-On URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to pymetrics Sign-On URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the pymetrics for which you set up the SSO.
-- [Try pymetrics with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the pymetrics tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the pymetrics for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect pymetrics with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure pymetrics, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Reprints Desk Article Galaxy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reprints-desk-article-galaxy-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Reprints Desk - Article Galaxy | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Reprints Desk - Article Galaxy'
description: Learn how to configure single sign-on between Azure Active Directory and Reprints Desk - Article Galaxy.
Previously updated : 01/21/2020 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Reprints Desk - Article Galaxy
+# Tutorial: Azure AD SSO integration with Reprints Desk - Article Galaxy
In this tutorial, you'll learn how to integrate Reprints Desk - Article Galaxy with Azure Active Directory (Azure AD). When you integrate Reprints Desk - Article Galaxy with Azure AD, you can:
In this tutorial, you'll learn how to integrate Reprints Desk - Article Galaxy with Azure Active Directory (Azure AD). When you integrate Reprints Desk - Article Galaxy with Azure AD, you can:
* Enable your users to be automatically signed-in to Reprints Desk - Article Galaxy with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Reprints Desk - Article Galaxy single sign-on (SSO) enabled subscription.
+* In addition to Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Reprints Desk - Article Galaxy supports **IDP** initiated SSO
-
-* Reprints Desk - Article Galaxy supports **Just In Time** user provisioning
+* Reprints Desk - Article Galaxy supports **IDP** initiated SSO.
-* [Once you configure the Reprints Desk - Article Galaxy you can enforce session controls, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session controls extend from Conditional Access. Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Reprints Desk - Article Galaxy supports **Just In Time** user provisioning.
-## Adding Reprints Desk - Article Galaxy from the gallery
+## Add Reprints Desk - Article Galaxy from the gallery
To configure the integration of Reprints Desk - Article Galaxy into Azure AD, you need to add Reprints Desk - Article Galaxy from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Reprints Desk - Article Galaxy** in the search box.
1. Select **Reprints Desk - Article Galaxy** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for Reprints Desk - Article Galaxy
+## Configure and test Azure AD SSO for Reprints Desk - Article Galaxy
Configure and test Azure AD SSO with Reprints Desk - Article Galaxy using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Reprints Desk - Article Galaxy.
-To configure and test Azure AD SSO with Reprints Desk - Article Galaxy, complete the following building blocks:
+To configure and test Azure AD SSO with Reprints Desk - Article Galaxy, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Reprints Desk Article Galaxy SSO](#configure-reprints-desk-article-galaxy-sso)** - to configure the single sign-on settings on application side.
- * **[Create Reprints Desk Article Galaxy test user](#create-reprints-desk-article-galaxy-test-user)** - to have a counterpart of B.Simon in Reprints Desk - Article Galaxy that is linked to the Azure AD representation of user.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Reprints Desk - Article Galaxy SSO](#configure-reprints-deskarticle-galaxy-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Reprints Desk - Article Galaxy test user](#create-reprints-deskarticle-galaxy-test-user)** - to have a counterpart of B.Simon in Reprints Desk - Article Galaxy that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Reprints Desk - Article Galaxy** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Reprints Desk - Article Galaxy** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
-
1. Reprints Desk - Article Galaxy application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of Reprints Desk application.](common/default-attributes.png "Image")
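For reference, the default claim set in the screenshot maps the standard claim URIs to Azure AD user attributes roughly as follows; this sketch is illustrative, so verify it against the **User Attributes & Claims** section in your own tenant:

```python
# Illustrative only: Azure AD's default SAML claims (claim URI -> source attribute).
DEFAULT_CLAIMS = {
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "user.givenname",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "user.surname",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "user.mail",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "user.userprincipalname",
}

for uri, source in DEFAULT_CLAIMS.items():
    print(uri, "->", source)
```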
1. In addition to the above, the Reprints Desk - Article Galaxy application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Reprints Desk - Article Galaxy** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Reprints Desk - Article Galaxy.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Reprints Desk - Article Galaxy**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Reprints Desk Article Galaxy SSO
+## Configure Reprints Desk - Article Galaxy SSO
To configure single sign-on on **Reprints Desk - Article Galaxy** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Reprints Desk - Article Galaxy support team](mailto:customersupport@reprintsdesk.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create Reprints Desk Article Galaxy test user
+### Create Reprints Desk - Article Galaxy test user
In this section, a user called B.Simon is created in Reprints Desk - Article Galaxy. Reprints Desk - Article Galaxy supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Reprints Desk - Article Galaxy, a new one is created after authentication.

## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Reprints Desk - Article Galaxy tile in the Access Panel, you should be automatically signed in to the Reprints Desk - Article Galaxy for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Reprints Desk - Article Galaxy for which you set up the SSO.
-- [Try Reprints Desk - Article Galaxy with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Reprints Desk - Article Galaxy tile in My Apps, you should be automatically signed in to the Reprints Desk - Article Galaxy for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Reprints Desk - Article Galaxy with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Reprints Desk - Article Galaxy, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Timetabling Solutions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timetabling-solutions-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Timetabling Solutions | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Timetabling Solutions'
description: Learn how to configure single sign-on between Azure Active Directory and Timetabling Solutions.
Previously updated : 04/10/2020 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Timetabling Solutions
+# Tutorial: Azure AD SSO integration with Timetabling Solutions
In this tutorial, you'll learn how to integrate Timetabling Solutions with Azure Active Directory (Azure AD). When you integrate Timetabling Solutions with Azure AD, you can:
In this tutorial, you'll learn how to integrate Timetabling Solutions with Azure
* Enable your users to be automatically signed-in to Timetabling Solutions with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Timetabling Solutions single sign-on (SSO) enabled subscription.-
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Timetabling Solutions supports **SP** initiated SSO
-* Once you configure Timetabling Solutions you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Timetabling Solutions supports **SP** initiated SSO.
+
+> [!NOTE]
> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-## Adding Timetabling Solutions from the gallery
+## Add Timetabling Solutions from the gallery
To configure the integration of Timetabling Solutions into Azure AD, you need to add Timetabling Solutions from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **Timetabling Solutions** in the search box. 1. Select **Timetabling Solutions** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Timetabling Solutions
+## Configure and test Azure AD SSO for Timetabling Solutions
Configure and test Azure AD SSO with Timetabling Solutions using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Timetabling Solutions.
-To configure and test Azure AD SSO with Timetabling Solutions, complete the following building blocks:
+To configure and test Azure AD SSO with Timetabling Solutions, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Timetabling Solutions, complete the foll
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Timetabling Solutions** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Timetabling Solutions** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit the Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, perform the following steps:
+1. On the **Basic SAML Configuration** section, perform the following step:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://auth.timetabling.education/login` 1. In the **SAML Signing Certificate** section, click the **Edit** button to open the **SAML Signing Certificate** dialog.
- ![Edit SAML Signing Certificate](common/edit-certificate.png)
+ ![Screenshot shows how to edit the SAML Signing Certificate.](common/edit-certificate.png "Certificate")
1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer.
- ![Copy Thumbprint value](common/copy-thumbprint.png)
+ ![Screenshot shows how to copy the thumbprint value.](common/copy-thumbprint.png "Thumbprint")
1. On the **Set up Timetabling Solutions** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows how to copy the appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
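The **Thumbprint Value** copied in the earlier step is the SHA-1 hash of the certificate's DER bytes, so if you also download the certificate you can recompute the thumbprint locally and confirm you copied the right value. A minimal sketch; the file name is a hypothetical example, and the file is assumed to be in Base64 (PEM) format.

```python
# Minimal sketch: recompute a certificate thumbprint (SHA-1 of the DER bytes)
# and compare it with the Thumbprint Value copied from the portal.
import hashlib
import ssl

with open("timetabling-solutions.cer") as f:  # hypothetical file name
    pem = f.read()

der = ssl.PEM_cert_to_DER_cert(pem)  # strip the PEM armor, decode the base64
print(hashlib.sha1(der).hexdigest().upper())  # should match the portal value
```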
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Timetabling Solutions**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Timetabling Solutions.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Timetabling Solutions tile in the Access Panel, you should be automatically signed in to the Timetabling Solutions for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal. This will redirect to the Timetabling Solutions Sign-On URL where you can initiate the login flow.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Go to the Timetabling Solutions Sign-On URL directly and initiate the login flow from there.
-- [Try Timetabling Solutions with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Timetabling Solutions tile in My Apps, this will redirect to the Timetabling Solutions Sign-On URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Timetabling Solutions with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Timetabling Solutions, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Torii Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/torii-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Torii | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Torii'
description: Learn how to configure single sign-on between Azure Active Directory and Torii.
Previously updated : 05/06/2020 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Torii
+# Tutorial: Azure AD SSO integration with Torii
In this tutorial, you'll learn how to integrate Torii with Azure Active Directory (Azure AD). When you integrate Torii with Azure AD, you can:
In this tutorial, you'll learn how to integrate Torii with Azure Active Director
* Enable your users to be automatically signed-in to Torii with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Torii single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Torii supports **SP and IDP** initiated SSO
-* Torii supports **Just In Time** user provisioning
-* Once you configure Torii you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Torii supports **SP and IDP** initiated SSO.
+* Torii supports **Just In Time** user provisioning.
-## Adding Torii from the gallery
+## Add Torii from the gallery
To configure the integration of Torii into Azure AD, you need to add Torii from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **Torii** in the search box. 1. Select **Torii** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Torii
+## Configure and test Azure AD SSO for Torii
Configure and test Azure AD SSO with Torii using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Torii.
-To configure and test Azure AD SSO with Torii, complete the following building blocks:
+To configure and test Azure AD SSO with Torii, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Torii, complete the following building b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Torii** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Torii** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit the Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://api.toriihq.com/api/saml/<idOrg>/callback`
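Because the Identifier embeds your organization's ID, generating the value can be less error-prone than hand-editing the pattern. A minimal sketch; `<idOrg>` remains a placeholder here, so substitute the ID that Torii assigns to your organization.

```python
# Minimal sketch: build the Torii Identifier from the documented pattern.
id_org = "<idOrg>"  # placeholder: substitute your organization's ID from Torii

if id_org.startswith("<"):
    raise ValueError("Replace <idOrg> with your organization's ID first.")

identifier = f"https://api.toriihq.com/api/saml/{id_org}/callback"
print(identifier)
```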
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificateraw.png)
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
1. On the **Set up Torii** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows how to copy the appropriate configuration U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Torii**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called Britta Simon is created in Torii. Torii supports
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Torii tile in the Access Panel, you should be automatically signed in to the Torii for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Torii Sign-On URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Torii Sign-On URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Torii for which you set up the SSO.
-- [Try Torii with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Torii tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Torii for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Torii with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Torii, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Wisdom By Invictus Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wisdom-by-invictus-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Wisdom by Invictus | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Wisdom by Invictus'
description: Learn how to configure single sign-on between Azure Active Directory and Wisdom by Invictus.
Previously updated : 03/31/2020 Last updated : 06/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Wisdom by Invictus
+# Tutorial: Azure AD SSO integration with Wisdom by Invictus
In this tutorial, you'll learn how to integrate Wisdom by Invictus with Azure Active Directory (Azure AD). When you integrate Wisdom by Invictus with Azure AD, you can:
In this tutorial, you'll learn how to integrate Wisdom by Invictus with Azure Ac
* Enable your users to be automatically signed-in to Wisdom by Invictus with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Wisdom by Invictus single sign-on (SSO) enabled subscription.
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Wisdom by Invictus supports **SP and IDP** initiated SSO
-* Once you configure Wisdom by Invictus you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Wisdom by Invictus supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-## Adding Wisdom by Invictus from the gallery
+## Add Wisdom by Invictus from the gallery
To configure the integration of Wisdom by Invictus into Azure AD, you need to add Wisdom by Invictus from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**.
To configure the integration of Wisdom by Invictus into Azure AD, you need to ad
Configure and test Azure AD SSO with Wisdom by Invictus using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Wisdom by Invictus.
-To configure and test Azure AD SSO with Wisdom by Invictus, complete the following building blocks:
+To configure and test Azure AD SSO with Wisdom by Invictus, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Wisdom by Invictus, complete the followi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Wisdom by Invictus** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Wisdom by Invictus** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit the Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure. 1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://invictuselearning-pool7.com/?option=saml_user_login&idp=Microsoft` 1. Click **Save**. 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
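If you script your configuration, the SP-initiated Sign-on URL above can be composed from its parts instead of being pasted as one opaque string, which makes the `option` and `idp` query parameters explicit. A minimal sketch with the Python standard library.

```python
# Minimal sketch: compose the SP-initiated sign-on URL from its parts.
from urllib.parse import urlencode, urlunsplit

query = urlencode({"option": "saml_user_login", "idp": "Microsoft"})
sign_on_url = urlunsplit(("https", "invictuselearning-pool7.com", "/", query, ""))
print(sign_on_url)
# -> https://invictuselearning-pool7.com/?option=saml_user_login&idp=Microsoft
```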
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Wisdom by Invictus**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Wisdom by Invictus. Wo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Wisdom by Invictus tile in the Access Panel, you should be automatically signed in to the Wisdom by Invictus for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Wisdom by Invictus Sign-On URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the Wisdom by Invictus Sign-On URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Wisdom by Invictus for which you set up the SSO.
-- [Try Wisdom by Invictus with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Wisdom by Invictus tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Wisdom by Invictus for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Wisdom by Invictus with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Wisdom by Invictus, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Stor
We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
-Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](/azure/advisor/advisor-cost-recommendations#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances).
+Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](https://aka.ms/aa_lowusagerec_learnmore).
### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk. We have observed that you have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk. Note that if you decide to delete the disk, recovery is not possible. We recommend that you create a snapshot before deletion or ensure the data in the disk is no longer required.
-Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](../virtual-machines/disks-find-unattached-portal.md).
+Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
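If you'd like to enumerate the candidates yourself before acting on this recommendation, the Azure SDK for Python can list managed disks and filter on their attach state. A minimal, read-only sketch assuming the `azure-identity` and `azure-mgmt-compute` packages; `<subscription-id>` is a placeholder.

```python
# Minimal sketch: list managed disks that are not attached to any VM.
# Read-only; it prints candidates, it does not delete anything.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for disk in client.disks.list():
    if disk.disk_state == "Unattached":
        print(f"{disk.name}: {disk.disk_size_gb} GiB, sku={disk.sku.name}")
```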
## MariaDB
Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutos
This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found either empty or with no activity. The recommended action is to validate and consider deleting the resources.
-Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](/azure/data-explorer/azure-advisor#azure-data-explorer-unused-cluster).
+Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](https://aka.ms/adxemptycluster).
### Right-size Data Explorer resources for optimal cost One or more of these were detected: Low data capacity, CPU utilization, or memory utilization. The recommended action to improve the performance is to scale down and/or scale in the resource to the recommended configuration shown.
-Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](/azure/data-explorer/azure-advisor#correctly-size-azure-data-explorer-clusters-to-optimize-cost).
+Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](https://aka.ms/adxskusize).
### Reduce Data Explorer table cache policy to optimize costs Reducing the table cache policy will free up Data Explorer cluster nodes that have low CPU utilization, low memory utilization, and a high cache size configuration.
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](/azure/data-explorer/kusto/management/cachepolicy).
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](https://aka.ms/adxcachepolicy).
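The cache policy itself is changed with a Kusto management command. As one hedged example, the sketch below issues such a command through the `azure-kusto-data` Python package; the cluster URI, database, and table names and the 7-day hot window are all illustrative placeholders, and the authentication helper assumes you are signed in with the Azure CLI.

```python
# Minimal sketch: shrink a table's hot cache window with a Kusto management
# command. Cluster/database/table names and the 7d window are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<cluster>.<region>.kusto.windows.net"
)
client = KustoClient(kcsb)
client.execute_mgmt("<database>", ".alter table <table> policy caching hot = 7d")
```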
-### Unused Data Explorer resources with data
+### Unused running Data Explorer resources
-This recommendation surfaces all Data Explorer resources provisioned more than 10 days from the last update, and found containing data but with no activity. The recommended action is to validate and consider stopping the unused resources.
+This recommendation surfaces all running Data Explorer resources with no user activity. Consider stopping the resources.
-Learn more about [Data explorer resource - StopUnusedClustersWithData (Unused Data Explorer resources with data)](/azure/data-explorer/azure-advisor#azure-data-explorer-clusters-containing-data-with-low-activity).
+Learn more about [Data explorer resource - StopUnusedClusters (Unused running Data Explorer resources)](/azure/data-explorer/azure-advisor#azure-data-explorer-unused-cluster).
### Cleanup unused storage in Data Explorer resources Over time, internal extents merge operations can accumulate redundant and unused storage artifacts that remain beyond the data retention period. While this unreferenced data doesn't negatively impact the performance, it can lead to more storage use and larger costs than necessary. This recommendation surfaces Data Explorer resources that have unused storage artifacts. The recommended action is to run the cleanup command to detect and delete unused storage artifacts and reduce cost. Note that data recoverability will be reset to the cleanup time and will not be available on data that was created before running the cleanup.
-Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](/azure/data-explorer/kusto/management/clean-extent-containers).
+Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](https://aka.ms/adxcleanextentcontainers).
### Enable optimized autoscale for Data Explorer resources Looks like your resource could have automatically scaled to reduce costs (based on the usage patterns, cache utilization, ingestion utilization, and CPU). To optimize costs and performance, we recommend enabling optimized autoscale. To make sure you don't exceed your planned budget, add a maximum instance count when you enable this.
-Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](/azure/data-explorer/manage-cluster-horizontal-scaling#optimized-autoscale).
+Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale).
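Optimized autoscale can also be turned on programmatically. A hedged sketch with the `azure-mgmt-kusto` package (a recent version is assumed for `begin_update`); the angle-bracket names are placeholders and the 2–10 instance bounds are illustrative, not a recommendation.

```python
# Minimal sketch: enable optimized autoscale on a Data Explorer cluster,
# with a maximum instance count so the planned budget is not exceeded.
from azure.identity import DefaultAzureCredential
from azure.mgmt.kusto import KustoManagementClient
from azure.mgmt.kusto.models import ClusterUpdate, OptimizedAutoscale

client = KustoManagementClient(DefaultAzureCredential(), "<subscription-id>")
update = ClusterUpdate(
    optimized_autoscale=OptimizedAutoscale(
        version=1, is_enabled=True, minimum=2, maximum=10  # illustrative bounds
    )
)
client.clusters.begin_update("<resource-group>", "<cluster>", update).result()
```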
## Network
Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete
For SQL/HANA DBs in Azure VMs being backed up to Azure, using daily differential with weekly full backup is often more cost-effective than daily full backups. For HANA, Azure Backup also supports incremental backup, which is even more cost-effective.
-Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](/azure/backup/sap-hana-faq-backup-azure-vm#policy).
+Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](https://aka.ms/DBBackupCostOptimization).
## Storage
Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserv
We analyzed your Cosmos DB usage pattern over the last 30 days and calculated the reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings even more.
-Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs We analyzed your SQL PaaS usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for your SQL PaaS deployments and save over your SQL PaaS compute costs. SQL license is charged separately and is not discounted by the reservation. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - SQLReservedCapacity (Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - SQLReservedCapacity (Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider App Service stamp fee reserved instance to save over your on-demand costs We analyzed your App Service isolated environment stamp fees usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for the isolated environment stamp fee and save over your Pay-as-you-go costs. Note that reserved instance only applies to the stamp fee and not to the App Service instances. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions based on the usage pattern over the last 30 days.
-Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service stamp fee reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service stamp fee reserved instance to save over your on-demand costs)](https://aka.ms/rirecommendations).
### Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs We analyzed your Azure Database for MariaDB usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MariaDB hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - MariaDBSQLReservedCapacity (Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - MariaDBSQLReservedCapacity (Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider Database for MySQL reserved instance to save over your pay-as-you-go costs We analyzed your MySQL Database usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MySQL hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - MySQLReservedCapacity (Consider Database for MySQL reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - MySQLReservedCapacity (Consider Database for MySQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs We analyzed your Database for PostgreSQL usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase PostgreSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider Cache for Redis reserved instance to save over your pay-as-you-go costs We analyzed your Cache for Redis usage pattern over the last 30 days and calculated the reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cache for Redis hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs
-We analyze you Azure Synapse Analytics usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+We analyzed your Azure Synapse Analytics usage pattern over the last 30 days and recommend a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
### (Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs We analyzed your Azure Blob and Datalake storage usage over the last 30 days and calculated a reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved instance applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
-Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
-
-### (Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs
-
-We analyzed your Azure Data Explorer usage pattern over last 30 days and recommend reserved capacity purchase that maximizes your savings. With reserved capacity you can pre-purchase Data Explorer hourly usage and get savings over your on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and last 30 day's usage pattern. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-
-Learn more about [Subscription - DataExplorerReservedCapacity ((Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs)](https://aka.ms/rirecommendations).
### Consider Azure Dedicated Host reserved instance to save over your on-demand costs
We analyzed your Azure VMware Solution usage over last 30 days and calculated re
Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instance to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
-### (Preview) Consider Databricks reserved capacity to save over your on-demand costs
-
-We analyzed your Databricks usage over last 30 days and calculated reserved capacity purchase that would maximize your savings. With reserved capacity you can pre-purchase hourly usage and save over your current on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-
-Learn more about [Subscription - DataBricksReservedCapacity ((Preview) Consider Databricks reserved capacity to save over your on-demand costs)](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
- ### Consider NetApp Storage reserved instance to save over your on-demand costs We analyzed your NetApp Storage usage over the last 30 days and calculated a reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
Learn more about [Subscription - EphemeralOsDisk (Use Virtual Machines with Ephe
Auto-pause releases and shuts down unused compute resources after a set idle period of inactivity.
-Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](/dotnet/api/microsoft.azure.management.synapse.models.autopauseproperties).
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoPauseGuidance).
### Consider enabling autoscale feature on Spark compute. Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes can be set when Autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no additional charge for this feature.
-Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](../synapse-analytics/spark/apache-spark-autoscale.md).
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
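Both settings can be applied outside the portal as well. The following hedged sketch uses the `azure-mgmt-synapse` package to switch on auto-pause and autoscale for an existing Apache Spark pool; the angle-bracket names are placeholders, and the 15-minute delay and 3–10 node range are illustrative values, not recommendations.

```python
# Minimal sketch: enable auto-pause and autoscale on an existing Spark pool.
# All angle-bracket names are placeholders; delay and node counts are examples.
from azure.identity import DefaultAzureCredential
from azure.mgmt.synapse import SynapseManagementClient
from azure.mgmt.synapse.models import AutoPauseProperties, AutoScaleProperties

client = SynapseManagementClient(DefaultAzureCredential(), "<subscription-id>")

pool = client.big_data_pools.get("<resource-group>", "<workspace>", "<spark-pool>")
pool.auto_pause = AutoPauseProperties(enabled=True, delay_in_minutes=15)
pool.auto_scale = AutoScaleProperties(enabled=True, min_node_count=3, max_node_count=10)

client.big_data_pools.begin_create_or_update(
    "<resource-group>", "<workspace>", "<spark-pool>", pool
).result()
```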
## Next steps
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azur
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
-Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](../azure-functions/start-stop-vms/overview.md).
+Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs).
+
+## Azure VMware
+
+### New HCX version is available for upgrade
+
+Your HCX version is not the latest. A new HCX version is available for upgrade. Updating a VMware HCX system installs the latest features, problem fixes, and security patches.
+
+Learn more about [AVS Private cloud - HCXVersion (New HCX version is available for upgrade)](https://aka.ms/vmware/hcxdoc).
## Batch
Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v
Your pool has an old node agent. Consider recreating your pool to get the latest node agent updates and bug fixes.
-Learn more about [Batch account - OldPool (Recreate your pool to get the latest node agent features and fixes)](../batch/best-practices.md#pool-lifetime-and-billing).
+Learn more about [Batch account - OldPool (Recreate your pool to get the latest node agent features and fixes)](https://aka.ms/batch_oldpool_learnmore).
### Delete and recreate your pool to remove a deprecated internal component Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
-Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](/azure/batch/best-practices#pool-lifetime-and-billing)
+Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](https://aka.ms/batch_deprecatedcomponent_learnmore).
### Upgrade to the latest API version to ensure your Batch account remains operational. In the past 14 days, you have invoked a Batch management or service API version that is scheduled for deprecation. Upgrade to the latest API version to ensure your Batch account remains operational.
-Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](/rest/api/batchservice/batch-api-status#rest-api-deprecation-status-and-upgrade-instructions).
+Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](https://aka.ms/batch_deprecatedapi_learnmore).
### Delete and recreate your pool using a VM size that will soon be retired
Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your po
Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
-Learn more about [Batch account - EolImage (Recreate your pool with a new image)](/azure/batch/batch-pool-vm-sizes#supported-vm-images).
+Learn more about [Batch account - EolImage (Recreate your pool with a new image)](https://aka.ms/batch_expiring_image_learn_more).
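You can check which marketplace images Batch still supports, and which are approaching end of life, through the ListSupportedImages API mentioned above. A hedged sketch with the `azure-batch` Python SDK (a recent version is assumed, where the client takes `batch_url`); the account name, key, and URL are placeholders.

```python
# Minimal sketch: list Batch-supported images that have an announced
# end-of-life date, so aging pools can be recreated on newer images.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("<account-name>", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://<account>.<region>.batch.azure.com"
)

for image in client.account.list_supported_images():
    if image.batch_support_end_of_life:
        ref = image.image_reference
        print(ref.publisher, ref.offer, ref.sku, "EOL:", image.batch_support_end_of_life)
```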
+
+## Cache for Redis
+
+### Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications
+
+Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. It's difficult to configure the network accurately and avoid affecting cache functionality. It's easy to break the cache accidentally while making configuration changes for other network resources. This is a common source of incidents affecting customer applications.
-## Cognitive Service
+Learn more about [Redis Cache Server - PrivateLink (Injecting a cache into a virtual network (VNet) imposes complex requirements on your network configuration. This is a common source of incidents affecting customer applications)](https://aka.ms/VnetToPrivateLink).
+
+### TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.
+
+TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses. We highly recommend that you configure your cache to use TLS 1.2 only, and that your application use TLS 1.2 or later. See https://aka.ms/TLSVersions for more information.
+
+Learn more about [Redis Cache Server - TLSVersion (TLS versions 1.0 and 1.1 are known to be susceptible to security attacks, and have other Common Vulnerabilities and Exposures (CVE) weaknesses.)](https://aka.ms/TLSVersions).
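To illustrate the client side of this recommendation, here is a minimal Python sketch that refuses anything older than TLS 1.2 when connecting to a cache endpoint; the host name is a placeholder, and the check uses only the standard library.

```python
import socket
import ssl

HOST = "mycache.redis.cache.windows.net"  # placeholder cache host name
PORT = 6380  # Azure Cache for Redis TLS port

# Reject TLS 1.0/1.1 on the client side and report what was negotiated.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.2'
```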
+
+## Cognitive Services
### Upgrade to the latest version of the Immersive Reader SDK

We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](../applied-ai-services/immersive-reader/index.yml).
+Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore).
## Compute
Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade
If quota limits are exceeded, new VM deployments will be blocked until quota is increased. Increase your quota now to enable deployment of more resources. Learn More
-Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](../azure-resource-manager/management/azure-subscription-service-limits.md).
+Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits).
### Add Azure Monitor to your virtual machine (VM) labeled as production
Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new
Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues).
-### An Azure environment update has been rolled out that may affect you Checkpoint Firewall.
+### An Azure environment update has been rolled out that may affect your Checkpoint Firewall.
The image version of the Checkpoint firewall installed may have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances.
-Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that may affect you Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal).
+Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that may affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal).
### The iControl REST interface has an unauthenticated remote command execution vulnerability.
Desired state for Accelerated Networking is set to 'true' for one or more in
Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](../virtual-network/create-vm-accelerated-networking-cli.md).
-### Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.
+### Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled may disconnect during maintenance operation
-We have identified that your Virtual Machine might be running a version of software image that is running drivers for Accelerated Networking (AN) that are not compatible with the Azure environment. It has a synthetic network interface which, either, is AN capable but may disconnect during a maintenance or NIC operation. It is recommended that you upgrade to the latest version of the image that addresses this issue. Please contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+We have identified that you are running a Network Virtual Appliance (NVA) called Citrix Application Delivery Controller (ADC), and the NVA has accelerated networking enabled. The virtual machine that this NVA is deployed on may experience connectivity issues during a platform maintenance operation. It is recommended that you follow the article provided by the vendor: https://aka.ms/Citrix_CTX331516
-Learn more about [Virtual machine - GetCitrixVFRevokeError (Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.)](https://www.citrix.com/support/).
+Learn more about [Virtual machine - GetCitrixVFRevokeError (Virtual machines with Citrix Application Delivery Controller (ADC) and accelerated networking enabled may disconnect during maintenance operation)](https://aka.ms/Citrix_CTX331516).
## Kubernetes
Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's
### Monitoring addon workspace is deleted
-Monitoring addon workspace is deleted. Correct issues to setup monitoring addon.
+Monitoring addon workspace is deleted. Correct issues to set up monitoring addon.
-Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](/azure/azure-monitor/containers/container-insights-optout#azure-cli).
+Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](https://aka.ms/aks-disable-monitoring-addon).
### Deprecated Kubernetes API in 1.16 is found
Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn116IsFound (Depr
This cluster has not enabled AKS Cluster Autoscaler, and it will not adapt to changing load conditions unless you have other ways to autoscale your cluster.
-Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](../aks/cluster-autoscaler.md).
+Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](/azure/aks/cluster-autoscaler).
### The AKS node pool subnet is full
Deprecated Kubernetes API in 1.22 has been found. Avoid using deprecated APIs.
Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn122IsFound (Deprecated Kubernetes API in 1.22 has been found)](https://aka.ms/aks-deprecated-k8s-api-1.22).
+## MySQL
+
+### Your Azure Database for MySQL - Flexible Server is vulnerable using weak, deprecated TLSv1 or TLSv1.1 protocols
+
+To support modern security standards, MySQL community edition discontinued support for communication over Transport Layer Security (TLS) 1.0 and 1.1 protocols. Microsoft will also soon stop supporting connections over TLSv1 and TLSv1.1 to Azure Database for MySQL - Flexible Server to comply with modern security standards. We recommend you upgrade your client driver to support TLSv1.2.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlTlsDeprecation (Your Azure Database for MySQL - Flexible Server is vulnerable using weak, deprecated TLSv1 or TLSv1.1 protocols)](https://aka.ms/encrypted_connection_deprecated_protocols).
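As a sketch of what an upgraded client looks like, the following Python example pins the connection to TLS 1.2 using mysql-connector-python; the server, user, and database names are placeholders.

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholders; substitute your own server and credentials.
conn = mysql.connector.connect(
    host="myserver.mysql.database.azure.com",
    user="myadmin",
    password="<password>",
    database="mydb",
    tls_versions=["TLSv1.2"],  # refuse TLSv1 and TLSv1.1 on the client side
)
cur = conn.cursor()
cur.execute("SHOW STATUS LIKE 'Ssl_version'")
print(cur.fetchone())  # e.g. ('Ssl_version', 'TLSv1.2')
conn.close()
```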
+
## Desktop Virtualization

### Permissions missing for start VM on connect
-We have determined you have enabled start VM on connect but didn't gave the Azure Virtual Desktop the rights to power manage VMs in your subscription. As a result your users connecting to host pools won't receive a remote desktop session. Review feature documentation for requirements.
+We have determined you enabled start VM on connect but didn't grant Azure Virtual Desktop the rights to power manage VMs in your subscription. As a result, your users connecting to host pools won't receive a remote desktop session. Review the feature documentation for requirements.
Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement).
Your Azure Cosmos DB accounts are configured with periodic backup. Continuous ba
Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md).
-## Insights
+## Monitor
### Repair your log alert rule

We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
-Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
+Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](https://aka.ms/aa_logalerts_queryrepair).
### Log alert rule was disabled

The alert rule was disabled by Azure Monitor as it was causing service issues. To enable the alert rule, contact support.
-Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
+Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](https://aka.ms/aa_logalerts_queryrepair).
## Key Vault
Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)
Reduce the table cache policy to match the usage patterns (query lookback period)
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](/azure/data-explorer/kusto/management/cachepolicy).
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy).
## Networking

### Resolve Azure Key Vault issue for your Application Gateway
-We've detected that one or more of your Application Gateways has been misconfigured to obtain their listener certificate(s) from Key Vault, which may result in operational issues. You should fix this misconfiguration immediately to avoid operational issues for your Application Gateway.
+We've detected that one or more of your Application Gateways is unable to obtain a certificate due to misconfigured Key Vault. You should fix this configuration immediately to avoid operational issues with your gateway.
-Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](../application-gateway/application-gateway-key-vault-common-errors.md).
+Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](https://aka.ms/agkverror).
### Application Gateway does not have enough capacity to scale out
-We've detected that your Application Gateway subnet does not have enough capacity for allowing scale out during high traffic conditions, which can cause downtime.
+We've detected that your Application Gateway subnet does not have enough capacity for allowing scale-out during high traffic conditions, which can cause downtime.
-Learn more about [Application gateway - AppgwRestrictedSubnetSpace (Application Gateway does not have enough capacity to scale out)](../application-gateway/application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway).
+Learn more about [Application gateway - AppgwRestrictedSubnetSpace (Application Gateway does not have enough capacity to scale out)](https://aka.ms/application-gateway-faq).
### Enable Traffic Analytics to view insights into traffic patterns across Azure resources

Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow. With traffic analytics, you can view top talkers across Azure and non-Azure deployments, investigate open ports, protocols, and malicious flows in your environment, and optimize your network deployment for performance. You can process flow logs at 10-minute and 60-minute processing intervals, giving you faster analytics on your traffic.
-Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](../network-watcher/traffic-analytics.md).
+Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](https://aka.ms/aa_enableta_learnmore).
## SQL Virtual Machine
Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should
A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit.
-Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](/azure/storage/blobs/storage-performance-checklist#what-to-do-when-approaching-a-scalability-target).
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit).
### Update to newer releases of the Storage Java v12 SDK for better reliability.
Deploying an app to a slot first and swapping it into production makes sure that
Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](../app-service/deploy-staging-slots.md).
+### Update Service Connector API Version
+
+We have identified API calls from an outdated Service Connector API version for resources under this subscription. We recommend switching to the latest Service Connector API version. You need to update your existing code or tools to use the latest API version.
+
+Learn more about [App service - UpgradeServiceConnectorAPI (Update Service Connector API Version)](/azure/service-connector).
+
+### Update Service Connector SDK to the latest version
+
+We have identified API calls from an outdated Service Connector SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [App service - UpgradeServiceConnectorSDK (Update Service Connector SDK to the latest version)](/azure/service-connector).
+
+## Azure Center for SAP
+
+### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP
+
+Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP.
+
+Learn more about [App Server Instance - VM_0001 (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces
+
+Azure Center for SAP solutions recommendation: Ensure Accelerated networking is enabled on all interfaces.
+
+Learn more about [Database Instance - NIC_0001_DB (Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces
+
+Azure Center for SAP solutions recommendation: Ensure Accelerated networking is enabled on all interfaces.
+
+Learn more about [App Server Instance - NIC_0001 (Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces
+
+Azure Center for SAP solutions recommendation: Ensure Accelerated networking is enabled on all interfaces.
+
+Learn more about [Central Server Instance - NIC_0001_ASCS (Azure Center for SAP recommendation: Ensure Accelerated networking is enabled on all interfaces)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP
+
+Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP.
+
+Learn more about [Central Server Instance - VM_0001_ASCS (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP
+
+Azure Center for SAP solutions recommendation: All VMs in SAP system should be certified for SAP.
+
+Learn more about [Database Instance - VM_0001_DB (Azure Center for SAP recommendation: All VMs in SAP system should be certified for SAP)](https://launchpad.support.sap.com/#/notes/1928533).
+
+### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET
+
+Azure Center for SAP recommendation: Ensure that all NICs for a system are attached to the same VNET.
+
+Learn more about [App Server Instance - AllVmsHaveSameVnetApp (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.).
+
+### Azure Center for SAP recommendation: Swap space on HANA systems should be 2GB
+
+Azure Center for SAP solutions recommendation: Swap space on HANA systems should be 2GB.
+
+Learn more about [Database Instance - SwapSpaceForSap (Azure Center for SAP recommendation: Swap space on HANA systems should be 2GB)](https://launchpad.support.sap.com/#/notes/1999997).
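As a quick way to verify the configured swap on a HANA VM, this minimal Linux-only Python sketch reads SwapTotal from /proc/meminfo; the 2 GB target comes from the recommendation above.

```python
# Minimal Linux-only check of configured swap space on the HANA VM.
with open("/proc/meminfo") as meminfo:
    for line in meminfo:
        if line.startswith("SwapTotal:"):
            swap_kb = int(line.split()[1])  # value is reported in kB
            print(f"SwapTotal: {swap_kb / 1024 / 1024:.2f} GiB")  # target ~2 GB
            break
```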
+
+### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET
+
+Azure Center for SAP recommendation: Ensure that all NICs for a system are attached to the same VNET.
+
+Learn more about [Central Server Instance - AllVmsHaveSameVnetAscs (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.).
+
+### Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET
+
+Azure Center for SAP recommendation: Ensure that all NICs for a system are attached to the same VNET.
+
+Learn more about [Database Instance - AllVmsHaveSameVnetDb (Azure Center for SAP recommendation: Ensure all NICs for a system are attached to the same VNET)](/azure/virtual-machines/workloads/sap/sap-deployment-checklist#:~:text=this%20article.-,Networking,-.).
+
+### Azure Center for SAP recommendation: Ensure network configuration is optimized for HANA and OS
+
+Azure Center for SAP solutions recommendation: Ensure network configuration is optimized for HANA and OS.
+
+Learn more about [Database Instance - NetworkConfigForSap (Azure Center for SAP recommendation: Ensure network configuration is optimized for HANA and OS)](https://launchpad.support.sap.com/#/notes/2382421).
## Next steps
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization ha
Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](/azure/azure-cache-for-redis/cache-troubleshoot-server#server-side-bandwidth-limitation).
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
### Improve your Cache and application performance when running with many connected clients

Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](/azure/azure-cache-for-redis/cache-faq#performance-considerations-around-connections).
+Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
### Improve your Cache and application performance when running with high server load

Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](/azure/azure-cache-for-redis/cache-troubleshoot-client#high-client-cpu-usage).
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
### Improve your Cache and application performance when running with high memory pressure

Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](/azure/azure-cache-for-redis/cache-troubleshoot-client#memory-pressure-on-redis-client).
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
-## Cognitive Service
+### Improve your Cache and application performance when memory rss usage is high.
+
+Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSS (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory).
+
+### Improve your Cache and application performance when memory rss usage is high.
+
+Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheUsedMemoryRSSHigh (Improve your Cache and application performance when memory rss usage is high.)](https://aka.ms/redis/recommendations/memory).
+
+### Improve your Cache and application performance when running with high network bandwidth
+
+Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidthHigh (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
+
+### Improve your Cache and application performance when running with high memory pressure
+
+Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheUsedMemoryHigh (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
+
+### Improve your Cache and application performance when running with many connected clients
+
+Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheConnectedClientsHigh (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
+
+### Improve your Cache and application performance when running with high server load
+
+Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheServerLoadHigh (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
+
+### Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache.
+
+Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache. If the client host machine is running hot on memory, CPU, or network bandwidth, cache responses will not reach your application fast enough and could result in higher latency.
+
+Learn more about [Redis Cache Server - UnresponsiveClient (Cache instances perform best when the host machines where the client application runs are able to keep up with responses from the cache.)](/azure/azure-cache-for-redis/cache-troubleshoot-client).
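One rough way to tell whether the client host is the bottleneck is to measure round-trip latency from the application machine itself; the sketch below uses the redis-py client with a placeholder host name and access key.

```python
import time
import redis  # pip install redis

# Placeholders; substitute your own cache host and access key.
r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

# Average PING round trip as seen from the client host. Consistently high
# values under load point at the client (CPU, memory, bandwidth), not the cache.
start = time.perf_counter()
for _ in range(100):
    r.ping()
avg_ms = (time.perf_counter() - start) / 100 * 1000
print(f"average round trip: {avg_ms:.2f} ms")
```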
+
+## CDN
+
+### Upgrade SDK version recommendation
+
+The latest version of Azure Front Door Standard and Premium Client Library or SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimization in addition to new features that can improve your overall experience using Azure Front Door Standard and Premium.
+
+Learn more about [Front Door Profile - UpgradeCDNToLatestSDKLanguage (Upgrade SDK version recommendation)](https://aka.ms/afd/tiercomparison).
+
+## Cognitive Services
### Upgrade to the latest Cognitive Service Text Analytics API version
-Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as PII recognition, entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints we have opinion mining in SA endpoint, redacted text property in PII endpoint
+Upgrade to the latest API version to get the best results in terms of model quality, performance, and service availability. There are also new features available as separate endpoints starting from V3.0, such as personally identifiable information (PII) recognition, entity recognition, and entity linking. Changes in preview endpoints include opinion mining in the sentiment analysis endpoint and a redacted text property in the PII endpoint.
Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest Cognitive Service Text Analytics API version)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api).
Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest
Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability.
-Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](../cognitive-services/language-service/overview.md).
+Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api).
### Upgrade to the latest Cognitive Service Text Analytics SDK version
-Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability. Also there are new features available as new endpoints starting from V3.0 such as PII recognition, Entity recognition and entity linking available as separate endpoints. In terms of changes in preview endpoints we have Opinion Mining in SA endpoint, redacted text property in PII endpoint
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. There are also new features available as separate endpoints starting from V3.0, such as personally identifiable information (PII) recognition, entity recognition, and entity linking. Changes in preview endpoints include opinion mining in the sentiment analysis endpoint and a redacted text property in the PII endpoint.
Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest Cognitive Service Text Analytics SDK version)](/azure/cognitive-services/text-analytics/quickstarts/text-analytics-sdk?tabs=version-3-1&pivots=programming-language-csharp).
Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest
Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability.
-Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](../cognitive-services/language-service/overview.md).
+Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api).
## Communication services
Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user ex
When NVAs run at high CPU, packets can get dropped resulting in connection failures or high latency due to network retransmits. Your NVA is running at high CPU, so you should consider increasing the VM size as allowed by the NVA vendor's licensing requirements.
-Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](../virtual-machines/sizes.md).
+Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](https://aka.ms/NVAHighCPU).
### Use Managed disks to prevent disk I/O throttling

Your virtual machine disks belong to a storage account that has reached its scalability target, and is susceptible to I/O throttling. To protect your virtual machine from performance degradation and to simplify storage management, use Managed Disks.
-Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disks to prevent disk I/O throttling)](../virtual-machines/managed-disks-overview.md).
+Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disks to prevent disk I/O throttling)](https://aka.ms/aa_avset_manageddisk_learnmore).
### Convert Managed Disks from Standard HDD to Premium SSD for performance
Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of U
Unsupported Kubernetes version is detected. Ensure Kubernetes cluster runs with a supported version.
-Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](../aks/supported-kubernetes-versions.md).
+Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
## Data Factory
Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (U
A high volume of throttling has been detected in an event-based trigger that runs in your Data Factory resource. This is causing your pipeline runs to drop from the run queue. Review the trigger definition to resolve issues and increase performance.
-Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](../data-factory/how-to-create-event-trigger.md).
+Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](https://aka.ms/adf-create-event-trigger).
## MariaDB
Learn more about [MariaDB server - OrcasMariaDbMemoryCache (Move your MariaDB se
Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
-Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](../mariadb/concepts-audit-logs.md).
+Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](https://aka.ms/mariadb-audit-logs).
## MySQL
Learn more about [MySQL server - OrcasMySQLConnectionPooling (Improve MySQL conn
Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU heavy workload or a server generates a large number of audit logs over a short period of time. We recommend only logging the necessary events required for your audit purposes using the following server parameters: audit_log_events, audit_log_exclude_users, audit_log_include_users. If the CPU usage on your server is high due to your workload, we recommend increasing the server's vCores to improve performance.
-Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](../mysql/concepts-audit-logs.md).
+Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](https://aka.ms/mysql-audit-logs).
### Improve performance by optimizing MySQL temporary-table sizing
Learn more about [MySQL server - OrcasMySqlTmpTables (Improve performance by opt
Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver.
-Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](../mysql/howto-redirection.md).
+Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](https://aka.ms/azure_mysql_connection_redirection).
+
+### Increase the storage limit for MySQL Flexible Server
+
+Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlStorageUpsell (Increase the storage limit for MySQL Flexible Server)](https://aka.ms/azure_mysql_flexible_server_storage).
+
+### Scale the MySQL Flexible Server to a higher SKU
+
+Our telemetry indicates that your Flexible Server is exceeding the connection limits associated with your current SKU. A large number of failed connection requests may adversely affect server performance. To improve performance, we recommend increasing the number of vCores or switching to a higher SKU.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlConnectionUpsell (Scale the MySQL Flexible Server to a higher SKU)](https://aka.ms/azure_mysql_flexible_server_storage).
+
+### Increase the MySQL Flexible Server vCores.
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlCpuUpcell (Increase the MySQL Flexible Server vCores.)](https://aka.ms/azure_mysql_flexible_server_pricing).
+
+### Improve performance by optimizing MySQL temporary-table sizing.
+
+Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlTmpTable (Improve performance by optimizing MySQL temporary-table sizing.)](https://dev.mysql.com/doc/refman/8.0/en/internal-temporary-tables.html#internal-temporary-tables-engines).
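To see where the current settings stand before raising them, here is a small Python check with mysql-connector-python; the credentials are placeholders, and on Flexible Server the values are changed through server parameters rather than SET GLOBAL.

```python
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="myserver.mysql.database.azure.com",  # placeholder
    user="myadmin", password="<password>")
cur = conn.cursor()
# Both variables are reported in bytes.
cur.execute(
    "SHOW VARIABLES WHERE Variable_name IN ('tmp_table_size', 'max_heap_table_size')"
)
for name, value in cur:
    print(name, f"{int(value) // (1024 * 1024)} MB")
conn.close()
```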
+
+### Move your MySQL server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high memory usage for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlMemoryUpsell (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/azure_mysql_flexible_server_storage).
+
+### Add a MySQL Read Replica server
+
+Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+
+Learn more about [Azure Database for MySQL flexible server - OrcasMeruMysqlReadReplicaUpsell (Add a MySQL Read Replica server)](https://aka.ms/flexible-server-mysql-read-replicas).
## PostgreSQL
Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your Post
Our internal telemetry shows that you may have a read intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
-Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](../postgresql/howto-read-replicas-portal.md).
+Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).
### Increase the PostgreSQL server vCores
Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve
Our internal telemetry indicates that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
-Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](../postgresql/howto-optimize-query-stats-collection.md).
+Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
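To confirm what the parameter is currently set to, a minimal psycopg2 sketch follows; the connection details are placeholders, the SHOW works only where the module is enabled, and on Azure the value itself is changed through server parameters.

```python
import psycopg2  # pip install psycopg2-binary

# Placeholder connection details.
conn = psycopg2.connect(host="myserver.postgres.database.azure.com",
                        dbname="postgres", user="myadmin", password="<password>")
cur = conn.cursor()
cur.execute("SHOW pg_stat_statements.track")
print(cur.fetchone()[0])  # 'top', 'all', or 'none'
conn.close()
```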
### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting

Our internal telemetry indicates that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
-Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](../postgresql/concepts-query-store.md).
+Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).
### Increase the storage limit for PostgreSQL Flexible Server

Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
-Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](../postgresql/flexible-server/concepts-limits.md).
+Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
### Optimize logging settings by setting LoggingCollector to -1
Consider our new offering Azure Database for PostgreSQL Flexible Server that pro
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](../postgresql/how-to-upgrade-using-dump-and-restore.md).
+### Move your PostgreSQL Flexible Server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [PostgreSQL server - OrcasMeruMemoryUpsell (Move your PostgreSQL Flexible Server to Memory Optimized SKU)](https://aka.ms/azure_postgresql_flexible_server_pricing).
+
## Desktop Virtualization

### Improve user experience and connectivity by deploying VMs closer to user's location.
You are seeing this advisor recommendation because HDInsight team's system log s
These conditions are indicators that your cluster is suffering from high write latencies. This could be due to heavy workload performed on your cluster. To improve the performance of your cluster, you may want to consider utilizing the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD-managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, it provides low write latency and better resiliency for your applications.

Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](../hdinsight/hbase/apache-hbase-accelerated-writes.md).

### More than 75% of your queries are full scan queries.
Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault
This recommendation surfaces all Data Explorer resources which exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown.
-Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](/azure/data-explorer/azure-advisor#correctly-size-azure-data-explorer-clusters-to-optimize-performance).
+Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance).
### Review table cache policies for Data Explorer tables

This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy). (You'll see the top 10 tables by query percentage that access out-of-cache data). The recommended action to improve the performance: Limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value.
-Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](/azure/data-explorer/kusto/management/cachepolicy).
+Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy).
### Reduce Data Explorer table cache policy for better performance

Reducing the table cache policy will free up unused data from the resource's cache and improve performance.
-Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](/azure/data-explorer/kusto/management/cachepolicy).
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy).
## Networking
Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables
Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](/azure/traffic-manager/traffic-manager-monitoring#endpoint-failover-and-recovery).
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
### Configure DNS Time to Live to 60 seconds

Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](../traffic-manager/traffic-manager-monitoring.md).
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
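To verify the TTL that clients actually observe on a profile's DNS name, here is a small check with dnspython; the profile name is a placeholder, and any DNS lookup tool shows the same value.

```python
import dns.resolver  # pip install dnspython

# Placeholder Traffic Manager profile name.
answer = dns.resolver.resolve("myprofile.trafficmanager.net", "CNAME")
print("TTL observed by clients:", answer.rrset.ttl, "seconds")
```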
### Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs
Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your
Under high traffic load, the VPN gateway may drop packets due to high CPU.
-Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](../virtual-machines/sizes.md).
+Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
### Consider increasing the size of your VNet Gateway SKU to address high P2S use
Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consi
Your Application Gateway has been running at high utilization recently, and under heavy load you may experience traffic loss or increased latency. It is important that you scale your Application Gateway according to your traffic, with a bit of a buffer, so that you are prepared for traffic surges or spikes and minimize the impact they may have on your QoS. The Application Gateway v1 SKU (Standard/WAF) supports manual scaling, and the v2 SKU (Standard_v2/WAF_v2) supports manual scaling and autoscaling. In the case of manual scaling, increase your instance count; if autoscaling is enabled, make sure your maximum instance count is set to a higher value so Application Gateway can scale out as the traffic increases.
-Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](../application-gateway/high-traffic-support.md).
+Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).
## SQL
Learn more about [Application gateway - HotAppGateway (Make sure you have enough
We have detected that you are missing table statistics which may be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
-Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-statistics.md).
+Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics).
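A minimal sketch of creating a missing statistics object from Python with pyodbc; the connection string, table, and column names are hypothetical placeholders.

```python
import pyodbc  # pip install pyodbc

conn = pyodbc.connect("<odbc-connection-string>")  # placeholder
cur = conn.cursor()
# Hypothetical table and column; statistics let the optimizer estimate
# cardinality and build a better query plan.
cur.execute("CREATE STATISTICS stats_order_date ON dbo.FactOrders (order_date)")
conn.commit()
conn.close()
```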
### Remove data skew to increase query performance

We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
-Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute#how-to-tell-if-your-distribution-column-is-a-good-choice).
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew).
### Update statistics on table columns

We have detected that you do not have up-to-date table statistics which may be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
-Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-statistics.md).
+Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics).
### Right-size overutilized SQL Databases
Learn more about [SQL database - sqlRightsizePerformance (Right-size overutilize
We have detected that you had high cache used percentage with a low hit percentage. This indicates high cache eviction which can impact the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md).
+Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache).
### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse

We have detected that you had high tempdb utilization which can impact the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-tempdb).
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb).
### Convert tables to replicated tables with SQL Data Warehouse

We have detected that you may benefit from using replicated tables. Replicated tables avoid costly data movement operations and significantly increase the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](../synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md).
+Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
### Split staged files in the storage account to increase load performance

We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more to maximize the parallelism of your load.
-Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
### Increase batch size when loading to maximize load throughput, data compression, and query performance

We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K and 1M rows.
-Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](/azure/synapse-analytics/sql/data-loading-best-practices#increase-batch-size-when-using-sqlbulkcopy-api-or-bcp).
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
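The batching idea is independent of the loading utility; below is a minimal Python sketch that chunks an arbitrary row stream into batches in the recommended 100K-1M range.

```python
def batches(rows, size=500_000):
    """Yield lists of rows sized for bulk loading (100K-1M per batch)."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Each yielded batch would be handed to your loader, for example one
# SQLBulkCopy/BCP invocation or one executemany call per batch.
```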
### Co-locate the storage account within the same region to minimize latency when loading

We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
-Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
## Storage
Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the sto
When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
-Learn more about [Storage Account - StorageCallPutBlob (Use \"Put Blob\" for blobs smaller than 256 MB)](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs).
+Learn more about [Storage Account - StorageCallPutBlob (Use \"Put Blob\" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
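As an illustration, uploading a small file with the Azure CLI lets the client commit it in a single write when it is at or below the Put Blob size limit; the account, container, and file names below are placeholders.

```azurecli-interactive
# Hypothetical example: upload a small block blob; for files at or below the
# Put Blob size limit the client can commit the blob in a single write.
az storage blob upload --account-name mystorageaccount \
    --container-name mycontainer --name report.json --file ./report.json
```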
### Upgrade your Storage Client Library to the latest version for better reliability and performance
The latest version of the Storage Client Library/SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimizations in addition to new features that can improve your overall experience using Azure Storage.
-Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](/nuget/consume-packages/install-use-packages-visual-studio).
+Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
### Upgrade to Standard SSD Disks for consistent and improved performance
The latest version of Storage Client Library/ SDK contains fixes to issues repor
One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
-Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](../storage/common/storage-account-overview.md).
+Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
We have noticed your Unmanaged HDD Disk is approaching performance targets. Azur
Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
-### No Snapshots Detected
-We have observed that there are no snapshots of your file shares. This means you are not protected from accidental file deletion or file corruption. Please enable snapshots to protect your data. One way to do this is through Azure
+## Subscription
+
+### Experience more predictable, consistent latency with a private connection to Azure
+
+Improve the performance, privacy, and reliability of your business-critical apps by extending your on-premises networks to Azure with Azure ExpressRoute. Establish private ExpressRoute connections directly from your WAN, through a cloud exchange facility, or through POP and IPVPN connections.
-Learn more about [Storage Account - EnableSnapshots (No Snapshots Detected)](../backup/azure-file-share-backup-overview.md).
+Learn more about [Subscription - AzureExpressRoute (Experience more predictable, consistent latency with a private connection to Azure)](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager).
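A minimal sketch of provisioning a circuit with the Azure CLI; the provider, peering location, and bandwidth values are placeholders to be replaced with your connectivity partner's details.

```azurecli-interactive
# Hypothetical example: create an ExpressRoute circuit through a connectivity provider.
az network express-route create --name myCircuit \
    --resource-group myResourceGroup --location eastus \
    --provider "Equinix" --peering-location "Washington DC" \
    --bandwidth 200 --sku-family MeteredData --sku-tier Standard
```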
## Synapse
Learn more about [Storage Account - EnableSnapshots (No Snapshots Detected)](../
Clustered columnstore tables organize data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed row group.
-Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](../synapse-analytics/sql/best-practices-dedicated-sql-pool.md#optimize-clustered-columnstore-tables).
+Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
+
+### CCI Tables with Deleted Records Over the Recommended Threshold
+
+Deleting a row from a compressed row group only logically marks the row as deleted. The row remains in the compressed row group until the partition or table is rebuilt.
+
+Learn more about [Synapse workspace - SynapseCCIHealthDeletedRowgroups (CCI Tables with Deleted Records Over the Recommended Threshold)](https://aka.ms/AzureSynapseCCIDeletedRowGroups).
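Rebuilding the index physically removes the logically deleted rows. A minimal sketch using `sqlcmd`, assuming placeholder server, database, table, and credential names:

```azurecli-interactive
# Hypothetical example: rebuild all indexes on a CCI table to reclaim space
# from logically deleted rows.
sqlcmd -S myserver.sql.azuresynapse.net -d mydatabase \
    -U myuser -P 'myPassword' \
    -Q "ALTER INDEX ALL ON dbo.FactSales REBUILD;"
```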
### Update SynapseManagementClient SDK Version
The new SynapseManagementClient uses .NET SDK 4.0 or above.
-Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](/dotnet/api/microsoft.azure.management.synapse.synapsemanagementclient).
+Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
## Web
Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update
Your app served more than 1,000 requests per day for the past 3 days. Your app may benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and a doubled memory-to-core ratio compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
-Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](../app-service/app-service-configure-premium-tier.md).
+Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).
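As a sketch, the upgrade can be done with a single CLI call; the plan and resource group names below are placeholders.

```azurecli-interactive
# Hypothetical example: move an existing App Service plan to the Premium V2 tier.
az appservice plan update --name myAppServicePlan \
    --resource-group myResourceGroup --sku P1V2
```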
### Check outbound connections from your App Service resource
Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
-Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](/azure/app-service/app-service-best-practices#socketresources).
-
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
## Next steps
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest F
API Management service failed to refresh the hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and the API Management service identity is granted secret read access. Otherwise, the API Management service will not be able to retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
-Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](../api-management/configure-custom-domain.md).
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
### SSL/TLS renegotiation blocked
SSL/TLS renegotiation attempt blocked. Renegotiation happens when a client certi
Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](../api-management/api-management-howto-mutual-certificates-for-clients.md).
-## Cache
+## App
+
+### Increase the minimal replica count for your container app
+
+We detected that the minimal replica count set for your container app may be lower than optimal. Consider increasing the minimal replica count for better availability.
+
+Learn more about [Resource - ContainerAppMinimalReplicaCountTooLow (Increase the minimal replica count for your container app)](https://aka.ms/containerappscalingrules).
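A minimal sketch of raising the scale floor with the Azure CLI; the app name, resource group, and replica counts are placeholders.

```azurecli-interactive
# Hypothetical example: raise the minimum replica count so at least two
# replicas stay warm for availability.
az containerapp update --name my-container-app \
    --resource-group myResourceGroup --min-replicas 2 --max-replicas 10
```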
+
+## Cache for Redis
### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.
Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the reservation of memory for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting available in the advanced settings blade.
-Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](/azure/azure-cache-for-redis/cache-configure#memory-policies).
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
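The same setting can be changed from the CLI; a sketch assuming a placeholder cache name and a 125-MB reservation, which you should size to your cache.

```azurecli-interactive
# Hypothetical example: increase the memory reserved for fragmentation on a cache.
az redis update --name myCache --resource-group myResourceGroup \
    --set "redisConfiguration.maxfragmentationmemory-reserved"="125"
```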
+
+## CDN
+
+### Switch Secret version to 'Latest' for the Azure Front Door customer certificate
+We recommend configuring the Azure Front Door customer certificate secret to 'Latest' for the AFD to refer to the latest secret version in Azure Key Vault, so that the secret can be automatically rotated.
+
+Learn more about [Front Door Profile - SwitchVersionBYOC (Switch Secret version to 'Latest' for the Azure Front Door customer certificate)](/azure/frontdoor/standard-premium/how-to-configure-https-custom-domain#certificate-renewal-and-changing-certificate-types).
## Compute
### Enable Backups on your Virtual Machines
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on yo
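For a non-classic VM, enabling backup is one CLI call once a Recovery Services vault exists; the vault, VM, and policy names below are placeholders.

```azurecli-interactive
# Hypothetical example: protect a VM with the vault's default backup policy.
az backup protection enable-for-vm --resource-group myResourceGroup \
    --vault-name myRecoveryVault --vm myVM --policy-name DefaultPolicy
```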
We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
-Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](/azure/virtual-machines/disks-types#premium-ssd).
+Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
### Enable virtual machine replication to protect your applications from regional outage
Virtual machines that do not have replication enabled to another region are not resilient to regional outages. Replicating the machines drastically reduces any adverse business impact during an Azure region outage. We highly recommend enabling replication of all the business-critical virtual machines from the list below so that in the event of an outage, you can quickly bring up your machines in a remote Azure region.
+Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).
-Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](../site-recovery/azure-to-azure-quickstart.md).
-
-### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost
+### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost
-We have identified that your VM is using premium unmanaged disks that can be migrated to managed disks at no additional cost. Azure Managed Disks provides higher resiliency, simplified service management, higher scale target and more choices among several disk types. This upgrade can be done through the portal in less than 5 minutes.
+We have identified that your VM is using premium unmanaged disks that can be migrated to managed disks at no extra cost. Azure Managed Disks provides higher resiliency, simplified service management, higher scale target and more choices among several disk types. This upgrade can be done through the portal in less than 5 minutes.
-Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost)](../virtual-machines/managed-disks-overview.md).
+Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no extra cost)](https://aka.ms/md_overview).
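A minimal sketch of the migration with the Azure CLI, assuming a placeholder VM name; the VM must be deallocated first, and the conversion is not reversible.

```azurecli-interactive
# Hypothetical example: stop the VM, migrate its unmanaged disks to managed
# disks, then start it again.
az vm deallocate --resource-group myResourceGroup --name myVM
az vm convert --resource-group myResourceGroup --name myVM
az vm start --resource-group myResourceGroup --name myVM
```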
### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery
Using IP Address based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. It is advised to use Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags to allow connectivity to Azure Site Recovery services for the machines.
-Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](/azure/site-recovery/azure-to-azure-about-networking#outbound-connectivity-using-service-tags).
+Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](https://aka.ms/azure-site-recovery-using-service-tags).
### Use Managed Disks to improve data reliability
Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units are not resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
-Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](../virtual-machines/managed-disks-overview.md).
+Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
### Check Point Virtual Machine may lose Network Connectivity.
-We have identified that your Virtual Machine might be running a version of Check Point image that has been known to lose network connectivity in the event of a platform servicing operation. It is recommended that you upgrade to a newer version of the image that addresses this issue. Please contact Check Point for further instructions on how to upgrade your image.
+We have identified that your Virtual Machine might be running a version of Check Point image that has been known to lose network connectivity in the event of a platform servicing operation. It is recommended that you upgrade to a newer version of the image that addresses this issue. Contact Check Point for further instructions on how to upgrade your image.
Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point Virtual Machine may lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Acces
Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. This can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
-Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](../postgresql/concepts-logical.md).
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding).
### Improve PostgreSQL availability by removing inactive logical replication slots
Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. THIS NEEDS IMMEDIATE ATTENTION. This can result in degraded server performance and unavailability due to WAL file retention and buildup of snapshot files. To improve performance and availability, we STRONGLY recommend that you IMMEDIATELY either delete the inactive replication slots, or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and is close to the current LSN of the server.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](../postgresql/flexible-server/concepts-logical.md#monitoring).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
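You can inspect and drop slots with any PostgreSQL client. A sketch using `psql`, with placeholder server, user, and slot names:

```azurecli-interactive
# Hypothetical example: list replication slots, then drop one that is inactive.
psql "host=myserver.postgres.database.azure.com dbname=postgres user=myadmin sslmode=require" \
    -c "SELECT slot_name, active FROM pg_replication_slots;" \
    -c "SELECT pg_drop_replication_slot('my_inactive_slot');"
```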
## IoT Hub
Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSq
Some or all of your devices are using an outdated SDK, and we recommend that you upgrade to a supported version of the SDK. See the details in the recommendation.
-Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks).
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
## Cosmos DB
Learn more about [Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your o
### Configure your Azure Cosmos DB containers with a partition key
-Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Please migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
+Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
Learn more about [Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey).
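New containers take a partition key at creation time; a sketch with placeholder account, database, container, and key path values (for the SQL API).

```azurecli-interactive
# Hypothetical example: create a partitioned container to migrate data into.
az cosmosdb sql container create --account-name myCosmosAccount \
    --resource-group myResourceGroup --database-name myDatabase \
    --name myNewContainer --partition-key-path "/customerId"
```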
Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure F
Starting July 1, 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
-Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](../hdinsight/hdinsight-release-notes.md).
+Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).
### Deprecation of Older Spark Versions in HDInsight Spark cluster
Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft.
-Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](../hdinsight/spark/migrate-versions.md).
+Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).
### Enable critical updates to be applied to your HDInsight clusters
-HDInsight service is applying an important certificate related update to your cluster. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Please take actions to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+The HDInsight service is applying an important certificate-related update to your cluster. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Take action to allow the HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate y
### Drop and recreate your HDInsight clusters to apply critical updates
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters. Please drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters. Drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable.
Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
### Apply critical updates to your HDInsight clusters
-The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Please remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we failed to apply the update to your clusters.
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we failed to apply the update to your clusters.
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md).
Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical
You're receiving this notice because you have one or more active A8, A9, A10 or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) will be retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 will be deallocated. Migrate your affected clusters to another HDInsight supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see the 'Learn More' link or contact us at askhdinsight@microsoft.com.
-Learn more about [HDInsight cluster - VMDeprecation (Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
+Learn more about [HDInsight cluster - VM Deprecation (Action required: Migrate your A8–A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
## Hybrid Compute
Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disr
Upgrade to the latest agent version for the best Azure Arc enabled Kubernetes experience, improved stability and new functionality.
-Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](../azure-arc/kubernetes/agent-upgrade.md).
+Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs).
## Media Services
### Increase Media Services quotas or limits to ensure continuity of service.
-Please be advised that your media account is about to hit its quota limits. Please review current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, you should request quota limits to be increased for the entities that are closer to hitting quota limit. You can request quota limits to be increased by opening a ticket and adding relevant details to it. Please don't create additional Azure Media accounts in an attempt to obtain higher limits.
+Please be advised that your media account is about to hit its quota limits. Review the current usage of Assets, Content Key Policies and Stream Policies for the media account. To avoid any disruption of service, request quota limit increases for the entities that are close to hitting the quota limit. You can request quota limit increases by opening a ticket and adding relevant details to it. Do not create additional Azure Media accounts in an attempt to obtain higher limits.
-Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](/azure/media-services/latest/limits-quotas-constraints-reference).
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
## Networking
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more
### Move to production gateway SKUs from Basic gateways
-The VPN gateway Basic SKU is designed for development or testing scenarios. Please move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer higher number of tunnels, BGP support, active-active, custom IPsec/IKE policy in addition to higher stability and availability.
+The VPN gateway Basic SKU is designed for development or testing scenarios. Move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer a higher number of tunnels, BGP support, active-active configuration, and custom IPsec/IKE policy, in addition to higher stability and availability.
-Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsku).
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).
### Add at least one more endpoint to the profile, preferably in another Azure region
Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. It is also recommended that endpoints be in different regions.
-Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](../traffic-manager/traffic-manager-endpoint-types.md).
+Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).
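A sketch of adding a second Azure endpoint in another region; the profile name and the target web app's resource ID are placeholders.

```azurecli-interactive
# Hypothetical example: add an endpoint in a second region to an existing profile.
az network traffic-manager endpoint create --name myWestEndpoint \
    --profile-name myProfile --resource-group myResourceGroup \
    --type azureEndpoints --endpoint-status Enabled \
    --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/myWestApp"
```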
### Add an endpoint configured to "All (World)"
For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint where the Regional Grouping is configured to "All (World)" for geographic profiles will avoid traffic black holing and guarantee service remains available.
-Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \"All (World)\")](../traffic-manager/traffic-manager-manage-endpoints.md).
+Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \"All (World)\")](https://aka.ms/Rf7vc5).
### Add or move one endpoint to another Azure region
-All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability if all endpoints in one region fail.
+All endpoints associated with this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability in case all endpoints in one region fail.
-Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](../traffic-manager/traffic-manager-configure-performance-routing-method.md).
+Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Imple
### Avoid hostname override to ensure site integrity
-Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect urls being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST API's) in general are less sensitive to this. Please make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend.
+Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one used to access the backend can potentially lead to cookies or redirect URLs being broken. Note that this might not be the case in all situations, and that certain categories of backends (like REST APIs) are generally less sensitive to this. Make sure the backend is able to deal with this, or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend.
-Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](/azure/application-gateway/troubleshoot-app-service-redirection-app-service-url#alternate-solution-use-a-custom-domain-name).
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
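As a sketch, binding a custom domain to the backend web app lets Application Gateway pass the original host header through unchanged; the domain and app names below are placeholders, and the domain must already be verified.

```azurecli-interactive
# Hypothetical example: bind a custom domain to the backend web app so the
# incoming host header does not need to be overridden.
az webapp config hostname add --webapp-name myWebApp \
    --resource-group myResourceGroup --hostname www.contoso.com
```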
### Use ExpressRoute Global Reach to improve your design for disaster recovery
To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
+### Use NAT gateway for outbound connectivity
+
+Prevent risk of connectivity failures due to SNAT port exhaustion by using NAT gateway for outbound traffic from your virtual networks. NAT gateway scales dynamically and provides secure connections for traffic headed to the internet.
+
+Learn more about [Virtual network - natGateway (Use NAT gateway for outbound connectivity)](/azure/load-balancer/load-balancer-outbound-connections#2-associate-a-nat-gateway-to-the-subnet).
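A minimal sketch of attaching a NAT gateway to a subnet; the resource names are placeholders.

```azurecli-interactive
# Hypothetical example: create a NAT gateway with its own public IP and
# attach it to a subnet for all outbound traffic.
az network public-ip create --resource-group myResourceGroup \
    --name myNatIP --sku Standard
az network nat gateway create --resource-group myResourceGroup \
    --name myNatGateway --public-ip-addresses myNatIP --idle-timeout 4
az network vnet subnet update --resource-group myResourceGroup \
    --vnet-name myVNet --name mySubnet --nat-gateway myNatGateway
```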
+ ### Enable Active-Active gateways for redundancy
In an active-active configuration, both instances of the VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically.
-Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](../vpn-gateway/vpn-gateway-highlyavailable.md).
+Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
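As a hedged sketch, supplying a second public IP on update is one way to switch an existing gateway to active-active; the gateway and IP names are placeholders, and the operation can take a while to complete.

```azurecli-interactive
# Hypothetical example: add a second public IP and enable active-active mode
# on an existing VPN gateway (placeholder names).
az network public-ip create --resource-group myResourceGroup \
    --name myGatewayIP2
az network vnet-gateway update --resource-group myResourceGroup \
    --name myVpnGateway --public-ip-addresses myGatewayIP1 myGatewayIP2
```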
## Recovery Services
Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Rest
You are close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations will stop working when storage quota is exceeded.
-Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](../search/search-limits-quotas-capacity.md).
+Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
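A sketch of provisioning a Standard-tier service with the CLI; the service name is a placeholder, and note that content must be reindexed into the new service.

```azurecli-interactive
# Hypothetical example: create a Standard-tier search service with the larger
# per-partition storage quota.
az search service create --name my-search-service \
    --resource-group myResourceGroup --sku standard
```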
### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations will stop working when storage quota is exceeded.
-Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](../search/search-limits-quotas-capacity.md).
+Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations will no longer work.
-Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](../search/search-limits-quotas-capacity.md).
+Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
## Storage
Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to pro
We have identified that you are using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach the Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks, which do not have an account capacity limit. This migration can be done through the portal in less than 5 minutes.
-Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](/azure/storage/common/scalability-targets-standard-account#premium-performance-page-blob-storage).
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
## Web
Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Di
Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this, you could scale out your app.
-Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](/azure/app-service/app-service-best-practices#CPUresources).
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
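A sketch of scaling the plan out to more instances with the CLI; the plan name and worker count are placeholders.

```azurecli-interactive
# Hypothetical example: scale the App Service plan out to three instances.
az appservice plan update --name myAppServicePlan \
    --resource-group myResourceGroup --number-of-workers 3
```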
### Fix the backup database settings of your App Service resource
Your app's backups are consistently failing due to an invalid DB configuration. You can find more details in backup history.
-Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup.).
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc).
### Consider scaling up your App Service Plan SKU to avoid memory exhaustion
The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
-Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](/azure/app-service/app-service-best-practices#memoryresources).
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory).
### Scale up your App Service resource to remove the quota limit
-Your app is part of a shared App Service plan and has met its quota multiple times. After meeting a quota, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
+Your app is part of a shared App Service plan and has met its quota multiple times. Once quota is met, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
-Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](../app-service/overview-hosting-plans.md).
+Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp).
### Use deployment slots for your App Service resource
You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
-Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](../app-service/deploy-staging-slots.md).
+Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
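A minimal sketch of the slot workflow; the app and slot names are placeholders (slots require a Standard or higher plan).

```azurecli-interactive
# Hypothetical example: create a staging slot, then swap it into production
# once the new build has been validated.
az webapp deployment slot create --name myWebApp \
    --resource-group myResourceGroup --slot staging
az webapp deployment slot swap --name myWebApp \
    --resource-group myResourceGroup --slot staging --target-slot production
```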
### Fix the backup storage settings of your App Service resource
Your app's backups are consistently failing due to invalid storage settings. You can find more details in backup history.
-Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup).
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc).
### Move your App Service resource to Standard or higher and use deployment slots
You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
-Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](../app-service/deploy-staging-slots.md).
+Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
### Consider scaling out your App Service Plan to optimize user experience and availability.
We identified the below thread resulted in an unhandled exception for your App a
Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
## Next steps
Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview)
advisor Azure Advisor Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/azure-advisor-score.md
The calculation of the Advisor score can be summarized in four steps:
* Resources with long-standing recommendations will count more against your score.
* Resources that you postpone or dismiss in Advisor are removed from your score calculation entirely.
-Advisor applies this model at an Advisor category level to give an Advisor score for each category. **Security** uses a [secure score](../defender-for-cloud/secure-score-security-controls.md#introduction-to-secure-score) model. A simple average produces the final Advisor score.
+Advisor applies this model at an Advisor category level to give an Advisor score for each category. **Security** uses a [secure score](../defender-for-cloud/secure-score-security-controls.md#overview-of-secure-score) model. A simple average produces the final Advisor score.
## Advisor score FAQs
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Replace *myKeyVaultName* with the name of your key vault. You will also need a
```azurecli-interactive
# Retrieve the Key Vault Id and store it in a variable
-keyVaultId=$(az keyvault show --name myKeyVaultName --query "[id]" -o tsv)
+$keyVaultId=az keyvault show --name myKeyVaultName --query "[id]" -o tsv
# Retrieve the Key Vault key URL and store it in a variable
-keyVaultKeyUrl=$(az keyvault key show --vault-name myKeyVaultName --name myKeyName --query "[key.kid]" -o tsv)
+$keyVaultKeyUrl=az keyvault key show --vault-name myKeyVaultName --name myKeyName --query "[key.kid]" -o tsv
# Create a DiskEncryptionSet
az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl
Use the DiskEncryptionSet and resource groups you created on the prior steps, an
```azurecli-interactive
# Retrieve the DiskEncryptionSet value and set a variable
-desIdentity=$(az disk-encryption-set show -n myDiskEncryptionSetName -g myResourceGroup --query "[identity.principalId]" -o tsv)
+$desIdentity=az disk-encryption-set show -n myDiskEncryptionSetName -g myResourceGroup --query "[identity.principalId]" -o tsv
# Update security policy settings
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIdentity --key-permissions wrapkey unwrapkey get
Create a **new resource group** and AKS cluster, then use your key to encrypt th
```azurecli-interactive
# Retrieve the DiskEncryptionSet value and set a variable
-diskEncryptionSetId=$(az disk-encryption-set show -n mydiskEncryptionSetName -g myResourceGroup --query "[id]" -o tsv)
+$diskEncryptionSetId=az disk-encryption-set show -n mydiskEncryptionSetName -g myResourceGroup --query "[id]" -o tsv
# Create a resource group for the AKS cluster
az group create -n myResourceGroup -l myAzureRegionName
Review [best practices for AKS cluster security][best-practices-security]
[customer-managed-keys-linux]: ../virtual-machines/disk-encryption.md#customer-managed-keys [key-vault-generate]: ../key-vault/general/manage-with-cli2.md [supported-regions]: ../virtual-machines/disk-encryption.md#supported-regions
-[use-tags]: use-tags.md
+[use-tags]: use-tags.md
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Title: Configure kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure kubenet (basic) network in Azure Kubernetes Service (AKS) to deploy an AKS cluster into an existing virtual network and subnet. Previously updated : 06/02/2022 Last updated : 06/20/2022
This article shows you how to use *kubenet* networking to create and use a virtu
## Before you begin
-### [Azure CLI](#tab/azure-cli)
- You need the Azure CLI version 2.0.65 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-### [Azure PowerShell](#tab/azure-powershell)
-
-You need the Azure PowerShell version 7.5.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][install-azure-powershell].
---
## Overview of kubenet networking with your own subnet
In many environments, you have defined virtual networks and subnets with allocated IP address ranges. These virtual network resources are used to support multiple services and applications. To provide network connectivity, AKS clusters can use *kubenet* (basic networking) or Azure CNI (*advanced networking*).
Use *kubenet* when:
- You have limited IP address space.
- Most of the pod communication is within the cluster.
-- You don't need advanced AKS features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
+- You don't need advanced AKS features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].
Use *Azure CNI* when:
For more information to help you decide which network model to use, see [Compare
## Create a virtual network and subnet
-### [Azure CLI](#tab/azure-cli)
- To get started with using *kubenet* and your own virtual network subnet, first create a resource group using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus* location: ```azurecli-interactive
az network vnet create \
--subnet-prefix 192.168.1.0/24
```
-### [Azure PowerShell](#tab/azure-powershell)
-
-To get started with using *kubenet* and your own virtual network subnet, first create a resource group using the [New-AzResourceGroup][new-azresourcegroup] cmdlet. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+Get the subnet resource ID and store it as a variable:
-```azurepowershell-interactive
-New-AzResourceGroup -Name myResourceGroup -Location eastus
+```azurecli-interactive
+SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)
```
-If you don't have an existing virtual network and subnet to use, create these network resources using the [New-AzVirtualNetwork][new-azvirtualnetwork] and [New-AzVirtualNetworkSubnetConfig][new-azvirtualnetworksubnetconfig] cmdlets. In the following example, the virtual network is named *myAKSVnet* with the address prefix of *192.168.0.0/16*. A subnet is created named *myAKSSubnet* with the address prefix *192.168.1.0/24*.
+## Create an AKS cluster in the virtual network
-```azurepowershell-interactive
-$myAKSSubnet = New-AzVirtualNetworkSubnetConfig -Name myAKSSubnet -AddressPrefix 192.168.1.0/24
-$params = @{
- ResourceGroupName = 'myResourceGroup'
- Location = 'eastus'
- Name = 'myAKSVnet'
- AddressPrefix = '192.168.0.0/16'
- Subnet = $myAKSSubnet
-}
-New-AzVirtualNetwork @params
-```
+Now create an AKS cluster in your virtual network and subnet using the [az aks create][az-aks-create] command.
-
+### Create an AKS cluster with system-assigned managed identities
-## Create a service principal and assign permissions
+You can create an AKS cluster using a system-assigned managed identity by running the following CLI command.
-### [Azure CLI](#tab/azure-cli)
-
-To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is used. The service principal needs to have permissions to manage the virtual network and subnet that the AKS nodes use. To create a service principal, use the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command:
+> [!NOTE]
+> When using a system-assigned identity, the Azure CLI grants the Network Contributor role to the system-assigned identity after the cluster is created.
+> A system-assigned managed identity is only supported with the Azure CLI. If you are using an ARM template or other clients, you need to use a [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities].
```azurecli-interactive
-az ad sp create-for-rbac
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 3 \
+ --network-plugin kubenet \
+ --vnet-subnet-id $SUBNET_ID
```
-The following example output shows the application ID and password for your service principal. These values are used in additional steps to assign a role to the service principal and then create the AKS cluster:
+> [!Note]
+> If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command.
-```output
-{
- "appId": "476b3636-5eda-4c0e-9751-849e70b5cfad",
- "displayName": "azure-cli-2019-01-09-22-29-24",
- "password": "tzG8Q~DRYSJtMPhajpHfYaG~.4Yp2VonoZfU9bjy",
- "tenant": "00000000-0000-0000-0000-000000000000"
-}
+```azurecli-interactive
+az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 3 \
+ --network-plugin kubenet --network-policy calico \
+ --vnet-subnet-id $SUBNET_ID
```
-To assign the correct delegations in the remaining steps, use the [az network vnet show][az-network-vnet-show] and [az network vnet subnet show][az-network-vnet-subnet-show] commands to get the required resource IDs. These resource IDs are stored as variables and referenced in the remaining steps:
+### Create an AKS cluster with user-assigned managed identities
-> [!NOTE]
-> If you are using CLI, you can skip this step. With ARM template or other clients, you need to do the below role assignment.
-
-```azurecli-interactive
-VNET_ID=$(az network vnet show --resource-group myResourceGroup --name myAKSVnet --query id -o tsv)
-SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)
-```
+#### Create or obtain a managed identity
-Now assign the service principal for your AKS cluster *Network Contributor* permissions on the virtual network using the [az role assignment create][az-role-assignment-create] command. Provide your own *\<appId>* as shown in the output from the previous command to create the service principal:
+If you don't have a managed identity, you should create one by running the [az identity create][az-identity-create] command:
```azurecli-interactive
-az role assignment create --assignee <appId> --scope $VNET_ID --role "Network Contributor"
+az identity create --name myIdentity --resource-group myResourceGroup
```
-### [Azure PowerShell](#tab/azure-powershell)
+The output should resemble the following:
-To allow an AKS cluster to interact with other Azure resources, an Azure Active Directory service principal is used. The service principal needs to have permissions to manage the virtual network and subnet that the AKS nodes use. To create a service principal, use the [New-AzADServicePrincipal][new-azadserviceprincipal] command:
-
-```azurepowershell-interactive
-$servicePrincipal = New-AzADServicePrincipal
+```output
+{
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "westus2",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+}
```
-The following example output shows the application ID and password for your service principal. These values are used in additional steps to assign a role to the service principal and then create the AKS cluster:
+If you have an existing managed identity, you can find the Principal ID by running the following command:
-```azurepowershell-interactive
-$servicePrincipal.AppId
-$servicePrincipal.PasswordCredentials[0].SecretText
-```
+```azurecli-interactive
+az identity show --ids <identity-resource-id>
+```
+
+The output should resemble the following:
```output
-476b3636-5eda-4c0e-9751-849e70b5cfad
-tzG8Q~DRYSJtMPhajpHfYaG~.4Yp2VonoZfU9bjy
+{
+ "clientId": "<client-id>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "eastus",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+}
```
-To assign the correct delegations in the remaining steps, use the [Get-AzVirtualNetwork][get-azvirtualnetwork] command to get the required resource IDs. These resource IDs are stored as variables and referenced in the remaining steps:
-
-> [!NOTE]
-> If you are using CLI, you can skip this step. With ARM template or other clients, you need to do the below role assignment.
+#### Add role assignment for managed identity
-```azurepowershell-interactive
-$myAKSVnet = Get-AzVirtualNetwork -ResourceGroupName myResourceGroup -Name myAKSVnet
-$VNET_ID = $myAKSVnet.Id
-$SUBNET_ID = $myAKSVnet.Subnets[0].Id
-```
+If you're using the Azure CLI, the role is added automatically and you can skip this step. If you're using an ARM template or other clients, you need to use the principal ID of the cluster managed identity to perform a role assignment.
-Now assign the service principal for your AKS cluster *Network Contributor* permissions on the virtual network using the [New-AzRoleAssignment][new-azroleassignment] cmdlet. Provide your application ID as shown in the output from the previous command to create the service principal:
+To assign the correct delegations in the remaining steps, use the [az network vnet show][az-network-vnet-show] and [az network vnet subnet show][az-network-vnet-subnet-show] commands to get the required resource IDs. These resource IDs are stored as variables and referenced in the remaining steps:
-```azurepowershell-interactive
-New-AzRoleAssignment -ApplicationId $servicePrincipal.AppId -Scope $VNET_ID -RoleDefinitionName "Network Contributor"
+```azurecli-interactive
+VNET_ID=$(az network vnet show --resource-group myResourceGroup --name myAKSVnet --query id -o tsv)
```
+Now assign the managed identity for your AKS cluster *Network Contributor* permissions on the virtual network using the [az role assignment create][az-role-assignment-create] command. Provide the *\<principalId>* shown in the output of the identity creation command:
-## Create an AKS cluster in the virtual network
-
-### [Azure CLI](#tab/azure-cli)
-
-You've now created a virtual network and subnet, and created and assigned permissions for a service principal to use those network resources. Now create an AKS cluster in your virtual network and subnet using the [az aks create][az-aks-create] command. Define your own service principal *\<appId>* and *\<password>*, as shown in the output from the previous command to create the service principal.
+```azurecli-interactive
+az role assignment create --assignee <control-plane-identity-principal-id> --scope $VNET_ID --role "Network Contributor"
+```
-The following IP address ranges are also defined as part of the cluster create process:
+Example:
+```azurecli-interactive
+az role assignment create --assignee 22222222-2222-2222-2222-222222222222 --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myAKSVnet" --role "Network Contributor"
+```
-* The *--service-cidr* is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+> [!NOTE]
+> Permissions granted to your cluster's managed identity may take up to 60 minutes to populate.
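Because of that propagation delay, you may want to confirm the assignment exists before creating the cluster. A quick check, for example, is listing the role assignments at the virtual network scope:

```azurecli-interactive
# Verify the Network Contributor assignment on the VNet
az role assignment list --assignee <control-plane-identity-principal-id> --scope $VNET_ID -o table
```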
-* The *--dns-service-ip* address should be the *.10* address of your service IP address range.
+#### Create an AKS cluster
-* The *--pod-cidr* should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
- * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes.
- * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
- * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
-
-* The *--docker-bridge-address* lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
+Now you can create an AKS cluster using the user-assigned managed identity by running the following CLI command. Provide the control plane identity resource ID via the `--assign-identity` parameter:
```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --network-plugin kubenet \
- --service-cidr 10.0.0.0/16 \
- --dns-service-ip 10.0.0.10 \
- --pod-cidr 10.244.0.0/16 \
- --docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id $SUBNET_ID \
- --service-principal <appId> \
- --client-secret <password>
-```
-
-> [!Note]
-> If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
-
-```azurecli-interactive
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 3 \
- --network-plugin kubenet --network-policy calico \
- --service-cidr 10.0.0.0/16 \
- --dns-service-ip 10.0.0.10 \
- --pod-cidr 10.244.0.0/16 \
- --docker-bridge-address 172.17.0.1/16 \
- --vnet-subnet-id $SUBNET_ID \
- --service-principal <appId> \
- --client-secret <password>
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-You've now created a virtual network and subnet, and created and assigned permissions for a service principal to use those network resources. Now create an AKS cluster in your virtual network and subnet using the [New-AzAksCluster][new-azakscluster] cmdlet. Define your own service principal *\<appId>* and *\<password>*, as shown in the output from the previous command to create the service principal.
-
-The following IP address ranges are also defined as part of the cluster create process:
-
-* The *-ServiceCidr* is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
-
-* The *-DnsServiceIP* address should be the *.10* address of your service IP address range.
-
-* The *-PodCidr* should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
- * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes.
- * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *-PodCidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
- * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
-
-* The *-DockerBridgeCidr* lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
-
-```azurepowershell-interactive
-# Create a PSCredential object using the service principal's ID and secret
-$password = ConvertTo-SecureString -String $servicePrincipal.PasswordCredentials[0].SecretText -AsPlainText -Force
-$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $servicePrincipal.AppId, $password
-
-$params = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myAKSCluster'
- NodeCount = 3
- NetworkPlugin = 'kubenet'
- ServiceCidr = '10.0.0.0/16'
- DnsServiceIP = '10.0.0.10'
- PodCidr = '10.244.0.0/16'
- DockerBridgeCidr = '172.17.0.1/16'
- NodeVnetSubnetID = $SUBNET_ID
- ServicePrincipalIdAndSecret = $credential
-}
-New-AzAksCluster @params
+ --enable-managed-identity \
+ --assign-identity <identity-resource-id>
```
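After creation, one way to sanity-check the configuration is to query the identity attached to the cluster (standard `az aks show` usage; names match the examples above):

```azurecli-interactive
# Inspect the identity attached to the cluster
az aks show --resource-group myResourceGroup --name myAKSCluster --query identity
```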
-> [!Note]
-> If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
-
-```azurepowershell-interactive
-$params = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myAKSCluster'
- NodeCount = 3
- NetworkPlugin = 'kubenet'
- NetworkPolicy = 'calico'
- ServiceCidr = '10.0.0.0/16'
- DnsServiceIP = '10.0.0.10'
- PodCidr = '10.244.0.0/16'
- DockerBridgeCidr = '172.17.0.1/16'
- NodeVnetSubnetID = $SUBNET_ID
- ServicePrincipalIdAndSecret = $credential
-}
-New-AzAksCluster @params
-```
---

When you create an AKS cluster, a network security group and route table are automatically created. These network resources are managed by the AKS control plane. The network security group is automatically associated with the virtual NICs on your nodes. The route table is automatically associated with the virtual network subnet. Network security group rules and route tables are automatically updated as you create and expose services.

## Bring your own subnet and route table with kubenet
Kubenet networking requires organized route table rules to successfully route requests.
Limitations:
-* Permissions must be assigned before cluster creation, ensure you are using a service principal with write permissions to your custom subnet and custom route table.
* A custom route table must be associated to the subnet before you create the AKS cluster.
* The associated route table resource cannot be updated after cluster creation. While the route table resource cannot be updated, custom rules can be modified on the route table.
* Each AKS cluster must use a single, unique route table for all subnets associated with the cluster. You cannot reuse a route table with multiple clusters due to the potential for overlapping pod CIDRs and conflicting routing rules.
-* You can't provide your own subnet and route table with a system-assigned managed identity. To provide your own subnet and route table, you must use a [user-assigned managed identity][user-assigned managed identity], assign permissions before cluster creation, and ensure the user-assigned identity has write permissions to your custom subnet and custom route table.
+* For a system-assigned managed identity, providing your own subnet and route table is supported only via the Azure CLI, because the CLI adds the required role assignment automatically. If you're using an ARM template or other clients, you must use a [user-assigned managed identity][Create an AKS cluster with user-assigned managed identities], assign permissions before cluster creation, and ensure the user-assigned identity has write permissions to your custom subnet and custom route table.
* Using the same route table with multiple AKS clusters isn't supported.

After you create a custom route table and associate it to your subnet in your virtual network, you can create a new AKS cluster that uses your route table. You need to use the subnet ID for where you plan to deploy your AKS cluster. This subnet also must be associated with your custom route table. An example of creating and associating a route table follows.
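As an illustrative sketch (resource names are placeholders, and the next-hop address depends on your network appliance), creating and associating a custom route table before cluster creation might look like this:

```azurecli-interactive
# Create a route table and a default route through a virtual appliance
az network route-table create -g MyResourceGroup -n MyRouteTable
az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable \
    -n MyDefaultRoute --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4

# Associate the route table with the subnet before creating the AKS cluster
az network vnet subnet update -g MyResourceGroup --vnet-name MyVnet -n MySubnet \
    --route-table MyRouteTable
```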
-### [Azure CLI](#tab/azure-cli)
```azurecli-interactive
# Find your subnet ID
az network vnet subnet list --resource-group myResourceGroup --vnet-name myAKSVnet --query "[].id" --output tsv
```

```azurecli-interactive
-# Create a kubernetes cluster with a custom subnet preconfigured with a route table
-az aks create -g MyResourceGroup -n MyManagedCluster --vnet-subnet-id <MySubnetID>
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell-interactive
-# Find your subnet ID
-Get-AzVirtualNetwork -ResourceGroupName MyResourceGroup -Name myAKSVnet |
- Select-Object -ExpandProperty subnets |
- Select-Object -Property Id
-```
-
-```azurepowershell-interactive
-# Create a kubernetes cluster with a custom subnet preconfigured with a route table
-New-AzAksCluster -ResourceGroupName MyResourceGroup -Name MyManagedCluster -NodeVnetSubnetID <MySubnetID>
+# Create a Kubernetes cluster with a custom subnet preconfigured with a route table
+az aks create -g MyResourceGroup -n MyManagedCluster --vnet-subnet-id <MySubnetID-resource-id>
```

## Next steps

With an AKS cluster deployed into your existing virtual network subnet, you can now use the cluster as normal. Get started with [creating new apps using Helm][develop-helm] or [deploy existing apps using Helm][use-helm].
<!-- LINKS - Internal -->
[install-azure-cli]: /cli/azure/install-azure-cli
-[install-azure-powershell]: /powershell/azure/install-az-ps
+[az-identity-create]: /cli/azure/identity#az_identity_create
[aks-network-concepts]: concepts-network.md
[aks-network-nsg]: concepts-network.md#network-security-groups
[az-group-create]: /cli/azure/group#az_group_create
-[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create
-[new-azvirtualnetwork]: /powershell/module/az.network/new-azvirtualnetwork
-[new-azvirtualnetworksubnetconfig]: /powershell/module/az.network/new-azvirtualnetworksubnetconfig
[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
-[new-azadserviceprincipal]: /powershell/module/az.resources/new-azadserviceprincipal
[az-network-vnet-show]: /cli/azure/network/vnet#az_network_vnet_show
-[get-azvirtualnetwork]: /powershell/module/az.network/get-azvirtualnetwork
[az-network-vnet-subnet-show]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_show
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
-[new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment
[az-aks-create]: /cli/azure/aks#az_aks_create
-[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
[byo-subnet-route-table]: #bring-your-own-subnet-and-route-table-with-kubenet
[develop-helm]: quickstart-helm.md
[use-helm]: kubernetes-helm.md
[express-route]: ../expressroute/expressroute-introduction.md
[network-comparisons]: concepts-network.md#compare-network-models
[custom-route-table]: ../virtual-network/manage-route-table.md
-[user-assigned managed identity]: use-managed-identity.md#bring-your-own-control-plane-managed-identity
+[Create an AKS cluster with user-assigned managed identities]: configure-kubenet.md#create-an-aks-cluster-with-user-assigned-managed-identities
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
When you provision a private AKS cluster, AKS by default creates a private FQDN with a private DNS zone and an additional public FQDN with a corresponding A record in Azure public DNS.
Private clusters are available in public regions, Azure Government, and Azure China 21Vianet regions where [AKS is supported](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service).
-> [!NOTE]
-> Azure Government sites are supported, however US Gov Texas isn't currently supported because of missing Private Link support.
## Prerequisites

* Azure CLI >= 2.28.0 or Azure CLI with aks-preview extension 0.5.29 or later.
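As a quick sketch using standard Azure CLI commands to check your version and install or update the extension:

```azurecli-interactive
# Check the installed Azure CLI version
az --version

# Install or update the aks-preview extension
az extension add --name aks-preview
az extension update --name aks-preview
```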
api-management How To Deploy Self Hosted Gateway Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-docker.md
This article provides the steps for deploying the self-hosted gateway component of Azure API Management to Docker.
docker run -d -p 80:8080 -p 443:8081 --name <gateway-name> --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:<tag>
```
-9. Execute the command. The command instructs your Docker environment to run the container using [container image](https://aka.ms/apim/sputnik/dhub) downloaded from the Microsoft Container Registry, and to map the container's HTTP (8080) and HTTPS (8081) ports to ports 80 and 443 on the host.
+9. Execute the command. The command instructs your Docker environment to run the container using a [container image](https://aka.ms/apim/sputnik/registry-portal) from the Microsoft Artifact Registry, and to map the container's HTTP (8080) and HTTPS (8081) ports to ports 80 and 443 on the host.
10. Run the following command to check if the gateway container is running:

    ```console
    docker ps
    ```
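If the container is listed but the gateway isn't responding, you can, for example, inspect its logs with the standard Docker CLI (the name matches the one passed to `docker run`):

```console
docker logs <gateway-name>
```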
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster.
6. Select the **\<gateway-name\>.yml** file link and download the YAML file.
7. Select the **copy** icon at the lower-right corner of the **Deploy** text box to save the `kubectl` commands to the clipboard.
8. Paste commands to the terminal (or command) window. The first command creates a Kubernetes secret that contains the access token generated in step 4. The second command applies the configuration file downloaded in step 6 to the Kubernetes cluster and expects the file to be in the current directory.
-9. Run the commands to create the necessary Kubernetes objects in the [default namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and start self-hosted gateway pods from the [container image](https://aka.ms/apim/sputnik/dhub) downloaded from the Microsoft Container Registry.
+9. Run the commands to create the necessary Kubernetes objects in the [default namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and start self-hosted gateway pods from the [container image](https://aka.ms/apim/sputnik/registry-portal) downloaded from the Microsoft Artifact Registry.
10. Run the following command to check if the deployment succeeded. Note that it might take a little time for all the objects to be created and for the pods to initialize.

    ```console
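    # Sketch: listing deployments is one way to confirm the gateway pods become ready
    kubectl get deployments
    ```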
api-management Self Hosted Gateway Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-migration-guide.md
+
+ Title: Self-hosted gateway migration guide - Azure API Management
+description: Learn how to migrate the Azure API Management self-hosted gateway to v2.
+
+documentationcenter: ''
+ Last updated : 03/08/2022
+# Self-hosted gateway migration guide
+
+This article explains how to migrate existing self-hosted gateway deployments to self-hosted gateway v2.
+
+## What's new?
+
+As we strive to make it easier for customers to deploy our self-hosted gateway, we've **introduced a new configuration API** that removes the dependency on Azure Storage, unless you're using [API inspector](api-management-howto-api-inspector.md) or quotas.
+
+The new configuration API allows customers to more easily adopt, deploy, and operate our self-hosted gateway in their existing infrastructure.
+
+We have [introduced new container image tags](how-to-self-hosted-gateway-on-kubernetes-in-production.md#container-image-tag) to let customers choose the best way to try our gateway and deploy it in production.
+
+To help customers run our gateway in production we've extended [our production guidance](how-to-self-hosted-gateway-on-kubernetes-in-production.md) to cover how to autoscale the gateway, and deploy it for high availability in your Kubernetes cluster.
+
+Learn more about the connectivity of our gateway, our new infrastructure requirements, and what happens if connectivity is lost in [this article](self-hosted-gateway-overview.md#connectivity-to-azure).
+
+## Prerequisites
+
+Before you can migrate to self-hosted gateway v2, you need to ensure your infrastructure [meets the requirements](self-hosted-gateway-overview.md#gateway-v2-requirements).
+
+## Migrating to self-hosted gateway v2
+
+Migrating to self-hosted gateway v2 requires a few small steps:
+
+1. [Use the new container image](#container-image)
+2. [Use the new configuration API](#using-the-new-configuration-api)
+3. [Meet minimal security requirements](#meet-minimal-security-requirements)
+
+### Container Image
+
+Change the image tag in your deployment scripts to use `2.0.0` or above.
+
+Alternatively, choose one of our other [container image tags](self-hosted-gateway-overview.md#container-images).
+
+You can find a full list of available tags [here](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list) or find us on [Docker Hub](https://hub.docker.com/_/microsoft-azure-api-management-gateway).
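For example, pulling a pinned image for a Docker-based deployment might look like the following (the `2.0.0` tag is the minimum version mentioned above):

```console
docker pull mcr.microsoft.com/azure-api-management/gateway:2.0.0
```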
+
+### Using the new configuration API
+
+In order to migrate to self-hosted gateway v2, customers need to use our new Configuration API v2.
+
+Currently, Azure API Management provides the following Configuration APIs for self-hosted gateway:
+
+| Configuration Service | URL | Supported | Requirements |
+| | | | |
+| v2 | `{name}.configuration.azure-api.net` | Yes | [Link](self-hosted-gateway-overview.md#gateway-v2-requirements) |
+| v1 | `{name}.management.azure-api.net/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/Microsoft.ApiManagement/service/{name}?api-version=2021-01-01-preview` | No | [Link](self-hosted-gateway-overview.md#gateway-v1-requirements) |
+
+Customers must use the new Configuration API v2 by changing their deployment scripts to use the new URL and meet the infrastructure requirements.
+
+> [!IMPORTANT]
+> * DNS hostname must be resolvable to IP addresses and the corresponding IP addresses must be reachable.
+> This might require additional configuration if you use a private DNS, an internal VNet, or have other infrastructure requirements.
+
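As a sketch of what this change looks like for a Docker-based deployment, the configuration endpoint in your `env.conf` file moves to the v2 hostname (the service name and gateway token are placeholders):

```console
# env.conf for self-hosted gateway v2 (illustrative values)
config.service.endpoint=<apim-service-name>.configuration.azure-api.net
config.service.auth=GatewayKey <gateway-token>
```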
+### Meet minimal security requirements
+
+During startup, the self-hosted gateway prepares the CA certificates that will be used. This requires the gateway container to run with at least user ID 1001 and without a read-only file system.
+
+When configuring a security context for the container in Kubernetes, the following are required at minimum:
+
+```yaml
+securityContext:
+ runAsNonRoot: true
+ runAsUser: 1001
+ readOnlyRootFilesystem: false
+```
+
+However, as of `2.0.3` the self-hosted gateway is able to run as non-root in Kubernetes, allowing customers to run the gateway more securely.
+
+Here's an example of the security context for the self-hosted gateway:
+```yml
+securityContext:
+ allowPrivilegeEscalation: false
+ runAsNonRoot: true
+  runAsUser: 1001 # This is a built-in user, but you can use any user, for example 1000
+ runAsGroup: 2000 # This is just an example
+ privileged: false
+ capabilities:
+ drop:
+ - all
+```
+
+> [!WARNING]
+> Running the self-hosted gateway with read-only filesystem (`readOnlyRootFilesystem: true`) is not supported.
+
+## Known limitations
+
+Here's a list of known limitations for the self-hosted gateway v2:
+
+- Configuration API v2 doesn't support custom domain names
+
+## Next steps
+
+- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md)
+- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
+- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Deploying self-hosted gateways into the same environments where the backend API
## Packaging and features
-The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/sputnik/dhub) from the Microsoft Container Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
+The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/sputnik/registry-portal) from the Microsoft Artifact Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
### Known limitations
We provide a variety of container images for self-hosted gateways to meet your needs:
| `v{major}-preview` | Use this tag if you always want to run our latest preview container image. | `v2-preview` | ✔️ | ❌ |
| `latest` | Use this tag if you want to evaluate the self-hosted gateway. | `latest` | ✔️ | ❌ |
-You can find a full list of available tags [here](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list).
+You can find a full list of available tags [here](https://mcr.microsoft.com/product/azure-api-management/gateway/tags).
#### Use of tags in our official deployment options
The self-hosted gateway v2 requires the following:
* The public IP address of the API Management instance in its primary location
* The hostname of the instance's configuration endpoint: `<apim-service-name>.configuration.azure-api.net`
-Additionally,customers that use API inspector or quotas in their policies have to ensure that the following additional dependencies are accessible:
+Additionally, customers that use API inspector or quotas in their policies have to ensure that the following additional dependencies are accessible:
* The hostname of the instance's associated blob storage account: `<blob-storage-account-name>.blob.core.windows.net`
* The hostname of the instance's associated table storage account: `<table-storage-account-name>.table.core.windows.net`
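To quickly verify that these hostnames resolve from the gateway's environment, you might, for example, use standard DNS tooling:

```console
nslookup <apim-service-name>.configuration.azure-api.net
```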
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
To register the app, perform the following steps:
1. (Optional) Select **Branding**. In **Home page URL**, enter the URL of your App Service app and select **Save**.
1. Select **Expose an API**, and click **Set** next to "Application ID URI". This value uniquely identifies the application when it is used as a resource, allowing tokens to be requested that grant access. It is used as a prefix for scopes you create.
- For a single-tenant app, you can use the default value, which is in the form the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#appid-uri-configuration).
+ For a single-tenant app, you can use the default value, which is in the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri).
The value is automatically saved.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
zone_pivot_groups: app-service-containers-code
::: zone pivot="code-windows"

> [!NOTE]
-> Mounting Azure Storage as a local share for App Service on Windows code is currently in preview.
+> Mounting Azure Storage as a local share for App Service on Windows code (non-container) is currently in preview.
>
-This guide shows how to mount Azure Storage Files as a network share in Windows code in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include:
+This guide shows how to mount Azure Storage Files as a network share in Windows code (non-container) in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include:
- Configure persistent storage for your App Service app and manage the storage separately.
- Make static content like video and images readily available for your App Service app.
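As an illustrative sketch (the app, storage account, share, and key values are placeholders), mounting an Azure Files share with the Azure CLI uses the `az webapp config storage-account add` command:

```azurecli-interactive
# Mount an Azure Files share into the app
az webapp config storage-account add \
    --resource-group myResourceGroup \
    --name <app-name> \
    --custom-id <mount-config-id> \
    --storage-type AzureFiles \
    --account-name <storage-account-name> \
    --share-name <share-name> \
    --access-key <access-key> \
    --mount-path "\mounts\files"
```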
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To ena
When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start will obtain read/write permissions on the cluster membership file. Other instances will read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.
-The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](how-to-zone-redundancy.md). The JBoss EAP clustering feature is compatabile with the zone redundancy feature.
+The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](../availability-zones/migrate-app-service.md). The JBoss EAP clustering feature is compatible with the zone redundancy feature.
### JBoss EAP App Service Plans
app-service Overview Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview-zone-redundancy.md
- Title: Zone redundancy in App Service Environment
-description: Overview of zone redundancy in an App Service Environment.
-- Previously updated : 04/06/2022---
-# Availability zone support for App Service Environment
-
-You can deploy App Service Environment across [availability zones](../../availability-zones/az-overview.md). This architecture is also known as zone redundancy. When you configure to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones.
-
-> [!NOTE]
-> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-
-You configure zone redundancy when you create your App Service Environment, and all App Service plans created in that App Service Environment will be zone redundant. You can only specify zone redundancy when you're creating a new App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
-
-When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have auto-scale configured, and if it determines that more instances are needed, auto-scale also issues a request to App Service to add more instances. Auto-scale behavior is independent of App Service platform behavior.
-
-There's no guarantee that requests for instances in a zone-down scenario will succeed, because back-filling lost instances occurs on a best effort basis. It's a good idea to scale your App Service plans to account for losing a zone.
-
-Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
-
-When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- one instance in all of the other zones used by the App Service plan.
-
-## In-region data residency
-
-A zone redundant App Service Environment will only store customer data within the region where it has been deployed. Both app content, settings and secrets stored in App Service remain within the region where the zone redundant App Service Environment is deployed.
-
-## Pricing
-
- There's a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There's no added charge for availability zone support if you've nine or more instances. If you've fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This difference is billed as Windows I1v2 instances.
-
-## Next steps
-
-* Read more about [availability zones](../../availability-zones/az-overview.md).
app-service How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/how-to-zone-redundancy.md
- Title: Availability Zone support for public multi-tenant App Service
-description: Learn how to deploy your App Service so that your apps are zone redundant.
-- Previously updated : 2/8/2022---
-# Availability Zone support for public multi-tenant App Service
-
-Microsoft Azure App Service can be deployed into [Availability Zones (AZ)](../availability-zones/az-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
-
-An app lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. When an App Service is configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones. For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone in the selected region.
-
-## Requirements
-
-Zone redundancy is a property of the App Service plan. The following are the current requirements/limitations for enabling zone redundancy:
--- Both Windows and Linux are supported-- Requires either **Premium v2** or **Premium v3** App Service plans-- Minimum instance count of three
- - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three
-- Can be enabled in any of the following regions:
- - West US 2
- - West US 3
- - Central US
- - East US
- - East US 2
- - Canada Central
- - Brazil South
- - North Europe
- - West Europe
- - Germany West Central
- - France Central
- - UK South
- - Japan East
- - Southeast Asia
- - Australia East
-- Zone redundancy can only be specified when creating a **new** App Service plan
- - Currently you can't convert a pre-existing App Service plan. See next bullet for details on how to create a new App Service plan that supports zone redundancy.
-- Zone redundancy is only supported in the newer portion of the App Service footprint
- - Currently if you're running on Pv3, then it's possible that you're already on a footprint that supports zone redundancy. In this scenario, you can create a new App Service plan and specify zone redundancy when creating the new App Service plan.
- - If you aren't using Pv3 or a scale unit that supports zone redundancy, are in an unsupported region, or are unsure, follow the steps below:
- - Create a new resource group in a region that is supported
- - This ensures the App Service control plane can find a scale unit in the selected region that supports zone redundancy
- - Create a new App Service plan (and app) in a region of your choice using the **new** resource group
- - Ensure the zoneRedundant property (described below) is set to true when creating the new App Service plan
-
-Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed since back filling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section of this article.
-
-Applications that are deployed in an App Service plan that has zone redundancy enabled will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
-
-When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of VMs, or +/- one VM in all of the other zones used by the App Service plan.
-
-## How to Deploy a Zone Redundant App Service
-
-You can create a zone redundant App Service using the [Azure CLI](/cli/azure/install-azure-cli), [Azure portal](https://portal.azure.com), or an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/overview.md).
-
-To enable zone redundancy using the Azure CLI, include the `--zone-redundant` parameter when you create your App Service plan. You can also include the `--number-of-workers` parameter to specify capacity. If you don't specify a capacity, the platform defaults to three. Capacity should be set based on the workload requirement, but no less than three. A good rule of thumb to choose capacity is to ensure sufficient instances for the application such that losing one zone of instances leaves sufficient capacity to handle expected load.
-
-```azurecli
-az appservice plan create --resource-group MyResourceGroup --name MyPlan --sku P1v2 --zone-redundant --number-of-workers 6
-```
-
-> [!TIP]
-> To decide instance capacity, you can use the following calculation:
->
-> Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
->
-
-To create a zone redundant App Service using the Azure portal, enable the zone redundancy option during the "Create Web App" or "Create App Service Plan" experiences.
--
-The capacity/number of workers/instance count can be changed once the App Service Plan is created by navigating to the **Scale out (App Service plan)** settings.
--
-The only changes needed in an Azure Resource Manager template to specify a zone redundant App Service are the ***zoneRedundant*** property (required) and optionally the App Service plan instance count (***capacity***) on the [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true*** and ***capacity*** should be set based on the same conditions described previously.
-
-The Azure Resource Manager template snippet below shows the new ***zoneRedundant*** property and ***capacity*** specification.
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/serverfarms",
- "apiVersion": "2018-02-01",
- "name": "your-appserviceplan-name-here",
- "location": "West US 3",
- "sku": {
- "name": "P1v3",
- "tier": "PremiumV3",
- "size": "P1v3",
- "family": "Pv3",
- "capacity": 3
- },
- "kind": "app",
- "properties": {
- "zoneRedundant": true
- }
- }
-]
-```
-
-## Pricing
-
-There's no additional cost associated with enabling the zone redundancy feature. Pricing for a zone redundant App Service is the same as a single zone App Service. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable zone redundancy but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn how to create and deploy ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md)
-
-> [!div class="nextstepaction"]
-> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
-
-> [!div class="nextstepaction"]
-> [Learn how to scale up an app in Azure App Service](manage-scale-up.md)
-
-> [!div class="nextstepaction"]
-> [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md)
-
-> [!div class="nextstepaction"]
-> [Manage disaster recovery](manage-disaster-recovery.md)
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the following app settings:
Health check integrates with App Service's [authentication and authorization features](overview-authentication-authorization.md). No additional settings are required if these security features are enabled.
-If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. Once you have those features in-place, you can authenticate the health check request by inspecting the header, `x-ms-auth-internal-token`, and validating that it matches the SHA256 hash of the environment variable `WEBSITE_AUTH_ENCRPYTION_KEY`. If they match, then the health check request is valid and originating from App Service.
+If you're using your own authentication system, the Health check path must allow anonymous access. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. Once you have those features in-place, you can authenticate the health check request by inspecting the header, `x-ms-auth-internal-token`, and validating that it matches the SHA256 hash of the environment variable `WEBSITE_AUTH_ENCRYPTION_KEY`. If they match, then the health check request is valid and originating from App Service.
##### [.NET](#tab/dotnet)
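A minimal sketch of that validation in ASP.NET Core might look like the following (the header and environment variable names come from the paragraph above; the helper name is illustrative):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using Microsoft.AspNetCore.Http;

public static class HealthCheckValidation
{
    // Returns true if the request carries a token matching the SHA256 hash of
    // WEBSITE_AUTH_ENCRYPTION_KEY, i.e. it originated from App Service.
    public static bool IsValidHealthCheckRequest(HttpRequest request)
    {
        string token = request.Headers["x-ms-auth-internal-token"];
        string key = Environment.GetEnvironmentVariable("WEBSITE_AUTH_ENCRYPTION_KEY") ?? "";
        using var sha256 = SHA256.Create();
        string hash = Convert.ToBase64String(sha256.ComputeHash(Encoding.UTF8.GetBytes(key)));
        return token == hash;
    }
}
```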
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
zone_pivot_groups: app-service-platform-windows-linux
::: zone pivot="platform-linux"

[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service. This quickstart shows how to deploy a PHP app to Azure App Service on Linux.
-You create and deploy the web app using [Azure CLI](/cli/azure/get-started-with-azure-cli).
![Sample app running in Azure](media/quickstart-php/hello-world-in-browser.png)

You can follow the steps here using a Mac, Windows, or Linux machine. Once the prerequisites are installed, it takes about five minutes to complete the steps.
To complete this quickstart, you need:
1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
1. <a href="https://git-scm.com/" target="_blank">Git</a>
-1. <a href="https://php.net/manual/install.php" target="_blank">PHP</a>
-1. <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> to run commands in any shell to provision and configure Azure resources.
+1. <a href="https://php.net/manual/install.php" target="_blank">PHP</a>
+1. <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> to run commands in any shell to provision and configure Azure resources.
+
+## 1 - Get the sample repository
-## 1 - Set up the sample application
+### [Azure CLI](#tab/cli)
+
+You can create the web app using the [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell, and you use Git to deploy sample PHP code to the web app.
1. In a terminal window, run the following commands. They clone the sample application to your local machine and navigate to the directory containing the sample code.
1. In your terminal window, press **Ctrl+C** to exit the web server.
+### [Portal](#tab/portal)
+
+1. In your browser, navigate to the repository containing [the sample code](https://github.com/Azure-Samples/php-docs-hello-world).
+
+1. In the upper right corner, select **Fork**.
+
+ ![Screenshot of the Azure-Samples/php-docs-hello-world repo in GitHub, with the Fork option highlighted.](media/quickstart-php/fork-php-docs-hello-world-repo.png)
+
+1. On the **Create a new fork** screen, confirm the **Owner** and **Repository name** fields. Select **Create fork**.
+
+ ![Screenshot of the Create a new fork page in GitHub for creating a new fork of Azure-Samples/php-docs-hello-world.](media/quickstart-php/fork-details-php-docs-hello-world-repo.png)
+
+>[!NOTE]
+> This should take you to the new fork. Your fork URL will look something like this: https://github.com/YOUR_GITHUB_ACCOUNT_NAME/php-docs-hello-world
---

## 2 - Deploy your application code to Azure
+### [Azure CLI](#tab/cli)
+ Azure CLI has a command [`az webapp up`](/cli/azure/webapp#az_webapp_up) that will create the necessary resources and deploy your application in a single step. In the terminal, deploy the code in your local folder using the [`az webapp up`](/cli/azure/webapp#az_webapp_up) command:
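The invocation itself isn't shown in this excerpt; a minimal sketch (run from the sample folder, with the runtime this quickstart targets) would be:

```azurecli-interactive
az webapp up --runtime "PHP:8.0" --os-type=linux
```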
You can launch the app at http://&lt;app-name>.azurewebsites.net
[!include [az webapp up command note](../../includes/app-service-web-az-webapp-up-note.md)]
-## 3 - Browse to the app
- Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`.
-![Empty web app page](media/quickstart-php/hello-world-in-browser.png)
+### [Portal](#tab/portal)
+
+1. Sign into the Azure portal.
+
+1. At the top of the portal, type **app services** in the search box. Under **Services**, select **App Services**.
+
+ ![Screenshot of the Azure portal with 'app services' typed in the search text box. In the results, the App Services option under Services is highlighted.](media/quickstart-php/azure-portal-search-for-app-services.png)
+
+1. On the **App Services** page, select **Create**.
+
+ ![Screenshot of the App Services page in the Azure portal. The Create button in the action bar is highlighted.](media/quickstart-php/azure-portal-create-app-service.png)
+
+1. Fill out the **Create Web App** page as follows.
+ - **Resource Group**: Create a resource group named *myResourceGroup*.
+ - **Name**: Type a globally unique name for your web app.
+ - **Publish**: Select *Code*.
+ - **Runtime stack**: Select *PHP 8.0*.
+ - **Operating system**: Select *Linux*.
+ - **Region**: Select an Azure region close to you.
+ - **App Service Plan**: Create an app service plan named *myAppServicePlan*.
+
+1. To change to the Free tier, next to **Sku and size**, select **Change size**.
+
+1. In the Spec Picker, select **Dev/Test** tab, select **F1**, and select the **Apply** button at the bottom of the page.
+
+ ![Screenshot of the Spec Picker for the App Service Plan pricing tiers in the Azure portal. Dev/Test, F1, and Apply are highlighted.](media/quickstart-php/azure-portal-create-app-service-select-free-tier.png)
+
+1. Select the **Review + create** button at the bottom of the page.
+
+1. After validation runs, select the **Create** button at the bottom of the page. This will create an Azure resource group, app service plan, and app service.
+
+1. After the Azure resources are created, select **Go to resource**.
+
+1. From the left navigation, select **Deployment Center**.
+
+ ![Screenshot of the App Service in the Azure Portal. The Deployment Center option in the Deployment section of the left navigation is highlighted.](media/quickstart-php/azure-portal-configure-app-service-deployment-center.png)
+
+1. Under **Settings**, select a **Source**. For this quickstart, select *GitHub*.
+
+1. In the section under **GitHub**, select the following settings:
+ - Organization: Select your organization.
+ - Repository: Select *php-docs-hello-world*.
+ - Branch: Select the default branch for your repository.
+
+1. Select **Save**.
+
+ ![Screenshot of the Deployment Center for the App Service, focusing on the GitHub integration settings. The Save button in the action bar is highlighted.](media/quickstart-php/azure-portal-configure-app-service-github-integration.png)
+
+ > [!TIP]
+ > This quickstart uses GitHub. Additional continuous deployment sources include Bitbucket, Local Git, Azure Repos, and External Git. FTPS is also a supported deployment method.
+
+1. Once the GitHub integration is saved, from the left navigation of your app, select **Overview** > **URL**.
+
+ ![Screenshot of the App Service resource with the URL field highlighted.](media/quickstart-php/azure-portal-app-service-url.png)
-## 4 - Redeploy updates
++
+The PHP sample code is running in an Azure App Service.
+
+![Screenshot of the sample app running in Azure, showing 'Hello World!'.](media/quickstart-php/php-8-hello-world-in-browser.png)
+
+**Congratulations!** You've deployed your first PHP app to App Service using the Azure portal.
+
+## 3 - Update and redeploy the app
+
+### [Azure CLI](#tab/cli)
1. Using a local text editor, open the `index.php` file within the PHP app, and make a small change to the text within the string next to `echo`:
Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`.
![Updated sample app running in Azure](media/quickstart-php/hello-azure-in-browser.png)
-## 5 - Manage your new Azure app
+### [Portal](#tab/portal)
+
+1. Browse to your GitHub fork of php-docs-hello-world.
+
1. On your repo page, press `.` to start Visual Studio Code within your browser.
+
+![Screenshot of the forked php-docs-hello-world repo in GitHub with instructions to press the period key on this screen.](media/quickstart-php/forked-github-repo-press-period.png)
+
+> [!NOTE]
+> The URL will change from GitHub.com to GitHub.dev. This feature only works with repos that have files. This does not work on empty repos.
+
+1. Edit **index.php** so that it shows "Hello Azure!" instead of "Hello World!"
+
+ ```php
+ <?php
+ echo "Hello Azure!";
+ ?>
+ ```
+
+1. From the **Source Control** menu, select the **Stage Changes** button to stage the change.
+
+ ![Screenshot of Visual Studio Code in the browser, highlighting the Source Control navigation in the sidebar, then highlighting the Stage Changes button in the Source Control panel.](media/quickstart-php/visual-studio-code-in-browser-stage-changes.png)
-1. Go to the <a href="https://portal.azure.com" target="_blank">Azure portal</a> to manage the web app you created. Search for and select **App Services**.
+1. Enter a commit message such as `Hello Azure`. Then, select **Commit and Push**.
+
+ ![Screenshot of Visual Studio Code in the browser, Source Control panel with a commit message of 'Hello Azure' and the Commit and Push button highlighted.](media/quickstart-php/visual-studio-code-in-browser-commit-push.png)
+
+1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.
+
+ ![Screenshot of the updated sample app running in Azure, showing 'Hello Azure!'](media/quickstart-php/php-8-hello-azure-in-browser.png)
+++
+## 4 - Manage your new Azure app
+
+1. Go to the Azure portal to manage the web app you created. Search for and select **App Services**.
- ![Search for App Services, Azure portal, create PHP web app](media/quickstart-php/navigate-to-app-services-in-the-azure-portal.png)
+ ![Screenshot of the Azure portal with 'app services' typed in the search text box. In the results, the App Services option under Services is highlighted.](media/quickstart-php/azure-portal-search-for-app-services.png)
-2. Select the name of your Azure app.
+1. Select the name of your Azure app.
- ![Portal navigation to Azure app](./media/quickstart-php/php-docs-hello-world-app-service-list.png)
+ ![Screenshot of the App Services list in Azure. The name of the demo app service is highlighted.](media/quickstart-php/app-service-list.png)
Your web app's **Overview** page will be displayed. Here, you can perform basic management tasks like **Browse**, **Stop**, **Restart**, and **Delete**.
- ![App Service page in Azure portal](media/quickstart-php/php-docs-hello-world-app-service-detail.png)
+ ![Screenshot of the App Service overview page in Azure portal. In the action bar, the Browse, Stop, Swap (disabled), Restart, and Delete button group is highlighted.](media/quickstart-php/app-service-detail.png)
The web app menu provides different options for configuring your app.
-## Clean up resources
+## 5 - Clean up resources
When you're finished with the sample app, you can remove all of the resources for the app from Azure. Doing so avoids extra charges and keeps your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+### [Azure CLI](#tab/cli)
+Delete the resource group by using the [az group delete](/cli/azure/group#az-group-delete) command.

```azurecli-interactive
az group delete --name myResourceGroup
This command may take a minute to run.
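If you don't want to wait for the deletion to finish, a sketch of a non-blocking variant that also skips the confirmation prompt:

```azurecli-interactive
az group delete --name myResourceGroup --yes --no-wait
```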
+### [Portal](#tab/portal)
+
+1. From your App Service **Overview** page, select the resource group you created.
+
+1. From the resource group page, select **Delete resource group**. Confirm the name of the resource group to finish deleting the resources.
++++ ## Next steps > [!div class="nextstepaction"]
> [!div class="nextstepaction"] > [Configure PHP app](configure-language-php.md)
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
The following environment variables are related to the app environment in general.
| `WEBSOCKET_CONCURRENT_REQUEST_LIMIT` | Read-only. Limit for websocket's concurrent requests. For **Standard** tier and above, the value is `-1`, but there's still a per VM limit based on your VM size (see [Cross VM Numerical Limits](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#cross-vm-numerical-limits)). || | `WEBSITE_PRIVATE_EXTENSIONS` | Set to `0` to disable the use of private site extensions. || | `WEBSITE_TIME_ZONE` | By default, the time zone for the app is always UTC. You can change it to any of the valid values that are listed in [TimeZone](/previous-versions/windows/it-pro/windows-vista/cc749073(v=ws.10)). If the specified value isn't recognized, UTC is used. | `Atlantic Standard Time` |
-| `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG` | In the case of a storage volume failover or reconfiguration, your app is switched over to a standby storage volume. The default setting of `1` prevents your worker process from recycling when the storage infrastructure changes. If you are running a Windows Communication Foundation (WCF) app, disable it by setting it to `0`. The setting is slot-specific, so you should set it in all slots. ||
+| `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG` | After slot swaps, the app may experience unexpected restarts. This is because after a swap, the hostname binding configuration goes out of sync, which by itself doesn't cause restarts. However, certain underlying storage events (such as storage volume failovers) may detect these discrepancies and force all worker processes to restart. To minimize these types of restarts, set the app setting value to `1` on all slots (default is `0`). However, don't set this value if you're running a Windows Communication Foundation (WCF) application. For more information, see [Troubleshoot swaps](deploy-staging-slots.md#troubleshoot-swaps). ||
| `WEBSITE_PROACTIVE_AUTOHEAL_ENABLED` | By default, a VM instance is proactively "autohealed" when it's using more than 90% of allocated memory for more than 30 seconds, or when 80% of the total requests in the last two minutes take longer than 200 seconds. If a VM instance has triggered one of these rules, the recovery process is an overlapping restart of the instance. Set to `false` to disable this recovery behavior. The default is `true`. For more information, see [Proactive Auto Heal](https://azure.github.io/AppService/2017/08/17/Introducing-Proactive-Auto-Heal.html). ||
| `WEBSITE_PROACTIVE_CRASHMONITORING_ENABLED` | Whenever the w3wp.exe process on a VM instance of your app crashes due to an unhandled exception more than three times in 24 hours, a debugger process is attached to the main worker process on that instance, and collects a memory dump when the worker process crashes again. This memory dump is then analyzed and the call stack of the thread that caused the crash is logged in your App Service's logs. Set to `false` to disable this automatic monitoring behavior. The default is `true`. For more information, see [Proactive Crash Monitoring](https://azure.github.io/AppService/2021/03/01/Proactive-Crash-Monitoring-in-Azure-App-Service.html). ||
| `WEBSITE_DAAS_STORAGE_SASURI` | During crash monitoring (proactive or manual), the memory dumps are deleted by default. To save the memory dumps to a storage blob container, specify the SAS URI. ||
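Because some of these settings (such as `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG`) are slot-specific, they must be applied to each slot individually. A minimal sketch with the Azure CLI, assuming placeholder app and resource group names and a `staging` slot:

```azurecli
# Set the value on the production slot
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG=1

# Repeat for each deployment slot
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --slot staging --settings WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG=1
```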
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
az sql db create \
-## 4 - Deploy to the App Service
-
-We're now ready to deploy our .NET app to the App Service.
-
-### [Deploy using Visual Studio](#tab/visualstudio-deploy)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01-240px.png" alt-text="A screenshot showing the publish dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01.png"::: |
-| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02-240px.png" alt-text="A screenshot showing how to select the deployment target in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.png"::: |
-| [!INCLUDE [Deploy app service step 3](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03-240px.png" alt-text="A screenshot showing the sign-in to Azure dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.png"::: |
-| [!INCLUDE [Deploy app service step 4](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04-240px.png" alt-text="A screenshot showing the dialog to select the App Service instance to deploy to in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04.png"::: |
-| [!INCLUDE [Deploy app service step 5](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05-240px.png" alt-text="A screenshot showing the publishing profile summary dialog in Visual Studio and the location of the publish button used to publish the app." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05.png"::: |
-
-### [Deploy using Visual Studio Code](#tab/visual-studio-code-deploy)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01-240px.png" alt-text="A screenshot showing how to install the Azure Account and App Service extensions in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01.png"::: |
-| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-folder-small.png" alt-text="A screenshot showing how to deploy using the publish folder." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-folder.png"::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-workflow-small.png" alt-text="A screenshot showing the command palette deployment workflow." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-workflow.png"::: |
-
-### [Deploy using Local Git](#tab/azure-cli-deploy)
----
-## 5 - Connect the App to the Database
+## 4 - Connect the App to the Database
Next, we must connect the App hosted in our App Service to our database using a Connection String. You can use [Service Connector](../service-connector/overview.md) to create the connection.
To see the entirety of the command output, drop the `--query` in the command.
-## 6 - Generate the Database Schema
+## 5 - Generate the Database Schema
To generate our database schema, set up a firewall rule on the SQL database server. This rule lets your local computer connect to Azure. For this step, you'll need to know your local computer's IP address. For more information about how to find the IP address, [see here](https://whatismyipaddress.com/).
az sql server firewall-rule create --resource-group msdocs-core-sql --server <yo
-Next, update the *appsettings.json* file in the sample project with the [connection string Azure SQL Database](#5connect-the-app-to-the-database). The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
+Next, update the *appsettings.json* file in the sample project with the [Azure SQL Database connection string](#4connect-the-app-to-the-database). The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
```json "AZURE_SQL_CONNECTIONSTRING": "Data Source=<your-server-name>.database.windows.net,1433;Initial Catalog=coreDb;User ID=<username>;Password=<password>"
services.AddDbContext<MyDatabaseContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("AZURE_SQL_CONNECTIONSTRING"))); ```
-Finally, run the following commands to install the necessary CLI tools for Entity Framework Core. Create an initial database migration file and apply those changes to update the database:
+From a local terminal, run the following commands to install the necessary CLI tools for Entity Framework Core, create an initial database migration file, and apply those changes to update the database:
```dotnetcli
-dotnet tool install -g dotnet-ef \
-dotnet ef migrations add InitialCreate \
+cd <sample-root>\DotNetCoreSqlDb
+dotnet tool install -g dotnet-ef
+dotnet ef migrations add InitialCreate
dotnet ef database update ```
After the migration finishes, the correct schema is created.
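If you want to confirm which migrations were created and applied, EF Core can list them; a quick check from the same directory:

```dotnetcli
dotnet ef migrations list
```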
If you receive the error `Client with IP address xxx.xxx.xxx.xxx is not allowed to access the server`, that means the IP address you entered into your Azure firewall rule is incorrect. To fix this issue, update the Azure firewall rule with the IP address provided in the error message.
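As a sketch, the existing rule can be corrected with the Azure CLI; the rule name here is a hypothetical placeholder, and the IP address should come from the error message:

```azurecli
az sql server firewall-rule update --resource-group msdocs-core-sql --server <your-server-name> --name <rule-name> --start-ip-address <ip-from-error> --end-ip-address <ip-from-error>
```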
+## 6 - Deploy to the App Service
+
+Because we were able to create the schema in the database, we know that our .NET app can successfully connect to the Azure database with the new connection string. Remember that Service Connector already configured the `AZURE_SQL_CONNECTIONSTRING` connection string in our App Service app. We're now ready to deploy our .NET app to the App Service.
+
+### [Deploy using Visual Studio](#tab/visualstudio-deploy)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01-240px.png" alt-text="A screenshot showing the publish dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-01.png"::: |
+| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02-240px.png" alt-text="A screenshot showing how to select the deployment target in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-02.png"::: |
+| [!INCLUDE [Deploy app service step 3](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03-240px.png" alt-text="A screenshot showing the sign-in to Azure dialog in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-03.png"::: |
+| [!INCLUDE [Deploy app service step 4](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04-240px.png" alt-text="A screenshot showing the dialog to select the App Service instance to deploy to in Visual Studio." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-04.png"::: |
+| [!INCLUDE [Deploy app service step 5](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05-240px.png" alt-text="A screenshot showing the publishing profile summary dialog in Visual Studio and the location of the publish button used to publish the app." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-deploy-app-service-05.png"::: |
+
+### [Deploy using Visual Studio Code](#tab/visual-studio-code-deploy)
+
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Deploy app service step 1](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01-240px.png" alt-text="A screenshot showing how to install the Azure Account and App Service extensions in Visual Studio Code." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-01.png"::: |
+| [!INCLUDE [Deploy app service step 2](<./includes/tutorial-dotnetcore-sqldb-app/visual-studio-code-deploy-app-service-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-folder-small.png" alt-text="A screenshot showing how to deploy using the publish folder." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-folder.png"::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-workflow-small.png" alt-text="A screenshot showing the command palette deployment workflow." lightbox="./media/tutorial-dotnetcore-sqldb-app/visual-studio-code-publish-workflow.png"::: |
+
+### [Deploy using Local Git](#tab/azure-cli-deploy)
++++ ## 7 - Browse the Deployed Application and File Directory Go back to your web app in the browser. You can always get back to your site by selecting the **Browse** link at the top of the App Service overview page. If you refresh the page, you can now create todos and see them displayed on the home page. Congratulations!
application-gateway Configuration Front End Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-front-end-ip.md
Only one public IP address and one private IP address are supported. You choose t
- For a public IP address, you can create a new public IP address or use an existing public IP in the same location as the application gateway. For more information, see [static vs. dynamic public IP address](./application-gateway-components.md#static-versus-dynamic-public-ip-address). -- For a private IP address, you can specify a private IP address from the subnet where the application gateway is created. If you don't specify one, an arbitrary IP address is automatically selected from the subnet. The IP address type that you select (static or dynamic) can't be changed later. For more information, see [Create an application gateway with an internal load balancer](./application-gateway-ilb-arm.md).
+- For a private IP address, you can specify a private IP address from the subnet where the application gateway is created. For Application Gateway v2 SKU deployments, a static IP address must be defined when adding a private IP address to the gateway. For Application Gateway v1 SKU deployments, if you don't specify an IP address, an available IP address is automatically selected from the subnet. The IP address type that you select (static or dynamic) can't be changed later. For more information, see [Create an application gateway with an internal load balancer](./application-gateway-ilb-arm.md).
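+As a minimal sketch of defining a static private front-end IP on a v2 gateway with the Azure CLI (all names are placeholders, and the address must fall within the gateway subnet's range; v2 SKUs also require a public front-end IP):
+
+```azurecli
+az network application-gateway create \
+  --name <gateway-name> \
+  --resource-group <resource-group> \
+  --sku Standard_v2 \
+  --vnet-name <vnet-name> \
+  --subnet <gateway-subnet> \
+  --private-ip-address 10.0.0.10 \
+  --public-ip-address <public-ip-name>
+```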
A front-end IP address is associated to a *listener*, which checks for incoming requests on the front-end IP.
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
When you create or open a project, the main tag editor window opens. The tag edi
### Identify text and tables
-Select **Run OCR on all files** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
+Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we will not be labeling the table content, but will instead rely on the automated extraction.
applied-ai-services Get Started Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdk-rest-api.md
Previously updated : 11/02/2021 Last updated : 06/21/2022 zone_pivot_groups: programming-languages-set-formre recommendations: false
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Form Recognizer offers several prebuilt models to choose from. Each model has it
1. Choose a URL for the file you would like to analyze from the below options:
- * [**Sample invoice document**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/forms/Invoice_1.pdf).
+ * [**Sample invoice document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice_sample.jpg).
* [**Sample ID document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png). * [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg). * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
When you create or open a project, the main tag editor window opens. The tag edi
##### Identify text and tables
-Select **Run OCR on all files** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
+Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. Because the table content is automatically extracted, we won't label the table content, but rather rely on the automated extraction.
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
In this quickstart, you'll use the following features to analyze and extract data and
* The current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/). <!-- or [.NET Core](https://dotnet.microsoft.com/download). -->
-* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal to get your key and endpoint.
+
+* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
> [!TIP] > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
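If you'd rather retrieve the key and endpoint from the command line, a minimal sketch with the Azure CLI, assuming placeholder resource and resource group names:

```azurecli
# Endpoint URL for the resource
az cognitiveservices account show --name <resource-name> --resource-group <resource-group> --query "properties.endpoint"

# Keys for the resource
az cognitiveservices account keys list --name <resource-name> --resource-group <resource-group>
```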
To interact with the Form Recognizer service, you'll need to create an instance
> [!NOTE] >
-> * Starting with .NET 6, new projects using the `console` template generate different code than previous versions.
-> * The new output uses recent C# features that simplify the code you need to write for a program.
-> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include the other program elements.
+> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
+> * The new output uses recent C# features that simplify the code you need to write.
+> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
> * For more information, *see* [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates). 1. Open the **Program.cs** file.
Analyze and extract text, tables, structure, key-value pairs, and named entities
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script. > * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see the [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-**Add the following code sample to the Program.cs file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the Program.cs file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```csharp using Azure;
Extract text, selection marks, text styles, table structures, and bounding regions
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script. > * To extract the layout from a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-layout` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document.
-**Add the following code sample to the Program.cs file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the Program.cs file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```csharp using Azure;
Analyze and extract common fields from specific document types using a prebuilt
> * To analyze a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-invoice` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-**Add the following code sample to your Program.cs file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to your Program.cs file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```csharp
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
> [!NOTE] > Form Recognizer Studio is currently in public preview. Some features may not be supported or have limited capabilities.
-[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. Get started with exploring the pre-trained models with sample documents or your own. Create projects to build custom template models and reference the models in your applications using the [Python SDK preview](try-v3-python-sdk.md) and other quickstarts.
+[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample documents or your own. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK preview](try-v3-python-sdk.md) and other quickstarts.
:::image border="true" type="content" source="../media/quickstarts/form-recognizer-demo-preview3.gif" alt-text="Selecting the Layout API to analyze a newspaper document in the Form Recognizer Studio.":::
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports. * [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-After you've completed the prerequisites, navigate to the [Form Recognizer Studio General Documents preview](https://formrecognizer.appliedai.azure.com).
+After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com).
In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
In the following example, we use the General Documents feature. The steps to use
1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
-1. Observe the highlighted extracted content in the document view. Hover your move over the keys and values to see details.
+1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
1. In the output section's Result tab, browse the JSON output to understand the service response format.
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
In this quickstart, you'll use the following features to analyze and extract data and
* If you aren't using VS Code, make sure you have the following installed in your development environment:
- * A [**Java Development Kit** (JDK)](https://wiki.openjdk.java.net/display/jdk8u) version 8 or later. For more information, *see* [supported Java Versions and update schedule](/azure/developer/java/fundamentals/java-support-on-azure#supported-java-versions-and-update-schedule).
+ * A [**Java Development Kit** (JDK)](/java/openjdk/download#openjdk-17) version 8 or later. For more information, *see* [Microsoft Build of OpenJDK](https://www.microsoft.com/openjdk).
- * [**Gradle**](https://gradle.org/), version 6.8 or later.
+ * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
In this quickstart, you'll use the following features to analyze and extract data and
```console
mkdir form-recognizer-app && cd form-recognizer-app
```
+
+ ```powershell
+ mkdir form-recognizer-app; cd form-recognizer-app
+ ```
1. Run the `gradle init` command from your working directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
In this quickstart, you'll use the following features to analyze and extract data and
1. When prompted to choose a **DSL**, select **Kotlin**.
-1. Accept the default project name (form-recognizer-app)
+1. Accept the default project name (form-recognizer-app) by selecting **Return** or **Enter**.
### Install the client library
Extract text, tables, structure, key-value pairs, and named entities from documents
> * We've added the file URI value to the `documentUrl` variable in the main method. > * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```java
Extract text, selection marks, text styles, table structures, and bounding regions
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocumentFromUrl` method and pass `prebuilt-layout` as the model Id. The returned value is an `AnalyzeResult` object containing data about the submitted document. > * We've added the file URI value to the `documentUrl` variable in the main method.
-**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```java
Analyze and extract common fields from specific document types using a prebuilt
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```java
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
In this quickstart, you'll use the following features to analyze and extract data and
## Set up
-1. Create a new Node.js Express application: In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app named `form-recognizer-app`, and navigate to it.
+1. Create a new Node.js Express application: In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `form-recognizer-app`.
```console mkdir form-recognizer-app && cd form-recognizer-app
Extract text, tables, structure, key-value pairs, and named entities from documents
> * We've added the file URL value to the `formUrl` variable near the top of the file. > * To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-**Add the following code sample to the `index.js` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the `index.js` file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```javascript
Extract text, selection marks, text styles, table structures, and bounding regions
> * We've added the file URL value to the `formUrl` variable near the top of the file. > * To analyze a given file from a URL, you'll use the `beginAnalyzeDocuments` method and pass in `prebuilt-layout` as the model Id.
-**Add the following code sample to the `index.js` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to the `index.js` file. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```javascript
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Extract text, tables, structure, key-value pairs, and named entities from documents
> * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page. <!-- markdownlint-disable MD036 -->
-**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```python
Extract text, selection marks, text styles, table structures, and bounding regions
> * We've added the file URL value to the `formUrl` variable in the `analyze_layout` function. > * To analyze a given file at a URL, you'll use the `begin_analyze_document_from_url` method and pass in `prebuilt-layout` as the model Id. The returned value is a `result` object containing data about the submitted document.
-**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```python
Analyze and extract common fields from specific document types using a prebuilt
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
+**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
```python # import libraries
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
To learn more about Form Recognizer features and development options, visit our
Before you run the cURL command, make the following changes:
-1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal.
+1. Replace `{endpoint}` with the endpoint value from your Azure portal Form Recognizer instance.
-1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
+1. Replace `{key}` with the key value from your Azure portal Form Recognizer instance.
1. Using the table below as a reference, replace `{modelID}` and `{your-document-url}` with your desired values.
-1. You'll need a document file at a URL. For this quickstart, you can use the sample forms provided in the below table for each feature.
+1. You'll need a document file at a URL. For this quickstart, you can use the sample forms provided in the table below for each feature.
#### POST request
You'll receive a `202 (Success)` response that includes an **Operation-Location** header.
After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes: + 1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal. 1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal. 1. Replace `{modelID}` with the same modelID you used to analyze your document.
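As a sketch, the GET call can reuse the URL returned in the **Operation-Location** header verbatim, so only the key needs to be supplied:

```bash
curl -v -X GET "{operation-location-url}" -H "Ocp-Apim-Subscription-Key: {key}"
```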
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
description: Learn about regions and availability zones and how they work to hel
Previously updated : 05/30/2022 Last updated : 06/21/2022
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
description: Learn what services are supported by availability zones and underst
Previously updated : 05/30/2022 Last updated : 06/21/2022
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure API Management](../api-management/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure App Configuration](../azure-app-configuration/faq.yml#how-does-app-configuration-ensure-high-data-availability) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure App Service](../app-service/how-to-zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure App Service: App Service Environments](../app-service/environment/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure App Service](migrate-app-service.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure App Service: App Service Environment](migrate-app-service-environment.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Bastion](../bastion/bastion-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
availability-zones Migrate App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service-environment.md
+
+ Title: Migrate Azure App Service Environment to availability zone support
+description: Learn how to migrate an Azure App Service Environment to availability zone support.
+++ Last updated : 06/08/2022+++++
+# Migrate App Service Environment to availability zone support
+
+This guide describes how to migrate an App Service Environment from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with Isolated v2 App Service plans. Availability zones are only supported on App Service Environment v3. If you're using App Service Environment v1 or v2 and want to use availability zones, you'll need to migrate to App Service Environment v3.
+
+Azure App Service Environment can be deployed across [Availability Zones (AZ)](../availability-zones/az-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
+
+When you configure your App Service Environment to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones. For example, a capacity of five results in two zones with two instances each and one zone with a single instance.
+
+## Prerequisites
+
+- You configure availability zones when you create your App Service Environment.
+ - All App Service plans created in that App Service Environment will automatically be zone redundant.
+- You can only specify availability zones when creating a **new** App Service Environment. A pre-existing App Service Environment can't be converted to use availability zones.
+- Availability zones are only supported in a [subset of regions](../app-service/environment/overview.md#regions).
+
+## Downtime requirements
+
+Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing App Service Environments to use availability zones, migration will consist of a side-by-side deployment where you'll create a new App Service Environment with availability zones enabled.
+
+Downtime will depend on how you choose to redirect traffic from your old to your new availability zone enabled App Service Environment. For example, if you're using an [Application Gateway](../app-service/networking/app-gateway-with-service-endpoints.md), a [custom domain](../app-service/app-service-web-tutorial-custom-domain.md), or [Azure Front Door](../frontdoor/front-door-overview.md), downtime will be dependent on the time it takes to update those respective services with your new app's information. Alternatively, you can route traffic to multiple apps at the same time using a service such as [Azure Traffic Manager](../app-service/web-sites-traffic-manager.md) and only fully cutover to your new availability zone enabled apps when everything is deployed and fully tested. For more information on App Service Environment migration options, see [App Service Environment migration](../app-service/environment/migration-alternatives.md). If you're already using App Service Environment v3, disregard the information about migration from previous versions and focus on the app migration strategies.
+
+## Migration guidance: Redeployment
+
+### When to use redeployment
+
+If you want your App Service Environment to use availability zones, redeploy your apps into a newly created availability zone enabled App Service Environment.
+
+### Important considerations when using availability zones
+
+Traffic is routed to all of your available App Service instances. When a zone goes down, the App Service platform detects the lost instances, automatically attempts to find new replacement instances, and spreads traffic across them as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed, since backfilling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section.
+
+Applications that are deployed in an App Service Environment that has availability zones enabled will continue to run and serve traffic even if other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors, including App Service plan scaling, application creation, application configuration, and application publishing, may still be impacted by an outage in other availability zones. Zone redundancy for App Service Environments only ensures continued uptime for deployed applications.
+
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of VMs, or +/- one VM in all of the other zones used by the App Service plan.
+
+## In-region data residency
+
+A zone redundant App Service Environment will only store customer data within the region where it has been deployed. App content, settings, and secrets stored in App Service remain within the region where the zone redundant App Service Environment is deployed.
+
+### How to redeploy
+
+The following steps describe how to enable availability zones.
+
+1. To redeploy and ensure you'll be able to use availability zones, you'll need to be on the App Service footprint that supports availability zones. Create your new App Service Environment in one of the [supported regions](../app-service/environment/overview.md#regions).
+1. Ensure the `zoneRedundant` property (described below) is set to `true` when creating the new App Service Environment.
+1. Create your new App Service plans and apps in the new App Service Environment using your desired deployment method.
+
+You can create an App Service Environment with availability zones using the [Azure CLI](/cli/azure/install-azure-cli), [Azure portal](https://portal.azure.com), or an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/overview.md).
+
+To enable availability zones using the Azure CLI, include the `--zone-redundant` parameter when you create your App Service Environment.
+
+```azurecli
+az appservice ase create --resource-group MyResourceGroup --name MyAseName --zone-redundant --vnet-name MyVNet --subnet MySubnet --kind asev3 --virtual-ip-type Internal
+```
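+To verify the setting after creation, you can query the new environment; a sketch, assuming the `zoneRedundant` property is surfaced in the CLI output:
+
+```azurecli
+az appservice ase show --name MyAseName --resource-group MyResourceGroup --query zoneRedundant
+```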
+
+To create an App Service Environment with availability zones using the Azure portal, enable the zone redundancy option during the "Create App Service Environment v3" experience on the Hosting tab.
+
+The only change needed in an Azure Resource Manager template to specify an App Service Environment with availability zones is the ***zoneRedundant*** property on the [Microsoft.Web/hostingEnvironments](/azure/templates/microsoft.web/hostingEnvironments?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true***.
+
+```json
+"resources": [
+ {
+ "apiVersion": "2019-08-01",
+ "type": "Microsoft.Web/hostingEnvironments",
+ "name": "MyAppServiceEnvironment",
+ "kind": "ASEV3",
+ "location": "West US 3",
+ "properties": {
+ "name": "MyAppServiceEnvironment",
+ "location": "West US 3",
+ "dedicatedHostCount": "0",
+ "zoneRedundant": true,
+ "InternalLoadBalancingMode": 0,
+ "virtualNetwork": {
+ "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet/subnets/MySubnet"
+ }
+ }
+ }
+]
+```
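+You can then deploy the template with a standard resource group deployment; a minimal sketch, where the template file name is a hypothetical placeholder:
+
+```azurecli
+az deployment group create --resource-group MyResourceGroup --template-file ase-zone-redundant.json
+```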
+
+## Pricing
+
+There's a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There's no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This difference is billed as Windows I1v2 instances. For example, if you run six instances, you're billed for those six instances plus three additional Windows I1v2 instances.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about availability zones](az-overview.md)
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
++
+ Title: Migrate Azure App Service to availability zone support
+description: Learn how to migrate Azure App Service to availability zone support.
+++ Last updated : 06/07/2022++++++
+# Migrate App Service to availability zone support
+
+This guide describes how to migrate the public multi-tenant App Service from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+Azure App Service can be deployed into [Availability Zones (AZ)](../availability-zones/az-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
+
+An App Service lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. App Services are zonal services, which means that App Services can be deployed using one of the following methods:
+
+- For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone that is selected by the platform in the selected region.
+- For App Services that are configured to be zone redundant, the platform automatically spreads the VM instances in the App Service plan across all three zones in the selected region. If a VM instance capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.
+
+## Prerequisites
+
+Availability zone support is a property of the App Service plan. The following are the current requirements/limitations for enabling availability zones:
+
+- Both Windows and Linux are supported.
+- Requires either **Premium v2** or **Premium v3** App Service plans.
+- Minimum instance count of three is enforced.
+ - The platform will enforce this minimum count behind the scenes if you specify an instance count fewer than three.
+- Can be enabled in any of the following regions:
+ - West US 2
+ - West US 3
+ - Central US
+ - East US
+ - East US 2
+ - Canada Central
+ - Brazil South
+ - North Europe
+ - West Europe
+ - Germany West Central
+ - France Central
+ - UK South
+ - Japan East
+ - Southeast Asia
+ - Australia East
+- Availability zones can only be specified when creating a **new** App Service plan. A pre-existing App Service plan can't be converted to use availability zones.
+- Availability zones are only supported in the newer portion of the App Service footprint.
+ - Currently, if you're running on Pv3, then it's possible that you're already on a footprint that supports availability zones. In this scenario, you can create a new App Service plan and specify zone redundancy.
+ - If you aren't using Pv3 or a scale unit that supports availability zones, are in an unsupported region, or are unsure, see the [migration guidance](#migration-guidance-redeployment).
+
+## Downtime requirements
+
+Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing App Service plans to use availability zones, migration will consist of a side-by-side deployment where you'll create new App Service plans. Downtime will depend on how you choose to redirect traffic from your old to your new availability zone enabled App Service. For example, if you're using an [Application Gateway](../app-service/networking/app-gateway-with-service-endpoints.md), a [custom domain](../app-service/app-service-web-tutorial-custom-domain.md), or [Azure Front Door](../frontdoor/front-door-overview.md), downtime will be dependent on the time it takes to update those respective services with your new app's information. Alternatively, you can route traffic to multiple apps at the same time using a service such as [Azure Traffic Manager](../app-service/web-sites-traffic-manager.md) and only fully cutover to your new availability zone enabled apps when everything is deployed and fully tested.
+
+## Migration guidance: Redeployment
+
+### When to use redeployment
+
+If you want your App Service to use availability zones, redeploy your apps into newly created availability zone enabled App Service plans.
+
+### Important considerations when using availability zones
+
+Traffic is routed to all of your available App Service instances. When a zone goes down, the App Service platform detects the lost instances, automatically attempts to find new replacement instances, and spreads traffic across them as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed, since backfilling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section.
+
+Applications deployed in an App Service plan that has availability zones enabled continue to run and serve traffic even if other zones in the same region suffer an outage. However, it's possible that non-runtime behaviors, including App Service plan scaling, application creation, application configuration, and application publishing, may still be affected by an outage in other availability zones. Zone redundancy for App Service plans only ensures continued uptime for deployed applications.
+
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of VMs, or +/- one VM in all of the other zones used by the App Service plan.
+
+### How to redeploy
+
+The following steps describe how to enable availability zones.
+
+1. To redeploy and ensure you'll be able to use availability zones, you'll need to be on the App Service footprint that supports availability zones. If you're already using the Pv3 SKU and are in one of the [supported regions](#prerequisites), you can move on to the next step. Otherwise, you should create a new resource group in one of the supported regions to ensure the App Service control plane can find a scale unit in the selected region that supports availability zones.
+1. Create a new App Service plan in one of the supported regions using the **new** resource group.
+1. Ensure the `zoneRedundant` property (described below) is set to `true` when creating the new App Service plan.
+1. Create your apps in the new App Service plan using your desired deployment method.
+
+You can create an App Service with availability zones using the [Azure CLI](/cli/azure/install-azure-cli), [Azure portal](https://portal.azure.com), or an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/overview.md).
+
+To enable availability zones using the Azure CLI, include the `--zone-redundant` parameter when you create your App Service plan. You can also include the `--number-of-workers` parameter to specify capacity. If you don't specify a capacity, the platform defaults to three. Set the capacity based on your workload requirement, but no lower than three. A good rule of thumb is to provision enough instances that losing one zone's worth of instances still leaves enough capacity to handle your expected load.
+
+```azurecli
+az appservice plan create --resource-group MyResourceGroup --name MyPlan --zone-redundant --number-of-workers 6
+```
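To confirm the setting after creation, you can query the plan's `zoneRedundant` property, which is the same property shown in the ARM template snippet later in this article. This is a sketch and assumes your Azure CLI version surfaces the property:

```azurecli
az appservice plan show --resource-group MyResourceGroup --name MyPlan --query zoneRedundant
```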
+
+> [!TIP]
+> To decide instance capacity, you can use the following calculation:
+>
+> Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
+>
+
+To create an App Service with availability zones using the Azure portal, enable the zone redundancy option during the "Create Web App" or "Create App Service Plan" experiences.
+After the App Service plan is created, you can change the capacity (also referred to as the number of workers, or instance count) in the **Scale out (App Service plan)** settings.
+The only changes needed in an Azure Resource Manager template to specify an App Service with availability zones are the ***zoneRedundant*** property (required) and optionally the App Service plan instance count (***capacity***) on the [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms?tabs=json) resource. The ***zoneRedundant*** property should be set to ***true*** and ***capacity*** should be set based on the same conditions described previously.
+
+The Azure Resource Manager template snippet below shows the new ***zoneRedundant*** property and ***capacity*** specification.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2018-02-01",
+ "name": "your-appserviceplan-name-here",
+ "location": "West US 3",
+ "sku": {
+ "name": "P1v3",
+ "tier": "PremiumV3",
+ "size": "P1v3",
+ "family": "Pv3",
+ "capacity": 3
+ },
+ "kind": "app",
+ "properties": {
+ "zoneRedundant": true
+ }
+ }
+]
+```
+
+## Pricing
+
+There's no additional cost associated with enabling availability zones. Pricing for a zone redundant App Service is the same as a single zone App Service. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to create and deploy ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md)
+
+> [!div class="nextstepaction"]
+> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
+
+> [!div class="nextstepaction"]
+> [Learn how to scale up an app in Azure App Service](../app-service/manage-scale-up.md)
+
+> [!div class="nextstepaction"]
+> [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md)
+
+> [!div class="nextstepaction"]
+> [Manage disaster recovery](../app-service/manage-disaster-recovery.md)
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
Title: Create Azure SQL Managed Instance using Azure Data Studio
-description: Create Azure SQL Managed Instance using Azure Data Studio
+ Title: Create Azure Arc-enabled SQL Managed Instance using Azure Data Studio
+description: Create Azure Arc-enabled SQL Managed Instance using Azure Data Studio
Previously updated : 07/30/2021 Last updated : 06/16/2022
-# Create SQL Managed Instance - Azure Arc using Azure Data Studio
+# Create Azure Arc-enabled SQL Managed Instance using Azure Data Studio
-This document walks you through the steps for installing Azure SQL Managed Instance - Azure Arc using Azure Data Studio
+This document demonstrates how to install Azure SQL Managed Instance - Azure Arc using Azure Data Studio.
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)]
+## Create Azure Arc-enabled SQL Managed Instance
-## Create Azure SQL Managed Instance on Azure Arc
-
-- Launch Azure Data Studio
-- On the Connections tab, Click on the three dots on the top left and choose "New Deployment"
-- From the deployment options, select **Azure SQL Managed Instance - Azure Arc**
+1. Launch Azure Data Studio
+2. On the Connections tab, select the three dots on the top left and choose **New Deployment...**.
+3. From the deployment options, select **Azure SQL managed instance**.
   > [!NOTE]
   > You may be prompted to install the appropriate CLI here if it is not currently installed.
-- Accept the Privacy and license terms and click **Select** at the bottom
-
-- In the Deploy Azure SQL Managed Instance - Azure Arc blade, enter the following information:
- - Enter a name for the SQL Server instance
- - Enter and confirm a password for the SQL Server instance
- - Select the storage class as appropriate for data
- - Select the storage class as appropriate for logs
- - Select the storage class as appropriate for backups
-
- > [!NOTE]
->Note: Starting with the February release, a ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
-If no storage class is specified for backups, the default storage class in kubernetes is used and if this is not RWX capable, the Arc SQL Managed Instance installation may not succeed.
-- Click the **Deploy** button
-
-- This should initiate the creation of the Azure SQL Managed Instance - Azure Arc on the data controller.
-
-- In a few minutes, your creation should successfully complete
-
-## Connect to Azure SQL Managed Instance - Azure Arc from Azure Data Studio
-- View all the Azure SQL Managed Instances provisioned, using the following commands:
-
-```azurecli
-az sql mi-arc list --k8s-namespace <namespace> --use-k8s
-```
-
-Output should look like this, copy the ServerEndpoint (including the port number) from here.
-
-```console
-
-Name Replicas ServerEndpoint State
- - -- -
-sqlinstance1 1/1 25.51.65.109:1433 Ready
-```
-- In Azure Data Studio, under **Connections** tab, click on the **New Connection** on the **Servers** view
-- In the **Connection** blade, paste the ServerEndpoint into the Server textbox
-- Select **SQL Login** as the Authentication type
-- Enter *sa* as the user name
-- Enter the password for the `sa` account
-- Optionally, enter the specific database name to connect to
-- Optionally, select/Add New Server Group as appropriate
-- Select **Connect** to connect to the Azure SQL Managed Instance - Azure Arc
+
+4. Select **Select**.
+
+ Azure Data Studio opens **Azure SQL managed instance**.
+
+5. For **Resource Type**, choose **Azure SQL managed instance - Azure Arc**.
+6. Accept the privacy statement and license terms.
+1. Review the required tools. Follow instructions to update tools before you proceed.
+1. Select **Next**.
+
+ Azure Data Studio allows you to set your specifications for the managed instance. The following table describes the fields:
+
+ |Setting | Description | Required or optional
+ |-|-|-|
+ |**Target Azure Controller** | Name of the Azure Arc data controller. | Required |
+ |**Instance name** | Managed instance name. | Required |
+ |**Username** | System administrator user name. | Required |
+ |**System administrator password** | SQL authentication password for the managed instance. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters.<br/></br> Confirm the password. | Required |
+ |**Service tier** | Specify the appropriate service tier: Business Critical or General Purpose. | Required |
+ |**I already have a SQL Server License** | Select if this managed instance will use a license from your organization. | Optional |
+ |**Storage Class (Data)** | Select from the list. | Required |
+ |**Volume Size in Gi (Data)** | The amount of space in gibibytes to allocate for data. | Required |
+ |**Storage Class (Database logs)** | Select from the list. | Required |
+ |**Volume Size in Gi (Database logs)** | The amount of space in gibibytes to allocate for database transaction logs. | Required |
+ |**Storage Class (Logs)** | Select from the list. | Required |
+ |**Volume Size in Gi (Logs)** | The amount of space in gibibytes to allocate for logs. | Required |
+ |**Storage Class (Backups)** | Select from the list. Specify a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If this storage class isn't RWX capable, the deployment may not succeed. | Required |
+ |**Volume Size in Gi (Backups)** | The size of the storage volume to be used for database backups in gibibytes. | Required |
+ |**Cores Request** | The number of cores to request for the managed instance. Integer. | Optional |
 |**Cores Limit** | The maximum number of cores for the managed instance. Integer. | Optional |
+ |**Memory Request** | Select from the list. | Required |
+ |**Point in time retention (days)** | The number of days to keep your point in time backups. | Optional |
+
+ After you've set all of the required values, Azure Data Studio enables the **Deploy** button. If this control is disabled, verify that you have all required settings configured.
+
+1. Select the **Deploy** button to create the managed instance.
+
+After you select the deploy button, the Azure Arc data controller initiates the deployment. The deployment process takes a few minutes to create the managed instance.
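While the deployment runs, you can check progress from the command line. The following sketch uses the same parameter conventions as the `list` command shown in the next section; replace the placeholders with your values:

```azurecli
az sql mi-arc show --name <instance name> --k8s-namespace <namespace> --use-k8s
```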
+
+## Connect to Azure Arc-enabled SQL Managed Instance from Azure Data Studio
+
+View all the Azure SQL Managed Instances provisioned to this data controller. Use the following command:
+
+ ```azurecli
+ az sql mi-arc list --k8s-namespace <namespace> --use-k8s
+ ```
+
 Output should look like this. Copy the ServerEndpoint (including the port number) from here.
+
+ ```console
+ Name Replicas ServerEndpoint State
+ - -- -
+ sqlinstance1 1/1 25.51.65.109:1433 Ready
+ ```
+
+1. In Azure Data Studio, on the **Connections** tab, select **New Connection** on the **Servers** view
+1. Under **Connection**>**Server**, paste the ServerEndpoint
+1. Select **SQL Login** as the Authentication type
+1. Enter *sa* as the user name
+1. Enter the password for the `sa` account
+1. Optionally, enter the specific database name to connect to
+1. Optionally, select/Add New Server Group as appropriate
+1. Select **Connect** to connect to the Azure SQL Managed Instance - Azure Arc
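As an optional command-line check, you can also connect with `sqlcmd`. This sketch uses the example endpoint from the output above; substitute your own ServerEndpoint and password, and note that `sqlcmd` separates the host and port with a comma:

```console
sqlcmd -S 25.51.65.109,1433 -U sa -P <password> -Q "SELECT @@SERVERNAME"
```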
## Next Steps
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
The following image shows a properly configured distributed availability group:
4. Create the failover group resource on both sites.
- If the managed instance names are identical between the two sites, you do not need to use the `--shared-name <name of failover group>` parameter.
- If the managed instance names are different between the two sites, use the `--shared-name <name of failover group>` parameter.
-
- The following examples use the `--shared-name <name of failover group>...` to complete the task. The command seeds system databases in the disaster recovery instance, from the primary instance.
-
> [!NOTE]
- > The `shared-name` value should be identical on both sites.
 > Ensure that the SQL instances have different names on the primary and secondary sites, and that the `shared-name` value is identical on both sites.
```azurecli
az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary DAG resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
```
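On the secondary site, the corresponding command mirrors the primary one. This is a sketch that assumes the same parameter conventions as the command above, with the role and partner details swapped:

```azurecli
az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary DAG resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
```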
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md
Some services in Azure Arc for data services depend upon being configured to use
At the time the data controller is provisioned, the storage class to be used for each of these persistent volumes is specified either by passing the `--storage-class | -sc` parameter to the `az arcdata dc create` command or by setting the storage classes in the control.json deployment template file that is used. If you're using the Azure portal to create the data controller in the directly connected mode, the deployment template that you choose will either have the storage class predefined, or, if you select a template without a predefined storage class, you'll be prompted for one. If you use a custom deployment template, you can specify the storage class.
-The deployment templates that are provided out of the box have a default storage class specified that is appropriate for the target environment, but it can be overridden during deployment. See the detailed steps to [create custom configuration temmplates](create-custom-configuration-template.md) to change the storage class configuration for the data controller pods at deployment time.
+The deployment templates that are provided out of the box have a default storage class specified that is appropriate for the target environment, but it can be overridden during deployment. See the detailed steps to [create custom configuration templates](create-custom-configuration-template.md) to change the storage class configuration for the data controller pods at deployment time.
If you set the storage class using the `--storage-class | -sc` parameter, that storage class is used for both the log and data storage classes. If you set the storage classes in the deployment template file, you can specify different storage classes for logs and data.
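For example, a data controller deployment that sets a single storage class might look like the following sketch (the storage class, profile, and resource values are placeholders, not values from this article):

```azurecli
az arcdata dc create --name arc-dc --k8s-namespace arc --connectivity-mode indirect --storage-class managed-premium --profile-name azure-arc-aks-premium-storage --subscription <subscription ID> --resource-group <resource group> --location <region> --use-k8s
```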
When creating an instance using either `az sql mi-arc create` or `az postgres ar
|`--storage-class-data`, `-d`|Used to specify the storage class for all data files including transaction log files|
|`--storage-class-logs`, `-g`|Used to specify the storage class for all log files|
|`--storage-class-data-logs`|Used to specify the storage class for the database transaction log files.|
-|`--storage-class-backups`|Used to specify the storage class for all backup files.|
+|`--storage-class-backups`|Used to specify the storage class for all backup files. Use a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). |
+
+> [!WARNING]
+> If you don't specify a storage class for backups, the deployment uses the default storage class in Kubernetes. If this storage class isn't RWX capable, the deployment may not succeed.
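A minimal sketch that combines these parameters follows. The storage class names are placeholders; choose classes that actually exist in your cluster, and make sure the backups class is RWX capable:

```azurecli
az sql mi-arc create --name sqlinstance1 --k8s-namespace arc --storage-class-data managed-premium --storage-class-logs managed-premium --storage-class-backups <RWX-capable storage class> --use-k8s
```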
The table below lists the paths inside the Azure SQL Managed Instance container that is mapped to the persistent volume for data and logs:
If there are multiple databases on a given database instance, all of the databas
Important factors to consider when choosing a storage class for the database instance pods:
+- Starting with the February 2022 release of Azure Arc data services, you need to specify a **ReadWriteMany** (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If no storage class is specified for backups, the default storage class in Kubernetes is used, and if it isn't RWX capable, an Azure SQL managed instance deployment may not succeed.
- Database instances can be deployed in either a single pod pattern or a multiple pod pattern. An example of a single pod pattern is a General Purpose pricing tier Azure SQL managed instance. An example of a multiple pod pattern is a highly available Business Critical pricing tier Azure SQL managed instance. Database instances deployed with the single pod pattern **must** use a remote, shared storage class to ensure data durability, so that if a pod or node dies, the pod can reconnect to the persistent volume when it's brought back up. In contrast, a highly available Azure SQL managed instance uses Always On Availability Groups to replicate the data from one instance to another either synchronously or asynchronously. Especially in the case where the data is replicated synchronously, there are always multiple copies of the data - typically three copies. Because of this, it's possible to use local storage or remote, shared storage classes for data and log files. If you use local storage, the data is still preserved even in the case of a failed pod, node, or storage hardware, because there are multiple copies of the data. Given this flexibility, you might choose to use local storage for better performance.
- Database performance is largely a function of the I/O throughput of a given storage device. If your database is heavy on reads or heavy on writes, then you should choose a storage class with hardware designed for that type of workload. For example, if your database is mostly used for writes, you might choose local storage with RAID 0. If your database is mostly used for reads of a small amount of "hot data", but there is a large overall storage volume of cold data, then you might choose a SAN device capable of tiered storage. Choosing the right storage class is not any different than choosing the type of storage you would use for any database.
- If you are using a local storage volume provisioner, ensure that the local volumes that are provisioned for data, logs, and backups each land on different underlying storage devices to avoid contention on disk I/O. The OS should also be on a volume that is mounted to a separate disk(s). This is essentially the same guidance as would be followed for a database instance on physical hardware.
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 11/09/2021 Last updated : 06/21/2022
This article provides information on troubleshooting and resolving issues that m
## Logs
-For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [Az arcappliance log](placeholder for published ref API) command. This command needs to be run from the client machine where you've deployed the Azure Arc resource bridge from.
+For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the client machine from which you've deployed the Azure Arc resource bridge.
-The `Az arcappliance log` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the client machine where the deployment of the appliance was performed from. If you are going to use a different client machine to run the Azure CLI command, you need to make sure the following files are copied to the new client machine:
+The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the client machine where the deployment of the appliance was performed from. To use a different client machine to run the Azure CLI command, you need to make sure the following files are copied to the new client machine:
```azurecli
$HOME\.KVA\.ssh\logkey.pub
```
To save the logs to a destination folder, run the following command:
```azurecli
az arcappliance logs <provider> --kubeconfig <path to kubeconfig> --out-dir <path to specified output directory>
```
-To specify the ip address of the Azure Arc resource bridge virtual machine, run the following command:
+To specify the IP address of the Azure Arc resource bridge virtual machine, run the following command:
```azurecli az arcappliance logs <provider> --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX ```
-## Az Arcappliance prepare fails when deploying to VMware
+## `az arcappliance prepare` fails when deploying to VMware
-The **arcappliance** extension for Azure CLI enables a prepare command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
+The `arcappliance` extension for Azure CLI enables a [prepare](/cli/azure/arcappliance/prepare) command, which enables you to download an OVA template to your vSphere environment. This OVA file is used to deploy the Azure Arc resource bridge. The `az arcappliance prepare` command uses the vSphere SDK and can result in the following error:
```azurecli
$ az arcappliance prepare vmware --config-file <path to config>
value out of range.
### Cause
-This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it is a 32-bit Windows Installer package. However, the Azure CLI `Az arcappliance` extension needs to run in a 64-bit context.
+This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. When you install the Azure CLI, it's a 32-bit Windows Installer package. However, the Azure CLI `az arcappliance` extension needs to run in a 64-bit context.
### Resolution
Perform the following steps to configure your client machine with the Azure CLI
1. Verify Python is installed correctly by running `py` in a Command Prompt. 1. From an elevated PowerShell console, run `pip install azure-cli` to install the Azure CLI from PyPI.
-After completing these steps, in a new PowerShell console you can get started using the Azure Arc appliance CLI extension.
+After you complete these steps, in a new PowerShell console you can get started using the Azure Arc appliance CLI extension.
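As a hedged recap, the sequence looks like the following sketch, assuming 64-bit Python is already installed:

```console
# Verify the 64-bit Python installation, install the Azure CLI from PyPI,
# then add the Azure Arc appliance extension.
py --version
pip install azure-cli
az extension add --name arcappliance
```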
## Azure Arc resource bridge (preview) is unreachable Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services.
-Intermittently, the resource bridge (preview) can lose the reserved IP configuration. This is due to the behavior described in [loss of VIPs when systemd-networkd is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address is not assigned to the Azure Arc resource bridge (preview) VM, any call to the resource bridge API server will fail. As a result you are unable to create any new resource through the resource bridge (preview), ranging from connecting to Azure Arc private cloud, create a custom location, create a VM, etc.
+Intermittently, the resource bridge (preview) can lose the reserved IP configuration. This is due to the behavior described in [loss of VIPs when systemd-networkd is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address isn't assigned to the Azure Arc resource bridge (preview) VM, any call to the resource bridge API server will fail. As a result, you can't create any new resources through the resource bridge (preview), such as connecting to the Azure Arc private cloud, creating a custom location, or creating a VM.
Another possible cause is slow disk access. Azure Arc resource bridge uses etcd, which requires a latency of 10 ms or less, per [this recommendation](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, it can impact operations and cause failures.
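To check whether your disk meets that latency target, you can run the benchmark that the etcd project suggests. This is a sketch that assumes `fio` is installed on the node and is run against the directory backing etcd:

```console
# Writes small blocks with fdatasync, mimicking etcd's WAL write pattern.
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=etcd-disk-check
```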
Reboot the resource bridge (preview) VM and it should recover its IP address. If
## Resource bridge cannot be updated
-In this release, all the parameters are specified at time of creation. When there is a need to update the Azure Arc resource bridge, you need to delete it and redeploy it again.
+In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again.
-For example, if you specified the wrong location, or subscription during deployment, later the resource creation fails. If you only try to recreate the resource without redeploying the resource bridge VM, you will see the status stuck at *WaitForHeartBeat*.
+For example, if you specified the wrong location or subscription during deployment, resource creation later fails. If you only try to recreate the resource without redeploying the resource bridge VM, you'll see the status stuck at `WaitForHeartBeat`.
### Resolution
Delete the appliance, update the appliance YAML file, then redeploy and create t
## Token refresh error
-When you run the Azure CLI commands the following error may be returned, *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign into Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again.
+When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign into Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again.
### Resolution
-Sign into Azure again using the `Az login` command.
+Sign into Azure again using the `az login` command.
## Next steps
If you don't see your problem here or you can't resolve your issue, try one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* [Open an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
azure-arc Manage Vm Extensions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-cli.md
az connectedmachine extension create --resource-group "resourceGroupName" --mach
The following example enables the Microsoft Antimalware extension on an Azure Arc-enabled Windows server:

```azurecli
-az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" --publisher "Microsoft.Azure.Security" --type "IaaSAntimalware" --name "IaaSAntimalware" --settings '{"AntimalwareEnabled": true}'
+az connectedmachine extension create --resource-group "resourceGroupName" --machine-name "myMachineName" --location "regionName" --publisher "Microsoft.Azure.Security" --type "IaaSAntimalware" --name "IaaSAntimalware" --settings '"{\"AntimalwareEnabled\": \"true\"}"'
```

## List extensions installed
azure-functions Azure Functions Az Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/azure-functions-az-redundancy.md
# Azure Functions support for availability zone redundancy
-Availability zone (AZ) support for Azure Functions is now available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant Functions application automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../app-service/how-to-zone-redundancy.md).
+Availability zone (AZ) support for Azure Functions is now available on Premium (Elastic Premium) and Dedicated (App Service) plans. A zone-redundant Functions application automatically balances its instances between availability zones for higher availability. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, refer [here](../availability-zones/migrate-app-service.md).
[!INCLUDE [functions-premium-plan-note](../../includes/functions-premium-plan-note.md)]
When hosting in a zone-redundant Premium plan, the following requirements must b
- You must use a [zone redundant storage account (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) for your function app's [storage account](storage-considerations.md#storage-account-requirements). If you use a different type of storage account, Functions may show unexpected behavior during a zonal outage.
- Both Windows and Linux are supported.
-- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. Instructions on zone redundancy with Dedicated (App Service) hosting plan can be found [in this article](../app-service/how-to-zone-redundancy.md).
+- Must be hosted on an [Elastic Premium](functions-premium-plan.md) or Dedicated hosting plan. Instructions on zone redundancy with Dedicated (App Service) hosting plan can be found [in this article](../availability-zones/migrate-app-service.md).
- Availability zone (AZ) support isn't currently available for function apps on [Consumption](consumption-plan.md) plans.
- Zone redundant plans must specify a minimum instance count of three.
- Function apps hosted on a Premium plan must also have a minimum [always ready instances](functions-premium-plan.md#always-ready-instances) count of three.
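As an example, a zone-redundant Elastic Premium plan might be created with the Azure CLI as in the following sketch. The names and region are placeholders, and this assumes your CLI version supports the `--zone-redundant` flag:

```azurecli
az functionapp plan create --resource-group MyResourceGroup --name MyPremiumPlan --location eastus2 --sku EP1 --min-instances 3 --zone-redundant
```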
There are currently two ways to deploy a zone-redundant premium plan and functio
| Setting | Suggested value | Notes for Zone Redundancy |
| --- | --- | --- |
| **Storage Account** | A [zone-redundant storage account](storage-considerations.md#storage-account-requirements) | As mentioned above in the [requirements](#requirements) section, we strongly recommend using a zone-redundant storage account for your zone redundant function app. |
- | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../app-service/how-to-zone-redundancy.md). |
+ | **Plan Type** | Functions Premium | This article details how to create a zone redundant app in a Premium plan. Zone redundancy isn't currently available in Consumption plans. Information on zone redundancy on app service plans can be found [in this article](../availability-zones/migrate-app-service.md). |
| **Zone Redundancy** | Enabled | This field populates the flag that determines if your app is zone redundant or not. You won't be able to select `Enabled` unless you have chosen a region supporting zone redundancy, as mentioned in step 2. |

![Screenshot of Hosting tab of function app create page.](./media/functions-az-redundancy\azure-functions-hosting-az.png)
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md
Title: Create a Python function from the command line - Azure Functions description: Learn how to create a Python function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 11/03/2020 Last updated : 06/15/2022 ms.devlang: python-+ adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B
In this article, you use command-line tools to create a Python function that res
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
-There is also a [Visual Studio Code-based version](create-first-function-vs-code-python.md) of this article.
+There's also a [Visual Studio Code-based version](create-first-function-vs-code-python.md) of this article.
## Configure your local environment
-Before you begin, you must have the following:
+Before you begin, you must have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
Before you begin, you must have the following:
+ One of the following tools for creating Azure resources:
- + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
+ + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
- + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
+ + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later.
-+ [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)
++ [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version).

### Prerequisite check
-Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources:
+Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources.
# [Azure CLI](#tab/azure-cli)
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.x.

+ Run `az --version` to check that the Azure CLI version is 2.4 or later.
Verify your prerequisites, which depend on whether you are using Azure CLI or Az
# [Azure PowerShell](#tab/azure-powershell)
-+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x.
++ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools version is 4.x.

+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.
Verify your prerequisites, which depend on whether you are using Azure CLI or Az
## <a name="create-venv"></a>Create and activate a virtual environment
-In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Be sure to use Python 3.8, 3.7 or 3.6, which are supported by Azure Functions.
+In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Make sure that you're using Python 3.8, 3.7 or 3.6, which are supported by Azure Functions.
# [bash](#tab/bash)
You run all subsequent commands in this activated virtual environment.
In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function.
-1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj* with the specified runtime:
+1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime.
    ```console
    func init LocalFunctionProj --python
    ```
-1. Navigate into the project folder:
+1. Go to the project folder.
    ```console
    cd LocalFunctionProj
    ```
- This folder contains various files for the project, including configurations files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
    This folder contains various files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).

    ```console
    func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous"
    ```
+
    `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*.
-
- Get the list of templates by using the following command.
-
+
+ Get the list of templates by using the following command:
+ ```console func templates list -l python ```
-
### (Optional) Examine the file contents
If desired, you can skip to [Run the function locally](#run-the-function-locally
:::code language="python" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/__init__.py":::
-For an HTTP trigger, the function receives request data in the variable `req` as defined in *function.json*. `req` is an instance of the [azure.functions.HttpRequest class](/python/api/azure-functions/azure.functions.httprequest). The return object, defined as `$return` in *function.json*, is an instance of [azure.functions.HttpResponse class](/python/api/azure-functions/azure.functions.httpresponse). To learn more, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=python).
+For an HTTP trigger, the function receives request data in the variable `req` as defined in *function.json*. `req` is an instance of the [azure.functions.HttpRequest class](/python/api/azure-functions/azure.functions.httprequest). The return object, defined as `$return` in *function.json*, is an instance of [azure.functions.HttpResponse class](/python/api/azure-functions/azure.functions.httpresponse). For more information, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=python).
#### function.json *function.json* is a configuration file that defines the input and output `bindings` for the function, including the trigger type.
-You can change `scriptFile` to invoke a different Python file if desired.
+If desired, you can change `scriptFile` to invoke a different Python file.
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json":::
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
Before you can deploy your function code to Azure, you need to create three resources:

-- A resource group, which is a logical container for related resources.
-- A Storage account, which maintains state and other information about your projects.
-- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
++ A resource group, which is a logical container for related resources.
++ A storage account, which maintains the state and other information about your projects.
++ A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.

Use the following commands to create these items. Both Azure CLI and PowerShell are supported.
-1. If you haven't done so already, sign in to Azure:
+1. If you haven't done so already, sign in to Azure.
    # [Azure CLI](#tab/azure-cli)

    ```azurecli
Use the following commands to create these items. Both Azure CLI and PowerShell
-1. When using the Azure CLI, you can turn on the `param-persist` option that automatically tracks the names of your created resources. To learn more, see [Azure CLI persisted parameter](/cli/azure/param-persist-howto).
+1. When you're using the Azure CLI, you can turn on the `param-persist` option that automatically tracks the names of your created resources. For more information, see [Azure CLI persisted parameter](/cli/azure/param-persist-howto).
    # [Azure CLI](#tab/azure-cli)

    ```azurecli
    az config param-persist on
    ```
+
    # [Azure PowerShell](#tab/azure-powershell)

    This feature isn't available in Azure PowerShell.
-1. Create a resource group named `AzureFunctionsQuickstart-rg` in your chosen region:
+1. Create a resource group named `AzureFunctionsQuickstart-rg` in your chosen region.
# [Azure CLI](#tab/azure-cli)
Use the following commands to create these items. Both Azure CLI and PowerShell
    > [!NOTE]
    > You can't host Linux and Windows apps in the same resource group. If you have an existing resource group named `AzureFunctionsQuickstart-rg` with a Windows function app or web app, you must use a different resource group.
-1. Create a general-purpose storage account in your resource group and region:
+1. Create a general-purpose storage account in your resource group and region.
# [Azure CLI](#tab/azure-cli)
Use the following commands to create these items. Both Azure CLI and PowerShell
- In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must contain three to 24 characters numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements).
 In the previous example, replace `<STORAGE_NAME>` with a name that's appropriate to you and unique in Azure Storage. Names must be between 3 and 24 characters, and can contain numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account [supported by Functions](storage-considerations.md#storage-account-requirements).
The storage account incurs only a few cents (USD) for this quickstart.
-1. Create the function app in Azure:
+1. Create the function app in Azure.
# [Azure CLI](#tab/azure-cli)
Use the following commands to create these items. Both Azure CLI and PowerShell
    ```azurecli
    az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
    ```
- The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you are using Python 3.8, 3.7, or 3.6, change `--runtime-version` to `3.8`, `3.7`, or `3.6`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
+ The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. If you're using Python 3.8, 3.7, or 3.6, change `--runtime-version` to `3.8`, `3.7`, or `3.6`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
# [Azure PowerShell](#tab/azure-powershell)
Use the following commands to create these items. Both Azure CLI and PowerShell
- In the previous example, replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
+ In the previous example, replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
Use the following commands to create these items. Both Azure CLI and PowerShell
[!INCLUDE [functions-run-remote-azure-cli](../../includes/functions-run-remote-azure-cli.md)]
-Run the following command to view near real-time [streaming logs](functions-run-local.md#enable-streaming-logs) in Application Insights in the Azure portal:
+Run the following command to view near real-time [streaming logs](functions-run-local.md#enable-streaming-logs) in Application Insights in the Azure portal.
```console
func azure functionapp logstream <APP_NAME> --browser
```
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
Title: "Create a C# function using Visual Studio Code - Azure Functions" description: "Learn how to create a C# function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. " Previously updated : 09/14/2021 Last updated : 06/11/2022 ms.devlang: csharp adobe-target: true
adobe-target-content: ./create-first-function-vs-code-csharp-ieux
# Quickstart: Create a C# function in Azure using Visual Studio Code
-In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
-
-This article creates an HTTP triggered function that runs on .NET 6.0. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
+By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on Long Term Support (LTS) versions of .NET, such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in preview) [in an isolated process](dotnet-isolated-process-guide.md), see the [alternate version of this article](create-first-function-vs-code-csharp.md?tabs=isolated-process).
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
You also need an Azure account with an active subscription. [Create an account f
In this section, you use Visual Studio Code to create a local Azure Functions project in C#. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
-
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+1. Choose the Azure icon in the Activity bar, then in the **Workspace (local)** area, select the **+** button, choose **Create Function** in the dropdown. When prompted, choose **Create new project**.
-1. Choose a directory location for your project workspace and choose **Select**.
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Select the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
- # [In-process](#tab/in-process)
+ # [.NET 6](#tab/in-process)
|Prompt|Selection| |--|--|
- |**Select a language for your function project**|Choose `C#`.|
- | **Select a .NET runtime** | Choose `.NET 6`.|
+ |**Select a language**|Choose `C#`.|
+ |**Select a .NET runtime** | Select `.NET 6`.|
|**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Provide a namespace** | Type `My.Functions`. | |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ |**Select how you would like to open your project**|Select `Add to workspace`.|
- # [Isolated process](#tab/isolated-process)
+ # [.NET 6 Isolated](#tab/isolated-process)
|Prompt|Selection| |--|--|
- |**Select a language for your function project**|Choose `C#`.|
+ |**Select a language**|Choose `C#`.|
| **Select a .NET runtime** | Choose `.NET 6 Isolated`.| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.|
In this section, you use Visual Studio Code to create a local Azure Functions pr
    > + Make sure you have installed the .NET 6.0 SDK.
    > + Press F1 and type `Preferences: Open user settings`, then search for `Azure Functions: Project Runtime` and change the default runtime version to `~4`.
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code-csharp](../../includes/functions-run-function-test-local-vs-code-csharp.md)]
-After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
+After checking that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
After you've verified that the function runs correctly on your local computer, i
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
-# [In-process](#tab/in-process)
+# [.NET 6](#tab/in-process)
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=in-process) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
-# [Isolated process](#tab/isolated-process)
+# [.NET 6 Isolated](#tab/isolated-process)
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process)
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md
Title: Create a Java function using Visual Studio Code - Azure Functions description: Learn how to create a Java function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/03/2020 Last updated : 06/22/2022 adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B
Before you get started, make sure you have the following requirements in place:
In this section, you use Visual Studio Code to create a local Azure Functions project in Java. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, choose **Create Function** in the dropdown. When prompted, choose **Create new project**.
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `Java`.
-
- + **Select a version of Java**: Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally.
-
- + **Provide a group ID**: Choose `com.function`.
-
- + **Provide an artifact ID**: Choose `myFunction`.
-
- + **Provide a version**: Choose `1.0-SNAPSHOT`.
-
- + **Provide a package name**: Choose `com.function`.
-
- + **Provide an app name**: Choose `myFunction-12345`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
-
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**| Choose `Java`.|
+ |**Select a version of Java**| Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
+ | **Provide a group ID** | Choose `com.function`. |
+ | **Provide an artifact ID** | Choose `myFunction`. |
+ | **Provide a version** | Choose `1.0-SNAPSHOT`. |
+ | **Provide a package name** | Choose `com.function`. |
+ | **Provide an app name** | Choose `myFunction-12345`. |
+ |**Select a template for your project's first function**| Choose `HTTP trigger`.|
+ | **Select the build tool for Java project** | Choose `Maven`. |
+ |**Provide a function name**| Enter `HttpExample`.|
+ |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**| Choose `Add to workspace`.|
+
+1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=java#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
Before you get started, make sure you have the following requirements in place:
In this section, you use Visual Studio Code to create a local Azure Functions project in JavaScript. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, and then choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of the Create new project window.":::
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
In this section, you use Visual Studio Code to create a local Azure Functions project in JavaScript. Later in this article, you'll publish your function code to Azure.
|**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
|**Select how you would like to open your project**|Choose `Add to workspace`.|
- Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=javascript#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
-<a name="Publish the project to Azure"></a>
-
-## Deploy the project to Azure
-
-In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
-
-> [!IMPORTANT]
-> Deploying to an existing function app overwrites the content of that app in Azure.
--
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
-
- ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
-
-1. Provide the following information at the prompts:
-
- |Prompt| Selection|
- |--|--|
- |**Select Function App in Azure**|Choose `+ Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)|
- |**Enter a globally unique name for the function app**|Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.|
- |**Select a runtime**|Choose the version of Node.js you've been running on locally. You can use the `node --version` command to check your version.|
- |**Select a location for new resources**|For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.|
-
- The extension shows the status of individual resources as they are being created in Azure in the notification area.
-
- :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
-
- When completed, the following Azure resources are created in your subscription, using names based on your function app name:
-
- [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
-
-1. A notification is displayed after your function app is created and the deployment package is applied.
-
- [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
-
-1. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
-
- ![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]

## Change the code and redeploy to Azure
-1. In the VSCode Explorer view, select the `./HttpExample/index.js` file.
+1. In Visual Studio Code, in the Explorer view, select the `./HttpExample/index.js` file.
+ 1. Replace the file with the following code to construct a JSON object and return it.

    ```javascript
In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
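    // A sketch of the replacement handler, assumed for illustration rather
    // than the article's verbatim sample: read "name" and "sport" from the
    // request and return them in a JSON object.
    module.exports = async function (context, req) {
        const name = (req.query.name || (req.body && req.body.name));
        const sport = (req.query.sport || (req.body && req.body.sport));

        // Construct the JSON object and return it in the response body.
        context.res = {
            headers: { "Content-Type": "application/json" },
            body: { name: name, sport: sport }
        };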
    }
    ```

1. [Rerun the function](#run-the-function-locally) app locally.

1. In the prompt **Enter request body**, change the request message body to `{ "name": "Tom","sport":"basketball" }`. Press Enter to send this request message to your function.

1. View the response in the notification:

    ```json
Use the table below to resolve the most common issues encountered when using this quickstart.
|Problem|Solution|
|--|--|
|Can't create a local function project?|Make sure you have the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed.|
-|Can't run the function locally?|Make sure you have the [Azure Functions Core Tools installed](functions-run-local.md?tabs=windows%2Ccsharp%2Cbash) installed. <br/>When running on Windows, make sure that the default terminal shell for Visual Studio Code isn't set to WSL Bash.|
+|Can't run the function locally?|Make sure you have the [Azure Functions Core Tools](functions-run-local.md?tabs=node) installed. <br/>When running on Windows, make sure that the default terminal shell for Visual Studio Code isn't set to WSL Bash.|
|Can't deploy function to Azure?|Review the Output for error information. The bell icon in the lower right corner is another way to view the output. Did you publish to an existing function app? That action overwrites the content of that app in Azure.|
|Couldn't run the cloud-based Function app?|Remember to use the query string to send in parameters.|
azure-functions Create First Function Vs Code Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md
Title: Create a function in Go or Rust using Visual Studio Code - Azure Functions description: Learn how to create a Go function as an Azure Functions custom handler, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 12/4/2020 Last updated : 06/22/2022 ms.devlang: golang, rust
Before you get started, make sure you have the following requirements in place:
In this section, you use Visual Studio Code to create a local Azure Functions custom handlers project. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, and then choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of the Create new project window.":::
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `Custom`.
-
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `Custom Handler`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Provide a function name**|Type `HttpExample`.|
+ |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger function. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer.
## Create and build your function
In this section, you publish your project to Azure in a function app running Linux.
-## Publish the project to Azure
+## Create the function app in Azure
-In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
+In this section, you create a function app and related resources in your Azure subscription.
-> [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
+1. Choose the Azure icon in the Activity bar. Then in the **Resources** area, select the **+** icon and choose the **Create Function App in Azure** option.
-
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
-
- ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
+ ![Create a resource in your Azure subscription](../../includes/media/functions-publish-project-vscode/function-app-create-resource.png)
1. Provide the following information at the prompts:
- + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
-
- + **Select subscription**: Choose the subscription to use. You won't see this if you only have one subscription.
-
- + **Select Function App in Azure**: Choose `+ Create new Function App (advanced)`.
-
- > [!IMPORTANT]
- > The `advanced` option lets you choose the specific operating system on which your function app runs in Azure, which in this case is Linux.
+ |Prompt|Selection|
+ |--|--|
+ |**Select subscription**| Choose the subscription to use. You won't see this when you have only one subscription visible under **Resources**. |
+ |**Enter a globally unique name for the function app**| Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.|
+ |**Select a runtime stack**| Choose **Custom Handler**. |
+ |**Select a location for new resources**| For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.|
- ![VS Code - Select advanced create new function app](./media/functions-create-first-function-vs-code/functions-vscode-create-azure-advanced.png)
+ The extension shows the status of individual resources in the **Azure: Activity Log** panel as they're created in Azure.
- + **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
+ ![Log of Azure resource creation](../../includes/media/functions-publish-project-vscode/resource-activity-log.png)
- + **Select a runtime stack**: Choose `Custom Handler`.
-
- + **Select an OS**: Choose `Linux`.
-
- + **Select a hosting plan**: Choose `Consumption`.
-
- + **Select a resource group**: Choose `+ Create new resource group`. Enter a name for the resource group. This name must be unique within your Azure subscription. You can use the name suggested in the prompt.
-
- + **Select a storage account**: Choose `+ Create new storage account`. This name must be globally unique within Azure. You can use the name suggested in the prompt.
-
- + **Select an Application Insights resource**: Choose `+ Create Application Insights resource`. This name must be globally unique within Azure. You can use the name suggested in the prompt.
-
- + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.The extension shows the status of individual resources as they are being created in Azure in the notification area.
-
- :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
-
-1. When completed, the following Azure resources are created in your subscription:
+1. When the creation is complete, the following Azure resources are created in your subscription. The resources are named based on your function app name:
[!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
- A notification is displayed after your function app is created and the deployment package is applied.
+ A notification is displayed after your function app is created and the deployment package is applied.
+
+ [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
-4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
+## Deploy the project to Azure
- ![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
azure-functions Create First Function Vs Code Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md
Title: Create a PowerShell function using Visual Studio Code - Azure Functions description: Learn how to create a PowerShell function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/04/2020 Last updated : 06/22/2022 ms.devlang: powershell
Before you get started, make sure you have the following requirements in place:
In this section, you use Visual Studio Code to create a local Azure Functions project in PowerShell. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, and then choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of the Create new project window.":::
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `PowerShell`.
-
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `PowerShell`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Provide a function name**|Type `HttpExample`.|
+ |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=powershell#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]

[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
Title: Create a Python function using Visual Studio Code - Azure Functions description: Learn how to create a Python function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/04/2020 Last updated : 06/15/2022 ms.devlang: python adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
There's also a [CLI-based version](create-first-function-cli-python.md) of this article.
## Configure your environment
-Before you get started, make sure you have the following requirements in place:
+Before you begin, make sure that you have the following requirements in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).

+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x.
-+ [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download).
++ Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download).

+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-+ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
++ The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.

+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
Before you get started, make sure you have the following requirements in place:
In this section, you use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, and then choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of the Create new project window.":::
-1. Choose a directory location for your project workspace and choose **Select**. It is recommended that you create a new folder or choose an empty folder as the project workspace.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `Python`.
-
- + **Select a Python alias to create a virtual environment**: Choose the location of your Python interpreter.
- If the location isn't shown, type in the full path to your Python binary.
-
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language**| Choose `Python`.|
+ |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|
+ |**Select a template for your project's first function**| Choose `HTTP trigger`.|
+ |**Provide a function name**| Enter `HttpExample`.|
+ |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**| Choose `Add to workspace`.|
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
-
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
-## Publish the project to Azure
-
-In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
-
-> [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
-
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
-
- ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
-
-1. Provide the following information at the prompts:
-
- + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app.
- You won't see this if you already have a valid function app opened.
-
- + **Select subscription**: Choose the subscription to use.
- You won't see this if you only have one subscription.
-
- + **Select Function App in Azure**: Choose `+ Create new Function App`.
- (Don't choose the `Advanced` option, which isn't covered in this article.)
-
- + **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
-
- + **Select a runtime**: Choose the version of Python you've been running on locally. You can use the `python --version` command to check your version.
-
- + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
-
- The extension shows the status of individual resources as they are being created in Azure in the notification area.
-
- :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
-
-1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
-
- [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
-
- A notification is displayed after your function app is created and the deployment package is applied.
-
- [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
-
-4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
-
- ![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
## Next steps
-You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=python) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=python) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=python).
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-python)
azure-functions Create First Function Vs Code Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md
Title: Create a TypeScript function using Visual Studio Code - Azure Functions description: Learn how to create a TypeScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/18/2021 Last updated : 06/18/2022 ms.devlang: typescript
Before you get started, make sure you have the following requirements in place:
In this section, you use Visual Studio Code to create a local Azure Functions project in TypeScript. Later in this article, you'll publish your function code to Azure.
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, and then choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
- ![Choose Create a new project](media/functions-create-first-function-vs-code/create-new-project.png)
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of the Create new project window.":::
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `TypeScript`.
-
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `TypeScript`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Provide a function name**|Type `HttpExample`.|
+ |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
- + **Select how you would like to open your project**: Choose `Add to workspace`.
-
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
+ Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=typescript#generated-project-files).
[!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)]
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
-## Publish the project to Azure
-
-In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
-
-> [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
-
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
-
- ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
-
-1. Provide the following information at the prompts:
-
- + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
-
- + **Select subscription**: Choose the subscription to use. You won't see this if you only have one subscription.
-
- + **Select Function App in Azure**: Choose `+ Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)
-
- + **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
-
- + **Select a runtime**: Choose the version of Node.js you've been running on locally. You can use the `node --version` command to check your version.
-
- + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
-
- The extension shows the status of individual resources as they are being created in Azure in the notification area.
-
- :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
-
-1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
-
- [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
-
- A notification is displayed after your function app is created and the deployment package is applied.
-
- [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
-
-4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
-
- ![Create complete notification](./media/functions-create-first-function-vs-code/function-create-notifications.png)
[!INCLUDE [functions-vs-code-run-remote](../../includes/functions-vs-code-run-remote.md)]
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
A .NET isolated function project is basically a .NET console app project that targets a supported .NET runtime.
+ Program.cs file that's the entry point for the app.

+ Any code files [defining your functions](#bindings).
-For complete examples, see the [.NET 6 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 isolated sample project](https://go.microsoft.com/fwlink/p/?linkid=2197310).
+For complete examples, see the [.NET 6 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/NetFxWorker).
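As a rough sketch, assuming the default worker setup rather than a verbatim copy of the linked samples, the Program.cs entry point can be as small as this:

```csharp
using Microsoft.Extensions.Hosting;

// Build and run the isolated worker host. ConfigureFunctionsWorkerDefaults
// registers the Functions worker and discovers the functions in this assembly.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

host.Run();
```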
> [!NOTE]
> To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting. To support [zip deployment](deployment-zip-push.md) and [running from the deployment package](run-functions-from-deployment-package.md) on Linux, you also need to update the `linuxFxVersion` site config setting to `DOTNET-ISOLATED|6.0`. To learn more, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-first-csharp.md
Title: "Create your first durable function in Azure using C#"
description: Create and publish an Azure Durable Function using Visual Studio or Visual Studio Code. Previously updated : 03/18/2020 Last updated : 06/15/2022 zone_pivot_groups: code-editors-set-one ms.devlang: csharp

# Create your first durable function in C#
-*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you.
+Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you.
::: zone pivot="code-editor-vscode"
-In this article, you learn how to use Visual Studio Code to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the VS Code [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions).
+In this article, you learn how to use Visual Studio Code to locally create and test a "hello world" durable function. This function orchestrates and chains together calls to other functions. You can then publish the function code to Azure. These tools are available as part of the Visual Studio Code [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions).
-![Screenshot shows a Visual Studio Code window with a durable function.](./media/durable-functions-create-first-csharp/functions-vscode-complete.png)
## Prerequisites
To complete this tutorial:
* Install [Visual Studio Code](https://code.visualstudio.com/download).
-* Install the following VS Code extensions:
- - [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions)
- - [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+* Install the following Visual Studio Code extensions:
+ * [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions)
+ * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
-* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md).
+* Make sure that you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md).
* Durable Functions require an Azure storage account. You need an Azure subscription.
To complete this tutorial:
## <a name="create-an-azure-functions-project"></a>Create your local project
-In this section, you use Visual Studio Code to create a local Azure Functions project.
+In this section, you use Visual Studio Code to create a local Azure Functions project.
-1. In Visual Studio Code, press F1 (or Ctrl/Cmd+Shift+P) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`.
+1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`.
- ![Create a function project](media/durable-functions-create-first-csharp/functions-vscode-create-project.png)
+ :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-create-project.png" alt-text="Screenshot of the create function project window.":::
1. Choose an empty folder location for your project and choose **Select**.
-1. Following the prompts, provide the following information:
+1. Follow the prompts and provide the following information:
| Prompt | Value | Description |
| ------ | ----- | ----------- |
| Select a language for your function app project | C# | Create a local C# Functions project. |
| Select a version | Azure Functions v3 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. |
| Select a template for your project's first function | Skip for now | |
- | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. |
+ | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. |
-Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
+Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
## Add functions to the app
The following steps use a template to create the durable function code in your p
1. In the command palette, search for and select `Azure Functions: Create Function...`.
-1. Following the prompts, provide the following information:
+1. Follow the prompts and provide the following information:
| Prompt | Value | Description |
| ------ | ----- | ----------- |
The following steps use a template to create the durable function code in your project.
| Provide a function name | HelloOrchestration | Name of the class in which functions are created |
| Provide a namespace | Company.Function | Namespace for the generated class |
-1. When VS Code prompts you to select a storage account, choose **Select storage account**. Following the prompts, provide the following information to create a new storage account in Azure.
+1. When Visual Studio Code prompts you to select a storage account, choose **Select storage account**. Follow the prompts and provide the following information to create a new storage account in Azure:
| Prompt | Value | Description |
| ------ | ----- | ----------- |
The following steps use a template to create the durable function code in your project.
| Select a resource group | *unique name* | Name of the resource group to create |
| Select a location | *region* | Select a region close to you |
-A class containing the new functions is added to the project. VS Code also adds the storage account connection string to *local.settings.json* and a reference to the [`Microsoft.Azure.WebJobs.Extensions.DurableTask`](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) NuGet package to the *.csproj* project file.
+A class containing the new functions is added to the project. Visual Studio Code also adds the storage account connection string to *local.settings.json* and a reference to the [`Microsoft.Azure.WebJobs.Extensions.DurableTask`](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) NuGet package to the *.csproj* project file.
Open the new *HelloOrchestration.cs* file to view the contents. This durable function is a simple function chaining example with the following methods:

| Method | FunctionName | Description |
| ------ | ------------ | ----------- |
-| **`RunOrchestrator`** | `HelloOrchestration` | Manages the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. |
-| **`SayHello`** | `HelloOrchestration_Hello` | The function returns a hello. It is the function that contains the business logic that is being orchestrated. |
+| **`RunOrchestrator`** | `HelloOrchestration` | Manages the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. |
+| **`SayHello`** | `HelloOrchestration_Hello` | The function returns a hello. It's the function that contains the business logic that is being orchestrated. |
| **`HttpStart`** | `HelloOrchestration_HttpStart` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. |
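For orientation, here's a minimal sketch of what the generated chaining code in *HelloOrchestration.cs* typically looks like. It's abbreviated for illustration, not a verbatim copy of the template:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloOrchestration
{
    // Orchestrator: calls the activity three times and collects the results.
    [FunctionName("HelloOrchestration")]
    public static async Task<List<string>> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var outputs = new List<string>();
        outputs.Add(await context.CallActivityAsync<string>("HelloOrchestration_Hello", "Tokyo"));
        outputs.Add(await context.CallActivityAsync<string>("HelloOrchestration_Hello", "Seattle"));
        outputs.Add(await context.CallActivityAsync<string>("HelloOrchestration_Hello", "London"));
        return outputs;
    }

    // Activity: the business logic being orchestrated.
    [FunctionName("HelloOrchestration_Hello")]
    public static string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return $"Hello {name}!";
    }

    // HTTP starter: kicks off an orchestration instance and returns a
    // check-status response that includes the management URLs.
    [FunctionName("HelloOrchestration_HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        string instanceId = await starter.StartNewAsync("HelloOrchestration");
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
```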
Now that you've created your function project and a durable function, you can test it on your local computer.
Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code.
-1. To test your function, set a breakpoint in the `SayHello` activity function code and press F5 to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.
+1. To test your function, set a breakpoint in the `SayHello` activity function code and press <kbd>F5</kbd> to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.
> [!NOTE]
- > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging.
+ > For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging).
1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function.
- ![Azure local output](media/durable-functions-create-first-csharp/functions-vscode-f5.png)
+ :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-f5.png" alt-text="Screenshot of the Azure local output window.":::
-1. Using a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint.
+1. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), and then send an HTTP POST request to the URL endpoint.
- The response is the initial result from the HTTP function letting us know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
+ The response is the HTTP function's initial result, letting us know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
-1. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request.
+1. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request.
- The request will query the orchestration instance for the status. You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like:
+ The request will query the orchestration instance for the status. The response eventually shows that the instance has completed and includes the outputs or results of the durable function. It looks like:
    ```json
    {
Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code.
    }
    ```
-1. To stop debugging, press **Shift + F5** in VS Code.
+1. To stop debugging, press <kbd>Shift + F5</kbd> in Visual Studio Code.
After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.

[!INCLUDE [functions-publish-project-vscode](../../../includes/functions-publish-project-vscode.md)]

## Test your function in Azure
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in the following format:
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in the following format:
`https://<functionappname>.azurewebsites.net/api/HelloOrchestration_HttpStart`
-1. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
+1. Paste this new URL for the HTTP request into your browser's address bar. When you use the published app, you'll get the same status response as before.
## Next steps
You have used Visual Studio Code to create and publish a C# durable function app.
::: zone pivot="code-editor-visualstudio"
-In this article, you learn how to use Visual Studio 2022 to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2022.
+In this article, you learn how to use Visual Studio 2022 to locally create and test a "hello world" durable function. This function orchestrates and chains together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2022.
-![Screenshot shows a Visual Studio 2019 window with a durable function.](./media/durable-functions-create-first-csharp/functions-vs-complete.png)
## Prerequisites
To complete this tutorial:
* Install [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps differ.
-* Verify you have the [Azure Storage Emulator](../../storage/common/storage-use-emulator.md) installed and running.
+* Verify that you have the [Azure Storage Emulator](../../storage/common/storage-use-emulator.md) installed and running.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
The Azure Functions template creates a project that can be published to a function app in Azure.
1. In Visual Studio, select **New** > **Project** from the **File** menu.
-1. In the **Create a new project** dialog, search for `functions`, choose the **Azure Functions** template, and select **Next**.
+1. In the **Create a new project** dialog, search for `functions`, choose the **Azure Functions** template, and then select **Next**.
- ![New project dialog to create a function in Visual Studio](./media/durable-functions-create-first-csharp/functions-vs-new-project.png)
+ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-new-project.png" alt-text="Screenshot of the new project dialog to create a function in Visual Studio.":::
-1. Type a **Project name** for your project, and select **OK**. The project name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
+1. Enter a **Project name** for your project, and select **OK**. The project name must be valid as a C# namespace, so don't use underscores, hyphens, or nonalphanumeric characters.
1. Under **Additional information**, use the settings specified in the table that follows the image.
- ![Create a new Azure Functions Application dialog in Visual Studio](./media/durable-functions-create-first-csharp/functions-vs-new-function.png)
+ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-new-function.png" alt-text="Screenshot of the Create a new Azure Functions Application dialog in Visual Studio.":::
| Setting | Suggested value | Description |
| ------- | --------------- | ----------- |
The Azure Functions template creates a project that can be published to a function app in Azure.
| **Function** | Empty | Creates an empty function app. |
| **Storage account** | Storage Emulator | A storage account is required for durable function state management. |
-4. Select **Create** to create an empty function project. This project has the basic configuration files needed to run your functions.
+1. Select **Create** to create an empty function project. This project has the basic configuration files needed to run your functions.
## Add functions to the app
The following steps use a template to create the durable function code in your project.
1. Right-click the project in Visual Studio and select **Add** > **New Azure Function**.
- ![Add new function](./media/durable-functions-create-first-csharp/functions-vs-add-function.png)
+ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-add-function.png" alt-text="Screenshot of Add new function.":::
-1. Verify **Azure Function** is selected from the add menu, type a name for your C# file, and then select **Add**.
+1. Verify that **Azure Function** is selected from the add menu, enter a name for your C# file, and then select **Add**.
1. Select the **Durable Functions Orchestration** template and then select **Add**.
- ![Select durable template](./media/durable-functions-create-first-csharp/functions-vs-select-template.png)
+ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-select-durable-template.png" alt-text="Screenshot of Select durable template.":::
-A new durable function is added to the app. Open the new .cs file to view the contents. This durable function is a simple function chaining example with the following methods:
+A new durable function is added to the app. Open the new *.cs* file to view the contents. This durable function is a simple function chaining example with the following methods:
| Method | FunctionName | Description |
| ------ | ------------ | ----------- |
-| **`RunOrchestrator`** | `<file-name>` | Manages the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. |
-| **`SayHello`** | `<file-name>_Hello` | The function returns a hello. It is the function that contains the business logic that is being orchestrated. |
+| **`RunOrchestrator`** | `<file-name>` | Manages the durable orchestration. In this case, the orchestration starts, creates a list, and adds the results of three function calls to the list. When the three function calls are complete, it returns the list. |
+| **`SayHello`** | `<file-name>_Hello` | The function returns a hello. It's the function that contains the business logic that is being orchestrated. |
| **`HttpStart`** | `<file-name>_HttpStart` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. |
-Now that you've created your function project and a durable function, you can test it on your local computer.
+With your function project and durable function created, you can test the function on your local computer.
## Test the function locally
-Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You are prompted to install these tools the first time you start a function from Visual Studio.
+Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio.
-1. To test your function, press F5. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You may also need to enable a firewall exception so that the tools can handle HTTP requests.
+1. To test your function, press <kbd>F5</kbd>. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You may also need to enable a firewall exception so that the tools can handle HTTP requests.
-2. Copy the URL of your function from the Azure Functions runtime output.
+1. Copy the URL of your function from the Azure Functions runtime output.
- ![Azure local runtime](./media/durable-functions-create-first-csharp/functions-vs-debugging.png)
+ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-debugging.png" alt-text="Screenshot of Azure local runtime.":::
-3. Paste the URL for the HTTP request into your browser's address bar and execute the request. The following shows the response in the browser to the local GET request returned by the function:
+1. Paste the URL for the HTTP request into your browser's address bar and execute the request. The following screenshot shows the browser's response to the local GET request returned by the function:
- ![Screenshot shows a browser window with statusQueryGetUri called out.](./media/durable-functions-create-first-csharp/functions-vs-status.png)
+ :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-status.png" alt-text="Screenshot of the browser window with statusQueryGetUri called out.":::
- The response is the initial result from the HTTP function letting us know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
+ The response is the HTTP function's initial result, letting us know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
-4. Copy the URL value for `statusQueryGetUri` and pasting it in the browser's address bar and execute the request.
+1. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request.
- The request will query the orchestration instance for the status. You should get an eventual response that looks like the following. This output shows us the instance has completed, and includes the outputs or results of the durable function.
+ The request queries the orchestration instance for the status. Eventually, you should get a response like the following example. This output shows that the instance has completed and includes the outputs or results of the durable function. (A representative status payload appears after this procedure.)
```json
{
Azure Functions Core Tools lets you run an Azure Functions project on your local
}
```
-5. To stop debugging, press **Shift + F5**.
+1. To stop debugging, press <kbd>Shift + F5</kbd>.
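For reference, the completed status payload returned by `statusQueryGetUri` typically has the following shape. This is an illustrative sketch, not output captured from this quickstart: the `name` matches your orchestrator function, and the instance ID, timestamps, and outputs vary per run.

```json
{
  "name": "Function1",
  "instanceId": "d495cb2ac10d42e48cf263afdfdd6050",
  "runtimeStatus": "Completed",
  "input": null,
  "customStatus": null,
  "output": [
    "Hello Tokyo!",
    "Hello Seattle!",
    "Hello London!"
  ],
  "createdTime": "2022-06-22T05:32:05Z",
  "lastUpdatedTime": "2022-06-22T05:32:18Z"
}
```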
-After you have verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
+After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
## Publish the project to Azure
-You must have a function app in your Azure subscription before you can publish your project. You can create a function app right from Visual Studio.
+You must have a function app in your Azure subscription before publishing your project. You can create a function app right from Visual Studio.
[!INCLUDE [Publish the project to Azure](../../../includes/functions-vstools-publish.md)]
You must have a function app in your Azure subscription before you can publish y
1. Copy the base URL of the function app from the Publish profile page. Replace the `localhost:port` portion of the URL you used when testing the function locally with the new base URL.
- The URL that calls your durable function HTTP trigger should be in the following format:
+ The URL that calls your durable function HTTP trigger must be in the following format:
`https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>_HttpStart`
-2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
+2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response that you saw when testing locally.
## Next steps
azure-functions Quickstart Powershell Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-powershell-vscode.md
Title: Create your first durable function in Azure Functions using PowerShell
description: Create and publish an Azure Durable Function in PowerShell using Visual Studio Code. Previously updated : 08/10/2020 Last updated : 06/22/2022 ms.devlang: powershell
Azure Functions Core Tools lets you run an Azure Functions project on your local
After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
-
-## Publish the project to Azure
-
-In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
-
-> [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
--
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
-
- ![Publish your project to Azure](../../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
-
-1. Provide the following information at the prompts:
-
- + **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this if you already have a valid function app opened.
-
- + **Select subscription**: Choose the subscription to use. You won't see this if you only have one subscription.
-
- + **Select Function App in Azure**: Choose `+ Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)
-
- + **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
-
- + **Select a runtime**: Choose the version of PowerShell you've been running on locally. You can use the `pwsh -version` command to check your version.
-
- > [!NOTE]
- > The Azure Functions VS Code extension may not support PowerShell 7 yet. If PowerShell 7 is not available as an option, select PowerShell 6.x for now and [update the version manually](#update-function-app-ps7) after the function app has been created.
-
- + **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
-
-1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
-
- + A resource group, which is a logical container for related resources.
- + A standard Azure Storage account, which maintains state and other information about your projects.
- + A consumption plan, which defines the underlying host for your serverless function app.
- + A function app, which provides the environment for executing your function code. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources within the same hosting plan.
- + An Application Insights instance connected to the function app, which tracks usage of your serverless function.
-
- A notification is displayed after your function app is created and the deployment package is applied.
-
-1. <a name="update-function-app-ps7"></a>If you were unable to select *PowerShell 7* earlier when creating the function app, press F1 (or Ctrl/Cmd+Shift+P) to open the command palette. In the command palette, search for and select `Azure Functions: Upload Local Settings...`. Follow the prompts to select the function app you created. If prompted to overwrite existing settings, select *No to all*.
-
-1. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
-
- ![Create complete notification](../../../includes/media/functions-publish-project-vscode/function-create-notifications.png)
## Test your function in Azure
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
Title: Create your first durable function in Azure using Python
description: Create and publish an Azure Durable Function in Python using Visual Studio Code. Previously updated : 12/23/2020 Last updated : 06/15/2022 ms.devlang: python-+ # Create your first durable function in Python
-*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you.
+Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you.
-In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure.
+In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function orchestrates and chains together calls to other functions. You then publish the function code to Azure.
-![Running durable function in Azure](./media/quickstart-python-vscode/functions-vs-code-complete.png)
## Prerequisites
To complete this tutorial:
* Install [Visual Studio Code](https://code.visualstudio.com/download).
-* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension.
+* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) Visual Studio Code extension.
-* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md).
+* Make sure that you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md).
* Durable Functions requires an Azure storage account. You need an Azure subscription.
To complete this tutorial:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-## <a name="create-an-azure-functions-project"></a>Create your local project
+## <a name="create-an-azure-functions-project"></a>Create your local project
-In this section, you use Visual Studio Code to create a local Azure Functions project.
+In this section, you use Visual Studio Code to create a local Azure Functions project.
-1. In Visual Studio Code, press F1 (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`.
+1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`.
- ![Create function](media/quickstart-python-vscode/functions-create-project.png)
+ :::image type="content" source="media/quickstart-python-vscode/functions-create-project.png" alt-text="Screenshot of Create function window.":::
1. Choose an empty folder location for your project and choose **Select**.
-1. Following the prompts, provide the following information:
+1. Follow the prompts and provide the following information:
| Prompt | Value | Description |
| ------ | ----- | ----------- |
| Select a language for your function app project | Python | Create a local Python Functions project. |
| Select a version | Azure Functions v3 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. |
- | Python version | Python 3.6, 3.7, or 3.8 | VS Code will create a virtual environment with the version you select. |
+ | Python version | Python 3.6, 3.7, or 3.8 | Visual Studio Code will create a virtual environment with the version you select. |
| Select a template for your project's first function | Skip for now | |
- | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. |
+ | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. |
-Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
+Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files.
-A *requirements.txt* file is also created in the root folder. It specifies the Python packages needed to run your function app.
+A *requirements.txt* file is also created in the root folder. It specifies the Python packages required to run your function app.
## Install azure-functions-durable from PyPI
-When you created the project, the Azure Functions VS Code extension automatically created a virtual environment with your selected Python version. You will activate the virtual environment in a terminal and install some dependencies required by Azure Functions and Durable Functions.
+When you created the project, the Azure Functions Visual Studio Code extension automatically created a virtual environment with your selected Python version. Now you activate the virtual environment in a terminal and install the dependencies required by Azure Functions and Durable Functions.
-1. Open *requirements.txt* in the editor and change its content to the following:
+1. Open the *requirements.txt* file in the editor and change its content to the following:
```
azure-functions
When you created the project, the Azure Functions VS Code extension automaticall
1. Open the editor's integrated terminal in the current folder (<kbd>Ctrl+Shift+`</kbd>).
-1. In the integrated terminal, activate the virtual environment in the current folder:
+1. In the integrated terminal, activate the virtual environment in the current folder, depending on your operating system:
- **Linux or macOS**
+ # [Linux](#tab/linux)
```bash
source .venv/bin/activate
```
+ # [macOS](#tab/macos)
- **Windows**
+ ```bash
+ source .venv/bin/activate
+ ```
+
+ # [Windows](#tab/windows)
```powershell
.venv\scripts\activate
```
+
+
- ![Activate virtual environment](media/quickstart-python-vscode/activate-venv.png)
-
-1. In the integrated terminal where the virtual environment is activated, use pip to install the packages you just defined:
+1. In the integrated terminal where the virtual environment is activated, use pip to install the packages you just defined:
```bash
python -m pip install -r requirements.txt
When you created the project, the Azure Functions VS Code extension automaticall
A basic Durable Functions app contains three functions:
-* *Orchestrator function* - describes a workflow that orchestrates other functions.
-* *Activity function* - called by the orchestrator function, performs work, and optionally returns a value.
-* *Client function* - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function.
+* *Orchestrator function*: Describes a workflow that orchestrates other functions.
+* *Activity function*: Called by the orchestrator function to perform work, and optionally returns a value.
+* *Client function*: A regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. (A combined sketch of all three appears after this list.)
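Before you generate these functions from templates in the sections that follow, here's a minimal hand-written sketch of how the three pieces fit together with the `azure-functions-durable` package. In a real project each function lives in its own folder with its own *\_\_init__.py* and *function.json*; the names here are illustrative, and the generated templates may differ slightly.

```python
import azure.durable_functions as df
import azure.functions as func

# Orchestrator function: describes the workflow by calling the activity
# three times and collecting the results.
def orchestrator_function(context: df.DurableOrchestrationContext):
    result1 = yield context.call_activity("Hello", "Tokyo")
    result2 = yield context.call_activity("Hello", "Seattle")
    result3 = yield context.call_activity("Hello", "London")
    return [result1, result2, result3]

main = df.Orchestrator.create(orchestrator_function)

# Activity function: performs the actual work and returns a value.
def hello(name: str) -> str:
    return f"Hello {name}!"

# Client function: an HTTP-triggered function that starts an orchestration
# instance and returns the built-in status-check response.
async def http_start(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    instance_id = await client.start_new(req.route_params["functionName"], None, None)
    return client.create_check_status_response(req, instance_id)
```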
### Orchestrator function
You use a template to create the durable function code in your project.
1. In the command palette, search for and select `Azure Functions: Create Function...`.
-1. Following the prompts, provide the following information:
+1. Follow the prompts and provide the following information:
| Prompt | Value | Description | | | -- | -- |
Next, you'll add the referenced `Hello` activity function.
1. In the command palette, search for and select `Azure Functions: Create Function...`.
-1. Following the prompts, provide the following information:
+1. Follow the prompts and provide the following information:
| Prompt | Value | Description | | | -- | -- |
Finally, you'll add an HTTP triggered function that starts the orchestration.
1. In the command palette, search for and select `Azure Functions: Create Function...`.
-1. Following the prompts, provide the following information:
+1. Follow the prompts and provide the following information:
| Prompt | Value | Description | | | -- | -- |
You now have a Durable Functions app that can be run locally and deployed to Azu
Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. If you don't have it installed, you're prompted to install these tools the first time you start a function from Visual Studio Code.
-1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/\_\_init__.py*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.
+1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/\_\_init__.py*). Press <kbd>F5</kbd> or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel.
> [!NOTE]
- > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging.
+ > For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging).
-1. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**.
+1. Durable Functions requires an Azure storage account to run. When Visual Studio Code prompts you to select a storage account, select **Select storage account**.
- ![Create storage account](media/quickstart-python-vscode/functions-select-storage.png)
+ :::image type="content" source="media/quickstart-python-vscode/functions-select-storage.png" alt-text="Screenshot of how to create a storage account.":::
-1. Following the prompts, provide the following information to create a new storage account in Azure.
+1. Follow the prompts and provide the following information to create a new storage account in Azure:
| Prompt | Value | Description | | | -- | -- |
Azure Functions Core Tools lets you run an Azure Functions project on your local
1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function.
- ![Azure local output](media/quickstart-python-vscode/functions-f5.png)
+ :::image type="content" source="media/quickstart-python-vscode/functions-f5.png" alt-text="Screenshot of Azure local output.":::
-1. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
+1. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), to send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`.
- The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
+ The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.
-1. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request.
+1. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can continue to use Postman to issue the GET request. (A scripted alternative appears after this procedure.)
- The request will query the orchestration instance for the status. You should get an eventual response, which shows the instance has completed, and includes the outputs or results of the durable function. It looks like:
+ The request queries the orchestration instance for the status. Eventually, you should get a response that shows the instance has completed and includes the outputs or results of the durable function. It looks like:
```json
{
Azure Functions Core Tools lets you run an Azure Functions project on your local
}
```
-1. To stop debugging, press <kbd>Shift+F5</kbd> in VS Code.
+1. To stop debugging, press <kbd>Shift+F5</kbd> in Visual Studio Code.
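If you'd rather script these checks than use the browser, the following sketch does the same thing with the third-party `requests` package (`pip install requests`). It assumes the Functions host is running locally on the default port and that your orchestrator is named `HelloOrchestrator`.

```python
import time

import requests

# Start a new orchestration instance; the HTTP starter accepts GET or POST.
start = requests.get("http://localhost:7071/api/orchestrators/HelloOrchestrator")
status_url = start.json()["statusQueryGetUri"]

# Poll the status endpoint until the orchestration reaches a terminal state.
while True:
    status = requests.get(status_url).json()
    if status["runtimeStatus"] in ("Completed", "Failed", "Terminated"):
        break
    time.sleep(1)

print(status["runtimeStatus"], status.get("output"))
```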
After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
After you've verified that the function runs correctly on your local computer, i
## Test your function in Azure
-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
+1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `http://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator`
-2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app.
+1. Paste this new URL for the HTTP request in your browser's address bar. You should get the same status response that you saw when testing locally.
## Next steps
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Title: Connect Azure Functions to Azure Storage using Visual Studio Code
-description: Learn how to connect Azure Functions to a Azure Queue Storage by adding an output binding to your Visual Studio Code project.
Previously updated : 02/07/2020
+description: Learn how to connect Azure Functions to Azure Queue Storage by adding an output binding to your Visual Studio Code project.
Last updated : 06/15/2022 ms.devlang: csharp, java, javascript, powershell, python, typescript-+ zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to connect my function to Azure Storage so that I can easily write data to a storage queue.
zone_pivot_groups: programming-languages-set-functions
[!INCLUDE [functions-add-storage-binding-intro](../../includes/functions-add-storage-binding-intro.md)]
-This article shows you how to use Visual Studio Code to connect Azure Storage to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a message in an Azure Queue storage queue.
+In this article, you learn how to use Visual Studio Code to connect Azure Storage to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a message in an Azure Queue storage queue.
-Most bindings require a stored connection string that Functions uses to access the bound service. To make it easier, you use the Storage account that you created with your function app. The connection to this account is already stored in an app setting named `AzureWebJobsStorage`.
+Most bindings require a stored connection string that Functions uses to access the bound service. To make it easier, you use the storage account that you created with your function app. The connection to this account is already stored in an app setting named `AzureWebJobsStorage`.
## Configure your local environment
-Before you start this article, you must meet the following requirements:
+Before you begin, you must meet the following requirements:
* Install the [Azure Storage extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage).
-* Install [Azure Storage Explorer](https://storageexplorer.com/). Storage Explorer is a tool you'll use to examine queue messages generated by your output binding. Storage Explorer is supported on macOS, Windows, and Linux-based operating systems.
+* Install [Azure Storage Explorer](https://storageexplorer.com/). Storage Explorer is a tool that you'll use to examine queue messages generated by your output binding. Storage Explorer is supported on macOS, Windows, and Linux-based operating systems.
::: zone pivot="programming-language-csharp"+ * Install [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x). ::: zone-end
-* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-csharp.md).
+
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-csharp.md).
::: zone-end ::: zone pivot="programming-language-javascript"
-* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-node.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-node.md).
::: zone pivot="programming-language-java"
-* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-java.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-java.md).
::: zone pivot="programming-language-typescript"
-* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-typescript.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-typescript.md).
::: zone pivot="programming-language-python"
-* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-python.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-python.md).
::: zone pivot="programming-language-powershell"
-* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-powershell.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-powershell.md).
-This article assumes that you're already signed in to your Azure subscription from Visual Studio Code. You can sign in by running `Azure: Sign In` from the command palette.
+This article assumes that you're already signed in to your Azure subscription from Visual Studio Code. You can sign in by running `Azure: Sign In` from the command palette.
## Download the function app settings
-In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure along with the required Storage account. The connection string for this account is stored securely in app settings in Azure. In this article, you write messages to a Storage queue in the same account. To connect to your Storage account when running the function locally, you must download app settings to the local.settings.json file.
+In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure along with the required storage account. The connection string for this account is stored securely in the app settings in Azure. In this article, you write messages to a Storage queue in the same account. To connect to your storage account when running the function locally, you must download app settings to the *local.settings.json* file.
-1. Press the F1 key to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings....`.
+1. Press <kbd>F1</kbd> to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings....`.
-1. Choose the function app you created in the previous article. Select **Yes to all** to overwrite the existing local settings.
+1. Choose the function app you created in the previous article. Select **Yes to all** to overwrite the existing local settings.
> [!IMPORTANT]
- > Because it contains secrets, the local.settings.json file never gets published, and is excluded from source control.
+ > Because the *local.settings.json* file contains secrets, it never gets published, and is excluded from source control.
-1. Copy the value `AzureWebJobsStorage`, which is the key for the Storage account connection string value. You use this connection to verify that the output binding works as expected.
+1. Copy the value `AzureWebJobsStorage`, which is the key for the storage account connection string value. You use this connection to verify that the output binding works as expected.
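After the download, the storage connection lives in *local.settings.json* in a shape like the following sketch. The values here are placeholders: the worker runtime depends on your project language, and real files typically contain more settings.

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
  }
}
```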
## Register binding extensions
-Because you're using a Queue storage output binding, you must have the Storage bindings extension installed before you run the project.
+Because you're using a Queue storage output binding, you must have the Storage bindings extension installed before you run the project.
::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-python,programming-language-powershell,programming-language-java"
-Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages.
+Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages.
-Extension bundles usage is enabled in the host.json file at the root of the project, which appears as follows:
+Extension bundles usage is enabled in the *host.json* file at the root of the project, which appears as follows:
:::code language="json" source="~/functions-quickstart-java/functions-add-output-binding-storage-queue/host.json":::
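The referenced *host.json* file registers an extension bundle; the configuration has roughly the following shape (the version range shown is illustrative and may differ from the referenced file):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
```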
Now, you can add the storage output binding to your project.
## Add an output binding
-In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file. The way you define these attributes depends on the language of your function app.
+In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the *function.json* file. The way you define these attributes depends on the language of your function app.
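For the languages that define bindings in *function.json*, a queue output binding entry looks like the following sketch. The binding name `msg` and queue name `outqueue` are illustrative; `connection` points at the app setting that holds the storage connection string.

```json
{
  "type": "queue",
  "direction": "out",
  "name": "msg",
  "queueName": "outqueue",
  "connection": "AzureWebJobsStorage"
}
```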
::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-python,programming-language-powershell"
After the binding is defined, you can use the `name` of the binding to access it
[!INCLUDE [functions-run-function-test-local-vs-code-csharp](../../includes/functions-run-function-test-local-vs-code-csharp.md)] ::: zone-end + ## Run the function locally
-1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
+1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
+
+1. With the Core Tools running, go to the **Azure: Functions** area. Under **Functions**, expand **Local Project** > **Functions**. Right-click (Ctrl-click on Mac) the `HttpExample` function and select **Execute Function Now...**.
-1. With Core Tools running, go to the **Azure: Functions** area. Under **Functions**, expand **Local Project** > **Functions**. Right-click (Ctrl-click on Mac) the `HttpExample` function and choose **Execute Function Now...**.
+ :::image type="content" source="../../includes/media/functions-run-function-test-local-vs-code/execute-function-now.png" alt-text="Screenshot of executing function from Visual Studio Code.":::
- :::image type="content" source="../../includes/media/functions-run-function-test-local-vs-code/execute-function-now.png" alt-text="Execute function now from Visual Studio Code":::
+1. In **Enter request body**, you see the request message body value of `{ "name": "Azure" }`. Press <kbd>Enter</kbd> to send this request message to your function.
-1. In **Enter request body** you see the request message body value of `{ "name": "Azure" }`. Press Enter to send this request message to your function.
-
1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools.
-Because you are using the storage connection string, your function connects to the Azure storage account when running locally. A new queue named **outqueue** is created in your storage account by the Functions runtime when the output binding is first used. You'll use Storage Explorer to verify that the queue was created along with the new message.
+Because you're using the storage connection string, your function connects to the Azure storage account when running locally. A new queue named **outqueue** is created in your storage account by the Functions runtime when the output binding is first used. You'll use Storage Explorer to verify that the queue was created along with the new message.
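If you'd like to check the queue from a script instead of Storage Explorer, the following sketch uses the `azure-storage-queue` package (`pip install azure-storage-queue`); the connection string placeholder is the `AzureWebJobsStorage` value you copied earlier. Note that the Functions runtime Base64-encodes queue messages by default.

```python
import base64

from azure.storage.queue import QueueClient

# Paste the AzureWebJobsStorage value from local.settings.json here.
connection_string = "<your-storage-connection-string>"

queue = QueueClient.from_connection_string(connection_string, "outqueue")

# Peek without dequeuing; decode the Base64 payload written by the runtime.
for message in queue.peek_messages(max_messages=5):
    print(base64.b64decode(message.content).decode("utf-8"))
```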
::: zone-end
Skip this section if you have already installed Azure Storage Explorer and conne
1. Run the [Azure Storage Explorer](https://storageexplorer.com/) tool, select the connect icon on the left, and select **Add an account**.
- ![Add an Azure account to Microsoft Azure Storage Explorer](./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-add-account.png)
+ :::image type="content" source="./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-add-account.png" alt-text="Screenshot of how to add an Azure account to Microsoft Azure Storage Explorer.":::
-1. In the **Connect** dialog, choose **Add an Azure account**, choose your **Azure environment**, and select **Sign in...**.
+1. In the **Connect** dialog, choose **Add an Azure account**, choose your **Azure environment**, and then select **Sign in...**.
- ![Sign in to your Azure account](./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-connect-azure-account.png)
+ :::image type="content" source="./media/functions-add-output-binding-storage-queue-vs-code/storage-explorer-connect-azure-account.png" alt-text="Screenshot of the sign-in to your Azure account window.":::
After you successfully sign in to your account, you see all of the Azure subscriptions associated with your account. ### Examine the output queue
-1. In Visual Studio Code, press the F1 key to open the command palette, then search for and run the command `Azure Storage: Open in Storage Explorer` and choose your Storage account name. Your storage account opens in Azure Storage Explorer.
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette, then search for and run the command `Azure Storage: Open in Storage Explorer` and choose your storage account name. Your storage account opens in Azure Storage Explorer.
-1. Expand the **Queues** node, and then select the queue named **outqueue**.
+1. Expand the **Queues** node, and then select the queue named **outqueue**.
The queue contains the message that the queue output binding created when you ran the HTTP-triggered function. If you invoked the function with the default `name` value of *Azure*, the queue message is *Name passed to the function: Azure*.
- ![Queue message shown in Azure Storage Explorer](./media/functions-add-output-binding-storage-queue-vs-code/function-queue-storage-output-view-queue.png)
+ :::image type="content" source="./media/functions-add-output-binding-storage-queue-vs-code/function-queue-storage-output-view-queue.png" alt-text="Screenshot of the queue message shown in Azure Storage Explorer.":::
-1. Run the function again, send another request, and you'll see a new message appear in the queue.
+1. Run the function again, send another request, and you see a new message in the queue.
Now, it's time to republish the updated function app to Azure. ## Redeploy and verify the updated app
-1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select `Azure Functions: Deploy to function app...`.
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Deploy to function app...`.
1. Choose the function app that you created in the first article. Because you're redeploying your project to the same app, select **Deploy** to dismiss the warning about overwriting files.
-1. After deployment completes, you can again use the **Execute Function Now...** feature to trigger the function in Azure.
+1. After the deployment completes, you can again use the **Execute Function Now...** feature to trigger the function in Azure.
-1. Again [view the message in the storage queue](#examine-the-output-queue) to verify that the output binding again generates a new message in the queue.
+1. Again [view the message in the storage queue](#examine-the-output-queue) to verify that the output binding generates a new message in the queue.
## Clean up resources In Azure, *resources* refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
-You created resources to complete these quickstarts. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). If you don't need the resources anymore, here's how to delete them:
+You've created resources to complete these quickstarts. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). If you don't need the resources anymore, here's how to delete them:
[!INCLUDE [functions-cleanup-resources-vs-code-inner.md](../../includes/functions-cleanup-resources-vs-code-inner.md)]
You created resources to complete these quickstarts. You may be billed for these
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about developing Functions using Visual Studio Code:
-+ [Develop Azure Functions using Visual Studio Code](functions-develop-vs-code.md)
+* [Develop Azure Functions using Visual Studio Code](functions-develop-vs-code.md)
-+ [Azure Functions triggers and bindings](functions-triggers-bindings.md).
+* [Azure Functions triggers and bindings](functions-triggers-bindings.md).
::: zone pivot="programming-language-csharp"
-+ [Examples of complete Function projects in C#](/samples/browse/?products=azure-functions&languages=csharp).
+* [Examples of complete Function projects in C#](/samples/browse/?products=azure-functions&languages=csharp).
-+ [Azure Functions C# developer reference](functions-dotnet-class-library.md)
+* [Azure Functions C# developer reference](functions-dotnet-class-library.md)
::: zone pivot="programming-language-javascript"
-+ [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript).
+* [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript).
-+ [Azure Functions JavaScript developer guide](functions-reference-node.md)
+* [Azure Functions JavaScript developer guide](functions-reference-node.md)
::: zone-end ::: zone pivot="programming-language-java"
-+ [Examples of complete Function projects in Java](/samples/browse/?products=azure-functions&languages=java).
+* [Examples of complete Function projects in Java](/samples/browse/?products=azure-functions&languages=java).
-+ [Azure Functions Java developer guide](functions-reference-java.md)
+* [Azure Functions Java developer guide](functions-reference-java.md)
::: zone-end ::: zone pivot="programming-language-typescript"
-+ [Examples of complete Function projects in TypeScript](/samples/browse/?products=azure-functions&languages=typescript).
+* [Examples of complete Function projects in TypeScript](/samples/browse/?products=azure-functions&languages=typescript).
-+ [Azure Functions TypeScript developer guide](functions-reference-node.md#typescript)
+* [Azure Functions TypeScript developer guide](functions-reference-node.md#typescript)
::: zone-end ::: zone pivot="programming-language-python"
-+ [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python).
+* [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python).
-+ [Azure Functions Python developer guide](functions-reference-python.md)
+* [Azure Functions Python developer guide](functions-reference-python.md)
::: zone-end ::: zone pivot="programming-language-powershell"
-+ [Examples of complete Function projects in PowerShell](/samples/browse/?products=azure-functions&languages=azurepowershell).
+* [Examples of complete Function projects in PowerShell](/samples/browse/?products=azure-functions&languages=azurepowershell).
-+ [Azure Functions PowerShell developer guide](functions-reference-powershell.md)
+* [Azure Functions PowerShell developer guide](functions-reference-powershell.md)
::: zone-end
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
namespace CosmosDBSamplesV2
# [Isolated process](#tab/isolated-process)
-Example pending.
+This section contains examples that require version 3.x of the Azure Cosmos DB extension and version 5.x of the Azure Storage extension. If they aren't already present in your function app, add references to the following NuGet packages:
+
+ * [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/3.0.9)
+ * [Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues/5.0.0)
+
+* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-isolated)
+
+The examples refer to a simple `ToDoItem` type:
++
+<a id="queue-trigger-look-up-id-from-json-isolated"></a>
+
+### Queue trigger, look up ID from JSON
+
+The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection.
+ # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
There are two kinds of retries available for your functions: built-in retry beha
| Event Hubs | [Retry policies](#retry-policies) | Function-level | | Queue Storage | [Binding extension](functions-bindings-storage-queue-trigger.md#poison-messages) | [host.json](functions-bindings-storage-queue.md#host-json) | | RabbitMQ | [Binding extension](functions-bindings-rabbitmq-trigger.md#dead-letter-queues) | [Dead letter queue](https://www.rabbitmq.com/dlx.html) |
-| Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](/service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) |
+| Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) |
|Timer | [Retry policies](#retry-policies) | Function-level | ### Retry policies
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
import azure.functions as func
def main(mytimer: func.TimerRequest) -> None:
- utc_timestamp = datetime.datetime.utcnow().replace(
- tzinfo=datetime.timezone.utc).isoformat()
+ utc_timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
if mytimer.past_due: logging.info('The timer is past due!')
For information about what to do when the timer trigger doesn't work as expected
> [Go to a quickstart that uses a timer trigger](functions-create-scheduled-function.md) > [!div class="nextstepaction"]
-> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
+> [Learn more about Azure functions triggers and bindings](functions-triggers-bindings.md)
azure-functions Functions Create First Function Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-bicep.md
Title: Create your function app resources in Azure using Bicep
description: Create and deploy to Azure a simple HTTP triggered serverless function using Bicep. Previously updated : 05/12/2022 Last updated : 06/12/2022
# Quickstart: Create and deploy Azure Functions resources using Bicep
-In this article, you use Bicep to create a function that responds to HTTP requests.
+In this article, you use Azure Functions with Bicep to create a function app and related resources in Azure. The function app provides an execution context for your function code executions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
+After you create the function app, you can deploy Azure Functions project code to that app.
+ ## Prerequisites ### Azure account
Get-AzResource -ResourceGroupName exampleRG
-## Visit function app welcome page
-
-1. Use the output from the previous validation step to retrieve the unique name created for your function app.
-1. Open a browser and enter the following URL: **\<https://<appName.azurewebsites.net\>**. Make sure to replace **<\appName\>** with the unique name created for your function app.
-
-When you visit the URL, you should see a page like this:
- ## Clean up resources
Remove-AzResourceGroup -Name exampleRG
## Next steps
-Now that you've publish your first function, learn more by adding an output binding to your function.
-
-# [Visual Studio Code](#tab/visual-studio-code)
+Now that you've created your function app resources in Azure, you can deploy your code to the existing app by using one of the following tools:
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md)
+* [Visual Studio Code](functions-develop-vs-code.md#republish-project-files)
+* [Visual Studio](functions-develop-vs.md#publish-to-azure)
+* [Azure Functions Core Tools](functions-run-local.md#publish)
-# [Visual Studio](#tab/visual-studio)
-
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs.md)
-
-# [Command line](#tab/command-line)
-
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md)
--
azure-functions Functions Create First Function Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-resource-manager.md
Title: Create your first function using Azure Resource Manager templates description: Create and deploy to Azure a simple HTTP triggered serverless function by using an Azure Resource Manager template (ARM template). Previously updated : 3/5/2020 Last updated : 06/22/2022
# Quickstart: Create and deploy Azure Functions resources from an ARM template
-In this article, you use an Azure Resource Manager template (ARM template) to create a function that responds to HTTP requests.
+In this article, you use Azure Functions with an Azure Resource Manager template (ARM template) to create a function app and related resources in Azure. The function app provides an execution context for your function code executions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
If your environment meets the prerequisites and you're familiar with using ARM t
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Ffunction-app-create-dynamic%2Fazuredeploy.json)
+After you create the function app, you can deploy Azure Functions project code to that app.
+ ## Prerequisites ### Azure account Before you begin, you must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-### Create a local functions project
-
-This article requires a local functions code project to run on the Azure resources that you create. If you don't first create a project to publish, you won't be able to complete the deployment section of this article.
-
-Choose one of the following tabs, follow the link, and complete the section to create a function app in the language of your choice:
-
-# [Visual Studio Code](#tab/visual-studio-code)
-
-Create your local functions project in your chosen language in Visual Studio Code:
-
-+ [C#](create-first-function-vs-code-csharp.md)
-+ [Java](create-first-function-vs-code-java.md)
-+ [JavaScript](create-first-function-vs-code-node.md)
-+ [PowerShell](create-first-function-vs-code-powershell.md)
-+ [Python](create-first-function-vs-code-python.md)
-+ [TypeScript](create-first-function-vs-code-typescript.md)
-
-# [Visual Studio](#tab/visual-studio)
-
-[Create your local functions project in Visual Studio](functions-create-your-first-function-visual-studio.md#create-a-function-app-project)
-
-# [Command line](#tab/command-line)
-
-Create your local functions project in your chosen language from the command line:
-
-+ [C#](create-first-function-cli-csharp.md)
-+ [Java](create-first-function-cli-java.md)
-+ [JavaScript](create-first-function-cli-node.md)
-+ [PowerShell](create-first-function-cli-powershell.md)
-+ [Python](create-first-function-cli-python.md)
-+ [TypeScript](create-first-function-cli-typescript.md)
---
-After you've created your project locally, you create the resources required to run your new function in Azure.
- ## Review the template The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/function-app-create-dynamic/).
Read-Host -Prompt "Press [ENTER] to continue ..."
```
-## Validate the deployment
-
-Next you validate the function app hosting resources you created by publishing your project to Azure and calling the HTTP endpoint of the function.
-
-### Publish the function project to Azure
-
-Use the following steps to publish your project to the new Azure resources:
-
-# [Visual Studio Code](#tab/visual-studio-code)
--
-In the output, copy the URL of the HTTP trigger. You use this to test your function running in Azure.
-
-# [Visual Studio](#tab/visual-studio)
-
-1. In **Solution Explorer**, right-click the project and select **Publish**.
-
-1. In **Pick a publish target**, choose **Azure Functions Consumption plan** with **Select existing** and select **Create profile**.
-
- :::image type="content" source="media/functions-create-first-function-arm/choose-publish-target-visual-studio.png" alt-text="Choose an existing publish target":::
-
-1. Choose your **Subscription**, expand the resource group, select your function app, and select **OK**.
-
-1. After the publish completes, copy the **Site URL**.
-
- :::image type="content" source="media/functions-create-first-function-arm/publish-summary-site-url.png" alt-text="Copy the site URL from the publish summary":::
-
-1. Append the path `/api/<FUNCTION_NAME>?name=Functions`, where `<FUNCTION_NAME>` is the name of your function. The URL that calls your HTTP trigger function is in the following format:
-
- `http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?name=Functions`
-
-You use this URL to test your HTTP trigger function running in Azure.
-
-# [Command line](#tab/command-line)
-
-To publish your local code to a function app in Azure, use the `publish` command:
-
-```cmd
-func azure functionapp publish <FUNCTION_APP_NAME>
-```
-
-In this example, replace `<FUNCTION_APP_NAME>` with the name of your function app. You may need to sign in again by using `az login`.
-
-In the output, copy the URL of the HTTP trigger. You use this to test your function running in Azure.
---
-### Invoke the function on Azure
-
-Paste the URL you copied for the HTTP request into your browser's address bar, make sure the query string `?name=Functions` is appended to the end of the URL, and then execute the request.
-
-You should see a response like:
-
-<pre>Hello Functions!</pre>
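If you prefer testing from a terminal instead of the browser, the same request can be made with curl. This is a minimal sketch; substitute your own app and function names, following the URL format shown above:

```cmd
curl "https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?name=Functions"
```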
## Clean up resources
If you continue to the next step and add an Azure Storage queue output binding,
Otherwise, use the following command to delete the resource group and all its contained resources to avoid incurring further costs.
-```azurecli
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
az group delete --name <RESOURCE_GROUP_NAME>
```
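For example, assuming a resource group named `myResourceGroup` (a placeholder), you can skip the confirmation prompt and return without waiting for the operation to finish:

```azurecli-interactive
az group delete --name myResourceGroup --yes --no-wait
```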
-Replace `<RESOURCE_GROUP_NAME>` with the name of your resource group.
-
-## Next steps
-
-Now that you've published your first function, learn more by adding an output binding to your function.
+# [PowerShell](#tab/PowerShell)
-# [Visual Studio Code](#tab/visual-studio-code)
-
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md)
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name <RESOURCE_GROUP_NAME>
+```
-# [Visual Studio](#tab/visual-studio)
+
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs.md)
+Replace `<RESOURCE_GROUP_NAME>` with the name of your resource group.
-# [Command line](#tab/command-line)
+## Next steps
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md)
+Now that you've created your function app resources in Azure, you can deploy your code to the existing app by using one of the following tools:
-
+* [Visual Studio Code](functions-develop-vs-code.md#republish-project-files)
+* [Visual Studio](functions-develop-vs.md#publish-to-azure)
+* [Azure Functions Core Tools](functions-run-local.md#publish)
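As a rough sketch of the Core Tools option, the following command deploys the local project to an existing function app; `<FUNCTION_APP_NAME>` is a placeholder for the app you created above:

```cmd
func azure functionapp publish <FUNCTION_APP_NAME>
```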
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
description: Learn how to develop and test Azure Functions by using the Azure Fu
ms.devlang: csharp, java, javascript, powershell, python Previously updated : 05/19/2022 Last updated : 06/19/2022 #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
The extension can be used with the following languages, which are supported by t
* [Java](functions-reference-java.md) * [PowerShell](functions-reference-powershell.md) * [Python](functions-reference-python.md)
+* [TypeScript](functions-reference-node.md#typescript)
<sup>*</sup>Requires that you [set C# script as your default project language](#c-script-projects).
This article provides details about how to use the Azure Functions extension to
## Prerequisites
-Before you install and run the [Azure Functions extension][Azure Functions extension for Visual Studio Code], you must meet these requirements:
- * [Visual Studio Code](https://code.visualstudio.com/) installed on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-* An active Azure subscription.
-
+* [Azure Functions extension][Azure Functions extension for Visual Studio Code]. You can also install the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), which is recommended for working with Azure resources.
-Other resources that you need, like an Azure storage account, are created in your subscription when you [publish by using Visual Studio Code](#publish-to-azure).
+* An active [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't yet have an account, you can create one from the extension in Visual Studio Code.
### Run local requirements
These prerequisites are only required to [run and debug your functions locally](
* [Debugger for Java extension](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-debug).
-* [Java 8](/azure/developer/jav#java-versions).
+* [Java](/azure/developer/jav#java-versions).
* [Maven 3 or later](https://maven.apache.org/).
These prerequisites are only required to [run and debug your functions locally](
* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
-* [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+* [Node.js](https://nodejs.org/), one of the [supported versions](functions-reference-node.md#node-version). Use the `node --version` command to check your version.
# [PowerShell](#tab/powershell)
These prerequisites are only required to [run and debug your functions locally](
# [Python](#tab/python)
-* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime, so download and installation might take some time.
+* The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools include the entire Azure Functions runtime, so download and installation might take some time.
-* [Python 3.x](https://www.python.org/downloads/). For version information, see [Python versions](functions-reference-python.md#python-version) by the Azure Functions runtime.
+* [Python](https://www.python.org/downloads/), one of the [supported versions](functions-reference-python.md#python-version).
* [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code. - ## Create an Azure Functions project The Functions extension lets you create a function app project, along with your first function. The following steps show how to create an HTTP-triggered function in a new Functions project. [HTTP trigger](functions-bindings-http-webhook.md) is the simplest function trigger template to demonstrate.
-1. From **Azure: Functions**, select the **Create Function** icon:
+1. Choose the Azure icon in the Activity bar. Then, in the **Workspace (local)** area, select the **+** button and choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
+
+ :::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
- :::image type="content" source="./media/functions-develop-vs-code/create-function.png" alt-text=" Screenshot for Create Function.":::
+1. Choose the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
-1. Select the folder for your function app project, and then **Select a language for your function project**.
+1. When prompted, **Select a language** for your project and, if necessary, choose a specific language version.
1. Select the **HTTP trigger** function template, or you can select **Skip for now** to create a project without a function. You can always [add a function to your project](#add-a-function-to-your-project) later.
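If you prefer the command line, Azure Functions Core Tools offers a roughly equivalent flow. This is a sketch, not part of the extension-based steps above; the project and function names are placeholders:

```cmd
func init MyFunctionProject --worker-runtime node
cd MyFunctionProject
func new --template "HTTP trigger" --name HttpExample
```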
To learn more, see the [Queue storage output binding reference article](function
[!INCLUDE [functions-sign-in-vs-code](../../includes/functions-sign-in-vs-code.md)]
-## Publish to Azure
+## <a name="publish-to-azure"></a>Create Azure resources
-Visual Studio Code lets you publish your Functions project directly to Azure. In the process, you create a function app and related resources in your Azure subscription. The function app provides an execution context for your functions. The project is packaged and deployed to the new function app in your Azure subscription.
+Before you can publish your Functions project to Azure, you must have a function app and related resources in your Azure subscription to run your code. The function app provides an execution context for your functions. When you publish to a function app in Azure from Visual Studio Code, the project is packaged and deployed to the selected function app in your Azure subscription.
-When you publish from Visual Studio Code to a new function app in Azure, you can choose either a quick function app create path using defaults or an advanced path. This way you'll have more control over the remote resources created.
-
-When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
+When you create a function app in Azure, you can choose either a quick create path that uses defaults or an advanced path, which gives you more control over the remote resources created.
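For reference, a CLI sketch of the quick create path might look like the following; every name and region here is a placeholder, and the runtime must match your project:

```azurecli
az functionapp create --resource-group myResourceGroup --consumption-plan-location westus2 --runtime node --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME>
```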
### Quick function app create
-When you choose **+ Create new function app in Azure...**, the extension automatically generates values for the Azure resources needed by your function app. These values are based on the function app name that you choose. For an example of using defaults to publish your project to a new function app in Azure, see the [Visual Studio Code quickstart article](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure).
-
-If you want to provide explicit names for the created resources, you must choose the advanced create path.
### <a name="enable-publishing-with-advanced-create-options"></a>Publish a project to a new function app in Azure by using advanced options The following steps publish your project to a new function app created with advanced create options:
-1. In the command pallet, enter **Azure Functions: Deploy to function app**.
+1. In the command palette, enter **Azure Functions: Create function app in Azure...(Advanced)**.
1. If you're not signed in, you're prompted to **Sign in to Azure**. You can also **Create a free Azure account**. After signing in from the browser, go back to Visual Studio Code.
-1. If you have multiple subscriptions, **Select a subscription** for the function app, and then select **+ Create New Function App in Azure... _Advanced_**. This _Advanced_ option gives you more control over the resources you create in Azure.
- 1. Following the prompts, provide this information:
- | Prompt | Value | Description |
- | | -- | -- |
- | Select function app in Azure | Create New Function App in Azure | At the next prompt, type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. |
- | Select an OS | Windows | The function app runs on Windows. |
- | Select a hosting plan | Consumption plan | A serverless [Consumption plan hosting](consumption-plan.md) is used. |
- | Select a runtime for your new app | Your project language | The runtime must match the project that you're publishing. |
- | Select a resource group for new resources | Create New Resource Group | At the next prompt, type a resource group name, like `myResourceGroup`, and then select enter. You can also select an existing resource group. |
- | Select a storage account | Create new storage account | At the next prompt, type a globally unique name for the new storage account used by your function app and then select Enter. Storage account names must be between 3 and 24 characters long and can contain only numbers and lowercase letters. You can also select an existing account. |
- | Select a location for new resources | region | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
+ | Prompt | Selection |
+ | | -- |
+ | Enter a globally unique name for the new function app. | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are `a-z`, `0-9`, and `-`. |
+ | Select a runtime stack. | Choose the language version on which you've been running locally. |
+ | Select an OS. | Choose either Linux or Windows. Python apps must run on Linux. |
+ | Select a resource group for new resources. | Choose **Create new resource group** and type a resource group name, like `myResourceGroup`, and then select enter. You can also select an existing resource group. |
+ | Select a location for new resources. | Select a location in a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
+ | Select a hosting plan. | Choose **Consumption** for serverless [Consumption plan hosting](consumption-plan.md), where you're only charged when your functions run. |
+ | Select a storage account. | Choose **Create new storage account** and at the prompt, type a globally unique name for the new storage account used by your function app and then select Enter. Storage account names must be between 3 and 24 characters long and can contain only numbers and lowercase letters. You can also select an existing account. |
+ | Select an Application Insights resource for your app. | Choose **Create new Application Insights resource** and at the prompt, type a name for the instance used to store runtime data from your functions.|
A notification appears after your function app is created and the deployment package is applied. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created.
The function URL is copied to the clipboard, along with any required keys passed
When the extension gets the URL of functions in Azure, it uses your Azure account to automatically retrieve the keys it needs to start the function. [Learn more about function access keys](security-concepts.md#function-access-keys). Starting non-HTTP triggered functions requires using the admin key.
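Behind the scenes, starting a non-HTTP triggered function calls the admin API of the function app. A hedged sketch of that request with curl, where the app name, function name, and master key are all placeholders:

```cmd
curl -X POST "https://<APP_NAME>.azurewebsites.net/admin/functions/<FUNCTION_NAME>" -H "x-functions-key: <MASTER_KEY>" -H "Content-Type: application/json" -d "{\"input\":\"\"}"
```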
-## Republish project files
+## <a name="republish-project-files"></a>Deploy project files
-When you set up [continuous deployment](functions-continuous-deployment.md), your function app in Azure is updated when you update source files in the connected source location. We recommend continuous deployment, but you can also republish your project file updates from Visual Studio Code.
+We recommend setting up [continuous deployment](functions-continuous-deployment.md) so that your function app in Azure is updated when you update source files in the connected source location. You can also deploy your project files from Visual Studio Code.
-> [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
+When you publish from Visual Studio Code, you take advantage of the [Zip deploy](functions-deployment-technologies.md#zip-deploy) technology.
## Run functions
As with uploading, if the local file is encrypted, it's decrypted, updated, and
## Monitoring functions
-When you [run functions locally](#run-functions-locally), log data is streamed to the Terminal console. You can also get log data when your Functions project is running in a function app in Azure. You can either connect to streaming logs in Azure to see near-real-time log data, or you can enable Application Insights for a more complete understanding of how your function app is behaving.
+When you [run functions locally](#run-functions-locally), log data is streamed to the Terminal console. You can also get log data when your Functions project is running in a function app in Azure. You can connect to streaming logs in Azure to see near-real-time log data. You should enable Application Insights for a more complete understanding of how your function app is behaving.
### Streaming logs
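Outside of Visual Studio Code, one way to tail the same streaming logs is the Azure CLI; function apps accept this App Service command (names are placeholders):

```azurecli
az webapp log tail --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP>
```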
The Azure Functions extension provides a useful graphical interface in the area
| **Download Remote Settings** | Downloads settings from the chosen function app in Azure into your local.settings.json file. If the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings that have conflicting values in the two locations, you're prompted to choose how to proceed. Be sure to save changes to your local.settings.json file before you run this command. | | **Edit settings** | Changes the value of an existing function app setting in Azure. This command doesn't affect settings in your local.settings.json file. | | **Encrypt settings** | Encrypts individual items in the `Values` array in the [local settings](#local-settings). In this file, `IsEncrypted` is also set to `true`, which specifies that the local runtime will decrypt settings before using them. Encrypt local settings to reduce the risk of leaking valuable information. In Azure, application settings are always stored encrypted. |
-| **Execute Function Now** | Manually starts a function using admin APIs. This command is used for testing, both locally during debugging and against functions running in Azure. When triggering a function in Azure, the extension first automatically obtains an admin key, which it uses to call the remote admin APIs that start functions in Azure. The body of the message sent to the API depends on the type of trigger. Timer triggers don't require you to pass any data. |
+| **Execute Function Now** | Manually starts a function using admin APIs. This command is used for testing, both locally during debugging and against functions running in Azure. When a function in Azure starts, the extension first automatically obtains an admin key, which it uses to call the remote admin APIs that start functions in Azure. The body of the message sent to the API depends on the type of trigger. Timer triggers don't require you to pass any data. |
| **Initialize Project for Use with VS Code** | Adds the required Visual Studio Code project files to an existing Functions project. Use this command to work with a project that you created by using Core Tools. | | **Install or Update Azure Functions Core Tools** | Installs or updates [Azure Functions Core Tools], which is used to run functions locally. | | **Redeploy** | Lets you redeploy project files from a connected Git repository to a specific deployment in Azure. To republish local updates from Visual Studio Code, [republish your project](#republish-project-files). |
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
In the preceding step, if you can't find a storage account connection string, it
### Required application settings * Required:
- * [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)
+ * [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)
* Required for Premium plan functions:
- * [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md)
- * [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md)
+ * [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md)
+ * [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md)
For more information, see [App settings reference for Azure Functions](./functions-app-settings.md).
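If one of the required settings is missing, the following is a sketch of restoring it from the command line (placeholder names; the connection string comes from your storage account):

```azurecli
az functionapp config appsettings set --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> --settings "AzureWebJobsStorage=<CONNECTION_STRING>"
```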
Your function app must be able to access the storage account. Common issues that
* The function app is deployed to your App Service Environment (ASE) without the correct network rules to allow traffic to and from the storage account. * The storage account firewall is enabled and not configured to allow traffic to and from functions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).+ * Verify that the `allowSharedKeyAccess` setting is set to `true` which is its default value. For more information, see [Prevent Shared Key authorization for an Azure Storage account](../storage/common/shared-key-authorization-prevent.md?tabs=portal#verify-that-shared-key-access-is-not-allowed). ## Daily execution quota is full
If you have a daily execution quota configured, your function app is temporarily
To verify the quota in the [Azure portal](https://portal.azure.com), select **Platform Features** > **Function App Settings** in your function app. If you're over the **Daily Usage Quota** you've set, the following message is displayed:
- > "The Function App has reached daily usage quota and has been stopped until the next 24 hours time frame."
+> "The Function App has reached daily usage quota and has been stopped until the next 24 hours time frame."
To resolve this issue, remove or increase the daily quota, and then restart your app. Otherwise, the execution of your app is blocked until the next day.
Your function app might be unreachable for either of the following reasons:
The Azure portal makes calls directly to the running app to fetch the list of functions, and it makes HTTP calls to the Kudu endpoint. Platform-level settings under the **Platform Features** tab are still available. To verify your ASE configuration:+ 1. Go to the network security group (NSG) of the subnet where the ASE resides. 1. Validate the inbound rules to allow traffic that's coming from the public IP of the computer where you're accessing the application.
-
+ You can also use the portal from a computer that's connected to the virtual network that's running your app or to a virtual machine that's running in your virtual network. For more information about inbound rule configuration, see the "Network Security Groups" section of [Networking considerations for an App Service Environment](../app-service/environment/network-info.md#network-security-groups).
+## Container image unavailable (Linux)
+
+For Linux function apps that run from a container, the "Azure Functions runtime is unreachable" error can occur when the container image being referenced is unavailable or fails to start correctly.
+
+To confirm that this is the cause of the error:
+
+1. Navigate to the Kudu endpoint for the function app, which is located at `https://scm.<FUNCTION_APP>.azurewebsites.net`, where `<FUNCTION_APP>` is the name of your app.
+
+1. Download the Docker logs ZIP file and review them locally, or review the Docker logs from within Kudu.
+
+1. Check for any errors in the logs that would indicate that the container is unable to start successfully.
+
+Any such error would need to be remedied for the function to work correctly.
+
+When the container image can't be found, you should see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli) to change the container image being referenced. If you've deployed a custom container image, you need to fix the image and redeploy the updated version to the referenced registry.
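As a quick check from the command line, you can inspect the image reference that the app is currently configured with; this is a sketch using the Azure CLI, with placeholder names:

```azurecli
az functionapp config container show --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP>
```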
+ ## Next steps Learn about monitoring your function apps:- > [!div class="nextstepaction"]
-> [Monitor Azure Functions](functions-monitoring.md)
+> [Monitor Azure Functions](functions-monitoring.md)
+
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 03/21/2022 Last updated : 06/21/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; | | [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; |
-| [Content Delivery Network](../../cdn/index.yml) | &#x2705; | &#x2705; |
+| [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | | [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; | | [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: March 2022*
+*Last updated: June 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | | | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | | | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack)| &#x2705; | &#x2705; | | | | | [Azure Database for MariaDB](../../mariadb/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Azure Monitor [Application Insights](../../azure-monitor/app/app-insights-overview.md) | | | | | &#x2705; | | Azure Monitor [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | | | | | [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Batch](../../batch/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Cognitive
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | | | | [Cognitive | [Cognitive
-| [Cognitive
+| [Cognitive
| [Cognitive | [Cognitive | [Cognitive | [Cognitive | [Cognitive
-| [Container Instances](../../container-instances/index.yml)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Container Instances](../../container-instances/index.yml)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Content Delivery Network](../../cdn/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Content Delivery Network (CDN)](../../cdn/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Data Box](../../databox/index.yml) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Data Explorer](/azure/data-explorer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Data Factory](../../data-factory/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Dynamics 365 Finance](/dynamics365/finance/) | &#x2705; | &#x2705; | | | | | [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Sales](/dynamics365/sales/help-hub) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Supply Chain Management](/dynamics365/supply-chain/) | &#x2705; | &#x2705; | | | |
-| [Event Grid](../../event-grid/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Event Grid](../../event-grid/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Event Hubs](../../event-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | | | | [HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Import/Export](../../import-export/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Media Services](/azure/media-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Azure portal](../../azure-portal/index.yml) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | | | [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) (formerly Azure Security Center) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/) (formerly Microsoft Cloud App Security) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Defender for Identity](/defender-for-identity/) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Sentinel](../../sentinel/index.yml) (formerly Azure Sentinel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Stream](/stream/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Network Watcher](../../network-watcher/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Network Watcher](../../network-watcher/index.yml) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Apps](/powerapps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Power Query Online](/power-query/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Query Online](/power-query/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Route Server](../../route-server/index.yml) | &#x2705; | &#x2705; | | | | | [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Service Bus](../../service-bus-messaging/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [SQL Database](/azure/azure-sql/database/sql-database-paas-overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Storage: Tables](../../storage/tables/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [StorSimple](../../storsimple/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
+
+ Title: Understanding Microsoft Azure Maps Transactions
+
+description: Learn about Microsoft Azure Maps Transactions
++ Last updated : 06/23/2022+++++
+# Understanding Azure Maps Transactions
+
+When you use [Azure Maps Services](index.yml), the API requests you make generate transactions. Your transaction usage is available for review in your [Azure portal](https://portal.azure.com) Metrics report. For additional information, see [View Azure Maps API usage metrics](how-to-view-api-usage.md). These transactions can be either billable or non-billable usage, depending on the service and the feature. It's important to understand which usage generates a billable transaction and how it's calculated so you can plan and budget for the costs associated with using Azure Maps. Billable transactions will show up in your Cost Analysis report within the Azure portal.
+
+The following summary shows which Azure Maps services generate transactions, billable and non-billable, along with notable aspects of how the number of transactions is calculated.
+
+## Azure Maps Transaction information by service
+
+| Azure Maps Service | Billable | Transaction Calculation | Meter |
+|--|-|-|-|
+| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
+| [Elevation (DEM)](/rest/api/maps/elevation)| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point, then one request = 1 transaction</li></ul>| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
+| [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
+| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra), which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator-related usage, see the Creator table below. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
+| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
+| [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
+| [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox`, and `Spatial.PostPointInPolygonBatch`, which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
+| [Timezone](/rest/api/maps/timezone) | Yes | One request = 1 transaction | <ul><li>Location Insights Timezone (Gen2 pricing)</li><li>Standard S1 Time Zones Transactions (Gen1 S1 pricing)</li><li>Standard Time Zones Transactions (Gen1 S0 pricing)</li></ul> |
+| [Traffic](/rest/api/maps/traffic) | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> |
+| [Weather](/rest/api/maps/weather) | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> |
+
+<!-- In Bing Maps, any time a synchronous Truck Routing request is made, three transactions are counted. Does this apply also to Azure Maps?-->
+
+## Azure Maps Creator
+
+| Azure Maps Creator | Billable | Transaction Calculation | Meter |
+|-|-||-|
+| [Alias](/rest/api/maps/v2/alias) | No | One request = 1 transaction | Not applicable |
+| [Conversion](/rest/api/maps/v2/conversion) | Part of a provisioned Creator resource and not transaction-based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
+| [Dataset](/rest/api/maps/v2/dataset) | Part of a provisioned Creator resource and not transaction-based.| Not transaction-based | Map Provisioning (Gen2 pricing)|
+| [Feature State](/rest/api/maps/v2/feature-state) | Yes, except for `FeatureState.CreateStateset`, `FeatureState.DeleteStateset`, `FeatureState.GetStateset`, `FeatureState.ListStatesets`, `FeatureState.UpdateStatesets` | One request = 1 transaction | Azure Maps Creator Feature State (Gen2 pricing) |
+| [Render v2](/rest/api/maps/render-v2) | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For all other Render v2 usage, see the Render v2 row in the table above.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) |
+| [Tileset](/rest/api/maps/v2/tileset) | Part of a provisioned Creator resource and not transaction-based.| Not transaction-based | Map Provisioning (Gen2 pricing) |
+| [WFS](/rest/api/maps/v2/wfs) | Yes| One request = 1 transaction | Azure Maps Creator Web Feature (WFS) (Gen2 pricing) |
+
+<!--
+| Service | Unit of measure | Price |
+||-|--|
+| Map provisioning | 1 storage unit per hour | $0.42 |
+| Map render | 1k transactions | $0.20 |
+| Feature state | 1k transactions | $0.03 |
+| Web feature | 1k transactions | $21 |
+-->
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Maps pricing](https://azure.microsoft.com/pricing/details/azure-maps/)
+
+> [!div class="nextstepaction"]
+> [Pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+
+> [!div class="nextstepaction"]
+> [Manage the pricing tier of your Azure Maps account](how-to-manage-pricing-tier.md)
+
+> [!div class="nextstepaction"]
+> [View Azure Maps API usage metrics](how-to-view-api-usage.md)
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing the Azure Monitor agent (AMA) on Azure virtual
Previously updated : 05/10/2022 Last updated : 06/21/2022
The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-
## Prerequisites The following prerequisites must be met prior to installing the Azure Monitor agent. -- For methods other than Azure portal, you must have the following role assignments to install the agent: -
-| Built-in Role | Scope(s) | Reason |
-|:|:|:|
-| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
-| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |
-- For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises), you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost)-- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).-- The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine.-- The virtual machine must have access to the following HTTPS endpoints:
+- **Permissions**: For methods other than Azure portal, you must have the following role assignments to install the agent:
+
+ | Built-in Role | Scope(s) | Reason |
+ |:|:|:|
+ | <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
+ | Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |
+- **Non-Azure**: For installing the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost).
+- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both system-assigned and user-assigned managed identities are supported.
+ - **User-assigned**: This is recommended for large-scale deployments, configurable via [built-in Azure policies](#using-azure-policy). It can be created once and shared across multiple VMs, and is thus more scalable than system-assigned (see the CLI sketch after this list).
+ - **System-assigned**: This is suited for initial testing or small deployments. When used at scale (for example, for all VMs in a subscription), it results in a substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities instead. **For Arc-enabled servers, system-assigned managed identity is enabled automatically** (as soon as you install the Arc agent) as it's the only supported type for Arc-enabled servers.
+ - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
+- **Networking**: The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints:
- global.handler.control.monitor.azure.com - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com) - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opsinsights.azure.com)
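As a sketch of the authentication prerequisite above, you can create a user-assigned identity and attach it to a VM with the Azure CLI; all names here are placeholders:

```azurecli
az identity create --resource-group myResourceGroup --name myUserAssignedIdentity
az vm identity assign --resource-group myResourceGroup --name myVM --identities myUserAssignedIdentity
```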
az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-nam
## Using Azure Policy
-Use the following policies and policy initiatives to automatically install the agent and associate it with a data collection rule, every time you create a virtual machine.
+Use the following policies and policy initiatives to **automatically install the agent and associate it with a data collection rule**, every time you create a virtual machine, scale set, or Arc-enabled server.
+
+> [!NOTE]
+> As per Microsoft Identity best practices, policies for installing Azure Monitor agent on **virtual machines and scale-sets** rely on **user-assigned managed identity**. This is the more scalable and resilient managed identity option for these resources.
+> For **Arc-enabled servers**, policies rely on **system-assigned managed identity**, which is the only supported option today.
### Built-in policy initiatives
-[View prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
+Before proceeding, review [prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
-Policy initiatives for Windows and Linux virtual machines consist of individual policies that:
+Policy initiatives for Windows and Linux **virtual machines and scale sets** consist of individual policies that:
-- Install the Azure Monitor agent extension on the virtual machine.-- Create and deploy the association to link the virtual machine to a data collection rule.
+- (Optional) Create and assign a built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details).
+ - `Bring Your Own User-Assigned Identity`: If set to `true`, it creates the built-in user-assigned managed identity in the predefined resource group, and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use an existing user-assigned identity that **you must assign** to the machines beforehand.
+- Install the Azure Monitor agent extension on the machine, and configure it to use a user-assigned identity, as specified by the following parameters:
+ - `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the policy above. If set to `true`, it configures the agent to use an existing user-assigned identity that **you must assign** to the machine(s) in scope beforehand.
+ - `User-Assigned Managed Identity Name`: If using your own identity (selected `true`), specify the name of the identity that's assigned to the machine(s).
+ - `User-Assigned Managed Identity Resource Group`: If using your own identity (selected `true`), specify the resource group where the identity exists.
+ - `Additional Virtual Machine Images`: Pass additional VM image names that you want to apply the policy to, if not already included.
+- Create and deploy the association to link the machine to the specified data collection rule (see the CLI sketch below).
+ - `Data Collection Rule Resource Id`: The ARM resource ID of the rule that you want to associate, via this policy, with all machines the policy is applied to.
![Partial screenshot from the Azure Policy Definitions page showing two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
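Outside of policy, the same association can be created directly; a hedged CLI sketch, assuming the `monitor-control-service` Azure CLI extension and placeholder resource IDs:

```azurecli
az monitor data-collection rule association create --name "myAssociation" --rule-id "/subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.Insights/dataCollectionRules/<DCR_NAME>" --resource "/subscriptions/<SUB_ID>/resourceGroups/<RG>/providers/Microsoft.Compute/virtualMachines/<VM_NAME>"
```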
+#### Known issues
+- Managed Identity default behavior: [Learn more](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
+- Possible race condition with using built-in user-assigned identity creation policy above. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#known-issues)
+- Assigning policy to resource groups: If the assignment scope of the policy is a resource group and not a subscription, the identity used by policy assignment (different from the user-assigned identity used by agent) must be manually granted [these roles](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#required-authorization) prior to assignment/remediation. Failing to do this will result in **deployment failures**.
+- Other [Managed Identity limitations](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations)
+ ### Built-in policies
-You can choose to use the individual policies from their respective policy initiatives, based on your needs. For example, if you only want to automatically install the agent, use the first policy from the initiative as shown in the following example.
+You can choose to use the individual policies from the policy initiative above to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative as shown below.
![Partial screenshot from the Azure Policy Definitions page showing policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Title: Collect text and IIS logs with Azure Monitor agent (preview) description: Configure collection of file-based text logs using a data collection rule on virtual machines with the Azure Monitor agent. Previously updated : 06/06/2022 Last updated : 06/22/2022
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
:::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template."::: **Data collection rule for text log**
+
+ See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR.
```json {
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
ServiceNow admins must generate a client ID and client secret for their ServiceN
- [Set up OAuth for Rome](https://docs.servicenow.com/bundle/rome-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Quebec](https://docs.servicenow.com/bundle/quebec-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Paris](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Orlando](https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for London](https://docs.servicenow.com/bundle/london-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) As a part of setting up OAuth, we recommend:
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
# Filter and preprocess telemetry in the Application Insights SDK
-You can write and configure plug-ins for the Application Insights SDK to customize how telemetry can be enriched and processed before it's sent to the Application Insights service.
+Plug-ins for the Application Insights SDK can customize how telemetry is enriched and processed before it's sent to the Application Insights service.
* [Sampling](sampling.md) reduces the volume of telemetry without affecting your statistics. It keeps together related data points so that you can navigate between them when you diagnose a problem. In the portal, the total counts are multiplied to compensate for the sampling. * Filtering with telemetry processors lets you filter out telemetry in the SDK before it's sent to the server. For example, you could reduce the volume of telemetry by excluding requests from robots. Filtering is a more basic approach to reducing traffic than sampling. It allows you more control over what's transmitted, but it affects your statistics. For example, you might filter out all successful requests.
Before you start:
## Filtering
-This technique gives you direct control over what's included or excluded from the telemetry stream. Filtering can be used to drop telemetry items from being sent to Application Insights. You can use filtering in conjunction with sampling, or separately.
+This technique gives you direct control over what's included or excluded from the telemetry stream. Filtering can be used to drop telemetry items from being sent to Application Insights. You can use filtering with sampling, or separately.
To filter telemetry, you write a telemetry processor and register it with `TelemetryConfiguration`. All telemetry goes through your processor. You can choose to drop it from the stream or give it to the next processor in the chain. Telemetry from the standard modules, such as the HTTP request collector and the dependency collector, and telemetry you tracked yourself is included. For example, you can filter out telemetry about requests from robots or successful dependency calls.
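As a minimal sketch of this pattern (the class name and filter condition are illustrative; adapt them to your own criteria), a processor that drops successful dependency calls might look like this:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Illustrative processor: drops successful dependency telemetry,
// passes everything else to the next processor in the chain.
public class SuccessfulDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    // The SDK supplies the next processor in the chain.
    public SuccessfulDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dependency && dependency.Success == true)
        {
            return; // Drop the item by not forwarding it.
        }

        _next.Process(item);
    }
}
```

In ASP.NET Core apps, one way to register such a processor is `services.AddApplicationInsightsTelemetryProcessor<SuccessfulDependencyFilter>();`.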
public void Process(ITelemetry item)
### Java
-To learn more about telemetry processors and their implementation in Java, please reference the [Java telemetry processors documentation](./java-standalone-telemetry-processors.md).
+To learn more about telemetry processors and their implementation in Java, reference the [Java telemetry processors documentation](./java-standalone-telemetry-processors.md).
### JavaScript web applications
Use telemetry initializers to enrich telemetry with additional information or to
For example, Application Insights for a web package collects telemetry about HTTP requests. By default, it flags as failed any request with a response code >=400. But if you want to treat 400 as a success, you can provide a telemetry initializer that sets the success property.
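A minimal sketch of such an initializer (the class name is hypothetical) could look like this:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical initializer: treats HTTP 400 responses as successful requests.
public class MarkBadRequestAsSuccessInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry request && request.ResponseCode == "400")
        {
            request.Success = true;
        }
    }
}
```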
-If you provide a telemetry initializer, it's called whenever any of the Track*() methods are called. This includes `Track()` methods called by the standard telemetry modules. By convention, these modules don't set any property that was already set by an initializer. Telemetry initializers are called before calling telemetry processors. So any enrichments done by initializers are visible to processors.
+If you provide a telemetry initializer, it's called whenever any of the Track*() methods are called. These calls include the `Track()` methods called by the standard telemetry modules. By convention, these modules don't set any property that an initializer already set. Telemetry initializers are called before telemetry processors, so any enrichments done by initializers are visible to processors.
**Define your initializer**
ASP.NET **Core/Worker service apps: Load your initializer**
> [!NOTE] > Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
-For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#adding-telemetryinitializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. This is done in the `Startup.ConfigureServices` method.
+For apps written using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#adding-telemetryinitializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
```csharp using Microsoft.ApplicationInsights.Extensibility;
public void Initialize(ITelemetry telemetry)
} } ```-
-#### Add information from HttpContext
-
-The following sample initializer reads data from [`HttpContext`](/aspnet/core/fundamentals/http-context) and appends it to a `RequestTelemetry` instance. The `IHttpContextAccessor` is automatically provided through constructor dependency injection.
-
-```csharp
-public class HttpContextRequestTelemetryInitializer : ITelemetryInitializer
-{
- private readonly IHttpContextAccessor httpContextAccessor;
-
- public HttpContextRequestTelemetryInitializer(IHttpContextAccessor httpContextAccessor)
- {
- this.httpContextAccessor =
- httpContextAccessor ??
- throw new ArgumentNullException(nameof(httpContextAccessor));
- }
-
- public void Initialize(ITelemetry telemetry)
- {
- var requestTelemetry = telemetry as RequestTelemetry;
- if (requestTelemetry == null) return;
-
- var claims = this.httpContextAccessor.HttpContext.User.Claims;
- Claim oidClaim = claims.FirstOrDefault(claim => claim.Type == "oid");
- requestTelemetry.Properties.Add("UserOid", oidClaim?.Value);
- }
-}
-```
- ## ITelemetryProcessor and ITelemetryInitializer What's the difference between telemetry processors and telemetry initializers?
What's the difference between telemetry processors and telemetry initializers?
* Telemetry initializers always run before telemetry processors. * Telemetry initializers may be called more than once. By convention, they don't set any property that was already set. * Telemetry processors allow you to completely replace or discard a telemetry item.
-* All registered telemetry initializers are guaranteed to be called for every telemetry item. For telemetry processors, SDK guarantees calling the first telemetry processor. Whether the rest of the processors are called or not is decided by the preceding telemetry processors.
-* Use telemetry initializers to enrich telemetry with additional properties or override an existing one. Use a telemetry processor to filter out telemetry.
+* All registered telemetry initializers are called for every telemetry item. For telemetry processors, the SDK guarantees calling the first telemetry processor. Whether the rest of the processors are called is decided by the preceding telemetry processors.
+* Use telemetry initializers to enrich telemetry with more properties or override an existing one. Use a telemetry processor to filter out telemetry.
> [!NOTE] > JavaScript only has telemetry initializers which can [filter out events by using ITelemetryInitializer](#javascript-web-applications)
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based Application Insights allows you to take advantage of all the lat
* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price. Otherwise, Pay-as-you-go data ingestion and data retention are billed similarly in Log Analytics as they are in Application Insights. * Faster data ingestion via Log Analytics streaming ingestion.
+> [!NOTE]
+> After migrating to a workspace-based Application Insights resource, telemetry from multiple Application Insights resources may be stored in a common Log Analytics workspace. You will still be able to pull data from a specific Application Insights resource, [as described here](#understanding-log-queries).
+ ## Migration process When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate will change the location where new data is written to a Log Analytics workspace while preserving access to your classic resource data.
This section walks through migrating a classic Application Insights resource to
![Migrate resource button](./media/convert-classic-resource/migrate.png)
-3. Choose the Log Analytics Workspace where you want all future ingested Application Insights telemetry to be stored.
+3. Choose the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can be a Log Analytics workspace in the same subscription or in a different subscription that shares the same Azure AD tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource.
![Migration wizard UI with option to select target workspace](./media/convert-classic-resource/migration.png)
Once your resource is migrated, you'll see the corresponding workspace info in t
Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. > [!NOTE]
-> After migrating to a workspace-based Application Insights resource we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
+> After migrating to a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/daily-cap.md) to limit ingestion and costs instead of the cap in Application Insights.
## Understanding log queries
To write queries against the [new workspace-based table structure/schema](#works
To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
-When you query directly from the Log Analytics UI within your workspace, you'll only see the data that is ingested post migration. To see both your classic Application Insights data + new data ingested after migration in a unified query experience use the Logs (Analytics) query view from within your migrated Application Insights resource.
+If multiple Application Insights resources store their telemetry in one Log Analytics workspace but you want to query data from only one specific Application Insights resource, you have two options:
+
+- Option 1: Go to the desired Application Insights resource and open the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource.
+- Option 2: Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and open the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in `_ResourceId` property that's available in all application-specific tables (a programmatic sketch of this filter follows below).
+
+Notice that if you query directly from the Log Analytics workspace, you'll only see data that is ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource.
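One way to run the same `_ResourceId` filter programmatically is with the `Azure.Monitor.Query` client library. This is a sketch: the workspace ID and resource ID are placeholders, and `AppRequests` is one of the workspace-based tables.

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholders: substitute your workspace ID and the full ARM resource ID
// of the Application Insights resource whose telemetry you want.
string workspaceId = "<log-analytics-workspace-id>";
string appResourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/components/<app-name>";

var client = new LogsQueryClient(new DefaultAzureCredential());

// =~ is a case-insensitive comparison, since _ResourceId values are lowercased.
Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
    workspaceId,
    $"AppRequests | where _ResourceId =~ '{appResourceId}' | take 10",
    new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (LogsTableRow row in result.Value.Table.Rows)
{
    Console.WriteLine(row["Name"]);
}
```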
> [!NOTE] > If you rename your Application Insights resource after migrating to workspace-based model, the Application Insights Logs tab will no longer show the telemetry collected before renaming. You will be able to see all data (old and new) on the Logs tab of the associated Log Analytics resource.
You can check your current retention settings for Log Analytics under **General*
## Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). With workspace-based Application Insights resources data is stored in a Log Analytics workspace with other monitoring data and application data. This simplifies your configuration by allowing you to more easily analyze data across multiple solutions and to leverage the capabilities of workspaces.
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). With workspace-based Application Insights resources, data is stored in a Log Analytics workspace together with other monitoring and application data. This arrangement simplifies your configuration by letting you analyze data across multiple solutions more easily and leverage the capabilities of workspaces.
### Classic data structure The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
In this article, you'll learn how to capture logs with Application Insights in .
[nuget-ai-ws-tc]: https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel > [!TIP]
-> The [`Microsoft.ApplicationInsights.WorkerService`][nuget-ai-ws] NuGet package is beyond the scope of this article. It can be used to enable Application Insights for background services. For more information, see [Application Insights for Worker Service apps](./worker-service.md).
+> The [`Microsoft.ApplicationInsights.WorkerService`][nuget-ai-ws] NuGet package, used to enable Application Insights for background services, is out of scope. For more information, see [Application Insights for Worker Service apps](./worker-service.md).
Depending on the Application Insights logging package that you use, there will be various ways to register `ApplicationInsightsLoggerProvider`. `ApplicationInsightsLoggerProvider` is an implementation of <xref:Microsoft.Extensions.Logging.ILoggerProvider>, which is responsible for providing <xref:Microsoft.Extensions.Logging.ILogger> and <xref:Microsoft.Extensions.Logging.ILogger%601> implementations. ## ASP.NET Core applications
-To add Application Insights telemetry to ASP.NET Core applications, use the `Microsoft.ApplicationInsights.AspNetCore` NuGet package. You can configure this through [Visual Studio as a connected service](/visualstudio/azure/azure-app-insights-add-connected-service), or manually.
+To add Application Insights telemetry to ASP.NET Core applications, use the `Microsoft.ApplicationInsights.AspNetCore` NuGet package. You can configure this telemetry through [Visual Studio as a connected service](/visualstudio/azure/azure-app-insights-add-connected-service), or manually.
By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the [code](./asp-net-core.md) or [codeless](./azure-web-apps-net-core.md#enable-auto-instrumentation-monitoring) approach. The registered provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. You can customize severity and categories. For more information, see [Logging level](#logging-level).
By default, ASP.NET Core applications have an Application Insights logging provi
public void ConfigureServices(IServiceCollection services) { services.AddApplicationInsightsTelemetry();
- // Configure the Connection String/Instrumentation key in appsettings.json
+ // Configure the Connection String in appsettings.json
} public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
There are several limitations when you're logging from *Program.cs* and *Startup
* Telemetry is sent through the [InMemoryChannel](./telemetry-channels.md) telemetry channel. * No [sampling](./sampling.md) is applied to telemetry.
-* Standard [telemetry initializers or processors](./api-filtering-sampling.md) are not available.
+* Standard [telemetry initializers or processors](./api-filtering-sampling.md) aren't available.
-The following examples demonstrate this by explicitly instantiating and configuring *Program.cs* and *Startup.cs*.
+The following examples demonstrate these limitations by explicitly instantiating and configuring Application Insights in *Program.cs* and *Startup.cs*.
#### Example Program.cs
namespace WebApplication
}) .ConfigureLogging((context, builder) => {
- // Providing an instrumentation key is required if you're using the
+ // Providing a connection string is required if you're using the
// standalone Microsoft.Extensions.Logging.ApplicationInsights package, // or when you need to capture logs during application startup, such as // in Program.cs or Startup.cs itself. builder.AddApplicationInsights(
- context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"]);
+ configureTelemetryConfiguration: (config) => config.ConnectionString = context.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"],
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
// Capture all log-level entries from Program builder.AddFilter<ApplicationInsightsLoggerProvider>(
namespace WebApplication
} ```
-In the preceding code, `ApplicationInsightsLoggerProvider` is configured with your `"APPINSIGHTS_INSTRUMENTATIONKEY"` instrumentation key. Filters are applied, setting the log level to <xref:Microsoft.Extensions.Logging.LogLevel.Trace?displayProperty=nameWithType>.
-- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] #### Example Startup.cs
namespace WebApplication
public void ConfigureServices(IServiceCollection services) { services.AddApplicationInsightsTelemetry();
- // Configure the Connection String/Instrumentation key in appsettings.json
+ // Configure the Connection String in appsettings.json
} // The ILogger<Startup> is resolved by dependency injection
namespace ConsoleApp
services.AddLogging(builder => { // Only Application Insights is registered as a logger provider
- builder.AddApplicationInsights("<YourInstrumentationKey>");
+ builder.AddApplicationInsights(
+ configureTelemetryConfiguration: (config) => config.ConnectionString = "<YourConnectionString>",
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
}); IServiceProvider serviceProvider = services.BuildServiceProvider();
namespace ConsoleApp
The preceding example uses the `Microsoft.Extensions.Logging.ApplicationInsights` package. By default, this configuration uses the "bare minimum" `TelemetryConfiguration` setup for sending data to Application Insights: the `InMemoryChannel` channel. There's no sampling and no standard `TelemetryInitializer` instance. You can override this behavior for a console application, as the following example shows.
-Install this additional package:
+Also install this package:
```xml <PackageReference Include="Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel" Version="2.17.0" />
namespace ConsoleApp
services.AddLogging(builder => { // Only Application Insights is registered as a logger provider
- builder.AddApplicationInsights("<YourInstrumentationKey>");
+ builder.AddApplicationInsights(
+ configureTelemetryConfiguration: (config) => config.ConnectionString = "<YourConnectionString>",
+ configureApplicationInsightsLoggerOptions: (options) => { }
+ );
}); IServiceProvider serviceProvider = services.BuildServiceProvider();
The following examples show how to apply filter rules to `ApplicationInsightsLog
### Create filter rules in configuration with appsettings.json
-`ApplicationInsightsLoggerProvider` is aliased as "ApplicationInsights." The following section of *appsettings.json* overrides the default <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> log level of Application Insights to log categories that start with "Microsoft" at level <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher.
+`ApplicationInsightsLoggerProvider` is aliased as "ApplicationInsights". The following section of *appsettings.json* overrides the default <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> log level of Application Insights to log categories that start with "Microsoft" at level <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher.
```json {
Here's the change in the *appsettings.json* file:
### Why do some ILogger logs not have the same properties as others?
-Application Insights captures and sends `ILogger` logs by using the same `TelemetryConfiguration` information that's used for every other telemetry. But there's an exception. By default, `TelemetryConfiguration` is not fully set up when you log from *Program.cs* or *Startup.cs*. Logs from these places won't have the default configuration, so they won't be running all `TelemetryInitializer` instances and `TelemetryProcessor` instances.
+Application Insights captures and sends `ILogger` logs by using the same `TelemetryConfiguration` information that's used for every other telemetry. But there's an exception. By default, `TelemetryConfiguration` isn't fully set up when you log from *Program.cs* or *Startup.cs*. Logs from these places won't have the default configuration, so they won't be running all `TelemetryInitializer` instances and `TelemetryProcessor` instances.
-### I'm using the standalone package Microsoft.Extensions.Logging.ApplicationInsights, and I want to log some additional custom telemetry manually. How should I do that?
+### I'm using the standalone package Microsoft.Extensions.Logging.ApplicationInsights, and I want to log more custom telemetry manually. How should I do that?
-When you use the standalone package, `TelemetryClient` is not injected to the dependency injection (DI) container. You need to create a new instance of `TelemetryClient` and use the same configuration that the logger provider uses, as the following code shows. This ensures that the same configuration is used for all custom telemetry and telemetry from `ILogger`.
+When you use the standalone package, `TelemetryClient` isn't injected to the dependency injection (DI) container. You need to create a new instance of `TelemetryClient` and use the same configuration that the logger provider uses, as the following code shows. This requirement ensures that the same configuration is used for all custom telemetry and telemetry from `ILogger`.
```csharp public class MyController : ApiController
The Application Insights extension in Azure Web Apps uses the new provider. You
### I can't see some of the logs from my application in the workspace.
-This may happen because of adaptive sampling. Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). See the [Sampling in Application Insights](./sampling.md) for more details.
+Missing data can occur due to adaptive sampling. Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). See the [Sampling in Application Insights](./sampling.md) for more details.
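If sampling is the cause and you need full fidelity, a minimal sketch for ASP.NET Core (weigh the increased ingestion volume and cost first) disables adaptive sampling:

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Disable adaptive sampling so all telemetry reaches the workspace.
    // Expect higher data volume and cost.
    services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
    {
        EnableAdaptiveSampling = false
    });
}
```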
## Next steps
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics Stream - Azure Application Insights
+ Title: Diagnose with Live Metrics - Application Insights - Azure Monitor
description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Last updated 05/31/2022
ms.devlang: csharp
-# Live Metrics Stream: Monitor & Diagnose with 1-second latency
+# Live Metrics: Monitor & Diagnose with 1-second latency
-Monitor your live, in-production web application by using Live Metrics Stream (also known as QuickPulse) from [Application Insights](./app-insights-overview.md). Select and filter metrics and performance counters to watch in real time, without any disturbance to your service. Inspect stack traces from sample failed requests and exceptions. Together with [Profiler](./profiler.md) and [Snapshot debugger](./snapshot-debugger.md), Live Metrics Stream provides a powerful and non-invasive diagnostic tool for your live web site.
+Monitor your live, in-production web application by using Live Metrics (also known as QuickPulse) from [Application Insights](./app-insights-overview.md). Select and filter metrics and performance counters to watch in real time, without any disturbance to your service. Inspect stack traces from sample failed requests and exceptions. Together with [Profiler](./profiler.md) and [Snapshot debugger](./snapshot-debugger.md), Live Metrics provides a powerful and non-invasive diagnostic tool for your live website.
> [!NOTE]
-> Live Metrics only supports TLS 1.2. For more information refer to [Troubleshooting](#troubleshooting).
+> Live Metrics only supports TLS 1.2. For more information, refer to [Troubleshooting](#troubleshooting).
-With Live Metrics Stream, you can:
+With Live Metrics, you can:
* Validate a fix while it's released, by watching performance and failure counts. * Watch the effect of test loads, and diagnose issues live.
With Live Metrics Stream, you can:
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps. > [!NOTE]
-> The number of monitored server instances displayed by Live Metrics Stream may be lower than the actual number of instances allocated for the application. This is because many modern web servers will unload applications that do not receive requests over a period of time in order to conserve resources. Since Live Metrics Stream only counts servers that are currently running the application, servers that have already unloaded the process will not be included in that total.
+> The number of monitored server instances displayed by Live Metrics may be lower than the actual number of instances allocated for the application. This is because many modern web servers will unload applications that do not receive requests over a period of time in order to conserve resources. Since Live Metrics only counts servers that are currently running the application, servers that have already unloaded the process will not be included in that total.
## Get started 1. Follow language specific guidelines to enable Live Metrics. * [ASP.NET](./asp-net.md) - Live Metrics is enabled by default.
- * [ASP.NET Core](./asp-net-core.md)- Live Metrics is enabled by default.
- * [.NET/.NET Core Console/Worker](./worker-service.md)- Live Metrics is enabled by default.
- * [.NET Applications - Enable using code](#enable-livemetrics-using-code-for-any-net-application).
+ * [ASP.NET Core](./asp-net-core.md) - Live Metrics is enabled by default.
+ * [.NET/.NET Core Console/Worker](./worker-service.md) - Live Metrics is enabled by default.
+ * [.NET Applications - Enable using code](#enable-live-metrics-using-code-for-any-net-application).
* [Java](./java-in-process-agent.md) - Live Metrics is enabled by default. * [Node.js](./nodejs.md#live-metrics)
-2. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app, and then open Live Stream.
+2. In the [Azure portal](https://portal.azure.com), open the Application Insights resource for your app, then open Live Stream.
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters. > [!IMPORTANT]
-> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications require Application Insights version 2.8.0 or above. To enable Application Insights ensure it is both activated in the Azure Portal and that the Application Insights NuGet package is included. Without the NuGet package some telemetry is sent to Application Insights but that telemetry will not show in the Live Metrics Stream.
+> Monitoring ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications requires Application Insights version 2.8.0 or above. To enable Application Insights, ensure both that it's activated in the Azure portal and that the Application Insights NuGet package is included. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry won't show in Live Metrics.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### Enable LiveMetrics using code for any .NET application
+### Enable Live Metrics using code for any .NET application
-Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to set up Live Metrics
-manually.
+> [!NOTE]
+> Live Metrics is enabled by default when onboarding using the recommended instructions for .NET Applications.
+
+To set up Live Metrics manually:
1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector) 2. The following sample console app code shows setting up Live Metrics.
namespace LiveMetricsDemo
while (true) { // Send dependency and request telemetry.
- // These will be shown in Live Metrics stream.
+ // These will be shown in Live Metrics.
// CPU/Memory Performance counter is also shown // automatically without any additional steps. client.TrackDependency("My dependency", "target", "http://sample",
namespace LiveMetricsDemo
While the above sample is for a console app, the same code can be used in any .NET application. If any other telemetry modules that auto-collect telemetry are enabled, it's important to ensure that the same configuration used for initializing those modules is also used for the Live Metrics module.
-## How does Live Metrics Stream differ from Metrics Explorer and Analytics?
+## How does Live Metrics differ from Metrics Explorer and Analytics?
| |Live Stream | Metrics Explorer and Analytics | ||||
You can monitor a value different from Count. The options depend on the type of
In addition to Application Insights telemetry, you can also monitor any Windows performance counter by selecting that from the stream options, and providing the name of the performance counter.
-Live metrics are aggregated at two points: locally on each server, and then across all servers. You can change the default at either by selecting other options in the respective drop-downs.
+Live Metrics are aggregated at two points: locally on each server, and then across all servers. You can change the default at either by selecting other options in the respective drop-downs.
## Sample Telemetry: Custom Live Diagnostic Events By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time.
See the details of an item in the live feed by clicking it. You can pause the fe
## Filter by server instance
-If you want to monitor a particular server role instance, you can filter by server. To filter select the server name under *Servers*.
+If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under *Servers*.
![Sampled live failures](./media/live-stream/filter-by-server.png) ## Secure the control channel
-> [!NOTE]
-> Currently, you can only set up an authenticated channel using manual instrumentation (SDK) and cannot authenticate servers using Azure service integration (or auto instrumentation).
+Live Metrics custom filters allow you to control which of your application's telemetry is streamed to the Live Metrics view in the Azure portal. The filter criteria are sent to the apps that are instrumented with the Application Insights SDK. The filter value could potentially contain sensitive information, such as CustomerID. To keep this value secure and prevent potential disclosure to unauthorized applications, you have two options:
-The custom filters criteria you specify in Live Metrics portal are sent back to the Live Metrics component in the Application Insights SDK. The filters could potentially contain sensitive information such as customerIDs. You can make the channel secure with a secret API key in addition to the instrumentation key.
+- Recommended: Secure Live Metrics channel using [Azure AD authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication)
+- Legacy (no longer recommended): Set up an authenticated channel by configuring a secret API key as explained below
+
+It's possible to try custom filters without setting up an authenticated channel: select any of the filter icons and authorize the connected servers. If you choose this option, you'll have to authorize the connected servers once per new session or whenever a new server comes online.
+
+> [!WARNING]
+> We strongly discourage the use of unsecured channels and will disable this option 6 months after you start using it. The "Authorize connected servers" dialog displays the date (highlighted below) after which this option will be disabled.
-### Create an API Key
+
+### Legacy option: Create API key
![API key > Create API key](./media/live-stream/api-key.png) ![Create API Key tab. Select "authenticate SDK control channel" then "generate key"](./media/live-stream/create-api-key.png) ### Add API key to Configuration
-### ASP.NET
+#### ASP.NET
In the applicationinsights.config file, add the AuthenticationApiKey to the QuickPulseTelemetryModule:
In the applicationinsights.config file, add the AuthenticationApiKey to the Quic
</Add> ```
-### ASP.NET Core
+#### ASP.NET Core
For [ASP.NET Core](./asp-net-core.md) applications, follow the instructions below.
public void ConfigureServices(IServiceCollection services)
More information on configuring ASP.NET Core applications can be found in our guidance on [configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configuring-or-removing-default-telemetrymodules).
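For reference, a minimal sketch of that configuration (the API key value is a placeholder) registers the key on the QuickPulse telemetry module:

```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Attach the secret API key to the Live Metrics (QuickPulse) control channel.
    services.ConfigureTelemetryModule<QuickPulseTelemetryModule>(
        (module, options) => module.AuthenticationApiKey = "<your-api-key>");
}
```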
-### WorkerService
+#### WorkerService
For [WorkerService](./worker-service.md) applications, follow the instructions below.
Next, add the following line before the call `services.AddApplicationInsightsTel
More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configuring-or-removing-default-telemetrymodules).
-### Azure Function Apps
+#### Azure Function Apps
For Azure Function Apps (v2), securing the channel with an API key can be accomplished with an environment variable. Create an API key from within your Application Insights resource and go to **Settings > Configuration** for your Function App. Select **New application setting** and enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY` and a value that corresponds to your API key.
-Securing the control channel is not necessary if you recognize and trust all the connected servers. This option is made available so that you can try custom filters without having to set up an authenticated channel. If you choose this option you will have to authorize the connected servers once every new session or when a new server comes online. We strongly discourage the use of unsecured channels and will disable this option 6 months after you start using it. To use custom filters without a secure channel simply click on any of the filter icons and authorize the connected servers. The "Authorize connected servers" dialog displays the date (highlighted below) after which this option will be disabled.
--
-> [!NOTE]
-> We strongly recommend that you set up the authenticated channel before entering potentially sensitive information like CustomerID in the filter criteria.
->
- ## Supported features table | Language | Basic Metrics | Performance metrics | Custom filtering | Sample telemetry | CPU split by process |
Basic metrics include request, dependency, and exception rate. Performance metri
\* PerfCounters support varies slightly across versions of .NET Core that do not target the .NET Framework: - PerfCounters metrics are supported when running in Azure App Service for Windows. (AspNetCore SDK Version 2.4.1 or higher)-- PerfCounters are supported when app is running in ANY Windows machines (VM or Cloud Service or On-prem etc.) (AspNetCore SDK Version 2.7.1 or higher), but for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
+- PerfCounters are supported when the app is running on ANY Windows machine (VM, Cloud Service, on-premises, etc.) (AspNetCore SDK Version 2.7.1 or higher), but only for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher.
- PerfCounters are supported when app is running ANYWHERE (Linux, Windows, app service for Linux, containers, etc.) in the latest versions, but only for apps targeting .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher. ## Troubleshooting
-Live Metrics Stream uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check the [outgoing ports for Live Metrics Stream](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
+Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check the [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
-As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you are using an older version of TLS , Live Metrics will not display any data. For applications based on .NET Framework 4.5.1 refer to [How to enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support newer TLS version.
+As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you are using an older version of TLS, Live Metrics will not display any data. For applications based on .NET Framework 4.5.1, refer to [How to enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support newer TLS version.
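If you can't change machine-wide settings, a commonly used code-level sketch for .NET Framework apps (apply before any telemetry is sent; prefer the configuration approach in the linked guidance) opts in to TLS 1.2 at startup:

```csharp
using System.Net;

// Opt in to TLS 1.2 for outbound connections, including Live Metrics traffic.
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
```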
> [!WARNING] > Currently, authenticated channel only supports manual SDK instrumentation. The authenticated channel cannot be configured with auto-instrumentation (used to be known as "codeless attach").
As described in the [Azure TLS 1.2 migration announcement](https://azure.microso
1. Verify you are using the latest version of the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector) 2. Edit the `ApplicationInsights.config` file * Verify that the connection string points to the Application Insights resource you are using
- * Locate the `QuickPulseTelemetryModule` configuration option; if it is not there add it
- * Locate the `QuickPulseTelemetryProcessor` configuration option; if it is not there add it
+ * Locate the `QuickPulseTelemetryModule` configuration option; if it is not there, add it
+ * Locate the `QuickPulseTelemetryProcessor` configuration option; if it is not there, add it
```xml <TelemetryModules>
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Container insights supports the following environments:
- [Azure Red Hat OpenShift](../../openshift/intro-openshift.md) version 4.x - [Red Hat OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4.x
+>[!NOTE]
+> Container insights support for the Windows Server 2022 operating system is in public preview.
## Supported Kubernetes versions The versions of Kubernetes and support policy are the same as those [supported in Azure Kubernetes Service (AKS)](../../aks/supported-kubernetes-versions.md).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights is a feature designed to monitor the performance of container
- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine) - [Azure Container Instances](../../container-instances/container-instances-overview.md) - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises-- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) (preview)
+- [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md)
Container insights supports clusters running the Linux and Windows Server 2019 operating system. The container runtimes it supports are Docker, Moby, and any CRI compatible runtime such as CRI-O and ContainerD.
+>[!NOTE]
+> Container insights support for the Windows Server 2022 operating system is in public preview.
+ Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications. Container insights gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md), and log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
Container insights delivers a comprehensive monitoring experience to understand
Check out the following video, which provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
-[!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
-
+> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
## How to access Container insights
azure-monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/functions.md
Title: Functions in Azure Monitor log queries description: This article describes how to use functions to call a query from another log query in Azure Monitor. -- Previously updated : 04/19/2021+++ Last updated : 06/22/2022
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md
Title: "Log Analytics tutorial" description: Learn how to use Log Analytics in Azure Monitor to build and run a log query and analyze its results in the Azure portal. Previously updated : 06/28/2021+++ Last updated : 06/22/2022
azure-monitor Log Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-excel.md
Title: Integrate Log Analytics and Excel description: Get a Log Analytics query into Excel and refresh results inside Excel. -- Previously updated : 06/10/2021+++ Last updated : 06/22/2022
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
Title: Log Analytics integration with Power BI and Excel description: How to send results from Log Analytics to Power BI -- Previously updated : 11/03/2020+++ Last updated : 06/22/2022 # Log Analytics integration with Power BI
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Title: Log queries in Azure Monitor description: Reference information for Kusto query language used by Azure Monitor. Includes additional elements specific to Azure Monitor and elements not supported in Azure Monitor log queries. -- Previously updated : 10/09/2020+++ Last updated : 06/22/2022
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
Restricting access as explained above applies to data in the resource. However,
> Queries sent through the Azure Resource Management (ARM) API can't use Azure Monitor Private Links. These queries can only go through if the target resource allows queries from public networks (set through the Network Isolation blade, or [using the CLI](./private-link-configure.md#set-resource-access-flags)). > > The following experiences are known to run queries through the ARM API:
-> * Sentinel
> * LogicApp connector > * Update Management solution > * Change Tracking solution
azure-monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/queries.md
Title: Using queries in Azure Monitor Log Analytics
description: Overview of log queries in Azure Monitor Log Analytics including different types of queries and sample queries that you can use. -- Previously updated : 05/20/2021+++ Last updated : 06/22/2022 # Using queries in Azure Monitor Log Analytics
azure-monitor Query Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-packs.md
Title: Query packs in Azure Monitor
description: Query packs in Azure Monitor provide a way to share collections of log queries in multiple Log Analytics workspaces. -- Previously updated : 05/20/2021+++ Last updated : 06/22/2022
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
Title: Save a query in Azure Monitor Log Analytics (preview)
description: Describes how to save a query in Log Analytics. Previously updated : 05/20/2021+++ Last updated : 06/22/2022 # Save a query in Azure Monitor Log Analytics (preview)
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
description: Overview of VM insights, which monitors the health and performance
Previously updated : 06/08/2022 Last updated : 06/21/2022 # Overview of VM insights
azure-relay Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/authenticate-application.md
Title: Authenticate from an application - Azure Relay (Preview) description: This article provides information about authenticating an application with Azure Active Directory to access Azure Relay resources. Previously updated : 07/02/2021 Last updated : 06/21/2022 # Authenticate and authorize an application with Azure Active Directory to access Azure Relay entities (Preview)
azure-relay Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/authenticate-managed-identity.md
Title: Authenticate with managed identities for Azure Relay resources (preview) description: This article describes how to use managed identities to access with Azure Relay resources. Previously updated : 07/19/2021 Last updated : 06/21/2022 # Authenticate a managed identity with Azure Active Directory to access Azure Relay resources (preview)
azure-relay Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/diagnostic-logs.md
Title: Diagnostics logs for Hybrid Connections description: This article provides an overview of all the activity and diagnostics logs that are available for Azure Relay. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Enable diagnostics logs for Azure Relay Hybrid Connections When you start using your Azure Relay Hybrid Connections, you might want to monitor how and when your listeners and senders are opened and closed, and how your Hybrid Connections are created and messages are sent. This article provides an overview of activity and diagnostics logs provided by the Azure Relay service.
azure-relay Ip Firewall Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md
Title: Configure IP firewall for Azure Relay namespace description: This article describes how to Use firewall rules to allow connections from specific IP addresses to Azure Relay namespaces. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Configure IP firewall for an Azure Relay namespace
azure-relay Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/move-across-regions.md
Title: Move an Azure Relay namespace to another region description: This article shows you how to move an Azure Relay namespace from the current region to another region. Previously updated : 06/03/2021 Last updated : 06/21/2022
azure-relay Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/network-security.md
Title: Network security for Azure Relay description: This article describes how to use IP firewall rules and private endpoints with Azure Relay. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Network security for Azure Relay
azure-relay Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/private-link-service.md
Title: Integrate Azure Relay with Azure Private Link Service description: Learn how to integrate Azure Relay with Azure Private Link Service Previously updated : 11/10/2021 Last updated : 06/21/2022
azure-relay Relay Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-api-overview.md
Title: Azure Relay API overview | Microsoft Docs
description: This article provides an overview of available Azure Relay APIs (.NET Standard, .NET Framework, Node.js, etc.) Previously updated : 06/23/2021 Last updated : 06/21/2022 # Available Relay APIs
azure-relay Relay Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-authentication-and-authorization.md
Title: Azure Relay authentication and authorization | Microsoft Docs description: This article provides an overview of Shared Access Signature (SAS) authentication with the Azure Relay service. Previously updated : 07/19/2021 Last updated : 06/21/2022 # Azure Relay authentication and authorization
azure-relay Relay Create Namespace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-create-namespace-portal.md
Title: Create a Relay namespace using the Azure portal | Microsoft Docs description: This article provides a walkthrough that shows you how to create a Relay namespace using the Azure portal. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Create a Relay namespace using the Azure portal
azure-relay Relay Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-exceptions.md
Title: Azure Relay exceptions and how to resolve them | Microsoft Docs description: List of Azure Relay exceptions and suggested actions you can take to help resolve them. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Azure Relay exceptions
azure-relay Relay Hybrid Connections Dotnet Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-dotnet-api-overview.md
Title: Overview of Azure Relay .NET Standard APIs | Microsoft Docs
description: This article summarizes some of the key an overview of Azure Relay Hybrid Connections .NET Standard API. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Azure Relay Hybrid Connections .NET Standard API overview
azure-relay Relay Hybrid Connections Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-dotnet-get-started.md
Title: Azure Relay Hybrid Connections - WebSockets in .NET
description: Write a C# console application for Azure Relay Hybrid Connections WebSockets. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Get started with Relay Hybrid Connections WebSockets in .NET
azure-relay Relay Hybrid Connections Http Requests Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-http-requests-dotnet-get-started.md
Title: Azure Relay Hybrid Connections - HTTP requests in .NET
description: Write a C# console application for Azure Relay Hybrid Connections HTTP requests in .NET. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Get started with Relay Hybrid Connections HTTP requests in .NET
azure-relay Relay Hybrid Connections Http Requests Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-http-requests-node-get-started.md
Title: Azure Relay Hybrid Connections - HTTP requests in Node.js description: Write a Node.js console application for Azure Relay Hybrid Connections HTTP requests. Previously updated : 06/23/2021 Last updated : 06/21/2022
azure-relay Relay Hybrid Connections Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-node-get-started.md
Title: Azure Relay Hybrid Connections - WebSockets in Node description: Write a Node.js console application for Azure Relay Hybrid Connections WebSockets Previously updated : 06/23/2021 Last updated : 06/21/2022
azure-relay Relay Hybrid Connections Node Ws Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-node-ws-api-overview.md
Title: Overview of the Azure Relay Node APIs | Microsoft Docs description: This article provides an overview of the Node.js API for the Azure Relay service. It also shows how to use the hyco-ws Node package. Previously updated : 06/23/2021 Last updated : 06/21/2022
## Overview
-The [`hyco-ws`](https://www.npmjs.com/package/hyco-ws) Node package for Azure Relay Hybrid Connections is built on and extends the ['ws'](https://www.npmjs.com/package/ws) NPM package. This package re-exports all exports of that base package and adds new exports that enable integration with the Azure Relay service Hybrid Connections feature.
+The [`hyco-ws`](https://www.npmjs.com/package/hyco-ws) Node package for Azure Relay Hybrid Connections is built on and extends the [`ws`](https://www.npmjs.com/package/ws) NPM package. This package re-exports all exports of that base package and adds new exports that enable integration with the Azure Relay service Hybrid Connections feature.
Existing applications that `require('ws')` can use this package with `require('hyco-ws')` instead, which also enables hybrid scenarios in which an application can listen for WebSocket connections locally from "inside the firewall" and via Hybrid Connections, all at the same time. ## Documentation
-The APIs are [documented in the main 'ws' package](https://github.com/websockets/ws/blob/master/doc/ws.md). This article describes how this package differs from that baseline.
+The APIs are [documented in the main `ws` package](https://github.com/websockets/ws/blob/master/doc/ws.md). This article describes how this package differs from that baseline.
The key difference between the base package and `hyco-ws` is that it adds a new server class, exported via `require('hyco-ws').RelayedServer`, and a few helper methods.
This method calls the constructor to create a new instance of the `RelayedServer`.
##### relayedConnect
-Simply mirroring the `createRelayedServer` helper in function, `relayedConnect` creates a client connection and subscribes to the 'open' event on the resulting socket.
+Mirroring the `createRelayedServer` helper in function, `relayedConnect` creates a client connection and subscribes to the 'open' event on the resulting socket.
```JavaScript var uri = WebSocket.createRelaySendUri(ns, path);
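For comparison with the Node API above, the neighboring azure-relay entries in this digest cover the `Microsoft.Azure.Relay` .NET library; a rough sketch of the equivalent relayed-listener pattern is below. The namespace, hybrid connection name, and SAS key are placeholders, not values from the article.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

class Program
{
    static async Task Main()
    {
        // Placeholder namespace, hybrid connection name, and SAS credentials.
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", "<your-sas-key>");
        var listener = new HybridConnectionListener(
            new Uri("sb://<your-namespace>.servicebus.windows.net/<hybrid-connection>"),
            tokenProvider);

        // Open the listener so the Relay service starts forwarding connections.
        await listener.OpenAsync();

        // Accept one relayed connection and echo a line of text back.
        HybridConnectionStream stream = await listener.AcceptConnectionAsync();
        using (var reader = new StreamReader(stream))
        using (var writer = new StreamWriter(stream) { AutoFlush = true })
        {
            string line = await reader.ReadLineAsync();
            await writer.WriteLineAsync($"Echo: {line}");
        }

        await listener.CloseAsync();
    }
}
```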
azure-relay Relay Hybrid Connections Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-protocol.md
Title: Azure Relay Hybrid Connections protocol guide | Microsoft Docs description: This article describes the client-side interactions with the Hybrid Connections relay for connecting clients in listener and sender roles. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Azure Relay Hybrid Connections protocol
azure-relay Relay Metrics Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-metrics-azure-monitor.md
Title: Azure Relay metrics in Azure Monitor | Microsoft Docs
description: This article provides information on how you can use Azure Monitor to monitor the state of Azure Relay. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Azure Relay metrics in Azure Monitor
azure-relay Relay Migrate Acs Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-migrate-acs-sas.md
Title: Azure Relay - Migrate to Shared Access Signature authorization description: Describes how to migrate Azure Relay applications from using Azure Active Directory Access Control Service to Shared Access Signature authorization. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Azure Relay - Migrate from Azure Active Directory Access Control Service to Shared Access Signature authorization
azure-relay Relay Port Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-port-settings.md
Title: Azure Relay port settings | Microsoft Docs description: This article includes a table that describes the required configuration for port values for Azure Relay. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Azure Relay port settings
azure-relay Relay What Is It https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-what-is-it.md
Title: What is Azure Relay? | Microsoft Docs description: This article provides an overview of the Azure Relay service, which allows you to develop cloud applications that consume on-premises services running in your corporate network without opening a firewall connection or making intrusive changes to your network infrastructure. Previously updated : 09/02/2021 Last updated : 06/21/2022
azure-relay Service Bus Dotnet Hybrid App Using Service Bus Relay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md
Title: Azure Windows Communication Foundation (WCF) Relay hybrid on-premises/cloud application
description: Learn how to expose an on-premises WCF service to a web application in the cloud by using Azure Relay Previously updated : 06/23/2021 Last updated : 06/21/2022 # Expose an on-premises WCF service to a web application in the cloud by using Azure Relay
azure-relay Service Bus Relay Rest Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-relay-rest-tutorial.md
Title: 'Tutorial: REST tutorial using Azure Relay'
description: 'Tutorial: Build an Azure Relay host application that exposes a REST-based interface.' Previously updated : 06/23/2021 Last updated : 06/21/2022 # Tutorial: Azure WCF Relay REST tutorial
azure-relay Service Bus Relay Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/service-bus-relay-tutorial.md
Title: Expose an on-prem WCF REST service to clients using Azure Relay
description: This tutorial describes how to expose an on-premises WCF REST service to an external client by using Azure WCF Relay. Previously updated : 06/23/2021 Last updated : 06/21/2022 # Tutorial: Expose an on-premises WCF REST service to an external client by using Azure WCF Relay
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md
keywords: deploy SQL Edge
ms.prod: sql ms.technology: machine-learning Previously updated : 05/06/2021 Last updated : 06/21/2022
This quickstart is based on **scikit-learn** and uses the [Boston Housing datase
* Install Python packages needed for this quickstart: 1. Open [New Notebook](/sql/azure-data-studio/sql-notebooks) connected to the Python 3 Kernel.
- 1. Click **Manage Packages**
- 1. In the **Installed** tab, look for the following Python packages in the list of installed packages. If any of these packages are not installed, select the **Add New** tab, search for the package, and click **Install**.
+ 1. Select **Manage Packages**
+ 1. In the **Installed** tab, look for the following Python packages in the list of installed packages. If any of these packages are not installed, select the **Add New** tab, search for the package, and select **Install**.
- **scikit-learn** - **numpy** - **onnxmltools**
onnx_model_path = 'boston1.model.onnx'
onnxmltools.utils.save_model(onnx_model, onnx_model_path) ```
+> [!NOTE]
+> You may need to set the `target_opset` parameter for the `skl2onnx.convert_sklearn` function if there is a mismatch between the ONNX runtime version in SQL Edge and the skl2onnx package. For more information, see the [SQL Edge Release notes](release-notes.md) to get the ONNX runtime version corresponding to the release, and pick the `target_opset` for that ONNX runtime based on the [ONNX backward compatibility matrix](https://github.com/microsoft/onnxruntime/blob/master/docs/Versioning.md#version-matrix).
+ ## Test the ONNX model After converting the model to ONNX format, score the model to verify that there is little to no degradation in performance.
azure-sql-edge Onnx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/onnx-overview.md
Previously updated : 05/19/2020 Last updated : 06/21/2022 # Machine learning and AI with ONNX in SQL Edge
To obtain a model in the ONNX format:
- **Model Building Services**: Services such as the [automated Machine Learning feature in Azure Machine Learning](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) and [Azure Custom Vision Service](../cognitive-services/custom-vision-service/getting-started-build-a-classifier.md) support directly exporting the trained model in the ONNX format. -- [**Convert and/or export existing models**](https://github.com/onnx/tutorials#converting-to-onnx-format): Several training frameworks (e.g. [PyTorch](https://pytorch.org/docs/stable/onnx.html), Chainer, and Caffe2) support native export functionality to ONNX, which allows you to save your trained model to a specific version of the ONNX format. For frameworks that do not support native export, there are standalone ONNX Converter installable packages that enable you to convert models trained from different machine learning frameworks to the ONNX format.
+- [**Convert and/or export existing models**](https://github.com/onnx/tutorials#converting-to-onnx-format): Several training frameworks (for example, [PyTorch](https://pytorch.org/docs/stable/onnx.html), Chainer, and Caffe2) support native export functionality to ONNX, which allows you to save your trained model to a specific version of the ONNX format. For frameworks that do not support native export, there are standalone ONNX Converter installable packages that enable you to convert models trained from different machine learning frameworks to the ONNX format.
**Supported frameworks** * [PyTorch](http://pytorch.org/docs/master/onnx.html)
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/release-notes.md
Previously updated : 11/24/2020 Last updated : 6/21/2022 # Azure SQL Edge release notes This article describes what's new and what has changed with every new build of Azure SQL Edge.
+## Azure SQL Edge 1.0.6
+
+SQL engine build 15.0.2000.1565
+
+### What's new?
+
+- Security bug fixes
+ ## Azure SQL Edge 1.0.5 SQL engine build 15.0.2000.1562
azure-video-analyzer Monitor Log Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/monitor-log-edge.md
Using [Prometheus endpoint](https://prometheus.io/docs/practices/naming/) along
[![Diagram that shows the metrics collection using Log Analytics.](./media/telemetry-schema/log-analytics.svg)](./media/telemetry-schema/log-analytics.svg#lightbox) 1. Learn how to [collect metrics](https://github.com/Azure/iotedge/blob/main/test/modules/TestMetricsCollector/Program.cs)
-1. Use Docker CLI commands to build the [Docker file](https://github.com/Azure/iotedge/blob/main/mqtt/docker/linux/amd64/Dockerfile) and publish the image to your Azure container registry.
+1. Use Docker CLI commands to build the [Docker file](https://github.com/Azure/iotedge/blob/main/edge-hub/docker/linux/amd64/Dockerfile) and publish the image to your Azure container registry.
For more information about using the Docker CLI to push to a container registry, see [Push and pull Docker images](../../../container-registry/container-registry-get-started-docker-cli.md). For other information about Azure Container Registry, see the [documentation](../../../container-registry/index.yml). 1. After the push to Azure Container Registry is complete, the following is inserted into the deployment manifest:
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. Microsoft facial recognition services are Limited Access in order to help prevent the misuse of the services in accordance with our [AI Principles](https://www.microsoft.com/ai/responsible-ai?SilentAuth=1&wa=wsignin1.0&activetab=pivot1%3aprimaryr6) and [facial recognition](https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work/) principles. The Face Identify and Celebrity Recognition operations in Azure Video Indexer are Limited Access features that require registration.
-Since the announcement on June 11th, 2020, Azure face recognition services are strictly prohibited for use by or for U.S. police departments.
+Since the announcement on June 11th, 2020, customers may not use, or allow use of, any Azure facial recognition service by or for a police department in the United States.
## Application process
The Azure Video Indexer service is made available to customers and partners unde
FAQ about Limited Access can be found [here](https://aka.ms/limitedaccesscogservices).
-If you need help with Azure Video Indexer, find support [here](/azure/cognitive-services/cognitive-services-support-options.md).
+If you need help with Azure Video Indexer, find support [here](../cognitive-services/cognitive-services-support-options.md).
[Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure Video Indexer.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 05/16/2022 Last updated : 05/20/2022
In order to upload a video from a URL, change your code to send null in the request body:
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
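A sketch of how `queryParams` might be assembled for an upload-from-URL call; the `name`, `videoUrl`, and `accessToken` parameter names follow the upload API, while the values, the `accountAccessToken` variable, and the `System.Collections.Generic`/`System.Linq` usings are assumptions for illustration:

```csharp
// Build the query string: "videoUrl" tells the service to fetch the video
// itself, which is why the POST body above is null.
var parameters = new Dictionary<string, string>
{
    ["name"] = "my-video",                          // display name (placeholder)
    ["videoUrl"] = "https://example.com/video.mp4", // publicly reachable URL (placeholder)
    ["accessToken"] = accountAccessToken,           // obtained earlier from the API
};
string queryParams = string.Join("&",
    parameters.Select(p => $"{p.Key}={Uri.EscapeDataString(p.Value)}"));
```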
-## May 2022 release updates
+## June 2022 release updates
+
+### Create Video Indexer blade improvements in Azure portal
+
+Azure Video Indexer now supports creating a new resource with a system-assigned managed identity, or with both system-assigned and user-assigned managed identities on the same resource.
+
+You can also change the primary managed identity using the **Identity** tab in the [Azure portal](https://portal.azure.com/#home).
+
+### Limited access of celebrity recognition and face identification features
+
+As part of Microsoft's commitment to responsible AI, we are designing and releasing Azure Video Indexer identification and celebrity recognition features. These features are designed to protect the rights of individuals and society and to foster transparent human-computer interaction. Thus, access to and use of the Azure Video Indexer identification and celebrity recognition features is limited.
+
+Identification and celebrity recognition features require registration and are only available to Microsoft managed customers and partners.
+Customers who wish to use this feature are required to apply and submit an [intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu). For more information, read [Azure Video Indexer limited access](limited-access-features.md).
+
+Also, see the [announcement blog post](https://aka.ms/AAh91ff) and [investment and safeguard for facial recognition](https://aka.ms/AAh9oye).
+
+## May 2022
### Line breaking in transcripts
batch Credential Access Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/credential-access-key-vault.md
Title: Use certificates and securely access Azure Key Vault with Batch description: Learn how to programmatically access your credentials from Key Vault using Azure Batch. Previously updated : 08/25/2021 Last updated : 06/22/2022
To authenticate to Azure Key Vault from a Batch node, you need:
## Obtain a certificate
-If you don't already have a certificate, the easiest way to get one is to generate a self-signed certificate using the `makecert` command-line tool.
-
-You can typically find `makecert` in this path: `C:\Program Files (x86)\Windows Kits\10\bin\<arch>`. Open a command prompt as an administrator and navigate to `makecert` using the following example.
-
-```console
-cd C:\Program Files (x86)\Windows Kits\10\bin\x64
-```
-
-Next, use the `makecert` tool to create self-signed certificate files called `batchcertificate.cer` and `batchcertificate.pvk`. The common name (CN) used isn't important for this application, but it's helpful to make it something that tells you what the certificate is used for.
-
-```console
-makecert -sv batchcertificate.pvk -n "cn=batch.cert.mydomain.org" batchcertificate.cer -b 09/23/2019 -e 09/23/2019 -r -pe -a sha256 -len 2048
-```
-
-Batch requires a `.pfx` file. Use the [pvk2pfx](/windows-hardware/drivers/devtest/pvk2pfx) tool to convert the `.cer` and `.pvk` files created by `makecert` to a single `.pfx` file.
-
-```console
-pvk2pfx -pvk batchcertificate.pvk -spc batchcertificate.cer -pfx batchcertificate.pfx -po
-```
+If you don't already have a certificate, [use the PowerShell cmdlet `New-SelfSignedCertificate`](/powershell/module/pki/new-selfsignedcertificate) to make a new self-signed certificate.
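If you prefer to generate the certificate in .NET code instead, a minimal sketch using the built-in `CertificateRequest` API follows; the subject name, validity window, and password are illustrative:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class Program
{
    static void Main()
    {
        // Generate a 2048-bit RSA key pair for the self-signed certificate.
        using RSA rsa = RSA.Create(2048);
        var request = new CertificateRequest(
            "CN=batch.cert.mydomain.org",   // illustrative subject name
            rsa,
            HashAlgorithmName.SHA256,
            RSASignaturePadding.Pkcs1);

        // Self-sign with a one-year validity window.
        using X509Certificate2 cert = request.CreateSelfSigned(
            DateTimeOffset.UtcNow,
            DateTimeOffset.UtcNow.AddYears(1));

        // Export to the .pfx format used in the rest of this article.
        File.WriteAllBytes("batchcertificate.pfx",
            cert.Export(X509ContentType.Pfx, "<your-password>"));
    }
}
```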
## Create a service principal
batch Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-portal.md
Title: Azure Quickstart - Run your first Batch job in the Azure portal description: This quickstart shows how to use the Azure portal to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool. Previously updated : 05/25/2021 Last updated : 06/22/2022
Follow these steps to create a sample Batch account for test purposes. You need
1. Enter a value for **Account name**. This name must be unique within the Azure **Location** selected. It can contain only lowercase letters and numbers, and it must be between 3 and 24 characters long.
-1. Under **Storage account**, click **Select a storage account**, then select an existing storage account or create a new one.
+1. Optionally, under **Storage account**, you can specify a storage account. Click **Select a storage account**, then select an existing storage account or create a new one.
1. Leave the other settings as is. Select **Review + create**, then select **Create** to create the Batch account.
batch Tutorial Parallel Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-dotnet.md
Title: Tutorial - Run a parallel workload using the .NET API
description: Tutorial - Transcode media files in parallel with ffmpeg in Azure Batch using the Batch .NET client library ms.devlang: csharp Previously updated : 12/13/2021 Last updated : 06/22/2022
In this tutorial, you convert MP4 media files in parallel to MP3 format using th
* A Batch account and a linked Azure Storage account. To create these accounts, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
-* [Windows 64-bit version of ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08) (.zip). Download the zip file to your local computer. For this tutorial, you only need the zip file. You do not need to unzip the file or install it locally.
+* Download the appropriate version of ffmpeg for your use case to your local computer. This tutorial and the related sample app use the [Windows 64-bit version of ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08). For this tutorial, you only need the zip file. You do not need to unzip the file or install it locally.
## Sign in to Azure
private const string StorageAccountKey = "xxxxxxxxxxxxxxxxy4/xxxxxxxxxxxxxxxxfw
[!INCLUDE [batch-credentials-include](../../includes/batch-credentials-include.md)]
-Also, make sure that the ffmpeg application package reference in the solution matches the Id and version of the ffmpeg package that you uploaded to your Batch account.
+Also, make sure that the ffmpeg application package reference in the solution matches the identifier and version of the ffmpeg package that you uploaded to your Batch account. For example, `ffmpeg` and `4.3.1`.
```csharp const string appPackageId = "ffmpeg";
The sample creates an [OutputFile](/dotnet/api/microsoft.azure.batch.outputfile)
Then, the sample adds tasks to the job with the [AddTaskAsync](/dotnet/api/microsoft.azure.batch.joboperations.addtaskasync) method, which queues them to run on the compute nodes.
+Replace the executable's file path with the name of the version that you downloaded. This sample code uses the example `ffmpeg-4.3.1-2020-09-21-full_build`.
+ ```csharp // Create a collection to hold the tasks added to the job. List<CloudTask> tasks = new List<CloudTask>();
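// What follows is a sketch, not the verbatim sample: each task's command line
// runs ffmpeg.exe from inside the application package. Batch creates the
// AZ_BATCH_APP_PACKAGE_* environment variable for the package, and the folder
// name must match the zip you uploaded (here, the 4.3.1 full build named above).
// Assumes the sample's inputFiles collection and the appPackageId and
// appPackageVersion constants shown earlier.
for (int i = 0; i < inputFiles.Count; i++)
{
    string appPath = String.Format("%AZ_BATCH_APP_PACKAGE_{0}#{1}%", appPackageId, appPackageVersion);
    string inputFile = inputFiles[i].FilePath;
    string outputFile = Path.GetFileNameWithoutExtension(inputFile) + ".mp3";
    string taskCommandLine = String.Format(
        "cmd /c {0}\\ffmpeg-4.3.1-2020-09-21-full_build\\bin\\ffmpeg.exe -i {1} {2}",
        appPath, inputFile, outputFile);

    // Queue the task; the input file is staged onto the node as a resource file.
    tasks.Add(new CloudTask(String.Format("Task{0}", i), taskCommandLine)
    {
        ResourceFiles = new List<ResourceFile> { inputFiles[i] }
    });
}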
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
To use the Azure Key Vault VM extension, you need to have an Azure Active Direct
- If you are using RBAC preview, search for the name of the AAD app you created and assign it to the Key Vault Secrets User (preview) role. - If you are using vault access policies, then assign **Secret-Get** permissions to the AAD app you created. For more information, see [Assign access policies](../key-vault/general/assign-access-policy-portal.md)
-7. Install first version of the certificates created in the first step and the Key Vault VM extension using the ARM template as shown below:
+7. Install the first version of the certificates created in the first step and the Key Vault VM extension using the ARM template snippet for the `cloudService` resource, as shown below:
```json
+ {
+ "osProfile":
{
- "osProfile":{
- "secrets":[
- {
- "sourceVault":{
- "id":"[parameters('sourceVaultValue')]"
- },
- "vaultCertificates":[
- {
- "certificateUrl":"[parameters('bootstrpCertificateUrlValue')]"
- }
- ]
- }
- ]
- }{
- "name":"KVVMExtensionForPaaS",
- "properties":{
- "type":"KeyVaultForPaaS",
- "autoUpgradeMinorVersion":true,
- "typeHandlerVersion":"1.0",
- "publisher":"Microsoft.Azure.KeyVault",
- "settings":{
- "secretsManagementSettings":{
- "pollingIntervalInS":"3600",
- "certificateStoreName":"My",
- "certificateStoreLocation":"LocalMachine",
- "linkOnRenewal":false,
- "requireInitialSync":false,
- "observedCertificates":"[parameters('keyVaultObservedCertificates']"
- },
- "authenticationSettings":{
- "clientId":"Your AAD app ID",
- "clientCertificateSubjectName":"Your boot strap certificate subject name [Do not include the 'CN=' in the subject name]"
+ "secrets":
+ [
+ {
+ "sourceVault":
+ {
+ "id": "[parameters('sourceVaultValue')]"
+ },
+ "vaultCertificates":
+ [
+ {
+ "certificateUrl": "[parameters('bootstrpCertificateUrlValue')]"
+ }
+ ]
}
- }
- }
- }
+ ]
+ },
+ "extensionProfile":
+ {
+ "extensions":
+ [
+ {
+ "name": "KVVMExtensionForPaaS",
+ "properties":
+ {
+ "type": "KeyVaultForPaaS",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "1.0",
+ "publisher": "Microsoft.Azure.KeyVault",
+ "settings":
+ {
+ "secretsManagementSettings":
+ {
+ "pollingIntervalInS": "3600",
+ "certificateStoreName": "My",
+ "certificateStoreLocation": "LocalMachine",
+ "linkOnRenewal": false,
+ "requireInitialSync": false,
+                            "observedCertificates": "[parameters('keyVaultObservedCertificates')]"
+ },
+ "authenticationSettings":
+ {
+ "clientId": "Your AAD app ID",
+ "clientCertificateSubjectName": "Your boot strap certificate subject name [Do not include the 'CN=' in the subject name]"
+ }
+ }
+ }
+ }
+ ]
+ }
+ }
``` You might need to specify the certificate store for boot strap certificate in ServiceDefinition.csdef like below:
To use the Azure Key Vault VM extension, you need to have an Azure Active Direct
``` ## Next steps
-Further improve your deployment by [enabling monitoring in Cloud Services (extended support)](enable-alerts.md)
+Further improve your deployment by [enabling monitoring in Cloud Services (extended support)](enable-alerts.md)
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 5/26/2022 Last updated : 6/21/2022 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+>[!NOTE]
+>The June Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the June Guest OS. This list is subject to change.
++
+## June 2022 Guest OS
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-06 | [5014692] | Latest Cumulative Update (LCU) | 6.44 | Jun 14, 2022 |
+| Rel 22-06 | [5014678] | Latest Cumulative Update (LCU) | 7.12 | Jun 14, 2022 |
+| Rel 22-06 | [5014702] | Latest Cumulative Update (LCU) | 5.68 | Jun 14, 2022 |
+| Rel 22-06 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.45 | May 10, 2022 |
+| Rel 22-06 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.13 | May 10, 2022 |
+| Rel 22-06 | [5014026] | Servicing Stack update | 5.69 | May 10, 2022 |
+| Rel 22-06 | [4494175] | Microcode | 5.69 | Sep 1, 2020 |
+| Rel 22-06 | [4494174] | Microcode | 6.45 | Sep 1, 2020 |
+
+[5014692]: https://support.microsoft.com/kb/5014692
+[5014678]: https://support.microsoft.com/kb/5014678
+[5014702]: https://support.microsoft.com/kb/5014702
+[5013641]: https://support.microsoft.com/kb/5013641
+[5013630]: https://support.microsoft.com/kb/5013630
+[5014026]: https://support.microsoft.com/kb/5014026
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
## May 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 5/26/2022 Last updated : 6/21/2022 # Azure Guest OS releases and SDK compatibility matrix
The September Guest OS has released.
| | | | | WA-GUEST-OS-7.12_202205-01 | May 26, 2022 | Post 7.14 | | WA-GUEST-OS-7.11_202204-01 | April 30, 2022 | Post 7.13 |
-|~~WA-GUEST-OS-7.10_202203-01~| March 19, 2022 | May 26, 2022 |
+|~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-7.9_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-7.8_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-7.6_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| | | | | WA-GUEST-OS-6.44_202205-01 | May 26, 2022 | Post 6.46 | | WA-GUEST-OS-6.43_202204-01 | April 30, 2022 | Post 6.45 |
-|~~WA-GUEST-OS-6.42_202203-01~| March 19, 2022 | May 26, 2022 |
+|~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-6.41_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-6.40_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-6.38_202112-01~~| January 10, 2022 | March 2, 2022 |
The September Guest OS has released.
| | | | | WA-GUEST-OS-4.103_202205-01 | May 26, 2022 | Post 4.105 | | WA-GUEST-OS-4.102_202204-01 | April 30, 2022 | Post 4.104 |
-|~~WA-GUEST-OS-4.101_202203-01~| March 19, 2022 | May 26, 2022 |
+|~~WA-GUEST-OS-4.101_202203-01~~| March 19, 2022 | May 26, 2022 |
|~~WA-GUEST-OS-4.100_202202-01~~| March 2, 2022 | April 30, 2022 | |~~WA-GUEST-OS-4.99_202201-02~~| February 11, 2022 | March 19, 2022 | |~~WA-GUEST-OS-4.97_202112-01~~| January 10, 2022 | March 2, 2022 |
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-describing-images.md
Computer Vision can analyze an image and generate a human-readable phrase that d
At this time, English is the only supported language for image description.
+Try out the image captioning features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Image description example The following JSON response illustrates what the Analyze API returns when describing the example image based on its visual features.
cognitive-services Concept Detecting Adult Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-adult-content.md
Computer Vision can detect adult material in images so that developers can restr
> [!NOTE] > Much of this functionality is offered by the [Azure Content Moderator](../content-moderator/overview.md) service. See this alternative for solutions to more rigorous content moderation scenarios, such as text moderation and human review workflows.
+Try out the adult content detection features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Content flag definitions The "adult" classification contains several different categories:
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Image Analysis can detect human faces within an image and generate rectangle coo
> [!NOTE] > This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
+Try out the face detection features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Face detection examples The following example demonstrates the JSON response returned by Analyze API for an image containing a single human face.
cognitive-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-detection.md
# Face detection and attributes + This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data. You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/serv
Each detected face corresponds to a `faceRectangle` field in the response. This is a set of pixel coordinates for the left, top, width, and height of the detected face. Using these coordinates, you can get the location and size of the face. In the API response, faces are listed in size order from largest to smallest.
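For example, a minimal sketch with the Face .NET client library that prints each returned rectangle; the endpoint, key, and image URL are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class Program
{
    static async Task Main()
    {
        // Authenticate with your own Face resource endpoint and key.
        IFaceClient client = new FaceClient(new ApiKeyServiceClientCredentials("<your-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
        };

        // Detect faces in a remote image with the detection_03 model.
        IList<DetectedFace> faces = await client.Face.DetectWithUrlAsync(
            "https://example.com/photo.jpg",
            detectionModel: DetectionModel.Detection03);

        // Faces come back largest-first; print each bounding rectangle.
        foreach (DetectedFace face in faces)
        {
            FaceRectangle r = face.FaceRectangle;
            Console.WriteLine($"left={r.Left}, top={r.Top}, width={r.Width}, height={r.Height}");
        }
    }
}
```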
+Try out the capabilities of face detection quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Face ID The face ID is a unique identifier string for each detected face in an image. You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
The Detection_03 model currently has the most accurate landmark detection. The e
## Attributes + Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected: * **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
This article explains the concept of Face recognition, its related operations, a
Verification is one-to-one matching that takes two faces and returns whether they are the same face, and identification is one-to-many matching that takes a single face as input and returns a set of matching candidates. Face recognition is important in implementing the identity verification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
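As a small sketch (assuming an authenticated `IFaceClient` and two face IDs from earlier Detect calls), one-to-one verification with the .NET library looks like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

static class VerifyExample
{
    // Verification: do two previously detected faces belong to the same person?
    public static async Task VerifyAsync(IFaceClient client, Guid faceId1, Guid faceId2)
    {
        VerifyResult result = await client.Face.VerifyFaceToFaceAsync(faceId1, faceId2);
        Console.WriteLine($"Identical: {result.IsIdentical}, confidence: {result.Confidence}");
    }
}
```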
+Try out the capabilities of face recognition quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Related data structures The recognition operations use mainly the following data structures. These objects are stored in the cloud and can be referenced by their ID strings. ID strings are always unique within a subscription, but name fields may be duplicated.
cognitive-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-object-detection.md
Object detection is similar to [tagging](concept-tagging-images.md), but the API
The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.
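A short sketch with the Image Analysis .NET client library illustrates the difference: detected objects carry bounding rectangles, while tags do not. The endpoint, key, and image URL are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

class Program
{
    static async Task Main()
    {
        var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<your-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
        };

        // Request objects (localized) and tags (not localized) in one call.
        ImageAnalysis analysis = await client.AnalyzeImageAsync(
            "https://example.com/photo.jpg",
            visualFeatures: new List<VisualFeatureTypes?>
            {
                VisualFeatureTypes.Objects,
                VisualFeatureTypes.Tags
            });

        foreach (DetectedObject obj in analysis.Objects)
        {
            BoundingRect r = obj.Rectangle;
            Console.WriteLine($"{obj.ObjectProperty} ({obj.Confidence:P0}) at x={r.X}, y={r.Y}, w={r.W}, h={r.H}");
        }

        foreach (ImageTag tag in analysis.Tags)
        {
            Console.WriteLine($"tag: {tag.Name} ({tag.Confidence:P0})");
        }
    }
}
```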
+Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Object detection example The following JSON response illustrates what the Analyze API returns when detecting objects in the example image.
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-tagging-images.md
Image Analysis can return content tags for thousands of recognizable objects, li
After you upload an image or specify an image URL, the Analyze API can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
+Try out the image tagging features quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ## Image tagging example The following JSON response illustrates what Computer Vision returns when tagging visual features detected in the example image.
cognitive-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/add-faces.md
# Add faces to a PersonGroup + This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure Cognitive Services Face .NET client library. ## Step 1: Initialization
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/find-similar-faces.md
# Find similar faces + The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image. This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
cognitive-services Identity Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-analyze-video.md
# Example: How to Analyze Videos in Real-time + This guide will demonstrate how to perform near-real-time analysis on frames taken from a live video stream. The basic components in such a system are: - Acquire frames from a video source
cognitive-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/identity-detect-faces.md
# Call the Detect API ++ This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs. The code snippets in this guide are written in C# by using the Azure Cognitive Services Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/specify-recognition-model.md
# Specify a face recognition model + This guide shows you how to specify a face recognition model for face detection, identification and similarity search using the Azure Face service. The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face recognition model they'd like to use. They can choose the model that best fits their use case.
cognitive-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-large-scale.md
# Example: Use the large-scale feature + This guide is an advanced article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide. LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-persondirectory.md
# Use the PersonDirectory structure + To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure, which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
You can use Computer Vision Spatial Analysis to ingest streaming video from cameras, extract insights, and generate events to be used by other systems. The service detects the presence and movements of people in video. It can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
+Try out the capabilities of Spatial Analysis quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ <!--This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. * The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
keywords: facial recognition, facial recognition software, facial analysis, face
> [!WARNING] > On June 11, 2020, Microsoft announced that it will not sell facial recognition technology to police departments in the United States until strong regulation, grounded in human rights, has been enacted. As such, customers may not use facial recognition features or functionality included in Azure Services, such as Face or Video Indexer, if a customer is, or is allowing use of such services by or for, a police department in the United States. When you create a new Face resource, you must acknowledge and agree in the Azure Portal that you will not use the service by or for a police department in the United States and that you have reviewed the Responsible AI documentation and will use this service in accordance with it. + The Azure Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identity verification, touchless access control, and face blurring for privacy.
+You can use the Face service through a client library SDK or by calling the REST API directly. Follow the quickstart to get started.
+
+> [!div class="nextstepaction"]
+> [Quickstart](quickstarts-sdk/identity-client-library.md)
+
+Or, you can try out the capabilities of Face service quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/identity-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. * The [how-to guides](./how-to/identity-detect-faces.md) contain instructions for using the service in more specific or customized ways.
Face detection is required as a first step in all the other scenarios. The Detec
Optionally, face detection can extract a set of face-related attributes, such as head pose, age, emotion, facial hair, and glasses. These attributes are general predictions, not actual classifications. Some attributes are useful to ensure that your application is getting high-quality face data when users add themselves to a Face service. For example, your application could advise users to take off their sunglasses if they're wearing sunglasses.
-> [!NOTE]
-> The face detection feature is also available through the [Computer Vision service](../computer-vision/overview.md). However, if you want to use other Face operations like Identify, Verify, Find Similar, or Face grouping, you should use this service instead.
For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
The following image shows an example of a database named `"myfriends"`. Each gro
After you create and train a group, you can do identification against the group with a new detected face. If the face is identified as a person in the group, the person object is returned.
+Try out the capabilities of face identification quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
+ ### Verification The verification operation answers the question, "Do these two faces belong to the same person?".
Verification is also a "one-to-one" matching of a face in an image to a single f
For more information about identity verification, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
+Try out the capabilities of face verification quickly and easily using Vision Studio.
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
## Find similar faces
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
keywords: computer vision, computer vision applications, computer vision service
The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.
-You can use Image Analysis through a client library SDK or by calling the [REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-g) to get started.
+You can use Image Analysis through a client library SDK or by calling the REST API directly. Follow the quickstart to get started.
> [!div class="nextstepaction"]
-> [Get started](quickstarts-sdk/image-analysis-client-library.md)
+> [Quickstart](quickstarts-sdk/image-analysis-client-library.md)
+
+Or, you can try out the capabilities of Image Analysis quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
This documentation contains the following types of articles: * The [quickstarts](./quickstarts-sdk/image-analysis-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
# What is Optical character recognition?
-Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents&mdash;invoices, bills, financial reports, articles, and more. Microsoft's OCR technologies support extracting printed text in [several languages](./language-support.md). Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started.
+Optical character recognition (OCR) allows you to extract printed or handwritten text from images, such as photos of street signs and products, as well as from documents&mdash;invoices, bills, financial reports, articles, and more. Microsoft's OCR technologies support extracting printed text in [several languages](./language-support.md).
+
+Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started with the REST API or a client SDK. Or, try out the capabilities of OCR quickly and easily in your browser using Vision Studio.
+
+> [!div class="nextstepaction"]
+> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
![OCR demos](./Images/ocr-demo.gif)
cognitive-services Overview Vision Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-vision-studio.md
+
+ Title: What is Vision Studio?
+
+description: Learn how to set up and use Vision Studio to test features of Azure Computer Vision on the web.
+ Last updated : 06/13/2022
+# What is Vision Studio?
+
+[Vision Studio](https://portal.vision.cognitive.azure.com/) is a set of UI-based tools that lets you explore, build, and integrate features from Azure Computer Vision.
+
+Vision Studio provides you with a platform to try several service features and sample their returned data in a quick, straightforward manner. Using Studio, you can start experimenting with the services and learning what they offer without needing to write any code. Then, use the available client libraries and REST APIs to get started embedding these services into your own applications.
+
+## Get started using Vision Studio
+
+To use Vision Studio, you'll need an Azure subscription and a resource for Cognitive Services for authentication. You can also use this resource to call the services in the try-it-out experiences. Follow these steps to get started.
+
+1. Create an Azure Subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/).
+
+1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com/). If it's your first time logging in, you'll see a popup window appear that prompts you to sign in to Azure and then choose or create a Vision resource. You have the option to skip this step and do it later.
+ :::image type="content" source="./Images/vision-studio-wizard-1.png" alt-text="Screenshot of Vision Studio startup wizard.":::
+
+1. Select **Choose resource**, then select an existing resource within your subscription. If you'd like to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
+
+ :::image type="content" source="./Images/vision-studio-wizard-2.png" alt-text="Screenshot of Vision Studio resource selection panel.":::
+
+ > [!TIP]
+ > * When you select a location for your Azure resource, choose one that's closest to you for lower latency.
+ > * If you use the free pricing tier, you can keep using the Vision service even after your Azure free trial or service credit expires.
+
+1. Select **Create resource**. Your resource will be created, and you'll be able to try the different features offered by Vision Studio.
+
+ :::image type="content" source="./Images/vision-studio-home-page.png" alt-text="Screenshot of Vision Studio home page.":::
+
+1. From here, you can select any of the different features offered by Vision Studio. Some of them are outlined in the service quickstarts:
+ * [OCR quickstart](quickstarts-sdk/client-library.md?pivots=vision-studio)
+ * [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library.md?pivots=vision-studio)
+ * [Face quickstart](quickstarts-sdk/identity-client-library.md?pivots=vision-studio)
+
+## Pre-configured features
+
+Computer Vision offers multiple features that use prebuilt, pre-configured models for performing various tasks, such as: understanding how people move through a space, detecting faces in images, and extracting text from images. See the [Computer Vision overview](overview.md) for a list of features offered by the Vision service.
+
+Each of these features has one or more try-it-out experiences in Vision Studio that allow you to upload images and receive JSON and text responses. These experiences help you quickly test the features using a no-code approach.
+
+## Cleaning up resources
+
+If you want to remove a Cognitive Services resource after using Vision Studio, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can't delete your resource directly from Vision Studio, so use one of the following methods:
+* [Using the Azure portal](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#clean-up-resources)
+* [Using the Azure CLI](/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows#clean-up-resources)
+
+> [!TIP]
+> In Vision Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by selecting the Settings icon in the top-right corner of the Vision Studio screen.
+
+## Next steps
+
+* Go to [Vision Studio](https://portal.vision.cognitive.azure.com/) to begin using features offered by the service.
+* For more information on the features offered, see the [Azure Computer Vision overview](overview.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Azure's Computer Vision service gives you access to advanced algorithms that pro
Computer Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Cognitive Services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Computer Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
+## Getting started
+
+Use [Vision Studio](https://portal.vision.cognitive.azure.com/) to try out Computer Vision features quickly in your web browser.
+
+To get started building Computer Vision into your app, follow a quickstart.
+* [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md)
+* [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library.md)
+* [Quickstart: Spatial Analysis container](spatial-analysis-container.md)
+ ## Image requirements Computer Vision can analyze images that meet the following requirements:
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/client-library.md
Title: "Quickstart: Optical character recognition (OCR) client library or REST API"
+ Title: "Quickstart: Optical character recognition (OCR)"
description: Learn how to use Optical character recognition (OCR) in your application through a native client library in the language of your choice.
zone_pivot_groups: programming-languages-computer-vision
keywords: computer vision, computer vision service
-# Quickstart: Use the Optical character recognition (OCR) client library or REST API
+# Quickstart: Optical character recognition (OCR)
Get started with the Computer Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.
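To make the Read workflow concrete, here's a minimal sketch in Python that calls the REST API directly. It assumes the v3.2 `read/analyze` operation and polling of the returned `Operation-Location` URL; the resource endpoint, key, and image URL are placeholders, not values from this article.

```python
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

# Submit an image URL; the Read API extracts text asynchronously.
submit = requests.post(
    f"{endpoint}/vision/v3.2/read/analyze",
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/sample.png"},  # placeholder image URL
)
submit.raise_for_status()
operation_url = submit.headers["Operation-Location"]

# Poll until the asynchronous operation finishes.
while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

# Print each recognized line of text as a structured string.
for page in result.get("analyzeResult", {}).get("readResults", []):
    for line in page["lines"]:
        print(line["text"])
```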
Get started with the Computer Vision Read REST API or client libraries. The Read
[!INCLUDE [REST API quickstart](../includes/curl-quickstart.md)] ::: zone-end+++
cognitive-services Identity Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/identity-client-library.md
Title: 'Quickstart: Use the Face client library'
+ Title: 'Quickstart: Use the Face service'
-description: The Face API offers client libraries that makes it easy to detect, find similar, identify, verify and more.
+description: The Face API offers client libraries that make it easy to detect, find similar, identify, verify and more.
keywords: face search by image, facial recognition search, facial recognition, face recognition app
-# Quickstart: Use the Face client library
+# Quickstart: Use the Face service
+ ::: zone pivot="programming-language-csharp"
keywords: face search by image, facial recognition search, facial recognition, f
[!INCLUDE [cURL quickstart](../includes/identity-curl-quickstart.md)] ::: zone-end+++
cognitive-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/quickstarts-sdk/image-analysis-client-library.md
Title: "Quickstart: Image Analysis client library or REST API"
+ Title: "Quickstart: Image Analysis"
description: Learn how to use Image Analysis in your application through a native client library in the language of your choice.
zone_pivot_groups: programming-languages-computer-vision
keywords: computer vision, computer vision service
-# Quickstart: Use the Image Analysis client library or REST API
+# Quickstart: Image Analysis
Get started with the Image Analysis REST API or client libraries. The Analyze Image service provides you with AI algorithms for processing images and returning information on their visual features. Follow these steps to install a package to your application and try out the sample code for a basic task.
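Likewise, here's a minimal sketch in Python of a direct REST call, assuming the v3.2 `analyze` operation; the endpoint, key, image URL, and the chosen `visualFeatures` are placeholders and examples rather than values from this article.

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

# Ask the service to describe and tag the image in a single call.
response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/sample.jpg"},  # placeholder image URL
)
response.raise_for_status()
analysis = response.json()

# Each requested visual feature comes back as its own top-level field.
print(analysis["description"]["captions"][0]["text"])  # best caption
print([tag["name"] for tag in analysis["tags"]])        # tag names
```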
Get started with the Image Analysis REST API or client libraries. The Analyze Im
[!INCLUDE [REST API quickstart](../includes/image-analysis-curl-quickstart.md)] ::: zone-end+++
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## June 2022
+
+### Vision Studio launch
+
+Vision Studio is a UI tool that lets you explore, build, and integrate features from Azure Cognitive Services for Vision into your applications.
+
+Vision Studio provides you with a platform to try several service features and see what they return in a visual manner. It also provides an easy-to-use experience for creating custom projects and models that work on your data. Using the studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
+
+### Face transparency documentation
+* The [transparency documentation](https://aka.ms/faceraidocs) provides guidance to help customers improve the accuracy and fairness of their systems. It covers incorporating meaningful human review to detect and resolve cases of misidentification or other failures, providing support to people who believe their results were incorrect, and identifying and addressing fluctuations in accuracy due to variations in operational conditions.
+
+### Retirement of sensitive attributes
+
+* We have retired facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.
+* Facial detection capabilities (including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box) remain generally available and don't require an application; a sketch of requesting these attributes follows this list.
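Here's a minimal sketch (not part of the original announcement) of requesting only those detection attributes through the Face detect REST API. The endpoint, key, and image URL are placeholders, `returnFaceId` is turned off because face identifiers now fall under Limited Access, and `detection_01` is assumed because it supports this attribute set.

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "returnFaceId": "false",           # identifiers require Limited Access approval
        "detectionModel": "detection_01",  # assumed; supports the attributes below
        "returnFaceAttributes": "blur,exposure,glasses,headPose,noise,occlusion",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/photo.jpg"},  # placeholder image URL
)
response.raise_for_status()

# Each detected face includes its bounding box and the requested attributes.
for face in response.json():
    print(face["faceRectangle"], face["faceAttributes"])
```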
+
+### Fairlearn package and Microsoft's Fairness Dashboard
+
+* [The open-source Fairlearn package and Microsoft's Fairness Dashboard](https://github.com/microsoft/responsible-ai-toolbox/tree/main/notebooks/cognitive-services-examples/face-verification) aim to support customers in measuring the fairness of Microsoft's facial verification algorithms on their own data, allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology.
+
+### Limited Access policy
+
+* As a part of aligning Face to the updated Responsible AI Standard, a new [Limited Access policy](https://aka.ms/AAh91ff) has been implemented for the Face API and Computer Vision. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. See details on Limited Access for Face [here](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context) and for Computer Vision [here](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context).
+ ## May 2022 ### OCR (Read) API model is generally available (GA)
See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-
### New Quality Attribute in Detection_01 and Detection_03 * To help system builders and their customers capture the high-quality images that are necessary for high-quality outputs from Face API, we're introducing a new quality attribute **QualityForRecognition** to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combination of detection models `detection_01` or `detection_03` and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment, and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concept-face-detection.md), and see how to use it with the [quickstart](./quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio). - ## September 2021 ### OCR (Read) API Public Preview supports 122 languages
See the [Read API how-to guide](how-to/call-read-api.md) to learn more.
### New Face API detection model
-* The new Detection 03 model is the most accurate detection model currently available. If you're a new a customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
+* The new Detection 03 model is the most accurate detection model currently available. If you're a new customer, we recommend using this model. Detection 03 improves both recall and precision on smaller faces found within images (64x64 pixels). Additional improvements include an overall reduction in false positives and improved detection on rotated face orientations. Combining Detection 03 with the new Recognition 04 model will provide improved recognition accuracy as well. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details.
### New detectable Face attributes * The `faceMask` attribute is available with the latest Detection 03 model, along with the additional attribute `"noseAndMouthCovered"` which detects whether the face mask is worn as intended, covering both the nose and mouth. To use the latest mask detection capability, users need to specify the detection model in the API request: assign the model version with the _detectionModel_ parameter to `detection_03`. See [Specify a face detection model](./how-to/specify-detection-model.md) for more details. ### New Face API Recognition Model
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
Previously updated : 01/16/2020 Last updated : 06/22/2022
-zone_pivot_groups: programming-languages-set-two
+zone_pivot_groups: programming-languages-set-three
# Configure OpenSSL for Linux
-With the Speech SDK version 1.19.0 and higher, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version. In previous versions, OpenSSL is statically linked to the core library of the SDK.
+With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version.
+
+> [!NOTE]
+> This article is only applicable where the Speech SDK is [supported on Linux](speech-sdk.md#supported-languages).
To ensure connectivity, verify that OpenSSL certificates have been installed in your system. Run a command: ```bash
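# Print the configured OpenSSL directory (hedged: this is the standard command
# whose OPENSSLDIR output is shown below)
openssl version -d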
The output on Ubuntu/Debian based systems should be:
OPENSSLDIR: "/usr/lib/ssl" ```
-Check whether there is `certs` subdirectory under OPENSSLDIR. In the example above, it would be `/usr/lib/ssl/certs`.
+Check whether there's a `certs` subdirectory under OPENSSLDIR. In the example above, it would be `/usr/lib/ssl/certs`.
-* If there is `/usr/lib/ssl/certs` and it contains many individual certificate files (with `.crt` or `.pem` extension), there is no need for further actions.
+* If `/usr/lib/ssl/certs` exists and contains many individual certificate files (with a `.crt` or `.pem` extension), there's no need for further action.
-* If OPENSSLDIR is something else than `/usr/lib/ssl` and/or there is a single certificate bundle file instead of multiple individual files, you need to set an appropriate SSL environment variable to indicate where the certificates can be found.
+* If OPENSSLDIR is something other than `/usr/lib/ssl` or there's a single certificate bundle file instead of multiple individual files, you need to set an appropriate SSL environment variable to indicate where the certificates can be found.
## Examples
+Here are examples of the environment variables to set for different OPENSSLDIR configurations.
-- OPENSSLDIR is `/opt/ssl`. There is `certs` subdirectory with many `.crt` or `.pem` files.
-Set environment variable `SSL_CERT_DIR` to point at `/opt/ssl/certs` before running a program that uses the Speech SDK. For example:
+- OPENSSLDIR is `/opt/ssl`. There's a `certs` subdirectory with many `.crt` or `.pem` files.
+Set the environment variable `SSL_CERT_DIR` to point at `/opt/ssl/certs` before using the Speech SDK. For example:
```bash export SSL_CERT_DIR=/opt/ssl/certs ``` -- OPENSSLDIR is `/etc/pki/tls` (like on RHEL/CentOS based systems). There is `certs` subdirectory with a certificate bundle file, for example `ca-bundle.crt`.
-Set environment variable `SSL_CERT_FILE` to point at that file before running a program that uses the Speech SDK. For example:
+- OPENSSLDIR is `/etc/pki/tls` (like on RHEL/CentOS based systems). There's a `certs` subdirectory with a certificate bundle file, for example `ca-bundle.crt`.
+Set the environment variable `SSL_CERT_FILE` to point at that file before using the Speech SDK. For example:
```bash export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt ``` ## Certificate revocation checks
-When the Speech SDK connects to the Speech Service, it verifies that the Transport Layer Security (TLS) certificate reported by the remote endpoint is trusted and has not been revoked. This provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
+When the Speech SDK connects to the Speech Service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](../../security/fundamentals/tls-certificate-changes.md).
-If a destination posing as the Speech Service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. Because the authenticity of a reported certificate cannot be checked without an updated CRL, the Speech SDK will by default also treat a failure to download a CRL from an Azure CA location as an error.
+If a destination posing as the Speech Service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
-### Large CRL files (>10MB)
+### Large CRL files (>10 MB)
-One cause of CRL-related failures is the use of particularly large CRL files. This is typically only applicable to special environments with extended CA chains and standard, public endpoints should not encounter this class of issue.
+One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.
-The default maximum CRL size used by the Speech SDK (10MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech Service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15MB.
+The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech Service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15 MB.
::: zone pivot="programming-language-csharp"
config.SetProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000");
::: zone pivot="programming-language-cpp"
-```C++
+```cpp
config->SetProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000"); ```
config.setProperty("CONFIG_MAX_CRL_SIZE_KB"", "15000");
::: zone pivot="programming-language-python"
-```Python
+```python
speech_config.set_property_by_name("CONFIG_MAX_CRL_SIZE_KB"", "15000") ``` ::: zone-end
-```ObjectiveC
-[config setPropertyTo:@"15000" byName:"CONFIG_MAX_CRL_SIZE_KB"];
+```go
+speechConfig.properties.SetPropertyByString("CONFIG_MAX_CRL_SIZE_KB", "15000")
``` ::: zone-end ### Bypassing or ignoring CRL failures
-If an environment cannot be configured to access an Azure CA location, the Speech SDK will never be able to retrieve an updated CRL. You can configure the SDK either to continue and log download failures or to bypass all CRL checks.
+If an environment can't be configured to access an Azure CA location, the Speech SDK will never be able to retrieve an updated CRL. You can configure the SDK either to continue and log download failures or to bypass all CRL checks.
> [!WARNING] > CRL checks are a security measure and bypassing them increases susceptibility to attacks. They should not be bypassed without thorough consideration of the security implications and alternative mechanisms for protecting against the attack vectors that CRL checks mitigate.
-To continue with the connection when a CRL cannot be retrieved, set the property `"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"` to `"true"`. An attempt will still be made to retrieve a CRL and failures will still be emitted in logs, but connection attempts will be allowed to continue.
+To continue with the connection when a CRL can't be retrieved, set the property `"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"` to `"true"`. An attempt will still be made to retrieve a CRL and failures will still be emitted in logs, but connection attempts will be allowed to continue.
::: zone pivot="programming-language-csharp"
config.SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
::: zone pivot="programming-language-cpp"
-```C++
+```cpp
config->SetProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true"); ```
config.setProperty("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true");
::: zone pivot="programming-language-python"
-```Python
+```python
speech_config.set_property_by_name("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true") ``` ::: zone-end +
+```go
-```ObjectiveC
-[config setPropertyTo:@"true" byName:"OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE"];
+speechConfig.properties.SetPropertyByString("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FAILURE", "true")
``` ::: zone-end
-To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech Service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS certificate.
+To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech Service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
::: zone pivot="programming-language-csharp"
config.SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
::: zone pivot="programming-language-cpp"
-```C++
+```cpp
config->SetProperty("OPENSSL_DISABLE_CRL_CHECK", "true"); ```
config.setProperty("OPENSSL_DISABLE_CRL_CHECK", "true");
::: zone pivot="programming-language-python"
-```Python
+```python
speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true") ``` ::: zone-end
-```ObjectiveC
-[config setPropertyTo:@"true" byName:"OPENSSL_DISABLE_CRL_CHECK"];
+```go
+speechConfig.properties.SetPropertyByString("OPENSSL_DISABLE_CRL_CHECK", "true")
``` ::: zone-end
speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
By default, the Speech SDK will cache a successfully downloaded CRL on disk to improve the initial latency of future connections. When no cached CRL is present or when the cached CRL is expired, a new list will be downloaded.
-Some Linux distributions do not have a `TMP` or `TMPDIR` environment variable defined. This will prevent the Speech SDK from caching downloaded CRLs and cause it to download a new CRL upon every connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory.](https://help.ubuntu.com/community/EnvironmentVariables).
+Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK won't cache downloaded CRLs. Without a `TMP` or `TMPDIR` environment variable defined, the Speech SDK will download a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory](https://help.ubuntu.com/community/EnvironmentVariables).
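As an illustration (our assumption, not from the article), you can also define `TMPDIR` from inside a Python program, as long as it's set before the Speech SDK opens its first connection; exporting the variable in your shell, as the linked guide describes, is equivalent.

```python
# A minimal sketch: define TMPDIR before the Speech SDK connects so that
# downloaded CRLs can be cached; equivalent to `export TMPDIR=/tmp` in the shell.
import os

os.environ.setdefault("TMPDIR", "/tmp")  # set before any SDK connection is made

import azure.cognitiveservices.speech as speechsdk  # import after TMPDIR is set
```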
## Next steps
-> [!div class="nextstepaction"]
-> [About the Speech SDK](speech-sdk.md)
+- [Speech SDK overview](speech-sdk.md)
+- [Install the Speech SDK](quickstarts/setup-platform.md)
cognitive-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-rhel-centos-7.md
export LD_LIBRARY_PATH=/path/to/extracted/SpeechSDK-Linux-<version>/lib/centos7-
``` > [!NOTE]
-> Starting with the Speech SDK 1.19.0 release, the Linux .tar package contains specific libraries for RHEL/CentOS 7. These are in `lib/centos7-x64` as shown in the environment setting example for `LD_LIBRARY_PATH` above. Speech SDK libraries in `lib/x64` are for all the other supported Linux x64 distributions (including RHEL/CentOS 8) and don't work on RHEL/CentOS 7.
+> The Linux .tar package contains specific libraries for RHEL/CentOS 7. These are in `lib/centos7-x64` as shown in the environment setting example for `LD_LIBRARY_PATH` above. Speech SDK libraries in `lib/x64` are for all the other supported Linux x64 distributions (including RHEL/CentOS 8) and don't work on RHEL/CentOS 7.
## Next steps
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
You'll also need:
## Step 2: Create a Visual Studio project
+Create a Visual Studio project for UWP development and [install the Speech SDK](quickstarts/setup-platform.md?pivots=programming-language-csharp&tabs=uwp).
## Step 3: Add sample code
cognitive-services How To Migrate From Bing Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md
The Speech service doesn't offer a global endpoint. Determine if your applicatio
To get started with the Speech SDK: 1. Download the [Speech SDK](speech-sdk.md).
-1. Work through the Speech service [quickstart guides](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet) and [tutorials](how-to-recognize-intents-from-speech-csharp.md). Also look at the [code samples](./speech-sdk.md#sample-source-code) to get experience with the new APIs.
+1. Work through the Speech service [quickstart guides](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet) and [tutorials](how-to-recognize-intents-from-speech-csharp.md).
1. Update your application to use the Speech service. ## Support
cognitive-services How To Recognize Intents From Speech Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-intents-from-speech-csharp.md
LUIS uses two kinds of keys:
For this guide, you need the prediction key type. This guide uses the example Home Automation LUIS app, which you can create by following the [Use prebuilt Home automation app](../luis/luis-get-started-create-app.md) quickstart. If you've created a LUIS app of your own, you can use it instead.
-When you create a LUIS app, LUIS automatically generates a authoring key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.
+When you create a LUIS app, LUIS automatically generates an authoring key so you can test the app using text queries. This key doesn't enable the Speech service integration and won't work with this guide. Create a LUIS resource in the Azure dashboard and assign it to the LUIS app. You can use the free subscription tier for this guide.
After you create the LUIS resource in the Azure dashboard, log into the [LUIS portal](https://www.luis.ai/home), choose your application on the **My Apps** page, then switch to the app's **Manage** page. Finally, select **Azure Resources** in the sidebar. On the **Azure Resources** page: Select the icon next to a key to copy it to the clipboard. (You may use either key.)
-## Create a speech project in Visual Studio
+## Create the project and add the workload
+To create a Visual Studio project for Windows development, you need to create the project, set up Visual Studio for .NET desktop development, install the Speech SDK, and choose the target architecture.
+
+To start, create the project in Visual Studio, and make sure that Visual Studio is set up for .NET desktop development:
+
+1. Open Visual Studio 2019.
+
+1. In the Start window, select **Create a new project**.
+
+1. In the **Create a new project** window, choose **Console App (.NET Framework)**, and then select **Next**.
+
+1. In the **Configure your new project** window, enter *helloworld* in **Project name**, choose or create the directory path in **Location**, and then select **Create**.
+
+1. From the Visual Studio menu bar, select **Tools** > **Get Tools and Features**, which opens Visual Studio Installer and displays the **Modifying** dialog box.
+
+1. Check whether the **.NET desktop development** workload is available. If the workload hasn't been installed, select the check box next to it, and then select **Modify** to start the installation. It may take a few minutes to download and install.
+
+ If the check box next to **.NET desktop development** is already selected, select **Close** to exit the dialog box.
+
+ ![Enable .NET desktop development](~/articles/cognitive-services/speech-service/media/sdk/vs-enable-net-desktop-workload.png)
+
+1. Close Visual Studio Installer.
+
+### Install the Speech SDK
+
+The next step is to install the [Speech SDK NuGet package](https://aka.ms/csspeech/nuget), so you can reference it in the code.
+
+1. In the Solution Explorer, right-click the **helloworld** project, and then select **Manage NuGet Packages** to show the NuGet Package Manager.
+
+ ![NuGet Package Manager](~/articles/cognitive-services/speech-service/media/sdk/vs-nuget-package-manager.png)
+
+1. In the upper-right corner, find the **Package Source** drop-down box, and make sure that **nuget.org** is selected.
+
+1. In the upper-left corner, select **Browse**.
+
+1. In the search box, type *Microsoft.CognitiveServices.Speech* and select **Enter**.
+
+1. From the search results, select the **Microsoft.CognitiveServices.Speech** package, and then select **Install** to install the latest stable version.
+
+ ![Install Microsoft.CognitiveServices.Speech NuGet package](~/articles/cognitive-services/speech-service/media/sdk/qs-csharp-dotnet-windows-03-nuget-install-1.0.0.png)
+
+1. Accept all agreements and licenses to start the installation.
+
+ After the package is installed, a confirmation appears in the **Package Manager Console** window.
+
+### Choose the target architecture
+
+Now, to build and run the console application, create a platform configuration matching your computer's architecture.
+
+1. From the menu bar, select **Build** > **Configuration Manager**. The **Configuration Manager** dialog box appears.
+
+ ![Configuration Manager dialog box](~/articles/cognitive-services/speech-service/media/sdk/vs-configuration-manager-dialog-box.png)
+
+1. In the **Active solution platform** drop-down box, select **New**. The **New Solution Platform** dialog box appears.
+
+1. In the **Type or select the new platform** drop-down box:
+ - If you're running 64-bit Windows, select **x64**.
+ - If you're running 32-bit Windows, select **x86**.
+
+1. Select **OK** and then **Close**.
## Add the code
For example, if you say "Turn off the lights", pause, and then say "Turn on the
![Audio file LUIS recognition results](media/sdk/luis-results-2.png)
-Look for the code from this article in the **samples/csharp/sharedcontent/console** folder.
+The Speech SDK team actively maintains a large set of examples in an open-source repository. For the sample source code repository, see the [Azure Cognitive Services Speech SDK on GitHub](https://aka.ms/csspeech/samples). There are samples for C#, C++, Java, Python, Objective-C, Swift, JavaScript, UWP, Unity, and Xamarin. Look for the code from this article in the **samples/csharp/sharedcontent/console** folder.
## Next steps
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
You can transcribe meetings and other conversations with the ability to add, rem
* Only available in the following subscription regions: `centralus`, `eastasia`, `eastus`, `westeurope` * Requires a 7-mic circular multi-microphone array. The microphone array should meet [our specification](./speech-sdk-microphone.md).
-## Prerequisites
-
-This article assumes that you have an Azure Cognitive Services Speech resource key and region. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a new Azure Cognitive Services resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md?tabs=speech#create-a-new-azure-cognitive-services-resource).
- > [!NOTE] > The Speech SDK for C++, Java, Objective-C, and Swift support Conversation Transcription, but we haven't yet included a guide here.
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 06/13/2022 Last updated : 06/21/2022 zone_pivot_groups: programming-languages-speech-services-nomore-variant
For more information, see [supported languages](language-support.md#language-ide
Speech supports both at-start and continuous language identification (LID). > [!NOTE]
-> Continuous language identification is only supported with Speech SDKs in C#, C++, and Python.
+> Continuous language identification is only supported with Speech SDKs in C#, C++, Java ([for speech to text only](#speech-to-text)), and Python.
- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change. - Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID does not support changing languages within the same sentence. For example, if you are primarily speaking Spanish and insert some English words, it will not detect the language change per word.
You implement at-start LID or continuous LID by calling methods for [recognize o
You can choose to prioritize accuracy or latency with language identification. > [!NOTE]
-> Latency is prioritized by default with the Speech SDK. You can choose to prioritize accuracy or latency with the Speech SDKs for C#, C++, and Python.
+> Latency is prioritized by default with the Speech SDK. You can choose to prioritize accuracy or latency with the Speech SDKs for C#, C++, Java ([for speech to text only](#speech-to-text)), and Python.
Prioritize `Latency` if you need a low-latency result such as during live streaming. Set the priority to `Accuracy` if the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning. Allowing the engine more time will improve language identification results. * **At-start:** With at-start LID in `Latency` mode the result is returned in less than 5 seconds. With at-start LID in `Accuracy` mode the result is returned within 30 seconds. You set the priority for at-start LID with the `SpeechServiceConnection_SingleLanguageIdPriority` property.
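For example, here's a minimal Python sketch of prioritizing accuracy for at-start LID, assuming the Python SDK exposes the `SpeechServiceConnection_SingleLanguageIdPriority` property ID named above (the key and region are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Prioritize accuracy for at-start language identification;
# the result may take up to ~30 seconds instead of under 5 seconds.
speech_config.set_property(
    property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority,
    value="Accuracy",
)
```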
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageId
``` ::: zone-end+ ::: zone pivot="programming-language-cpp" Here is an example of using continuous LID while still prioritizing latency.
speechConfig->SetProperty(PropertyId::SpeechServiceConnection_ContinuousLanguage
``` ::: zone-end+
+Here is an example of using continuous LID while still prioritizing latency.
+
+```java
+speechConfig.setProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+```
+++ ::: zone pivot="programming-language-python" Here is an example of using continuous LID while still prioritizing latency.
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get(); ``` +
+```java
+// Recognize once with At-start LID
+SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
+
+// Start and stop continuous recognition with At-start LID
+recognizer.startContinuousRecognitionAsync().get();
+recognizer.stopContinuousRecognitionAsync().get();
+
+// Start and stop continuous recognition with Continuous LID
+speechConfig.setProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
+recognizer.startContinuousRecognitionAsync().get();
+recognizer.stopContinuousRecognitionAsync().get();
+```
+ ::: zone-end ::: zone pivot="programming-language-python"
recognizer->StopContinuousRecognitionAsync().get();
result = recognizer.recognize_once() # Start and stop continuous recognition with At-start LID
-source_language_recognizer.start_continuous_recognition()
-source_language_recognizer.stop_continuous_recognition()
+recognizer.start_continuous_recognition()
+recognizer.stop_continuous_recognition()
# Start and stop continuous recognition with Continuous LID speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, value='Latency')
-source_language_recognizer.start_continuous_recognition()
-source_language_recognizer.stop_continuous_recognition()
+recognizer.start_continuous_recognition()
+recognizer.stop_continuous_recognition()
``` ::: zone-end
See more examples of standalone language identification on [GitHub](https://gith
You use Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech-to-text overview](speech-to-text.md). > [!NOTE]
-> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, and Python.
+> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, and Python.
> Currently for speech-to-text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in code examples. In a future SDK release you won't need to set it. ::: zone pivot="programming-language-csharp"
auto detectedLanguage = autoDetectSourceLanguageResult->Language;
:::code language="cpp" source="~/samples-cognitive-services-speech-sdk/samples/cpp/windows/console/samples/speech_recognition_samples.cpp" id="SpeechContinuousRecognitionAndLanguageIdWithMultiLingualFile"::: ++ ::: zone-end ::: zone pivot="programming-language-java" See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java).
+### [Recognize once](#tab/once)
+ ```java AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE"));
autoDetectSourceLanguageConfig.close();
audioConfig.close(); result.close(); ```+
+### [Continuous recognition](#tab/continuous)
++ + ::: zone-end ::: zone pivot="programming-language-python"
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
To improve accuracy, customization is available for some languages and base mode
| Language | Locale (BCP-47) | |--|--| | Afrikaans (South Africa) | `af-ZA` |
+| Albanian (Albania) | `sq-AL` |
| Amharic (Ethiopia) | `am-ET` | | Arabic (Algeria) | `ar-DZ` | | Arabic (Bahrain), modern standard | `ar-BH` |
To improve accuracy, customization is available for some languages and base mode
| Arabic (Tunisia) | `ar-TN` | | Arabic (United Arab Emirates) | `ar-AE` | | Arabic (Yemen) | `ar-YE` |
+| Armenian (Armenia) | `hy-AM` |
+| Azerbaijani (Azerbaijan) | `az-AZ` |
+| Basque (Spain) | `eu-ES` |
| Bengali (India) | `bn-IN` | | Bulgarian (Bulgaria) | `bg-BG` | | Burmese (Myanmar) | `my-MM` |
To improve accuracy, customization is available for some languages and base mode
| French (Canada) | `fr-CA` | | French (France) | `fr-FR` | | French (Switzerland) | `fr-CH` |
+| Galician (Spain) | `gl-ES` |
+| Georgian (Georgia) | `ka-GE` |
| German (Austria) | `de-AT` | | German (Germany) | `de-DE` | | German (Switzerland) | `de-CH` |
To improve accuracy, customization is available for some languages and base mode
| Indonesian (Indonesia) | `id-ID` | | Irish (Ireland) | `ga-IE` | | Italian (Italy) | `it-IT` |
+| Italian (Switzerland) | `it-CH` |
| Japanese (Japan) | `ja-JP` | | Javanese (Indonesia) | `jv-ID` | | Kannada (India) | `kn-IN` |
+| Kazakh (Kazakhstan) | `kk-KZ` |
| Khmer (Cambodia) | `km-KH` | | Korean (Korea) | `ko-KR` | | Lao (Laos) | `lo-LA` |
To improve accuracy, customization is available for some languages and base mode
| Malay (Malaysia) | `ms-MY` | | Maltese (Malta) | `mt-MT` | | Marathi (India) | `mr-IN` |
+| Mongolian (Mongolia) | `mn-MN` |
+| Nepali (Nepal) | `ne-NP` |
| Norwegian (Bokmål, Norway) | `nb-NO` | | Persian (Iran) | `fa-IR` | | Polish (Poland) | `pl-PL` |
The following table lists the prebuilt neural voices supported in each language.
|||||| | Afrikaans (South Africa) | `af-ZA` | Female | `af-ZA-AdriNeural` | General | | Afrikaans (South Africa) | `af-ZA` | Male | `af-ZA-WillemNeural` | General |
+| Albanian (Albania) | `sq-AL` | Female | `sq-AL-AnilaNeural` <sup>New</sup> | General |
+| Albanian (Albania) | `sq-AL` | Male | `sq-AL-IlirNeural` <sup>New</sup> | General |
| Amharic (Ethiopia) | `am-ET` | Female | `am-ET-MekdesNeural` | General | | Amharic (Ethiopia) | `am-ET` | Male | `am-ET-AmehaNeural` | General | | Arabic (Algeria) | `ar-DZ` | Female | `ar-DZ-AminaNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| Arabic (Jordan) | `ar-JO` | Male | `ar-JO-TaimNeural` | General | | Arabic (Kuwait) | `ar-KW` | Female | `ar-KW-NouraNeural` | General | | Arabic (Kuwait) | `ar-KW` | Male | `ar-KW-FahedNeural` | General |
+| Arabic (Lebanon) | `ar-LB` | Female | `ar-LB-LaylaNeural` <sup>New</sup> | General |
+| Arabic (Lebanon) | `ar-LB` | Male | `ar-LB-RamiNeural` <sup>New</sup> | General |
| Arabic (Libya) | `ar-LY` | Female | `ar-LY-ImanNeural` | General | | Arabic (Libya) | `ar-LY` | Male | `ar-LY-OmarNeural` | General | | Arabic (Morocco) | `ar-MA` | Female | `ar-MA-MounaNeural` | General | | Arabic (Morocco) | `ar-MA` | Male | `ar-MA-JamalNeural` | General |
+| Arabic (Oman) | `ar-OM` | Female | `ar-OM-AyshaNeural` <sup>New</sup> | General |
+| Arabic (Oman) | `ar-OM` | Male | `ar-OM-AbdullahNeural` <sup>New</sup> | General |
| Arabic (Qatar) | `ar-QA` | Female | `ar-QA-AmalNeural` | General | | Arabic (Qatar) | `ar-QA` | Male | `ar-QA-MoazNeural` | General | | Arabic (Saudi Arabia) | `ar-SA` | Female | `ar-SA-ZariyahNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| Arabic (United Arab Emirates) | `ar-AE` | Male | `ar-AE-HamdanNeural` | General | | Arabic (Yemen) | `ar-YE` | Female | `ar-YE-MaryamNeural` | General | | Arabic (Yemen) | `ar-YE` | Male | `ar-YE-SalehNeural` | General |
+| Azerbaijani (Azerbaijan) | `az-AZ` | Female | `az-AZ-BabekNeural` <sup>New</sup> | General |
+| Azerbaijani (Azerbaijan) | `az-AZ` | Male | `az-AZ-BanuNeural` <sup>New</sup> | General |
| Bangla (Bangladesh) | `bn-BD` | Female | `bn-BD-NabanitaNeural` | General | | Bangla (Bangladesh) | `bn-BD` | Male | `bn-BD-PradeepNeural` | General |
-| Bengali (India) | `bn-IN` | Female | `bn-IN-TanishaaNeural` <sup>New</sup> | General |
-| Bengali (India) | `bn-IN` | Male | `bn-IN-BashkarNeural` <sup>New</sup> | General |
+| Bengali (India) | `bn-IN` | Female | `bn-IN-TanishaaNeural` | General |
+| Bengali (India) | `bn-IN` | Male | `bn-IN-BashkarNeural` | General |
+| Bosnian (Bosnia and Herzegovina) | `bs-BA` | Female | `bs-BA-VesnaNeural` <sup>New</sup> | General |
+| Bosnian (Bosnia and Herzegovina) | `bs-BA` | Male | `bs-BA-GoranNeural` <sup>New</sup> | General |
| Bulgarian (Bulgaria) | `bg-BG` | Female | `bg-BG-KalinaNeural` | General | | Bulgarian (Bulgaria) | `bg-BG` | Male | `bg-BG-BorislavNeural` | General | | Burmese (Myanmar) | `my-MM` | Female | `my-MM-NilarNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| English (South Africa) | `en-ZA` | Male | `en-ZA-LukeNeural` | General | | English (Tanzania) | `en-TZ` | Female | `en-TZ-ImaniNeural` | General | | English (Tanzania) | `en-TZ` | Male | `en-TZ-ElimuNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` | General |
| English (United Kingdom) | `en-GB` | Female | `en-GB-LibbyNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` | General, child voice |
| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` <sup>Retired on 30 October 2021, see below</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` | General |
| English (United Kingdom) | `en-GB` | Female | `en-GB-SoniaNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` | General |
| English (United States) | `en-US` | Female | `en-US-AmberNeural` | General | | English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Female | `en-US-AshleyNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| French (Canada) | `fr-CA` | Female | `fr-CA-SylvieNeural` | General | | French (Canada) | `fr-CA` | Male | `fr-CA-AntoineNeural` | General | | French (Canada) | `fr-CA` | Male | `fr-CA-JeanNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` | General |
| French (France) | `fr-FR` | Female | `fr-FR-DeniseNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) <sup>Public preview</sup> |
+| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` | General, child voice |
+| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` | General |
| French (France) | `fr-FR` | Male | `fr-FR-HenriNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` | General |
| French (Switzerland) | `fr-CH` | Female | `fr-CH-ArianeNeural` | General | | French (Switzerland) | `fr-CH` | Male | `fr-CH-FabriceNeural` | General | | Galician (Spain) | `gl-ES` | Female | `gl-ES-SabelaNeural` | General | | Galician (Spain) | `gl-ES` | Male | `gl-ES-RoiNeural` | General |
+| Georgian (Georgia) | `ka-GE` | Female | `ka-GE-EkaNeural` <sup>New</sup> | General |
+| Georgian (Georgia) | `ka-GE` | Male | `ka-GE-GiorgiNeural` <sup>New</sup> | General |
| German (Austria) | `de-AT` | Female | `de-AT-IngridNeural` | General | | German (Austria) | `de-AT` | Male | `de-AT-JonasNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` | General, child voice |
| German (Germany) | `de-DE` | Female | `de-DE-KatjaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` | General |
| German (Germany) | `de-DE` | Male | `de-DE-ConradNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` | General |
| German (Switzerland) | `de-CH` | Female | `de-CH-LeniNeural` | General | | German (Switzerland) | `de-CH` | Male | `de-CH-JanNeural` | General | | Greek (Greece) | `el-GR` | Female | `el-GR-AthinaNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| Hindi (India) | `hi-IN` | Male | `hi-IN-MadhurNeural` | General | | Hungarian (Hungary) | `hu-HU` | Female | `hu-HU-NoemiNeural` | General | | Hungarian (Hungary) | `hu-HU` | Male | `hu-HU-TamasNeural` | General |
-| Icelandic (Iceland) | `is-IS` | Female | `is-IS-GudrunNeural` <sup>New</sup> | General |
-| Icelandic (Iceland) | `is-IS` | Male | `is-IS-GunnarNeural` <sup>New</sup> | General |
+| Icelandic (Iceland) | `is-IS` | Female | `is-IS-GudrunNeural` | General |
+| Icelandic (Iceland) | `is-IS` | Male | `is-IS-GunnarNeural` | General |
| Indonesian (Indonesia) | `id-ID` | Female | `id-ID-GadisNeural` | General | | Indonesian (Indonesia) | `id-ID` | Male | `id-ID-ArdiNeural` | General | | Irish (Ireland) | `ga-IE` | Female | `ga-IE-OrlaNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| Japanese (Japan) | `ja-JP` | Male | `ja-JP-KeitaNeural` | General | | Javanese (Indonesia) | `jv-ID` | Female | `jv-ID-SitiNeural` | General | | Javanese (Indonesia) | `jv-ID` | Male | `jv-ID-DimasNeural` | General |
-| Kannada (India) | `kn-IN` | Female | `kn-IN-SapnaNeural` <sup>New</sup> | General |
-| Kannada (India) | `kn-IN` | Male | `kn-IN-GaganNeural` <sup>New</sup> | General |
-| Kazakh (Kazakhstan) | `kk-KZ` | Female | `kk-KZ-AigulNeural` <sup>New</sup> | General |
-| Kazakh (Kazakhstan) | `kk-KZ` | Male | `kk-KZ-DauletNeural` <sup>New</sup> | General |
+| Kannada (India) | `kn-IN` | Female | `kn-IN-SapnaNeural` | General |
+| Kannada (India) | `kn-IN` | Male | `kn-IN-GaganNeural` | General |
+| Kazakh (Kazakhstan) | `kk-KZ` | Female | `kk-KZ-AigulNeural` | General |
+| Kazakh (Kazakhstan) | `kk-KZ` | Male | `kk-KZ-DauletNeural` | General |
| Khmer (Cambodia) | `km-KH` | Female | `km-KH-SreymomNeural` | General | | Khmer (Cambodia) | `km-KH` | Male | `km-KH-PisethNeural` | General | | Korean (Korea) | `ko-KR` | Female | `ko-KR-SunHiNeural` | General | | Korean (Korea) | `ko-KR` | Male | `ko-KR-InJoonNeural` | General |
-| Lao (Laos) | `lo-LA` | Female | `lo-LA-KeomanyNeural` <sup>New</sup> | General |
-| Lao (Laos) | `lo-LA` | Male | `lo-LA-ChanthavongNeural` <sup>New</sup> | General |
+| Lao (Laos) | `lo-LA` | Female | `lo-LA-KeomanyNeural` | General |
+| Lao (Laos) | `lo-LA` | Male | `lo-LA-ChanthavongNeural` | General |
| Latvian (Latvia) | `lv-LV` | Female | `lv-LV-EveritaNeural` | General | | Latvian (Latvia) | `lv-LV` | Male | `lv-LV-NilsNeural` | General | | Lithuanian (Lithuania) | `lt-LT` | Female | `lt-LT-OnaNeural` | General | | Lithuanian (Lithuania) | `lt-LT` | Male | `lt-LT-LeonasNeural` | General |
-| Macedonian (Republic of North Macedonia) | `mk-MK` | Female | `mk-MK-MarijaNeural` <sup>New</sup> | General |
-| Macedonian (Republic of North Macedonia) | `mk-MK` | Male | `mk-MK-AleksandarNeural` <sup>New</sup> | General |
+| Macedonian (Republic of North Macedonia) | `mk-MK` | Female | `mk-MK-MarijaNeural` | General |
+| Macedonian (Republic of North Macedonia) | `mk-MK` | Male | `mk-MK-AleksandarNeural` | General |
| Malay (Malaysia) | `ms-MY` | Female | `ms-MY-YasminNeural` | General | | Malay (Malaysia) | `ms-MY` | Male | `ms-MY-OsmanNeural` | General |
-| Malayalam (India) | `ml-IN` | Female | `ml-IN-SobhanaNeural` <sup>New</sup> | General |
-| Malayalam (India) | `ml-IN` | Male | `ml-IN-MidhunNeural` <sup>New</sup> | General |
+| Malayalam (India) | `ml-IN` | Female | `ml-IN-SobhanaNeural` | General |
+| Malayalam (India) | `ml-IN` | Male | `ml-IN-MidhunNeural` | General |
| Maltese (Malta) | `mt-MT` | Female | `mt-MT-GraceNeural` | General | | Maltese (Malta) | `mt-MT` | Male | `mt-MT-JosephNeural` | General | | Marathi (India) | `mr-IN` | Female | `mr-IN-AarohiNeural` | General | | Marathi (India) | `mr-IN` | Male | `mr-IN-ManoharNeural` | General |
+| Mongolian (Mongolia) | `mn-MN` | Female | `mn-MN-YesuiNeural` <sup>New</sup> | General |
+| Mongolian (Mongolia) | `mn-MN` | Male | `mn-MN-BataaNeural` <sup>New</sup> | General |
+| Nepali (Nepal) | `ne-NP` | Female | `ne-NP-HemkalaNeural` <sup>New</sup> | General |
+| Nepali (Nepal) | `ne-NP` | Male | `ne-NP-SagarNeural` <sup>New</sup> | General |
| Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-IselinNeural` | General | | Norwegian (Bokmål, Norway) | `nb-NO` | Female | `nb-NO-PernilleNeural` | General | | Norwegian (Bokmål, Norway) | `nb-NO` | Male | `nb-NO-FinnNeural` | General |
-| Pashto (Afghanistan) | `ps-AF` | Female | `ps-AF-LatifaNeural` <sup>New</sup> | General |
-| Pashto (Afghanistan) | `ps-AF` | Male | `ps-AF-GulNawazNeural` <sup>New</sup> | General |
+| Pashto (Afghanistan) | `ps-AF` | Female | `ps-AF-LatifaNeural` | General |
+| Pashto (Afghanistan) | `ps-AF` | Male | `ps-AF-GulNawazNeural` | General |
| Persian (Iran) | `fa-IR` | Female | `fa-IR-DilaraNeural` | General | | Persian (Iran) | `fa-IR` | Male | `fa-IR-FaridNeural` | General | | Polish (Poland) | `pl-PL` | Female | `pl-PL-AgnieszkaNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| Russian (Russia) | `ru-RU` | Female | `ru-RU-DariyaNeural` | General | | Russian (Russia) | `ru-RU` | Female | `ru-RU-SvetlanaNeural` | General | | Russian (Russia) | `ru-RU` | Male | `ru-RU-DmitryNeural` | General |
-| Serbian (Serbia, Cyrillic) | `sr-RS` | Female | `sr-RS-SophieNeural` <sup>New</sup> | General |
-| Serbian (Serbia, Cyrillic) | `sr-RS` | Male | `sr-RS-NicholasNeural` <sup>New</sup> | General |
-| Sinhala (Sri Lanka) | `si-LK` | Female | `si-LK-ThiliniNeural` <sup>New</sup> | General |
-| Sinhala (Sri Lanka) | `si-LK` | Male | `si-LK-SameeraNeural` <sup>New</sup> | General |
+| Serbian (Serbia, Cyrillic) | `sr-RS` | Female | `sr-RS-SophieNeural` | General |
+| Serbian (Serbia, Cyrillic) | `sr-RS` | Male | `sr-RS-NicholasNeural` | General |
+| Sinhala (Sri Lanka) | `si-LK` | Female | `si-LK-ThiliniNeural` | General |
+| Sinhala (Sri Lanka) | `si-LK` | Male | `si-LK-SameeraNeural` | General |
| Slovak (Slovakia) | `sk-SK` | Female | `sk-SK-ViktoriaNeural` | General | | Slovak (Slovakia) | `sk-SK` | Male | `sk-SK-LukasNeural` | General | | Slovenian (Slovenia) | `sl-SI` | Female | `sl-SI-PetraNeural` | General |
The following table lists the prebuilt neural voices supported in each language.
| Swedish (Sweden) | `sv-SE` | Male | `sv-SE-MattiasNeural` | General | | Tamil (India) | `ta-IN` | Female | `ta-IN-PallaviNeural` | General | | Tamil (India) | `ta-IN` | Male | `ta-IN-ValluvarNeural` | General |
+| Tamil (Malaysia) | `ta-MY` | Female | `ta-MY-KaniNeural` <sup>New</sup> | General |
+| Tamil (Malaysia) | `ta-MY` | Male | `ta-MY-SuryaNeural` <sup>New</sup> | General |
| Tamil (Singapore) | `ta-SG` | Female | `ta-SG-VenbaNeural` | General | | Tamil (Singapore) | `ta-SG` | Male | `ta-SG-AnbuNeural` | General | | Tamil (Sri Lanka) | `ta-LK` | Female | `ta-LK-SaranyaNeural` | General |
The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support | |-||--|-||
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports event, 2 new multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, 1 new multiple style available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` | General, child voice |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` | General |
-| English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, 1 new style available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports events, 2 new styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN-LN` | Female | `zh-CN-LN-XiaobeiNeural` <sup>New</sup> | General, Liaoning accent |
+| Chinese (Mandarin, Simplified) | `zh-CN-SC` | Male | `zh-CN-SC-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent |
| English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| English (United States) | `en-US` | Male | `en-US-TonyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` | General |
-| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` | General, child voice |
-| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` | General |
-| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` | General |
-| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` | General, child voice |
-| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` | General |
-| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` | General |
+| Italian (Italy) | `it-IT` | Female | `it-IT-FabiolaNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Female | `it-IT-FiammaNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Female | `it-IT-ImeldaNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Female | `it-IT-IrmaNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Female | `it-IT-PalmiraNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Female | `it-IT-PierinaNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Male | `it-IT-BenignoNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Male | `it-IT-CalimeroNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Male | `it-IT-CataldoNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Male | `it-IT-GianniNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Male | `it-IT-LisandroNeural` <sup>New</sup> | General |
+| Italian (Italy) | `it-IT` | Male | `it-IT-RinaldoNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-BrendaNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-ElzaNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-GiovannaNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-LeilaNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-LeticiaNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-ManuelaNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Female | `pt-BR-YaraNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-DonatoNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-FabioNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-HumbertoNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-JulioNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-NicolauNeural` <sup>New</sup> | General |
+| Portuguese (Brazil) | `pt-BR` | Male | `pt-BR-ValerioNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-BeatrizNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-CandelaNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-CarlotaNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-LarissaNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-MarinaNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-NuriaNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Female | `es-MX-RenataNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-CecilioNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-GerardoNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-LibertoNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-LucianoNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-PelayoNeural` <sup>New</sup> | General |
+| Spanish (Mexico) | `es-MX` | Male | `es-MX-YagoNeural` <sup>New</sup> | General |
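For voices that list multiple styles, you can request a specific speaking style with the `mstts:express-as` element in SSML. The sketch below uses `en-US-JaneNeural` from the table above; the `cheerful` style name is an illustrative assumption, so check the styles supported by the voice you pick.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JaneNeural">
    <!-- "cheerful" is an assumed style name for illustration; use a style the voice supports. -->
    <mstts:express-as style="cheerful">
      That'd be just amazing!
    </mstts:express-as>
  </voice>
</speak>
```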
### Voice styles and roles
cognitive-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md
Title: 'Quickstart: Set up the development environment'
+ Title: Install the Speech SDK
-description: In this quickstart, you'll learn how to install the Speech SDK for your preferred combination of platform and programming language.
+description: In this quickstart, you'll learn how to install the Speech SDK for your preferred programming language.
Previously updated : 01/24/2022 Last updated : 06/10/2022
-zone_pivot_groups: programming-languages-speech-services-one-nomore
+zone_pivot_groups: programming-languages-speech-sdk
-# Quickstart: Set up the development environment
+# Install the Speech SDK
::: zone pivot="programming-language-csharp"-
-**Choose your target environment**
-
-# [.NET](#tab/dotnet)
--
-# [.NET Core](#tab/dotnetcore)
--
-# [Unity](#tab/unity)
--
-# [UWP](#tab/uwp)
--
-# [Xamarin](#tab/xaml)
--
-* * *
::: zone-end ::: zone pivot="programming-language-cpp"
-**Choose your target environment**
-
-# [Linux](#tab/linux)
--
-# [macOS](#tab/macos)
--
-# [Windows](#tab/windows)
--
-* * *
::: zone-end ::: zone pivot="programming-language-java"-
-**Choose your target environment**
-
-# [Java Runtime](#tab/jre)
--
-# [Android](#tab/android)
--
-* * *
::: zone-end -- ::: zone-end --
-* * *
::: zone-end -
-**Choose your target environment**
-
-#### [Browser-based](#tab/browser)
-
-#### [Node.js](#tab/nodejs)
+## Next steps
-* * *
+* [Speech-to-text quickstart](../get-started-speech-to-text.md)
+* [Text-to-speech quickstart](../get-started-text-to-speech.md)
+* [Speech translation quickstart](../get-started-speech-translation.md)
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
In the [Speech SDK](speech-sdk.md), you specify the region as a parameter (for e
The Speech service is available in these regions for speech-to-text, pronunciation assessment, text-to-speech, and translation: If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can use the [Speech-to-text REST API v3.0](rest-speech-to-text.md) to [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversa
Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table: > [!NOTE] > The language parameter must be appended to the URL to avoid receiving an HTTP error. For example, the language set to `US English` by using the `West US` endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
Before you use the speech-to-text REST API for short audio, consider the followi
> [!TIP] > For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md). ### Regions and endpoints
https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversa
Replace `<REGION_IDENTIFIER>` with the identifier that matches the region of your subscription from this table: > [!NOTE] > You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
The text-to-speech REST API supports neural text-to-speech voices, which support
Before you use the text-to-speech REST API, understand that you need to complete a token exchange as part of authentication to access the service. ## Get a list of voices
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-sdk.md
Previously updated : 01/16/2022 Last updated : 06/14/2022 # What is the Speech SDK?
-The Speech software development kit (SDK) exposes many of the Speech service capabilities you can use to develop speech-enabled applications. The Speech SDK is available in many programming languages and across all platforms.
+The Speech SDK (software development kit) exposes many of the [Speech service capabilities](overview.md), so you can develop speech-enabled applications. The Speech SDK is available [in many programming languages](quickstarts/setup-platform.md) and across platforms. It's ideal for both real-time and non-real-time scenarios, using local devices, files, Azure Blob Storage, and input and output streams.
+In some cases, you can't or shouldn't use the [Speech SDK](speech-sdk.md). In those cases, you can use REST APIs to access the Speech service. For example, use the [Speech-to-text REST API v3.0](rest-speech-to-text.md) for [batch transcription](batch-transcription.md) and [custom speech](custom-speech-overview.md).
-## Scenario capabilities
+## Supported languages
-The Speech SDK exposes many features from the Speech service, but not all of them. The capabilities of the Speech SDK are often associated with scenarios. The Speech SDK is ideal for both real-time and non-real-time scenarios, by using local devices, files, Azure Blob Storage, and even input and output streams. When a scenario can't be achieved with the Speech SDK, look for a REST API alternative.
+The Speech SDK supports the following languages and platforms:
-### Speech-to-text
+| Programming language | Reference | Platform support |
+|-|-|-|
+| [C#](quickstarts/setup-platform.md?pivots=programming-language-csharp) <sup>1</sup> | [.NET](/dotnet/api/overview/azure/cognitiveservices/client/speechservice) | Windows, Linux, macOS, Mono, Xamarin.iOS, Xamarin.Mac, Xamarin.Android, UWP, Unity |
+| [C++](quickstarts/setup-platform.md?pivots=programming-language-cpp) <sup>2</sup> | [C++](/cpp/cognitive-services/speech/) | Windows, Linux, macOS |
+| [Go](quickstarts/setup-platform.md?pivots=programming-language-go) | [Go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) | Linux |
+| [Java](quickstarts/setup-platform.md?pivots=programming-language-java) | [Java](/java/api/com.microsoft.cognitiveservices.speech) | Android, Windows, Linux, macOS |
+| [JavaScript](quickstarts/setup-platform.md?pivots=programming-language-javascript) | [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/) | Browser, Node.js |
+| [Objective-C](quickstarts/setup-platform.md?pivots=programming-language-objectivec) | [Objective-C](/objectivec/cognitive-services/speech/) | iOS, macOS |
+| [Python](quickstarts/setup-platform.md?pivots=programming-language-python) | [Python](/python/api/azure-cognitiveservices-speech/) | Windows, Linux, macOS |
+| [Swift](quickstarts/setup-platform.md?pivots=programming-language-swift) | [Objective-C](/objectivec/cognitive-services/speech/) <sup>3</sup> | iOS, macOS |
-[Speech-to-text](speech-to-text.md) transcribes audio streams to text that your applications, tools, or devices can consume or display. Speech-to-text is also known as *speech recognition*. Use speech-to-text with [Language Understanding (LUIS)](../luis/index.yml) to derive user intents from transcribed speech and act on voice commands. Use [speech translation](speech-translation.md) to translate speech input to a different language with a single call. For more information, see [Speech-to-text basics](./get-started-speech-to-text.md).
+<sup>1 C# code samples are available in the documentation. The Speech SDK for C# is based on .NET Standard 2.0, so it supports many platforms and programming languages. For more information, see [.NET implementation support](/dotnet/standard/net-standard#net-implementation-support).</sup>
+<sup>2 C isn't a supported programming language for the Speech SDK.</sup>
+<sup>3 The Speech SDK for Swift shares client libraries and reference documentation with the Speech SDK for Objective-C.</sup>
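To give a feel for the SDK surface across these languages, here's a minimal speech-to-text sketch in Python. It assumes the `azure-cognitiveservices-speech` package is installed (`pip install azure-cognitiveservices-speech`) and that the key and region placeholders are replaced with your own values.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# Recognize a single utterance from the default microphone and print the transcript.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)
```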
-**Speech recognition, phrase list, intent, translation, and on-premises containers** are available on the following platforms:
- - C++/Windows and Linux and macOS
- - C# (Framework and .NET Core)/Windows and UWP and Unity and Xamarin and Linux and macOS
- - Java (Jre and Android)
- - JavaScript (browser and NodeJS)
- - Python
- - Swift
- - Objective-C
- - Go (speech recognition only)
+## Code samples
-### Text-to-speech
+Speech SDK code samples are available in the documentation and GitHub.
-[Text-to-speech](text-to-speech.md) converts text into humanlike synthesized speech. Text-to-speech is also known as *speech synthesis*. The input text is either string literals or uses the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). For more information on standard or neural voices, see [Text-to-speech language and voice support](language-support.md#text-to-speech).
+### Docs samples
-**Text-to-speech** is available on the following platforms:
+At the top of documentation pages that contain samples, you can select from C#, C++, Go, Java, JavaScript, Objective-C, Python, and Swift.
- - C++/Windows and Linux and macOS
- - C# (Framework and .NET Core)/Windows and UWP and Unity and Xamarin and Linux and macOS
- - Java (Jre and Android)
- - JavaScript (browser and NodeJS)
- - Python
- - Swift
- - Objective-C
- - Go
- - Text-to-speech REST API can be used in every other situation
-### Voice assistants
+If a sample is not available in your preferred programming language, you can select another programming language to get started and learn about the concepts, or see the reference and samples linked from the beginning of the article.
-[Voice assistants](voice-assistants.md) using the Speech SDK enable you to create natural, humanlike conversational interfaces for your applications and experiences. The Speech SDK provides fast, reliable interaction that includes speech-to-text, text-to-speech, and conversational data on a single connection. Your implementation can use the Bot Framework's Direct Line Speech channel or the integrated Custom Commands service for task completion. Also, voice assistants can use custom voices created in the [Custom Voice portal](https://aka.ms/customvoice) to add a unique voice output experience.
+### GitHub samples
-**Voice assistant** support is available on the following platforms:
+In-depth samples are available in the [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/csspeech/samples) repository on GitHub. There are samples for C# (including UWP, Unity, and Xamarin), C++, Java, JavaScript (including browser and Node.js), Objective-C, Python, and Swift. Code samples for Go are available in the [Microsoft/cognitive-services-speech-sdk-go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) repository on GitHub.
- - C++/Windows and Linux and macOS
- - C#/Windows
- - Java/Windows and Linux and macOS and Android (Speech Devices SDK)
- - Go
+## Help options
-#### Keyword recognition
+The [Microsoft Q&A](/answers/topics/azure-speech.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-speech) forums are available for the developer community to ask and answer questions about the Speech service and other Azure Cognitive Services. Microsoft monitors the forums and replies to questions that the community hasn't yet answered. To make sure that we see your question, tag it with 'azure-speech'.
-The concept of [keyword recognition](custom-keyword-basics.md) is supported in the Speech SDK. Keyword recognition is the act of identifying a keyword in speech, followed by an action upon hearing the keyword. For example, "Hey Cortana" would activate the Cortana assistant.
+You can suggest an idea or report a bug by creating an issue on GitHub:
+- [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/GHspeechissues)
+- [Microsoft/cognitive-services-speech-sdk-go](https://github.com/microsoft/cognitive-services-speech-sdk-go/issues)
+- [Microsoft/cognitive-services-speech-sdk-js](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues)
-**Keyword recognition** is available on the following platforms:
-
- - C++/Windows and Linux
- - C#/Windows and Linux
- - Python/Windows and Linux
- - Java/Windows and Linux and Android
-
-### Meeting scenarios
-
-The Speech SDK is perfect for transcribing meeting scenarios, whether from a single device or multidevice conversation.
-
-#### Conversation transcription
-
-[Conversation transcription](conversation-transcription.md) enables real-time, and asynchronous, speech recognition, speaker identification, and sentence attribution to each speaker. This process is also known as *diarization*. It's perfect for transcribing in-person meetings with the ability to distinguish speakers.
-
-**Conversation transcription** is available on the following platforms:
-
- - C++/Windows and Linux
- - C# (Framework and .NET Core)/Windows and UWP and Linux
- - Java/Windows and Linux and Android
-
-#### Multidevice conversation
-
-With [multidevice conversation](multi-device-conversation.md), you can connect multiple devices or clients in a conversation to send speech-based or text-based messages, with easy support for transcription and translation.
-
-**Multidevice conversation** is available on the following platforms:
-
- - C++/Windows
- - C# (Framework and .NET Core)/Windows
-
-### Custom/agent scenarios
-
-The Speech SDK can be used for transcribing call center scenarios, where telephony data is generated.
-
-#### Call center transcription
-
-[Call center transcription](call-center-transcription.md) is a common scenario for speech-to-text for transcribing large volumes of telephony data that might come from various systems, such as interactive voice response. The latest speech recognition models from the Speech service excel at transcribing this telephony data, even in cases when the data is difficult for a human to understand.
-
-**Call center transcription** is available through the batch Speech service via its REST API and can be used in any situation.
-
-### Codec-compressed audio input
-
-Several of the Speech SDK programming languages support codec-compressed audio input streams. For more information, see <a href="/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams" target="_blank">Use compressed audio input formats</a>.
-
-**Codec-compressed audio input** is available on the following platforms:
-
- - C++/Linux
- - C#/Linux
- - Java/Linux, Android, and iOS
-
-## REST API
-
-The Speech SDK covers many feature capabilities of the Speech service, but for some scenarios you might want to use the REST API.
-
-### Batch transcription
-
-[Batch transcription](batch-transcription.md) enables asynchronous speech-to-text transcription of large volumes of data. Batch transcription is only possible from the REST API. In addition to converting speech audio to text, batch speech-to-text also allows for diarization and sentiment analysis.
-
-## Customization
-
-The Speech service delivers great functionality with its default models across speech-to-text, text-to-speech, and speech translation. Sometimes you might want to increase the baseline performance to work even better with your unique use case. The Speech service has various no-code customization tools that make it easy. You can use them to create a competitive advantage with custom models based on your own data. These models will only be available to you and your organization.
-
-### Custom speech-to-text
-
-When you use speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. The creation and management of no-code Custom Speech models is available in the [Speech Studio](./custom-speech-overview.md). After the Custom Speech model is published, it can be consumed by the Speech SDK.
-
-### Custom text-to-speech
-
-Custom text-to-speech, also known as Custom Voice, is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. The creation and management of no-code Custom Voice models is available through the [Custom Voice portal](https://aka.ms/customvoice). After the Custom Voice model is published, it can be consumed by the Speech SDK.
-
-## Get the Speech SDK
-
-# [Windows](#tab/windows)
--
-# [Linux](#tab/linux)
--
-# [iOS](#tab/ios)
--
-# [macOS](#tab/macos)
--
-# [Android](#tab/android)
--
-# [Node.js](#tab/nodejs)
--
-# [Browser](#tab/browser)
-----
+See also [Azure Cognitive Services support and help options](../cognitive-services-support-options.md?context=/azure/cognitive-services/speech-service/context/context) to get support, stay up-to-date, give feedback, and report bugs for Cognitive Services.
## Next steps
-* [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet)
+* [Install the SDK](quickstarts/setup-platform.md)
+* [Try the speech to text quickstart](./get-started-speech-to-text.md)
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Use the `mstts:silence` element to insert pauses before or after text, or betwee
In this example, `mstts:silence` is used to add 200 ms of silence between two sentences.

```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
<voice name="en-US-JennyNeural">
<mstts:silence type="Sentenceboundary" value="200ms"/>
If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
Title: "Quickstart: Get started with Translator"
+ Title: "Quickstart: Azure Cognitive Services Translator"
-description: "Learn to translate text, transliterate text, detect language and more with the Translator service. Examples are provided in C#, Java, JavaScript and Python."
+description: "Learn to translate text with the Translator service. Examples are provided in C#, Go, Java, JavaScript and Python."
Previously updated : 07/06/2021 Last updated : 06/16/2022 ms.devlang: csharp, golang, java, javascript, python-
-keywords: translator, translator service, translate text, transliterate text, language detection
-# Quickstart: Get started with Translator
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
-In this quickstart, you learn to use the Translator service via REST. You start with basic examples and move onto some core configuration options that are commonly used during development, including:
+# Quickstart: Azure Cognitive Services Translator
-* [Translation](#translate-text)
-* [Transliteration](#transliterate-text)
-* [Language identification/detection](#detect-language)
-* [Calculate sentence length](#get-sentence-length)
-* [Get alternate translations](#dictionary-lookup-alternate-translations) and [examples of word usage in a sentence](#dictionary-examples-translations-in-context)
+In this quickstart, you'll get started using the Translator service to [translate text](reference/v3-0-translate.md) with a programming language of your choice and the REST API.
## Prerequisites
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have an Azure subscription, [create a Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You'll need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code below later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
+To get started, you'll need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
- :::image type="content" source="media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
-* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
-## Platform setup
+ * You need the key and endpoint from the resource to connect your application to the Translator service. You'll paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
-# [C#](#tab/csharp)
+ :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-* Create a new project: `dotnet new console -o your_project_name`
-* Replace Program.cs with the C# code shown below.
-* Set the key and endpoint values in Program.cs.
-* [Add Newtonsoft.Json using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/).
-* Run the program from the project directory: ``dotnet run``
+* Use the free pricing tier (F0) to try the service and upgrade later to a paid tier for production.
+## Headers
-# [Go](#tab/go)
+To call the Translator service via the [REST API](reference/rest-api-guide.md), you'll need to include the following headers with each request. Don't worry, we'll include the headers for you in the sample code for each programming language.
-* Create a new Go project in your favorite code editor.
-* Add the code provided below.
-* Replace the `key` value with an access key valid for your subscription.
-* Save the file with a '.go' extension.
-* Open a command prompt on a computer with Go installed.
-* Build the file, for example: 'go build example-code.go'.
-* Run the file, for example: 'example-code'.
+For more information on Translator authentication options, *see* the [Translator v3 reference](/azure/cognitive-services/translator/reference/v3-0-reference#authentication) guide.
-# [Java](#tab/java)
+|Header|Value| Condition |
+|---|---|---|
+|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |
+|**Content-Type**|The content type of the payload. The accepted value is **application/json; charset=UTF-8**.|<ul><li>***Required***</li></ul>|
+|**Content-Length**|The length of the request body.|<ul><li>***Optional***</li></ul> |
+|**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`.|<ul><li>***Optional***</li></ul>|
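As a preview of the full samples below, here's a minimal sketch in Python of how these headers ride along on a request; the key is a placeholder you'd replace with your own value.

```python
import json, uuid
import requests  # third-party HTTP library, also used in this quickstart's Python sample

key = "<your-translator-key>"  # placeholder for your Translator key from the Azure portal
url = "https://api.cognitive.microsofttranslator.com/translate"

headers = {
    "Ocp-Apim-Subscription-Key": key,          # required
    "Content-Type": "application/json",        # required
    "X-ClientTraceId": str(uuid.uuid4()),      # optional
}
params = {"api-version": "3.0", "from": "en", "to": ["fr", "zu"]}
body = [{"text": "I would really like to drive your car around the block a few times!"}]

response = requests.post(url, params=params, headers=headers, json=body)
print(json.dumps(response.json(), indent=4, ensure_ascii=False))
```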
-* Create a working directory for your project. For example: `mkdir sample-project`.
-* Initialize your project with Gradle: `gradle init --type basic`. When prompted to choose a **DSL**, select **Kotlin**.
-* Update `build.gradle.kts`. Keep in mind that you'll need to update your `mainClassName` depending on the sample.
- ```java
- plugins {
- java
- application
- }
- application {
- mainClassName = "<NAME OF YOUR CLASS>"
- }
- repositories {
- mavenCentral()
- }
- dependencies {
- compile("com.squareup.okhttp:okhttp:2.5.0")
- compile("com.google.code.gson:gson:2.8.5")
- }
- ```
-* Create a Java file and copy in the code from the provided sample. Don't forget to add your key.
-* Run the sample: `gradle run`.
+> [!IMPORTANT]
+>
+> Remember to remove the key from your code when you're done, and **never** post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../cognitive-services/cognitive-services-security.md).
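One simple way to keep the key out of your source is to read it from the environment. A minimal sketch in Python, assuming an environment variable named `TRANSLATOR_KEY` (a name chosen for this example):

```python
import os

# TRANSLATOR_KEY is an illustrative variable name; use whatever your deployment defines.
key = os.environ.get("TRANSLATOR_KEY")
if not key:
    raise RuntimeError("Set the TRANSLATOR_KEY environment variable before running the sample.")
```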
+## Translate text
+The core operation of the Translator service is translating text. In this quickstart, you'll build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we'll review some parameters that can be used to adjust both the request and the response.
-# [Node.js](#tab/nodejs)
+### [C#: Visual Studio](#tab/csharp)
-* Create a new project in your favorite IDE or editor.
-* Copy the code from one of the samples into your project.
-* Set your key.
-* Run the program. For example: `node Translate.js`.
+### Set up
+1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
+ > [!TIP]
+ >
+ > If you're new to Visual Studio, try the [**Introduction to Visual Studio**](/learn/modules/go-get-started/) Microsoft Learn module.
-# [Python](#tab/python)
+1. Open Visual Studio.
-* Create a new project in your favorite IDE or editor.
-* Copy the code from one of the samples into your project.
-* Set your key.
-* Run the program. For example: `python translate.py`.
+1. On the Start page, choose **Create a new project**.
+ :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
+1. On the **Create a new project** page, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
-
+ :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
+
+1. In the **Configure your new project** dialog window, enter `translator_quickstart` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
+
+ :::image type="content" source="media/quickstarts/configure-new-project.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
+
+1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
-## Headers
+ :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
-When calling the Translator service via REST, you'll need to make sure the following headers are included with each request. Don't worry, we'll include the headers in the sample code in the following sections.
+### Install the Newtonsoft.json package with NuGet
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>Authentication header(s)</td>
- <td><em>Required request header</em>.<br/><code>Ocp-Apim-Subscription-Key</code><br/><br/><em>Required request header if using a Cognitive Services Resource. Optional if using a Translator Resource.</em>.<br/><code>Ocp-Apim-Subscription-Region</code><br/><br/>See <a href="/azure/cognitive-services/translator/reference/v3-0-reference#authentication">available options for authentication</a>.</td>
- </tr>
- <tr>
- <td>Content-Type</td>
- <td><em>Required request header</em>.<br/>Specifies the content type of the payload.<br/> Accepted value is <code>application/json; charset=UTF-8</code>.</td>
- </tr>
- <tr>
- <td>Content-Length</td>
- <td><em>Required request header</em>.<br/>The length of the request body.</td>
- </tr>
- <tr>
- <td>X-ClientTraceId</td>
- <td><em>Optional</em>.<br/>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named <code>ClientTraceId</code>.</td>
- </tr>
-</table>
+1. Right-click on your translator_quickstart project and select **Manage NuGet Packages...**.
-## Keys and endpoints
+ :::image type="content" source="media/quickstarts/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
-The samples on this page use hard-coded keys and endpoints for simplicity. Remember to **remove the key from your code when you're done**, and **never post it publicly**. For production, consider using a secure way of storing and accessing your credentials. See the Cognitive Services [security](../cognitive-services-security.md) article for more information.
+1. Select the **Browse** tab and type **Newtonsoft.json**.
-## Translate text
+ :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
-The core operation of the Translator service is to translate text. In this section, you'll build a request that takes a single source (`from`) and provides two outputs (`to`). Then we'll review some parameters that can be used to adjust both the request and the response.
+1. Select **Install** from the right package manager window to add the package to your project.
-# [C#](#tab/csharp)
+ :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
+
+### Build your application
+
+> [!NOTE]
+>
+> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
+> * The new output uses recent C# features that simplify the code you need to write.
+> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
+> * For more information, *see* [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
+
+1. Open the **Program.cs** file.
+
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code sample into your application's Program.cs file. Make sure you update the key variable with the value from your Azure portal Translator instance:
```csharp
-using System;
-using System.Net.Http;
using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+using Newtonsoft.Json;

class Program
{
-    private static readonly string key = "YOUR-KEY";
-    private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
+    private static readonly string key = "<your-translator-key>";
+    private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";

-    // Add your location, also known as region. The default is global.
-    // This is required if using a Cognitive Services resource.
-    private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
    static async Task Main(string[] args)
    {
        // Input and output languages are defined as parameters.
-        string route = "/translate?api-version=3.0&from=en&to=de&to=it";
-        string textToTranslate = "Hello, world!";
+        string route = "/translate?api-version=3.0&from=en&to=fr&to=zu";
+        string textToTranslate = "I would really like to drive your car around the block a few times!";
        object[] body = new object[] { new { Text = textToTranslate } };
        var requestBody = JsonConvert.SerializeObject(body);

        using (var client = new HttpClient())
        using (var request = new HttpRequestMessage())
        {
            // Build the request.
            request.Method = HttpMethod.Post;
            request.RequestUri = new Uri(endpoint + route);
            request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
            request.Headers.Add("Ocp-Apim-Subscription-Key", key);
-            request.Headers.Add("Ocp-Apim-Subscription-Region", location);

            // Send the request and get response.
            HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
            // Read response as a string.
            string result = await response.Content.ReadAsStringAsync();
            Console.WriteLine(result);
        }
    }
}
```
+### Run your C# application
+
+Once you've added a code sample to your application, choose the green **Start** button next to **translator_quickstart** to build and run your program, or press **F5**.
++
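If the call succeeds, the console prints a JSON array shaped like the following. The translated strings are elided here because the exact wording comes back from the service; compare the `to` values with the languages you requested.

```JSON
[
    {
        "translations": [
            { "text": "...", "to": "fr" },
            { "text": "...", "to": "zu" }
        ]
    }
]
```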
+### [Go](#tab/go)
+
+### Set up your Go environment
+
+You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
+
+> [!TIP]
+>
+> If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
-# [Go](#tab/go)
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+
+ * Download the Go version for your operating system.
+ * Once the download is complete, run the installer.
+ * Open a command prompt and enter the following to confirm Go was installed:
+
+ ```console
+ go version
+ ```
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-app**, and navigate to it.
+
+1. Create a new Go file named **translation.go** from the **translator-app** directory.
+
+1. Copy and paste the provided code sample into your **translation.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance:
```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "net/url"
)

func main() {
-    key := "YOUR-KEY"
+    key := "<YOUR-TRANSLATOR-KEY>"
    // Add your location, also known as region. The default is global.
    // This is required if using a Cognitive Services resource.
-    location := "YOUR_RESOURCE_LOCATION";
    endpoint := "https://api.cognitive.microsofttranslator.com/"
    uri := endpoint + "/translate?api-version=3.0"

    // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
    u, _ := url.Parse(uri)
    q := u.Query()
    q.Add("from", "en")
-    q.Add("to", "de")
-    q.Add("to", "it")
+    q.Add("to", "fr")
+    q.Add("to", "zu")
    u.RawQuery = q.Encode()

    // Create an anonymous struct for your request body and encode it to JSON
    body := []struct {
        Text string
    }{
-        {Text: "Hello, world!"},
+        {Text: "I would really like to drive your car around the block a few times."},
    }
    b, _ := json.Marshal(body)

    // Build the HTTP POST request
    req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
    if err != nil {
        log.Fatal(err)
    }
    // Add required headers to the request
    req.Header.Add("Ocp-Apim-Subscription-Key", key)
-    req.Header.Add("Ocp-Apim-Subscription-Region", location)
    req.Header.Add("Content-Type", "application/json")

    // Call the Translator API
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }

    // Decode the JSON response
    var result interface{}
    if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
        log.Fatal(err)
    }
    // Format and print the response to terminal
    prettyJSON, _ := json.MarshalIndent(result, "", "  ")
    fmt.Printf("%s\n", prettyJSON)
}
```
+### Run your Go application
+
+Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-app** folder and use the following command:
+
+```console
+ go run translation.go
+```
+
+### [Java](#tab/java)
+
+### Set up your Java environment
+
+* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+
+ >[!TIP]
+ >
+ > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of extensions suggested by Microsoft. The Coding Pack can also be used to fix an existing development environment.
+ > * If you're using VS Code and the Coding Pack for Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
+
+* If you aren't using VS Code, make sure you have the following installed in your development environment:
+
+ * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
+
+ * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
+
+### Create a new Gradle project
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+ ```console
    mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the `gradle init` command from the translator-text-app directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
+
+ ```console
+ gradle init --type basic
+ ```
+
+1. When prompted to choose a **DSL**, select **Kotlin**.
+
+1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
+
+1. Update `build.gradle.kts` with the following code:
+
+ ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClass.set("TranslatorText")
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ implementation("com.squareup.okhttp3:okhttp:4.10.0")
+ implementation("com.google.code.gson:gson:2.9.0")
+ }
+ ```
+
+### Create a Java Application
+
+1. From the translator-text-app directory, run the following command:
+
+ ```console
+ mkdir -p src/main/java
+ ```
+ You'll create the following directory structure:
-# [Java](#tab/java)
+ :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
+
+1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item TranslatorText.java**.
+ >
+ > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
+
+1. Open the `TranslatorText.java` file in your IDE and copy and paste the following code sample into your application. **Make sure you update the key with one of the key values from your Azure portal Translator instance:**
```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
+import java.io.IOException;
+
import com.google.gson.*;
-import com.squareup.okhttp.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;

-public class Translate {
-    private static String key = "YOUR_KEY";
+public class TranslatorText {
+    private static String key = "<your-translator-key>";

-    // Add your location, also known as region. The default is global.
-    // This is required if using a Cognitive Services resource.
-    private static String location = "YOUR_RESOURCE_LOCATION";
-
-    HttpUrl url = new HttpUrl.Builder()
-        .scheme("https")
-        .host("api.cognitive.microsofttranslator.com")
-        .addPathSegment("/translate")
-        .addQueryParameter("api-version", "3.0")
-        .addQueryParameter("from", "en")
-        .addQueryParameter("to", "de")
-        .addQueryParameter("to", "it")
-        .build();
-
    // Instantiates the OkHttpClient.
    OkHttpClient client = new OkHttpClient();

    // This function performs a POST request.
    public String Post() throws IOException {
        MediaType mediaType = MediaType.parse("application/json");
        RequestBody body = RequestBody.create(mediaType,
-            "[{\"Text\": \"Hello World!\"}]");
-        Request request = new Request.Builder().url(url).post(body)
+            "[{\"Text\": \"I would really like to drive your car around the block a few times!\"}]");
+        Request request = new Request.Builder()
+            .url("https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&to=zu")
+            .post(body)
            .addHeader("Ocp-Apim-Subscription-Key", key)
-            .addHeader("Ocp-Apim-Subscription-Region", location)
            .addHeader("Content-type", "application/json")
            .build();
        Response response = client.newCall(request).execute();
        return response.body().string();
    }

    // This function prettifies the json response.
    public static String prettify(String json_text) {
        JsonParser parser = new JsonParser();
        JsonElement json = parser.parse(json_text);
        Gson gson = new GsonBuilder().setPrettyPrinting().create();
        return gson.toJson(json);
    }

    public static void main(String[] args) {
        try {
-            Translate translateRequest = new Translate();
+            TranslatorText translateRequest = new TranslatorText();
            String response = translateRequest.Post();
            System.out.println(prettify(response));
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
```
+### Build and run your application
-# [Node.js](#tab/nodejs)
-
-```Javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['de', 'it']
- },
- data: [{
- 'text': 'Hello World!'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
----
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': ['de', 'it']
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Hello World!'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
+Once you've added a code sample to your application, navigate back to your main project directory (**translator-text-app**), open a console window, and enter the following commands:
+1. Build your application with the `build` command:
-
+ ```console
+ gradle build
+ ```
-After a successful call, you should see the following response:
+1. Run your application with the `run` command:
-```JSON
-[
- {
- "translations": [
- {
- "text": "Hallo Welt!",
- "to": "de"
- },
- {
- "text": "Salve, mondo!",
- "to": "it"
- }
- ]
- }
-]
-```
+ ```console
+ gradle run
+ ```
-## Detect language
+### [Node.js](#tab/nodejs)
-If you know that you'll need translation, but don't know the language of the text that will be sent to the Translator service, you can use the language detection operation. There's more than one way to identify the source text language. In this section, you'll learn how to use language detection using the `translate` endpoint, and the `detect` endpoint.
+### Create a Node.js Express application
-### Detect source language during translation
+1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
-If you don't include the `from` parameter in your translation request, the Translator service will attempt to detect the source text's language. In the response, you'll get the detected language (`language`) and a confidence score (`score`). The closer the `score` is to `1.0`, means that there is increased confidence that the detection is correct.
+ > [!TIP]
+ >
+ > If you're new to Node.js, try the [**Introduction to Node.js**](/learn/modules/intro-to-nodejs/) Microsoft Learn module.
-# [C#](#tab/csharp)
+1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+ ```console
+ mkdir translator-app && cd translator-app
+ ```
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
+ ```powershell
+ mkdir translator-app; cd translator-app
+ ```
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // Output languages are defined as parameters, input language detected.
- string route = "/translate?api-version=3.0&to=de&to=it";
- string textToTranslate = "Hello, world!";
- object[] body = new object[] { new { Text = textToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
+1. Run the npm init command to initialize the application and scaffold your project.
+ ```console
+ npm init
+ ```
-# [Go](#tab/go)
+1. Specify your project's attributes using the prompts presented in the terminal.
-```go
-package main
+ * The most important attributes are name, version number, and entry point.
+ * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
+ * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
+ * After you've completed the prompts, a `package.json` file will be created in your translator-app directory.
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
+1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
+ ```console
+ npm install axios uuid
+ ```
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("to", "de")
- q.Add("to", "it")
- u.RawQuery = q.Encode()
+1. Create the `index.js` file in the application directory.
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Hello, world!"},
- }
- b, _ := json.Marshal(body)
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item index.js**.
+ >
+ > * You can also create a new file named `index.js` in your IDE and save it to the `translator-app` directory.
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
+1. Add the following code sample to your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**:
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
+```javascript
+ const axios = require('axios').default;
+ const { v4: uuidv4 } = require('uuid');
+
+ let key = "<your-translator-key>";
+ let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['fr', 'zu']
+ },
+ data: [{
+ 'text': 'I would really like to drive your car around the block a few times!'
+ }],
+ responseType: 'json'
+ }).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+ })
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
```
+### Run your application
+Once you've added the code sample to your application, run your program:
+1. Navigate to your application directory (translator-app).
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class Translate {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/translate")
- .addQueryParameter("api-version", "3.0")
- .addQueryParameter("to", "de")
- .addQueryParameter("to", "it")
- .build();
+1. Type the following command in your terminal:
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
+ ```console
+ node index.js
+ ```
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Hello World!\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
+### [Python](#tab/python)
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
+### Create a Python application
- public static void main(String[] args) {
- try {
- Translate translateRequest = new Translate();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
+1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+ > [!TIP]
+ >
+ > If you're new to Python, try the [**Introduction to Python**](/learn/paths/beginner-python/) Microsoft Learn module.
+1. Open a terminal window and use pip to install the Requests library and uuid package:
-# [Node.js](#tab/nodejs)
+ ```console
+ pip install requests uuid
+ ```
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'to': ['de', 'it']
- },
- data: [{
- 'text': 'Hello World!'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
+ > [!NOTE]
+ > We'll also use the Python built-in package `json` to work with JSON data.
+1. Create a new Python file called **translator-app.py** in your preferred editor or IDE.
+1. Add the following code sample to your `translator-app.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
-# [Python](#tab/python)
```python
import requests, uuid, json

# Add your key and endpoint
-key = "YOUR_KEY"
+key = "<your-translator-key>"
endpoint = "https://api.cognitive.microsofttranslator.com"

# Add your location, also known as region. The default is global.
path = '/translate'
constructed_url = endpoint + path

params = {
    'api-version': '3.0',
-    'to': ['de', 'it']
+    'from': 'en',
+    'to': ['fr', 'zu']
}

headers = {
    'Ocp-Apim-Subscription-Key': key,
-    'Ocp-Apim-Subscription-Region': location,
    'Content-type': 'application/json',
    'X-ClientTraceId': str(uuid.uuid4())
}

# You can pass more than one object in body.
body = [{
-    'text': 'Hello World!'
+    'text': 'I would really like to drive your car around the block a few times!'
}]

request = requests.post(constructed_url, params=params, headers=headers, json=body)
response = request.json()

print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
```
+### Run your Python application
+
+Once you've added the code sample to your application, run your program:
+1. Navigate to the directory that contains your **translator-app.py** file.
+
+1. Type the following command in your console:
+
+ ```console
+ python translator-app.py
+ ```
+### Translation output
+
After a successful call, you should see the following response:

```json
}, "translations": [ {
- "text": "Hallo Welt!",
- "to": "de"
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
}, {
- "text": "Salve, mondo!",
- "to": "it"
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
} ] } ]
-```
-### Detect source language without translation
-
-It's possible to use the Translator service to detect the language of source text without performing a translation. To do this, you'll use the [`/detect`](./reference/v3-0-detect.md) endpoint.
+```
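For example, here's a minimal Python sketch of reading the translated strings out of a response shaped like the one above; the hard-coded `response` value below is a trimmed copy of that sample output, not part of the original article:

```python
# Assumption: `response` mirrors the translations array shown above.
response = [
    {
        "translations": [
            {"text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!", "to": "fr"},
            {"text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!", "to": "zu"}
        ]
    }
]

# Print each target language code alongside its translation.
for translation in response[0]["translations"]:
    print(f"{translation['to']}: {translation['text']}")
```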
-# [C#](#tab/csharp)
+That's it, congratulations! You've learned how to use the Translator service to translate text.
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // Just detect language
- string route = "/detect?api-version=3.0";
- string textToLangDetect = "Ich würde wirklich gern Ihr Auto um den Block fahren ein paar Mal.";
- object[] body = new object[] { new { Text = textToLangDetect } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
---
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
-
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/detect?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Ich w├╝rde wirklich gern Ihr Auto um den Block fahren ein paar Mal."},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
--
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class Detect {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/detect")
- .addQueryParameter("api-version", "3.0")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Ich w├╝rde wirklich gern Ihr Auto um den Block fahren ein paar Mal.\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- Detect detectRequest = new Detect();
- String response = detectRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
--
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/detect',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0'
- },
- data: [{
- 'text': 'Ich würde wirklich gern Ihr Auto um den Block fahren ein paar Mal.'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
--
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/detect'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Ich würde wirklich gern Ihr Auto um den Block fahren ein paar Mal.'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
----
-When using the `/detect` endpoint, the response will include alternate detections, and will let you know if translation and transliteration are supported for all of the detected languages. After a successful call, you should see the following response:
-
-```json
-[
-    {
-        "language": "de",
-        "score": 1.0,
-        "isTranslationSupported": true,
-        "isTransliterationSupported": false
-    }
-]
-```
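For example, a minimal Python sketch of acting on a `/detect` response shaped like the one above; the hard-coded `response` value mirrors that sample output and isn't part of the original article:

```python
# Assumption: `response` mirrors the /detect sample output shown above.
response = [
    {
        "language": "de",
        "score": 1.0,
        "isTranslationSupported": True,
        "isTransliterationSupported": False
    }
]

# Only proceed with translation when the service says it's supported.
detection = response[0]
if detection["isTranslationSupported"]:
    print(f"Detected '{detection['language']}' (score {detection['score']}); translation is supported.")
else:
    print(f"Translation isn't supported for '{detection['language']}'.")
```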
-
-## Transliterate text
-
-Transliteration is the process of converting a word or phrase from the script (alphabet) of one language to another based on phonetic similarity. For example, you could use transliteration to convert "สวัสดี" (`thai`) to "sawatdi" (`latn`). There's more than one way to perform transliteration. In this section, you'll learn how to get a transliteration using the `translate` endpoint and the `transliterate` endpoint.
-
-### Transliterate during translation
-
-If you're translating into a language that uses a different alphabet (or phonemes) than your source, you might need a transliteration. In this example, we translate "Hello" from English to Thai. In addition to getting the translation in Thai, you'll get a transliteration of the translated phrase using the Latin alphabet.
-
-To get a transliteration from the `translate` endpoint, use the `toScript` parameter.
-
-> [!NOTE]
-> For a complete list of available languages and transliteration options, see [language support](language-support.md).
-
-# [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // Output language defined as parameter, with toScript set to latn
- string route = "/translate?api-version=3.0&to=th&toScript=latn";
- string textToTransliterate = "Hello";
- object[] body = new object[] { new { Text = textToTransliterate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
--
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("to", "th")
- q.Add("toScript", "latn")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Hello"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
--
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class Translate {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/translate")
- .addQueryParameter("api-version", "3.0")
- .addQueryParameter("to", "th")
- .addQueryParameter("toScript", "latn")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Hello\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- Translate translateRequest = new Translate();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
--
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'to': 'th',
- 'toScript': 'latn'
- },
- data: [{
- 'text': 'Hello'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
--
-# [Python](#tab/python)
-```Python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'to': 'th',
- 'toScript': 'latn'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Hello'
-}]
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
----
-After a successful call, you should see the following response. Keep in mind that the response from the `translate` endpoint includes the detected source language with a confidence score, a translation using the alphabet of the output language, and a transliteration using the Latin alphabet.
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "translations": [
- {
- "text": "สวัสดี",
- "to": "th",
- "transliteration": {
- "script": "Latn",
- "text": "sawatdi"
- }
- }
- ]
- }
-]
-```
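For example, a minimal Python sketch of reading both the translation and its transliteration out of a response shaped like the one above (the hard-coded `response` mirrors that sample output):

```python
# Assumption: `response` mirrors the sample translate-with-toScript output shown above.
response = [
    {
        "detectedLanguage": {"language": "en", "score": 1.0},
        "translations": [
            {
                "text": "สวัสดี",
                "to": "th",
                "transliteration": {"script": "Latn", "text": "sawatdi"}
            }
        ]
    }
]

# Show the Thai text together with its Latin-script transliteration.
translation = response[0]["translations"][0]
print(f"{translation['text']} ({translation['transliteration']['text']})")
```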
-
-### Transliterate without translation
-
-You can also use the `transliterate` endpoint to get a transliteration. When using the transliteration endpoint, you must provide the source language (`language`), the source script/alphabet (`fromScript`), and the output script/alphabet (`toScript`) as parameters. In this example, we're going to get the transliteration for สวัสดี.
-
-> [!NOTE]
-> For a complete list of available languages and transliteration options, see [language support](language-support.md).
-
-# [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // For a complete list of options, see the API reference.
- // The source language, source script, and output script are defined as parameters.
- string route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
- string textToTransliterate = "สวัสดี";
- object[] body = new object[] { new { Text = textToTransliterate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
--
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/transliterate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("language", "th")
- q.Add("fromScript", "thai")
- q.Add("toScript", "latn")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "สวัสดี"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
---
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class Transliterate {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/transliterate")
- .addQueryParameter("api-version", "3.0")
- .addQueryParameter("language", "th")
- .addQueryParameter("fromScript", "thai")
- .addQueryParameter("toScript", "latn")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"สวัสดี\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- Transliterate transliterateRequest = new Transliterate();
- String response = transliterateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
--
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/transliterate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'language': 'th',
- 'fromScript': 'thai',
- 'toScript': 'latn'
- },
- data: [{
- 'text': 'สวัสดี'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
--
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/transliterate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'language': 'th',
- 'fromScript': 'thai',
- 'toScript': 'latn'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'สวัสดี'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
-```
----
-After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `transliterate` only returns the `script` and the output `text`.
-
-```json
-[
- {
- "script": "latn",
- "text": "sawatdi"
- }
-]
-```
-
-## Get sentence length
-
-With the Translator service, you can get the character count for a sentence or series of sentences. The response is returned as an array, with character counts for each sentence detected. You can get sentence lengths with the `translate` and `breaksentence` endpoints.
-
-### Get sentence length during translation
-
-You can get character counts for both source text and translation output using the `translate` endpoint. To return the sentence lengths (`srcSentLen` and `transSentLen`), you must set the `includeSentenceLength` parameter to `true`.
-
-# [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // Include sentence length details.
- string route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
- string sentencesToCount =
- "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
- object[] body = new object[] { new { Text = sentencesToCount } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
--
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/translate?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("to", "es")
- q.Add("includeSentenceLength", "true")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
---
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class Translate {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/translate")
- .addQueryParameter("api-version", "3.0")
- .addQueryParameter("to", "es")
- .addQueryParameter("includeSentenceLength", "true")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- Translate translateRequest = new Translate();
- String response = translateRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
-
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/translate',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'to': 'es',
- 'includeSentenceLength': true
- },
- data: [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
--
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/translate'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'to': 'es',
- 'includeSentenceLength': True
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
-}]
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
-----
-After a successful call, you should see the following response. In addition to the detected source language and translation, you'll get character counts for each detected sentence for both the source (`srcSentLen`) and translation (`transSentLen`).
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "translations": [
- {
- "sentLen": {
- "srcSentLen": [
- 44,
- 21,
- 12
- ],
- "transSentLen": [
- 48,
- 18,
- 10
- ]
- },
- "text": "¿Puedes decirme cómo llegar a la estación Penn? ¿No estás seguro? Está bien.",
- "to": "es"
- }
- ]
- }
-]
-```
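For example, a minimal Python sketch that pairs the source and translated sentence lengths from a response shaped like the one above (the hard-coded counts are copied from that sample output):

```python
# Assumption: `sent_len` mirrors the sentLen object in the sample output above.
sent_len = {"srcSentLen": [44, 21, 12], "transSentLen": [48, 18, 10]}

# Pair each source sentence length with its translated counterpart.
for i, (src, trans) in enumerate(zip(sent_len["srcSentLen"], sent_len["transSentLen"]), start=1):
    print(f"Sentence {i}: {src} source characters, {trans} translated characters")
```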
-
-### Get sentence length without translation
-
-The Translator service also lets you request sentence length without translation using the `breaksentence` endpoint.
-
-# [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // Only include sentence length details.
- string route = "/breaksentence?api-version=3.0";
- string sentencesToCount =
- "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
- object[] body = new object[] { new { Text = sentencesToCount } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
---
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/breaksentence?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
-
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class BreakSentence {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/breaksentence")
- .addQueryParameter("api-version", "3.0")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- BreakSentence breakSentenceRequest = new BreakSentence();
- String response = breakSentenceRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
---
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/breaksentence',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0'
- },
- data: [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
---
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/breaksentence'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
-```
----
-After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `breaksentence` only returns the character counts for the source text in an array called `sentLen`.
-
-```json
-[
- {
- "detectedLanguage": {
- "language": "en",
- "score": 1.0
- },
- "sentLen": [
- 44,
- 21,
- 12
- ]
- }
-]
-```
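Because `sentLen` holds consecutive character counts, you can use it to split the source text back into its sentences. For example, a minimal Python sketch using the counts from the sample output above (not part of the original article):

```python
# Assumption: the text and counts mirror the breaksentence example above.
text = "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."
sent_len = [44, 21, 12]

# Walk the text, slicing off one sentence per reported length.
start = 0
for length in sent_len:
    print(text[start:start + length].strip())
    start += length
```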
-
-## Dictionary lookup (alternate translations)
-
-With the `dictionary/lookup` endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "shark" from `en` to `es`, this endpoint returns both "tiburón" and "escualo".
-
-# [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // See many translation options
- string route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
- string wordToTranslate = "shark";
- object[] body = new object[] { new { Text = wordToTranslate } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
--
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/dictionary/lookup?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("from", "en")
- q.Add("to", "es")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- }{
- {Text: "shark"},
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
--
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class DictionaryLookup {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/dictionary/lookup")
- .addQueryParameter("api-version", "3.0")
- .addQueryParameter("from", "en")
- .addQueryParameter("to", "es")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Shark\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- DictionaryLookup dictionaryLookupRequest = new DictionaryLookup();
- String response = dictionaryLookupRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
---
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/dictionary/lookup',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
- },
- data: [{
- 'text': 'shark'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
--
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/dictionary/lookup'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'shark'
-}]
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
----
-After a successful call, you should see the following response. Let's break this down since the JSON is more complex than some of the other examples in this article. The `translations` array includes a list of translations. Each object in this array includes a confidence score (`confidence`), the text optimized for end-user display (`displayTarget`), the normalized text (`normalizedTarget`), the part of speech (`posTag`), and information about previous translations (`backTranslations`). For more information about the response, see [Dictionary Lookup](reference/v3-0-dictionary-lookup.md).
-
-```json
-[
- {
- "displaySource": "shark",
- "normalizedSource": "shark",
- "translations": [
- {
- "backTranslations": [
- {
- "displayText": "shark",
- "frequencyCount": 45,
- "normalizedText": "shark",
- "numExamples": 0
- }
- ],
- "confidence": 0.8182,
- "displayTarget": "tibur├│n",
- "normalizedTarget": "tibur├│n",
- "posTag": "OTHER",
- "prefixWord": ""
- },
- {
- "backTranslations": [
- {
- "displayText": "shark",
- "frequencyCount": 10,
- "normalizedText": "shark",
- "numExamples": 1
- }
- ],
- "confidence": 0.1818,
- "displayTarget": "escualo",
- "normalizedTarget": "escualo",
- "posTag": "NOUN",
- "prefixWord": ""
- }
- ]
- }
-]
-```
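For example, a minimal Python sketch that ranks the alternate translations from a response shaped like the one above and keeps the normalized fields needed for the `dictionary/examples` request described in the next section (the hard-coded `lookup` value is a trimmed copy of that sample output):

```python
# Assumption: `lookup` mirrors the dictionary/lookup sample output shown above (trimmed).
lookup = [
    {
        "displaySource": "shark",
        "normalizedSource": "shark",
        "translations": [
            {"confidence": 0.8182, "displayTarget": "tiburón", "normalizedTarget": "tiburón", "posTag": "OTHER"},
            {"confidence": 0.1818, "displayTarget": "escualo", "normalizedTarget": "escualo", "posTag": "NOUN"}
        ]
    }
]

# Pick the highest-confidence translation.
best = max(lookup[0]["translations"], key=lambda t: t["confidence"])

# Body for a follow-up POST to /dictionary/examples.
examples_body = [{"text": lookup[0]["normalizedSource"], "translation": best["normalizedTarget"]}]
print(examples_body)
```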
-
-## Dictionary examples (translations in context)
-
-After you've performed a dictionary lookup, you can pass the source text and translation to the `dictionary/examples` endpoint to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you'll use the `normalizedSource` and `normalizedTarget` from the dictionary lookup response as `text` and `translation` respectively, as shown in the sketch above. The source language (`from`) and output target (`to`) parameters are required.
-
-# [C#](#tab/csharp)
-
-```csharp
-using System;
-using System.Net.Http;
-using System.Text;
-using System.Threading.Tasks;
-using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
-
-class Program
-{
- private static readonly string key = "YOUR-KEY";
- private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com/";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static readonly string location = "YOUR_RESOURCE_LOCATION";
-
- static async Task Main(string[] args)
- {
- // See examples of terms in context
- string route = "/dictionary/examples?api-version=3.0&from=en&to=es";
- object[] body = new object[] { new { Text = "Shark", Translation = "tibur├│n" } } ;
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- // Build the request.
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(endpoint + route);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", location);
-
- // Send the request and get response.
- HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
- // Read response as a string.
- string result = await response.Content.ReadAsStringAsync();
- Console.WriteLine(result);
- }
- }
-}
-```
---
-# [Go](#tab/go)
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/json"
- "fmt"
- "log"
- "net/http"
- "net/url"
-)
-
-func main() {
- key := "YOUR-KEY"
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- location := "YOUR_RESOURCE_LOCATION";
- endpoint := "https://api.cognitive.microsofttranslator.com/"
- uri := endpoint + "/dictionary/examples?api-version=3.0"
-
- // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
- u, _ := url.Parse(uri)
- q := u.Query()
- q.Add("from", "en")
- q.Add("to", "es")
- u.RawQuery = q.Encode()
-
- // Create an anonymous struct for your request body and encode it to JSON
- body := []struct {
- Text string
- Translation string
- }{
- {
- Text: "Shark",
- Translation: "tibur├│n",
- },
- }
- b, _ := json.Marshal(body)
-
- // Build the HTTP POST request
- req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
- if err != nil {
- log.Fatal(err)
- }
- // Add required headers to the request
- req.Header.Add("Ocp-Apim-Subscription-Key", key)
- req.Header.Add("Ocp-Apim-Subscription-Region", location)
- req.Header.Add("Content-Type", "application/json")
-
- // Call the Translator Text API
- res, err := http.DefaultClient.Do(req)
- if err != nil {
- log.Fatal(err)
- }
-
- // Decode the JSON response
- var result interface{}
- if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
- log.Fatal(err)
- }
- // Format and print the response to terminal
- prettyJSON, _ := json.MarshalIndent(result, "", " ")
- fmt.Printf("%s\n", prettyJSON)
-}
-```
--
-# [Java](#tab/java)
-
-```java
-import java.io.*;
-import java.net.*;
-import java.util.*;
-import com.google.gson.*;
-import com.squareup.okhttp.*;
-
-public class DictionaryExamples {
- private static String key = "YOUR_KEY";
-
- // Add your location, also known as region. The default is global.
- // This is required if using a Cognitive Services resource.
- private static String location = "YOUR_RESOURCE_LOCATION";
-
- HttpUrl url = new HttpUrl.Builder()
- .scheme("https")
- .host("api.cognitive.microsofttranslator.com")
- .addPathSegment("/dictionary/examples")
- .addQueryParameter("api-version", "3.0")
- .addQueryParameter("from", "en")
- .addQueryParameter("to", "es")
- .build();
-
- // Instantiates the OkHttpClient.
- OkHttpClient client = new OkHttpClient();
-
- // This function performs a POST request.
- public String Post() throws IOException {
- MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType,
- "[{\"Text\": \"Shark\", \"Translation\": \"tibur├│n\"}]");
- Request request = new Request.Builder().url(url).post(body)
- .addHeader("Ocp-Apim-Subscription-Key", key)
- .addHeader("Ocp-Apim-Subscription-Region", location)
- .addHeader("Content-type", "application/json")
- .build();
- Response response = client.newCall(request).execute();
- return response.body().string();
- }
-
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
-
- public static void main(String[] args) {
- try {
- DictionaryExamples dictionaryExamplesRequest = new DictionaryExamples();
- String response = dictionaryExamplesRequest.Post();
- System.out.println(prettify(response));
- } catch (Exception e) {
- System.out.println(e);
- }
- }
-}
-```
--
-# [Node.js](#tab/nodejs)
-
-```javascript
-const axios = require('axios').default;
-const { v4: uuidv4 } = require('uuid');
-
-var key = "YOUR_KEY";
-var endpoint = "https://api.cognitive.microsofttranslator.com";
-
-// Add your location, also known as region. The default is global.
-// This is required if using a Cognitive Services resource.
-var location = "YOUR_RESOURCE_LOCATION";
-
-axios({
- baseURL: endpoint,
- url: '/dictionary/examples',
- method: 'post',
- headers: {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': uuidv4().toString()
- },
- params: {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
- },
- data: [{
- 'text': 'shark',
- 'translation': 'tiburón'
- }],
- responseType: 'json'
-}).then(function(response){
- console.log(JSON.stringify(response.data, null, 4));
-})
-```
--
-# [Python](#tab/python)
-```python
-import requests, uuid, json
-
-# Add your key and endpoint
-key = "YOUR_KEY"
-endpoint = "https://api.cognitive.microsofttranslator.com"
-
-# Add your location, also known as region. The default is global.
-# This is required if using a Cognitive Services resource.
-location = "YOUR_RESOURCE_LOCATION"
-
-path = '/dictionary/examples'
-constructed_url = endpoint + path
-
-params = {
- 'api-version': '3.0',
- 'from': 'en',
- 'to': 'es'
-}
-
-headers = {
- 'Ocp-Apim-Subscription-Key': key,
- 'Ocp-Apim-Subscription-Region': location,
- 'Content-type': 'application/json',
- 'X-ClientTraceId': str(uuid.uuid4())
-}
-
-# You can pass more than one object in body.
-body = [{
- 'text': 'shark',
- 'translation': 'tiburón'
-}]
-
-request = requests.post(constructed_url, params=params, headers=headers, json=body)
-response = request.json()
-
-print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
-```
-----
-After a successful call, you should see the following response. For more information about the response, see [Dictionary Lookup](reference/v3-0-dictionary-examples.md)
-
-```json
-[
- {
- "examples": [
- {
- "sourcePrefix": "More than a match for any ",
- "sourceSuffix": ".",
- "sourceTerm": "shark",
- "targetPrefix": "Más que un fósforo para cualquier ",
- "targetSuffix": ".",
- "targetTerm": "tibur├│n"
- },
- {
- "sourcePrefix": "Same with the mega ",
- "sourceSuffix": ", of course.",
- "sourceTerm": "shark",
- "targetPrefix": "Y con el mega ",
- "targetSuffix": ", por supuesto.",
- "targetTerm": "tibur├│n"
- },
- {
- "sourcePrefix": "A ",
- "sourceSuffix": " ate it.",
- "sourceTerm": "shark",
- "targetPrefix": "Te la ha comido un ",
- "targetSuffix": ".",
- "targetTerm": "tibur├│n"
- }
- ],
- "normalizedSource": "shark",
- "normalizedTarget": "tibur├│n"
- }
-]
-```
+## Next step
-## Troubleshooting
+Explore our how-to documentation and take a deeper dive into Translator service capabilities:
-### Common HTTP status codes
+* [**Translate text**](translator-text-apis.md#translate-text)
-| HTTP status code | Description | Possible reason |
-||-|--|
-| 200 | OK | The request was successful. |
-| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
-| 401 | Unauthorized | The request is not authorized. Check to make sure your key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
-| 429 | Too Many Requests | You have exceeded the quota or rate of requests allowed for your subscription. |
-| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+* [**Transliterate text**](translator-text-apis.md#transliterate-text)
-### Java users
+* [**Detect and identify language**](translator-text-apis.md#detect-language)
-If you're encountering connection issues, it may be that your SSL certificate has expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) to your private store.
+* [**Get sentence length**](translator-text-apis.md#get-sentence-length)
-## Next steps
+* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-lookup-alternate-translations)
-> [!div class="nextstepaction"]
-> [Customize and improve translation](customization.md)
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
+
+ Title: "Use Azure Cognitive Services Translator APIs"
+
+description: "Learn to translate text, transliterate text, detect language and more with the Translator service. Examples are provided in C#, Java, JavaScript and Python."
+ Last updated : 06/20/2022
+ms.devlang: csharp, golang, java, javascript, python
+
+keywords: translator, translator service, translate text, transliterate text, language detection
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+
+# Use Azure Cognitive Services Translator APIs
+
+In this how-to guide, you'll learn to use the [Translator service REST APIs](reference/rest-api-guide.md). You'll start with basic examples and move on to some core configuration options that are commonly used during development, including:
+
+* [Translation](#translate-text)
+* [Transliteration](#transliterate-text)
+* [Language identification/detection](#detect-language)
+* [Calculate sentence length](#get-sentence-length)
+* [Get alternate translations](#dictionary-lookup-alternate-translations) and [examples of word usage in a sentence](#dictionary-examples-translations-in-context)
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A Cognitive Services or Translator resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+
+* You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+ > [!TIP]
+ > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Translator access only, create a Translator resource. You'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../active-directory/authentication/overview-authentication.md).
+
+* You'll need the key and endpoint from the resource to connect your application to the Translator service. Later, you'll paste your key and endpoint into the code samples. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+## Headers
+
+To call the Translator service via the [REST API](reference/rest-api-guide.md), you'll need to make sure the following headers are included with each request. Don't worry, we'll include the headers in the sample code in the following sections.
+
+|Header|Value| Condition |
+|:--|:--|:--|
+|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|<ul><li>***Required***</li></ul> |
+|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |<ul><li>***Required*** when using a multi-service Cognitive Services Resource.</li><li> ***Optional*** when using a single-service Translator Resource.</li></ul>|
+|**Content-Type**|The content type of the payload. The accepted value is **application/json; charset=UTF-8**.|<ul><li>***Required***</li></ul>|
+|**Content-Length**|The **length of the request** body.|<ul><li>***Optional***</li></ul> |
+|**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul>
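+
+For example, here's a minimal Python sketch (an illustration, not one of the official samples) showing how these headers come together in a request. The key and region placeholders are values you'd replace with your own:
+
+```python
+import uuid
+import requests  # third-party HTTP library: pip install requests
+
+key = "<YOUR-TRANSLATOR-KEY>"            # placeholder
+location = "<YOUR-RESOURCE-LOCATION>"    # placeholder
+
+headers = {
+    "Ocp-Apim-Subscription-Key": key,           # required
+    "Ocp-Apim-Subscription-Region": location,   # required for multi-service resources
+    "Content-Type": "application/json",         # required
+    "X-ClientTraceId": str(uuid.uuid4()),       # optional request tracing
+}
+
+# Translate "Hello" to French; the source language is auto-detected.
+response = requests.post(
+    "https://api.cognitive.microsofttranslator.com/translate",
+    params={"api-version": "3.0", "to": "fr"},
+    headers=headers,
+    json=[{"text": "Hello"}],
+)
+print(response.json())
+```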
+
+## Set up your application
+
+### [C#](#tab/csharp)
+
+1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
+
+ > [!TIP]
+ >
+ > If you're new to Visual Studio, try the [**Introduction to Visual Studio**](/learn/modules/go-get-started/) Microsoft Learn module.
+
+1. Open Visual Studio.
+
+1. On the Start page, choose **Create a new project**.
+
+ :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
+
+1. On the **Create a new project** page, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
+
+ :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
+
+1. In the **Configure your new project** dialog window, enter `translator_text_app` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
+
+ :::image type="content" source="media/how-to-guides/configure-your-console-app.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
+
+1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
+
+ :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
+
+### Install the Newtonsoft.json package with NuGet
+
+1. Right-click on your translator_text_app project and select **Manage NuGet Packages...**.
+
+ :::image type="content" source="media/how-to-guides/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
+
+1. Select the Browse tab and type Newtonsoft.
+
+ :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
+
+1. Select **Install** from the right package manager window to add the package to your project.
+
+ :::image type="content" source="media/how-to-guides/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
+
+### Build your application
+
+> [!NOTE]
+>
+> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
+> * The new output uses recent C# features that simplify the code you need to write.
+> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
+> * For more information, *see* [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
+
+1. Open the **Program.cs** file.
+
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. You'll copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
+
+1. Once you've added a desired code sample to your application, choose the green **start button** next to translator_text_app to build and run your program, or press **F5**.
++
+### [Go](#tab/go)
+
+You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
+
+> [!TIP]
+>
+> If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
+
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+
+ * Download the Go version for your operating system.
+ * Once the download is complete, run the installer.
+ * Open a command prompt and enter the following to confirm Go was installed:
+
+ ```console
+ go version
+ ```
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+1. Create a new Go file named **text-translator.go** in the **translator-text-app** directory.
+
+1. You will copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
+
+1. Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
+
+ ```console
+ go run text-translator.go
+ ```
+
+### [Java](#tab/java)
+
+* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+
+ >[!TIP]
+ >
+ > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of extensions suggested by Microsoft. The Coding Pack can also be used to fix an existing development environment.
+ > * If you're using VS Code and the Coding Pack for Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
+
+* If you aren't using VS Code, make sure you have the following installed in your development environment:
+
+ * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
+
+ * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
+
+### Create a new Gradle project
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+ ```console
+ mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the `gradle init` command from the translator-text-app directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.
+
+ ```console
+ gradle init --type basic
+ ```
+
+1. When prompted to choose a **DSL**, select **Kotlin**.
+
+1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
+
+1. Update `build.gradle.kts` with the following code:
+
+ ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClass.set("TranslatorText")
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ implementation("com.squareup.okhttp3:okhttp:4.10.0")
+ implementation("com.google.code.gson:gson:2.9.0")
+ }
+ ```
+
+### Create a Java Application
+
+1. From the translator-text-app directory, run the following command:
+
+ ```console
+ mkdir -p src/main/java
+ ```
+
+ You'll create the following directory structure:
+
+ :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
+
+1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item TranslatorText.java**.
+ >
+ > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
+
+1. You will copy and paste the code samples into your `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
+
+1. Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
+
+ 1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+ 1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+
+### [Node.js](#tab/nodejs)
+
+1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
+
+ > [!TIP]
+ >
+ > If you're new to Node.js, try the [**Introduction to Node.js**](/learn/modules/intro-to-nodejs/) Microsoft Learn module.
+
+1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-text-app`.
+
+ ```console
+ mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the `npm init` command to initialize the application and scaffold your project.
+
+ ```console
+ npm init
+ ```
+
+1. Specify your project's attributes using the prompts presented in the terminal.
+
+ * The most important attributes are name, version number, and entry point.
+ * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
+ * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
+ * After you've completed the prompts, a `package.json` file will be created in your translator-text-app directory.
+
+1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
+
+ ```console
+ npm install axios uuid
+ ```
+
+1. Create the `index.js` file in the application directory.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item index.js**.
+ >
+ > * You can also create a new file named `index.js` in your IDE and save it to the `translator-text-app` directory.
+
+1. You will copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
+
+1. Once you've added the code sample to your application, run your program:
+
+ 1. Navigate to your application directory (translator-text-app).
+
+ 1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+
+### [Python](#tab/python)
+
+1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+
+ > [!TIP]
+ >
+ > If you're new to Python, try the [**Introduction to Python**](/learn/paths/beginner-python/) Microsoft Learn module.
+
+1. Open a terminal window and use pip to install the Requests library:
+
+ ```console
+ pip install requests
+ ```
+
+ > [!NOTE]
+ > We'll also use the Python built-in modules uuid and json. They're part of the standard library, so there's nothing else to install.
+
+1. Create a new Python file called **text-translator.py** in your preferred editor or IDE.
+
+1. Add the following code sample to your `text-translator.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
+
+1. Once you've added a desired code sample to your application, build and run your program:
+
+ 1. Navigate to your **text-translator.py** file.
+
+ 1. Type the following command in your console:
+
+ ```console
+ python text-translator.py
+ ```
+++
+> [!IMPORTANT]
+> The samples in this guide require hard-coded keys and endpoints.
+> Remember to **remove the key from your code when you're done**, and **never post it publicly**.
+> For production, consider using a secure way of storing and accessing your credentials. For more information, *see* [Cognitive Services security](../cognitive-services-security.md).
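+
+As a hedged illustration, here's one way to keep the key and region out of source code by reading them from environment variables (the variable names `TRANSLATOR_KEY` and `TRANSLATOR_REGION` are made up for this sketch):
+
+```python
+import os
+
+# Illustrative environment variable names; choose whatever fits your deployment.
+key = os.environ["TRANSLATOR_KEY"]
+location = os.environ.get("TRANSLATOR_REGION", "global")
+```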
+
+## Translate text
+
+The core operation of the Translator service is to translate text. In this section, you'll build a request that takes a single source language (`from`) and returns output in two target languages (`to`). Then we'll review some parameters that can be used to adjust both the request and the response.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System.Text;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource and can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Input and output languages are defined as parameters.
+ string route = "/translate?api-version=3.0&from=en&to=sw&to=it";
+ string textToTranslate = "Hello, friend! What did you do today?";
+ object[] body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>";
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "it")
+ q.Add("to", "sw")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Hello friend! What did you do today?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&from=en&to=sw&to=it";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Hello, friend! What did you do today?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+let key = "<YOUR-TRANSLATOR-KEY>";
+let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+let location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['sw', 'it']
+ },
+ data: [{
+ 'text': 'Hello, friend! What did you do today?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['sw', 'it']
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Hello, friend! What did you do today?'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations":[
+ {
+ "text":"Halo, rafiki! Ulifanya nini leo?",
+ "to":"sw"
+ },
+ {
+ "text":"Ciao, amico! Cosa hai fatto oggi?",
+ "to":"it"
+ }
+ ]
+ }
+]
+```
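+
+Here's a minimal Python sketch (an illustration, not one of the official samples) showing one way to consume this response, with the JSON above embedded as a literal so the snippet runs on its own:
+
+```python
+# Parsed JSON array as returned by the /translate call above.
+response = [
+    {
+        "translations": [
+            {"text": "Halo, rafiki! Ulifanya nini leo?", "to": "sw"},
+            {"text": "Ciao, amico! Cosa hai fatto oggi?", "to": "it"},
+        ]
+    }
+]
+
+# Print each target language alongside its translation.
+for item in response:
+    for translation in item["translations"]:
+        print(f"{translation['to']}: {translation['text']}")
+```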
+
+## Detect language
+
+If you need translation, but don't know the language of the text, you can use the language detection operation. There's more than one way to identify the source text language. In this section, you'll learn how to detect language using the `translate` endpoint and the `detect` endpoint.
+
+### Detect source language during translation
+
+If you don't include the `from` parameter in your translation request, the Translator service will attempt to detect the source text's language. In the response, you'll get the detected language (`language`) and a confidence score (`score`). The closer the `score` is to `1.0`, the more confident the service is that the detection is correct.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System.Text;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Output languages are defined as parameters, input language detected.
+ string route = "/translate?api-version=3.0&to=en&to=it";
+ string textToTranslate = "Halo, rafiki! Ulifanya nini leo?";
+ object[] body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>";
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("to", "en")
+ q.Add("to", "it")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Halo rafiki! Ulifanya nini leo?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&to=en&to=it";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Halo, rafiki! Ulifanya nini leo?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'to': ['en', 'it']
+ },
+ data: [{
+ 'text': 'Halo, rafiki! Ulifanya nini leo?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'to': ['en', 'it']
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Halo, rafiki! Ulifanya nini leo?'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage":{
+ "language":"sw",
+ "score":0.8
+ },
+ "translations":[
+ {
+ "text":"Hello friend! What did you do today?",
+ "to":"en"
+ },
+ {
+ "text":"Ciao amico! Cosa hai fatto oggi?",
+ "to":"it"
+ }
+ ]
+ }
+]
+```
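+
+As a sketch (assumptions: the response above, plus an illustrative confidence threshold that isn't an official cutoff), here's how you might act on the detection score in Python:
+
+```python
+# Parsed JSON from the /translate call above.
+response = [
+    {
+        "detectedLanguage": {"language": "sw", "score": 0.8},
+        "translations": [
+            {"text": "Hello friend! What did you do today?", "to": "en"},
+            {"text": "Ciao amico! Cosa hai fatto oggi?", "to": "it"},
+        ]
+    }
+]
+
+detected = response[0]["detectedLanguage"]
+if detected["score"] >= 0.5:  # illustrative threshold, not an official cutoff
+    print(f"Detected '{detected['language']}' with confidence {detected['score']}")
+else:
+    print("Low-confidence detection; consider confirming the source language.")
+```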
+
+### Detect source language without translation
+
+It's possible to use the Translator service to detect the language of source text without performing a translation. To do so, you'll use the [`/detect`](./reference/v3-0-detect.md) endpoint.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System.Text;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Just detect language
+ string route = "/detect?api-version=3.0";
+ string textToLangDetect = "Hallo Freund! Was hast du heute gemacht?";
+ object[] body = new object[] { new { Text = textToLangDetect } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>";
+
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/detect?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Ciao amico! Cosa hai fatto oggi?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/detect?api-version=3.0";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Hallo Freund! Was hast du heute gemacht?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText detectRequest = new TranslatorText();
+ String response = detectRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/detect',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0'
+ },
+ data: [{
+ 'text': 'Hallo Freund! Was hast du heute gemacht?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/detect'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Hallo Freund! Was hast du heute gemacht?'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+The `/detect` endpoint response includes alternate detections and indicates whether translation and transliteration are supported for each detected language. After a successful call, you should see the following response:
+
+```json
+[
+    {
+        "language":"de",
+        "score":1.0,
+        "isTranslationSupported":true,
+        "isTransliterationSupported":false
+    }
+]
+```
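+
+Here's a small Python sketch (an illustration, not one of the official samples) of how you might check those capability flags before deciding what to do next, with the response above embedded as a literal:
+
+```python
+# Parsed JSON array from the /detect call above.
+response = [
+    {
+        "language": "de",
+        "score": 1.0,
+        "isTranslationSupported": True,
+        "isTransliterationSupported": False,
+    }
+]
+
+for detection in response:
+    if detection["isTranslationSupported"]:
+        print(f"{detection['language']} (score {detection['score']}) can be translated")
+    else:
+        print(f"{detection['language']} was detected, but translation isn't supported")
+```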
+
+## Transliterate text
+
+Transliteration is the process of converting a word or phrase from the script (alphabet) of one language to another based on phonetic similarity. For example, you could use transliteration to convert "สวัสดี" (`thai`) to "sawatdi" (`latn`). There's more than one way to perform transliteration. In this section, you'll learn how to get transliterations using the `translate` endpoint and the `transliterate` endpoint.
+
+### Transliterate during translation
+
+If you're translating into a language that uses a different alphabet (or phonemes) than your source, you might need a transliteration. In this example, we translate "Hello, friend! What did you do today?" from English to Thai. In addition to getting the translation in Thai, you'll get a transliteration of the translated phrase using the Latin alphabet.
+
+To get a transliteration from the `translate` endpoint, use the `toScript` parameter.
+
+> [!NOTE]
+> For a complete list of available languages and transliteration options, see [language support](language-support.md).
+
+### [C#](#tab/csharp)
+
+```csharp
+using System.Text;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Output language defined as parameter, with toScript set to latn
+ string route = "/translate?api-version=3.0&to=th&toScript=latn";
+ string textToTransliterate = "Hello, friend! What did you do today?";
+ object[] body = new object[] { new { Text = textToTransliterate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>";
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("to", "th")
+ q.Add("toScript", "latn")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Hello, friend! What did you do today?"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&to=th&toScript=latn";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Hello, friend! What did you do today?\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'to': 'th',
+ 'toScript': 'latn'
+ },
+ data: [{
+ 'text': 'Hello, friend! What did you do today?'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'to': 'th',
+ 'toScript': 'latn'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Hello, friend! What did you do today?'
+}]
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+++
+After a successful call, you should see the following response. Keep in mind that the response from the `translate` endpoint includes the detected source language with a confidence score, a translation using the alphabet of the output language, and a transliteration using the Latin alphabet.
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1
+ },
+ "translations": [
+ {
+ "text": "หวัดดีเพื่อน! วันนี้เธอทำอะไรไปบ้าง ",
+ "to": "th",
+ "transliteration": {
+ "script": "Latn",
+ "text": "watdiphuean! wannithoethamaraipaiang"
+ }
+ }
+ ]
+ }
+]
+```
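+
+Here's a short Python sketch (an illustration, not one of the official samples) of how you might read both the translation and its transliteration from that response, embedded as a literal:
+
+```python
+# Parsed JSON from the /translate call above with toScript=latn.
+response = [
+    {
+        "detectedLanguage": {"language": "en", "score": 1},
+        "translations": [
+            {
+                "text": "หวัดดีเพื่อน! วันนี้เธอทำอะไรไปบ้าง",
+                "to": "th",
+                "transliteration": {
+                    "script": "Latn",
+                    "text": "watdiphuean! wannithoethamaraipaiang",
+                },
+            }
+        ]
+    }
+]
+
+translation = response[0]["translations"][0]
+print(translation["text"])                      # Thai script
+print(translation["transliteration"]["text"])   # Latin script
+```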
+
+### Transliterate without translation
+
+You can also use the `transliterate` endpoint to get a transliteration. When using the transliteration endpoint, you must provide the source language (`language`), the source script/alphabet (`fromScript`), and the output script/alphabet (`toScript`) as parameters. In this example, we're going to get the transliteration for สวัสดีเพื่อน! วันนี้คุณทำอะไร.
+
+> [!NOTE]
+> For a complete list of available languages and transliteration options, see [language support](language-support.md).
+
+### [C#](#tab/csharp)
+
+```csharp
+using System.Text;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // For a complete list of options, see API reference.
+ // Input and output languages are defined as parameters.
+ string route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
+ string textToTransliterate = "สวัสดีเพื่อน! วันนี้คุณทำอะไร";
+ object[] body = new object[] { new { Text = textToTransliterate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>";
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/transliterate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("language", "th")
+ q.Add("fromScript", "thai")
+ q.Add("toScript", "latn")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "สวัสดีเพื่อน! วันนี้คุณทำอะไร"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/transliterate?api-version=3.0&language=th&fromScript=thai&toScript=latn";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"สวัสดีเพื่อน! วันนี้คุณทำอะไร\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText transliterateRequest = new TranslatorText();
+ String response = transliterateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/transliterate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'language': 'th',
+ 'fromScript': 'thai',
+ 'toScript': 'latn'
+ },
+ data: [{
+ 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/transliterate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'language': 'th',
+ 'fromScript': 'thai',
+ 'toScript': 'latn'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'สวัสดีเพื่อน! วันนี้คุณทำอะไร'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
+```
+---
+After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `transliterate` only returns the `text` and the output `script`.
+
+```json
+[
+ {
+ "text":"sawatdiphuean! wannikhunthamarai",
+
+ "script":"latn"
+ }
+]
+```
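+
+If you're working in Python, the following snippet is a minimal sketch of reading that response; it assumes `response` already holds the parsed JSON list from the Python example above.
+
+```python
+# Minimal sketch: print each transliteration and its output script.
+# Assumes `response` is the parsed JSON list from the Python example above.
+for item in response:
+    print(f"{item['text']} ({item['script']})")
+```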
+
+## Get sentence length
+
+With the Translator service, you can get the character count for a sentence or series of sentences. The response is returned as an array, with character counts for each sentence detected. You can get sentence lengths with the `translate` and `breaksentence` endpoints.
+
+### Get sentence length during translation
+
+You can get character counts for both the source text and the translation output by using the `translate` endpoint. To return the sentence lengths (`srcSentLen` and `transSentLen`), you must set the `includeSentenceLength` parameter to `true`.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Include sentence length details.
+ string route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
+ string sentencesToCount =
+ "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
+ object[] body = new object[] { new { Text = sentencesToCount } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("to", "es")
+ q.Add("includeSentenceLength", "true")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/translate?api-version=3.0&to=es&includeSentenceLength=true";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'to': 'es',
+ 'includeSentenceLength': true
+ },
+ data: [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'to': 'es',
+ 'includeSentenceLength': True
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+}]
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+---
+After a successful call, you should see the following response. In addition to the detected source language and translation, you'll get character counts for each detected sentence for both the source (`srcSentLen`) and translation (`transSentLen`).
+
+```json
+[
+ {
+ "detectedLanguage":{
+ "language":"en",
+ "score":1.0
+ },
+ "translations":[
+ {
+ "text":"¿Puedes decirme cómo llegar a Penn Station? Oh, ¿no estás seguro? Está bien.",
+ "to":"es",
+ "sentLen":{
+ "srcSentLen":[
+ 44,
+ 21,
+ 12
+ ],
+ "transSentLen":[
+ 44,
+ 22,
+ 10
+ ]
+ }
+ }
+ ]
+ }
+]
+```
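+
+As a minimal sketch of consuming this response in Python, the loop below pairs the source and translation counts per sentence; it assumes `response` is the parsed JSON list from the Python example above.
+
+```python
+# Minimal sketch: pair the per-sentence character counts from the response.
+# Assumes `response` is the parsed JSON list from the Python example above.
+for item in response:
+    sent_len = item["translations"][0]["sentLen"]
+    for i, (src, trans) in enumerate(
+            zip(sent_len["srcSentLen"], sent_len["transSentLen"]), start=1):
+        print(f"Sentence {i}: {src} source chars, {trans} translated chars")
+```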
+
+### Get sentence length without translation
+
+The Translator service also lets you request sentence lengths without translation by using the `breaksentence` endpoint.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Only include sentence length details.
+ string route = "/breaksentence?api-version=3.0";
+ string sentencesToCount =
+ "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine.";
+ object[] body = new object[] { new { Text = sentencesToCount } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/breaksentence?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/breaksentence?api-version=3.0";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText breakSentenceRequest = new TranslatorText();
+ String response = breakSentenceRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/breaksentence',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0'
+ },
+ data: [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/breaksentence'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'Can you tell me how to get to Penn Station? Oh, you aren\'t sure? That\'s fine.'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, indent=4, separators=(',', ': ')))
+```
+---
+After a successful call, you should see the following response. Unlike the call to the `translate` endpoint, `breaksentence` only returns the character counts for the source text in an array called `sentLen`.
+
+```json
+[
+ {
+ "detectedLanguage":{
+ "language":"en",
+ "score":1.0
+ },
+ "sentLen":[
+ 44,
+ 21,
+ 12
+ ]
+ }
+]
+```
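+
+Because the counts in `sentLen` are character lengths, you can use them to split the submitted string back into sentences. The snippet below is a minimal sketch; it assumes `response` is the parsed JSON from the Python example above and that `text` holds the string you submitted.
+
+```python
+# Minimal sketch: slice the source text into sentences using `sentLen`.
+# Assumes `response` is the parsed JSON from the Python example above.
+text = "Can you tell me how to get to Penn Station? Oh, you aren't sure? That's fine."
+
+offset = 0
+for length in response[0]["sentLen"]:
+    print(repr(text[offset:offset + length]))
+    offset += length
+```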
+
+## Dictionary lookup (alternate translations)
+
+With the `dictionary/lookup` endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunlight" from `en` to `es`, this endpoint returns "luz solar", "rayos solares", "soleamiento", "sol", and "insolación".
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // See many translation options
+ string route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
+ string wordToTranslate = "sunlight";
+ object[] body = new object[] { new { Text = wordToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/dictionary/lookup?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "es")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "sunlight"},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/dictionary/lookup?api-version=3.0&from=en&to=es";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"sunlight\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText dictionaryLookupRequest = new TranslatorText();
+ String response = dictionaryLookupRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/dictionary/lookup',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+ },
+ data: [{
+ 'text': 'sunlight'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/dictionary/lookup'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'sunlight'
+}]
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+---
+After a successful call, you should see the following response. Let's examine it more closely, since this JSON is more complex than some of the other examples in this article. The `translations` array includes a list of translations. Each object in this array includes a confidence score (`confidence`), the text optimized for end-user display (`displayTarget`), the normalized text (`normalizedTarget`), the part of speech (`posTag`), and a list of back translations (`backTranslations`). For more information about the response, see [Dictionary Lookup](reference/v3-0-dictionary-lookup.md).
+
+```json
+[
+ {
+ "normalizedSource":"sunlight",
+ "displaySource":"sunlight",
+ "translations":[
+ {
+ "normalizedTarget":"luz solar",
+ "displayTarget":"luz solar",
+ "posTag":"NOUN",
+ "confidence":0.5313,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":15,
+ "frequencyCount":702
+ },
+ {
+ "normalizedText":"sunshine",
+ "displayText":"sunshine",
+ "numExamples":7,
+ "frequencyCount":27
+ },
+ {
+ "normalizedText":"daylight",
+ "displayText":"daylight",
+ "numExamples":4,
+ "frequencyCount":17
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"rayos solares",
+ "displayTarget":"rayos solares",
+ "posTag":"NOUN",
+ "confidence":0.1544,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":4,
+ "frequencyCount":38
+ },
+ {
+ "normalizedText":"rays",
+ "displayText":"rays",
+ "numExamples":11,
+ "frequencyCount":30
+ },
+ {
+ "normalizedText":"sunrays",
+ "displayText":"sunrays",
+ "numExamples":0,
+ "frequencyCount":6
+ },
+ {
+ "normalizedText":"sunbeams",
+ "displayText":"sunbeams",
+ "numExamples":0,
+ "frequencyCount":4
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"soleamiento",
+ "displayTarget":"soleamiento",
+ "posTag":"NOUN",
+ "confidence":0.1264,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":0,
+ "frequencyCount":7
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"sol",
+ "displayTarget":"sol",
+ "posTag":"NOUN",
+ "confidence":0.1239,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"sun",
+ "displayText":"sun",
+ "numExamples":15,
+ "frequencyCount":20387
+ },
+ {
+ "normalizedText":"sunshine",
+ "displayText":"sunshine",
+ "numExamples":15,
+ "frequencyCount":1439
+ },
+ {
+ "normalizedText":"sunny",
+ "displayText":"sunny",
+ "numExamples":15,
+ "frequencyCount":265
+ },
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":15,
+ "frequencyCount":242
+ }
+ ]
+ },
+ {
+ "normalizedTarget":"insolaci├│n",
+ "displayTarget":"insolaci├│n",
+ "posTag":"NOUN",
+ "confidence":0.064,
+ "prefixWord":"",
+ "backTranslations":[
+ {
+ "normalizedText":"heat stroke",
+ "displayText":"heat stroke",
+ "numExamples":3,
+ "frequencyCount":67
+ },
+ {
+ "normalizedText":"insolation",
+ "displayText":"insolation",
+ "numExamples":1,
+ "frequencyCount":55
+ },
+ {
+ "normalizedText":"sunstroke",
+ "displayText":"sunstroke",
+ "numExamples":2,
+ "frequencyCount":31
+ },
+ {
+ "normalizedText":"sunlight",
+ "displayText":"sunlight",
+ "numExamples":0,
+ "frequencyCount":12
+ },
+ {
+ "normalizedText":"solarization",
+ "displayText":"solarization",
+ "numExamples":0,
+ "frequencyCount":7
+ },
+ {
+ "normalizedText":"sunning",
+ "displayText":"sunning",
+ "numExamples":1,
+ "frequencyCount":7
+ }
+ ]
+ }
+ ]
+ }
+]
+```
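+
+One way to consume this response is to rank the alternatives by their confidence scores. The snippet below is a minimal sketch in Python; it assumes `response` is the parsed JSON list from the Python example above.
+
+```python
+# Minimal sketch: list alternate translations ordered by confidence.
+# Assumes `response` is the parsed JSON list from the Python example above.
+for entry in response:
+    ranked = sorted(entry["translations"],
+                    key=lambda t: t["confidence"], reverse=True)
+    for t in ranked:
+        print(f"{t['displayTarget']} ({t['posTag']}, {t['confidence']:.2%})")
+```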
+
+## Dictionary examples (translations in context)
+
+After you've performed a dictionary lookup, you can pass the source text and translation to the `dictionary/examples` endpoint to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you'll use the `normalizedSource` and `normalizedTarget` values from the dictionary lookup response as the `text` and `translation` fields, respectively. The source language (`from`) and output target (`to`) parameters are required.
+
+### [C#](#tab/csharp)
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+using Newtonsoft.Json; // Install Newtonsoft.Json with NuGet
+
+class Program
+{
+ private static readonly string key = "<YOUR-TRANSLATOR-KEY>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // See examples of terms in context
+ string route = "/dictionary/examples?api-version=3.0&from=en&to=es";
+ object[] body = new object[] { new { Text = "sunlight", Translation = "luz solar" } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+```
+
+### [Go](#tab/go)
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ location := "<YOUR-RESOURCE-LOCATION>"
+ endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/dictionary/examples?api-version=3.0"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "es")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ Translation string
+ }{
+ {
+ Text: "sunlight",
+ Translation: "luz solar",
+ },
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator Text API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+
+### [Java](#tab/java)
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+ private static String key = "<YOUR-TRANSLATOR-KEY>";
+ public String endpoint = "https://api.cognitive.microsofttranslator.com";
+ public String route = "/dictionary/examples?api-version=3.0&from=en&to=es";
+ public String url = endpoint.concat(route);
+
+ // Add your location, also known as region. The default is global.
+ // This is required if using a Cognitive Services resource.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"sunlight\", \"Translation\": \"luz solar\"}]");
+ Request request = new Request.Builder()
+ .url(url)
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText dictionaryExamplesRequest = new TranslatorText();
+ String response = dictionaryExamplesRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+
+### [Node.js](#tab/nodejs)
+
+```javascript
+const axios = require('axios').default;
+const { v4: uuidv4 } = require('uuid');
+
+var key = "<YOUR-TRANSLATOR-KEY>";
+var endpoint = "https://api.cognitive.microsofttranslator.com";
+
+// Add your location, also known as region. The default is global.
+// This is required if using a Cognitive Services resource.
+var location = "<YOUR-RESOURCE-LOCATION>";
+
+axios({
+ baseURL: endpoint,
+ url: '/dictionary/examples',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+ },
+ data: [{
+ 'text': 'sunlight',
+ 'translation': 'luz solar'
+ }],
+ responseType: 'json'
+}).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+})
+```
+
+### [Python](#tab/python)
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<YOUR-TRANSLATOR-KEY>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# Add your location, also known as region. The default is global.
+# This is required if using a Cognitive Services resource.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/dictionary/examples'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': 'es'
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'sunlight',
+ 'translation': 'luz solar'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+```
+---
+After a successful call, you should see the following response. For more information about the response, see [Dictionary Examples](reference/v3-0-dictionary-examples.md).
+
+```json
+[
+ {
+ "normalizedSource":"sunlight",
+ "normalizedTarget":"luz solar",
+ "examples":[
+ {
+ "sourcePrefix":"You use a stake, silver, or ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Se usa una estaca, plata, o ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"A pocket of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Una bolsa de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"There must also be ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"También debe haber ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"We were living off of current ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Estábamos viviendo de la ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" actual."
+ },
+ {
+ "sourcePrefix":"And they don't need unbroken ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Y ellos no necesitan ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" ininterrumpida."
+ },
+ {
+ "sourcePrefix":"We have lamps that give the exact equivalent of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Disponemos de lámparas que dan el equivalente exacto de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"Plants need water and ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Las plantas necesitan agua y ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"So this requires ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":".",
+ "targetPrefix":"Así que esto requiere ",
+ "targetTerm":"luz solar",
+ "targetSuffix":"."
+ },
+ {
+ "sourcePrefix":"And this pocket of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":" freed humans from their ...",
+ "targetPrefix":"Y esta bolsa de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":", liber├│ a los humanos de ..."
+ },
+ {
+ "sourcePrefix":"Since there is no ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":", the air within ...",
+ "targetPrefix":"Como no hay ",
+ "targetTerm":"luz solar",
+ "targetSuffix":", el aire atrapado en ..."
+ },
+ {
+ "sourcePrefix":"The ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":" shining through the glass creates a ...",
+ "targetPrefix":"La ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" a través de la vidriera crea una ..."
+ },
+ {
+ "sourcePrefix":"Less ice reflects less ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":", and more open ocean ...",
+ "targetPrefix":"Menos hielo refleja menos ",
+ "targetTerm":"luz solar",
+ "targetSuffix":", y más mar abierto ..."
+ },
+ {
+ "sourcePrefix":"",
+ "sourceTerm":"Sunlight",
+ "sourceSuffix":" is most intense at midday, so ...",
+ "targetPrefix":"La ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" es más intensa al mediodía, por lo que ..."
+ },
+ {
+ "sourcePrefix":"... capture huge amounts of ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":", so fueling their growth.",
+ "targetPrefix":"... capturan enormes cantidades de ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" que favorecen su crecimiento."
+ },
+ {
+ "sourcePrefix":"... full height, giving more direct ",
+ "sourceTerm":"sunlight",
+ "sourceSuffix":" in the winter.",
+ "targetPrefix":"... altura completa, dando más ",
+ "targetTerm":"luz solar",
+ "targetSuffix":" directa durante el invierno."
+ }
+ ]
+ }
+]
+```
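+
+To display these examples, you can stitch the prefix, term, and suffix fields back into full sentences. The snippet below is a minimal sketch; it assumes `response` is the parsed JSON list from the Python example above.
+
+```python
+# Minimal sketch: rebuild readable sentence pairs from the example fields.
+# Assumes `response` is the parsed JSON list from the Python example above.
+for entry in response:
+    for example in entry["examples"]:
+        source = example["sourcePrefix"] + example["sourceTerm"] + example["sourceSuffix"]
+        target = example["targetPrefix"] + example["targetTerm"] + example["targetSuffix"]
+        print(f"{source} -> {target}")
+```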
+
+## Troubleshooting
+
+### Common HTTP status codes
+
+| HTTP status code | Description | Possible reason |
+||-|--|
+| 200 | OK | The request was successful. |
+| 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
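+
+If you regularly receive 429 responses, a common mitigation is to retry with exponential backoff. The following Python sketch is illustrative only; the helper name, `max_retries` value, and reliance on a `Retry-After` header are assumptions, not service guidance.
+
+```python
+import time
+import requests
+
+def post_with_retries(url, max_retries=3, **kwargs):
+    """Retry a POST request that's throttled (HTTP 429), with backoff."""
+    for attempt in range(max_retries + 1):
+        response = requests.post(url, **kwargs)
+        if response.status_code != 429 or attempt == max_retries:
+            return response
+        # Honor Retry-After when present; otherwise back off exponentially.
+        delay = float(response.headers.get("Retry-After", 2 ** attempt))
+        time.sleep(delay)
+
+# Example (hypothetical): post_with_retries(constructed_url, params=params, headers=headers, json=body)
+```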
+
+### Java users
+
+If you're encountering connection issues, your TLS/SSL certificate may have expired. To resolve this issue, add the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) certificate to your private store.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Customize and improve translation](customization.md)
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
+
+ Title: Limited Access features for Cognitive Services
+
+description: Learn about the Azure Cognitive Services features that are available with Limited Access.
+ Last updated : 06/16/2022
+# Limited Access features for Cognitive Services
+
+Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services.
+
+## What is Limited Access?
+
+Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they have reviewed and agree to the terms of service. Microsoft may require customers to re-verify this information.
+
+Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Please review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services.
+
+## List of Limited Access services
+
+The following services are Limited Access:
+
+- [Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context): Pro features
+- [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context): All features
+- [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features
+- [Computer Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context): Celebrity Recognition feature
+- [Azure Video Indexer](/azure/azure-video-indexer/limited-access-features): Celebrity Recognition and Face Identify features
+
+Features of these services that are not listed above are available without registration.
+
+## FAQ about Limited Access
+
+### How do I apply for access?
+
+Please submit an intake form for each Limited Access service you would like to use:
+
+- [Custom Neural Voice](https://aka.ms/customneural): Pro features
+- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
+- [Face API](https://aka.ms/facerecognition): Identify and Verify features
+- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature
+- [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
+
+### How long will the application process take?
+
+The review process may take 5 to 10 business days. You will receive an email as soon as your application is reviewed.
+
+### Who is eligible to use Limited Access services?
+
+Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their application.
+
+Please use an email address affiliated with your organization in your application. Applications submitted with personal email addresses will be denied.
+
+If you are not a managed customer, we invite you to submit an application using the same forms and we will reach out to you about any opportunities to join an eligibility program.
+
+### What if I don't know whether I'm a managed customer? What if I don't know my Microsoft contact or don't know if my organization has one?
+
+We invite you to submit an intake form for the features you'd like to use, and we'll verify your eligibility for access.
+
+### What happens if I'm an existing customer and I don't apply?
+
+Existing customers have until June 30, 2023, to submit an intake form and be approved to continue using Limited Access services after that date. We recommend allowing 10 business days for review. Without an approved application, you will be denied access after June 30, 2023.
+
+The intake forms can be found here:
+
+- [Custom Neural Voice](https://aka.ms/customneural): Pro features
+- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
+- [Face API](https://aka.ms/facerecognition): Identify and Verify features
+- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature
+- [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
+
+### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition. Do I have to apply to keep using these services?
+
+We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you previously applied for and were granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new intake form to continue using these services beyond June 30, 2023.
+
+If you're an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023, to submit an intake form with your selected use case and receive approval to continue using these services after that date. We recommend allowing 10 days for application processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The intake forms can be found here:
+
+- [Custom Neural Voice](https://aka.ms/customneural): Pro features
+- [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
+
+### What if my use case is not on the intake form?
+
+Limited Access features are only available for the use cases listed on the intake forms. If your desired use case is not listed, please let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings.
+
+### Where can I use Limited Access services?
+
+Search [here](https://azure.microsoft.com/global-infrastructure/services/) for a Limited Access service to view its regional availability. In the Brazil South and UAE North datacenter regions, we are prioritizing access for commercial customers managed by Microsoft.
+
+Detailed information about supported regions for Custom Neural Voice and Speaker Recognition operations can be found [here](./speech-service/regions.md).
+
+### What happens to my data if my application is denied?
+
+If you are an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
+
+## Help and support
+
+Report abuse of Limited Access services [here](https://aka.ms/reportabuse).
+
+## Next steps
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/overview.md
Follow these steps to get the most out of your model:
## Reference documentation and code samples
-As you use custom text classification, see the following reference documentation and samples for Azure Cognitive Services for Language:
+As you use custom text classification, see the following reference documentation and samples for Azure Cognitive Service for Language:
|Development option / language |Reference documentation |Samples | ||||
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/faq.md
- Previously updated : 05/23/2022+ Last updated : 06/21/2022
See the [language support](./language-support.md) article.
--> ## How do I get more accurate results for my project?
-Take a look at the [recommended guidelines](./how-to/create-project.md) for information on improving accuracy.
+See [evaluation metrics](./concepts/evaluation-metrics.md) for information on how models are evaluated, and metrics you can use to improve accuracy.
<!-- ## How many intents, and utterances can I add to a project?
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
The AI models used by the API are provided by the service, you just have to send
The conversation summarization API uses natural language processing techniques to locate key issues and resolutions in text-based chat logs. Conversation summarization will return issues and resolutions found from the text input.
-There's another feature in Azure Cognitive Service for Language, [document summarization](../overview.md?tabs=document-summarization), that can summarize sentences from large documents. When you're deciding between document summarization and conversation summarization, consider the following points:
+There's another feature in Azure Cognitive Service for Language named [document summarization](../overview.md?tabs=document-summarization) that can summarize sentences from large documents. When you're deciding between document summarization and conversation summarization, consider the following points:
* Extractive summarization returns sentences that collectively represent the most important or relevant information within the original content. * Conversation summarization returns summaries based on full chat logs including a reason for the chat (a problem), and the resolution. For example, a chat log between a customer and a customer service agent.
There's another feature in Azure Cognitive Service for Language, [document summa
You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
-When using this feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+When you use this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
When you submit data to conversation summarization, we recommend sending one chat log per request, for better latency.
-
+
+### Get summaries from text chats
+
+You can use conversation summarization to get summaries from two-person chats between customer service agents and customers. To see an example using text chats, see the [quickstart article](../quickstart.md).
+
+### Get summaries from speech transcriptions
+
+Conversation summarization also enables you to get summaries from speech transcripts by using the [Speech service's speech-to-text feature](../../../Speech-Service/call-center-transcription.md). The following example shows a short conversation that you might include in your API requests.
+
+```json
+"conversations":[
+ {
+ "id":"abcdefgh-1234-1234-1234-1234abcdefgh",
+ "language":"En",
+ "modality":"transcript",
+ "conversationItems":[
+ {
+ "modality":"transcript",
+ "participantId":"speaker",
+ "id":"12345678-abcd-efgh-1234-abcd123456",
+ "content":{
+ "text":"Hi.",
+ "lexical":"hi",
+ "itn":"hi",
+ "maskedItn":"hi",
+ "audioTimings":[
+ {
+ "word":"hi",
+ "offset":4500000,
+ "duration":2800000
+ }
+ ]
+ }
+ }
+ ]
+ }
+]
+```
+ ## Getting conversation summarization results When you get results from conversation summarization, you can stream the results to an application or save the output to a file on the local system.
The following text is an example of content you might submit for summarization.
Summarization is performed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../../concepts/multilingual-emoji-support.md) for more information.
-Using the above example, the API might return the following summarized sentences:
+In the above example, the API might return the following summarized sentences:
|Summarized text | Aspect | ||-|
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
If you want to clean up and remove a Cognitive Services subscription, you can de
## Next steps
-* [Summarization overview](overview.md)
+* [How to call document summarization](./how-to/document-summarization.md)
+* [How to call conversation summarization](./how-to/conversation-summarization.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 05/23/2022 Last updated : 06/22/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## June 2022
+* Python client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python).
+ ## May 2022 * PII detection for conversations.
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
You can create confidential VMs that run on AMD processors in the following size
| Size family | Description | | | -- |
-| **DCasv5-series** | Confidential VM with remote storage only. No local temporary desk. |
+| **DCasv5-series** | Confidential VM with remote storage only. No local temporary disk. |
| **DCadsv5-series** | Confidential VM with a local temporary disk. | | **ECasv5-series** | Memory-optimized confidential VM with remote storage only. No local temporary disk. | | **ECadsv5-series** | Memory-optimized confidential VM with a local temporary disk. |
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration Previously updated : 05/10/2022 Last updated : 06/10/2022 # Built-in connectors in Azure Logic Apps Built-in connectors provide ways for you to control your workflow's schedule and structure, run your own code, manage or manipulate data, and complete other tasks in your workflows. Different from managed connectors, some built-in connectors aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in connectors run natively on the Azure Logic Apps runtime. Some don't require that you create a connection before you use them.
-For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app that runs in multi-tenant Azure Logic Apps, or a Standard logic app that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type and not the other.
+For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type and not the other.
-For example, a Standard logic app provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption logic app doesn't have the built-in versions. A Consumption logic app provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app doesn't have these built-in connectors. For more information, review the following documentation: [Managed connectors in Azure Logic Apps](managed.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
+For example, a Standard logic app workflow provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption logic app workflow doesn't have the built-in versions. A Consumption logic app workflow provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app workflow doesn't have these built-in connectors.
-This article provides a general overview about built-in connectors in Consumption logic apps versus Standard logic apps.
+Also, in Standard logic app workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
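For Standard workflows, a service provider connection is defined in the project's **connections.json** file. As a minimal sketch, a Service Bus connection that authenticates with a connection string stored in an app setting might look like the following; the connection name, display name, and app setting name are placeholder assumptions, not values taken from this article:

```json
{
  "serviceProviderConnections": {
    "serviceBus": {
      "displayName": "my-servicebus-connection",
      "serviceProvider": {
        "id": "/serviceProviders/serviceBus"
      },
      "parameterValues": {
        "connectionString": "@appsetting('serviceBus_connectionString')"
      }
    }
  }
}
```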
-<a name="built-in-operations-lists"></a>
+This article provides a general overview about built-in connectors in Consumption logic app workflows versus Standard logic app workflows.
+
+<a name="built-in-connectors"></a>
## Built-in connectors in Consumption versus Standard
+The following table lists the built-in connectors currently available for Consumption versus Standard logic app workflows; these galleries continue to expand. An asterisk (**\***) marks [service provider-based built-in connectors](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation).
+ | Consumption | Standard | |-|-|
-| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob <br>Azure Cosmos DB <br>Azure Functions <br>Azure Table Storage <br>Control <br>Data Operations <br>Date Time <br>DB2 <br>Event Hubs <br>Flat File <br>FTP <br>HTTP <br>IBM Host File <br>Inline Code <br>Liquid operations <br>MQ <br>Request <br>Schedule <br>Service Bus <br>SFTP <br>SQL Server <br>Variables <br>Workflow operations <br>XML operations |
+| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob* <br>Azure Cosmos DB* <br>Azure Functions <br>Azure Queue* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Hubs* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>Liquid operations <br>MQ* <br>Request <br>Schedule <br>Service Bus* <br>SFTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations |
||| <a name="custom-built-in"></a>
connectors Connectors Create Api Bingsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-bingsearch.md
- Title: Connect to Bing Search
-description: Automate tasks and workflows that find results in Bing Search by using Azure Logic Apps.
--- Previously updated : 05/21/2018
-tags: connectors
--
-# Find results in Bing Search by using Azure Logic Apps
-
-This article shows how you can find news, videos, and other items through
-Bing Search from inside a logic app with the Bing Search connector.
-That way, you can create logic apps that automate tasks and workflows
-for processing search results and make those items available for other actions.
-
-For example, you can find news items based on search criteria,
-and have Twitter post those items as tweets in your Twitter feed.
-
-If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
-If you're new to logic apps, review
-[What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
-and [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-For connector-specific technical information, see the
-[Bing Search connector reference](/connectors/bingsearch/).
-
-## Prerequisites
-
-* A [Cognitive Services account](../cognitive-services/cognitive-services-apis-create-account.md)
-
-* A [Bing Search API key](https://azure.microsoft.com/try/cognitive-services/?api=bing-news-search-api),
-which provides access from your logic app to the Bing Search APIs
-
-* The logic app where you want to access Bing Search.
-To start your logic app with a Bing Search trigger, you need a
-[blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-
-<a name="add-trigger"></a>
-
-## Add a Bing Search trigger
-
-In Azure Logic Apps, every logic app must start with a
-[trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts),
-which fires when a specific event happens or when a
-specific condition is met. Each time the trigger fires,
-the Logic Apps engine creates a logic app instance
-and starts running your app's workflow.
-
-1. In the Azure portal or Visual Studio,
-create a blank logic app, which opens Logic App Designer.
-This example uses the Azure portal.
-
-2. In the search box, enter "Bing search" as your filter.
-From the triggers list, select the trigger you want.
-
- This example uses this trigger:
- **Bing Search - On new news article**
-
- ![Find Bing Search trigger](./media/connectors-create-api-bing-search/add-trigger.png)
-
-3. If you're prompted for connection details,
-[create your Bing Search connection now](#create-connection).
-Or, if your connection already exists,
-provide the necessary information for the trigger.
-
- For this example, provide criteria for returning
- matching news articles from Bing Search.
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | Search Query | Yes | <*search-words*> | Enter the search keywords you want to use. |
- | Market | Yes | <*locale*> | The search locale. The default is "en-US", but you can select another value. |
- | Safe Search | Yes | <*search-level*> | The filter level for excluding adult content. The default is "Moderate", but you can select another level. |
- | Count | No | <*results-count*> | Return the specified number of results. The default is 20, but you can specify another value. The actual number of returned results might be less than the specified number. |
- | Offset | No | <*skip-value*> | The number of results to skip before returning results |
- |||||
-
- For example:
-
- ![Set up trigger](./media/connectors-create-api-bing-search/bing-search-trigger.png)
-
-4. Select the interval and frequency for how often
-you want the trigger to check for results.
-
-5. When you're done, on the designer toolbar, select **Save**.
-
-6. Now continue adding one or more actions to your logic app
-for the tasks you want to perform with the trigger results.
-
-<a name="add-action"></a>
-
-## Add a Bing Search action
-
-In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts)
-is a step in your workflow that follows a trigger or another action.
-For this example, the logic app starts with a Bing Search trigger
-that returns news articles matching the specified criteria.
-
-1. In the Azure portal or Visual Studio,
-open your logic app in Logic App Designer.
-This example uses the Azure portal.
-
-2. Under the trigger or action, select **New step** > **Add an action**.
-
- This example uses this trigger:
-
- **Bing Search - On new news article**
-
- ![Add action](./media/connectors-create-api-bing-search/add-action.png)
-
- To add an action between existing steps,
- move your mouse over the connecting arrow.
- Select the plus sign (**+**) that appears,
- and then select **Add an action**.
-
-3. In the search box, enter "Bing search" as your filter.
-From the actions list, select the action you want.
-
- This example uses this action:
-
- **Bing Search - List news by query**
-
- ![Find Bing Search action](./media/connectors-create-api-bing-search/bing-search-select-action.png)
-
-4. If you're prompted for connection details,
-[create your Bing Search connection now](#create-connection).
-Or, if your connection already exists,
-provide the necessary information for the action.
-
- For this example, provide the criteria for
- returning a subset of the trigger's results.
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | Search Query | Yes | <*search-expression*> | Enter an expression for querying the trigger results. You can select from the fields in the dynamic content list, or create an expression with the expression builder. |
- | Market | Yes | <*locale*> | The search locale. The default is "en-US", but you can select another value. |
- | Safe Search | Yes | <*search-level*> | The filter level for excluding adult content. The default is "Moderate", but you can select another level. |
- | Count | No | <*results-count*> | Return the specified number of results. The default is 20, but you can specify another value. The actual number of returned results might be less than the specified number. |
- | Offset | No | <*skip-value*> | The number of results to skip before returning results |
- |||||
-
- For example, suppose you want those results whose category
- name includes the word "tech".
-
- 1. Click in the **Search Query** box so the dynamic content list appears.
- From that list, select **Expression** so the expression builder appears.
-
- ![Bing Search trigger](./media/connectors-create-api-bing-search/bing-search-action.png)
-
- Now you can start creating your expression.
-
- 2. From the functions list, select the **contains()** function,
- which then appears in the expression box. Click **Dynamic content**
- so that the field list reappears, but make sure your cursor stays
- inside the parentheses.
-
- ![Select a function](./media/connectors-create-api-bing-search/expression-select-function.png)
-
- 3. From the field list, select **Category**, which converts to a parameter.
- Add a comma after the first parameter, and after the comma, add this word: `'tech'`
-
- ![Select a field](./media/connectors-create-api-bing-search/expression-select-field.png)
-
- 4. When you're done, select **OK**.
-
- The expression now appears in the **Search Query** box in this format:
-
- ![Finished expression](./media/connectors-create-api-bing-search/resolved-expression.png)
-
- In code view, this expression appears in this format:
-
- `"@{contains(triggerBody()?['category'],'tech')}"`
-
-5. When you're done, on the designer toolbar, select **Save**.
-
-<a name="create-connection"></a>
-
-## Connect to Bing Search
--
-1. When you're prompted for connection information,
-provide these details:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | Connection Name | Yes | <*connection-name*> | The name to create for your connection |
- | API Version | Yes | <*API-version*> | By default, the Bing Search API version is set to the current version. You can select an earlier version as necessary. |
- | API Key | Yes | <*API-key*> | The Bing Search API key that you got earlier. If you don't have a key, get your [API key now](https://azure.microsoft.com/try/cognitive-services/?api=bing-news-search-api). |
- |||||
-
- For example:
-
- ![Create connection](./media/connectors-create-api-bing-search/bing-search-create-connection.png)
-
-2. When you're done, select **Create**.
-
-## Connector reference
-
-For technical details, such as triggers, actions, and limits,
-as described by the connector's Swagger file,
-see the [connector's reference page](/connectors/bingsearch/).
-
-## Next steps
-
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
-
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md
For example, if you're working in Visual Studio Code, follow these steps:
} ```
+ > [!NOTE]
+ >
+ > To find the thumbprint, follow these steps:
+ >
+ > 1. On your logic app resource menu, under **Settings**, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** or **Public Key Certificates (.cer)**.
+ >
+ > 2. Find the certificate that you want to use, and copy the thumbprint.
+ >
+ > For more information, review [Find the thumbprint - Azure App Service](../app-service/configure-ssl-certificate-in-code.md#find-the-thumbprint).
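As an illustrative sketch, a local project might then reference the thumbprint from an app setting in **local.settings.json** along the following lines; the `WEBSITE_LOAD_ROOT_CERTIFICATES` setting name and the placeholder value are assumptions for this example, so confirm the exact setting against the documentation linked below.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "WEBSITE_LOAD_ROOT_CERTIFICATES": "<certificate-thumbprint>"
  }
}
```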
+ For more information, review the following documentation: * [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](../logic-apps/edit-app-settings-host-settings.md#manage-app-settings)
container-apps Authentication Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-azure-active-directory.md
To register the app, perform the following steps:
1. (Optional) Select **Branding**. In **Home page URL**, enter the URL of your container app and select **Save**. 1. Select **Expose an API**, and select **Set** next to *Application ID URI*. This value uniquely identifies the application when it's used as a resource, allowing tokens to be requested that grant access. The value is also used as a prefix for scopes you create.
- For a single-tenant app, you can use the default value, which is in the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#appid-uri-configuration).
+ For a single-tenant app, you can use the default value, which is in the form `api://<application-client-id>`. You can also specify a more readable URI like `https://contoso.com/api` based on one of the verified domains for your tenant. For a multi-tenant app, you must provide a custom URI. To learn more about accepted formats for App ID URIs, see the [app registrations best practices reference](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri).
The value is automatically saved.
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 06/07/2022 Last updated : 06/21/2022 # Dapr integration with Azure Container Apps
The `pubsub.yaml` spec will be scoped to the dapr-enabled container apps with ap
```yaml
# pubsub.yaml for Azure Service Bus component
-- name: dapr-pubsub
- type: pubsub.azure.servicebus
- version: v1
- metadata:
- - name: connectionString
- secretRef: sb-root-connectionstring
- secrets:
- - name: sb-root-connectionstring
- value: "value"
- # Application scopes
- scopes:
+componentType: pubsub.azure.servicebus
+version: v1
+metadata:
+- name: connectionString
+ secretRef: sb-root-connectionstring
+secrets:
+- name: sb-root-connectionstring
+ value: "value"
+# Application scopes
+scopes:
- publisher-app
- subscriber-app
```
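To apply a component spec like this one, you can use the Azure CLI `containerapp` extension. Here's a minimal sketch, assuming an existing Container Apps environment named `my-environment` in the resource group `my-resource-group`:

```azurecli
# Apply the Dapr component spec to an existing Container Apps environment
az containerapp env dapr-component set \
  --name my-environment \
  --resource-group my-resource-group \
  --dapr-component-name dapr-pubsub \
  --yaml pubsub.yaml
```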
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Title: Deploy container instance by GitHub action
-description: Configure a GitHub action that automates steps to build, push, and deploy a container image to Azure Container Instances
+ Title: Deploy container instance by GitHub Actions
+description: Configure a GitHub Action that automates steps to build, push, and deploy a container image to Azure Container Instances
Last updated 06/17/2022
-# Configure a GitHub action to create a container instance
+# Configure a GitHub Action to create a container instance
[GitHub Actions](https://docs.github.com/en/actions) is a suite of features in GitHub to automate your software development workflows in the same place you store code and collaborate on pull requests and issues.
-Use the [Deploy to Azure Container Instances](https://github.com/azure/aci-deploy) GitHub action to automate deployment of a single container to Azure Container Instances. The action allows you to set properties for a container instance similar to those in the [az container create][az-container-create] command.
+Use the [Deploy to Azure Container Instances](https://github.com/azure/aci-deploy) GitHub Actions action to automate deployment of a single container to Azure Container Instances. The action allows you to set properties for a container instance similar to those in the [az container create][az-container-create] command.
This article shows how to set up a workflow in a GitHub repo that performs the following actions:
This article shows two ways to set up the workflow:
* [Use CLI extension](#use-deploy-to-azure-extension) - Use the `az container app up` command in the [Deploy to Azure](https://github.com/Azure/deploy-to-azure-cli-extension) extension in the Azure CLI. This command streamlines creation of the GitHub workflow and deployment steps. > [!IMPORTANT]
-> The GitHub action for Azure Container Instances is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
+> The GitHub Actions action for Azure Container Instances is currently in preview. Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
## Prerequisites
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| Germany West Central | 4 | 16 | N/A | N/A | 50 | N/A | Y | | Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Japan West | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Jio India West | 4 | 16 | N/A | N/A | 50 | N/A | N |
| Korea Central | 4 | 16 | N/A | N/A | 50 | N/A | N | | North Central US | 2 | 3.5 | 4 | 16 | 50 | K80, P100, V100 | N | | North Europe | 4 | 16 | 4 | 16 | 50 | K80 | Y | | Norway East | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Norway West | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| South Africa North | 4 | 16 | N/A | N/A | 50 | N/A | N |
| South Central US | 4 | 16 | 4 | 16 | 50 | V100 | Y | | Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | Y | | South India | 4 | 16 | N/A | N/A | 50 | K80 | N |
+| Sweden Central | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Sweden South | 4 | 16 | N/A | N/A | 50 | N/A | N |
| Switzerland North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Switzerland West | 4 | 16 | N/A | N/A | 50 | N/A | N |
| UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y| | UK West | 4 | 16 | N/A | N/A | 50 | N/A | N | | UAE North | 4 | 16 | N/A | N/A | 50 | N/A | N | | West Central US| 4 | 16 | 4 | 16 | 50 | N/A | N | | West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y |
+| West India | 4 | 16 | N/A | N/A | 50 | N/A | N |
| West US | 4 | 16 | 4 | 16 | 50 | N/A | N | | West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y |
+| West US 3 | 4 | 16 | N/A | N/A | 50 | N/A | N |
The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview).
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* Currently, only Linux containers are supported in a container group deployed to a virtual network. * To deploy container groups to a subnet, the subnet can't contain other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet.
+* To deploy container groups to a subnet, the subnet and the container group must be in the same Azure subscription.
* You can't use a [managed identity](container-instances-managed-identity.md) in a container group deployed to a virtual network. * You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network. * Due to the additional networking resources involved, deployments to a virtual network are typically slower than deploying a standard container instance. * Outbound connection to port 25 is not supported at this time. * If you are connecting your container group to an Azure Storage Account, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to that resource.
-* [IPv6 addresses](../virtual-network/ip-services/ipv6-overview.md) are not supported at this time.
+* [IPv6 addresses](../virtual-network/ip-services/ipv6-overview.md) are not supported at this time.
+* Depending on your subscription type, [certain ports may be blocked](/azure/virtual-network/network-security-groups-overview#azure-platform-considerations).
## Required network resources
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Run `helm registry login` to authenticate with the registry. You may pass [regi
## Push chart to registry as OCI artifact
-Run the `helm push` command in the Helm 3 CLI to push the chart archive to the fully qualified target repository. In the following example, the target repository namespace is `helm/hello-world`, and the chart is tagged `0.1.0`:
+Run the `helm push` command in the Helm 3 CLI to push the chart archive to the fully qualified target repository. Chart names must contain only lowercase letters and numbers, with words separated by hyphens, as in `hello-world`. In the following example, the target repository namespace is `helm/hello-world`, and the chart is tagged `0.1.0`:
```console
helm push hello-world-0.1.0.tgz oci://$ACR_NAME.azurecr.io/helm
```
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
## Prerequisites
-> * Install, create and sign in to [ORAS artifact enabled registry](/articles/container-registry/container-registry-oras-artifacts#sign-in-with-oras-1)
+> * Install, create and sign in to [ORAS artifact enabled registry](/azure/container-registry/container-registry-oras-artifacts#create-oras-artifact-enabled-registry)
> * Create or use an [Azure Key Vault](/azure/key-vault/general/quick-create-cli) >* This tutorial can be run in the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/)
In this tutorial:
1. Configure AKV resource names ```bash
- # Name of the existing AKV Resource Group
- AKV_RG=myResourceGroup
# Name of the existing Azure Key Vault used to store the signing keys AKV_NAME=<your-unique-keyvault-name> # New desired key name used to sign and verify KEY_NAME=wabbit-networks-io KEY_SUBJECT_NAME=wabbit-networks.io
+ CERT_PATH=./${KEY_NAME}.pem
``` 2. Configure ACR and image resource names
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
1. Create a certificate policy file
- Once the certificate policy file is executed as below, it creates a valid signing certificate compatible with **notation** in AKV.
+ Once the certificate policy file is executed as below, it creates a valid signing certificate compatible with **notation** in AKV. The EKU listed is for code-signing, but isn't required for notation to sign artifacts.
```bash cat <<EOF > ./my_policy.json
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
}, "x509CertificateProperties": { "ekus": [
- "1.3.6.1.5.5.7.3.1",
- "1.3.6.1.5.5.7.3.2",
"1.3.6.1.5.5.7.3.3" ], "subject": "CN=${KEY_SUBJECT_NAME}",
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
1. Get the Key ID for the certificate ```bash
- KEY_ID=$(az keyvault certificate show -n $KEY_NAME --vault-name $AKV_NAME --query 'id' -otsv)
+ KEY_ID=$(az keyvault certificate show -n $KEY_NAME --vault-name $AKV_NAME --query 'kid' -o tsv)
``` 4. Download public certificate ```bash
- az keyvault certificate download --file $CERT_PATH --id $KEY_ID --encoding PEM
+ CERT_ID=$(az keyvault certificate show -n $KEY_NAME --vault-name $AKV_NAME --query 'id' -o tsv)
+ az keyvault certificate download --file $CERT_PATH --id $CERT_ID --encoding PEM
``` 5. Add the Key ID to the keys and certs
cosmos-db Audit Control Plane Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-control-plane-logs.md
AzureDiagnostics
## Next steps
+* [Prevent Azure Cosmos DB resources from being deleted or changed](resource-locks.md)
* [Explore Azure Monitor for Azure Cosmos DB](../azure-monitor/insights/cosmosdb-insights-overview.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) * [Monitor and debug with metrics in Azure Cosmos DB](use-metrics.md)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Depending on the current RU/s provisioned and resource settings, each resource c
| | | | Maximum RU/s per container | 5,000 | | Maximum storage across all items per (logical) partition | 20 GB |
-| Maximum number of distinct (logical) partition keys | Unlimited |
-| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 1 TB |
-| Maximum storage per container (Cassandra API)| 1 TB |
+| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB (default)<sup>1</sup> |
+| Maximum storage per container (Cassandra API)| 30 GB (default)<sup>1</sup> |
+ <sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
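Registering a preview feature follows the standard Azure CLI pattern sketched below; the feature name shown is a placeholder, so take the exact name from the preview features article linked above.

```azurecli
# Placeholder feature name; look up the real name in the preview features article
az feature register \
  --namespace "Microsoft.DocumentDB" \
  --name "<serverless-1tb-container-preview-feature-name>"
```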
cosmos-db Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-javascript.md
Title: Quickstart - Azure Cosmos DB MongoDB API for JavaScript with mongoDB drier
+ Title: Quickstart - Azure Cosmos DB MongoDB API for JavaScript with MongoDB driver
description: Learn how to build a JavaScript app to manage Azure Cosmos DB MongoDB API account resources in this quickstart.
ms.devlang: javascript Previously updated : 06/10/2022 Last updated : 06/21/2022
-# Quickstart: Azure Cosmos DB MongoDB API for JavaScript with mongoDB driver
+# Quickstart: Azure Cosmos DB MongoDB API for JavaScript with MongoDB driver
[!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
-Get started with the mongoDB npm package to create databases, collections, and docs within your account. Follow these steps to install the package and try out example code for basic tasks.
+Get started with the MongoDB npm package to create databases, collections, and docs within your Cosmos DB resource. Follow these steps to install the package and try out example code for basic tasks.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
-[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [mongodb Package (NuGet)](https://www.npmjs.com/package/mongodb)
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB package (npm)](https://www.npmjs.com/package/mongodb)
## Prerequisites
Get started with the mongoDB npm package to create databases, collections, and d
## Setting up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses the mongoDB npm package.
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the MongoDB npm package.
### Create an Azure Cosmos DB account
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
resourceGroupName="msdocs-cosmos-javascript-quickstart-rg" location="westus"
- # Variable for account name with a randomnly generated suffix
+ # Variable for account name with a randomly generated suffix
let suffix=$RANDOM*$RANDOM accountName="msdocs-javascript-$suffix" ```
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
1. On the **New** page, search for and select **Azure Cosmos DB**.
-1. On the **Select API option** page, select the **Create** option within the **MongoDB** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the MongoDB API](/azure/cosmos-db/mongodb/introduction.md).
+1. On the **Select API option** page, select the **Create** option within the **MongoDB** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the MongoDB API](/azure/cosmos-db/mongodb/mongodb-introduction).
:::image type="content" source="media/quickstart-javascript/cosmos-api-choices.png" lightbox="media/quickstart-javascript/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
### Create a new JavaScript app
-Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command specifying the **console** template.
+Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
```console npm init
You'll use the following MongoDB classes to interact with these resources:
## Code examples - [Authenticate the client](#authenticate-the-client)-- [Create a database](#create-a-database)-- [Create a collection](#create-a-collection)-- [Create an doc](#create-a-doc)
+- [Get database instance](#get-database-instance)
+- [Get collection instance](#get-collection-instance)
+- [Chained instances](#chained-instances)
+- [Create an index](#create-an-index)
+- [Create a doc](#create-a-doc)
- [Get a doc](#get-a-doc) - [Query docs](#query-docs) The sample code described in this article creates a database named ``adventureworks`` with a collection named ``products``. The ``products`` collection is designed to contain product details such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
-For this procedure, the database will not use sharding or a partition key.
+For this procedure, the database will not use sharding.
### Authenticate the client
The following code snippets should be added into the *main* function in order to
### Connect to the database
-Use the [``MongoClient.connect``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) method to connect to your Cosmos DB API for MongoDB resource. This method will return a reference to the existing or newly created database.
+Use the [``MongoClient.connect``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) method to connect to your Cosmos DB API for MongoDB resource. This method returns a reference to the database.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="connect_client":::
-### Create a database
+### Get database instance
-Use the [``MongoClient.db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#db) method to create a new database if it doesn't already exist. This method will return a reference to the existing or newly created database.
+Use the [``MongoClient.db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#db) method to get a reference to a database.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_database" :::
-### Create a collection
+### Get collection instance
-The [``Db.collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection) creates a new collection if it doesn't already exist. This method returns a reference to the collection.
+The [``MongoClient.Db.collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection) method gets a reference to a collection.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_collection":::
+### Chained instances
+
+You can chain the client, database, and collection together. This is more convenient if you need to access multiple databases or collections.
+
+```javascript
+const result = await client.db('adventureworks').collection('products').updateOne(query, update, options);
+```
+
+### Create an index
+
+Use the [``Collection.createIndex``](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#createIndex) method to create an index on the document properties that you intend to use for sorting with MongoDB's [``FindCursor.sort``](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html#sort) method.
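As a minimal sketch, assuming the docs use a `name` property that you want to sort on later:

```javascript
// Create an ascending index on the assumed "name" property
await collection.createIndex({ name: 1 });
```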
++ ### Create a doc
-Create a doc with the *product* properties for the adventureworks database:
- * An _id property for the unique identifier of the product.
- * A *category* property. This can be used as the logical partition key.
- * A *name* property.
- * An inventory *quantity* property.
- * A *sale* property, indicating whether the product is on sale.
+Create a doc with the *product* properties for the `adventureworks` database:
+
+* An _id property for the unique identifier of the product.
+* A *category* property. This can be used as the logical partition key.
+* A *name* property.
+* An inventory *quantity* property.
+* A *sale* property, indicating whether the product is on sale.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_doc":::
In Azure Cosmos DB, you can perform a less-expensive [point read](https://devblo
### Query docs
-After you insert a doc, you can run a query to get all docs that match a specific filter. This example finds all docs that match a specific category: `gear-surf-surfboards`. Once the query is defined, call [``Collection.find``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#find) to get a result.
+After you insert a doc, you can run a query to get all docs that match a specific filter. This example finds all docs that match a specific category: `gear-surf-surfboards`. Once the query is defined, call [``Collection.find``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#find) to get a [``FindCursor``](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html) result. Convert the cursor into an array to use JavaScript array methods.
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="query_docs" :::
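As a sketch of the pattern, assuming the collection and index from the earlier steps:

```javascript
// Find docs in one category, sort on the indexed "name" property,
// and convert the FindCursor into an array
const query = { category: 'gear-surf-surfboards' };
const results = await collection.find(query).sort({ name: 1 }).toArray();
console.log(`${results.length} filtered doc(s) found`);
```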
+Troubleshooting:
+
+* If you get an error such as `The index path corresponding to the specified order-by item is excluded.`, make sure you [created the index](#create-an-index).
+ ## Run the code
-This app creates a MongoDB API database and collection. The example then creates a doc and then reads the exact same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs metadata to the console about the steps it has performed.
+This app creates a MongoDB API database and collection, creates a doc, and then reads the same doc back. Finally, the example issues a query that should return only that single doc. At each step, the example outputs information to the console about the operations it has performed.
To run the app, use a terminal to navigate to the application directory and run the application.
-```dotnetcli
+```console
node index.js ``` The output of the app should be similar to this example:
-```output
-New database: adventureworks
-New collection: products
-Created doc: 68719518391 [gear-surf-surfboards]
-Read doc: 68719518391 [gear-surf-surfboards]
-1 filtered doc: 68719518391 [gear-surf-surfboards]
-done
-```
## Clean up resources
Remove-AzResourceGroup @parameters
1. Navigate to the resource group you previously created in the Azure portal. > [!TIP]
- > In this quickstart, we recommended the name ``msdocs-cosmos-dotnet-quickstart-rg``.
+ > In this quickstart, we recommended the name ``msdocs-cosmos-javascript-quickstart-rg``.
1. Select **Delete resource group**. :::image type="content" source="media/quickstart-javascript/delete-resource-group-option.png" lightbox="media/quickstart-javascript/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
Remove-AzResourceGroup @parameters
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account, create a database, and create a collection using the mongoDB driver. You can now dive deeper into the Cosmos DB MongoDB API to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources.
+In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account, create a database, and create a collection using the MongoDB driver. You can now dive deeper into the Cosmos DB MongoDB API to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources.
> [!div class="nextstepaction"] > [Migrate MongoDB to Azure Cosmos DB API for MongoDB offline](/azure/dms/tutorial-mongodb-cosmos-db?toc=%2Fazure%2Fcosmos-db%2Ftoc.json%3Ftoc%3D%2Fazure%2Fcosmos-db%2Ftoc.json)
cosmos-db Resource Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md
When applying a lock to an Azure Cosmos DB resource, use the following formats:
] ```
+## Samples
+
+Manage resource locks for Azure Cosmos DB:
+
+- Cassandra API keyspace and table [Azure CLI](scripts/cli/cassandra/lock.md) | [Azure PowerShell](scripts/powershell/cassandra/lock.md)
+- Gremlin API database and graph [Azure CLI](scripts/cli/gremlin/lock.md) | [Azure PowerShell](scripts/powershell/gremlin/lock.md)
+- MongoDB API database and collection [Azure CLI](scripts/cli/mongodb/lock.md) | [Azure PowerShell](scripts/powershell/mongodb/lock.md)
+- Core (SQL) API database and container [Azure CLI](scripts/cli/sql/lock.md) | [Azure PowerShell](scripts/powershell/sql/lock.md)
+- Table API table [Azure CLI](scripts/cli/table/lock.md) | [Azure PowerShell](scripts/powershell/table/lock.md)
+ ## Next steps - [Overview of Azure Resource Manager Locks](../azure-resource-manager/management/lock-resources.md)
+- [How to audit Azure Cosmos DB control plane operations](audit-control-plane-logs.md)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
Title: Create a Core (SQL) API database and container with autoscale for Azure Cosmos DB
-description: Create a Core (SQL) API database and container with autoscale for Azure Cosmos DB
+ Title: Azure Cosmos DB SQL API account, database, and container with autoscale
+description: Use Azure CLI to create an Azure Cosmos DB Core (SQL) API account, database, and container with autoscale.
Previously updated : 02/21/2022 Last updated : 06/22/2022+
-# Create an Azure Cosmos Core (SQL) API account, database and container with autoscale using Azure CLI
+# Create an Azure Cosmos DB SQL API account, database, and container with autoscale
[!INCLUDE[appliesto-sql-api](../../../includes/appliesto-sql-api.md)]
-The script in this article demonstrates creating a SQL API database and container with autoscale.
+The script in this article creates an Azure Cosmos DB Core (SQL) API account, database, and container with autoscale.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires version 2.0.73 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This script requires Azure CLI version 2.0.73 or later.
-## Sample script
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ Cloud Shell is automatically authenticated under the account you used to sign in to the Azure portal. You can use [az account set](/cli/azure/account#az-account-set) to sign in with a different subscription, replacing `<subscriptionId>` with your Azure subscription ID.
+
+ ```azurecli
+ subscription="<subscriptionId>" # add subscription here
+
+ az account set -s $subscription # ...or use 'az login'
+ ```
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find the Azure CLI version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. If prompted, [install Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). If you're running Windows or macOS, consider [running Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+ If you're using a local installation, sign in to Azure by running [az login](/cli/azure/reference-index#az-login) and following the prompts. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+## Sample script
-### Run the script
+Run the following script to create an Azure resource group, an Azure Cosmos DB SQL API account and database, and a container with autoscale. The resources might take a while to create.
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/sql/autoscale.sh" id="FullScript":::
+This script uses the following commands:
+
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) creates an Azure Cosmos DB account for SQL API.
+- [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) creates an Azure Cosmos SQL (Core) database.
+- [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) with `--max-throughput 1000` creates an Azure Cosmos SQL (Core) container with autoscale capability.
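As a minimal sketch of the autoscale step, assuming the shell variables from the script above and a placeholder partition key path:

```azurecli
# "/partitionKey" is a placeholder partition key path
az cosmosdb sql container create \
  --resource-group $resourceGroup \
  --account-name $account \
  --database-name $database \
  --name $container \
  --partition-key-path "/partitionKey" \
  --max-throughput 1000
```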
+ ## Clean up resources
+If you no longer need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains. These resources include the Azure Cosmos DB account, database, and container. The resources might take a while to delete.
```azurecli az group delete --name $resourceGroup ```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb sql database create](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) | Creates an Azure Cosmos SQL (Core) database. |
-| [az cosmosdb sql container create](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) | Creates an Azure Cosmos SQL (Core) container. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
- ## Next steps
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+- [Throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources](throughput.md)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
Title: Create a Table API table with autoscale for Azure Cosmos DB
-description: Create a Table API table with autoscale for Azure Cosmos DB
+ Title: Create an Azure Cosmos DB Table API account and table with autoscale
+description: Use Azure CLI to create a Table API account and table with autoscale for Azure Cosmos DB.
Previously updated : 02/21/2022 Last updated : 06/22/2022+
-# Create an Azure Cosmos Table API account and table with autoscale using Azure CLI
+# Use Azure CLI to create an Azure Cosmos DB Table API account and table with autoscale
[!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
-The script in this article demonstrates creating a Table API table with autoscale.
+The script in this article creates an Azure Cosmos DB Table API account and table with autoscale.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+- This script requires Azure CLI version 2.12.1 or later.
+
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ Cloud Shell is automatically authenticated under the account you used to sign in to the Azure portal. You can use [az account set](/cli/azure/account#az-account-set) to sign in with a different subscription, replacing `<subscriptionId>` with your Azure subscription ID.
+
+ ```azurecli
+ subscription="<subscriptionId>" # add subscription here
+
+ az account set -s $subscription # ...or use 'az login'
+ ```
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find the Azure CLI version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. If prompted, [install Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). If you're running Windows or macOS, consider [running Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+ If you're using a local installation, sign in to Azure by running [az login](/cli/azure/reference-index#az-login) and following the prompts. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
## Sample script
+Run the following script to create an Azure resource group, an Azure Cosmos DB Table API account, and Table API table with autoscale capability. The resources might take a while to create.
+
+ :::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/table/autoscale.sh" id="FullScript":::
-### Run the script
+This script uses the following commands:
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with `--capabilities EnableTable` creates an Azure Cosmos DB account for Table API.
+- [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) with `--max-throughput 1000` creates an Azure Cosmos DB Table API table with autoscale capabilities.
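As a minimal sketch of the table creation step, assuming the shell variables from the script above:

```azurecli
az cosmosdb table create \
  --resource-group $resourceGroup \
  --account-name $account \
  --name $table \
  --max-throughput 1000
```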
## Clean up resources
+If you no longer need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains. These resources include the Azure Cosmos DB account and table. The resources might take a while to delete.
```azurecli az group delete --name $resourceGroup ```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) | Creates an Azure Cosmos Table API table. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
- ## Next steps
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+- [Throughput (RU/s) operations with Azure CLI for a table for Azure Cosmos DB Table API](throughput.md)
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
Title: Create resource lock for a Azure Cosmos DB Table API table
-description: Create resource lock for a Azure Cosmos DB Table API table
+ Title: Azure Cosmos DB Table API resource lock operations
+description: Use Azure CLI to create, list, show properties for, and delete resource locks for an Azure Cosmos DB Table API table.
Previously updated : 02/21/2022 Last updated : 06/16/2022+
-# Create resource lock for a Azure Cosmos DB Table API table using Azure CLI
+# Use Azure CLI for resource lock operations on Azure Cosmos DB Table API tables
[!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
-The script in this article demonstrates performing resource lock operations for a Table API table.
+The script in this article demonstrates performing resource lock operations for a Table API table.
> [!IMPORTANT]
->
-> To create resource locks, you must have membership in the owner role in the subscription.
->
-> Resource locks do not work for changes made by users connecting Cosmos DB Table SDK, Azure Storage Table SDK, any tools that connect via account keys, or the Azure Portal unless the Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+> To enable resource locking, the Azure Cosmos DB account must have the `disableKeyBasedMetadataWriteAccess` property enabled. This property prevents any changes to resources from clients that connect via account keys, such as the Cosmos DB Table SDK, Azure Storage Table SDK, or Azure portal. For more information, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
+## Prerequisites
+- You need an [Azure Cosmos DB Table API account, database, and table created](create.md). [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+ > [!IMPORTANT]
+ > To create or delete resource locks, you must have the **Owner** role in your Azure subscription.
+
+- This script requires Azure CLI version 2.12.1 or later.
+
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+ Cloud Shell is automatically authenticated under the account you used to sign in to the Azure portal. You can use [az account set](/cli/azure/account#az-account-set) to sign in with a different subscription, replacing `<subscriptionId>` with your Azure subscription ID.
+
+ ```azurecli
+ subscription="<subscriptionId>" # add subscription here
+
+ az account set -s $subscription # ...or use 'az login'
+ ```
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find the Azure CLI version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. If prompted, [install Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). If you're running Windows or macOS, consider [running Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+ If you're using a local installation, sign in to Azure by running [az login](/cli/azure/reference-index#az-login) and following the prompts. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
## Sample script
+The following script uses Azure CLI [az lock](/cli/azure/lock) commands to manipulate resource locks on your Azure Cosmos DB Table API table. The script needs the `resourceGroup`, `account`, and `table` names for the Azure Cosmos DB account and table you created.
-### Run the script
+- [az lock create](/cli/azure/lock#az-lock-create) creates a `CanNotDelete` resource lock on the table.
+- [az lock list](/cli/azure/lock#az-lock-list) lists all the lock information for your Azure Cosmos DB Table account.
+- [az lock delete](/cli/azure/lock#az-lock-delete) uses [az lock show](/cli/azure/lock#az-lock-show) to get the `id` of the lock on your table, and then uses the `lockid` property to delete the lock.
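If you want to preview the shape of these commands before running the published script below, here's a minimal sketch. The `resourceGroup`, `account`, `table`, and `lockName` values are assumed placeholders, and the exact flags may differ from the sample script:

```azurecli
# Assumed placeholder names for this sketch
resourceGroup="<resource-group>"
account="<cosmos-account-name>"
table="<table-name>"
lockName="$account-$table-Lock"

# Create a CanNotDelete lock on the table, a child resource of the account
az lock create --name $lockName \
    --resource-group $resourceGroup \
    --namespace Microsoft.DocumentDB \
    --resource-type tables \
    --parent databaseAccounts/$account \
    --resource $table \
    --lock-type CanNotDelete

# List all locks on the Azure Cosmos DB account
az lock list --resource-group $resourceGroup \
    --namespace Microsoft.DocumentDB \
    --resource-type databaseAccounts \
    --resource $account

# Look up the lock id for the table lock, then delete the lock by id
lockid=$(az lock show --name $lockName \
    --resource-group $resourceGroup \
    --namespace Microsoft.DocumentDB \
    --resource-type tables \
    --parent databaseAccounts/$account \
    --resource $table \
    --query id --output tsv)
az lock delete --ids $lockid
```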
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/table/lock.sh" id="FullScript"::: ## Clean up resources
+If you no longer need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains. These resources include the Azure Cosmos DB account and table. The resources might take a while to delete.
```azurecli
az group delete --name $resourceGroup
```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az lock create](/cli/azure/lock#az-lock-create) | Creates a lock. |
-| [az lock list](/cli/azure/lock#az-lock-list) | List lock information. |
-| [az lock show](/cli/azure/lock#az-lock-show) | Show properties of a lock. |
-| [az lock delete](/cli/azure/lock#az-lock-delete) | Deletes a lock. |
## Next steps

-- [Lock resources to prevent unexpected changes](../../../../azure-resource-manager/management/lock-resources.md)
-- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
-- [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+- [Prevent Azure Cosmos DB resources from being deleted or changed](../../../resource-locks.md)
+- [Lock resources to prevent unexpected changes](/azure/azure-resource-manager/management/lock-resources)
+- [How to audit Azure Cosmos DB control plane operations](../../../audit-control-plane-logs.md)
+- [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+- [Azure Cosmos DB CLI GitHub repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
Title: Create a Table API serverless account and table for Azure Cosmos DB
-description: Create a Table API serverless account and table for Azure Cosmos DB
+ Title: Create an Azure Cosmos DB Table API serverless account and table
+description: Use Azure CLI to create a Table API serverless account and table for Azure Cosmos DB.
Previously updated : 02/21/2022 Last updated : 06/16/2022+
-# Create an Azure Cosmos Table API serverless account and table using Azure CLI
+# Use Azure CLI to create an Azure Cosmos DB Table API serverless account and table
[!INCLUDE[appliesto-table-api](../../../includes/appliesto-table-api.md)]
-The script in this article demonstrates creating a Table API serverless account and table.
+The script in this article creates an Azure Cosmos DB Table API serverless account and table.
+## Prerequisites
+- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
-- This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
+- This script requires Azure CLI version 2.12.1 or later.
-## Sample script
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+
+ [![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
+
+   Cloud Shell is automatically authenticated under the account you used to sign in to the Azure portal. You can use [az account set](/cli/azure/account#az-account-set) to select a different subscription, replacing `<subscriptionId>` with your Azure subscription ID.
+
+ ```azurecli
+ subscription="<subscriptionId>" # add subscription here
+
+ az account set -s $subscription # ...or use 'az login'
+ ```
+
+ - If you prefer, you can [install Azure CLI](/cli/azure/install-azure-cli) to run the script locally. Run [az version](/cli/azure/reference-index?#az-version) to find the Azure CLI version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) if you need to upgrade. If prompted, [install Azure CLI extensions](/cli/azure/azure-cli-extensions-overview). If you're running Windows or macOS, consider [running Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+ If you're using a local installation, sign in to Azure by running [az login](/cli/azure/reference-index#az-login) and following the prompts. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+## Sample script
-### Run the script
+Run the following script to create an Azure resource group, an Azure Cosmos DB Table API serverless account, and a Table API table. The resources might take a while to create.
:::code language="azurecli" source="~/azure_cli_scripts/cosmosdb/table/serverless.sh" id="FullScript":::
+This script uses the following commands:
+
+- [az group create](/cli/azure/group#az-group-create) creates a resource group to store all resources.
+- [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) with `--capabilities EnableTable EnableServerless` creates an Azure Cosmos DB serverless account for Table API.
+- [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) creates an Azure Cosmos DB Table API table.
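As a rough preview of how those commands fit together, here's a minimal sketch. The `resourceGroup`, `account`, and `table` values are assumed placeholders; the published script above is the authoritative version:

```azurecli
# Assumed placeholder names for this sketch
resourceGroup="<resource-group>"
account="<cosmos-account-name>"   # must be globally unique and lowercase
table="<table-name>"

# Create a resource group
az group create --name $resourceGroup --location westus

# Create a Table API account in serverless capacity mode
az cosmosdb create --name $account \
    --resource-group $resourceGroup \
    --capabilities EnableTable EnableServerless

# Create a table; serverless accounts don't take a throughput setting
az cosmosdb table create --account-name $account \
    --resource-group $resourceGroup \
    --name $table
```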
+ ## Clean up resources
+If you no longer need the resources you created, use the [az group delete](/cli/azure/group#az-group-delete) command to delete the resource group and all resources it contains. These resources include the Azure Cosmos DB account and table. The resources might take a while to delete.
```azurecli
az group delete --name $resourceGroup
```
-## Sample reference
-
-This script uses the following commands. Each command in the table links to command specific documentation.
-
-| Command | Notes |
-|||
-| [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) | Creates an Azure Cosmos DB account. |
-| [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) | Creates an Azure Cosmos Table API table. |
-| [az group delete](/cli/azure/resource#az-resource-delete) | Deletes a resource group including all nested resources. |
- ## Next steps
-For more information on the Azure Cosmos DB CLI, see [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb).
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
For more information related to schema inference, see the full [schema inference
The Azure Cosmos DB Spark 3 OLTP Connector for SQL API has a complete configuration reference that provides additional and advanced settings for writing and querying data, serialization, streaming using change feed, partitioning and throughput management, and more. For a complete listing with details, see the [Spark Connector Configuration Reference](https://aka.ms/azure-cosmos-spark-3-config) on GitHub.
+## Migrate to Spark 3 Connector
+
+If you're using the older Spark 2.4 connector, see the [Spark 3 connector migration guide](https://github.com/Azure/azure-sdk-for-jav).
## Next steps

* Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: [Release notes and resources](sql-api-sdk-java-spark-v3.md)
* Learn more about [Apache Spark](https://spark.apache.org/).
+* Check out more [samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
description: Learn how Microsoft Defender provides advanced threat protection on
Previously updated : 02/03/2022 Last updated : 06/21/2022
-# Microsoft Defender for Cosmos DB (Preview)
+# Microsoft Defender for Cosmos DB
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]

Microsoft Defender for Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
For a full investigation experience of the security alerts, we recommended enabl
Microsoft Defender for Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:

-- **Access from unusual locations**: This alert is triggered when there is a change in the access pattern to an Azure Cosmos DB account, where someone has connected to the Azure Cosmos DB endpoint from an unusual geographical location. In some cases, the alert detects a legitimate action, meaning a new application or developer's maintenance operation. In other cases, the alert detects a malicious action from a former employee, external attacker, etc.
+- **Potential SQL injection attacks**: Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
-- **Unusual data extraction**: This alert is triggered when a client is extracting an unusual amount of data from an Azure Cosmos DB account. It can be the symptom of some data exfiltration performed to transfer all the data stored in the account to an external data store.
+- **Anomalous database access patterns**: For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations.
+
+- **Suspicious database activity**: For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns.
## Configure Microsoft Defender for Cosmos DB
Use the following PowerShell cmdlets:
# [ARM template](#tab/arm-template)

Use an Azure Resource Manager (ARM) template to set up Azure Cosmos DB with Azure Defender protection enabled. For more information, see
-[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
+[Create a Cosmos DB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
# [Azure Policy](#tab/azure-policy)
Use an Azure Policy to enable Azure Defender for Cosmos DB.
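Although this article covers PowerShell, ARM template, and Azure Policy options, Microsoft Defender plans can also be managed with Azure CLI. The following sketch isn't from this article, and it assumes `CosmosDbs` is the Defender plan name for Azure Cosmos DB; verify the name with `az security pricing list` before relying on it:

```azurecli
# List the Defender plan names and tiers available on the subscription
az security pricing list --output table

# Enable the Defender plan for Azure Cosmos DB
# (assumes 'CosmosDbs' is the plan name; confirm it in the list output)
az security pricing create --name CosmosDbs --tier standard
```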
When Azure Cosmos DB activity anomalies occur, a security alert is triggered with information about the suspicious security event.
- From Microsoft Defender for Cloud, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. The following image shows an example of alert details provided in Defender for Cloud.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-alert-details.png" alt-text="Threat details":::
-
-An email notification is also sent with the alert details and recommended actions. The following image shows an example of an alert email.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-alert.png" alt-text="Alert details":::
+ From Microsoft Defender for Cloud, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. An email notification is also sent with the alert details and recommended actions.
## Azure Cosmos DB alerts
An email notification is also sent with the alert details and recommended action
## Next steps
+* Learn more about [Microsoft Defender for Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
-* Learn more about [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/odbc-driver.md
Title: Connect to Azure Cosmos DB using BI analytics tools
-description: Learn how to use the Azure Cosmos DB ODBC driver to create tables and views so that normalized data can be viewed in BI and data analytics software.
+ Title: Use Azure Cosmos DB ODBC driver to connect to BI and analytics tools
+description: Use the Azure Cosmos DB ODBC driver to create normalized data tables and views for SQL queries, analytics, BI, and visualizations.
Previously updated : 10/04/2021- Last updated : 06/21/2022+
-# Connect to Azure Cosmos DB from BI and Data Integration tools with the ODBC driver
+# Use the Azure Cosmos DB ODBC driver to connect to BI and data analytics tools
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-The Azure Cosmos DB ODBC driver enables you to connect to Azure Cosmos DB using solutions such as SQL Server Integration Services, Alteryx, QlikSense , Tableau, and other analytics, BI, and data integration tools so that you can analyze, move, transform, and create visualizations of your Azure Cosmos DB data.
-
-The Azure Cosmos DB ODBC driver is ODBC 3.8 compliant and supports ANSI SQL-92 syntax. The driver offers rich features to help you renormalize data in Azure Cosmos DB. Using the driver, you can represent data in Azure Cosmos DB as tables and views. The driver enables you to perform SQL operations against the tables and views including group by queries, inserts, updates, and deletes.
-
-## Important information about this ODBC connector.
+This article walks you through installing and using the Azure Cosmos DB ODBC driver to create normalized tables and views for your Azure Cosmos DB data. You can query the normalized data with SQL queries, or import the data into Power BI or other BI and analytics software to create reports and visualizations.
-The current ODBC driver does not support aggregate pushdowns and has known issues with many analytics tools. We are working on building a new version and it's release is expected for the middle of 2022. Please check the alternatives below:
+Azure Cosmos DB is a schemaless database, which enables rapid application development and lets you iterate on data models without being confined to a strict schema. A single Azure Cosmos database can contain JSON documents of various structures. To analyze or report on this data, you might need to flatten the data to fit into a schema.
- * Azure Synapse Link is the preferred analytics solution for Azure Cosmos DB. With Synapse Link and Synapse SQL serverless pools, you can use any BI tool to extract near real time insights from your Cosmos DB data, SQL ou MongoDB APIs. For more information, please check our [documentation](../synapse-link.md).
- * If you are using Power BI, please check our native connector [documentation](powerbi-visualize.md).
- * If you are using QlikSense, please check our how-to [documentation](../visualize-qlik-sense.md).
+The ODBC driver normalizes Azure Cosmos DB data into tables and views that fit your data analytics and reporting needs. The normalized schemas let you use ODBC-compliant tools to access the data. The schemas have no impact on the underlying data, and don't require developers to adhere to them. The ODBC driver helps make Azure Cosmos DB databases useful for data analysts as well as development teams.
-> [!NOTE]
-> Connecting to Azure Cosmos DB with the ODBC driver is currently supported for Azure Cosmos DB SQL API accounts only.
+You can do SQL operations against the normalized tables and views, including group by queries, inserts, updates, and deletes. The driver is ODBC 3.8 compliant and supports ANSI SQL-92 syntax.
+You can also connect the normalized Azure Cosmos DB data to other software solutions, such as SQL Server Integration Services (SSIS), Alteryx, QlikSense, Tableau, and other analytics software, BI, and data integration tools. You can use those solutions to analyze, move, transform, and create visualizations with your Azure Cosmos DB data.
-## Why do I need to normalize my data?
-Azure Cosmos DB is a schemaless database, which enables rapid application development and the ability to iterate on data models without being confined to a strict schema. A single Azure Cosmos database can contain JSON documents of various structures. This is great for rapid application development, but when you want to analyze and create reports of your data using data analytics and BI tools, the data often needs to be flattened and adhere to a specific schema.
+> [!IMPORTANT]
+> - Connecting to Azure Cosmos DB with the ODBC driver is currently supported for Azure Cosmos DB Core (SQL) API only.
+> - The current ODBC driver doesn't support aggregate pushdowns, and has known issues with some analytics tools. Until a new version is released, you can use one of the following alternatives:
+> - [Azure Synapse Link](../synapse-link.md) is the preferred analytics solution for Azure Cosmos DB. With Azure Synapse Link and Azure Synapse SQL serverless pools, you can use any BI tool to extract near real-time insights from Azure Cosmos DB SQL or MongoDB API data.
+> - For Power BI, you can use the [Azure Cosmos DB connector for Power BI](powerbi-visualize.md).
+> - For Qlik Sense, see [Connect Qlik Sense to Azure Cosmos DB](../visualize-qlik-sense.md).
-This is where the ODBC driver comes in. By using the ODBC driver, you can now renormalize data in Azure Cosmos DB into tables and views that fit your data analytics and reporting needs. The renormalized schemas have no impact on the underlying data and do not confine developers to adhere to them. Rather, they enable you to leverage ODBC-compliant tools to access the data. So, now your Azure Cosmos database will not only be a favorite for your development team, but your data analysts will love it too.
-
-Let's get started with the ODBC driver.
-
-## <a id="install"></a>Step 1: Install the Azure Cosmos DB ODBC driver
+<a id="install"></a>
+## Install the ODBC driver and connect to your database
1. Download the drivers for your environment:
Let's get started with the ODBC driver.
|[Microsoft Azure Cosmos DB ODBC 32x64-bit.msi](https://aka.ms/cosmos-odbc-32x64) for 32-bit on 64-bit Windows| 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, Windows Vista, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2003.|
|[Microsoft Azure Cosmos DB ODBC 32-bit.msi](https://aka.ms/cosmos-odbc-32x32) for 32-bit Windows|32-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows XP, and Windows Vista.|
- Run the msi file locally, which starts the **Microsoft Azure Cosmos DB ODBC Driver Installation Wizard**.
+1. Run the *.msi* file locally, which starts the **Microsoft Azure Cosmos DB ODBC Driver Installation Wizard**.
-1. Complete the installation wizard using the default input to install the ODBC driver.
+1. Complete the installation wizard using the default input.
-1. Open the **ODBC Data source Administrator** app on your computer. You can do this by typing **ODBC Data sources** in the Windows search box.
- You can confirm the driver was installed by clicking the **Drivers** tab and ensuring **Microsoft Azure Cosmos DB ODBC Driver** is listed.
+1. After the driver installs, type *ODBC Data sources* in the Windows search box, and open the **ODBC Data Source Administrator**.
- :::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Azure Cosmos DB ODBC Data Source Administrator":::
+1. Make sure that the **Microsoft Azure DocumentDB ODBC Driver** is listed on the **Drivers** tab.
-## <a id="connect"></a>Step 2: Connect to your Azure Cosmos database
+ :::image type="content" source="./media/odbc-driver/odbc-driver.png" alt-text="Screenshot of the O D B C Data Source Administrator window.":::
-1. After [Installing the Azure Cosmos DB ODBC driver](#install), in the **ODBC Data Source Administrator** window, click **Add**. You can create a User or System DSN. In this example, you are creating a User DSN.
+ <a id="connect"></a>
+1. Select the **User DSN** tab, and then select **Add** to create a new data source name (DSN). You can also create a System DSN.
-1. In the **Create New Data Source** window, select **Microsoft Azure Cosmos DB ODBC Driver**, and then click **Finish**.
+1. In the **Create New Data Source** window, select **Microsoft Azure DocumentDB ODBC Driver**, and then select **Finish**.
-1. In the **Azure Cosmos DB ODBC Driver SDN Setup** window, fill in the following information:
+1. In the **DocumentDB ODBC Driver DSN Setup** window, fill in the following information:
- :::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Azure Cosmos DB ODBC Driver DSN Setup window":::
- - **Data Source Name**: Your own friendly name for the ODBC DSN. This name is unique to your Azure Cosmos DB account, so name it appropriately if you have multiple accounts.
- - **Description**: A brief description of the data source.
- - **Host**: URI for your Azure Cosmos DB account. You can retrieve this from the Azure Cosmos DB Keys page in the Azure portal, as shown in the following screenshot.
- - **Access Key**: The primary or secondary, read-write or read-only key from the Azure Cosmos DB Keys page in the Azure portal as shown in the following screenshot. We recommend you use the read-only key if the DSN is used for read-only data processing and reporting.
- :::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Azure Cosmos DB Keys page":::
- - **Encrypt Access Key for**: Select the best choice based on the users of this machine.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-dsn-setup.png" alt-text="Screenshot of the D S N Setup window.":::
+
+ - **Data Source Name**: A friendly name for the ODBC DSN. This name is unique to this Azure Cosmos DB account.
+ - **Description**: A brief description of the data source.
+ - **Host**: The URI for your Azure Cosmos DB account. You can get this information from the **Keys** page in your Azure Cosmos DB account in the Azure portal.
+   - **Access Key**: The primary or secondary, read-write or read-only key from the Azure Cosmos DB **Keys** page in the Azure portal. It's best to use a read-only key if you use the DSN for read-only data processing and reporting.
+
+ To avoid an authentication error, use the copy buttons to copy the URI and key from the Azure portal.
+
+ :::image type="content" source="./media/odbc-driver/odbc-cosmos-account-keys.png" alt-text="Screenshot of the Azure Cosmos D B Keys page.":::
+
+ - **Encrypt Access Key for**: Select the best choice, based on who uses the machine.
-1. Click the **Test** button to make sure you can connect to your Azure Cosmos DB account.
+1. Select **Test** to make sure you can connect to your Azure Cosmos DB account.
+
+1. Select **Advanced Options** and set the following values:
+
+ - **REST API Version**: Select the [REST API version](/rest/api/cosmos-db) for your operations. The default is **2015-12-16**.
+
+ If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, type *2018-12-31*, and then [follow the steps at the end of this procedure](#edit-the-windows-registry-to-support-rest-api-version-2018-12-31).
+
+ - **Query Consistency**: Select the [consistency level](../consistency-levels.md) for your operations. The default is **Session**.
+ - **Number of Retries**: Enter the number of times to retry an operation if the initial request doesn't complete due to service rate limiting.
+   - **Schema File**: If you don't select a schema file, the driver scans the first page of data for each container to determine its schema, called container mapping, for each session. This process can cause a long startup time for applications that use the DSN. It's best to associate a schema file with the DSN.
+
+ - If you already have a schema file, select **Browse**, navigate to the file, select **Save**, and then select **OK**.
+ - If you don't have a schema file yet, select **OK**, and then follow the steps in the next section to [create a schema definition](#create-a-schema-definition). After you create the schema, come back to this **Advanced Options** window to add the schema file.
+
+After you select **OK** to complete and close the **DocumentDB ODBC Driver DSN Setup** window, the new User DSN appears on the **User DSN** tab of the **ODBC Data Source Administrator** window.
+
+ :::image type="content" source="./media/odbc-driver/odbc-driver-user-dsn.png" alt-text="Screenshot that shows the new User D S N on the User D S N tab.":::
+
+### Edit the Windows registry to support REST API version 2018-12-31
+
+If you have containers with [large partition keys](../large-partition-keys.md) that need REST API version 2018-12-31, follow these steps to update the Windows registry to support this version.
+
+1. In the Windows **Start** menu, type *regedit* to find and open the **Registry Editor**.
+1. In the Registry Editor, navigate to the path **Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI**.
+1. Create a new subkey with the same name as your DSN, such as *Contoso Account ODBC DSN*.
+1. Navigate to the new **Contoso Account ODBC DSN** subkey, and right-click to add a new **String** value:
+ - Value Name: **IgnoreSessionToken**
+ - Value data: **1**
+ :::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Screenshot that shows the Windows Registry Editor settings.":::
+
+<a id="container-mapping"></a><a id="table-mapping"></a>
+## Create a schema definition
+
+There are two types of sampling methods you can use to create a schema: *container mapping* or *table-delimiter mapping*. A sampling session can use both sampling methods, but each container can use only one of the sampling methods. Which method to use depends on your data's characteristics.
+
+- **Container mapping** retrieves the data on a container page to determine the data structure, and transposes the container to a table on the ODBC side. This sampling method is efficient and fast when the data in a container is homogenous.
+
+- **Table-delimiter mapping** provides more robust sampling for heterogeneous data. This method scopes the sampling to a set of attributes and corresponding values.
+
+ For example, if a document contains a **Type** property, you can scope the sampling to the values of this property. The end result of the sampling is a set of tables for each of the **Type** values you specified. **Type = Car** produces a **Car** table, while **Type = Plane** produces a **Plane** table.
+
+To define a schema, follow these steps. For the table-delimiter mapping method, you take extra steps to define attributes and values for the schema.
+
+1. On the **User DSN** tab of the **ODBC Data Source Administrator** window, select your Azure Cosmos DB User DSN Name, and then select **Configure**.
-1. Click **Advanced Options** and set the following values:
- * **REST API Version**: Select the [REST API version](/rest/api/cosmos-db/) for your operations. The default 2015-12-16. If you have containers with [large partition keys](../large-partition-keys.md) and require REST API version 2018-12-31:
- - Type in **2018-12-31** for REST API version
- - In the **Start** menu, type "regedit" to find and open the **Registry Editor** application.
- - In Registry Editor, navigate to the path: **Computer\HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI**
- - Create a new subkey with the same name as your DSN, e.g. "Contoso Account ODBC DSN".
- - Navigate to the "Contoso Account ODBC DSN" subkey.
- - Right-click to add a new **String** value:
- - Value Name: **IgnoreSessionToken**
- - Value data: **1**
- :::image type="content" source="./media/odbc-driver/cosmos-odbc-edit-registry.png" alt-text="Registry Editor settings":::
- - **Query Consistency**: Select the [consistency level](../consistency-levels.md) for your operations. The default is Session.
- - **Number of Retries**: Enter the number of times to retry an operation if the initial request does not complete due to service rate limiting.
- - **Schema File**: You have a number of options here.
- - By default, leaving this entry as is (blank), the driver scans the first page of data for all containers to determine the schema of each container. This is known as Container Mapping. Without a schema file defined, the driver has to perform the scan for each driver session and could result in a higher startup time of an application using the DSN. We recommend that you always associate a schema file for a DSN.
- - If you already have a schema file (possibly one that you created using the Schema Editor), you can click **Browse**, navigate to your file, click **Save**, and then click **OK**.
- - If you want to create a new schema, click **OK**, and then click **Schema Editor** in the main window. Then proceed to the Schema Editor information. After creating the new schema file, remember to go back to the **Advanced Options** window to include the newly created schema file.
+1. In the **DocumentDB ODBC Driver DSN Setup** window, select **Schema Editor**.
-1. Once you complete and close the **Azure Cosmos DB ODBC Driver DSN Setup** window, the new User DSN is added to the User DSN tab.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Screenshot that shows the Schema Editor button in the D S N Setup window.":::
- :::image type="content" source="./media/odbc-driver/odbc-driver-user-dsn.png" alt-text="New Azure Cosmos DB ODBC DSN on the User DSN tab":::
+1. In the **Schema Editor** window, select **Create New**.
-## <a id="#container-mapping"></a>Step 3: Create a schema definition using the container mapping method
+1. The **Generate Schema** window displays all the containers in the Azure Cosmos DB account. Select the checkboxes next to the containers you want to sample.
-There are two types of sampling methods that you can use: **container mapping** or **table-delimiters**. A sampling session can utilize both sampling methods, but each container can only use a specific sampling method. The steps below create a schema for the data in one or more containers using the container mapping method. This sampling method retrieves the data in the page of a container to determine the structure of the data. It transposes a container to a table on the ODBC side. This sampling method is efficient and fast when the data in a container is homogenous. If a container contains heterogeneous type of data, we recommend you use the [table-delimiters mapping method](#table-mapping) as it provides a more robust sampling method to determine the data structures in the container.
+1. To use the *container mapping* method, select **Sample**.
-1. After completing steps 1-4 in [Connect to your Azure Cosmos database](#connect), click **Schema Editor** in the **Azure Cosmos DB ODBC Driver DSN Setup** window.
+ Or, to use *table-delimiter* mapping, take the following steps to define attributes and values for scoping the sample.
- :::image type="content" source="./media/odbc-driver/odbc-driver-schema-editor.png" alt-text="Schema editor button in the Azure Cosmos DB ODBC Driver DSN Setup window":::
-1. In the **Schema Editor** window, click **Create New**.
- The **Generate Schema** window displays all the containers in the Azure Cosmos DB account.
+ 1. Select **Edit** in the **Mapping Definition** column for your DSN.
-1. Select one or more containers to sample, and then click **Sample**.
+ 1. In the **Mapping Definition** window, under **Mapping Method**, select **Table Delimiters**.
-1. In the **Design View** tab, the database, schema, and table are represented. In the table view, the scan displays the set of properties associated with the column names (SQL Name, Source Name, etc.).
- For each column, you can modify the column SQL name, the SQL type, SQL length (if applicable), Scale (if applicable), Precision (if applicable) and Nullable.
- - You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked Hide Column = true are not returned for selection and projection, although they are still part of the schema. For example, you can hide all of the Azure Cosmos DB system required properties starting with "_".
- - The **id** column is the only field that cannot be hidden as it is used as the primary key in the normalized schema.
+ 1. In the **Attributes** box, type the name of a delimiter property in your document that you want to scope the sampling to, for instance, *City*. Press Enter.
-1. Once you have finished defining the schema, click **File** | **Save**, navigate to the directory to save the schema, and then click **Save**.
+ 1. If you want to scope the sampling to certain values for the attribute you entered, select the attribute, and then enter a value in the **Value** box, such as *Seattle*, and press Enter. You can add multiple values for attributes. Just make sure that the correct attribute is selected when you enter values.
-1. To use this schema with a DSN, open the **Azure Cosmos DB ODBC Driver DSN Setup window** (via the ODBC Data Source Administrator), click **Advanced Options**, and then in the **Schema File** box, navigate to the saved schema. Saving a schema file to an existing DSN modifies the DSN connection to scope to the data and structure defined by schema.
+ 1. When you're done entering attributes and values, select **OK**.
-## <a id="table-mapping"></a>Step 4: Create a schema definition using the table-delimiters mapping method
+ 1. In the **Generate Schema** window, select **Sample**.
-There are two types of sampling methods that you can use: **container mapping** or **table-delimiters**. A sampling session can utilize both sampling methods, but each container can only use a specific sampling method.
+1. In the **Design View** tab, refine your schema. The **Design View** represents the database, schema, and table. The table view displays the set of properties associated with the column names, such as **SQL Name** and **Source Name**.
-The following steps create a schema for the data in one or more containers using the **table-delimiters** mapping method. We recommend that you use this sampling method when your containers contain heterogeneous type of data. You can use this method to scope the sampling to a set of attributes and its corresponding values. For example, if a document contains a "Type" property, you can scope the sampling to the values of this property. The end result of the sampling would be a set of tables for each of the values for Type you have specified. For example, Type = Car will produce a Car table while Type = Plane would produce a Plane table.
+ For each column, you can modify the **SQL name**, the **SQL type**, **SQL length**, **Scale**, **Precision**, and **Nullable** as applicable.
-1. After completing steps 1-4 in [Connect to your Azure Cosmos database](#connect), click **Schema Editor** in the Azure Cosmos DB ODBC Driver DSN Setup window.
+ You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked **Hide Column = true** aren't returned for selection and projection, although they're still part of the schema. For example, you can hide all of the Azure Cosmos DB system required properties that start with **_**. The **id** column is the only field you can't hide, because it's the primary key in the normalized schema.
-1. In the **Schema Editor** window, click **Create New**.
- The **Generate Schema** window displays all the containers in the Azure Cosmos DB account.
+1. Once you finish defining the schema, select **File** > **Save**, navigate to the directory to save in, and select **Save**.
-1. Select a container on the **Sample View** tab, in the **Mapping Definition** column for the container, click **Edit**. Then in the **Mapping Definition** window, select **Table Delimiters** method. Then do the following:
+1. To use this schema with a DSN, in the **DocumentDB ODBC Driver DSN Setup** window, select **Advanced Options**. Select the **Schema File** box, navigate to the saved schema, select **OK** and then select **OK** again. Saving the schema file modifies the DSN connection to scope to the schema-defined data and structure.
- a. In the **Attributes** box, type the name of a delimiter property. This is a property in your document that you want to scope the sampling to, for instance, City and press enter.
+### Create views
- b. If you only want to scope the sampling to certain values for the attribute you entered above, select the attribute in the selection box, enter a value in the **Value** box (e.g. Seattle), and press enter. You can continue to add multiple values for attributes. Just ensure that the correct attribute is selected when you're entering values.
+Optionally, you can define and create views in the **Schema Editor** as part of the sampling process. These views are equivalent to SQL views. The views are read-only and are scoped to the selections and projections of the defined Azure Cosmos DB SQL query.
- For example, if you include an **Attributes** value of City, and you want to limit your table to only include rows with a city value of New York and Dubai, you would enter City in the Attributes box, and New York and then Dubai in the **Values** box.
+Follow these steps to create a view for your data:
-1. Click **OK**.
+1. On the **Sample View** tab of the **Schema Editor** window, select the containers you want to sample, and then select **Add** in the **View Definition** column.
-1. After completing the mapping definitions for the containers you want to sample, in the **Schema Editor** window, click **Sample**.
- For each column, you can modify the column SQL name, the SQL type, SQL length (if applicable), Scale (if applicable), Precision (if applicable) and Nullable.
- - You can set **Hide Column** to **true** if you want to exclude that column from query results. Columns marked Hide Column = true are not returned for selection and projection, although they are still part of the schema. For example, you can hide all the Azure Cosmos DB system required properties starting with `_`.
- - The **id** column is the only field that cannot be hidden as it is used as the primary key in the normalized schema.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-create-view.png" alt-text="Screenshot that shows creating a view.":::
-1. Once you have finished defining the schema, click **File** | **Save**, navigate to the directory to save the schema, and then click **Save**.
+1. In the **View Definitions** window, select **New**. Enter a name for the view, for example *EmployeesfromSeattleView*, and then select **OK**.
-1. Back in the **Azure Cosmos DB ODBC Driver DSN Setup** window, click **Advanced Options**. Then, in the **Schema File** box, navigate to the saved schema file and click **OK**. Click **OK** again to save the DSN. This saves the schema you created to the DSN.
+1. In the **Edit view** window, enter an [Azure Cosmos DB query](./sql-query-getting-started.md), for example:
-## (Optional) Set up linked server connection
+ `SELECT c.City, c.EmployeeName, c.Level, c.Age, c.Manager FROM c WHERE c.City = "Seattle"`
-You can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
+1. Select **OK**.
-1. Create a system data source as described in [Step 2](#connect), named for example `SDS Name`.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-create-view-2.png" alt-text="Screenshot of adding a query when creating a view.":::
-1. [Install SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and connect to the server.
+You can create as many views as you like. Once you're done defining the views, select **Sample** to sample the data.
-1. In the SSMS query editor, create a linked server object `DEMOCOSMOS` for the data source with the following commands. Replace `DEMOCOSMOS` with the name for your linked server, and `SDS Name` with the name of your system data source.
+## Query with SQL Server Management Studio
+
+Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
+
+1. [Install SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) and connect to the server.
+
+1. In the SSMS query editor, create a linked server object for the data source by running the following commands. Replace `DEMOCOSMOS` with the name for your linked server, and `SDS Name` with your data source name.
```sql
USE [master]
You can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by settin
GO
```
-To see the new linked server name, refresh the Linked Servers list.
+To see the new linked server name, refresh the linked servers list.
-
-### Query linked database
To query the linked database, enter an SSMS query. In this example, the query selects from the table in the container named `customers`:
To query the linked database, enter an SSMS query. In this example, the query se
```sql
SELECT * FROM OPENQUERY(DEMOCOSMOS, 'SELECT * FROM [customers].[customers]')
```
-Execute the query. The result should be similar to this:
+Execute the query. The results should look similar to the following output:
-```
+```output
attachments/ 1507476156 521 Bassett Avenue, Wikieup, Missouri, 5422 "2602bc56-0000-0000-0000-59da42bc0000" 2015-02-06T05:32:32 +05:00 f1ca3044f17149f3bc61f7b9c78a26df
attachments/ 1507476156 167 Nassau Street, Tuskahoma, Illinois, 5998 "2602bd56-0000-0000-0000-59da42bc0000" 2015-06-16T08:54:17 +04:00 f75f949ea8de466a9ef2bdb7ce065ac8
attachments/ 1507476156 885 Strong Place, Cassel, Montana, 2069 "2602be56-0000-0000-0000-59da42bc0000" 2015-03-20T07:21:47 +04:00 ef0365fb40c04bb6a3ffc4bc77c905fd
attachments/ 1507476156 515 Barwell Terrace, Defiance, Tennessee, 6439 "
attachments/ 1507476156 570 Ruby Street, Spokane, Idaho, 9025 "2602c156-0000-0000-0000-59da42bc0000" 2014-10-30T05:49:33 +04:00 e53072057d314bc9b36c89a8350048f3
```
-> [!NOTE]
-> The linked Cosmos DB server does not support four-part naming. An error is returned similar to the following message:
-
-```
-Msg 7312, Level 16, State 1, Line 44
-
-Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
-```
-
-## (Optional) Creating views
-You can define and create views as part of the sampling process. These views are equivalent to SQL views. They are read-only and are scope the selections and projections of the Azure Cosmos DB SQL query defined.
-
-To create a view for your data, in the **Schema Editor** window, in the **View Definitions** column, click **Add** on the row of the container to sample.
+## View your data in Power BI Desktop
+You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tool. This procedure shows you how to connect to Power BI Desktop to create a Power BI visualization.
+1. In Power BI Desktop, select **Get Data**.
-Then in the **View Definitions** window, do the following:
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Screenshot showing Get Data in Power B I Desktop.":::
-1. Click **New**, enter a name for the view, for example, EmployeesfromSeattleView and then click **OK**.
+1. In the **Get Data** window, select **Other** > **ODBC**, and then select **Connect**.
-1. In the **Edit view** window, enter an Azure Cosmos DB query. This must be an [Azure Cosmos DB SQL query](./sql-query-getting-started.md), for example `SELECT c.City, c.EmployeeName, c.Level, c.Age, c.Manager FROM c WHERE c.City = "Seattle"`, and then click **OK**.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Screenshot that shows choosing O D B C data source in Power B I Get Data.":::
- :::image type="content" source="./media/odbc-driver/odbc-driver-create-view-2.png" alt-text="Add query when creating a view":::
+1. In the **From ODBC** window, select the DSN you created, and then select **OK**.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Screenshot that shows choosing the D S N in Power B I Get Data.":::
-You can create a many views as you like. Once you are done defining the views, you can then sample the data.
+1. In the **Access a data source using an ODBC driver** window, select **Default or Custom** and then select **Connect**.
-## Step 5: View your data in BI tools such as Power BI Desktop
+1. In the **Navigator** window, in the left pane, expand the database and schema, and select the table. The results pane includes the data that uses the schema you created.
-You can use your new DSN to connect to Azure Cosmos DB with any ODBC-compliant tools - this step simply shows you how to connect to Power BI Desktop and create a Power BI visualization.
+ :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Screenshot of selecting the table in Power B I Get Data.":::
-1. Open Power BI Desktop.
+1. To visualize the data in Power BI Desktop, select the checkbox next to the table name, and then select **Load**.
-1. Click **Get Data**.
+1. In Power BI Desktop, select the **Data** tab on the left of the screen to confirm your data was imported.
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data.png" alt-text="Get Data in Power BI Desktop":::
+1. Select the **Report** tab on the left of the screen, select **New visual** from the ribbon, and then customize the visual.
-1. In the **Get Data** window, click **Other** | **ODBC** | **Connect**.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-2.png" alt-text="Choose ODBC Data source in Power BI Get Data":::
-
-1. In the **From ODBC** window, select the data source name you created, and then click **OK**. You can leave the **Advanced Options** entries blank.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-3.png" alt-text="Choose Data source name (DSN) in Power BI Get Data":::
-
-1. In the **Access a data source using an ODBC driver** window, select **Default or Custom** and then click **Connect**. You do not need to include the **Credential connection string properties**.
-
-1. In the **Navigator** window, in the left pane, expand the database, the schema, and then select the table. The results pane includes the data using the schema you created.
-
- :::image type="content" source="./media/odbc-driver/odbc-driver-power-bi-get-data-4.png" alt-text="Select Table in Power BI Get Data":::
-
-1. To visualize the data in Power BI desktop, check the box in front of the table name, and then click **Load**.
+## Troubleshooting
-1. In Power BI Desktop, on the far left, select the Data tab :::image type="icon" source="./media/odbc-driver/odbc-driver-data-tab.png"::: to confirm your data was imported.
+- **Problem**: You get the following error when trying to connect:
-1. You can now create visuals using Power BI by clicking on the Report tab :::image type="icon" source="./media/odbc-driver/odbc-driver-report-tab.png":::, clicking **New Visual**, and then customizing your tile. For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
+ ```output
+ [HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
+ ```
-## Troubleshooting
+ **Solution:** Make sure the **Host** and **Access Key** values you copied from the Azure portal are correct, and retry.
-If you receive the following error, ensure the **Host** and **Access Key** values you copied the Azure portal in [Step 2](#connect) are correct and then retry. Use the copy buttons to the right of the **Host** and **Access Key** values in the Azure portal to copy the values error free.
+- **Problem**: You get the following error in SSMS when trying to create a linked Azure Cosmos DB server:
-```output
-[HY000]: [Microsoft][Azure Cosmos DB] (401) HTTP 401 Authentication Error: {"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ndbs\n\nfri, 20 jan 2017 03:43:55 gmt\n\n'\r\nActivityId: 9acb3c0d-cb31-4b78-ac0a-413c8d33e373"}
-```
+ ```output
+ Msg 7312, Level 16, State 1, Line 44
+
+ Invalid use of schema or catalog for OLE DB provider "MSDASQL" for linked server "DEMOCOSMOS". A four-part name was supplied, but the provider does not expose the necessary interfaces to use a catalog or schema.
+ ```
+ **Solution**: A linked Azure Cosmos DB server doesn't support four-part naming.
## Next steps
-To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).
+- To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).
+- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
If you have a new subscription, you can't immediately use Cost Management featur
## Supported Microsoft Azure offers
-The following information shows the currently supported [Microsoft Azure offers](https://azure.microsoft.com/support/legal/offer-details/) in Cost Management. An Azure offer is the type of the Azure subscription that you have. Data is available in Cost Management starting on the **Data available from** date. If a subscription changes offers, costs before the offer change date aren't available.
+The following information shows the currently supported [Microsoft Azure offers](https://azure.microsoft.com/support/legal/offer-details/) in Cost Management. An Azure offer is the type of the Azure subscription that you have. Data is available in Cost Management starting on the **Data available from** date. Summarized data in cost analysis is only available for the last 13 months. If a subscription changes offers, costs before the offer change date aren't available.
| **Category** | **Offer name** | **Quota ID** | **Offer number** | **Data available from** |
| --- | --- | --- | --- | --- |
-| **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014<sup>1</sup> |
+| **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014<sup>1</sup> |
| **Azure Government** | Azure Government Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-USGOV-0003P | October 2, 2018 |
-| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014<sup>1</sup> |
+| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014<sup>1</sup> |
| **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014<sup>1</sup> |
| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019<sup>2</sup> |
| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | N/A | March 2019<sup>2</sup> |
-| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01<br><br>The quota ID is reused for Microsoft Customer Agreement and legacy CSP subscriptions. Currently, only Microsoft Customer Agreement subscriptions are supported. | N/A | October 2019 |
+| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01<sup>4</sup> | N/A | October 2019 |
| **Microsoft Developer Network (MSDN)** | MSDN Platforms<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 |
| **Pay-As-You-Go** | Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-0003P | October 2, 2018 |
| **Pay-As-You-Go** | Pay-As-You-Go Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0023P | October 2, 2018 |
| **Pay-As-You-Go** | Microsoft Partner Network | MPN_2014-09-01 | MS-AZR-0025P | October 2, 2018 |
| **Pay-As-You-Go** | Free Trial<sup>3</sup> | FreeTrial_2014-09-01 | MS-AZR-0044P | October 2, 2018 |
| **Pay-As-You-Go** | Azure in Open<sup>3</sup> | AzureInOpen_2014-09-01 | MS-AZR-0111P | October 2, 2018 |
-| **Pay-As-You-Go** | Azure Pass<sup>3</sup> | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
+| **Pay-As-You-Go** | Azure Pass<sup>3</sup> | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
| **Visual Studio** | Visual Studio Enterprise – MPN<sup>3</sup> | MPN_2014-09-01 | MS-AZR-0029P | October 2, 2018 |
| **Visual Studio** | Visual Studio Professional<sup>3</sup> | MSDN_2014-09-01 | MS-AZR-0059P | October 2, 2018 |
| **Visual Studio** | Visual Studio Test Professional<sup>3</sup> | MSDNDevTest_2014-09-01 | MS-AZR-0060P | October 2, 2018 |
_<sup>**2**</sup> Microsoft Customer Agreements started in March 2019 and don't
_<sup>**3**</sup> Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See [Historical data may not match invoice](#historical-data-might-not-match-invoice) below._
+_<sup>**4**</sup> Quota IDs are the same across Microsoft Customer Agreement and classic subscription offers. Classic CSP subscriptions are not supported._
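To match one of your own subscriptions against the **Quota ID** column above, a minimal sketch using Azure PowerShell, assuming the Az module is installed and you're signed in (the subscription ID is a placeholder):

```powershell
# Look up the quota ID (offer type) of a subscription via Azure Resource Manager.
# The subscription ID below is a placeholder; substitute your own.
$subscriptionId = "00000000-0000-0000-0000-000000000000"

$response = Invoke-AzRestMethod -Method GET `
    -Path "/subscriptions/${subscriptionId}?api-version=2020-01-01"

# The response body includes subscriptionPolicies.quotaId, which corresponds
# to the Quota ID column in the table above.
($response.Content | ConvertFrom-Json).subscriptionPolicies.quotaId
```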
+ The following offers aren't supported yet:
-| Category | **Offer name** | **Quota ID** | **Offer number** |
+| **Category** | **Offer name** | **Quota ID** | **Offer number** |
| --- | --- | --- | --- |
| **Azure Germany** | Azure Germany Pay-As-You-Go | PayAsYouGo_2014-09-01 | MS-AZR-DE-0003P |
| **Cloud Solution Provider (CSP)** | Microsoft Azure | CSP_2015-05-01 | MS-AZR-0145P |
| **Cloud Solution Provider (CSP)** | Azure Government CSP | CSP_2015-05-01 | MS-AZR-USGOV-0145P |
| **Cloud Solution Provider (CSP)** | Azure Germany in CSP for Microsoft Cloud Germany | CSP_2015-05-01 | MS-AZR-DE-0145P |
-| **Pay-As-You-Go** | Azure for Students Starter | DreamSpark_2015-02-01 | MS-AZR-0144P |
+| **Pay-As-You-Go** | Azure for Students Starter | DreamSpark_2015-02-01 | MS-AZR-0144P |
| **Pay-As-You-Go** | Azure for Students<sup>3</sup> | AzureForStudents_2018-01-01 | MS-AZR-0170P |
-| **Pay-As-You-Go** | Microsoft Azure Sponsorship | Sponsored_2016-01-01 | MS-AZR-0036P |
+| **Pay-As-You-Go** | Microsoft Azure Sponsorship | Sponsored_2016-01-01 | MS-AZR-0036P |
| **Support Plans** | Standard support | Default_2014-09-01 | MS-AZR-0041P |
| **Support Plans** | Professional Direct support | Default_2014-09-01 | MS-AZR-0042P |
| **Support Plans** | Developer support | Default_2014-09-01 | MS-AZR-0043P |
The following tables show data that's included or isn't in Cost Management. All
| **Included** | **Not included** |
| --- | --- |
-| Azure service usage<sup>4</sup> | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace offering usage<sup>5</sup> | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace purchases<sup>5</sup> | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Reservation purchases<sup>6</sup> | |
-| Amortization of reservation purchases<sup>6</sup> | |
-| New Commerce non-Azure products (Microsoft 365 and Dynamics 365) <sup>7</sup> | |
+| Azure service usage<sup>5</sup> | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace offering usage<sup>6</sup> | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace purchases<sup>6</sup> | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Reservation purchases<sup>7</sup> | |
+| Amortization of reservation purchases<sup>7</sup> | |
+| New Commerce non-Azure products (Microsoft 365 and Dynamics 365)<sup>8</sup> | |
-_<sup>**4**</sup> Azure service usage is based on reservation and negotiated prices._
+_<sup>**5**</sup> Azure service usage is based on reservation and negotiated prices._
-_<sup>**5**</sup> Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
+_<sup>**6**</sup> Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
-_<sup>**6**</sup> Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
+_<sup>**7**</sup> Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
-_<sup>**7**</sup> Only available for Partners_
+_<sup>**8**</sup> Only available for specific offers._
## How tags are used in cost and usage data
Here are a few tips for working with tags:
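For example, a minimal sketch that totals the last 30 days of pretax cost per tag value, assuming the Az.Billing module is installed and a tag named `env` exists (the tag name is a placeholder):

```powershell
# Retrieve recent usage details filtered to a tag, then total cost per tag value.
# "env" is a placeholder tag name; substitute one of your own tags.
$usage = Get-AzConsumptionUsageDetail -StartDate (Get-Date).AddDays(-30) `
    -EndDate (Get-Date) -Tag "env"

$usage |
    Group-Object { $_.Tags["env"] } |
    ForEach-Object {
        [pscustomobject]@{
            TagValue   = $_.Name
            PretaxCost = ($_.Group | Measure-Object PretaxCost -Sum).Sum
        }
    }
```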
## Cost and usage data updates and retention
-Cost and usage data is typically available in Cost Management + Billing in the Azure portal and supporting APIs within 8-24 hours. Keep the following points in mind as you review costs:
+Cost and usage data is typically available in Cost Management within 8-24 hours. Keep the following points in mind as you review costs:
- Each Azure service (such as Storage, Compute, and SQL) emits usage at different intervals – you might see data for some services sooner than others.
- Estimated charges for the current billing period are updated six times per day.
The following examples illustrate how billing periods could end:
* Enterprise Agreement (EA) subscriptions – If the billing month ends on March 31, estimated charges are updated up to 72 hours later. In this example, by midnight (UTC) April 4.
* Pay-as-you-go subscriptions – If the billing month ends on May 15, then the estimated charges might get updated up to 72 hours later. In this example, by midnight (UTC) May 19.
-Once cost and usage data becomes available in Cost Management + Billing, it will be retained for at least seven years.
+Once cost and usage data becomes available in Cost Management, it will be retained for at least seven years. Only the last 13 months are available in the portal. For data older than 13 months, use [Exports](tutorial-export-acm-data.md) or the [UsageDetails API](/rest/api/consumption/usage-details/list).
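For instance, a hedged sketch of calling the UsageDetails API directly for an older date range (the subscription ID and dates are placeholders):

```powershell
# Query usage details for a historical date range through the Consumption API.
# Subscription ID and dates below are placeholders.
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$filter = "properties/usageStart ge '2020-01-01' and properties/usageEnd le '2020-01-31'"

$path = "/subscriptions/${subscriptionId}/providers/Microsoft.Consumption/usageDetails" +
        "?`$filter=$([uri]::EscapeDataString($filter))&api-version=2021-10-01"

# Each record in .value is one usage detail line item.
$result = (Invoke-AzRestMethod -Method GET -Path $path).Content | ConvertFrom-Json
$result.value | Select-Object -First 5
```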
### Rerated data
Whether you use the Cost Management APIs, Power BI, or the Azure portal to retri
Costs shown in Cost Management are rounded. Costs returned by the Query API aren't rounded. For example: -- Cost analysis in the Azure portal - Charges are rounded using standard rounding rules: values more than 0.5 and higher are rounded up, otherwise costs are rounded down. Rounding occurs only when values are shown. Rounding doesn't happen during data processing and aggregation. For example, cost analysis aggregates costs as follows:
+- Cost analysis in the portal - Charges are rounded using standard rounding rules: values of 0.5 and higher are rounded up; otherwise, costs are rounded down. Rounding occurs only when values are shown. Rounding doesn't happen during data processing and aggregation. For example, cost analysis aggregates costs as follows:
  - Charge 1: $0.004
  - Charge 2: $0.004
  - Aggregate charge rendered: 0.004 + 0.004 = 0.008. The charge shown is $0.01.
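A quick sketch of why aggregating before rounding matters (pure arithmetic, no Azure calls):

```powershell
# Two charges of $0.004 each.
$charges = 0.004, 0.004

# Aggregate first, then round for display (what cost analysis does): 0.008 -> $0.01
$aggregated = ($charges | Measure-Object -Sum).Sum
[math]::Round($aggregated, 2)   # 0.01

# Rounding each charge before aggregating would lose the cost entirely: $0.00
($charges | ForEach-Object { [math]::Round($_, 2) } | Measure-Object -Sum).Sum   # 0
```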
cost-management-billing Azure Account For Microsoft 365 Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azure-account-for-microsoft-365-subscription.md
- Title: Sign up for Microsoft 365 with Azure account
-description: Learn how to create a Microsoft 365 subscription by using an Azure account. You can also associate existing Azure and Microsoft 365 accounts with each other.
--
-tags: billing,top-support-issue
--- Previously updated : 09/15/2021---
-# Sign up for a Microsoft 365 subscription with your Azure account
-
-If you're an Azure subscriber, you can use your Azure account to sign up for a Microsoft 365 subscription. If you're part of an organization that has an Azure subscription, you can create Microsoft 365 subscriptions for users in your existing Azure Active Directory (Azure AD). Sign up for Microsoft 365 using an account that has Global Admin or Billing Admin permissions in your Azure Active Directory tenant. For more information, see [Check my account permissions in Azure AD](#RoleInAzureAD) and [Assigning administrator roles in Azure Active Directory](../../active-directory/roles/permissions-reference.md).
-
-If you already have both a Microsoft 365 account and an Azure subscription, you can [Associate a Microsoft 365 tenant to an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
-
-## Get a Microsoft 365 subscription by using your Azure account
-
-1. Go to the [Microsoft 365 product page](https://www.microsoft.com/microsoft-365/business/all-business), and select a plan.
-2. Select **Sign in** on the upper-right corner of the page.
-
- ![screenshot of Microsoft 365 trial page](./media/azure-account-for-microsoft-365-subscription/12-office-365-trial-page.png)
-3. Sign in with your Azure account credentials. If you're creating a subscription for your organization, use an Azure account that's a member of the Global Admin or Billing Admin directory role in your Azure Active Directory tenant.
-
- ![Screenshot of Microsoft sign-in](./media/azure-account-for-microsoft-365-subscription/13-office-365-sign-in.png)
-4. Select **Try now**.
-
- ![Screenshot that confirms your order for Microsoft 365.](./media/azure-account-for-microsoft-365-subscription/14-office-365-confirm-your-order.png)
-5. On the order receipt page, select **Continue**.
-
- ![Screenshot of the Microsoft 365 order receipt](./media/azure-account-for-microsoft-365-subscription/15-office-365-order-receipt.png)
-
-Now you're all set. If you created the Microsoft 365 subscription for your organization, use the following steps to check that your Azure AD users are now in Microsoft 365.
-
-1. Open the Microsoft 365 admin center.
-2. Expand **USERS**, and then select **Active Users**.
-
- ![Screenshot of the Microsoft 365 admin center users](./media/azure-account-for-microsoft-365-subscription/16-microsoft-365-admin-center-users.png)
-
-After you sign up, the Microsoft 365 subscription is added to the same Azure Active Directory instance that your Azure subscription belongs to. For more information, see [More about Azure and Microsoft 365 subscriptions](microsoft-365-account-for-azure-subscription.md#more-about-subs) and [How Azure subscriptions are associated with Azure Active Directory](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
-
-## <a id="RoleInAzureAD"></a>Check my account permissions in Azure AD
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **All services**, and then search for **Active Directory**.
-
- ![Screenshot of Active Directory in the Azure portal](./media/azure-account-for-microsoft-365-subscription/billing-more-services-active-directory.png)
-3. Select **Users and groups** > **All users**.
-4. Select the user name.
-
- ![Screenshot that shows the Azure Active Directory users](./media/azure-account-for-microsoft-365-subscription/billing-users-groups.png)
-
-5. Select **Directory role**.
-
- ![Screenshot that shows the Azure portal directory role](./media/azure-account-for-microsoft-365-subscription/billing-user-directory-role.png)
-6. The role **Global administrator** or **Limited administrator** > **Billing administrator** is required to create a Microsoft 365 subscription for users in your existing Azure Active Directory.
-
- ![Screenshot that shows Azure portal directory role Billing Admin](./media/azure-account-for-microsoft-365-subscription/billing-directoryrole-limited.png)
-
-## Need help? Contact us.
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
-
-## Next steps
--- [Associate a Microsoft 365 tenant to an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md)
cost-management-billing Microsoft 365 Account For Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/microsoft-365-account-for-azure-subscription.md
- Title: Sign up for Azure with a Microsoft 365 account
-description: Learn how to create an Azure subscription by using a Microsoft 365 account. You can also associate existing Azure and Microsoft 365 accounts with each other.
--
-tags: billing,top-support-issue
--- Previously updated : 09/15/2021---
-# Sign up for an Azure subscription with your Microsoft 365 account
-
-If you have a Microsoft 365 subscription, you can use your Microsoft 365 account to create an Azure subscription. Sign in to the [Azure portal](https://portal.azure.com/) using your Microsoft 365 user name and password. If you want to set up virtual machines or use other Azure services, you must sign up for an Azure subscription. You can share your Azure subscription with others and [use Azure role-based access control (Azure RBAC) to manage access to your Azure subscription and resources](../../role-based-access-control/role-assignments-portal.md).
-
-If you already have both a Microsoft 365 account and an Azure subscription, see [Associate a Microsoft 365 tenant to an Azure subscription](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
-
-## Get an Azure subscription using your Microsoft 365 account
-
-Save time and avoid account proliferation by signing up for Azure using your Microsoft 365 user name and password.
-
-1. Sign up at [Azure.com](https://signup.azure.com/signup?offer=MS-AZR-0044p&appId=docs).
-2. Sign in by using your Microsoft 365 user name and password. The account you use doesn't need to have administrator permissions. If you have more than one Microsoft 365 account, make sure you use the credentials for the Microsoft 365 account that you want to associate with your Azure subscription.
-
- ![Screenshot that shows the sign-in page.](./media/microsoft-365-account-for-azure-subscription/billing-sign-in-with-office-365-account.png)
-
-3. Enter the required information and complete the sign-up process. Some information may not be required if you already have a Microsoft 365 account.
-
- ![Screenshot that shows the sign-up form.](./media/microsoft-365-account-for-azure-subscription/billing-azure-sign-up-fill-information.png)
--- If you need to add other people in your organization to the Azure subscription, see [Get started with access management in the Azure portal](../../role-based-access-control/overview.md).-
-## <a id="more-about-subs">More about Azure and Microsoft 365 subscriptions</a>
-
-Microsoft 365 and Azure use the Azure AD service to manage users and subscriptions. The Azure directory is like a container in which you can group users and subscriptions. To use the same user accounts for your Azure and Microsoft 365 subscriptions, you need to make sure that the Azure subscriptions are created in the same directory as the Microsoft 365 subscriptions. Keep in mind the following points:
-
-* A subscription gets created under a directory
-* Users belong to directories
-* A subscription lands in the directory of the user who creates the subscription. So your Microsoft 365 subscription is tied to the same account as your Azure subscription.
-* Azure subscriptions are owned by individual users in the directory
-* Microsoft 365 subscriptions are owned by the directory itself. Users with the right permissions within the directory can manage these subscriptions.
-
-![Screenshot that shows the relationship of the directory, users, and subscriptions.](./media/microsoft-365-account-for-azure-subscription/19-background-information.png)
-
-For more information, see [How Azure subscriptions are associated with Azure Active Directory](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
-
-## Need help? Contact us.
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
-
-## Next steps
--- Share your Azure subscription with others and [use Azure role-based access control (Azure RBAC) to manage access to your Azure subscription and resources](../../role-based-access-control/role-assignments-portal.md).
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
When you use the PowerShell script to assign the ownership role and it runs succ
- Accept pipeline input: False - Accept wildcard characters: False
-[User Access Administrators](../../role-based-access-control/built-in-roles.md#user-access-administrator) can add the users to Reservation Administrator and Reservation Reader roles.
+## Tenant-level access
+[User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) rights are required before you can grant users or groups the Reservation Administrator and Reservation Reader roles at the tenant level.
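As a sketch, granting tenant-level reservation access with Azure PowerShell might look like the following (the object ID is a placeholder, and the role name assumes the built-in Reservations Reader role at the `/providers/Microsoft.Capacity` scope):

```powershell
# Assign the tenant-level Reservations Reader role to a user or group.
# The principal object ID below is a placeholder.
New-AzRoleAssignment -ObjectId "00000000-0000-0000-0000-000000000000" `
    -Scope "/providers/Microsoft.Capacity" `
    -RoleDefinitionName "Reservations Reader"
```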
## Add a Reservation Administrator role at the tenant level
data-factory Concepts Data Flow Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md
Previously updated : 04/20/2022 Last updated : 06/10/2022 # User defined functions (Preview) in mapping data flow
Whenever you find yourself building the same logic in an expression across multi
> [!IMPORTANT] > User defined functions and mapping data flow libraries are currently in public preview.
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4Zkek]
+>
+ ## Getting started To get started with user defined functions, you must first create a data flow library. Navigate to the management page and then find data flow libraries under the author section.
data-factory Connector Troubleshoot Dynamics Dataverse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-dynamics-dataverse.md
Previously updated : 12/02/2021 Last updated : 06/17/2022
This article provides suggestions to troubleshoot common problems with the Dynam
- **Recommendation**: You can add the 'type' property to those columns in the column mapping by using JSON editor on the portal.
+## Error code: UserErrorUnsupportedAttributeType
+
+- **Message**: `The attribute type 'Lookup' of field %attributeName; is not supported`
+
+- **Cause**: When loading data into a Dynamics sink, Azure Data Factory validates the lookup attribute's metadata. However, there's a known issue where certain Dynamics entities don't have valid lookup attribute metadata holding a list of targets, which causes the validation to fail.
+
+- **Recommendation**: Contact the Dynamics support team to mitigate the issue.
+ ## The copy activity from Dynamics 365 reads more rows than the actual number - **Symptoms**: The copy activity from Dynamics 365 reads more rows than the actual number.
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
Install the latest Azure PowerShell modules by following instructions in [How to
>If you do not use the latest versions of the PowerShell and Data Factory modules, you may run into deserialization errors while running the commands. >
+## Pre- and post-deployment script
+The sample scripts to stop/start triggers and update global parameters during the release process (CI/CD) are located on the [Azure Data Factory official GitHub page](https://github.com/Azure/Azure-DataFactory/tree/main/SamplesV2/ContinuousIntegrationAndDelivery).
++ ## Script execution and parameters The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using the latest Azure PowerShell version.
When running a pre-deployment script, you will need to specify a variation of th
When running a post-deployment script, you will need to specify a variation of the following parameters in the **Script Arguments** field. `-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true`-
+
> [!NOTE] > The `-deleteDeployment` flag is used to specify the deletion of the ADF deployment entry from the deployment history in ARM. :::image type="content" source="media/continuous-integration-delivery/continuous-integration-image11.png" alt-text="Azure PowerShell task":::
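For example, a complete pre-deployment **Script Arguments** value might look like the following sketch (the drop path, resource group, and factory name are placeholders, and the script file name assumes the sample from the GitHub repository above):

```powershell
# Example Script Arguments for a pre-deployment run of PrePostDeploymentScript.ps1.
# $(System.DefaultWorkingDirectory) is resolved by Azure DevOps before the script runs;
# the remaining values are placeholders to substitute with your own.
-armTemplate "$(System.DefaultWorkingDirectory)/drop/ArmTemplateForFactory.json" -ResourceGroupName "my-resource-group" -DataFactoryName "my-data-factory" -predeployment $true -deleteDeployment $false
```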
-## Pre- and post-deployment script
-
-Here is the script that can be used for pre- and post-deployment. It accounts for deleted resources and resource references.
-
-
-```powershell
-param
-(
- [parameter(Mandatory = $false)] [String] $armTemplate,
- [parameter(Mandatory = $false)] [String] $ResourceGroupName,
- [parameter(Mandatory = $false)] [String] $DataFactoryName,
- [parameter(Mandatory = $false)] [Bool] $predeployment=$true,
- [parameter(Mandatory = $false)] [Bool] $deleteDeployment=$false
-)
-
-function getPipelineDependencies {
- param([System.Object] $activity)
- if ($activity.Pipeline) {
- return @($activity.Pipeline.ReferenceName)
- } elseif ($activity.Activities) {
- $result = @()
- $activity.Activities | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
- return $result
- } elseif ($activity.ifFalseActivities -or $activity.ifTrueActivities) {
- $result = @()
- $activity.ifFalseActivities | Where-Object {$_ -ne $null} | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
- $activity.ifTrueActivities | Where-Object {$_ -ne $null} | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
- return $result
- } elseif ($activity.defaultActivities) {
- $result = @()
- $activity.defaultActivities | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
- if ($activity.cases) {
- $activity.cases | ForEach-Object{ $_.activities } | ForEach-Object{$result += getPipelineDependencies -activity $_ }
- }
- return $result
- } else {
- return @()
- }
-}
-
-function pipelineSortUtil {
- param([Microsoft.Azure.Commands.DataFactoryV2.Models.PSPipeline]$pipeline,
- [Hashtable] $pipelineNameResourceDict,
- [Hashtable] $visited,
- [System.Collections.Stack] $sortedList)
- if ($visited[$pipeline.Name] -eq $true) {
- return;
- }
- $visited[$pipeline.Name] = $true;
- $pipeline.Activities | ForEach-Object{ getPipelineDependencies -activity $_ } | ForEach-Object{
- pipelineSortUtil -pipeline $pipelineNameResourceDict[$_] -pipelineNameResourceDict $pipelineNameResourceDict -visited $visited -sortedList $sortedList
- }
- $sortedList.Push($pipeline)
-
-}
-
-function Get-SortedPipelines {
- param(
- [string] $DataFactoryName,
- [string] $ResourceGroupName
- )
- $pipelines = Get-AzDataFactoryV2Pipeline -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
- $ppDict = @{}
- $visited = @{}
- $stack = new-object System.Collections.Stack
- $pipelines | ForEach-Object{ $ppDict[$_.Name] = $_ }
- $pipelines | ForEach-Object{ pipelineSortUtil -pipeline $_ -pipelineNameResourceDict $ppDict -visited $visited -sortedList $stack }
- $sortedList = new-object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSPipeline]
-
- while ($stack.Count -gt 0) {
- $sortedList.Add($stack.Pop())
- }
- $sortedList
-}
-
-function triggerSortUtil {
- param([Microsoft.Azure.Commands.DataFactoryV2.Models.PSTrigger]$trigger,
- [Hashtable] $triggerNameResourceDict,
- [Hashtable] $visited,
- [System.Collections.Stack] $sortedList)
- if ($visited[$trigger.Name] -eq $true) {
- return;
- }
- $visited[$trigger.Name] = $true;
- if ($trigger.Properties.DependsOn) {
- $trigger.Properties.DependsOn | Where-Object {$_ -and $_.ReferenceTrigger} | ForEach-Object{
- triggerSortUtil -trigger $triggerNameResourceDict[$_.ReferenceTrigger.ReferenceName] -triggerNameResourceDict $triggerNameResourceDict -visited $visited -sortedList $sortedList
- }
- }
- $sortedList.Push($trigger)
-}
-
-function Get-SortedTriggers {
- param(
- [string] $DataFactoryName,
- [string] $ResourceGroupName
- )
- $triggers = Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName
- $triggerDict = @{}
- $visited = @{}
- $stack = new-object System.Collections.Stack
- $triggers | ForEach-Object{ $triggerDict[$_.Name] = $_ }
- $triggers | ForEach-Object{ triggerSortUtil -trigger $_ -triggerNameResourceDict $triggerDict -visited $visited -sortedList $stack }
- $sortedList = new-object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSTrigger]
-
- while ($stack.Count -gt 0) {
- $sortedList.Add($stack.Pop())
- }
- $sortedList
-}
-
-function Get-SortedLinkedServices {
- param(
- [string] $DataFactoryName,
- [string] $ResourceGroupName
- )
- $linkedServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName
- $LinkedServiceHasDependencies = @('HDInsightLinkedService', 'HDInsightOnDemandLinkedService', 'AzureBatchLinkedService')
- $Akv = 'AzureKeyVaultLinkedService'
- $HighOrderList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]
- $RegularList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]
- $AkvList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]
-
- $linkedServices | ForEach-Object {
- if ($_.Properties.GetType().Name -in $LinkedServiceHasDependencies) {
- $HighOrderList.Add($_)
- }
- elseif ($_.Properties.GetType().Name -eq $Akv) {
- $AkvList.Add($_)
- }
- else {
- $RegularList.Add($_)
- }
- }
-
- $SortedList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]($HighOrderList.Count + $RegularList.Count + $AkvList.Count)
- $SortedList.AddRange($HighOrderList)
- $SortedList.AddRange($RegularList)
- $SortedList.AddRange($AkvList)
- $SortedList
-}
-
-$templateJson = Get-Content $armTemplate | ConvertFrom-Json
-$resources = $templateJson.resources
-
-#Triggers
-Write-Host "Getting triggers"
-$triggersInTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/triggers" }
-$triggerNamesInTemplate = $triggersInTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
-
-$triggersDeployed = Get-SortedTriggers -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
-
-$triggersToStop = $triggersDeployed | Where-Object { $triggerNamesInTemplate -contains $_.Name } | ForEach-Object {
- New-Object PSObject -Property @{
- Name = $_.Name
- TriggerType = $_.Properties.GetType().Name
- }
-}
-$triggersToDelete = $triggersDeployed | Where-Object { $triggerNamesInTemplate -notcontains $_.Name } | ForEach-Object {
- New-Object PSObject -Property @{
- Name = $_.Name
- TriggerType = $_.Properties.GetType().Name
- }
-}
-$triggersToStart = $triggersInTemplate | Where-Object { $_.properties.runtimeState -eq "Started" -and ($_.properties.pipelines.Count -gt 0 -or $_.properties.pipeline.pipelineReference -ne $null)} | ForEach-Object {
- New-Object PSObject -Property @{
- Name = $_.name.Substring(37, $_.name.Length-40)
- TriggerType = $_.Properties.type
- }
-}
-
-if ($predeployment -eq $true) {
- #Stop all triggers
- Write-Host "Stopping deployed triggers`n"
- $triggersToStop | ForEach-Object {
- if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
- Write-Host "Unsubscribing" $_.Name "from events"
- $status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
- while ($status.Status -ne "Disabled"){
- Start-Sleep -s 15
- $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
- }
- }
- Write-Host "Stopping trigger" $_.Name
- Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
- }
-}
-else {
- #Deleted resources
- #pipelines
- Write-Host "Getting pipelines"
- $pipelinesADF = Get-SortedPipelines -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
- $pipelinesTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/pipelines" }
- $pipelinesNames = $pipelinesTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
- $deletedpipelines = $pipelinesADF | Where-Object { $pipelinesNames -notcontains $_.Name }
- #dataflows
- $dataflowsADF = Get-AzDataFactoryV2DataFlow -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
- $dataflowsTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/dataflows" }
- $dataflowsNames = $dataflowsTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40) }
- $deleteddataflow = $dataflowsADF | Where-Object { $dataflowsNames -notcontains $_.Name }
- #datasets
- Write-Host "Getting datasets"
- $datasetsADF = Get-AzDataFactoryV2Dataset -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
- $datasetsTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/datasets" }
- $datasetsNames = $datasetsTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40) }
- $deleteddataset = $datasetsADF | Where-Object { $datasetsNames -notcontains $_.Name }
- #linkedservices
- Write-Host "Getting linked services"
- $linkedservicesADF = Get-SortedLinkedServices -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
- $linkedservicesTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/linkedservices" }
- $linkedservicesNames = $linkedservicesTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
- $deletedlinkedservices = $linkedservicesADF | Where-Object { $linkedservicesNames -notcontains $_.Name }
- #Integrationruntimes
- Write-Host "Getting integration runtimes"
- $integrationruntimesADF = Get-AzDataFactoryV2IntegrationRuntime -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
- $integrationruntimesTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/integrationruntimes" }
- $integrationruntimesNames = $integrationruntimesTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
- $deletedintegrationruntimes = $integrationruntimesADF | Where-Object { $integrationruntimesNames -notcontains $_.Name }
-
- #Delete resources
- Write-Host "Deleting triggers"
- $triggersToDelete | ForEach-Object {
- Write-Host "Deleting trigger " $_.Name
- $trig = Get-AzDataFactoryV2Trigger -name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName
- if ($trig.RuntimeState -eq "Started") {
- if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
- Write-Host "Unsubscribing trigger" $_.Name "from events"
- $status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
- while ($status.Status -ne "Disabled"){
- Start-Sleep -s 15
- $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
- }
- }
- Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
- }
- Remove-AzDataFactoryV2Trigger -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
- }
- Write-Host "Deleting pipelines"
- $deletedpipelines | ForEach-Object {
- Write-Host "Deleting pipeline " $_.Name
- Remove-AzDataFactoryV2Pipeline -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
- }
- Write-Host "Deleting dataflows"
- $deleteddataflow | ForEach-Object {
- Write-Host "Deleting dataflow " $_.Name
- Remove-AzDataFactoryV2DataFlow -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
- }
- Write-Host "Deleting datasets"
- $deleteddataset | ForEach-Object {
- Write-Host "Deleting dataset " $_.Name
- Remove-AzDataFactoryV2Dataset -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
- }
- Write-Host "Deleting linked services"
- $deletedlinkedservices | ForEach-Object {
- Write-Host "Deleting Linked Service " $_.Name
- Remove-AzDataFactoryV2LinkedService -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
- }
- Write-Host "Deleting integration runtimes"
- $deletedintegrationruntimes | ForEach-Object {
- Write-Host "Deleting integration runtime " $_.Name
- Remove-AzDataFactoryV2IntegrationRuntime -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
- }
-
- if ($deleteDeployment -eq $true) {
- Write-Host "Deleting ARM deployment ... under resource group: " $ResourceGroupName
- $deployments = Get-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName
- $deploymentsToConsider = $deployments | Where { $_.DeploymentName -like "ArmTemplate_master*" -or $_.DeploymentName -like "ArmTemplateForFactory*" } | Sort-Object -Property Timestamp -Descending
- $deploymentName = $deploymentsToConsider[0].DeploymentName
-
- Write-Host "Deployment to be deleted: " $deploymentName
- $deploymentOperations = Get-AzResourceGroupDeploymentOperation -DeploymentName $deploymentName -ResourceGroupName $ResourceGroupName
- $deploymentsToDelete = $deploymentOperations | Where { $_.properties.targetResource.id -like "*Microsoft.Resources/deployments*" }
-
- $deploymentsToDelete | ForEach-Object {
- Write-host "Deleting inner deployment: " $_.properties.targetResource.id
- Remove-AzResourceGroupDeployment -Id $_.properties.targetResource.id
- }
- Write-Host "Deleting deployment: " $deploymentName
- Remove-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName -Name $deploymentName
- }
- #Start active triggers - after cleanup efforts
- Write-Host "Starting active triggers"
- $triggersToStart | ForEach-Object {
- if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
- Write-Host "Subscribing" $_.Name "to events"
- $status = Add-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
- while ($status.Status -ne "Enabled"){
- Start-Sleep -s 15
- $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
- }
- }
- Write-Host "Starting trigger" $_.Name
- Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
- }
-}
-```
## Next steps - [Continuous integration and delivery overview](continuous-integration-delivery.md)
- [Manually promote a Resource Manager template to each environment](continuous-integration-delivery-manual-promotion.md) - [Use custom parameters with a Resource Manager template](continuous-integration-delivery-resource-manager-custom-parameters.md) - [Linked Resource Manager templates](continuous-integration-delivery-linked-templates.md)-- [Using a hotfix production environment](continuous-integration-delivery-hotfix-environment.md)
+- [Using a hotfix production environment](continuous-integration-delivery-hotfix-environment.md)
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance.md
The service provides the following performance optimization features:
* [Parallel copy](#parallel-copy) * [Staged copy](#staged-copy)
-### Data Integration Units
+### <a id="data-integration-units"></a>Data Integration Units
A Data Integration Unit (DIU) is a measure that represents the power of a single unit in Azure Data Factory and Synapse pipelines. Power is a combination of CPU, memory, and network resource allocation. DIU only applies to [Azure integration runtime](concepts-integration-runtime.md#azure-integration-runtime). DIU does not apply to [self-hosted integration runtime](concepts-integration-runtime.md#self-hosted-integration-runtime). [Learn more here](copy-activity-performance-features.md#data-integration-units).
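As an illustration, a hedged sketch that reads the DIUs a completed copy activity reported in its run output (the resource names and pipeline run ID are placeholders):

```powershell
# Inspect how many DIUs each copy activity run consumed.
# Resource group, factory name, and pipeline run ID are placeholders.
$runs = Get-AzDataFactoryV2ActivityRun -ResourceGroupName "my-resource-group" `
    -DataFactoryName "my-data-factory" `
    -PipelineRunId "00000000-0000-0000-0000-000000000000" `
    -RunStartedAfter (Get-Date).AddDays(-1) `
    -RunStartedBefore (Get-Date)

$runs | Where-Object { $_.ActivityType -eq "Copy" } | ForEach-Object {
    # The copy activity output is JSON; usedDataIntegrationUnits reports the DIUs used.
    ($_.Output.ToString() | ConvertFrom-Json).usedDataIntegrationUnits
}
```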
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
+
+ Title: Managing Azure Data Factory Studio preview updates
+description: Learn how to enable/disable Azure Data Factory studio preview updates.
+++++++ Last updated : 06/21/2022++
+# Manage Azure Data Factory studio preview experience
++
+You can choose whether you would like to enable preview experiences in your Azure Data Factory.
+
+## How to enable/disable preview experience
+
+There are two ways to enable preview experiences.
+
+1. In the banner seen at the top of the screen, you can click **Open settings to learn more and opt in**.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt in option in a banner at the top of the screen.":::
+
+2. Alternatively, you can click the **Settings** button.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-2.png" alt-text="Screenshot of Azure Data Factory home page highlighting Settings gear in top right corner.":::
+
+ After opening **Settings**, you will see an option to turn on **Azure Data Factory Studio preview update**.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-3.png" alt-text="Screenshot of Settings panel highlighting button to turn on Azure Data Factory Studio preview update.":::
+
+ Toggle the button so that it shows **On** and click **Apply**.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-4.png" alt-text="Screenshot of Settings panel showing Azure Data Factory Studio preview update turned on and the Apply button in the bottom left corner.":::
+
+ Your data factory will refresh to show the preview features.
+
+ Similarly, you can disable preview features with the same steps. Click **Open settings to opt out** or click the **Settings** button and unselect **Azure Data Factory Studio preview update**.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-5.png" alt-text="Screenshot of Azure Data Factory home page with an Opt out option in a banner at the top of the screen and Settings gear in the top right corner of the screen.":::
+
+> [!NOTE]
+> Enabling/disabling preview updates will discard any unsaved changes.
+
+## Current preview updates
+
+### Dataflow data-first experimental view
+
+UI (user interface) changes have been made to mapping data flows. These changes were made to simplify and streamline the dataflow creation process so that you can focus on what your data looks like.
+The dataflow authoring experience remains the same as detailed [here](https://aka.ms/adfdataflows), except for the areas described below.
+
+#### Configuration panel
+
+The configuration panel for transformations has now been simplified. Previously, the configuration panel showed settings specific to the selected transformation.
+Now, for each transformation, the configuration panel will only have **Data Preview**, which automatically refreshes when changes are made to transformations.
+
+
+If no transformation is selected, the panel will show the pre-existing data flow configurations: **Parameters** and **Settings**.
+
+
+#### Transformation settings
+
+Settings specific to a transformation will now show in a pop-up instead of the configuration panel. With each new transformation, a corresponding pop-up will automatically appear.
+
+
+ You can also find the settings by clicking the gear button in the top right corner of the transformation activity.
+
+
+#### Data preview
+
+If debug mode is on, **Data Preview** in the configuration panel will give you an interactive snapshot of the data at each transform.
+**Data preview** now includes Elapsed time (seconds) to show how long your data preview took to load.
+Columns can be rearranged by dragging a column by its header. You can also sort columns using the arrows next to the column titles, and you can export the data preview using **Export to CSV** on the banner above the column headers.
+
+
+### Pipeline experimental view
+
+UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process.
+
+#### Adding activities
+
+You now have the option to add an activity using the add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
+
+Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success.
+
+
+#### ForEach activity container
+
+You can now view the activities contained in your ForEach activity.
+
+
+You have two options to add activities to your ForEach loop.
+1. Use the + button in your ForEach container to add an activity.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new ForEach activity container with the add button highlighted on the left side of the center of the screen.":::
+
+ Clicking this button will bring up a drop-down list of all activities that you can add.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-13.png" alt-text="Screenshot of a drop-down list in the ForEach container with all the activities listed.":::
+
+ Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the ForEach container.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-14.png" alt-text="Screenshot of the ForEach container with three activities in the center of the container.":::
+
+> [!NOTE]
+> If your ForEach container includes more than 5 activities, only the first 4 will be shown in the container preview.
+
+2. Use the edit button in your ForEach container to see everything within the container. You can use the canvas to edit or add to your pipeline.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-15.png" alt-text="Screenshot of the ForEach container with the edit button highlighted on the right side of a box in the center of the screen.":::
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-16.png" alt-text="Screenshot of the inside of the ForEach container with three activities linked together.":::
+
+ Add more activities by dragging them onto the canvas, or click the add button on the rightmost activity to bring up a drop-down list of activities.
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-17.png" alt-text="Screenshot of the Add activity button in the bottom left corner of the right most activity.":::
+
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right most activity.":::
+
+ Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the ForEach container.
+
+## Provide feedback
+
+We want to hear from you! If you see this pop-up, please provide feedback, and let us know your thoughts.
+
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-ordered.md
Previously updated : 03/22/2022 Last updated : 06/06/2022 #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure.
databox Data Box Heavy Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-deploy-picked-up.md
You are now ready to ship your device back.
1. Ensure that the device is powered off and all the cables are removed. Spool and securely place the 4 power cords in the tray that you can access from the back of the device. 2. The device ships LTL freight via FedEx in the US and DHL in the EU.
- 1. Reach out to [Data Box Operations](mailto:DataBoxOps@microsoft.com) to inform regarding the pickup and to get the return shipping label.
+ 1. Reach out to [Data Box Operations](mailto:adbops@microsoft.com) to inform them about the pickup and to get the return shipping label.
2. Call the local number for your shipping carrier to schedule the pickup. 3. Ensure that the shipping label is displayed prominently on the exterior of the shipment. 4. Make sure that the old shipping labels from the previous shipment are removed from the device.
databox Data Box Heavy Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-quickstart-portal.md
Previously updated : 11/04/2020 Last updated : 06/13/2022 #Customer intent: As an IT admin, I need to quickly deploy Data Box Heavy so as to import data into Azure.
This operation takes about 15-20 minutes to complete.
1. Remove the cables and return them to the tray at the back of the device. 2. Schedule a pickup with your regional carrier.
-3. Reach out to [Data Box Operations](mailto:DataBoxOps@microsoft.com) to inform regarding the pickup and to get the return shipping label.
+3. Reach out to [Data Box Operations](mailto:adbops@microsoft.com) to inform them about the pickup and to get the return shipping label.
4. The return shipping label should be visible on the front clear panel of the device. ## Verify data
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
If you've integrated Azure Kubernetes Service with Defender for Cloud, you can t
`kubectl get pods --namespace=asc-alerttest-662jfi039n`
-For more information about defending your Kubernetes nodes and clusters, see [Introduction to Microsoft Defender for Containers](defender-for-containers-introduction.md)
+For more information about defending your Kubernetes nodes and clusters, see [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md)
## Next steps This article introduced you to the alerts validation process. Now that you're familiar with this validation, try the following articles:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 06/20/2022 Last updated : 06/21/2022 + # Security alerts - a reference guide This article lists the security alerts you might get from Microsoft Defender for Cloud and any Microsoft Defender plans you've enabled. The alerts shown in your environment depend on the resources and services you're protecting, and your customized configuration.
Microsoft Defender for Containers provides security alerts on the cluster level
| **MicroBurst exploitation toolkit used to extract keys from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultKeysREST) | MicroBurst's exploitation toolkit was used to extract keys from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | - | High | | **MicroBurst exploitation toolkit used to extract keys to your storage accounts**<br>(ARM_MicroBurst.AZStorageKeysREST) | MicroBurst's exploitation toolkit was used to extract keys to your storage accounts. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | Collection | High | | **MicroBurst exploitation toolkit used to extract secrets from your Azure key vaults**<br>(ARM_MicroBurst.AzKeyVaultSecretsREST) | MicroBurst's exploitation toolkit was used to extract secrets from your Azure key vaults. This was detected by analyzing Azure Activity logs and resource management operations in your subscription. | - | High |
-| **Permissions granted for an RBAC role in an unusual way for your Azure environment (Preview)**<br>(ARM_AnomalousRBACRoleAssignment) | Microsoft Defender for Resource Manager detected an RBAC role assignment that's unusual when compared with other assignments performed by the same assigner / performed for the same assignee / in your tenant due to the following anomalies: assignment time, assigner location, assigner, authentication method, assigned entities, client software used, assignment extent. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to grant permissions to an additional user account they own.|Lateral Movement, Defense Evasion|Medium|
| **PowerZure exploitation toolkit used to elevate access from Azure AD to Azure**<br>(ARM_PowerZure.AzureElevatedPrivileges) | PowerZure exploitation toolkit was used to elevate access from AzureAD to Azure. This was detected by analyzing Azure Resource Manager operations in your tenant. | - | High | | **PowerZure exploitation toolkit used to enumerate resources**<br>(ARM_PowerZure.GetAzureTargets) | PowerZure exploitation toolkit was used to enumerate resources on behalf of a legitimate user account in your organization. This was detected by analyzing Azure Resource Manager operations in your subscription. | Collection | High | | **PowerZure exploitation toolkit used to enumerate storage containers, shares, and tables**<br>(ARM_PowerZure.ShowStorageContent) | PowerZure exploitation toolkit was used to enumerate storage shares, tables, and containers. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
Microsoft Defender for Containers provides security alerts on the cluster level
| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW ΓÇô Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected a suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used Azure portal to manage for the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium | | **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low |
+| **Suspicious Azure role assignment detected (Preview)**<br>(ARM_AnomalousRBACRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious Azure role assignment, or one performed using PIM (Privileged Identity Management), in your tenant, which might indicate that an account in your organization was compromised. The identified operations are designed to allow administrators to grant principals access to Azure resources. While this activity may be legitimate, a threat actor might utilize role assignment to escalate their permissions allowing them to advance their attack. |Lateral Movement, Defense Evasion|Low (PIM) / High|
| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium | | **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium | | **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |-
+| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to an additional user account under their control. |  Lateral Movement, Defense Evasion | High |
## <a name="alerts-dns"></a>Alerts for DNS
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
Last updated 03/01/2022
-# Introduction to Microsoft Defender for Azure Cosmos DB
+# Overview of Microsoft Defender for Azure Cosmos DB
Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities, or malicious insiders. Defender for Azure Cosmos DB uses advanced threat detection capabilities, and [Microsoft Threat Intelligence](https://www.microsoft.com/insidetrack/microsoft-uses-threat-intelligence-to-protect-detect-and-respond-to-threats) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
-You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-defender-for-cosmos.md) at either the subscription level, or the resource level.
+You can [enable protection for all your databases](quickstart-enable-database-protections.md) (recommended), or [enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md) at either the subscription level, or the resource level.
Defender for Azure Cosmos DB continually analyzes the telemetry stream generated by the Azure Cosmos DB service. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Defender for Cloud with the details of the suspicious activity, the relevant investigation steps, remediation actions, and security recommendations.
Threat intelligence security alerts are triggered for:
In this article, you learned about Microsoft Defender for Azure Cosmos DB. > [!div class="nextstepaction"]
-> [Enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-defender-for-cosmos.md)
+> [Enable Microsoft Defender for Azure Cosmos DB](quickstart-enable-database-protections.md)
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
You can specify the workspace and region where data collected from your machines
> [!NOTE]
-> **Microsoft Defender for Storage** stores artifacts regionally according to the location of the related Azure resource. Learn more in [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md).
+> **Microsoft Defender for Storage** stores artifacts regionally according to the location of the related Azure resource. Learn more in [Overview of Microsoft Defender for Storage](defender-for-storage-introduction.md).
## Data consumption
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-app-service-introduction.md
-# Protect your web apps and APIs
+# Overview of Defender for App Service to protect your Azure App Service web apps and APIs
## Prerequisites
As a cloud-native solution, Defender for App Service can identify attack methodo
The log data and the infrastructure together can tell the story: from a new attack circulating in the wild to compromises in customer machines. Therefore, even if Microsoft Defender for App Service is deployed after a web app has been exploited, it might be able to detect ongoing attacks. - ## What threats can Defender for App Service detect? ### Threats by MITRE ATT&CK tactics
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
It's a security basic to know and make sure your workloads are secure, and it st
Defender for Cloud continuously discovers new resources that are being deployed across your workloads and assesses whether they're configured according to security best practices. If not, they're flagged and you get a prioritized list of recommendations for what you need to fix. Recommendations help you reduce the attack surface across each of your resources.
-The list of recommendations is enabled and supported by the Azure Security Benchmark. This Microsoft-authored, Azure-specific, benchmark provides a set of guidelines for security and compliance best practices based on common compliance frameworks. Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azure/introduction).
+The list of recommendations is enabled and supported by the Azure Security Benchmark. This Microsoft-authored, Azure-specific, benchmark provides a set of guidelines for security and compliance best practices based on common compliance frameworks. Learn more in [Azure Security Benchmark introduction](/security/benchmark/azure/introduction).
In this way, Defender for Cloud enables you not just to set security policies, but to *apply secure configuration standards across your resources*.
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/s
### What registry types are scanned? What types are billed?
-For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](defender-for-container-registries-introduction.md#availability).
+For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md#additional-information).
If you connect unsupported registries to your Azure subscription, Defender for Containers won't scan them and won't bill you for them.
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
+
+ Title: Enable Microsoft Defender for Azure Cosmos DB
+description: Learn how to enable Microsoft Defender for Azure Cosmos DB's enhanced security features.
++ Last updated : 06/07/2022++
+# Enable Microsoft Defender for Azure Cosmos DB
+
+ Microsoft Defender for Azure Cosmos DB protection is available at both the [subscription level](#enable-database-protection-at-the-subscription-level) and the [resource level](#enable-microsoft-defender-for-azure-cosmos-db-at-the-resource-level). You can enable Microsoft Defender for Cloud on your subscription to protect all database types it contains, including Microsoft Defender for Azure Cosmos DB (recommended). You can also enable Microsoft Defender for Azure Cosmos DB at the resource level to protect a specific Azure Cosmos DB account.
+
+## Prerequisites
+
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+## Enable database protection at the subscription level
+
+Enabling protection at the subscription level enables Microsoft Defender for Cloud protection for all database types in your subscription (recommended).
+
+You can enable Microsoft Defender for Cloud protection on your subscription in order to protect all database types, for example, Azure Cosmos DB, Azure SQL Database, Azure SQL servers on machines, and open-source relational databases. You can also select specific resource types to protect when you configure your plan.
+
+When you enable Microsoft Defender for Cloud's enhanced security features on your subscription, Microsoft Defender for Azure Cosmos DB is automatically enabled for all of your Azure Cosmos DB accounts.
+
+**To enable database protection at the subscription level**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant subscription.
+
+1. Locate **Databases** and toggle the switch to **On**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/protection-type.png" alt-text="Screenshot showing the available protections you can enable." lightbox="media/quickstart-enable-defender-for-cosmos/protection-type-expanded.png":::
+
+1. Select **Save**.
+
+**To select specific resource types to protect when you configure your plan**:
+
+1. Follow steps 1 - 4 above.
+
+1. Select **Select types**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/select-type.png" alt-text="Screenshot showing where the option to select the type is located.":::
+
+1. Toggle the desired resource type switches to **On**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/resource-type.png" alt-text="Screenshot showing the available resources you can enable.":::
+
+1. Select **Confirm**.
+
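+If you prefer to script the subscription-level enablement, the following is a minimal PowerShell sketch. It assumes the Az.Security module; the `CosmosDbs` pricing name is an assumption you can verify in the output of `Get-AzSecurityPricing`:
+
+```powershell
+# List the available Defender plans and their current pricing tiers
+Get-AzSecurityPricing | Select-Object Name, PricingTier
+
+# Enable the Defender plan for Azure Cosmos DB on the current subscription
+# ("CosmosDbs" is the assumed plan name; confirm it in the listing above)
+Set-AzSecurityPricing -Name "CosmosDbs" -PricingTier "Standard"
+```
+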
+## Enable Microsoft Defender for Azure Cosmos DB at the resource level
+
+You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB account through the Azure portal, PowerShell, or an Azure Resource Manager (ARM) template.
+
+**To enable Microsoft Defender for Cloud for a specific Azure Cosmos DB account**:
+
+### [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **your Azure Cosmos DB account** > **Settings**.
+
+1. Select **Microsoft Defender for Cloud**.
+
+1. Select **Enable Microsoft Defender for Azure Cosmos DB**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/enable-storage.png" alt-text="Screenshot of the option to enable Microsoft Defender for Azure Cosmos DB on your specified Azure Cosmos DB account.":::
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Install the [Az.Security](https://www.powershellgallery.com/packages/Az.Security/1.1.1) module.
+
+1. Call the [Enable-AzSecurityAdvancedThreatProtection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection) command.
+
+ ```powershell
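+    # Enable Microsoft Defender for the specified Azure Cosmos DB account;
+    # the subscription ID, resource group, and account name are placeholders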
+ Enable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<Your subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDb/databaseAccounts/myCosmosDBAccount/"
+ ```
+
+1. Verify the Microsoft Defender for Azure Cosmos DB setting for your Azure Cosmos DB account with the PowerShell [Get-AzSecurityAdvancedThreatProtection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection) cmdlet.
+
+ ```powershell
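+    # Check whether Microsoft Defender is enabled for the specified Azure
+    # Cosmos DB account (same placeholder values as above)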
+ Get-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<Your subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.DocumentDb/databaseAccounts/myCosmosDBAccount/"
+ ```
+
+### [ARM template](#tab/arm-template)
+
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
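+
+If you've downloaded a copy of the template, the deployment itself can be scripted. Here's a minimal PowerShell sketch, assuming the Az.Resources module; the resource group and template file names are placeholders:
+
+```powershell
+# Deploy a local copy of the quickstart template into an existing resource group
+New-AzResourceGroupDeployment -ResourceGroupName "myResourceGroup" `
+    -TemplateFile ".\azuredeploy.json"
+```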
+++
+## Simulate security alerts from Microsoft Defender for Azure Cosmos DB
+
+A full list of [supported alerts](alerts-reference.md) is available in the reference table of all Defender for Cloud security alerts.
+
+You can use sample Microsoft Defender for Azure Cosmos DB alerts to evaluate the plan's value and capabilities. Sample alerts also validate any configurations you've made for your security alerts (such as SIEM integrations, workflow automation, and email notifications).
+
+**To create sample alerts from Microsoft Defender for Azure Cosmos DB**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a Subscription Contributor user.
+
+1. Navigate to the Alerts page.
+
+1. Select **Create sample alerts**.
+
+1. Select the subscription.
+
+1. Select the relevant Microsoft Defender plan(s).
+
+1. Select **Create sample alerts**.
+
+ :::image type="content" source="media/quickstart-enable-defender-for-cosmos/sample-alerts.png" alt-text="Screenshot showing the order needed to create an alert.":::
+
+After a few minutes, the alerts will appear in the security alerts page. They'll also appear anywhere you've configured to receive your Microsoft Defender for Cloud security alerts, for example, connected SIEMs and email notifications.
+
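+To confirm from a shell that the sample alerts were generated, you can list recent alerts with PowerShell. This is a minimal sketch, assuming the Az.Security module; output property names can vary between module versions:
+
+```powershell
+# List recent Defender for Cloud alerts in the current subscription;
+# sample alerts are identifiable by their display names
+Get-AzSecurityAlert | Select-Object AlertDisplayName, AlertType
+```
+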
+## Next steps
+
+In this article, you learned how to enable Microsoft Defender for Azure Cosmos DB, and how to simulate security alerts.
+
+> [!div class="nextstepaction"]
+> [Automate responses to Microsoft Defender for Cloud triggers](workflow-automation.md)
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
-# Introduction to Microsoft Defender for open-source relational databases
+# Overview of Microsoft Defender for open-source relational databases
This plan brings threat protections for the following open-source relational databases:
defender-for-cloud Defender For Databases Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-usage.md
Microsoft Defender for Cloud detects anomalous activities indicating unusual and
To get alerts from the Microsoft Defender plan, you'll first need to enable it as [shown below](#enable-enhanced-security).
-Learn more about this Microsoft Defender plan in [Introduction to Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md).
+Learn more about this Microsoft Defender plan in [Overview of Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md).
## Enable enhanced security
defender-for-cloud Defender For Dns Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-alerts.md
+
+ Title: Respond to Microsoft Defender for DNS alerts - Microsoft Defender for Cloud
+description: Learn best practices for responding to alerts that indicate security risks in DNS services.
Last updated : 6/21/2022+++++
+# Respond to Microsoft Defender for DNS alerts
+
+When you receive an alert from Microsoft Defender for DNS, we recommend you investigate and respond to the alert as described below. Microsoft Defender for DNS protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert.
+
+## Step 1. Contact
+
+1. Contact the resource owner to determine whether the behavior was expected or intentional.
+1. If the activity is expected, dismiss the alert.
+1. If the activity is unexpected, treat the resource as potentially compromised and mitigate as described in the next step.
+
+## Step 2. Immediate mitigation
+
+1. Isolate the resource from the network to prevent lateral movement (see the sketch after this list).
+1. Run a full antimalware scan on the resource, following any resulting remediation advice.
+1. Review installed and running software on the resource, removing any unknown or unwanted packages.
+1. Revert the machine to a known good state, reinstalling the operating system if required, and restore software from a verified malware-free source.
+1. Resolve any Microsoft Defender for Cloud recommendations for the machine, remediating highlighted security issues to prevent future breaches.
+
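+The following is a minimal PowerShell sketch of the isolation step, assuming the compromised resource is an Azure VM and the Az.Network module is installed; the resource group, NIC, NSG, and location names are placeholders:
+
+```powershell
+# Get the network interface of the potentially compromised VM
+$nic = Get-AzNetworkInterface -ResourceGroupName "myResourceGroup" -Name "myVmNic"
+
+# Build a deny-all inbound rule; add a matching outbound rule the same way if needed
+$rule = New-AzNetworkSecurityRuleConfig -Name "deny-all-inbound" -Access Deny `
+    -Direction Inbound -Priority 100 -Protocol * -SourcePortRange * `
+    -DestinationPortRange * -SourceAddressPrefix * -DestinationAddressPrefix *
+
+# Create a quarantine NSG that contains only the deny rule
+$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" `
+    -Location "eastus" -Name "quarantine-nsg" -SecurityRules $rule
+
+# Attach the quarantine NSG to the VM's network interface
+$nic.NetworkSecurityGroup = $nsg
+$nic | Set-AzNetworkInterface
+```
+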
+## Next steps
+
+Now that you know how to respond to DNS alerts, find out more about how to manage alerts.
+
+> [!div class="nextstepaction"]
+> [Manage security alerts](managing-and-responding-alerts.md)
+
+For related material, see the following articles:
+
+- To [export Defender for Cloud alerts](export-to-siem.md) to your centralized security information and event management (SIEM) system, such as Microsoft Sentinel, any third-party SIEM, or any other external tool.
- To [send alerts in real time](continuous-export.md) to Log Analytics or Event Hubs and create automated processes to analyze and respond to security alerts.
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
-# Introduction to Microsoft Defender for DNS
+# Overview of Microsoft Defender for DNS
Microsoft Defender for DNS provides an additional layer of protection for resources that use Azure DNS's [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) capability.
Microsoft Defender for DNS doesn't use any agents.
To protect your DNS layer, enable Microsoft Defender for DNS for each of your subscriptions as described in [Enable enhanced protections](enable-enhanced-security.md). -
-## Respond to Microsoft Defender for DNS alerts
-
-When you receive an alert from Microsoft Defender for DNS, we recommend you investigate and respond to the alert as described below. Microsoft Defender for DNS protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert.
--
-### Step 1. Contact
-
-1. Contact the resource owner to determine whether the behavior was expected or intentional.
-1. If the activity is expected, dismiss the alert.
-1. If the activity is unexpected, treat the resource as potentially compromised and mitigate as described in the next step.
-
-### Step 2. Immediate mitigation
-
-1. Isolate the resource from the network to prevent lateral movement.
-1. Run a full antimalware scan on the resource, following any resulting remediation advice.
-1. Review installed and running software on the resource, removing any unknown or unwanted packages.
-1. Revert the machine to a known good state, reinstalling the operating system if required, and restore software from a verified malware-free source.
-1. Resolve any Microsoft Defender for Cloud recommendations for the machine, remediating highlighted security issues to prevent future breaches.
-- ## Next steps In this article, you learned about Microsoft Defender for DNS.
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md
-# Introduction to Microsoft Defender for Key Vault
+# Overview of Microsoft Defender for Key Vault
Azure Key Vault is a cloud service that safeguards encryption keys and secrets like certificates, connection strings, and passwords.
defender-for-cloud Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-introduction.md
-# Introduction to Microsoft Defender for Resource Manager
+# Overview of Microsoft Defender for Resource Manager
[Azure Resource Manager](../azure-resource-manager/management/overview.md) is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment.
Microsoft Defender for Resource Manager protects against issues including:
A full list of the alerts provided by Microsoft Defender for Resource Manager is on the [alerts reference page](alerts-reference.md#alerts-resourcemanager). -
- ## How to investigate alerts from Microsoft Defender for Resource Manager
-
-Security alerts from Microsoft Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events.
-
-Learn more about [Azure Activity log](../azure-monitor/essentials/activity-log.md).
-
-To investigate security alerts from Microsoft Defender for Resource
-
-1. Open Azure Activity log.
-
- :::image type="content" source="media/defender-for-resource-manager-introduction/opening-azure-activity-log.png" alt-text="How to open Azure Activity log.":::
-
-1. Filter the events to:
- - The subscription mentioned in the alert
- - The timeframe of the detected activity
- - The related user account (if relevant)
-
-1. Look for suspicious activities.
-
-> [!TIP]
-> For a better, richer investigation experience, stream your Azure activity logs to Microsoft Sentinel as described in [Connect data from Azure Activity log](../sentinel/data-connectors-reference.md#azure-activity).
--- ## Next steps In this article, you learned about Microsoft Defender for Resource Manager.
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
When you receive an alert from Microsoft Defender for Resource Manager, we recommend you investigate and respond to the alert as described below. Microsoft Defender for Resource Manager protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert. - ## Step 1. Contact 1. Contact the resource owner to determine whether the behavior was expected or intentional. 1. If the activity is expected, dismiss the alert. 1. If the activity is unexpected, treat the related user accounts, subscriptions, and virtual machines as compromised and mitigate as described in the following step.
-## Step 2. Immediate mitigation
+## Step 2. Investigate alerts from Microsoft Defender for Resource Manager
+
+Security alerts from Microsoft Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events.
+
+Learn more about [Azure Activity log](../azure-monitor/essentials/activity-log.md).
+
+To investigate security alerts from Microsoft Defender for Resource Manager:
+
+1. Open Azure Activity log.
+
+ :::image type="content" source="media/defender-for-resource-manager-introduction/opening-azure-activity-log.png" alt-text="How to open Azure Activity log.":::
+
+1. Filter the events to:
+ - The subscription mentioned in the alert
+ - The timeframe of the detected activity
+ - The related user account (if relevant)
+
+1. Look for suspicious activities.
+
+> [!TIP]
+> For a better, richer investigation experience, stream your Azure activity logs to Microsoft Sentinel as described in [Connect data from Azure Activity log](../sentinel/data-connectors-reference.md#azure-activity).
+
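+You can run the same filtered query from PowerShell. This is a minimal sketch, assuming the Az.Monitor module; the time window and caller are placeholders to replace with the values from the alert:
+
+```powershell
+# Pull subscription-level events for the alert's timeframe and related user
+Get-AzActivityLog -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) `
+    -Caller "user@contoso.com" -MaxRecord 1000 |
+    Select-Object EventTimestamp, OperationName, Caller, ResourceGroupName
+```
+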
+## Step 3. Immediate mitigation
1. Remediate compromised user accounts: - If they're unfamiliar, delete them as they may have been created by a threat actor
When you receive an alert from Microsoft Defender for Resource Manager, we recom
- Run a full antimalware scan on the machine - Reimage the machines from a malware-free source - ## Next steps This page explained the process of responding to an alert from Microsoft Defender for Resource Manager. For related information see the following pages: -- [Introduction to Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md)
+- [Overview of Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md)
- [Suppress security alerts](alerts-suppression-rules.md) - [Continuously export Defender for Cloud data](continuous-export.md)
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
description: Learn about the benefits and features of Microsoft Defender for Ser
Last updated 06/15/2022
-# Introduction to Microsoft Defender for Servers
+# Overview of Microsoft Defender for Servers
Microsoft Defender for Servers is one of the enhanced security features of Microsoft Defender for Cloud. Use it to add threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or on-premises environments.
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
Title: Microsoft Defender for SQL - the benefits and features
-description: Learn about the benefits and features of Microsoft Defender for SQL.
+description: Learn about the benefits and features of Microsoft Defender for Azure SQL.
Last updated 06/01/2022
-# Introduction to Microsoft Defender for SQL
+# Overview of Microsoft Defender for SQL
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Microsoft Defender for Cloud's [data security package](/azure/azure-sql/database/azure-defender-for-sql) to protect your SQL estate regardless of where it's located (Azure, multicloud, or hybrid environments). Microsoft Defender for SQL includes functions that can be used to discover and mitigate potential database vulnerabilities. Defender for SQL can also detect anomalous activities that may be an indication of a threat to your databases.
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
As you review your assessment results, you can mark results as being an acceptab
## Export results
-Use the [Continuous export](continuous-export.md) feature of Microsoft Defender for Cloud to export vulnerability assessment findings to Azure Event Hub or to Log Analytics workspace.
+Use the [Continuous export](continuous-export.md) feature of Microsoft Defender for Cloud to export vulnerability assessment findings to Azure Event Hubs or to a Log Analytics workspace.
## View vulnerabilities in graphical, interactive reports
You can specify the region where your SQL Vulnerability Assessment data will be
## Next steps
-Learn more about Defender for Cloud's protections for SQL resources in [Introduction to Microsoft Defender for SQL](defender-for-sql-introduction.md).
+Learn more about Defender for Cloud's protections for SQL resources in [Overview of Microsoft Defender for SQL](defender-for-sql-introduction.md).
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
description: Learn about the benefits and features of Microsoft Defender for Sto
Last updated 06/16/2022 -
-# Introduction to Microsoft Defender for Storage
+# Overview of Microsoft Defender for Storage
**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
When you enable this Defender plan on a subscription, all existing Azure Storage
You can enable Defender for Storage in any of several ways, described in [Set up Microsoft Defender for Cloud](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) in the Azure Storage documentation.
-## Trigger a test alert for Microsoft Defender for Storage
-
-To test the security alerts from Microsoft Defender for Storage in your environment, generate the alert "Access from a Tor exit node to a storage account" with the following steps:
-
-1. Open a storage account with Microsoft Defender for Storage enabled.
-1. From the sidebar, select "Containers" and open an existing container or create a new one.
-
- :::image type="content" source="media/defender-for-storage-introduction/opening-storage-container.png" alt-text="Opening a blob container from an Azure Storage account." lightbox="media/defender-for-storage-introduction/opening-storage-container.png":::
-
-1. Upload a file to that container.
-
- > [!CAUTION]
- > Don't upload a file containing sensitive data.
-
-1. Use the context menu on the uploaded file to select "Generate SAS".
-
- :::image type="content" source="media/defender-for-storage-introduction/generate-sas.png" alt-text="The generate SAS option for a file in a blob container.":::
-
-1. Leave the default options and select **Generate SAS token and URL**.
-
-1. Copy the generated SAS URL.
-
-1. On your local machine, open the Tor browser.
-
- > [!TIP]
- > You can download Tor from the Tor Project site [https://www.torproject.org/download/](https://www.torproject.org/download/).
-
-1. In the Tor browser, navigate to the SAS URL.
-
-1. Download the file you uploaded in step 3.
-
- Within two hours you'll get the following security alert from Defender for Cloud:
-
- :::image type="content" source="media/defender-for-storage-introduction/tor-access-alert-storage.png" alt-text="Security alert regarding access from a Tor exit node.":::
--- ## FAQ - Microsoft Defender for Storage -- [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)-- [Can I exclude a specific Azure Storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)-- [How do I configure automatic responses for security alerts?](#how-do-i-configure-automatic-responses-for-security-alerts)
+- [Overview of Microsoft Defender for Storage](#overview-of-microsoft-defender-for-storage)
+ - [Availability](#availability)
+ - [What are the benefits of Microsoft Defender for Storage?](#what-are-the-benefits-of-microsoft-defender-for-storage)
+ - [Security threats in cloud-based storage services](#security-threats-in-cloud-based-storage-services)
+ - [What kind of alerts does Microsoft Defender for Storage provide?](#what-kind-of-alerts-does-microsoft-defender-for-storage-provide)
+ - [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis)
+ - [Enable Defender for Storage](#enable-defender-for-storage)
+ - [FAQ - Microsoft Defender for Storage](#faqmicrosoft-defender-for-storage)
+ - [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)
+ - [Can I exclude a specific Azure Storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)
+ - [How do I configure automatic responses for security alerts?](#how-do-i-configure-automatic-responses-for-security-alerts)
+ - [Next steps](#next-steps)
### How do I estimate charges at the account level?
defender-for-cloud Defender For Storage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md
+
+ Title: Trigger test alert for Defender for Storage - Microsoft Defender for Cloud
+description: Learn how to create a test alert for Defender for Storage.
++ Last updated : 06/16/2022+++
+# Trigger a test alert for Microsoft Defender for Storage
+
+After you enable Defender for Storage, you can create a test alert to demonstrate how Defender for Storage recognizes and alerts on security risks.
+
+## Demonstrate Defender for Storage alerts
+
+To test the security alerts from Microsoft Defender for Storage in your environment, generate the alert "Access from a Tor exit node to a storage account" with the following steps:
+
+1. Open a storage account with [Microsoft Defender for Storage enabled](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud).
+1. From the sidebar, select "Containers" and open an existing container or create a new one.
+
+ :::image type="content" source="media/defender-for-storage-introduction/opening-storage-container.png" alt-text="Opening a blob container from an Azure Storage account." lightbox="media/defender-for-storage-introduction/opening-storage-container.png":::
+
+1. Upload a file to that container.
+
+ > [!CAUTION]
+ > Don't upload a file containing sensitive data.
+
+1. Use the context menu on the uploaded file to select "Generate SAS".
+
+ :::image type="content" source="media/defender-for-storage-introduction/generate-sas.png" alt-text="The generate SAS option for a file in a blob container.":::
+
+1. Leave the default options and select **Generate SAS token and URL**.
+
+1. Copy the generated SAS URL.
+
+1. On your local machine, open the Tor browser.
+
+ > [!TIP]
+ > You can download Tor from the Tor Project site [https://www.torproject.org/download/](https://www.torproject.org/download/).
+
+1. In the Tor browser, navigate to the SAS URL.
+
+1. Download the file you uploaded in step 3.
+
+ Within two hours you'll get the following security alert from Defender for Cloud:
+
+ :::image type="content" source="media/defender-for-storage-introduction/tor-access-alert-storage.png" alt-text="Security alert regarding access from a Tor exit node.":::
+
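+If you want to script the upload and SAS generation (steps 3-6 above), here's a minimal PowerShell sketch, assuming the Az.Storage module; the account, key, container, and file names are placeholders:
+
+```powershell
+# Build a storage context for the protected account
+$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" `
+    -StorageAccountKey "<account-key>"
+
+# Upload a harmless test file; never use a file containing sensitive data
+Set-AzStorageBlobContent -Context $ctx -Container "testcontainer" `
+    -File ".\test.txt" -Blob "test.txt"
+
+# Generate a short-lived, read-only SAS URL for the uploaded blob
+New-AzStorageBlobSASToken -Context $ctx -Container "testcontainer" -Blob "test.txt" `
+    -Permission r -ExpiryTime (Get-Date).AddHours(2) -FullUri
+```
+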
+## Next steps
+
+For information about how to use Defender for Cloud alerts in your security management processes:
+
+- [The full list of Microsoft Defender for Storage alerts](alerts-reference.md#alerts-azurestorage)
+- [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md)
+- [Save Storage telemetry for investigation](../azure-monitor/essentials/diagnostic-settings.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 03/22/2022 Last updated : 06/19/2022 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint securit
> [!TIP] > Originally launched as **Windows Defender ATP**, in 2019, this EDR product was renamed **Microsoft Defender ATP**. >
-> At Ignite 2020, we launched the [Microsoft Defender for Cloud XDR suite](https://www.microsoft.com/security/business/threat-protection), and this EDR component was renamed **Microsoft Defender for Endpoint**.
-
+> At Ignite 2020, we launched the [Microsoft Defender for Cloud XDR suite](https://www.microsoft.com/security/business/threat-protection), and this EDR component was renamed **Microsoft Defender for Endpoint (MDE)**.
## Availability
Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint securit
| Required roles and permissions: | * To enable/disable the integration: **Security admin** or **Owner**<br>* To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts | - ## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud Microsoft Defender for Endpoint protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or AWS. Protections include: - **Advanced post-breach detection sensors**. Defender for Endpoint's sensors collect a vast array of behavioral signals from your machines. -- **Vulnerability assessment from the Microsoft threat and vulnerability management solution**. With Microsoft Defender for Endpoint enabled, Defender for Cloud can show vulnerabilities discovered by the threat and vulnerability management module and also offer this module as a supported vulnerability assessment solution. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
+- **Vulnerability assessment from the Microsoft threat and vulnerability management solution**. With Microsoft Defender for Endpoint installed, Defender for Cloud can show vulnerabilities discovered by the threat and vulnerability management module and also offer this module as a supported vulnerability assessment solution. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
This module also brings the software inventory features described in [Access a software inventory](asset-inventory.md#access-a-software-inventory) and can be automatically enabled for supported machines with [the auto deploy settings](auto-deploy-vulnerability-assessment.md).
When you use Defender for Cloud to monitor your machines, a Defender for Endpoin
- **Location:** Data collected by Defender for Endpoint is stored in the geo-location of the tenant as identified during provisioning. Customer data - in pseudonymized form - may also be stored in the central storage and processing systems in the United States. After you've configured the location, you can't change it. If you have your own license for Microsoft Defender for Endpoint and need to move your data to another location, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) to reset the tenant. - **Moving subscriptions:** If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). - ## Enable the Microsoft Defender for Endpoint integration ### Prerequisites
Confirm that your machine meets the necessary requirements for Defender for Endp
> [!IMPORTANT] > Defender for Cloud's integration with Microsoft Defender for Endpoint is enabled by default. So when you enable enhanced security features, you give consent for Microsoft Defender for Servers to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
+1. For Windows servers, make sure that your servers meet the requirements for [onboarding Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#windows-server-2012-r2-and-windows-server-2016).
+ 1. If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+### Enable the integration
+### [**Windows**](#tab/windows)
+[The new MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) doesn't use or require installation of the Log Analytics agent. The unified solution is automatically deployed for all Windows servers connected through Azure Arc and multicloud servers connected through the multicloud connectors, except for Windows 2012 R2 and 2016 servers on Azure that are protected by Defender for Servers Plan 2. You can choose to deploy the MDE unified solution to those machines.
-### Enable the integration
+You'll deploy Defender for Endpoint to your Windows machines in one of two ways, depending on whether you've already deployed it to them:
-### [**Windows**](#tab/windows)
+- [Users with Defender for Servers enabled and Microsoft Defender for Endpoint deployed](#users-with-defender-for-servers-enabled-and-microsoft-defender-for-endpoint-deployed)
+- [Users who never enabled the integration with Microsoft Defender for Endpoint](#users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows)
+
+### Users with Defender for Servers enabled and Microsoft Defender for Endpoint deployed
+
+If you've already enabled the integration with **Defender for Endpoint**, you have complete control over when and whether to deploy the MDE unified solution to your **Windows** machines.
1. From Defender for Cloud's menu, select **Environment settings** and select the subscription with the Windows machines that you want to receive Defender for Endpoint.
-1. Select **Integrations**.
+1. Select **Integrations**. You'll know that the integration is enabled if the checkbox for **Allow Microsoft Defender for Endpoint to access my data** is selected as shown:
-1. Select **Allow Microsoft Defender for Endpoint to access my data**, and select **Save**.
+ :::image type="content" source="media/integration-defender-for-endpoint/unified-solution-enabled.png" alt-text="The integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint, is enabled." lightbox="media/integration-defender-for-endpoint/unified-solution-enabled.png":::
+
+ > [!NOTE]
+ > If it isn't selected, use the instructions in [Users who've never enabled the integration with Microsoft Defender for Endpoint for Windows](#users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows).
+
+1. To deploy the MDE unified solution to your Windows Server 2012 R2 and 2016 machines:
+
+ 1. Select **Enable unified solution**.
+ 1. Select **Save**.
+ 1. In the confirmation prompt, verify the information and select **Enable** to continue.
- :::image type="content" source="./media/integration-defender-for-endpoint/enable-integration-with-edr.png" alt-text="Enable the integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint":::
+    :::image type="content" source="./medi…" alt-text="MDE unified solution for Windows Server 2012 R2 and 2016 machines":::
+
+ Microsoft Defender for Cloud will:
+
+ - Stop the existing MDE process in the Log Analytics agent that collects data for Defender for Servers.
+ - Install the MDE unified solution for all existing and new Windows Server 2012 R2 and 2016 machines.
+  - Remove the **Enable unified solution** option from the Integrations options.
Microsoft Defender for Cloud will automatically onboard your machines to Microsoft Defender for Endpoint. Onboarding might take up to 12 hours. For new machines created after the integration has been enabled, onboarding takes up to an hour.
+ > [!NOTE]
+ > If you choose not to deploy the MDE unified solution to your Windows 2012 R2 and 2016 servers in Defender for Servers Plan 2 and then downgrade Defender for Servers to Plan 1, the MDE unified solution is not deployed to those servers so that your existing deployment is not changed without your explicit consent.
+
+### Users who never enabled the integration with Microsoft Defender for Endpoint for Windows
+
+If you've never enabled the integration for Windows, the **Allow Microsoft Defender for Endpoint to access my data** option will enable Defender for Cloud to deploy Defender for Endpoint to *both* your Windows and Linux machines.
+
+1. From Defender for Cloud's menu, select **Environment settings** and select the subscription with the machines that you want to receive Defender for Endpoint.
+
+1. Select **Integrations**.
+
+1. Select **Allow Microsoft Defender for Endpoint to access my data**, and select **Save**.
+
+The MDE unified solution is deployed to all of the machines in the selected subscription.
+ ### [**Linux**](#tab/linux) You'll deploy Defender for Endpoint to your Linux machines in one of two ways - depending on whether you've already deployed it to your Windows machines: - [Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows](#existing-users-with-defender-for-clouds-enhanced-security-features-enabled-and-microsoft-defender-for-endpoint-for-windows)-- [New users who have never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-whove-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows)
+- [New users who never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows)
### Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows
If you've already enabled the integration with **Defender for Endpoint for Windo
:::image type="content" source="./media/integration-defender-for-endpoint/integration-enabled.png" alt-text="The integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint is enabled"::: > [!NOTE]
- > If it isn't selected, use the instructions in [New users who've never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-whove-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows).
+ > If it isn't selected, use the instructions in [New users who've never enabled the integration with Microsoft Defender for Endpoint for Windows](#new-users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows).
-1. To add your Linux machines to your integration
+1. To add your Linux machines to your integration:
1. Select **Enable for Linux machines**. 1. Select **Save**.
- 1. In the confirmation prompt, verify the information and select **Enable** if you're happy to continue.
+ 1. In the confirmation prompt, verify the information and select **Enable** to continue.
:::image type="content" source="./media/integration-defender-for-endpoint/enable-for-linux-result.png" alt-text="Confirming the integration between Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint for Linux":::
If you've already enabled the integration with **Defender for Endpoint for Windo
Also, in the Azure portal you'll see a new Azure extension on your machines called `MDE.Linux`.
-### New users who've never enabled the integration with Microsoft Defender for Endpoint for Windows
+### New users who never enabled the integration with Microsoft Defender for Endpoint for Windows
If you've never enabled the integration for Windows, the **Allow Microsoft Defender for Endpoint to access my data** option will enable Defender for Cloud to deploy Defender for Endpoint to *both* your Windows and Linux machines.
Defender for Cloud automatically deploys the extension to machines running:
> If you delete the MDE.Windows/MDE.Linux extension, it will not remove Microsoft Defender for Endpoint. To 'offboard', see [Offboard Windows servers](/microsoft-365/security/defender-endpoint/configure-server-endpoints).
-### I've enabled the solution but the "MDE.Windows" / "MDE.Linux" extension isn't showing on my machine
+### I enabled the solution but the "MDE.Windows" / "MDE.Linux" extension isn't showing on my machine
-If you've enabled the integration, but still don't see the extension running on your machines, check the following:
+If you enabled the integration, but still don't see the extension running on your machines:
-1. If 12 hours hasn't passed since you enabled the solution, you'll need to wait until the end of this period to be sure there's an issue to investigate.
-1. After 12 hours have passed, if you still don't see the extension running on your machines, check that you've met [Prerequisites](#prerequisites) for the integration.
+1. If 12 hours haven't passed since you enabled the solution, wait until the end of this period to be sure there's an issue to investigate.
+1. After 12 hours have passed, if you still don't see the extension running on your machines, check that you've met the [Prerequisites](#prerequisites) for the integration.
1. Ensure you've enabled the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for the subscriptions related to the machines you're investigating. 1. If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
If you've enabled the integration, but still don't see the extension running on
Defender for Endpoint is included at no extra cost with **Microsoft Defender for Servers**. Alternatively, it can be purchased separately for 50 machines or more. ### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers?
-If you've already got a license for **Microsoft Defender for Endpoint for Servers** , you won't have to pay for that part of your [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
+If you already have a license for **Microsoft Defender for Endpoint for Servers**, you won't pay for that part of your [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) license. Learn more about [the Microsoft 365 license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace. The discount will be effective starting from the approval date, and won't take place retroactively.
-## Does Microsoft Defender for Servers support the new unified Microsoft Defender for Endpoint agent for Windows Server 2012 R2 and 2016?
-
-Defender for Servers Plan 1 deploys [the new Microsoft Defender for Endpoint solution stack](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292) for Windows Server 2012 R2 and 2016, which does not use or require installation of the Microsoft Monitoring Agent (MMA).
- ### How do I switch from a third-party EDR tool? Full instructions for switching from a non-Microsoft endpoint solution are available in the Microsoft Defender for Endpoint documentation: [Migration overview](/windows/security/threat-protection/microsoft-defender-atp/switch-to-microsoft-defender-migration).
+<!-- ### Which Microsoft Defender for Endpoint plan is supported in Defender for Servers?
+
+Defender for Servers Plan 1 provides the capabilities of [Microsoft Defender for Endpoint Plan 1](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1?view=o365-worldwide). Defender for Servers Plan 2 provides the capabilities of [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide). -->
## Next steps
defender-for-cloud Management Groups Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md
For visibility into the security posture of all subscriptions linked to an Azure
## Organize your subscriptions into management groups
-### Introduction to management groups
+### Overview of management groups
Use management groups to efficiently manage access, policies, and reporting on **groups of subscriptions**, as well as effectively manage the entire Azure estate by performing actions on the root management group. You can organize subscriptions into management groups and apply your governance policies to the management groups. All subscriptions within a management group automatically inherit the policies applied to the management group.
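
As a sketch of how this hierarchy can be scripted, assuming the Az.Resources module (the group name and subscription ID are placeholders, and some parameter names vary across module versions):

```powershell
# Create a management group; subscriptions placed in it inherit its policies
New-AzManagementGroup -GroupName "Contoso" -DisplayName "Contoso production"

# Move a subscription into the management group
New-AzManagementGroupSubscription -GroupName "Contoso" `
    -SubscriptionId "00000000-0000-0000-0000-000000000000"
```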
defender-for-cloud Quickstart Enable Database Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-database-protections.md
Title: Enable database protection for your subscription
-description: Learn how to enable Microsoft Defender for Cloud for all of your database types for your entire subscription.
-- Previously updated : 06/19/2022
+description: Learn how to enable Microsoft Defender for Cloud for all of your database types for your entire subscription.
+++ Last updated : 06/15/2022
-# Quickstart: Microsoft Defender for Cloud database protection
+# Enable Microsoft Defender for Cloud database plans
This article explains how to enable Microsoft Defender for Cloud's database (DB) protection for the most common database types that exist on your subscription.
You can enable database protection on your subscription, or exclude specific dat
1. Select **Continue**. 1. Select :::image type="icon" source="media/quickstart-enable-database-protections/save-icon.png" border="false":::.+ ## Next steps In this article, you learned how to enable Microsoft Defender for Cloud for all database types on your subscription. Next, read more about each of the resource types.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
These are the new alerts:
For more information, see: - [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)-- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
+- [Overview of Microsoft Defender for Storage](defender-for-storage-introduction.md)
- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage) ### Improvements to alerts for Microsoft Defender for Storage
When Defender for Endpoint detects a threat, it triggers an alert. The alert is
During the preview period, you'll deploy the [Defender for Endpoint for Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux) sensor to supported Linux machines in one of two ways depending on whether you've already deployed it to your Windows machines: - [Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows](integration-defender-for-endpoint.md#existing-users-with-defender-for-clouds-enhanced-security-features-enabled-and-microsoft-defender-for-endpoint-for-windows)-- [New users who have never enabled the integration with Microsoft Defender for Endpoint for Windows](integration-defender-for-endpoint.md?tabs=linux#new-users-whove-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows)
+- [New users who have never enabled the integration with Microsoft Defender for Endpoint for Windows](integration-defender-for-endpoint.md?tabs=linux#new-users-who-never-enabled-the-integration-with-microsoft-defender-for-endpoint-for-windows)
Learn more in [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in June include:
- [General availability (GA) of Defender for SQL on machines for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-on-machines-for-aws-and-gcp-environments) - [Alerts by resource group](#alerts-by-resource-group) - [General availability (GA) for Microsoft Defender for Azure Cosmos DB](#general-availability-ga-for-microsoft-defender-for-azure-cosmos-db)
+- [Auto-provisioning of Microsoft Defender for Endpoint unified solution](#auto-provisioning-of-microsoft-defender-for-endpoint-unified-solution)
+- [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)
### Drive implementation of security recommendations to enhance your security posture
Using the multicloud onboarding experience, you can enable and enforce databases
Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and your [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
-## Alerts by resource group
+### Alerts by resource group
The ability to filter, sort and group by resource group has been added to the Security alerts page.
Learn more about [Microsoft Defender for Azure Cosmos DB](concept-defender-for-c
With the addition of support for Azure Cosmos DB, Defender for Cloud now provides one of the most comprehensive workload protection offerings for cloud-based databases. Security teams and database owners can now have a centralized experience to manage the database security of their environments.
-Learn how to [enable database protection](quickstart-enable-database-protections.md) for your databases today.
+Learn how to [enable protections](enable-enhanced-security.md) for your databases.
+
+### Auto-provisioning of Microsoft Defender for Endpoint unified solution
+
+Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows Server 2012 R2 and 2016 machines used the legacy MDE solution, which depends on the Log Analytics agent.
+
+Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multicloud connectors. For Azure subscriptions with Servers Plan 2 that enabled MDE integration *after* 06-20-2022, the unified solution is enabled by default for all machines. Azure subscriptions with Defender for Servers Plan 2 that enabled MDE integration *before* 06-20-2022 can now enable unified solution installation for Windows Server 2012 R2 and 2016 machines through the dedicated button on the Integrations page:
+Learn more about [MDE integration with Defender for Servers](integration-defender-for-endpoint.md#users-with-defender-for-servers-enabled-and-microsoft-defender-for-endpoint-deployed).
+
+### Deprecating the "API App should only be accessible over HTTPS" policy
+
+The policy `API App should only be accessible over HTTPS` has been deprecated. This policy is replaced with the `Web Application should only be accessible over HTTPS` policy, which has been renamed to `App Service apps should only be accessible over HTTPS`.
+
+To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md).
## May 2022
When potentially malicious activities are detected, security alerts are generate
There's no impact on database performance when enabling the service, because Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data.
-Learn more at [Introduction to Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
+Learn more at [Overview of Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
We're also introducing a new enablement experience for database security. You can now enable Microsoft Defender for Cloud protection on your subscription to protect all database types, such as Azure Cosmos DB, Azure SQL Database, Azure SQL servers on machines, and open-source relational databases, through one enablement process. Specific resource types can be included or excluded by configuring your plan.
-Learn how to [enable your database security at the subscription level](quickstart-enable-defender-for-cosmos.md#enable-database-protection-at-the-subscription-level).
+Learn how to [enable your database security at the subscription level](quickstart-enable-database-protections.md#enable-database-protection-on-your-subscription).
### Threat protection for Google Kubernetes Engine (GKE) clusters
The two recommendations, which both offer automated remediation (the 'Fix' actio
|Recommendation |Description |Severity |
|--|--|--|
-|[Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
-|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
+|[Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Overview of Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
+|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Overview of Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
### Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Last updated 06/02/2022
# Security posture for Microsoft Defender for Cloud
-## Introduction to secure score
+## Overview of secure score
Microsoft Defender for Cloud has two main goals:
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-readiness-roadmap.md
Defender for Cloud provides unified security management and advanced threat prot
Use the following resources to get started with Defender for Cloud. Articles-- [Introduction to Defender for Cloud](defender-for-cloud-introduction.md)
+- [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md)
- [Defender for Cloud quickstart guide](get-started.md) Videos
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-incident.md
In this tutorial, you learned about Defender for Cloud features to be used when
- [Respond to Microsoft Defender for Key Vault alerts](defender-for-key-vault-usage.md) - [Security alerts - a reference guide](alerts-reference.md)-- [Introduction to Defender for Cloud](defender-for-cloud-introduction.md)
+- [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | June 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 | | [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
-| [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)|June 2022|
+| [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service) | July 2022 |
### GA support for Arc-enabled Kubernetes clusters
The following table lists the alerts that will be deprecated during June 2022.
These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_KubernetesAPI`, and `K8S.NODE_ContainerSSH`), which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
-### Deprecating the "API App should only be accessible over HTTPS" policy
+### Deprecate API App policies for App Service
-**Estimated date for change:** June 2022
+**Estimated date for change:** July 2022
-The policy `API App should only be accessible over HTTPS` is set to be deprecated. This policy will be replaced with `Web Application should only be accessible over HTTPS`, which will be renamed to `App Service apps should only be accessible over HTTPS`.
+We'll be deprecating the following policies, replacing them with corresponding policies that already exist and include API apps:
-To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md)
+| To be deprecated | Changing to |
+|--|--|
+|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest 'Python version'` |
+| `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` |
+| `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
+| `Remote debugging should be turned off for API Apps` | `App Service apps should have remote debugging turned off` |
+| `Ensure that 'PHP version' is the latest, if used as a part of the API app` | `App Service apps that use PHP should use the latest 'PHP version'`|
+| `FTPS only should be required in your API App` | `App Service apps should require FTPS only` |
| `Ensure that 'Java version' is the latest, if used as a part of the API app` | `App Service apps that use Java should use the latest 'Java version'` |
+| `Latest TLS version should be used in your API App` | `App Service apps should use the latest TLS version` |
## Next steps
defender-for-cloud Workload Protections Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workload-protections-dashboard.md
In this article, you learned about the workload protections dashboard.
> [!div class="nextstepaction"] > [Enable enhanced protections](enable-enhanced-security.md)
-For more on the advanced protection plans of Microsoft Defender for Cloud, see [Introduction to Microsoft Defender for Cloud](defender-for-cloud-introduction.md)
+For more on the advanced protection plans of Microsoft Defender for Cloud, see [Extend Defender for Cloud with Defender plans and external monitoring](defender-for-cloud-introduction.md#extend-defender-for-cloud-with-defender-plans-and-external-monitoring)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
The Defender for IoT architecture uses on-premises sensors and management server
For more information, see the [Microsoft Security Development Lifecycle practices](https://www.microsoft.com/en-us/securityengineering/sdl/), which describe Microsoft's SDL practices, including training, compliance, threat modeling, design requirements, tools such as Microsoft Component Governance, pen testing, and more. > [!IMPORTANT]
-> Manual changes to software packages may have detrimental effects on the sensor and on-premises management cosnole. Microsoft is unable to support deployments with manual changes made to packages.
+> Manual changes to software packages may have detrimental effects on the sensor and on-premises management console. Microsoft is unable to support deployments with manual changes made to packages.
> **Current versions of the sensor and on-premises management console software include**:

| Version | Date released | End support date |
|--|--|--|
+| 22.1.5 | 06/2022 | 03/2023 |
| 22.1.4 | 04/2022 | 12/2022 |
| 22.1.3 | 03/2022 | 11/2022 |
| 22.1.1 | 02/2022 | 10/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 |
| 10.5.2 | 10/2021 | 07/2022 |
+## June 2022
+
+**Sensor software version**: 22.1.5
+
+- Bug fixes related to OT monitoring software updates and sensor-cloud connections.
+ ## May 2022 We've recently optimized and enhanced our documentation as follows:
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-iot-hub-data.md
description: Learn how to ingest device telemetry messages from Azure IoT Hub to digital twins in an instance of Azure Digital Twins. Previously updated : 06/16/2022 Last updated : 06/21/2022
Before continuing with this example, you'll need to set up the following resourc
* An IoT hub. For instructions, see the [Create an IoT Hub section of this IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md). * An Azure Digital Twins instance that will receive your device telemetry. For instructions, see [Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md).
-This article also uses Visual Studio. You can download the latest version from [Visual Studio Downloads](https://visualstudio.microsoft.com/downloads/).
- ## Example telemetry scenario This how-to outlines how to send messages from IoT Hub to Azure Digital Twins, using a function in Azure. There are many possible configurations and matching strategies you can use for sending messages, but the example for this article contains the following parts:
When the twin is created successfully, the CLI output from the command should lo
In this section, you'll create an Azure function to access Azure Digital Twins and update twins based on IoT telemetry events that it receives. Follow the steps below to create and publish the function.
-1. First, create a new Azure Functions project in Visual Studio. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
+1. First, create a new Azure Functions project.
+
+ You can do this using **Visual Studio** (for instructions, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project)), **Visual Studio Code** (for instructions, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#create-an-azure-functions-project)), or the **Azure CLI** (for instructions, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#create-a-local-function-project)).
-2. Add the following packages to your project:
+2. Add the following packages to your project (you can use the Visual Studio NuGet package manager, or the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in a command-line tool).
* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core/)
* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/)
* [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/)
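For example, here's a minimal sketch of adding these packages with the .NET CLI, assuming you run the commands from the function project's directory:

```cmd/sh
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
dotnet add package Microsoft.Azure.WebJobs.Extensions.EventGrid
```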
-3. Rename the *Function1.cs* sample function that Visual Studio has generated to *IoTHubtoTwins.cs*. Replace the code in the file with the following code:
+3. Create a function within the project called *IoTHubtoTwins.cs*. Paste the following code into the function file:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/IoTHubToTwins.cs"::: Save your function code.
-4. Publish the project with the *IoTHubtoTwins.cs* function to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+4. Publish the project with the *IoTHubtoTwins.cs* function to a function app in Azure.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
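If you prefer publishing from the command line, here's a minimal sketch using Azure Functions Core Tools, assuming the tools are installed and the function app already exists in Azure (the app name is a placeholder):

```cmd/sh
func azure functionapp publish <your-function-app>
```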
-Once the process of publishing the function completes, you can use this CLI command to verify the publish was successful. There are placeholders for your resource group, and the name of your function app. The command will print information about the *IoTHubToTwins* function.
+Once the process of publishing the function completes, you can use this Azure CLI command to verify that the publish was successful. The command contains placeholders for your resource group and the name of your function app, and it will print information about the *IoTHubToTwins* function.
```azurecli-interactive
az functionapp function show --resource-group <your-resource-group> --name <your-function-app> --function-name IoTHubToTwins
```
You can test your new ingress function by using the device simulator from [Conne
1. Navigate to the [Azure Digital Twins end-to-end sample project repository](/samples/azure-samples/digital-twins-samples/digital-twins-samples). Get the sample project on your machine by selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the **Code** button followed by **Download ZIP**.
- This will download a .zip folder to your machine as *digital-twins-samples-master.zip*. Unzip the folder and extract the files. You'll be using the *DeviceSimulator* project folder.
+ This will download a .zip folder to your machine as *digital-twins-samples-main.zip*. Unzip the folder and extract the files. You'll be using the *DeviceSimulator* project folder.
1. [Register the simulated device with IoT Hub](tutorial-end-to-end.md#register-the-simulated-device-with-iot-hub) 2. [Configure and run the simulation](tutorial-end-to-end.md#configure-and-run-the-simulation)
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-opcua-data.md
description: Steps to get your Azure OPC UA data into Azure Digital Twins Previously updated : 02/22/2022 Last updated : 06/21/2022 # Optional fields. Don't forget to remove # if you need a field.
Before completing this article, complete the following prerequisites:
:::image type="content" source="media/how-to-ingest-opcua-data/download-repo.png" alt-text="Screenshot of the digital-twins-samples repo on GitHub, highlighting the steps to clone or download the code." lightbox="media/how-to-ingest-opcua-data/download-repo.png"::: If you download the repository as a .zip, be sure to unzip it and extract the files.
-* Download Visual Studio: This article uses Visual Studio to publish an Azure function. You can download the latest version of Visual Studio from [Visual Studio Downloads](https://visualstudio.microsoft.com/downloads/).
## Architecture
Next, create a [shared access signature for the container](../storage/common/sto
In this section, you'll publish an Azure function that you downloaded in [Prerequisites](#prerequisites) that will process the OPC UA data and update Azure Digital Twins.
-1. Navigate to the downloaded [OPC UA to Azure Digital Twins](https://github.com/Azure-Samples/opcua-to-azure-digital-twins) project on your local machine, and into the *Azure Functions/OPCUAFunctions* folder. Open the *OPCUAFunctions.sln* solution in Visual Studio.
-2. Publish the project to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+1. Navigate to the downloaded [OPC UA to Azure Digital Twins](https://github.com/Azure-Samples/opcua-to-azure-digital-twins) project on your local machine.
+2. Publish the project to a function app in Azure, using your preferred method.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
### Configure the function app
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-azure-signalr.md
description: Learn how to stream Azure Digital Twins telemetry to clients using Azure SignalR Previously updated : 02/22/2022 Last updated : 06/21/2022
First, download the required sample apps. You'll need both of the following samp
:::image type="content" source="media/includes/download-repo-zip.png" alt-text="Screenshot of the digital-twins-samples repo on GitHub and the steps for downloading it as a zip." lightbox="media/includes/download-repo-zip.png":::
- This button will download a copy of the sample repo in your machine, as *digital-twins-samples-master.zip*. Unzip the folder.
+ This button will download a copy of the sample repo in your machine, as *digital-twins-samples-main.zip*. Unzip the folder.
* [SignalR integration web app sample](/samples/azure-samples/digitaltwins-signalr-webapp-sample/digital-twins-samples/): This sample React web app will consume Azure Digital Twins telemetry data from an Azure SignalR Service. - Navigate to the sample link and use the same download process to download a copy of the sample to your machine, as *digitaltwins-signalr-webapp-sample-main.zip*. Unzip the folder.
In this section, you'll set up two Azure functions:
* *negotiate* - An HTTP trigger function. It uses the *SignalRConnectionInfo* input binding to generate and return valid connection information.
* *broadcast* - An [Event Grid](../event-grid/overview.md) trigger function. It receives Azure Digital Twins telemetry data through the event grid, and uses the output binding of the SignalR instance you created in the previous step to broadcast the message to all connected client applications.
-Start Visual Studio (or another code editor of your choice), and open the code solution in the *digital-twins-samples-master > ADTSampleApp* folder. Then do the following steps to create the functions:
+Start Visual Studio or another code editor of your choice, and open the code solution in the *digital-twins-samples-main\ADTSampleApp* folder. Then do the following steps to create the functions:
-1. In the *SampleFunctionsApp* project, create a new C# class called *SignalRFunctions.cs*. For instructions on how to create a new class, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
+1. In the *SampleFunctionsApp* project, create a new C# class called *SignalRFunctions.cs*.
1. Replace the contents of the class file with the following code: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/signalRFunction.cs":::
-1. In Visual Studio's **Package Manager Console** window, or any command window on your machine, navigate to the folder *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp*, and run the following command to install the `SignalRService` NuGet package to the project:
+1. In Visual Studio's **Package Manager Console** window, or any command window on your machine, navigate to the folder *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp*, and run the following command to install the `SignalRService` NuGet package to the project:
```cmd
dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService --version 1.2.0
```

Running this command should resolve any dependency issues in the class.
-1. Publish your function to Azure. You can publish it to the same app service/function app that you used in the end-to-end tutorial [prerequisite](#prerequisites), or create a new one, but you may want to use the same one to minimize duplication. For instructions on how to publish a function using Visual Studio, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+1. Publish the function to Azure, using your preferred method.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
### Configure the function
Next, set permissions in your function app in the Azure portal:
During the end-to-end tutorial prerequisite, you [configured the device simulator](tutorial-end-to-end.md#configure-and-run-the-simulation) to send data through an IoT Hub and to your Azure Digital Twins instance.
-Now, all you have to do is start the simulator project, located in *digital-twins-samples-master > DeviceSimulator > DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with this button in the toolbar:
+Now, start the simulator project located in *digital-twins-samples-main\DeviceSimulator\DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with this button in the toolbar:
:::image type="content" source="media/how-to-integrate-azure-signalr/start-button-simulator.png" alt-text="Screenshot of the Visual Studio start button with the DeviceSimulator project open.":::
Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resourc
az group delete --name <your-resource-group>
```
-Finally, delete the project sample folders that you downloaded to your local machine (*digital-twins-samples-master.zip*, *digitaltwins-signalr-webapp-sample-main.zip*, and their unzipped counterparts).
+Finally, delete the project sample folders that you downloaded to your local machine (*digital-twins-samples-main.zip*, *digitaltwins-signalr-webapp-sample-main.zip*, and their unzipped counterparts).
## Next steps
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md
description: Learn how to set up event routes from Azure Digital Twins to Azure Time Series Insights. Previously updated : 02/23/2022 Last updated : 06/21/2022
Also, take note of the following values to use them later to create a Time Serie
In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects that only contain updated and added values from your twins.
-1. First, create a new function app project in Visual Studio. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
+1. First, create a new function app project.
+
+ You can do this using **Visual Studio** (for instructions, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project)), **Visual Studio Code** (for instructions, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#create-an-azure-functions-project)), or the **Azure CLI** (for instructions, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#create-a-local-function-project)).
2. Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to update device telemetry events to the Time Series Insights. The function type will be **Event Hub trigger**. :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger.":::
-3. Add the following packages to your project:
+3. Add the following packages to your project (you can use the Visual Studio NuGet package manager, or the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in a command-line tool).
* [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
* [Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs/)
* [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/)
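For example, a minimal sketch of adding these packages with the .NET CLI, assuming you run it from the project directory:

```cmd/sh
dotnet add package Microsoft.Azure.WebJobs
dotnet add package Microsoft.Azure.WebJobs.Extensions.EventHubs
dotnet add package Microsoft.NET.Sdk.Functions
```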
In this section, you'll create an Azure function that will convert twin update e
Save your function code.
-5. Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+5. Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
+ Save the function app name to use later to configure app settings for the two event hubs.
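When you configure those app settings later, one option is the Azure CLI. This is a hedged sketch only; the setting name and value below are placeholders, not the article's actual configuration keys:

```azurecli-interactive
az functionapp config appsettings set --resource-group <your-resource-group> --name <your-function-app> --settings "<SETTING_NAME>=<value>"
```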
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-graph.md
The following runnable code snippet uses the relationship operations from this a
### Set up sample project files
-The snippet uses two sample model definitions, [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/master/AdtSampleApp/SampleClientApp/Models/Room.json) and [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/master/AdtSampleApp/SampleClientApp/Models/Floor.json). To **download the model files** so you can use them in your code, use these links to go directly to the files in GitHub. Then, right-click anywhere on the screen, select **Save as** in your browser's right-click menu, and use the Save As window to save the files as **Room.json** and **Floor.json**.
+The snippet uses two sample model definitions, [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/main/AdtSampleApp/SampleClientApp/Models/Room.json) and [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/main/AdtSampleApp/SampleClientApp/Models/Floor.json). To **download the model files** so you can use them in your code, use these links to go directly to the files in GitHub. Then, right-click anywhere on the screen, select **Save as** in your browser's right-click menu, and use the Save As window to save the files as **Room.json** and **Floor.json**.
Next, create a **new console app project** in Visual Studio or your editor of choice.
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
A model's decommissioning status is included in the `ModelData` records returned
You can delete all models in your instance at once, or you can do it on an individual basis.
-For an example of how to delete all models at the same time, see the [End-to-end samples for Azure Digital Twins](https://github.com/Azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/CommandLoop.cs) repository in GitHub. The *CommandLoop.cs* file contains a `CommandDeleteAllModels` function with code to delete all of the models in the instance.
+For an example of how to delete all models at the same time, see the [End-to-end samples for Azure Digital Twins](https://github.com/Azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/CommandLoop.cs) repository in GitHub. The *CommandLoop.cs* file contains a `CommandDeleteAllModels` function with code to delete all of the models in the instance.
To delete an individual model, follow the instructions and considerations from the rest of this section.
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
You can use the runnable code sample below to create a twin, update its details,
### Set up sample project files
-The snippet uses a sample model definition, [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/master/AdtSampleApp/SampleClientApp/Models/Room.json). To **download the model file** so you can use it in your code, use this link to go directly to the file in GitHub. Then, right-click anywhere on the screen, select **Save as** in your browser's right-click menu, and use the Save As window to save the file as **Room.json**.
+The snippet uses a sample model definition, [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/main/AdtSampleApp/SampleClientApp/Models/Room.json). To **download the model file** so you can use it in your code, use this link to go directly to the file in GitHub. Then, right-click anywhere on the screen, select **Save as** in your browser's right-click menu, and use the Save As window to save the file as **Room.json**.
Next, create a **new console app project** in Visual Studio or your editor of choice.
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
description: Learn how to set up an automated process to provision and retire IoT devices in Azure Digital Twins using Device Provisioning Service (DPS). Previously updated : 02/23/2022 Last updated : 06/21/2022
az iot dps create --name <Device-Provisioning-Service-name> --resource-group <re
Inside your function app project that you created in the [Prerequisites section](#prerequisites), you'll create a new function to use with the Device Provisioning Service. This function will be used by the Device Provisioning Service in a [Custom Allocation Policy](../iot-dps/how-to-use-custom-allocation-policies.md) to provision a new device.
-Start by opening the function app project in Visual Studio on your machine and follow the steps below.
+Navigate to the function app project on your machine and follow the steps below.
-1. First, create a new function of type **HTTP-trigger** in the function app project in Visual Studio. For instructions on how to create this function, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
+1. First, create a new function of type **HTTP-trigger** in the function app project.
2. Add a new NuGet package to the project: [Microsoft.Azure.Devices.Provisioning.Service](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/). You might need to add more packages to your project as well, if the packages used in the code aren't part of the project already.
-3. In the newly created function code file, paste in the following code, rename the function to *DpsAdtAllocationFunc.cs*, and save the file.
+3. In the newly created function code file, paste in the following code, name the function *DpsAdtAllocationFunc.cs*, and save the file.
:::code language="csharp" source="~/digital-twins-docs-samples-dps/functions/DpsAdtAllocationFunc.cs":::
-4. Publish the project with the *DpsAdtAllocationFunc.cs* function to a function app in Azure. For instructions on how to publish the project, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+4. Publish the project with the *DpsAdtAllocationFunc.cs* function to a function app in Azure.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
> [!IMPORTANT] > When creating the function app for the first time in the [Prerequisites section](#prerequisites), you may have already assigned an access role for the function and configured the application settings for it to access your Azure Digital Twins instance. These need to be done once for the entire function app, so verify they've been completed in your app before continuing. You can find instructions in the [Configure published app](how-to-authenticate-client.md#configure-published-app) section of the *Write app authentication code* article.
Inside your function app project that you created in the [Prerequisites section]
For more about lifecycle events, see [IoT Hub Non-telemetry events](../iot-hub/iot-hub-devguide-messages-d2c.md#non-telemetry-events). For more information about using Event Hubs with Azure functions, see [Azure Event Hubs trigger for Azure Functions](../azure-functions/functions-bindings-event-hubs-trigger.md).
-Start by opening the function app project in Visual Studio on your machine and follow the steps below.
+Navigate to the function app project on your machine and follow the steps below.
-1. First, create a new function of type **Event Hub Trigger** in the function app project in Visual Studio. For instructions on how to create this function, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
+1. First, create a new function of type **Event Hub Trigger** in the function app project.
2. Add a new NuGet package to the project: [Microsoft.Azure.Devices.Provisioning.Service](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/). You might need to add more packages to your project as well, if the packages used in the code aren't part of the project already.
-3. In the newly created function code file, paste in the following code, rename the function to *DeleteDeviceInTwinFunc.cs*, and save the file.
+3. In the newly created function code file, paste in the following code, name the function *DeleteDeviceInTwinFunc.cs*, and save the file.
:::code language="csharp" source="~/digital-twins-docs-samples-dps/functions/DeleteDeviceInTwinFunc.cs":::
-4. Publish the project with the *DeleteDeviceInTwinFunc.cs* function to a function app in Azure. For instructions on how to publish the project, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+4. Publish the project with the *DeleteDeviceInTwinFunc.cs* function to a function app in Azure.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
> [!IMPORTANT] > When creating the function app for the first time in the [Prerequisites section](#prerequisites), you may have already assigned an access role for the function and configured the application settings for it to access your Azure Digital Twins instance. These need to be done once for the entire function app, so verify they've been completed in your app before continuing. You can find instructions in the [Configure published app](how-to-authenticate-client.md#configure-published-app) section of the *Write app authentication code* article.
Follow these steps to create an event hub endpoint:
2. Select the **Custom endpoints** tab. 3. Select **+ Add** and choose **Event hubs** to add an Event Hubs type endpoint.
- :::image type="content" source="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png" alt-text="Screenshot of the Visual Studio window showing how to add an event hub custom endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png" alt-text="Screenshot of the Azure portal showing how to add an Event Hubs custom endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/event-hub-custom-endpoint.png":::
4. In the window **Add an event hub endpoint** that opens, choose the following values: * **Endpoint name**: Choose an endpoint name.
Follow these steps to create an event hub endpoint:
* **Event hub instance**: Choose the event hub name that you created in the previous step. 5. Select **Create**. Keep this window open to add a route in the next step.
- :::image type="content" source="media/how-to-provision-using-device-provisioning-service/add-event-hub-endpoint.png" alt-text="Screenshot of the Visual Studio window showing how to add an event hub endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/add-event-hub-endpoint.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/add-event-hub-endpoint.png" alt-text="Screenshot of the Azure portal showing how to add an event hub endpoint." lightbox="media/how-to-provision-using-device-provisioning-service/add-event-hub-endpoint.png":::
Next, you'll add a route that connects to the endpoint you created in the above step, with a routing query that sends the delete events. Follow these steps to create a route: 1. Navigate to the **Routes** tab and select **Add** to add a route.
- :::image type="content" source="media/how-to-provision-using-device-provisioning-service/add-message-route.png" alt-text="Screenshot of the Visual Studio window showing how to add a route to send events." lightbox="media/how-to-provision-using-device-provisioning-service/add-message-route.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/add-message-route.png" alt-text="Screenshot of the Azure portal showing how to add a route to send events." lightbox="media/how-to-provision-using-device-provisioning-service/add-message-route.png":::
2. In the **Add a route** page that opens, choose the following values:
Next, you'll add a route that connects to the endpoint you created in the above
3. Select **Save**.
- :::image type="content" source="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png" alt-text="Screenshot of the Azure portal window showing how to add a route to send lifecycle events." lightbox="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png":::
+ :::image type="content" source="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png" alt-text="Screenshot of the Azure portal showing how to add a route to send lifecycle events." lightbox="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png":::
Once you've gone through this flow, everything is set to retire devices end-to-end.
digital-twins How To Send Twin To Twin Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-send-twin-to-twin-events.md
description: Learn how to create a function in Azure for propagating events through the twin graph. Previously updated : 02/24/2022 Last updated : 06/21/2022
To set up this twin-to-twin event handling, you'll create an [Azure function](..
## Prerequisites
-This article uses Visual Studio. You can download the latest version from [Visual Studio Downloads](https://visualstudio.microsoft.com/downloads/).
- To set up twin-to-twin handling, you'll need an Azure Digital Twins instance to work with. For instructions on how to create an instance, see [Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md). The instance should contain at least two twins that you want to send data between. Optionally, you may want to set up [automatic telemetry ingestion through IoT Hub](how-to-ingest-iot-hub-data.md) for your twins as well. This process isn't required to send data from twin to twin, but it's an important piece of a complete solution where the twin graph is driven by live telemetry.
To set up twin-to-twin event handling, start by creating an *endpoint* in Azure
Next, create an Azure function that will listen on the endpoint and receive twin events that are sent there via the route. The logic of the function should use the information in the events to determine what other twins need to be updated and then perform the updates.
-1. First, create an Azure Functions project in Visual Studio on your machine. For instructions on how to do so, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
+1. First, create a new Azure Functions project.
+
+ You can do this using **Visual Studio** (for instructions, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project)), **Visual Studio Code** (for instructions, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#create-an-azure-functions-project)), or the **Azure CLI** (for instructions, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#create-a-local-function-project)).
-2. Add the following packages to your project (you can use the Visual Studio NuGet package manager or `dotnet` commands in a command-line tool).
+2. Add the following packages to your project (you can use the Visual Studio NuGet package manager, or the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in a command-line tool).
* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core/)
* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/)
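As a quick sketch, you can add both packages with the .NET CLI, assuming you run it from the project directory:

```cmd/sh
dotnet add package Azure.DigitalTwins.Core
dotnet add package Azure.Identity
```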
Next, create an Azure function that will listen on the endpoint and receive twin
3. Fill in the logic of your function. You can view sample function code for several scenarios in the [azure-digital-twins-getting-started](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/azure-functions) repository to help you get started.
-5. Publish the function app to Azure. For instructions on how to publish a function app, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+5. Publish the function to Azure, using your preferred method.
+
+ For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
-Once the process of publishing the function completes, you can use this CLI command to verify the publish was successful. There are placeholders for your resource group, the name of your function app, and the name of your specific function. The command will print information about your function.
+Once the process of publishing the function completes, you can use this Azure CLI command to verify that the publish was successful. The command contains placeholders for your resource group, the name of your function app, and the name of your specific function, and it will print information about your function.
```azurecli-interactive
az functionapp function show --resource-group <your-resource-group> --name <your-function-app> --function-name <your-function>
```
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-code.md
What you need to begin:
Once you're ready to go with your Azure Digital Twins instance, start setting up the client app project.
-Open a command prompt or other console window on your machine, and create an empty project directory where you want to store your work during this tutorial. Name the directory whatever you want (for example, *DigitalTwinsCodeTutorial*).
+Open a console window on your machine, and create an empty project directory where you want to store your work during this tutorial. Name the directory whatever you want (for example, *DigitalTwinsCodeTutorial*).
Navigate into the new directory.
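For example, a minimal sketch using the example directory name from above:

```cmd/sh
mkdir DigitalTwinsCodeTutorial
cd DigitalTwinsCodeTutorial
```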
digital-twins Tutorial Command Line App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-app.md
description: Tutorial to build an Azure Digital Twins scenario using a sample command-line application Previously updated : 02/23/2022 Last updated : 06/21/2022
In this tutorial, you'll build a graph in Azure Digital Twins using models, twins, and relationships. The tool for this tutorial is the sample command-line client application for interacting with an Azure Digital Twins instance. The client app is similar to the one written in [Code a client app](tutorial-code.md).
-You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [code of the sample](https://github.com/Azure-Samples/digital-twins-samples/tree/master/) to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you want.
+You can use this sample to perform essential Azure Digital Twins actions such as uploading models, creating and modifying twins, and creating relationships. You can also look at the [code of the sample](https://github.com/Azure-Samples/digital-twins-samples/tree/main/) to learn about the Azure Digital Twins APIs, and practice implementing your own commands by modifying the sample project however you want.
In this tutorial, you will... > [!div class="checklist"]
In this tutorial, you will...
### Run the sample project
-Now that the app and authentication are set up, run the project with this button in the toolbar:
+Now that the app and authentication are set up, open a local **console window** that you'll use to run the project. Navigate in the console to the *digital-twins-samples-main\AdtSampleApp\SampleClientApp* folder, and run the project with this dotnet command:
+```cmd/sh
+dotnet run
+```
-A console window will open, carry out authentication, and wait for a command.
+The project will start running, carry out authentication, and wait for a command.
Here's a screenshot of what the project console looks like:
Here's a screenshot of what the project console looks like:
> [!TIP] > For a list of all the possible commands you can use with this project, enter `help` in the project console and press return.
-Once you've confirmed the app is running successfully, close the console window to stop running the app for now. You'll run it again later in the article.
+Once you've confirmed the app is running successfully, you can stop running the project. You'll run it again later in the tutorial.
## Model a physical environment with DTDL
Now that the Azure Digital Twins instance and sample app are set up, you can beg
The first step in creating an Azure Digital Twins solution is defining twin [models](concepts-models.md) for your environment.
-Models are similar to classes in object-oriented programming languages; they provide user-defined templates for [digital twins](concepts-twins-graph.md) to follow and instantiate later. They're written in a JSON-like language called *Digital Twins Definition Language (DTDL)*, and can define a twin's properties, telemetry, relationships, and components.
+Models are similar to classes in object-oriented programming languages; they're user-defined templates that you can instantiate to create [digital twins](concepts-twins-graph.md). Models are written in a JSON-like language called *Digital Twins Definition Language (DTDL)*, and they define a type of twin in terms of its properties, telemetry, relationships, and components.
> [!NOTE] > DTDL also allows for the definition of commands on digital twins. However, commands are not currently supported in the Azure Digital Twins service.
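For orientation, here's a minimal sketch of a DTDL interface. The interface ID and contents below are hypothetical illustrations, not the tutorial's Room model:

```json
{
  "@id": "dtmi:example:SimpleRoom;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "SimpleRoom",
  "contents": [
    {
      "@type": "Property",
      "name": "Temperature",
      "schema": "double",
      "writable": true
    },
    {
      "@type": "Relationship",
      "name": "contains"
    }
  ]
}
```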
-In your Visual Studio window where the *AdtE2ESample.sln* project is open, use the **Solution Explorer** pane to navigate to the *AdtSampleApp\SampleClientApp\Models folder*. This folder contains sample models.
+In the sample project folder that you downloaded earlier, navigate into the *digital-twins-samples-main\AdtSampleApp\SampleClientApp\Models* folder. This folder contains sample models.
-Select *Room.json* to open it in the editing window, and change it in the following ways:
+Open *Room.json* for editing, and make the following changes to the code:
[!INCLUDE [digital-twins-tutorial-model-create.md](../../includes/digital-twins-tutorial-model-create.md)]
Select *Room.json* to open it in the editing window, and change it in the follow
After designing models, you need to upload them to your Azure Digital Twins instance. Doing so configures your Azure Digital Twins service instance with your own custom domain vocabulary. Once you've uploaded the models, you can create twin instances that use them.
-1. After editing the Room.json file in the previous section, start running the console app again.
+1. Return to your console window that's open to the *digital-twins-samples-main\AdtSampleApp\SampleClientApp* folder, and run the console app again with `dotnet run`.
1. In the project console window, run the following command to upload your updated Room model along with a Floor model that you'll also use in the next section to create different types of twins.
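    The upload command might look like the following sketch, assuming the sample app's `CreateModels` syntax (model names match the file names in the *Models* folder):

    ```cmd/sh
    CreateModels Room Floor
    ```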
After designing models, you need to upload them to your Azure Digital Twins inst
:::image type="content" source="media/tutorial-command-line/app/output-get-models.png" alt-text="Screenshot of the result from GetModels, showing the updated Room model." lightbox="media/tutorial-command-line/app/output-get-models.png":::
+Keep the console app running for the next steps.
+ ### Errors The sample application also handles errors from the service.
-Rerun the `CreateModels` command to try re-uploading one of the same models you uploaded, for a second time:
+To test this, rerun the `CreateModels` command to try re-uploading the Room model that you've already uploaded:
```cmd/sh CreateModels Room
You can also modify the properties of a twin you've created.
Next, you can create some relationships between these twins, to connect them into a [twin graph](concepts-twins-graph.md). Twin graphs are used to represent an entire environment.
-The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for Floor](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called `contains`, which makes it possible to create a `contains`-type relationship from each Floor twin to the corresponding room that it contains.
+The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for Floor](https://github.com/azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called `contains`, which makes it possible to create a `contains`-type relationship from each Floor twin to the corresponding room that it contains.
To add a relationship, use the `CreateRelationship` command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID.
-1. Run the following code to add a `contains` relationship from each of the Floor twins you created earlier to a corresponding Room twin. The relationships are named relationship0 and relationship1.
+1. Run the following commands to add a `contains` relationship from each of the Floor twins you created earlier to a corresponding Room twin. The relationships are named relationship0 and relationship1.
```cmd/sh CreateRelationship floor0 contains room0 relationship0
To add a relationship, use the `CreateRelationship` command. Specify the twin th
``` >[!TIP]
- >The `contains` relationship in the [Floor model](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) was also defined with two string properties, `ownershipUser` and `ownershipDepartment`, so you can also provide arguments with the initial values for these when you create the relationships.
+ >The `contains` relationship in the [Floor model](https://github.com/azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/Models/Floor.json) was also defined with two string properties, `ownershipUser` and `ownershipDepartment`, so you can also provide arguments with the initial values for these when you create the relationships.
> Here's an alternate version of the command above to create relationship0 that also specifies initial values for these properties: > ```cmd/sh > CreateRelationship floor0 contains room0 relationship0 ownershipUser string MyUser ownershipDepartment string myDepartment
Run the following commands in the running project console to answer some questio
:::image type="content" source="media/tutorial-command-line/app/output-query-compound.png" alt-text="Screenshot of the result from the compound query, showing no results." lightbox="media/tutorial-command-line/app/output-query-compound.png":::
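The query commands in this section follow the pattern `Query <query-text>`. As a sketch, a model-filter query might look like the following (the model ID here is an assumption based on the updated Room model):

```cmd/sh
Query SELECT * FROM DIGITALTWINS T WHERE IS_OF_MODEL(T, 'dtmi:example:Room;2')
```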
+Now that you've run several queries on the scenario you set up, the tutorial is complete. Stop running the project and close the console window.
+ ## Clean up resources After completing this tutorial, you can choose which resources you want to remove, depending on what you want to do next.
After completing this tutorial, you can choose which resources you want to remov
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
-You may also want to delete the project folder from your local machine.
+You may also want to delete the downloaded project folder from your local machine.
## Next steps
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-cli.md
If you don't have an Azure subscription, create a [free account](https://azure.m
### Download the sample models The tutorial uses two pre-written models that are part of the C# [end-to-end sample project](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) for Azure Digital Twins. The model files are located here:
-* [Room.json](https://github.com/Azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Room.json)
-* [Floor.json](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json)
+* [Room.json](https://github.com/Azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/Models/Room.json)
+* [Floor.json](https://github.com/azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/Models/Floor.json)
To get the files on your machine, use the navigation links above and copy the file bodies into local files on your machine with the same names (*Room.json* and *Floor.json*).
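If you prefer the command line, here's a sketch that fetches the same files with curl, using the raw GitHub URLs derived from the links above:

```cmd/sh
curl -o Room.json https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/main/AdtSampleApp/SampleClientApp/Models/Room.json
curl -o Floor.json https://raw.githubusercontent.com/Azure-Samples/digital-twins-samples/main/AdtSampleApp/SampleClientApp/Models/Floor.json
```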
You can also modify the properties of a twin you've created.
Next, you can create some relationships between these twins, to connect them into a [twin graph](concepts-twins-graph.md). Twin graphs are used to represent an entire environment.
-The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for Floor](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called `contains`. Since the model definition specifies this relationship, it's possible to create a `contains`-type relationship from each Floor twin to the corresponding room that it contains.
+The types of relationships that you can create from one twin to another are defined within the [models](#model-a-physical-environment-with-dtdl) that you uploaded earlier. The [model definition for Floor](https://github.com/azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/Models/Floor.json) specifies that floors can have a type of relationship called `contains`. Since the model definition specifies this relationship, it's possible to create a `contains`-type relationship from each Floor twin to the corresponding room that it contains.
To add a relationship, use the [az dt twin relationship create](/cli/azure/dt/twin/relationship#az-dt-twin-relationship-create) command. Specify the twin that the relationship is coming from, the type of relationship, and the twin that the relationship is connecting to. Lastly, give the relationship a unique ID. If a relationship was defined to have properties, you can initialize the relationship properties in this command as well.
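As a sketch, a command following that pattern might look like this, using the floor0 and room0 twins from this tutorial and a placeholder instance name (the `az dt` command group requires the azure-iot CLI extension):

```azurecli-interactive
az dt twin relationship create --dt-name <instance-name> --relationship-id relationship0 --relationship contains --twin-id floor0 --target room0
```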
To add a relationship, use the [az dt twin relationship create](/cli/azure/dt/tw
``` >[!TIP]
- >The `contains` relationship in the [Floor model](https://github.com/azure-Samples/digital-twins-samples/blob/master/AdtSampleApp/SampleClientApp/Models/Floor.json) was also defined with two properties, `ownershipUser` and `ownershipDepartment`, so you can also provide arguments with the initial values for these when you create the relationships.
+ >The `contains` relationship in the [Floor model](https://github.com/azure-Samples/digital-twins-samples/blob/main/AdtSampleApp/SampleClientApp/Models/Floor.json) was also defined with two properties, `ownershipUser` and `ownershipDepartment`, so you can also provide arguments with the initial values for these when you create the relationships.
> To create a relationship with these properties initialized, add the `--properties` option to either of the above commands, like this: > ```azurecli-interactive > ... --properties '{"ownershipUser":"MyUser", "ownershipDepartment":"MyDepartment"}'
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
description: Follow this tutorial to learn how to build out an end-to-end Azure Digital Twins solution that's driven by device data. Previously updated : 06/16/2022 Last updated : 06/21/2022
First, you'll use the AdtSampleApp solution from the sample project to build the
:::image type="content" source="media/tutorial-end-to-end/building-scenario-a.png" alt-text="Diagram of an excerpt from the full building scenario diagram highlighting the Azure Digital Twins instance section.":::
-In your Visual Studio window where the *AdtE2ESample.sln* solution is open, run the SampleClientApp project with this button in the toolbar:
+Open a local **console window** and navigate into the *digital-twins-samples-main\AdtSampleApp\SampleClientApp* folder. Run the *SampleClientApp* project with this dotnet command:
+```cmd/sh
+dotnet run
+```
-A console window will open, carry out authentication, and wait for a command. In this console, run the next command to instantiate the sample Azure Digital Twins solution.
+The project will start running, carry out authentication, and wait for a command. In this console, run the next command to instantiate the sample Azure Digital Twins solution.
> [!IMPORTANT] > If you already have digital twins and relationships in your Azure Digital Twins instance, running this command will delete them and replace them with the twins and relationships for the sample scenario.
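The instantiation command referenced in this note is a single line, assuming the sample app's `SetupBuildingScenario` command name:

```cmd/sh
SetupBuildingScenario
```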
You can verify the twins that were created by running the following command, whi
Query ```
-You can now stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial.
+You can now stop running the project. Keep the console window open at this location, though, as you'll use this app again later in the tutorial.
## Set up the sample function app
The next step is setting up an [Azure Functions app](../azure-functions/function
* *ProcessHubToDTEvents*: processes incoming IoT Hub data and updates Azure Digital Twins accordingly * *ProcessDTRoutedData*: processes data from digital twins, and updates the parent twins in Azure Digital Twins accordingly
-In this section, you'll publish the pre-written function app, and ensure the function app can access Azure Digital Twins by assigning it an Azure Active Directory (Azure AD) identity. Completing these steps will allow the rest of the tutorial to use the functions inside the function app.
-
-Back in your Visual Studio window where the *AdtE2ESample.sln* solution is open, the function app is located in the SampleFunctionsApp project. You can view it in the **Solution Explorer** pane.
-
-### Update dependencies
-
-Before publishing the app, it's a good idea to make sure your dependencies are up to date, making sure you have the latest version of all the included packages.
+In this section, you'll publish the pre-written function app, and ensure the function app can access Azure Digital Twins by assigning it an Azure Active Directory (Azure AD) identity.
-In the **Solution Explorer** pane, expand **SampleFunctionsApp > Dependencies**. Right-select **Packages** and choose **Manage NuGet Packages...**.
--
-Doing so will open the NuGet Package Manager. Select the **Updates** tab and if there are any packages to be updated, check the box to **Select all packages**. Then select **Update**.
-
+The function app is part of the sample project you downloaded, located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp* folder.
### Publish the app
-To publish the function app to Azure, you'll first need to create a storage account, then create the function app in Azure, and finally publish the functions to the Azure function app. This section completes these actions using the Azure CLI.
+To publish the function app to Azure, you'll need to create a storage account, then create the function app in Azure, and finally publish the functions to the Azure function app. This section completes these actions using the Azure CLI.
1. Create an Azure storage account by running the following command:
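    A sketch of the command, with placeholder values to substitute for your own (the SKU shown is an assumption):

    ```azurecli-interactive
    az storage account create --name <unique-storage-account-name> --resource-group <your-resource-group> --location <region> --sku Standard_LRS
    ```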
To publish the function app to Azure, you'll first need to create a storage acco
1. Next, you'll zip up the functions and publish them to your new Azure function app.
- 1. Open a terminal like PowerShell on your local machine, and navigate to the [Digital Twins samples repo](https://github.com/azure-samples/digital-twins-samples/tree/master/) you downloaded earlier in the tutorial. Inside the downloaded repo folder, navigate to *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp*.
+ 1. Open a console window on your machine, and navigate into the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp* folder inside your downloaded sample project.
- 1. In your terminal, run the following command to publish the project:
+ 1. In the console, run the following command to publish the project locally:
- ```powershell
+ ```cmd/sh
dotnet publish -c Release ```
- This command publishes the project to the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
+ This command publishes the project to the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
- 1. Create a zip of the published files that are located in the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
+ 1. Create a zip of the published files that are located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
>[!TIP] >If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
To publish the function app to Azure, you'll first need to create a storage acco
>```powershell >Compress-Archive -Path <full-path-to-publish-directory>\* -DestinationPath .\publish.zip >```
- > The cmdlet will create the *publish.zip* file in the directory location of your terminal.
+ > The cmdlet will create the *publish.zip* file in the directory location of your console.
Your *publish.zip* file should contain folders for *bin*, *ProcessDTRoutedData*, and *ProcessHubToDTEvents*, and there should also be a *host.json* file. :::image type="content" source="media/tutorial-end-to-end/publish-zip.png" alt-text="Screenshot of File Explorer in Windows showing the contents of the publish zip folder.":::
+ Now you can close the local console window that you used to prepare the project. The last step will be done in the Azure CLI.
+ 1. In the Azure CLI, run the following command to deploy the published and zipped functions to your Azure function app: ```azurecli-interactive
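    # A sketch of the zip deployment step (assuming the zip deploy mechanism; substitute your own resource names)
    az functionapp deployment source config-zip --resource-group <your-resource-group> --name <your-function-app-name> --src publish.zip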
az functionapp config appsettings set --resource-group <your-resource-group> --n
The output is the list of settings for the Azure Function, which should now contain an entry called `ADT_SERVICE_URL`. - ## Process simulated telemetry from an IoT Hub device An Azure Digital Twins graph is meant to be driven by telemetry from real devices.
Then, get the device connection string with this command:
az iot hub device-identity connection-string show --device-id thermostat67 --hub-name <your-IoT-hub-name> ```
-You'll plug these values into the device simulator code in your local project to connect the simulator into this IoT hub and IoT hub device.
+Next, plug these values into the device simulator code in your local project to connect the simulator into this IoT hub and IoT hub device.
-In a new Visual Studio window, open (from the downloaded solution folder) *DeviceSimulator* > **DeviceSimulator.sln**.
-
->[!NOTE]
-> You should now have two Visual Studio windows, one with *DeviceSimulator.sln* and one from earlier with *AdtE2ESample.sln*.
-
-From the **Solution Explorer** pane in this new Visual Studio window, select **DeviceSimulator > AzureIoTHub.cs** to open it in the editing window. Change the following connection string values to the values you gathered above:
+Navigate on your local machine to the downloaded sample folder, and into the *digital-twins-samples-main\DeviceSimulator\DeviceSimulator* folder. Open the *AzureIoTHub.cs* file for editing. Change the following connection string values to the values you gathered above:
```csharp iotHubConnectionString = <your-hub-connection-string>
deviceConnectionString = <your-device-connection-string>
Save the file.
-Now, to see the results of the data simulation that you've set up, run the **DeviceSimulator** project with this button in the toolbar:
+Now, to see the results of the data simulation that you've set up, navigate to *digital-twins-samples-main\DeviceSimulator\DeviceSimulator* in a local console window.
+
+>[!NOTE]
+> You should now have two open console windows: one that's open to the *DeviceSimulator\DeviceSimulator* folder, and one from earlier that's still open to the *AdtSampleApp\SampleClientApp* folder.
+
+Use the following dotnet command to run the device simulator project:
+```cmd/sh
+dotnet run
+```
-A console window will open and display simulated temperature telemetry messages. These messages are being sent to IoT Hub, where they're then picked up and processed by the Azure function.
+The project will start running and begin displaying simulated temperature telemetry messages. These messages are being sent to IoT Hub, where they're then picked up and processed by the Azure function.
:::image type="content" source="media/tutorial-end-to-end/console-simulator-telemetry.png" alt-text="Screenshot of the console output of the device simulator showing temperature telemetry being sent.":::
You don't need to do anything else in this console, but leave it running while y
The *ProcessHubToDTEvents* function you published earlier listens to the IoT Hub data, and calls an Azure Digital Twins API to update the `Temperature` property on the thermostat67 twin.
-To see the data from the Azure Digital Twins side, go to your Visual Studio window where the *AdtE2ESample.sln* solution is open and run the SampleClientApp project.
+To see the data from the Azure Digital Twins side, switch to your other console window that's open to the *AdtSampleApp\SampleClientApp* folder. Run the *SampleClientApp* project with `dotnet run`.
-In the project console window that opens, run the following command to get the temperatures being reported by the digital twin thermostat67:
+Once the project is running and accepting commands, run the following command to get the temperatures being reported by the digital twin thermostat67:
-```cmd
+```cmd/sh
ObserveProperties thermostat67 Temperature ```
-You should see the live updated temperatures from your Azure Digital Twins instance being logged to the console every two seconds.
+You should see the live updated temperatures from your Azure Digital Twins instance being logged to the console every two seconds. They should reflect the values that the data simulator is generating (you can place the console windows side by side to verify that the values match).
>[!NOTE] > It may take a few seconds for the data from the device to propagate through to the twin. The first few temperature readings may show as 0 before data begins to arrive. :::image type="content" source="media/tutorial-end-to-end/console-digital-twins-telemetry.png" alt-text="Screenshot of the console output showing log of temperature messages from digital twin thermostat67.":::
-Once you've verified the live temperature logging is working successfully, you can stop running both projects. Keep the Visual Studio windows open, as you'll continue using them in the rest of the tutorial.
+Once you've verified the live temperature logging is working successfully, you can stop running both projects. Keep the console windows open, as you'll use them again later in the tutorial.
## Propagate Azure Digital Twins events through the graph
az eventgrid event-subscription create --name <name-for-topic-event-subscription
Now, events should have the capability to flow from the simulated device into Azure Digital Twins, and through the Azure Digital Twins graph to update twins as appropriate. In this section, you'll run the device simulator again to kick off the full event flow you've set up, and query Azure Digital Twins to see the live results
-Go to your Visual Studio window where the *DeviceSimulator.sln* solution is open, and run the DeviceSimulator project.
+Go to your console window that's open to the *DeviceSimulator\DeviceSimulator* folder, and run the device simulator project with `dotnet run`.
-Like when you ran the device simulator earlier, a console window will open and display simulated temperature telemetry messages. These events are going through the flow you set up earlier to update the thermostat67 twin, and then going through the flow you set up recently to update the room21 twin to match.
+Like the first time you ran the device simulator, the project will start running and display simulated temperature telemetry messages. These events are going through the flow you set up earlier to update the thermostat67 twin, and then going through the flow you set up recently to update the room21 twin to match.
:::image type="content" source="media/tutorial-end-to-end/console-simulator-telemetry.png" alt-text="Screenshot of the console output of the device simulator showing temperature telemetry being sent."::: You don't need to do anything else in this console, but leave it running while you complete the next steps.
-To see the data from the Azure Digital Twins side, go to your Visual Studio window where the *AdtE2ESample.sln* solution is open, and run the SampleClientApp project.
+To see the data from the Azure Digital Twins side, go to your other console window that's open to the *AdtSampleApp\SampleClientApp* folder, and run the *SampleClientApp* project with `dotnet run`.
-In the project console window that opens, run the following command to get the temperatures being reported by both the digital twin thermostat67 and the digital twin room21.
+Once the project is running and accepting commands, run the following command to get the temperatures being reported by both the digital twin thermostat67 and the digital twin room21.
```cmd ObserveProperties thermostat67 Temperature room21 Temperature
You should see the live updated temperatures from your Azure Digital Twins insta
:::image type="content" source="media/tutorial-end-to-end/console-digital-twins-telemetry-b.png" alt-text="Screenshot of the console output showing a log of temperature messages, from a thermostat and a room.":::
-Once you've verified the live temperatures logging from your instance is working successfully, you can stop running both projects. You can also close the Visual Studio windows, as the tutorial is now complete.
+Once you've verified the live temperatures logging from your instance is working successfully, you can stop running both projects. You can also close both console windows, as the tutorial is now complete.
## Review
event-grid Availability Zone Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zone-resiliency.md
+
+ Title: Resiliency in Azure Event Grid | Microsoft Docs
+description: Describes how Azure Event Grid supports resiliency.
+ Last updated : 06/21/2022++
+# Resiliency in Azure Event Grid
+
+Azure availability zones are designed to help you achieve resiliency and reliability for your business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions ensures customers are supported across the world.
+
+## Availability zones
+
+Azure Event Grid event subscription configurations and events are automatically replicated across data centers within an availability zone, and across the three availability zones (when available) in the specified region, to provide automatic in-region recovery of your data if a failure occurs in the region. See [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones) to learn more about the supported regions with availability zones.
+
+Azure availability zones are connected by a high-performance network with a round-trip latency of less than 2ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones.
+
+With availability zones, you can design and operate applications and databases that automatically transition between zones without interruption. Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple datacenter infrastructures.
+
+If a region supports availability zones, event data is also replicated across those availability zones.
++
+## Next steps
+
+- If you want to understand the geo disaster recovery concepts, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md)
+
+- If you want to implement your own disaster recovery plan, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md)
+
+- If you want to implement your own client-side failover logic, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery-client-side.md)
event-grid Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/geo-disaster-recovery.md
Title: Geo disaster recovery in Azure Event Grid | Microsoft Docs description: Describes how Azure Event Grid supports geo disaster recovery (GeoDR) automatically. Previously updated : 03/24/2022 Last updated : 06/21/2022 # Server-side geo disaster recovery in Azure Event Grid
-Event Grid supports automatic geo-disaster recovery of metadata for topics, domains, and event subscriptions. Event Grid automatically syncs your event-related infrastructure to a paired region. If an entire Azure region goes down, the events will begin to flow to the geo-paired region with no intervention from you.
-Note that event data is not replicated to the paired region. Only the metadata is replicated. If a region supports availability zones, the event data is replicated across availability zones though.
+Event Grid supports automatic geo-disaster recovery of event subscription configuration data (metadata) for topics, system topics, domains, and partner topics. Event Grid automatically syncs your event-related infrastructure to a paired region. If an entire Azure region goes down, the events will begin to flow to the geo-paired region with no intervention from you.
+
+> [!NOTE]
+> Event data is not replicated to the paired region, only the metadata is replicated.
+
+Microsoft offers options to recover from a failure. You can opt to enable recovery to a paired region where available, or disable recovery to a paired region and manage your own recovery. See [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to learn more about the supported paired regions. The failover is nearly instantaneous once initiated. To learn more about how to implement your own failover strategy, see [Build your own disaster recovery plan for Azure Event Grid topics and domains](custom-disaster-recovery.md).
+
+Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over all Event Grid resources from an affected region to the corresponding geo-paired region. This process is the default option and requires no intervention from the user. Microsoft reserves the right to determine when this option will be exercised, and this mechanism doesn't involve user consent before the user's traffic is failed over.
+
+## Metrics
Disaster recovery is measured with two metrics:
Event Grid's automatic failover has different RPOs and RTOs for your metadata (topics, domains, and event subscriptions) and data (events). If you need a different specification from the following ones, you can still implement your own [client-side failover using the topic health APIs](custom-disaster-recovery.md).
-## Recovery point objective (RPO)
+### Recovery point objective (RPO)
- **Metadata RPO**: zero minutes. Anytime a resource is created in Event Grid, it's instantly replicated across regions. When a failover occurs, no metadata is lost. - **Data RPO**: If your system is healthy and caught up on existing traffic at the time of regional failover, the RPO for events is about 5 minutes.
-## Recovery time objective (RTO)
+### Recovery time objective (RTO)
- **Metadata RTO**: Though it generally happens much more quickly, Event Grid will begin to accept create/update/delete calls for topics and subscriptions within 60 minutes. - **Data RTO**: Like metadata, it generally happens much more quickly; however, within 60 minutes, Event Grid will begin accepting new traffic after a regional failover.
Event Grid's automatic failover has different RPOs and RTOs for your metadata
## Next steps
-If you want to implement you own client-side failover logic, see [# Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md)
+
+If you want to implement your own client-side failover logic, see [Build your own disaster recovery for custom topics in Event Grid](custom-disaster-recovery.md)
event-grid Security Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-authentication.md
Title: Authenticate event delivery to event handlers (Azure Event Grid) description: This article describes different ways of authenticating delivery to event handlers in Azure Event Grid. Previously updated : 06/28/2021 Last updated : 06/22/2022 # Authenticate event delivery to event handlers (Azure Event Grid)
You can enable a system-assigned managed identity for a topic or domain and use
Here are the steps: 1. Create a topic or domain with a system-assigned identity, or update an existing topic or domain to enable identity. For more information, see [Enable managed identity for a system topic](enable-identity-system-topics.md) or [Enable managed identity for a custom topic or a domain](enable-identity-custom-topics-domains.md)
-1. Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For more information, see [Grand identity the access to Event Grid destination](add-identity-roles.md)
+1. Add the identity to an appropriate role (for example, Service Bus Data Sender) on the destination (for example, a Service Bus queue). For more information, see [Grant identity the access to Event Grid destination](add-identity-roles.md)
1. When you create event subscriptions, enable the usage of the identity to deliver events to the destination. For more information, see [Create an event subscription that uses the identity](managed-service-identity.md). For detailed step-by-step instructions, see [Event delivery with a managed identity](managed-service-identity.md).
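To illustrate the role assignment in step 2 with the Azure CLI, here's a sketch assuming a Service Bus queue destination (the placeholder IDs are assumptions to substitute with your own values):

```azurecli-interactive
az role assignment create \
    --assignee <topic-managed-identity-principal-id> \
    --role "Azure Service Bus Data Sender" \
    --scope <service-bus-queue-resource-id>
```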
event-grid Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/storage-upload-process-images.md
az storage account create --name $blobStorageAccount --location southeastasia \
## Create Blob storage containers
-The app uses two containers in the Blob storage account. Containers are similar to folders and store blobs. The *images* container is where the app uploads full-resolution images. In a later part of the series, an Azure function app uploads resized image thumbnails to the *thumbnail
+The app uses two containers in the Blob storage account. Containers are similar to folders and store blobs. The *images* container is where the app uploads full-resolution images. In a later part of the series, an Azure function app uploads resized image thumbnails to the *thumbnail* container.
The *images* container's public access is set to `off`. The *thumbnails* container's public access is set to `container`. The `container` public access setting permits users who visit the web page to view the thumbnails.
event-hubs Event Hubs Get Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-get-connection-string.md
Title: Get connection string - Azure Event Hubs | Microsoft Docs description: This article provides instructions for getting a connection string that clients can use to connect to Azure Event Hubs. Previously updated : 01/03/2022 Last updated : 06/21/2022 # Get an Event Hubs connection string
To communicate with an event hub in a namespace, you need a connection string fo
The connection string for a namespace has the following components embedded within it,
-* FQDN = the FQDN of the Event Hubs namespace you created (it includes the Event Hubs namespace name followed by servicebus.windows.net)
-* SharedAccessKeyName = the name you chose for your application's SAS keys
-* SharedAccessKey = the generated value of the key.
+* Fully qualified domain name of the Event Hubs namespace you created (it includes the Event Hubs namespace name followed by servicebus.windows.net)
+* Name of the shared access key
+* Value of the shared access key
The connection string for a namespace looks like:
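Putting those components together, a sketch of the resulting format with placeholder values:

```
Endpoint=sb://<namespace-name>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key-value>
```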
firewall Fqdn Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/fqdn-tags.md
The following table shows the current FQDN tags you can use. Microsoft maintains
|FQDN tag |Description | |||
-|Windows Update |Allow outbound access to Microsoft Update as described in [How to Configure a Firewall for Software Updates](/mem/configmgr/sum/get-started/install-a-software-update-point).|
-|Windows Diagnostics|Allow outbound access to all [Windows Diagnostics endpoints](/windows/privacy/configure-windows-diagnostic-data-in-your-organization#endpoints).|
-|Microsoft Active Protection Service (MAPS)|Allow outbound access to [MAPS](https://cloudblogs.microsoft.com/enterprisemobility/2016/05/31/important-changes-to-microsoft-active-protection-service-maps-endpoint/).|
-|App Service Environment (ASE)|Allows outbound access to ASE platform traffic. This tag doesn't cover customer-specific Storage and SQL endpoints created by ASE. These should be enabled via [Service Endpoints](../virtual-network/tutorial-restrict-network-access-to-resources.md) or added manually.<br><br>For more information about integrating Azure Firewall with ASE, see [Locking down an App Service Environment](../app-service/environment/firewall-integration.md#configuring-azure-firewall-with-your-ase).|
-|Azure Backup|Allows outbound access to the Azure Backup services.|
-|Azure HDInsight|Allows outbound access for HDInsight platform traffic. This tag doesn't cover customer-specific Storage or SQL traffic from HDInsight. Enable these using [Service Endpoints](../virtual-network/tutorial-restrict-network-access-to-resources.md) or add them manually.|
+|WindowsUpdate |Allow outbound access to Microsoft Update as described in [How to Configure a Firewall for Software Updates](/mem/configmgr/sum/get-started/install-a-software-update-point).|
+|WindowsDiagnostics|Allow outbound access to all [Windows Diagnostics endpoints](/windows/privacy/configure-windows-diagnostic-data-in-your-organization#endpoints).|
+|MicrosoftActiveProtectionService (MAPS)|Allow outbound access to [MAPS](https://cloudblogs.microsoft.com/enterprisemobility/2016/05/31/important-changes-to-microsoft-active-protection-service-maps-endpoint/).|
+|AppServiceEnvironment (ASE)|Allows outbound access to ASE platform traffic. This tag doesn't cover customer-specific Storage and SQL endpoints created by ASE. These should be enabled via [Service Endpoints](../virtual-network/tutorial-restrict-network-access-to-resources.md) or added manually.<br><br>For more information about integrating Azure Firewall with ASE, see [Locking down an App Service Environment](../app-service/environment/firewall-integration.md#configuring-azure-firewall-with-your-ase).|
+|AzureBackup|Allows outbound access to the Azure Backup services.|
+|AzureHDInsight|Allows outbound access for HDInsight platform traffic. This tag doesn't cover customer-specific Storage or SQL traffic from HDInsight. Enable these using [Service Endpoints](../virtual-network/tutorial-restrict-network-access-to-resources.md) or add them manually.|
|WindowsVirtualDesktop|Allows outbound Azure Virtual Desktop (formerly Windows Virtual Desktop) platform traffic. This tag doesn't cover deployment-specific Storage and Service Bus endpoints created by Azure Virtual Desktop. Additionally, DNS and KMS network rules are required. For more information about integrating Azure Firewall with Azure Virtual Desktop, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](protect-azure-virtual-desktop.md).|
-|Azure Kubernetes Service (AKS)|Allows outbound access to AKS. For more information, see [Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments](protect-azure-kubernetes-service.md).|
+|AzureKubernetesService (AKS)|Allows outbound access to AKS. For more information, see [Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments](protect-azure-kubernetes-service.md).|
> [!NOTE] > When selecting FQDN Tag in an application rule, the protocol:port field must be set to **https**. ## Next steps
-To learn how to deploy an Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md).
+To learn how to deploy an Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md).
firewall Protect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md
Previously updated : 10/06/2021 Last updated : 06/22/2022
You will need to create an Azure Firewall Policy and create Rule Collections for
| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination | | | -- | - | -- | -- | - | | | Rule Name | IP Address | VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
-| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop |
+| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop, AzureFrontDoor.Frontend |
| Rule Name | IP Address | VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
-|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 |
+|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net)|
> [!NOTE] > Some deployments might not need DNS rules. For example, Azure Active Directory Domain controllers forward DNS queries to Azure DNS at 168.63.129.16.
You will need to create an Azure Firewall Policy and create Rule Collections for
> [!IMPORTANT] > We recommend that you don't use TLS inspection with Azure Virtual Desktop. For more information, see the [proxy server guidelines](../virtual-desktop/proxy-server-support.md#dont-use-ssl-termination-on-the-proxy-server).
-## Host pool outbound access to the internet
+## Host pool outbound access to the Internet
Depending on your organization's needs, you might want to enable secure outbound internet access for your end users. If the list of allowed destinations is well-defined (for example, for [Microsoft 365 access](/microsoft-365/enterprise/microsoft-365-ip-web-service)), you can use Azure Firewall application and network rules to configure the required access. This routes end-user traffic directly to the internet for best performance. If you need to allow network connectivity for Windows 365 or Intune, see [Network requirements for Windows 365](/windows-365/requirements-network#allow-network-connectivity) and [Network endpoints for Intune](/mem/intune/fundamentals/intune-endpoints).
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
az network front-door waf-policy managed-rules add \
Run [az afd security-policy create](/cli/azure/afd/security-policy#az-afd-security-policy-create) to apply your WAF policy to the endpoint's default domain. > [!NOTE]
-> Substitute 'mysubscription' with your Azure Subscription ID in the domains and waf-policy parameters below. Run [az account subscription list](/cli/azure/aaccount/subscription#az-account-subscription-list) to get Subscription ID details.
+> Substitute 'mysubscription' with your Azure Subscription ID in the domains and waf-policy parameters below. Run [az account subscription list](/cli/azure/account/subscription#az-account-subscription-list) to get Subscription ID details.
```azurecli-interactive
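# A sketch of the command, assuming the resource names used earlier in this quickstart (myRGFD, contosoafd, contosofrontend, contosoWAF); substitute your own values and subscription ID
az afd security-policy create \
    --resource-group myRGFD \
    --profile-name contosoafd \
    --security-policy-name contososecurity \
    --domains /subscriptions/mysubscription/resourcegroups/myRGFD/providers/Microsoft.Cdn/profiles/contosoafd/afdEndpoints/contosofrontend \
    --waf-policy /subscriptions/mysubscription/resourcegroups/myRGFD/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/contosoWAF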
frontdoor Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/endpoint.md
+
+ Title: 'Endpoints in Azure Front Door'
+description: Learn about endpoints when using Azure Front Door.
+++++ Last updated : 06/22/2022+++
+# Endpoints in Azure Front Door
+
+In Azure Front Door Standard/Premium, an *endpoint* is a logical grouping of one or more routes that are associated with domain names. Each endpoint is [assigned a domain name](#endpoint-domain-names) by Front Door, and you can associate your own custom domains by using routes.
+
+## How many endpoints should I create?
+
+A Front Door profile can contain multiple endpoints. However, in many situations you might only need a single endpoint.
+
+When you're planning the endpoints to create, consider the following factors:
+
+- If all of your domains use the same or similar route paths, it's probably best to combine them into a single endpoint.
+- If you use different routes and route paths for each domain, consider using separate endpoints, such as by having an endpoint for each custom domain.
+- If you need to enable or disable all of your domains together, consider using a single endpoint. An entire endpoint can be enabled or disabled together.
+
+## Endpoint domain names
+
+Endpoint domain names are automatically generated when you create a new endpoint. Front Door generates a unique domain name based on several components, including:
+
+- The endpoint's name.
+- A pseudorandom hash value, which is determined by Front Door. By using hash values as part of the domain name, Front Door helps to protect against [subdomain takeover](../security/fundamentals/subdomain-takeover.md) attacks.
+- The base domain name for your Front Door environment. This is generally `z01.azurefd.net`.
+
+For example, suppose you have created an endpoint named `myendpoint`. The endpoint domain name might be `myendpoint-mdjf2jfgjf82mnzx.z01.azurefd.net`.
+
+The endpoint domain is accessible when you associate it with a route.
+
+### Reuse of an endpoint domain name
+
+When you delete and redeploy an endpoint, you might expect to get the same pseudorandom hash value, and therefore the same endpoint domain name. Front Door enables you to control how the pseudorandom hash values are reused on an endpoint-by-endpoint basis.
+
+An endpoint's domain can be reused within the same tenant, subscription, or resource group scope level. You can also choose to not allow the reuse of an endpoint domain. By default, Front Door allows reuse of the endpoint domain within the same Azure Active Directory tenant.
+
+You can use Bicep, an Azure Resource Manager template (ARM template), the Azure CLI, or Azure PowerShell to configure the scope level of the endpoint's domain reuse behavior. You can also configure it for all Front Door endpoints in your whole organization by using Azure Policy. Once you've changed the scope level through the command line, the Azure portal uses the value you defined.
+
+The following table lists the allowable values for the endpoint's domain reuse behavior:
+
+| Value | Description |
+|--|--|
+| `TenantReuse` | This is the default value. Endpoints with the same name in the same Azure Active Directory tenant receive the same domain label. |
+| `SubscriptionReuse` | Endpoints with the same name in the same Azure subscription receive the same domain label. |
+| `ResourceGroupReuse` | Endpoints with the same name in the same resource group receive the same domain label. |
+| `NoReuse` | Endpoints will always receive a new domain label. |
+
+> [!NOTE]
+> You can't modify the reuse behavior of an existing Front Door endpoint. The reuse behavior only applies to newly created endpoints.
+
+The following example shows how to create a new Front Door endpoint with a reuse scope of `SubscriptionReuse`:
+
+# [Azure CLI](#tab/azurecli)
+
+```azurecli
+az afd endpoint create \
+ --resource-group MyResourceGroup \
+ --profile-name MyProfile \
+ --endpoint-name myendpoint \
+ --name-reuse-scope SubscriptionReuse
+```
+
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+New-AzFrontDoorCdnEndpoint `
+ -ResourceGroupName MyResourceGroup `
+ -ProfileName MyProfile `
+ -EndpointName myendpoint `
+ -Location global `
+ -AutoGeneratedDomainNameLabelScope SubscriptionReuse
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource endpoint 'Microsoft.Cdn/profiles/afdEndpoints@2021-06-01' = {
+ name: endpointName
+ parent: profile
+ location: 'global'
+ properties: {
+ autoGeneratedDomainNameLabelScope: 'SubscriptionReuse'
+ }
+}
+```
+++
+## Next steps
+
+* [Configure an origin](origin.md) for Azure Front Door.
frontdoor Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/manager.md
Previously updated : 03/16/2022 Last updated : 06/13/2022
The Front Door manager in Azure Front Door Standard and Premium provides an over
## Routes within an endpoint
-An endpoint is a logical grouping of one or more routes that associates with domains. A route contains the origin group configuration and routing rules between domains and origins. An endpoint can have one or more routes. A route can have multiple domains but only one origin group. You need to have at least one configured route in order for traffic to route between your domains and the origin group.
+An [*endpoint*](endpoint.md) is a logical grouping of one or more routes that are associated with domain names. A route contains the origin group configuration and routing rules between domains and origins. An endpoint can have one or more routes. A route can have multiple domains but only one origin group. You need to have at least one configured route in order for traffic to route between your domains and the origin group.
> [!NOTE] > * You can *enable* or *disable* an endpoint or a route. > * Traffic will only flow to origins once both the endpoint and route are **enabled**. >
-Domains configured within a route can either be a custom domain or an endpoint domain. For more information about custom domains, see [create a custom domain](standard-premium/how-to-add-custom-domain.md) with Azure Front Door. Endpoint domains refer to the auto generated domain name when you create a new endpoint. The name is a unique endpoint hostname with a hash value in the format of `endpointname-hash.z01.azurefd.net`. The endpoint domain will be accessible if you associate it with a route.
-
-### Reuse of an endpoint domain name
-
-An endpoint domain can be reused within the same tenant, subscription, or resource group scope level. You can also choose to not allow the reuse of an endpoint domain. The Azure portal default settings allow tenant level reuse of the endpoint domain. You can use command line to configure the scope level of the endpoint domain reuse. The Azure portal will use the scope level you define through the command line once it has been changed.
-
-| Value | Behavior |
-|--|--|
-| TenantReuse | This is the default value. Object with the same name in the same tenant will receive the same domain label. |
-| SubscriptionReuse | Object with the same name in the same subscription will receive the same domain label. |
-| ResourceGroupReuse | Object with the same name in the same resource group will receive the same domain label. |
-| NoReuse | Object with the same will receive a new domain label for each new instance. |
+Domains configured within a route can be either custom domains or endpoint domains. For more information about custom domains, see [create a custom domain](standard-premium/how-to-add-custom-domain.md) with Azure Front Door. Endpoint domains refer to the auto-generated domain name when you create a new endpoint. The name is a unique endpoint hostname with a hash value in the format of `endpointname-hash.z01.azurefd.net`. The endpoint domain will be accessible if you associate it with a route.
## Security policy in an endpoint
In Azure Front Door (classic), the Front Door manager is called Front Door desig
## Next steps
+* Learn about [endpoints](endpoint.md).
* Learn how to [configure endpoints with Front Door manager](how-to-configure-endpoints.md). * Learn about the Azure Front Door [routing architecture](front-door-routing-architecture.md). * Learn [how traffic is matched to a route](front-door-routing-architecture.md) in Azure Front Door.
frontdoor Concept Endpoint Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-endpoint-manager.md
> [!NOTE] > * This documentation is for Azure Front Door Standard/Premium. Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
-Endpoint Manager provides an overview of endpoints you've configured for your Azure Front Door. An endpoint is a logical grouping of a domains and their associated configurations. Endpoint Manager helps you manage your collection of endpoints for CRUD (create, read, update, and delete) operation. You can manage the following elements for your endpoints through Endpoint
-Endpoint Manager provides an overview of endpoints you've configured for your Azure Front Door. An endpoint is a logical grouping of a domains and their associated configurations. Endpoint Manager helps you manage your collection of endpoints for CRUD (create, read, update, and delete) operation. You can manage the following elements for your endpoints through Endpoint +Endpoint Manager provides an overview of endpoints you've configured for your Azure Front Door. An endpoint is a logical grouping of domains and their associated configuration. Endpoint Manager helps you manage your collection of endpoints for CRUD (create, read, update, and delete) operations. You can manage the following elements for your endpoints through Endpoint Manager:
* Domains * Origin Groups
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy guest configuration baseline for Linux description: Details of the Linux baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 05/12/2022 Last updated : 06/21/2022 ++ # Linux security baseline This article details the configuration settings for Linux guests as applicable in the following
-implementations:
+Azure Policy definitions:
-- **\[Preview\]: Linux machines should meet requirements for the Azure compute security baseline**
- Azure Policy guest configuration definition
-- **Vulnerabilities in security configuration on your machines should be remediated** in Azure
- Security Center
+- Linux machines should meet requirements for the Azure compute security baseline
+- Vulnerabilities in security configuration on your machines should be remediated
For more information, see [Azure Policy guest configuration](../concepts/guest-configuration.md) and
-[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+[Overview of the Azure Security Benchmark (V3)](../../../security/benchmarks/overview.md).
## General security controls
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Monitor.
> [Guest Configuration for VMs](../../policy/concepts/guest-configuration.md). To view examples of how to query Guest Configuration resources in Resource Graph, view [Azure Resource Graph queries by category - Azure Policy Guest Configuration](../samples/samples-by-category.md#azure-policy-guest-configuration). > [!IMPORTANT]
-> Resource configuration changes only supports changes to resource types from the [Resources table](..//reference/supported-tables-resources.md#resources) in Resource Graph. This does not yet include changes to the resource container resources, such as Subscriptions and Resource groups. Changes are queryable for fourteen days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query result to any of the Azure data stores (e.g., Log Analytics) for your desired retention.
+> Resource configuration changes only supports changes to resource types from the [Resources table](../reference/supported-tables-resources.md#resources) in Resource Graph. This does not yet include changes to the resource container resources, such as subscriptions and resource groups. Changes are queryable for fourteen days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export your query results to any of the Azure data stores (e.g., Log Analytics) for your desired retention.
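For example, a sketch of a Resource Graph query against the `resourcechanges` table (the property paths shown are assumptions based on the change event schema):

```kusto
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp), changeType = tostring(properties.changeType)
| where changeTime > ago(1d)
| project changeTime, changeType, targetResourceId = tostring(properties.targetResourceId)
| order by changeTime desc
```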
## Find detected change events and view change details
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
The OSS component versions associated with HDInsight 4.0 are listed in the follo
| Apache Hadoop and YARN | 3.1.1 | | Apache Tez | 0.9.1 | | Apache Pig | 0.16.1 |
-| Apache Hive | 3.1.0 |
+| Apache Hive | 3.1.2 |
| Apache Ranger | 1.1.0 | | Apache HBase | 2.1.6 | | Apache Sqoop | 1.5.0 |
hdinsight Hdinsight Management Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-management-ip-addresses.md
description: Learn which IP addresses you must allow inbound traffic from, in or
Previously updated : 08/11/2020 Last updated : 06/22/2022 # HDInsight management IP addresses
hdinsight Hdinsight Supported Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-supported-node-configuration.md
keywords: vm sizes, cluster sizes, cluster configuration
Previously updated : 05/14/2020 Last updated : 06/22/2022 # What are the default and recommended node configurations for Azure HDInsight?
hdinsight Spark Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-best-practices.md
Title: Apache Spark guidelines on Azure HDInsight
description: Learn guidelines for using Apache Spark in Azure HDInsight. Previously updated : 04/28/2020 Last updated : 06/22/2022 # Apache Spark guidelines
hdinsight Apache Storm Develop Csharp Visual Studio Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-develop-csharp-visual-studio-topology.md
description: Learn how to create Storm topologies in C#. Create a word count top
Previously updated : 12/31/2019 Last updated : 06/22/2022
hdinsight Apache Storm Develop Python Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-develop-python-topology.md
description: Learn how to create an Apache Storm topology that uses Python compo
Previously updated : 12/16/2019 Last updated : 06/22/2022 # Develop Apache Storm topologies using Python on HDInsight
To stop the topology, use __Ctrl + C__.
## Next steps
-See the following documents for other ways to use Python with HDInsight: [How to use Python User Defined Functions (UDF) in Apache Pig and Apache Hive](../hadoop/python-udf-hdinsight.md).
+See the following documents for other ways to use Python with HDInsight: [How to use Python User Defined Functions (UDF) in Apache Pig and Apache Hive](../hadoop/python-udf-hdinsight.md).
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
You need the following prerequisites to complete the steps in this guide:
- Two IoT Central applications - one for your development environment and one for your production environment. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
- Two Azure Key Vaults - one for your development environment and one for your production environment. It's best practice to have a dedicated Key Vault for each environment. To learn more, see [Create an Azure Key Vault with the Azure portal](../../key-vault/general/quick-create-portal.md).
- A [GitHub](https://github.com/) account.
-- An Azure DevOps organization. To learn more, see [Create an Azure DevOps organization](/devops/organizations/accounts/create-organization?view=azure-devops&preserve-view=true).
+- An Azure DevOps organization. To learn more, see [Create an Azure DevOps organization](/azure/devops/organizations/accounts/create-organization).
- PowerShell 7 for Windows, Mac, or Linux. [Get PowerShell](/powershell/scripting/install/installing-powershell).
- Azure Az PowerShell module installed in your PowerShell 7 environment. To learn more, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
- Visual Studio Code or another tool to edit PowerShell and JSON files. [Get Visual Studio Code](https://code.visualstudio.com/Download).
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
The following sections describe the main types of device you can connect to an I
### IoT device
-A IoT device is a standalone device connects directly to IoT Central. A IoT device typically sends telemetry from its onboard or connected sensors to your IoT Central application. Standalone devices can also report property values, receive writable property values, and respond to commands.
+An IoT device is a standalone device that connects directly to IoT Central. An IoT device typically sends telemetry from its onboard or connected sensors to your IoT Central application. Standalone devices can also report property values, receive writable property values, and respond to commands.
### IoT Edge device

An IoT Edge device connects directly to IoT Central. An IoT Edge device can send its own telemetry, report its properties, and respond to writable property updates and commands. IoT Edge modules process data locally on the IoT Edge device. An IoT Edge device can also act as an intermediary for other devices known as downstream devices. Scenarios that use IoT Edge devices include:
-- Aggregate or filter telemetry before it's sent to IoT Central. This approach can help to reduce the costs of sending data to IoT Central.
+- Aggregate or filter telemetry before it's sent to IoT Central. This approach can help reduce the costs of sending data to IoT Central.
- Enable devices that can't connect directly to IoT Central to connect through the IoT Edge device. For example, a downstream device might use Bluetooth to connect to the IoT Edge device, which then connects over the internet to IoT Central.
- Control downstream devices locally to avoid the latency associated with connecting to IoT Central over the internet.
When you register a device with IoT Central, you're telling IoT Central the ID o
There are three ways to register a device in an IoT Central application:

-- Automatically register devices when they first try to connect. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. To learn more, see [Automatically register devices](concepts-device-authentication.md#automatically-register-devices).
+- Automatically register devices when they first try to connect. This scenario enables OEMs to mass manufacture devices that can connect without being registered first. To learn more, see [Automatically register devices](concepts-device-authentication.md#automatically-register-devices).
- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
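The individual registration option can also be scripted. The following sketch isn't part of the original article; it assumes the `azure-iot` CLI extension, and the app, device, and template IDs are placeholders.

```sh
# Sketch: register a single device in an IoT Central application.
# All IDs below are placeholders.
az extension add --name azure-iot
az iot central device create \
  --app-id 00000000-0000-0000-0000-000000000000 \
  --device-id my-device-01 \
  --template dtmi:example:mydevice;1
```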
You only need to register a device once in your IoT Central application.
### Provision a device
-When a device first tries to connect to your IoT Central application, it starts the process by connecting to the Device Provisioning Service (DPS). DPS checks the device's credentials and, if they're valid, provisions the device with connection string for one of IoT Central's internal IoT hubs. DPS uses the _group enrollment_ configurations in your IoT Central application to manage this provisioning process for you.
+When a device first tries to connect to your IoT Central application, it starts the process by connecting to the Device Provisioning Service (DPS). DPS checks the device's credentials and, if they're valid, provisions the device with the connection string for one of IoT Central's internal IoT hubs. DPS uses the _group enrollment_ configurations in your IoT Central application to manage this provisioning process for you.
> [!TIP] > The device also sends the **ID scope** value that tells DPS which IoT Central application the device is connecting to. You can look up the **ID scope** in your IoT Central application on the **Permissions > Device connection groups** page.
Typically, a device should cache the connection string it receives from DPS but
Using DPS enables:

- IoT Central to onboard and connect devices at scale.
-- You to generate device credentials and configure the devices offline without registering the devices through IoT Central UI.
+- You to generate device credentials and configure the devices offline without registering the devices through the IoT Central UI.
- You to use your own device IDs to register devices in IoT Central. Using your own device IDs simplifies integration with existing back-office systems.
- A single, consistent way to connect devices to IoT Central.
All data exchanged between devices and your Azure IoT Central is encrypted. IoT
Device developers typically use one of the device SDKs to implement devices that connect to an IoT Central application. Some scenarios, such as for devices that can't connect to the internet, also require a gateway.
-A solution design must take into account the required device connectivity pattern. These patterns fall in to two broad categories. Both categories include devices sending telemetry to your IoT Central application:
+A solution design must take into account the required device connectivity pattern. These patterns fall into two broad categories. Both categories include devices sending telemetry to your IoT Central application:
### Persistent connections
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Title: Quickstart - Connect a device to an Azure IoT Central application | Micro
description: Quickstart - Connect your first device to a new IoT Central application. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device. Previously updated : 06/08/2022 Last updated : 06/22/2022
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
This method accepts a JSON payload with the following schema:
| tail | integer | Number of log lines in the past to retrieve starting from the latest. OPTIONAL. |
| since | string | Only return logs since this time, as a duration (1 d, 90 m, 2 days 3 hours 2 minutes), rfc3339 timestamp, or UNIX timestamp. If both `tail` and `since` are specified, the logs are retrieved using the `since` value first. Then, the `tail` value is applied to the result, and the final result is returned. OPTIONAL. |
| until | string | Only return logs before the specified time, as an rfc3339 timestamp, UNIX timestamp, or duration (1 d, 90 m, 2 days 3 hours 2 minutes). OPTIONAL. |
-| log level | integer | Filter log lines less than or equal to specified log level. Log lines should follow recommended logging format and use [Syslog severity level](https://en.wikipedia.org/wiki/Syslog#Severity_level) standard. OPTIONAL. |
+| loglevel | integer | Filter log lines equal to the specified log level. Log lines should follow the recommended logging format and use the [Syslog severity level](https://en.wikipedia.org/wiki/Syslog#Severity_level) standard. If you need to filter by multiple log level severity values, rely on regex matching instead, provided the module follows a consistent format when logging different severity levels. OPTIONAL. |
| regex | string | Filter log lines that have content that match the specified regular expression using [.NET Regular Expressions](/dotnet/standard/base-types/regular-expressions) format. OPTIONAL. |
| encoding | string | Either `gzip` or `none`. Default is `none`. |
| contentType | string | Either `json` or `text`. Default is `text`. |
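For illustration, the following sketch invokes the log retrieval direct method with such a payload from the Azure CLI. It assumes the built-in `GetModuleLogs` method on the **edgeAgent** module and the `azure-iot` CLI extension; the hub and device names are placeholders.

```sh
# Sketch: request the last 50 log lines at the informational (6) level
# from the edgeAgent module. Hub and device names are placeholders.
az iot hub invoke-module-method \
  --hub-name my-hub \
  --device-id my-edge-device \
  --module-id '$edgeAgent' \
  --method-name 'GetModuleLogs' \
  --method-payload '{
    "schemaVersion": "1.0",
    "items": [
      { "id": "edgeAgent", "filter": { "tail": 50, "loglevel": 6 } }
    ],
    "encoding": "none",
    "contentType": "text"
  }'
```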
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot.md
You can also restart modules remotely from the Azure portal. For more informatio
## Check your firewall and port configuration rules
-Azure IoT Edge allows communication from an on-premises server to Azure cloud using supported IoT Hub protocols, see [choosing a communication protocol](../iot-hub/iot-hub-devguide-protocols.md). For enhanced security, communication channels between Azure IoT Edge and Azure IoT Hub are always configured to be Outbound. This configuration is based on the [Services Assisted Communication pattern](/archive/blogs/clemensv/service-assisted-communication-for-connected-devices), which minimizes the attack surface for a malicious entity to explore. Inbound communication is only required for specific scenarios where Azure IoT Hub needs to push messages to the Azure IoT Edge device. Cloud-to-device messages are protected using secure TLS channels and can be further secured using X.509 certificates and TPM device modules. The Azure IoT Edge Security Manager governs how this communication can be established, see [IoT Edge Security Manager](../iot-edge/iot-edge-security-manager.md).
+Azure IoT Edge allows communication from an on-premises server to the Azure cloud using supported IoT Hub protocols; see [choosing a communication protocol](../iot-hub/iot-hub-devguide-protocols.md). For enhanced security, communication channels between Azure IoT Edge and Azure IoT Hub are always configured to be Outbound. This configuration is based on the [Services Assisted Communication pattern](/archive/blogs/clemensv/service-assisted-communication-for-connected-devices), which minimizes the attack surface for a malicious entity to explore. Inbound communication is only required for [specific scenarios](#anchortext) where Azure IoT Hub needs to push messages to the Azure IoT Edge device. Cloud-to-device messages are protected using secure TLS channels and can be further secured using X.509 certificates and TPM device modules. The Azure IoT Edge Security Manager governs how this communication can be established; see [IoT Edge Security Manager](../iot-edge/iot-edge-security-manager.md).
While IoT Edge provides enhanced configuration for securing the Azure IoT Edge runtime and deployed modules, it is still dependent on the underlying machine and network configuration. Hence, it is imperative to ensure proper network and firewall rules are set up for secure edge-to-cloud communication. The following table can be used as a guideline when configuring firewall rules for the underlying servers where the Azure IoT Edge runtime is hosted:
While IoT Edge provides enhanced configuration for securing Azure IoT Edge runti
|--|--|--|--|--|
|MQTT|8883|BLOCKED (Default)|BLOCKED (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open when using MQTT as the communication protocol.<li>1883 for MQTT is not supported by IoT Edge. <li>Incoming (Inbound) connections should be blocked.</ul>|
|AMQP|5671|BLOCKED (Default)|OPEN (Default)|<ul> <li>Default communication protocol for IoT Edge. <li> Must be configured to be Open if Azure IoT Edge is not configured for other supported protocols or AMQP is the desired communication protocol.<li>5672 for AMQP is not supported by IoT Edge.<li>Block this port when Azure IoT Edge uses a different IoT Hub supported protocol.<li>Incoming (Inbound) connections should be blocked.</ul>|
-|HTTPS|443|BLOCKED (Default)|OPEN (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open on 443 for IoT Edge provisioning. This configuration is required when using manual scripts or Azure IoT Device Provisioning Service (DPS). <li>Incoming (Inbound) connection should be Open only for specific scenarios: <ul> <li> If you have a transparent gateway with leaf devices that may send method requests. In this case, Port 443 does not need to be open to external networks to connect to IoTHub or provide IoTHub services through Azure IoT Edge. Thus the incoming rule could be restricted to only open Incoming (Inbound) from the internal network. <li> For Client to Device (C2D) scenarios.</ul><li>80 for HTTP is not supported by IoT Edge.<li>If non-HTTP protocols (for example, AMQP or MQTT) cannot be configured in the enterprise; the messages can be sent over WebSockets. Port 443 will be used for WebSocket communication in that case.</ul>|
+|HTTPS|443|BLOCKED (Default)|OPEN (Default)|<ul> <li>Configure Outgoing (Outbound) to be Open on 443 for IoT Edge provisioning. This configuration is required when using manual scripts or Azure IoT Device Provisioning Service (DPS). <li><a id="anchortext">Incoming (Inbound) connection</a> should be Open only for specific scenarios: <ul> <li> If you have a transparent gateway with leaf devices that may send method requests. In this case, Port 443 does not need to be open to external networks to connect to IoTHub or provide IoTHub services through Azure IoT Edge. Thus the incoming rule could be restricted to only open Incoming (Inbound) from the internal network. <li> For Cloud to Device (C2D) scenarios.</ul><li>80 for HTTP is not supported by IoT Edge.<li>If non-HTTP protocols (for example, AMQP or MQTT) cannot be configured in the enterprise, the messages can be sent over WebSockets. Port 443 will be used for WebSocket communication in that case.</ul>|
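As a concrete starting point, the following sketch (not part of the original article) applies the outbound-only guidance from the table using `ufw` on a Linux host running IoT Edge. Adapt it to your firewall tooling and to the protocol you actually use.

```sh
# Sketch: outbound-only firewall rules matching the table above (ufw assumed).
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out 53          # DNS, needed for name resolution
sudo ufw allow out 5671/tcp    # AMQP, the default IoT Edge protocol
sudo ufw allow out 443/tcp     # HTTPS for provisioning, or WebSockets fallback
# sudo ufw allow out 8883/tcp  # MQTT, only if configured as the protocol
sudo ufw enable
```

Note that blocking all other outbound traffic may break OS updates and container image pulls; open those endpoints as your environment requires.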
## Next steps
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-simulator.md
In this tutorial, you'll learn how to:
If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md) and configure an IoT hub.
+Download the zip file named `Tutorial_Simulator.zip` from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) in the latest release, and unzip it.
+ ## Add a device to Azure IoT Hub After the Device Update agent is running on an IoT device, you must add the device to IoT Hub. From within IoT Hub, a connection string is generated for a particular device.
After the Device Update agent is running on an IoT device, you must add the devi
`sudo /usr/bin/AducIotAgent --register-content-handler <full path to the handler file> --update-type <update type name>`
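For example, a purely hypothetical invocation for a simulator handler might look like the following; the handler path and update type are illustrative placeholders, not values from this article.

```sh
# Hypothetical example only: the handler path and update type are placeholders.
sudo /usr/bin/AducIotAgent --register-content-handler \
  /var/lib/adu/extensions/sources/libmicrosoft_simulator_1.so \
  --update-type 'microsoft/swupdate:1'
```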
-1. Download the `sample-du-simulator-data.json` from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases). Run the following command to create and edit the `du-simulator-data.json` file in the tmp folder:
+1. You will need the file `sample-du-simulator-data.json` from `Tutorial_Simulator.zip`, which you downloaded in the prerequisites.
+
+ Open the file `sample-du-simulator-data.json` and copy its contents to the clipboard:
+
+ ```sh
+ nano sample-du-simulator-data.json
+ ```
+
+ Select the contents of the file and press **Ctrl+C** to copy them. Press **Ctrl+X** to close the file without saving changes.
+
+ Run the following command to create and edit the `du-simulator-data.json` file in the tmp folder:
   ```sh
   sudo nano /tmp/du-simulator-data.json
+ ```
+ Press **Ctrl+V** to paste the contents into the editor. Press **Ctrl+X** to exit, and then **Y** to save the changes.
+
+ Change permissions:
+ ```sh
   sudo chown adu:adu /tmp/du-simulator-data.json
   sudo chmod 664 /tmp/du-simulator-data.json
   ```
-
- Copy the contents from the downloaded file into the `du-simulator-data.json` file. Select **Ctrl+X** to save the changes.
If /tmp doesn't exist, then:
Read the license terms prior to using the agent. Your installation and use const
## Import the update
-1. Download the sample tutorial manifest (Tutorial Import Manifest_Sim.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent. The update file is reused from the Raspberry Pi tutorial. Because the update in this tutorial is simulated, the specific file content doesn't matter.
+1. You will need the files `TutorialImportManifest_Sim.importmanifest.json` and `adu-update-image-raspberrypi3.swu` from `Tutorial_Simulator.zip`, which you downloaded in the prerequisites. The update file is reused from the Raspberry Pi tutorial. Because the update in this tutorial is simulated, the specific file content doesn't matter.
1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**.
1. Select the **Updates** tab.
1. Select **+ Import New Update**.
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ip-filtering.md
After filling in the fields, select **Save** to save the rule. You see an alert
:::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-save-new-rule.png" alt-text="Screenshot that shows notification about saving an IP filter rule.":::
-The **Add** option is disabled when you reach the maximum of 10 IP filter rules.
+The **Add** option is disabled when you reach the maximum of 100 IP filter rules.
To edit an existing rule, select the data you want to change, make the change, then select **Save** to save your edit.
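Rules can also be managed outside the portal. As a hedged sketch (not from this article), the generic `--add` syntax of `az iot hub update` can append a rule; the field names are assumed to follow the IoT Hub resource model, and the values are placeholders.

```sh
# Sketch: append an IP filter rule with the generic update syntax.
# filterName, action, and ipMask are assumed from the IoT Hub resource model.
az iot hub update --name my-hub \
  --add properties.ipFilterRules \
  filterName=allow-office action=Accept ipMask=203.0.113.0/24
```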
lab-services How To Enable Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm.md
The nested virtualization VM sizes may use different processors as shown in the
| Medium (nested virtualization) | [Standard_D4s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
| Large (nested virtualization) | [Standard_D8s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
-Each time that a template VM or a student VM is stopped and started, the underlying processor may change. To help ensure that nested VMs work consistently across processors, try enabling [processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v) on the nested VMs. It's recommended to enable **Processor Compatibility** mode on the template VM's nested VMs before publishing or exporting the image. You should also test the performance of the nested VMs with the **Processor Compatibility** mode enabled to ensure performance isn't negatively impacted. For more information, see [ramifications of using processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v.md#ramifications-of-using-processor-compatibility-mode).
+Each time that a template VM or a student VM is stopped and started, the underlying processor may change. To help ensure that nested VMs work consistently across processors, try enabling [processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v) on the nested VMs. It's recommended to enable **Processor Compatibility** mode on the template VM's nested VMs before publishing or exporting the image. You should also test the performance of the nested VMs with the **Processor Compatibility** mode enabled to ensure performance isn't negatively impacted. For more information, see [ramifications of using processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v#ramifications-of-using-processor-compatibility-mode).
lab-services How To Setup Lab Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu.md
This section describes how to validate that your GPU drivers are properly instal
#### Small GPU (Visualization) Windows images
-To verify driver installation for **Small GPU (Visualization)** size, see [validate the AMD GPU drivers on N-series VMs running Windows](/virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
+To verify driver installation for **Small GPU (Visualization)** size, see [validate the AMD GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
#### Small GPU (Compute) and Medium GPU (Visualization) Windows images
lighthouse Deploy Policy Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/deploy-policy-remediation.md
Title: Deploy a policy that can be remediated description: To deploy policies that use a remediation task via Azure Lighthouse, you'll need to create a managed identity in the customer tenant. Previously updated : 11/05/2021 Last updated : 06/20/2022 # Deploy a policy that can be remediated within a delegated subscription
-[Azure Lighthouse](../overview.md) allows service providers to create and edit policy definitions within a delegated subscription. However, to deploy policies that use a [remediation task](../../governance/policy/how-to/remediate-resources.md) (that is, policies with the [deployIfNotExists](../../governance/policy/concepts/effects.md#deployifnotexists) or [modify](../../governance/policy/concepts/effects.md#modify) effect), you'll need to create a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) in the customer tenant. This managed identity can be used by Azure Policy to deploy the template within the policy. There are steps required to enable this scenario, both when you onboard the customer for Azure Lighthouse, and when you deploy the policy itself.
+[Azure Lighthouse](../overview.md) allows service providers to create and edit policy definitions within a delegated subscription. To deploy policies that use a [remediation task](../../governance/policy/how-to/remediate-resources.md) (that is, policies with the [deployIfNotExists](../../governance/policy/concepts/effects.md#deployifnotexists) or [modify](../../governance/policy/concepts/effects.md#modify) effect), you must create a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) in the customer tenant. This managed identity can be used by Azure Policy to deploy the template within the policy. There are steps required to enable this scenario, both when you onboard the customer for Azure Lighthouse, and when you deploy the policy itself.
> [!TIP] > Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. ## Create a user who can assign roles to a managed identity in the customer tenant
-When you onboard a customer to Azure Lighthouse, you use an [Azure Resource Manager template](onboard-customer.md#create-an-azure-resource-manager-template) along with a parameters file to define authorizations that grant access to delegated resources in the customer tenant. Each authorization specifies a **principalId** that corresponds to an Azure AD user, group, or service principal in the managing tenant, and a **roleDefinitionId** that corresponds to the [Azure built-in role](../../role-based-access-control/built-in-roles.md) that will be granted.
+When you onboard a customer to Azure Lighthouse, you use an [Azure Resource Manager template](onboard-customer.md#create-an-azure-resource-manager-template) to define authorizations that grant access to delegated resources in the customer tenant. Each authorization specifies a **principalId** that corresponds to an Azure AD user, group, or service principal in the managing tenant, and a **roleDefinitionId** that corresponds to the [Azure built-in role](../../role-based-access-control/built-in-roles.md) that will be granted.
-To allow a **principalId** to create a managed identity in the customer tenant, you must set its **roleDefinitionId** to **User Access Administrator**. While this role is not generally supported, it can be used in this specific scenario, allowing user accounts with this permission to assign one or more specific built-in roles to managed identities. These roles are defined in the **delegatedRoleDefinitionIds** property, and can include any [supported Azure built-in role](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse) except for User Access Administrator or Owner.
+To allow a **principalId** to create a managed identity in the customer tenant, you must set its **roleDefinitionId** to **User Access Administrator**. While this role is not generally supported, it may be used in this specific scenario, allowing user accounts with this permission to assign one or more specific built-in roles to managed identities. These roles must be defined in the **delegatedRoleDefinitionIds** property, and can include any [supported Azure built-in role](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse) except for User Access Administrator or Owner.
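As a sketch of what such an authorization entry might look like in the onboarding parameters (the principal values are placeholders; the GUIDs are the well-known built-in role definition IDs for User Access Administrator and Contributor):

```sh
# Sketch: write an example authorization entry to a file.
# principalId is a placeholder; the GUIDs are the built-in role definition
# IDs for User Access Administrator and Contributor, respectively.
cat > authorization-example.json <<'EOF'
{
  "principalId": "00000000-0000-0000-0000-000000000000",
  "principalIdDisplayName": "Policy remediation automation account",
  "roleDefinitionId": "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9",
  "delegatedRoleDefinitionIds": [
    "b24988ac-6180-42a0-ab88-20f7382dd24c"
  ]
}
EOF
```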
-After the customer is onboarded, the **principalId** created in this authorization will be able to assign these built-in roles to managed identities in the customer tenant. However, they will not have any other permissions normally associated with the User Access Administrator role.
+After the customer is onboarded, the **principalId** created in this authorization will be able to assign these built-in roles to managed identities in the customer tenant. It will not have any other permissions normally associated with the User Access Administrator role.
> [!NOTE] > [Role assignments](../../role-based-access-control/role-assignments-steps.md#step-5-assign-role) across tenants must currently be done through APIs, not in the Azure portal.
lighthouse Manage Hybrid Infrastructure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-hybrid-infrastructure-arc.md
Title: Manage hybrid infrastructure at scale with Azure Arc description: Azure Lighthouse helps you effectively manage customers' machines and Kubernetes clusters outside of Azure. Previously updated : 09/07/2021 Last updated : 06/20/2022
[Azure Arc](../../azure-arc/overview.md) helps simplify complex and distributed environments across on-premises, edge and multicloud, enabling deployment of Azure services anywhere and extending Azure management to any infrastructure.
-With [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), customers can manage any Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. By linking a hybrid machine to Azure, it becomes connected and is treated as a resource in Azure. Service providers can then manage these non-Azure machines along with their customers' Azure resources.
+With [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), customers can manage Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. Through Azure Lighthouse, service providers can then manage these connected non-Azure machines along with their customers' Azure resources.
-[Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
-
-This topic provides an overview of how to use Azure Arc-enabled servers and Azure Arc-enabled Kubernetes in a scalable way across the customer tenants you manage.
+[Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters outside of Azure. When a Kubernetes cluster is connected to Azure Arc, it appears in the Azure portal with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
> [!TIP] > Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md).
This topic provides an overview of how to use Azure Arc-enabled servers and Azur
As a service provider, you can manage on-premises Windows Server or Linux machines outside Azure that your customers have connected to their subscription using the [Azure Connected Machine agent](../../azure-arc/servers/agent-overview.md). When viewing resources for a delegated subscription in the Azure portal, you'll see these connected machines labeled with **Azure Arc**.
-You can manage these connected machines using Azure constructs, such as Azure Policy and tagging, the same way that youΓÇÖd manage the customer's Azure resources. You can also work across customer tenants to manage all connected hybrid machines together.
+You can manage these connected machines using Azure constructs, such as Azure Policy and tagging, just as you would manage the customer's Azure resources. You can also work across customer tenants to manage all connected machines together.
-For example, you can [ensure the same set of policies are applied across customers' hybrid machines](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md). You can also use Microsoft Defender for Cloud to monitor compliance across all of your customers' hybrid environments, or [use Azure Monitor to collect data directly from hybrid machines](../../azure-arc/servers/learn/tutorial-enable-vm-insights.md) into a Log Analytics workspace. [Virtual machine extensions](../../azure-arc/servers/manage-vm-extensions.md) can be deployed to non-Azure Windows and Linux VMs, simplifying management of customer's hybrid machines.
+For example, you can [ensure the same set of policies are applied across customers' hybrid machines](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md). You can also use Microsoft Defender for Cloud to monitor compliance across all of your customers' hybrid environments, or [use Azure Monitor to collect data directly](../../azure-arc/servers/learn/tutorial-enable-vm-insights.md) into a Log Analytics workspace. [Virtual machine extensions](../../azure-arc/servers/manage-vm-extensions.md) can be deployed to non-Azure Windows and Linux VMs, simplifying management of your customers' hybrid machines.
## Manage hybrid Kubernetes clusters at scale with Azure Arc-enabled Kubernetes You can manage Kubernetes clusters that have been [connected to a customer's subscription with Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md), just as if they were running in Azure.
-If your customer has created a service principal account to onboard Kubernetes clusters to Azure Arc, you can access this account so that you can onboard and manage clusters. To do so, a user in the managing tenant must have been granted the [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) when the subscription containing the service principal account was [onboarded to Azure Lighthouse](onboard-customer.md).
+If your customer has created a service principal account to onboard Kubernetes clusters to Azure Arc, you can access this account so that you can [onboard and manage clusters](../../azure-arc/kubernetes/quickstart-connect-cluster.md). To do so, a user in the managing tenant must have been granted the [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) when the subscription containing the service principal account was [onboarded to Azure Lighthouse](onboard-customer.md).
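As an illustrative sketch (all values are placeholders, and this isn't from the original article), onboarding a cluster with that service principal might look like:

```sh
# Sketch: sign in with the customer's onboarding service principal,
# then connect a cluster to Azure Arc. All values are placeholders.
az extension add --name connectedk8s
az login --service-principal -u "$APP_ID" -p "$APP_SECRET" --tenant "$TENANT_ID"
az connectedk8s connect --name my-cluster --resource-group my-rg
```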
-You can deploy [configurations](../../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md) and [Helm charts](../../azure-arc/kubernetes/use-gitops-with-helm.md) using GitOps for connected clusters.
+You can deploy [configurations](../../azure-arc/kubernetes/tutorial-use-gitops-flux2.md) and [Helm charts](../../azure-arc/kubernetes/use-gitops-with-helm.md) using [GitOps for connected clusters](../../azure-arc/kubernetes/conceptual-gitops-flux2.md).
-You can also monitor connected clusters with Azure Monitor, and [use Azure Policy to apply cluster configurations at scale](../../azure-arc/kubernetes/use-azure-policy.md).
+You can also [monitor connected clusters](../..//azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md) with Azure Monitor, and [use Azure Policy to apply cluster configurations at scale](../../azure-arc/kubernetes/use-azure-policy.md).
## Next steps

-- Explore the jumpstarts and samples in the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc).
-- Learn about [supported scenarios for Azure Arc-enabled servers](../../azure-arc/servers/overview.md#supported-cloud-operations).
+- Explore the [Azure Arc Jumpstart](https://azurearcjumpstart.io/).
+- Learn about [supported cloud operations for Azure Arc-enabled servers](../../azure-arc/servers/overview.md#supported-cloud-operations).
- Learn about [Kubernetes distributions supported by Azure Arc](../../azure-arc/kubernetes/overview.md#supported-kubernetes-distributions).
lighthouse Manage Sentinel Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-sentinel-workspaces.md
Title: Manage Microsoft Sentinel workspaces at scale description: Azure Lighthouse helps you effectively manage Microsoft Sentinel across delegated customer resources. Previously updated : 11/05/2021 Last updated : 06/20/2022
If you are managing Microsoft Sentinel resources for multiple customers, you can
[Azure Monitor Workbooks in Microsoft Sentinel](../../sentinel/overview.md#workbooks) help you visualize and monitor data from your connected data sources to gain insights. You can use the built-in workbook templates in Microsoft Sentinel, or create custom workbooks for your scenarios.
-You can deploy workbooks in your managing tenant and create at-scale dashboards to monitor and query data across customer tenants. For more information, see [Cross-workspace monitoring](../../sentinel/extend-sentinel-across-workspaces-tenants.md#using-cross-workspace-workbooks).
+You can deploy workbooks in your managing tenant and create at-scale dashboards to monitor and query data across customer tenants. For more information, see [Cross-workspace workbooks](../../sentinel/extend-sentinel-across-workspaces-tenants.md#using-cross-workspace-workbooks).
You can also deploy workbooks directly in an individual tenant that you manage for scenarios specific to that customer. ## Run Log Analytics and hunting queries across Microsoft Sentinel workspaces
-Create and save Log Analytics queries for threat detection centrally in the managing tenant, including [hunting queries](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-hunting). These queries can then be run across all of your customers' Microsoft Sentinel workspaces by using the Union operator and the workspace () expression. For more information, see [Cross-workspace querying](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-querying).
+Create and save Log Analytics queries for threat detection centrally in the managing tenant, including [hunting queries](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-hunting). These queries can then be run across all of your customers' Microsoft Sentinel workspaces by using the Union operator and the [workspace() expression](../../azure-monitor/logs/workspace-expression.md).
+
+For more information, see [Cross-workspace querying](../../sentinel/extend-sentinel-across-workspaces-tenants.md#cross-workspace-querying).
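As a hedged sketch of running such a query from the managing tenant (workspace names and the GUID are placeholders; assumes the `log-analytics` CLI extension):

```sh
# Sketch: union results across two customer workspaces referenced by name.
az extension add --name log-analytics
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query 'union workspace("WS-customer-tenant-1").SecurityEvent, workspace("WS-customer-tenant-2").SecurityEvent | summarize count() by TenantId'
```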
## Use automation for cross-workspace management
You can use automation to manage multiple Microsoft Sentinel workspaces and conf
Use Azure Lighthouse in conjunction with Microsoft Sentinel to monitor the security of Office 365 environments across tenants. First, out-of-the box [Office 365 data connectors must be enabled in the managed tenant](../../sentinel/data-connectors-reference.md#microsoft-office-365) so that information about user and admin activities in Exchange and SharePoint (including OneDrive) can be ingested to a Microsoft Sentinel workspace within the managed tenant. This includes details about actions such as file downloads, access requests sent, changes to group events, and mailbox operations, along with information about the users who performed the actions. [Office 365 DLP alerts](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-office-365-dlp-events-into-azure-sentinel/ba-p/1031820) are also supported as part of the built-in Office 365 connector.
-[Microsoft Defender for Cloud Apps connector](../../sentinel/data-connectors-reference.md#microsoft-cloud-app-security-mcas) to stream alerts and Cloud Discovery logs into Microsoft Sentinel. This gives you visibility into cloud apps, provides sophisticated analytics to identify and combat cyberthreats, and helps yuo control how data travels. Activity logs for Defender for Cloud Apps can be [consumed using the Common Event Format (CEF)](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-box-com-activity-events-via-microsoft-cloud-app-security/ba-p/1072849).
+You can use the [Microsoft Defender for Cloud Apps connector](../../sentinel/data-connectors-reference.md#microsoft-cloud-app-security-mcas) to stream alerts and Cloud Discovery logs into Microsoft Sentinel. This gives you visibility into cloud apps, provides sophisticated analytics to identify and combat cyberthreats, and helps you control how data travels. Activity logs for Defender for Cloud Apps can be [consumed using the Common Event Format (CEF)](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-box-com-activity-events-via-microsoft-cloud-app-security/ba-p/1072849).
After setting up Office 365 data connectors, you can use cross-tenant Microsoft Sentinel capabilities such as viewing and analyzing the data in workbooks, using queries to create custom alerts, and configuring playbooks to respond to threats.
lighthouse Migration At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/migration-at-scale.md
Title: Manage Azure Migrate projects at scale description: Azure Lighthouse helps you effectively use Azure Migrate across delegated customer resources. Previously updated : 09/13/2021 Last updated : 06/20/2022
This topic provides an overview of how [Azure Lighthouse](../overview.md) can he
Azure Lighthouse allows service providers to perform operations at scale across several tenants at once, making management tasks more efficient.
-Azure Migrate provides a centralized hub to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. Typically, partners who performing assessments and migration at scale for multiple customers must access each customer subscription individually by using the [CSP (Cloud Solution Provider) subscription model](/partner-center/customers-revoke-admin-privileges) or by [creating a guest user in the customer tenant](../../active-directory/external-identities/what-is-b2b.md).
+Azure Migrate provides a centralized hub to assess on-premises servers, infrastructure, applications, and data, and migrate them to Azure.
-Azure Lighthouse integration with Azure Migrate lets service providers discover, assess, and migrate workloads for different customers at scale, while allowing customers to have full visibility and control of their environments. Through Azure delegated resource management, service providers have a single view of all of the Azure Migrate projects they manage across multiple customer tenants.
-
-> [!NOTE]
-> Via Azure Lighthouse, partners can perform discovery, assessment and migration for on-premises VMware VMs, Hyper-V VMs, physical servers and AWS/GCP instances. There are two options for [VMware VM migration](../../migrate/server-migrate-overview.md). Currently, only the agent-based method of migration can be used when working on a migration project in a delegated customer subscription; migration using agentless replication is not currently supported through delegated access to the customer's scope.
+Azure Lighthouse integration with Azure Migrate lets service providers discover, assess, and migrate workloads for different customers at scale, rather than accessing each customer subscription individually. Service providers can have a single view of all of the Azure Migrate projects they manage across multiple customer tenants. Their customers will have full visibility into service provider access, and they maintain control of their own environments.
> [!TIP] > Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md). Depending on your scenario, you may wish to create the Azure Migrate project in the customer tenant or in your managing tenant. Review the considerations below and determine which model best fits your customer's migration needs.
+> [!NOTE]
+> Via Azure Lighthouse, partners can perform discovery, assessment and migration for on-premises VMware VMs, Hyper-V VMs, physical servers and AWS/GCP instances. There are two options for [VMware VM migration](../../migrate/server-migrate-overview.md). Currently, only the agent-based method of migration can be used when working on a migration project in a delegated customer subscription; migration using agentless replication is not currently supported through delegated access to the customer's scope.
+ ## Create an Azure Migrate project in the customer tenant One option when using Azure Lighthouse is to create the Azure Migrate project in the customer tenant. Users in the managing tenant can then select the customer subscription when creating a migration project. From the managing tenant, the service provider can perform the necessary migration operations. This may include deploying the Azure Migrate appliance to discover the workloads, assessing workloads by grouping VMs and calculating cloud-related costs, reviewing VM readiness, and performing the migration.
-In this scenario, no resources will be created and stored in the managing tenant, even though the discovery and assessment steps can be initiated and executed from that tenant. All of the resources, such as migration projects, assessment reports for on-prem workloads, and migrated resources at the target destination, will be deployed in the customer subscription. However, the service provider can access all customer projects from their own tenant and portal experience.
+In this scenario, no resources will be created and stored in the managing tenant, even though the discovery and assessment steps can be initiated and executed from that tenant. All of the resources, such as migration projects, assessment reports for on-premises workloads, and migrated resources at the target destination, will be deployed in the customer subscription. However, the service provider can access all customer projects from their own tenant and portal experience.
This approach minimizes context switching for service providers working across multiple customers, while letting customers keep all of their resources in their own tenants.
The workflow for this model will be similar to the following:
1. When the target customer subscription is ready, proceed with the migration through the access granted by Azure Lighthouse. The migration project containing assessment results and migrated resources will be created in the customer tenant under the target subscription. > [!TIP]
-> Prior to migration, a landing zone will need to be deployed to provision the foundation infrastructure resources and prepare the subscription to which virtual machines will be migrated. To access or create some resources in this landing zone, the Owner built-in role may be required, which is not currently supported in Azure Lighthouse. For such scenarios, the customer may need to provide guest access role or delegate admin access via the CSP subscription model. For an approach to creating multi-tenant landing zones, see the [Multi-tenant-Landing-Zones demo solution](https://github.com/Azure/Multi-tenant-Landing-Zones) on GitHub.
+> Prior to migration, a landing zone must be deployed to provision the foundation infrastructure resources and prepare the subscription to which virtual machines will be migrated. To access or create some resources in this landing zone, the Owner built-in role may be required, which is not currently supported in Azure Lighthouse. With these scenarios, the customer may need to provide [guest access](/azure/active-directory/external-identities/what-is-b2b) or delegate admin access via the [Cloud Solution Provider (CSP) subscription model](/partner-center/customers-revoke-admin-privileges). For an approach to creating multi-tenant landing zones, see the [Multi-tenant-Landing-Zones demo solution](https://github.com/Azure/Multi-tenant-Landing-Zones) on GitHub.
## Create an Azure Migrate project in the managing tenant
This approach enables services providers to start migration discovery and assess
The workflow for this model will be similar to the following:
-1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role.
+1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Be sure to modify the parameter file to reflect your environment before deploying the template.
1. The designated user signs into the managing tenant in the Azure portal, then goes to Azure Migrate. This user [creates an Azure Migrate project](../../migrate/create-manage-projects.md) in a subscription belonging to the managing tenant. 1. The user then [performs steps for discovery and assessment](../../migrate/tutorial-discover-vmware.md). The on-premises VMs will be discovered and assessed within the migration project created in the managing tenant, then migrated from there.
The workflow for this model will be similar to the following:
1. When ready, proceed with the migration by selecting the delegated customer subscription as the target destination for replicating and migrating the workloads. The newly created resources will exist in the customer subscription, while the assessment data and resources pertaining to the migration project will remain in the managing tenant.
-NOTE: You must modify the parameter file to reflect your environment before deploying
-https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate
- ## Partner recognition for customer migrations As a member of the [Microsoft Partner Network](https://partner.microsoft.com), you can link your partner ID with the credentials used to manage delegated customer resources. This allows Microsoft to attribute influence and Azure consumed revenue to your organization based on the tasks you perform for customers, including migration projects.
lighthouse Monitor At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-at-scale.md
Title: Monitor delegated resources at scale description: Azure Lighthouse helps you use Azure Monitor Logs in a scalable way across customer tenants. Previously updated : 12/06/2021 Last updated : 06/20/2022
We recommend creating these workspaces directly in the customer tenants. This wa
> [!TIP] > Any automation account used to access data from a Log Analytics workspace must be created in the same tenant as the workspace.
-You can create a Log Analytics workspace by using the [Azure portal](../../azure-monitor/logs/quick-create-workspace.md), by using [Azure CLI](../../azure-monitor/logs/resource-manager-workspace.md), or by using [Azure PowerShell](../../azure-monitor/logs/powershell-workspace-configuration.md).
+You can create a Log Analytics workspace by using the [Azure portal](../../azure-monitor/logs/quick-create-workspace.md), by using [Azure Resource Manager templates](../../azure-monitor/logs/resource-manager-workspace.md), or by using [Azure PowerShell](../../azure-monitor/logs/powershell-workspace-configuration.md).
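The Azure CLI is another option; a minimal sketch with placeholder names:

```sh
# Sketch: create a Log Analytics workspace in a delegated customer subscription.
az monitor log-analytics workspace create \
  --resource-group my-rg \
  --workspace-name customer1-logs \
  --location eastus
```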
> [!IMPORTANT] > If all workspaces are created in customer tenants, the Microsoft.Insights resource providers must also be [registered](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on a subscription in the managing tenant. If your managing tenant doesn't have an existing Azure subscription, you can register the resource provider manually by using the following PowerShell commands:
You can create a Log Analytics workspace by using the [Azure portal](../../azure
## Deploy policies that log data
-Once you've created your Log Analytics workspaces, you can deploy [Azure Policy](../../governance/policy/index.yml) across your customer hierarchies so that diagnostic data is sent to the appropriate workspace in each tenant. The exact policies you deploy may vary depending on the resource types that you want to monitor.
+Once you've created your Log Analytics workspaces, you can deploy [Azure Policy](../../governance/policy/overview.md) across your customer hierarchies so that diagnostic data is sent to the appropriate workspace in each tenant. The exact policies you deploy may vary, depending on the resource types that you want to monitor.
To learn more about creating policies, see [Tutorial: Create and manage policies to enforce compliance](../../governance/policy/tutorials/create-and-manage.md). This [community tool](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/tools/azure-diagnostics-policy-generator) provides a script to help you create policies to monitor the specific resource types that you choose.
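For instance, a hedged sketch of assigning one such policy with a system-assigned managed identity so its remediation task can deploy resources (the definition ID and scope are placeholders):

```sh
# Sketch: assign a deployIfNotExists-style policy with an identity
# so remediation can deploy diagnostic settings.
az policy assignment create \
  --name deploy-diagnostics \
  --policy '<policy-definition-name-or-id>' \
  --scope /subscriptions/<delegated-subscription-id> \
  --assign-identity \
  --location eastus
```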
workspace("WS-customer-tenant-2").AzureDiagnostics
| project Category, ResourceGroup, TenantId
```
-For more examples of queries across multiple Log Analytics workspaces, see [Query across resources with Azure Monitor](../../azure-monitor/logs/cross-workspace-query.md).
+For more examples of queries across multiple Log Analytics workspaces, see [Create a log query across multiple workspaces and apps in Azure Monitor](../../azure-monitor/logs/cross-workspace-query.md).
> [!IMPORTANT] > If you use an automation account to query data from a Log Analytics workspace, that automation account must be created in the same tenant as the workspace. ## View alerts across customers
-You can view [alerts](../../azure-monitor/alerts/alerts-overview.md) for the delegated subscriptions in customer tenants that your manage.
+You can view [alerts](../../azure-monitor/alerts/alerts-overview.md) for delegated subscriptions in the customer tenants that you manage.
From your managing tenant, you can [create, view, and manage activity log alerts](../../azure-monitor/alerts/alerts-activity-log.md) in the Azure portal or through APIs and management tools.
lighthouse Monitor Delegation Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-delegation-changes.md
Title: Monitor delegation changes in your managing tenant description: Learn how to monitor all Azure Lighthouse delegation activity to your managing tenant. Previously updated : 09/08/2021 Last updated : 06/22/2022 ms.devlang: azurecli
As a service provider, you may want to be aware when customer subscriptions or r
In the managing tenant, the [Azure activity log](../../azure-monitor/essentials/platform-logs-overview.md) tracks delegation activity at the tenant level. This logged activity includes any added or removed delegations from customer tenants.
-This topic explains the permissions needed to monitor delegation activity to your tenant (across all of your customers). It also includes a sample script that shows one method for querying and reporting on this data.
+This topic explains the permissions needed to monitor delegation activity to your tenant across all of your customers. It also includes a sample script that shows one method for querying and reporting on this data.
> [!IMPORTANT] > All of these steps must be performed in your managing tenant, rather than in any customer tenants.
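As a sketch, the root-scope role assignment might be made as follows (requires elevated access in the managing tenant; the object ID is a placeholder):

```sh
# Sketch: assign Monitoring Reader at root scope ("/") in the managing tenant.
az role assignment create \
  --assignee 00000000-0000-0000-0000-000000000000 \
  --role "Monitoring Reader" \
  --scope "/"
```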
After you've assigned the Monitoring Reader role at root scope to the desired ac
## View delegation changes in the Azure portal
-Users who has been assigned the Monitoring Reader role at root scope can view delegation changes directly in the Azure portal.
+Users who have been assigned the Monitoring Reader role at root scope can view delegation changes directly in the Azure portal.
1. Navigate to the **My customers** page, then select **Activity log** from the left-hand navigation menu. 1. Ensure that **Directory Activity** is selected in the filter near the top of the screen.
else {
## Next steps - Learn how to [onboard customers to Azure Lighthouse](onboard-customer.md).-- Learn about [Azure Monitor](../../azure-monitor/index.yml) and the [Azure activity log](../../azure-monitor/essentials/platform-logs-overview.md).
+- Learn about [Azure Monitor](../../azure-monitor/index.yml) and the [Azure activity log](../../azure-monitor/essentials/activity-log.md).
- Review the [Activity Logs by Domain](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/workbook-activitylogs-by-domain) sample workbook to learn how to display Azure Activity logs across subscriptions with an option to filter them by domain name.
lighthouse Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-management-group.md
Title: Onboard all subscriptions in a management group description: You can deploy an Azure Policy to delegate all subscriptions within a management group to an Azure Lighthouse managing tenant. Previously updated : 08/13/2021 Last updated : 06/22/2022
lighthouse Partner Earned Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/partner-earned-credit.md
Title: Link your partner ID to track your impact on delegated resources description: Associate your partner ID to receive partner earned credit (PEC) on customer resources you manage through Azure Lighthouse. Previously updated : 12/16/2021 Last updated : 06/22/2022 # Link your partner ID to track your impact on delegated resources
-If you're a member of the [Microsoft Partner Network](https://partner.microsoft.com/), you can link your partner ID with the credentials used to manage delegated customer resources, allowing Microsoft to identify and recognize partners who drive Azure customer success. This link also allows [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partners to receive [partner earned credit for managed services (PEC)](/partner-center/partner-earned-credit) for customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started).
+If you're a member of the [Microsoft Partner Network](https://partner.microsoft.com/), you can link your partner ID with the credentials used to manage delegated customer resources. This link allows Microsoft to identify and recognize partners who drive Azure customer success. It also allows [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partners to receive [partner earned credit for managed services (PEC)](/partner-center/partner-earned-credit) for customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started).
To earn recognition for Azure Lighthouse activities, you'll need to [link your MPN ID](../../cost-management-billing/manage/link-partner-id.md) with at least one user account in your managing tenant, and ensure that the linked account has access to each of your onboarded subscriptions.
To earn recognition for Azure Lighthouse activities, you'll need to [link your M
Use the following process to link your partner ID (and enable partner earned credit, if applicable). You'll need to know your [MPN partner ID](/partner-center/partner-center-account-setup#locate-your-mpn-id) to complete these steps. Be sure to use the **Associated MPN ID** shown on your partner profile.
-For simplicity, we recommend creating a service principal account in your tenant, linking it to your **Associated MPN ID**, then granting it access to every customer you onboard with an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec).
+For simplicity, we recommend creating a service principal account in your tenant, linking it to your **Associated MPN ID**, then granting it an [Azure built-in role that is eligible for PEC](/partner-center/azure-roles-perms-pec) for every customer that you onboard.
1. [Create a service principal user account](../../active-directory/develop/howto-authenticate-service-principal-powershell.md) in your managing tenant. For this example, we'll use the name *Provider Automation Account* for this service principal account. 1. Using that service principal account, [link to your Associated MPN ID](../../cost-management-billing/manage/link-partner-id.md#link-to-a-partner-id) in your managing tenant. You only need to do this one time.
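For illustration only, the link in step 2 can also be made programmatically. The sketch below assumes the `azure-mgmt-managementpartner` package; the client class name and the `partner.create` call are assumptions based on that package, and the MPN ID is a placeholder.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.managementpartner import ACEProvisioningManagementPartnerAPI

# Authenticate as the service principal (for example, through the environment
# variables that DefaultAzureCredential reads), then link its MPN ID.
client = ACEProvisioningManagementPartnerAPI(DefaultAzureCredential())

client.partner.create(partner_id="<associated-mpn-id>")  # placeholder ID
```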
lighthouse Policy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/policy-at-scale.md
Title: Deploy Azure Policy to delegated subscriptions at scale description: Azure Lighthouse lets you deploy a policy definition and policy assignment across multiple tenants. Previously updated : 12/16/2021 Last updated : 06/22/2022
lighthouse Publish Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/publish-managed-services-offers.md
You need to have a valid [account in Partner Center](../../marketplace/create-ac
Per the [Managed Service offer certification requirements](/legal/marketplace/certification-policies#700-managed-services), you must have a [Silver or Gold Cloud Platform competency level](/partner-center/learn-about-competencies) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) in order to publish a Managed Service offer. You must also [enter a lead destination that will create a record in your CRM system](../../marketplace/plan-managed-service-offer.md#customer-leads) each time a customer deploys your offer.
-If you don't want to publish an offer to Azure Marketplace, or don't meet all the requirements, you can onboard customers manually by using Azure Resource Manager templates. For more info, see [Onboard a customer to Azure Lighthouse](onboard-customer.md).
+If you don't want to publish an offer to Azure Marketplace, or if you don't meet all the requirements, you can onboard customers manually by using Azure Resource Manager templates. For details, see [Onboard a customer to Azure Lighthouse](onboard-customer.md).
The following table can help determine whether to onboard customers by publishing a Managed Service offer or by using Azure Resource Manager templates.
To learn about the general publishing process, review the [commercial marketplac
Once a customer adds your offer, they will be able to delegate one or more subscriptions or resource groups, which will then be [onboarded to Azure Lighthouse](#the-customer-onboarding-process). > [!IMPORTANT]
-> Each plan in a Managed Service offer includes a **Manifest Details** section, where you define the Azure Active Directory (Azure AD) entities in your tenant that will have access to the delegated resource groups and/or subscriptions for customers who purchase that plan. It's important to be aware that any group (or user or service principal) that you include will have the same permissions for every customer who purchases the plan. To assign different groups to work with each customer, you can publish a separate [private plan](../../marketplace/private-offers.md) that is exclusive to each customer. Keep in mind that private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
+> Each plan in a Managed Service offer includes a **Manifest Details** section, where you define the Azure Active Directory (Azure AD) entities in your tenant that will have access to the delegated resource groups and/or subscriptions for customers who purchase that plan. It's important to be aware that any group (or user or service principal) that you include will have the same permissions for every customer who purchases the plan. To assign different groups to work with each customer, you can publish a separate [private plan](../../marketplace/private-offers.md) that is exclusive to each customer. These private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
## Publish your offer Once you've completed all of the sections, your next step is to publish the offer. After you initiate the publishing process, your offer will go through several validation and publishing steps. For more information, see [Review and publish an offer to the commercial marketplace](../../marketplace/review-publish-offer.md).
-You can [publish an updated version of your offer](../../marketplace/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes and update to the new version](view-manage-service-providers.md#update-service-provider-offers).
+You can [publish an updated version of your offer](../../marketplace/update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously published offer. When you do so, customers who have already added the offer will see an icon in the **Service providers** page in the Azure portal that lets them know an update is available. Each customer will be able to [review the changes and update to the new version](view-manage-service-providers.md#update-service-provider-offers).
## The customer onboarding process
-After a customer adds your offer, they can [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Provider offers** section of the [**Service providers**](view-manage-service-providers.md) page in the Azure portal.
+After a customer adds your offer, they can [delegate one or more specific subscriptions or resource groups](view-manage-service-providers.md#delegate-resources), which will be onboarded to Azure Lighthouse. If a customer has accepted an offer but has not yet delegated any resources, they'll see a note at the top of the **Service provider offers** section of the **Service providers** page in the Azure portal.
> [!IMPORTANT] > Delegation must be done by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). To find users who can delegate the subscription, a user in the customer's tenant can select the subscription in the Azure portal, open **Access control (IAM)**, and [view all users with the Owner role](../../role-based-access-control/role-assignments-list-portal.md#list-owners-of-a-subscription).
lighthouse Remove Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/remove-delegation.md
Title: Remove access to a delegation description: Learn how to remove access to resources that had been delegated to a service provider for Azure Lighthouse. Previously updated : 09/08/2021 Last updated : 06/22/2022
lighthouse Update Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/update-delegation.md
Title: Update a delegation description: Learn how to update a delegation for a customer previously onboarded to Azure Lighthouse. Previously updated : 09/08/2021 Last updated : 06/22/2022
After you have onboarded a subscription (or resource group) to Azure Lighthouse,
If you [onboarded your customer through Azure Resource Manager templates (ARM templates)](onboard-customer.md), a new deployment must be performed for that customer. Depending on what you are changing, you may want to update the original offer, or remove the original offer and create a new one. -- **If you are changing authorizations only**: You can update your delegation by changing only the **authorizations** section of the ARM template.
+- **If you are changing authorizations only**: You can update your delegation by changing the **authorizations** section of the ARM template.
- **If you are changing the managing tenant**: You must create a new ARM template with a different **mspOfferName** than your previous offer. ## Update your ARM template To update your delegation, you will need to deploy an ARM template that includes the changes you'd like to make.
-If you are only updating authorizations (such as adding a new user group with a role you hadn't previously included, or changing the role for an existing user), you can use the same **mspOfferName** as in the [ARM template](onboard-customer.md#create-an-azure-resource-manager-template) that you used for the previous delegation. You can use your previous template as a starting point. Then, make the changes you need, such as replacing one Azure built-in role with another, or adding a completely new authorization to the template.
+If you are only updating authorizations (such as adding a new user group with a role you hadn't previously included, or changing the role for an existing user), you can use the same **mspOfferName** as in the [ARM template](onboard-customer.md#create-an-azure-resource-manager-template) that you used for the previous delegation. Use your previous template as a starting point. Then, make the changes you need, such as replacing one Azure built-in role with another, or adding a completely new authorization to the template.
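To make the shape of that change concrete, here's a sketch of an updated **authorizations** array expressed as Python data; the principal IDs and display names are placeholders, while the role definition GUIDs are the built-in Contributor and Reader roles.

```python
# Sketch: the "authorizations" value for the onboarding ARM template,
# keeping the same mspOfferName, with one role swapped and one entry added.
authorizations = [
    {
        "principalId": "<existing-group-object-id>",  # placeholder
        "principalIdDisplayName": "Tier 1 Support",
        # Swapped from Reader to Contributor (built-in role definition ID).
        "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
    },
    {
        "principalId": "<new-group-object-id>",  # placeholder
        "principalIdDisplayName": "Automation Accounts",
        # Reader (built-in role definition ID).
        "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7",
    },
]
```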
If you change the **mspOfferName**, this will be considered a new, separate offer. This is required if you are changing the managing tenant.
-It's not necessary to change the **mspOfferName** if the managing tenant remains the same. In most cases, we recommend having only one **mspOfferName** in use by the same customer and managing tenant. If you choose to change it anyway, be sure that the customer's previous delegation is removed before deploying the new one.
+You don't need to change the **mspOfferName** if the managing tenant remains the same. In most cases, we recommend having only one **mspOfferName** in use by the same customer and managing tenant. If you do choose to create a new **mspOfferName** for your template, be sure that the customer's previous delegation is removed before deploying the new one.
## Remove the previous delegation
If you are updating the offer to adjust authorizations only, and keeping the sam
Removing access to the delegation can be done by any user in the managing tenant who was granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) in the original delegation. If no user in your managing tenant has this role, you can ask the customer to [remove access to the offer in the Azure portal](view-manage-service-providers.md#remove-service-provider-offers). > [!TIP]
-> If you have removed the previous delegation following the steps above, and are still unable to deploy the new ARM template, you may need to [remove the registration definition completely](/powershell/module/az.managedservices/remove-azmanagedservicesdefinition). This can be done by any user with a role that has the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), in the customer tenant.
+> If you have removed the previous delegation but are unable to deploy the new ARM template, you may need to [remove the registration definition completely](/powershell/module/az.managedservices/remove-azmanagedservicesdefinition). This can be done by any user with a role that has the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), in the customer tenant.
## Deploy the ARM template
After the deployment has been completed, [confirm that it was successful](onboar
## Updating Managed Service offers
-If you onboarded your customer through a Managed Service offer published to Azure Marketplace, and you want to update authorizations, you can do so by [publishing a new version of your offer](../../marketplace/update-existing-offer.md) with the [authorizations](../../marketplace/create-managed-service-offer-plans.md#authorizations) that you want to use updated in the plan for that customer. The customer will then be able to [review the changes in the Azure portal and accept the new version](view-manage-service-providers.md#update-service-provider-offers).
+If you onboarded your customer through a Managed Service offer published to Azure Marketplace, and you want to update authorizations, you can do so by [publishing a new version of your offer](../../marketplace/update-existing-offer.md) with updates to the [authorizations](../../marketplace/create-managed-service-offer-plans.md#authorizations) in the plan for that customer. The customer will then be able to [review the changes in the Azure portal and accept the new version](view-manage-service-providers.md#update-service-provider-offers).
If you want to change the managing tenant, you will need to [create and publish a new Managed Service offer](publish-managed-services-offers.md) for the customer to accept. > [!IMPORTANT]
-> As mentioned earlier, we recommend that you avoid using multiple offers for the same customer and managing tenant. If you do publish a new offer for the same customer which uses the same managing tenant, be sure that the earlier offer is removed before the customer accepts the newer offer.
+> We recommend that you avoid using multiple offers between the same customer and managing tenant. If you publish a new offer for a current customer that uses the same managing tenant, be sure that the earlier offer is removed before the customer accepts the newer offer.
## Next steps
lighthouse View Service Provider Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-service-provider-activity.md
Title: Monitor service provider activity description: Customers can monitor logged activity to see actions performed by service providers through Azure Lighthouse. Previously updated : 12/16/2021 Last updated : 06/22/2022 # Monitor service provider activity
-Customers who have delegated subscriptions for [Azure Lighthouse](../overview.md) can [view Azure Activity log](../../azure-monitor/essentials/platform-logs-overview.md) data to see all actions taken. This gives customers full visibility into operations that service providers are performing, along with operations done by users within the customer's own Azure Active Directory (Azure AD) tenant.
+Customers who have delegated subscriptions for [Azure Lighthouse](../overview.md) can [view Azure Activity log](../../azure-monitor/essentials/activity-log.md) data to see all actions taken. This gives customers full visibility into operations that service providers are performing, along with operations done by users within the customer's own Azure Active Directory (Azure AD) tenant.
## View activity log data
-You can [view the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) from the **Monitor** menu in the Azure portal. To limit results to a specific subscription, use the filters to select a specific subscription. You can also [view and retrieve activity log events](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) programmatically.
+You can [view the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) from the **Monitor** menu in the Azure portal. To limit results to a specific subscription, use the filters to select a specific subscription. You can also [view and retrieve activity log events](../../azure-monitor/essentials/activity-log.md#other-methods-to-retrieve-activity-log-events) programmatically.
> [!NOTE] > Users in a service provider's tenant can view activity log results for a delegated subscription in a customer tenant if they were granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access) when that subscription was onboarded to Azure Lighthouse. In the activity log, you'll see the name of the operation and its status, along with the date and time it was performed. The **Event initiated by** column shows which user performed the operation, whether it was a user in a service provider's tenant acting through Azure Lighthouse, or a user in the customer's own tenant. Note that the name of the user is shown, rather than the tenant or the role that the user has been assigned for that subscription.
-Logged activity is available in the Azure portal for the past 90 days. To learn how to store this data for longer than 90 days, see [Collect and analyze Azure activity logs in Log Analytics workspace](../../azure-monitor/essentials/activity-log.md).
- > [!NOTE] > Users from the service provider appear in the activity log, but these users and their role assignments aren't shown in **Access Control (IAM)** or when retrieving role assignment info via APIs.
+Logged activity is available in the Azure portal for the past 90 days. You can also [store this data for a longer period](../../azure-monitor/essentials/activity-log.md#retention-period) if needed.
+ ## Set alerts for critical operations
-To stay aware of critical operations that service providers (or users in your own tenant) are performing, we recommend creating [activity log alerts](../../azure-monitor/alerts/activity-log-alerts.md). For example, you may want to track all administrative actions for a subscription, or be notified when any virtual machine in a particular resource group is deleted. When you create alerts, they will include actions performed by users in the customer's own tenant as well as in any managing tenants.
+To stay aware of critical operations that service providers (or users in your own tenant) are performing, we recommend creating [activity log alerts](../../azure-monitor/alerts/alerts-types.md#activity-log-alerts). For example, you may want to track all administrative actions for a subscription, or be notified when any virtual machine in a particular resource group is deleted. When you create alerts, they'll include actions performed by users in the customer's own tenant as well as in any managing tenants.
-For more information, see [Create and manage activity log alerts](../../azure-monitor/alerts/alerts-activity-log.md).
+For more information, see [Create, view, and manage activity log alerts](../../azure-monitor/alerts/alerts-activity-log.md).
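As a sketch of the API route, the following assumes the `azure-mgmt-monitor` Python package; the subscription ID, resource group, and action group ID are placeholders. It creates an alert for the virtual machine deletion scenario mentioned above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Fire whenever a virtual machine in the given resource group is deleted.
alert = {
    "location": "global",
    "scopes": [f"/subscriptions/{subscription_id}/resourceGroups/<rg-name>"],
    "condition": {
        "all_of": [
            {"field": "category", "equals": "Administrative"},
            {
                "field": "operationName",
                "equals": "Microsoft.Compute/virtualMachines/delete",
            },
        ]
    },
    "actions": {
        "action_groups": [{"action_group_id": "<action-group-resource-id>"}]
    },
    "enabled": True,
}

client.activity_log_alerts.create_or_update("<rg-name>", "vm-delete-alert", alert)
```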
## Create log queries Log queries can help you analyze your logged activity or focus on specific items. For example, perhaps an audit requires you to report on all administrative-level actions performed on a subscription. You can create a query to filter on only these actions and sort the results by user, date, or another value.
-For more information, see [Overview of log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
+For more information, see [Log queries in Azure Monitor](../../azure-monitor/logs/log-query-overview.md).
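For instance, the query below (run here with the same `azure-monitor-query` client used earlier; the workspace GUID is a placeholder) filters activity data to administrative actions and sorts by user and time. It assumes the activity log has been routed to a Log Analytics workspace, where it lands in the `AzureActivity` table.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Administrative actions only, grouped by who performed them, newest first.
query = """
AzureActivity
| where CategoryValue == "Administrative"
| project TimeGenerated, Caller, OperationNameValue, ActivityStatusValue
| sort by Caller asc, TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query=query,
    timespan=timedelta(days=30),
)
```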
## View user activity across domains
lighthouse Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/overview.md
Title: What is Azure Lighthouse? description: Azure Lighthouse lets service providers deliver managed services for their customers with higher automation and efficiency at scale. Previously updated : 11/02/2021 Last updated : 06/20/2022
Azure Lighthouse enables multi-tenant management with scalability, higher automation, and enhanced governance across resources.
-With Azure Lighthouse, service providers can deliver managed services using [comprehensive and robust tooling built into the Azure platform](concepts/architecture.md). Customers maintain control over who has access to their tenant, which resources they can access, and what actions can be taken. [Enterprise organizations](concepts/enterprise.md) managing resources across multiple tenants can also use Azure Lighthouse to streamline management tasks.
+With Azure Lighthouse, service providers can deliver managed services using [comprehensive and robust tooling built into the Azure platform](concepts/architecture.md). Customers maintain control over who has access to their tenant, which resources they can access, and what actions can be taken. [Enterprise organizations](concepts/enterprise.md) managing resources across multiple tenants can use Azure Lighthouse to streamline management tasks.
-[Cross-tenant management experiences](concepts/cross-tenant-management-experience.md) lets you work more efficiently with Azure services like [Azure Policy](how-to/policy-at-scale.md), [Microsoft Sentinel](how-to/manage-sentinel-workspaces.md), [Azure Arc](how-to/manage-hybrid-infrastructure-arc.md), and many more. Users can see what changes were made and by whom [in the activity log](how-to/view-service-provider-activity.md), which is stored in the customer's tenant and can be viewed by users in the managing tenant.
+[Cross-tenant management experiences](concepts/cross-tenant-management-experience.md) let you work more efficiently with Azure services such as [Azure Policy](how-to/policy-at-scale.md), [Microsoft Sentinel](how-to/manage-sentinel-workspaces.md), [Azure Arc](how-to/manage-hybrid-infrastructure-arc.md), and many more. Users can see what changes were made and by whom [in the activity log](how-to/view-service-provider-activity.md), which is stored in the customer's tenant and can be viewed by users in the managing tenant.
![Overview diagram of Azure Lighthouse](media/azure-lighthouse-overview.jpg)
With Azure Lighthouse, service providers can deliver managed services using [com
Azure Lighthouse helps service providers efficiently build and deliver managed services. Benefits include: - **Management at scale**: Customer engagement and life-cycle operations to manage customer resources are easier and more scalable. Existing APIs, management tools, and workflows can be used with delegated resources, including machines hosted outside of Azure, regardless of the regions in which they're located.-- **Greater visibility and control for customers**: Customers have precise control over the scopes they delegate for management and the permissions that are allowed. They can [audit service provider actions](how-to/view-service-provider-activity.md) and remove access completely at any time.
+- **Greater visibility and control for customers**: Customers have precise control over the scopes they delegate and the permissions that are allowed. They can [audit service provider actions](how-to/view-service-provider-activity.md) and remove access completely at any time.
- **Comprehensive and unified platform tooling**: Azure Lighthouse works with existing tools and APIs, [Azure managed applications](concepts/managed-applications.md), and partner programs like the [Cloud Solution Provider program (CSP)](concepts/cloud-solution-provider.md). This flexibility supports key service provider scenarios, including multiple licensing models such as EA, CSP and pay-as-you-go. You can integrate Azure Lighthouse into your existing workflows and applications, and track your impact on customer engagements by [linking your partner ID](how-to/partner-earned-credit.md). ## Capabilities
logic-apps Create Custom Built In Connector Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-custom-built-in-connector-standard.md
For more information, review the following documentation:
* [Visual Studio Code with the Azure Logic Apps (Standard) extension and other prerequisites installed](create-single-tenant-workflows-azure-portal.md#prerequisites). Your installation should already include the [NuGet package for Microsoft.Azure.Workflows.WebJobs.Extension](https://www.nuget.org/packages/Microsoft.Azure.Workflows.WebJobs.Extension/).
+ > [!NOTE]
+ >
+ > This authoring capability is currently available only in Visual Studio Code.
+ * An Azure Cosmos account, database, and container or collection. For more information, review [Quickstart: Create an Azure Cosmos account, database, container and items from the Azure portal](../cosmos-db/sql/create-cosmosdb-resources-portal.md). ## High-level steps
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
As you progress, you'll complete these high-level tasks:
| **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app named **Fabrikam-Workflows**. | |||||
-1. Before you continue making selections, under **Plan type**, select **Standard** so that you view only the settings that apply to the Standard plan-based logic app type. The **Plan type** property specifies the logic app type and billing model to use.
+1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Standard** so that you view only the settings that apply to the Standard plan-based logic app type. The **Plan type** property specifies the hosting plan and billing model to use for your logic app. For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md).
| Plan type | Description | |--|-|
- | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
| **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
|||
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. |
+ | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <p><p>To change the default pricing tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
+ |||||
+ 1. Now continue making the following selections: | Property | Required | Value | Description |
As you progress, you'll complete these high-level tasks:
| **Region** | Yes | <*Azure-region*> | The Azure datacenter region to use for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>- If you previously chose **Docker Container**, select your custom location from the **Region** list. <br><br>- If you want to deploy your app to an existing [App Service Environment v3 resource](../app-service/environment/overview.md), you can select that environment from the **Region** list. | |||||
+ > [!NOTE]
+ >
+ > If you select an Azure region that supports availability zone redundancy, the **Zone redundancy**
+ > section is enabled. This section offers the choice to enable availability zone redundancy
+ > for your logic app. However, currently supported Azure regions don't include **West US**,
+ > so you can ignore this section for this example. For more information, see
+ > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
+ When you're done, your settings look similar to this version: ![Screenshot that shows the Azure portal and "Create Logic App" page.](./media/create-single-tenant-workflows-azure-portal/create-logic-app-resource-portal.png)
As you progress, you'll complete these high-level tasks:
| Property | Required | Value | Description | |-|-|-|-|
- | **Storage type** | Yes | - **SQL and Azure Storage** <br>- **Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <p><p>- To deploy only to Azure, select **Azure Storage**. <p><p>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <p><p>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The ongoing workflow state, run history, and other runtime artifacts are stored in your SQL database. <p><p>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
+ | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <p><p>- To deploy only to Azure, select **Azure Storage**. <p><p>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <p><p>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The ongoing workflow state, run history, and other runtime artifacts are stored in your SQL database. <p><p>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
| **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <p><p>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <p><p>This example creates a storage account named `fabrikamstorageacct`. |
- | **Plan type** | Yes | <*hosting-plan*> | The hosting plan to use for deploying your logic app. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
- | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <p><p>This example uses the name `Fabrikam-Service-Plan`. |
- | **SKU and size** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <p><p>To change the default pricing tier, select **Change size**. You can then select other pricing tiers, based on the workload that you need. <p><p>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
||||| 1. Next, if your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app.
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
To create and manage a logic app resource using other tools, review these other
| **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). <br><br>This example creates a logic app named **My-First-Logic-App**. | |||||
-1. Before you continue making selections, under **Plan type**, select **Consumption** so that you view only the settings that apply to the Consumption plan-based logic app type. The **Plan type** property specifies the logic app type and billing model to use.
+1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** so that you view only the settings that apply to the Consumption plan-based logic app type. The **Plan type** property specifies the logic app type and billing model to use.
| Plan type | Description | |--|-|
- | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). After you select **Consumption**, the **Zone redundancy** section appears. This section offers the choice to enable availability zones for your Consumption logic app. In this example, keep **Enabled** as the setting value. For more information, see [Protect Consumption logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md). |
| **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
||| 1. Now continue making the following selections:
To create and manage a logic app resource using other tools, review these other
| **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. <p><p>Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. | ||||
+ > [!NOTE]
+ >
+ > If you selected an Azure region that supports availability zone redundancy, the **Zone redundancy**
+ > section is enabled. This preview section offers the choice to enable availability zone redundancy
+ > for your logic app. However, currently supported Azure regions don't include **West US**,
+ > so you can ignore this section for this example. For more information, see
+ > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
+ When you're done, your settings look similar to this version: ![Screenshot showing the Azure portal and logic app resource creation page with details for new logic app.](./media/quickstart-create-first-logic-app-workflow/create-logic-app-settings.png)
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
ms.suite: integration Previously updated : 05/02/2022 Last updated : 06/17/2022 #Customer intent: As a developer, I want to protect logic apps from regional failures by setting up availability zones.
-# Protect Consumption logic apps from region failures with zone redundancy and availability zones (preview)
+# Protect logic apps from region failures with zone redundancy and availability zones
In each Azure region, *availability zones* are physically separate locations that are tolerant to local failures. Such failures can range from software and hardware failures to events such as earthquakes, floods, and fires. These zones achieve tolerance through the redundancy and logical isolation of Azure services.
-To provide resiliency and distributed availability, at least three separate availability zones exist in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information about availability zones and zone redundancy, review [Azure regions and availability zones](../availability-zones/az-overview.md).
+To provide resiliency and distributed availability, at least three separate availability zones exist in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information about availability zone redundancy, review [Azure regions and availability zones](../availability-zones/az-overview.md).
-This article provides a brief overview about considerations for using availability zones in Azure Logic Apps and how to enable this capability for your Consumption logic app.
-
-> [!NOTE]
->
-> Standard logic apps that use [App Service Environment v3 (ASE v3)](../app-service/environment/overview-zone-redundancy.md)
-> support zone redundancy with availability zones, but only for built-in operations. Currently, support is unavailable
-> for Azure (managed) connectors.
+This article provides a brief overview, considerations, and information about how to enable availability zone redundancy in Azure Logic Apps.
## Considerations
-During preview, the following considerations apply:
+### [Standard](#tab/standard)
+
+Availability zone redundancy is available for Standard logic apps, which are powered by Azure Functions extensibility. For more information, review [Azure Functions support for availability zone redundancy](../azure-functions/azure-functions-az-redundancy.md#overview).
+
+* You can enable availability zone redundancy *only when you create* Standard logic apps, either in a [supported Azure region](../azure-functions/azure-functions-az-redundancy.md#requirements) or in an [App Service Environment v3 (ASE v3) - Windows plans only](../app-service/environment/overview-zone-redundancy.md). Currently, this capability supports only built-in connector operations, not Azure (managed) connector operations.
+
+* You can enable availability zone redundancy *only for new* Standard logic apps with workflows that run in single-tenant Azure Logic Apps. You can't enable availability zone redundancy for existing Standard logic app workflows.
+
+* You can enable availability zone redundancy *only at creation time using the Azure portal*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone redundancy.
+
+### [Consumption (preview)](#tab/consumption)
+
+Availability zone redundancy is currently in *preview* for Consumption logic apps, which run in multi-tenant Azure Logic Apps. During preview, the following considerations apply:
-* The following list includes the Azure regions where you can currently enable availability zones with the list expanding as available:
+* You can enable availability zone redundancy *only for new* Consumption logic app workflows that you create in the following Azure regions, which will expand as available:
* Australia East * Brazil South
During preview, the following considerations apply:
* West Europe * West US 3
-* Azure Logic Apps currently supports the option to enable availability zones *only for new Consumption logic app workflows* that run in multi-tenant Azure Logic Apps.
+ You have to create these Consumption logic apps *using the Azure portal*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone redundancy.
- * This option is available *only when you create a Consumption logic app using the Azure portal*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zones.
+* You can't enable availability zone redundancy for existing Consumption logic app workflows. Any existing Consumption logic app workflows are unaffected until mid-May 2022.
- * This option is unavailable for existing Consumption logic app workflows and for any Standard logic app workflows.
+ However, after this time, the Azure Logic Apps team will gradually start to move existing Consumption logic app workflows towards using availability zone redundancy, several Azure regions at a time. The option to enable availability zone redundancy on new Consumption logic app workflows remains available during this time.
-* Existing Consumption logic app workflows are unaffected until mid-May 2022. After this time, the Azure Logic Apps team will gradually start to move existing Consumption logic app workflows towards using availability zones, several Azure regions at a time. The option to enable availability zones on new Consumption logic app workflows remains available during this time.
-
-* If you use a firewall or restricted environment, you have to allow traffic through all the IP addresses required by Azure Logic Apps, managed connectors, and custom connectors in the Azure region where you create your logic app workflows. New IP addresses that support availability zones are already published for Azure Logic Apps, managed connectors, and custom connectors. For more information, review [Prerequisites](#prerequisites).
+ ## Limitations
With HTTP-based actions, certificates exported or created with AES256 encryption
* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* If you have a firewall or restricted environment, you have to allow traffic through all the IP addresses required by Azure Logic Apps, managed connectors, and any custom connectors in the Azure region where you create your logic app workflows. For more information, review the following documentation:
+* If you have a firewall or restricted environment, you have to allow traffic through all the IP addresses required by Azure Logic Apps, managed connectors, and any custom connectors in the Azure region where you create your logic app workflows. New IP addresses that support availability zone redundancy are already published for Azure Logic Apps, managed connectors, and custom connectors. For more information, review the following documentation:
* [Firewall configuration: IP addresses and service tags](logic-apps-limits-and-config.md#firewall-ip-configuration)
With HTTP-based actions, certificates exported or created with AES256 encryption
* [Outbound IP addresses for managed connectors and custom connectors](/connectors/common/outbound-ip-addresses)
-## Set up availability zones for Consumption logic app workflows
+## Enable availability zones
+
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), start creating a Standard logic app. On the **Create Logic App** page, stop after you select **Standard** as the plan type for your logic app.
+
+ ![Screenshot showing Azure portal, "Create Logic App" page, logic app details, and the "Standard" plan type selected.](./media/set-up-zone-redundancy-availability-zones/select-standard-plan.png)
+
+ For a tutorial, review [Create Standard logic app workflows with single-tenant Azure Logic Apps in the Azure portal](create-single-tenant-workflows-azure-portal.md).
+
+ After you select **Standard**, the **Zone redundancy** section and options become available.
+
+1. Under **Zone redundancy**, select **Enabled**.
+
+ At this point, your logic app creation experience appears similar to this example:
+
+ ![Screenshot showing Azure portal, "Create Logic App" page, Standard logic app details, and the "Enabled" option under "Zone redundancy" selected.](./media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy-standard.png)
+
+1. Finish creating your logic app.
+
+1. If you use a firewall and haven't set up access for traffic through the required IP addresses, make sure to complete that [requirement](#prerequisites).
+
+### [Consumption (preview)](#tab/consumption)
1. In the [Azure portal](https://portal.azure.com), start creating a Consumption logic app. On the **Create Logic App** page, stop after you select **Consumption** as the plan type for your logic app.
With HTTP-based actions, certificates exported or created with AES256 encryption
At this point, your logic app creation experience appears similar to this example:
- ![Screenshot showing Azure portal, "Create Logic App" page, logic app details, and the "Enabled" option under "Zone redundancy" selected.](./media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy.png)
+ ![Screenshot showing Azure portal, "Create Logic App" page, Consumption logic app details, and the "Enabled" option under "Zone redundancy" selected.](./media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy-consumption.png)
1. Finish creating your logic app. 1. If you use a firewall and haven't set up access for traffic through the required IP addresses, make sure to complete that [requirement](#prerequisites). ++ ## Next steps * [Business continuity and disaster recovery for Azure Logic Apps](business-continuity-disaster-recovery-guidance.md)
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 06/13/2022 Last updated : 06/10/2022
The single-tenant model and **Logic App (Standard)** resource type include many
* Create logic apps and their workflows from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * More managed connectors are now available as built-in operations and run similarly to other built-in operations, such as Azure Functions. Built-in operations run natively on the single-tenant Azure Logic Apps runtime. For example, new built-in operations include Azure Service Bus, Azure Event Hubs, SQL Server, MQ, DB2, and IBM Host File.
+ * More managed connectors are now available as built-in connectors in Standard logic app workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also [*service provider-based* connectors](custom-connector-overview.md#service-provider-interface-implementation). For a list, review the [Built-in connectors for Standard logic apps](#built-connectors-standard) section later in this article.
- > [!NOTE]
- > For the built-in SQL Server version, only the **Execute Query** action can directly connect to Azure
- > virtual networks without using the [on-premises data gateway](logic-apps-gateway-connection.md).
-
- * You can create your own built-in connectors for any service that you need by using the [single-tenant Azure Logic Apps extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Similar to built-in connectors such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime. However, custom built-in connectors aren't similar to [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported.
-
- The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+ * You can create your own custom built-in connectors for any service that you need by using the single-tenant Azure Logic Apps extensibility framework. Similar to built-in connectors such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime. However, custom built-in connectors aren't similar to [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported. For more information, review [Custom connector overview](custom-connector-overview.md#custom-connector-standard) and [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md).
* You can use the following actions for Liquid Operations and XML Operations without an integration account. These operations include the following actions:
The single-tenant model and **Logic App (Standard)** resource type include many
* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Standard)** resource. For this task, [follow the same steps for the **Logic Apps (Consumption)** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level.
+<a name="built-connectors-standard"></a>
+
+## Built-in connectors for Standard
+
+A Standard logic app workflow has many of the same built-in connectors as a Consumption logic app workflow, but not all. Conversely, a Standard logic app workflow has many built-in connectors that aren't available in a Consumption logic app workflow.
+
+For example, a Standard logic app workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption logic app workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management, Azure App Services, and Batch are available.
+
+In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md).
+ <a name="limited-unavailable-unsupported"></a> ## Changed, limited, unavailable, or unsupported capabilities
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Task | Model algorithms | String literal syntax<br> ***`default_model`\**** den
|-|- Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-weighted models for mobile applications <br> **ResNet**: Residual networks<br> **ResNeSt**: Split attention networks<br> **SE-ResNeXt50**: Squeeze-and-Excitation networks<br> **ViT**: Vision transformer networks| `mobilenetv2` <br>`resnet18` <br>`resnet34` <br> `resnet50` <br> `resnet101` <br> `resnet152` <br> `resnest50` <br> `resnest101` <br> `seresnext` <br> `vits16r224` (small) <br> ***`vitb16r224`\**** (base) <br>`vitl16r224` (large)| Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
-Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn` <br>`maskrcnn_resnet50_fpn`
+Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn`
In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
transformations:
encoding: 'ascii'
```
-Therefore, the MLTable folder would have the MLTable deinifition file plus the data file (the bank_marketing_train_data.csv file in this case).
+Therefore, the MLTable folder would have the MLTable definition file plus the data file (the bank_marketing_train_data.csv file in this case).
The following shows two ways of creating an MLTable.

- A. Providing your training data and MLTable definition file from your local folder, which will be automatically uploaded into the cloud (default Workspace Datastore)
Automated ML offers options for you to monitor and evaluate your training result
* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
-From Azure Machine Learning UI at the model's page you can also view the hyperparameters used when training a particular a particular model and also view and customize the internal model's training code used.
+From the Azure Machine Learning UI, on the model's page, you can also view the hyperparameters used to train a particular model, and view and customize the model's internal training code.
## Register and deploy models
machine-learning How To Troubleshoot Protobuf Descriptor Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-protobuf-descriptor-error.md
+
+ Title: "Troubleshoot `descriptors cannot not be created directly`"
+
+description: Troubleshooting steps when you get the "descriptors cannot not be created directly" message.
++++++ Last updated : 06/22/2022++
+# Troubleshoot `descriptors cannot not be created directly` error
+
+When using Azure Machine Learning, you may receive the following error:
+
+```
+TypeError: Descriptors cannot not be created directly. If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
+
+If you cannot immediately regenerate your protos, some other possible workarounds are:
+ 1. Downgrade the protobuf package to 3.20.x or lower.
+ 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
+```
+
+You may notice this error specifically when using AutoML.
+
+## Cause
+
+This problem is caused by breaking changes introduced in protobuf 4.0.0. For more information, see [https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates](https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates).
+
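+To quickly confirm whether an environment has picked up the breaking 4.x release, check the installed protobuf version directly. A minimal check, assuming the standard PyPI package (`protobuf`, imported as `google.protobuf`):
+
+```bash
+# Print the installed protobuf version; 4.x indicates the breaking release
+python -c "import google.protobuf; print(google.protobuf.__version__)"
+```
+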
+## Resolution
+
+For a local development environment or compute instance, install the Azure Machine Learning SDK version 1.42.0.post1 or greater.
+
+```bash
+pip install "azureml-sdk[automl,explain,notebooks]>=1.42.0"
+```
+
+For more information on updating an Azure Machine Learning environment (for training or deployment), see the following articles:
+
+* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
+* [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
+* [Create & manage environments (CLI v2)](how-to-manage-environments-v2.md#update)
+
+To verify the version of your installed SDK, use the following command:
+
+```bash
+pip show azureml-core
+```
+
+This command should return information similar to `Version: 1.42.0.post1`.
+
+> [!TIP]
+> If you can't upgrade your Azure Machine Learning SDK installation, you can pin the protobuf version in your environment to `3.20.1`. The following example is a `conda.yml` file that demonstrates how to pin the version:
+>
+> ```yml
+> name: model-env
+> channels:
+> - conda-forge
+> dependencies:
+> - python=3.8
+> - numpy=1.21.2
+> - pip=21.2.4
+> - scikit-learn=0.24.2
+> - scipy=1.7.1
+> - pandas>=1.1,<1.2
+> - pip:
+> - inference-schema[numpy-support]==1.3.0
+> - xlrd==2.0.1
+> - mlflow==1.26.0
+> - azureml-mlflow==1.41.0
+> - protobuf==3.20.1
+> ```
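+
+If you manage packages with pip rather than conda, the equivalent pin is a one-liner; a minimal sketch:
+
+```bash
+# Pin protobuf below the 4.x breaking release
+pip install protobuf==3.20.1
+```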
+
+## Next steps
+
+For more information on the breaking changes in protobuf 4.0.0, see [https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates](https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates).
+
+For more information on updating an Azure Machine Learning environment (for training or deployment), see the following articles:
+
+* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
+* [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
+* [Create & manage environments (CLI v2)](how-to-manage-environments-v2.md#update)
machine-learning How To Troubleshoot Serialization Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-serialization-error.md
Last updated 06/15/2022
# Troubleshoot "cannot import name 'SerializationError'"
-When using Azure Machine Learning, you may receive the error "Cannot import name 'SerializationError'". This error may occur when using an Azure Machine Learning environment. For example, when submitting a training job.
+When using Azure Machine Learning, you may receive one of the following errors:
+
+* `cannot import name 'SerializationError'`
+* `cannot import name 'SerializationError' from 'azure.core.exceptions'`
+
+This error may occur when using an Azure Machine Learning environment. For example, when submitting a training job or using AutoML.
## Cause
This problem is caused by a bug in the Azure Machine Learning SDK version 1.42.0
## Resolution
-Update your Azure Machine Learning environment to use SDK version 1.42.0.post1 or greater.
+Update the affected environment to use SDK version 1.42.0.post1 or greater. For a local development environment or compute instance, use the following command:
+
+```bash
+pip install "azureml-sdk[automl,explain,notebooks]>=1.42.0"
+```
+
+For more information on updating an Azure Machine Learning environment (for training or deployment), see the following articles:
+
+* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment)
+* [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
+* [Create & manage environments (CLI v2)](how-to-manage-environments-v2.md#update)
+
+To verify the version of your installed SDK, use the following command:
+
+```bash
+pip show azureml-core
+```
+
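+To confirm that the affected import resolves after updating, you can try it directly. A minimal check, assuming the `azure-core` dependency was updated along with the SDK:
+
+```bash
+# Succeeds only if azure.core.exceptions exposes SerializationError again
+python -c "from azure.core.exceptions import SerializationError; print('import OK')"
+```
+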
+## Next steps
-For more information on updating an environment, see the following articles:
+For more information on updating an Azure Machine Learning environment (for training or deployment), see the following articles:
* [Manage environments in studio](how-to-manage-environments-in-studio.md#rebuild-an-environment) * [Create & use software environments (SDK v1)](how-to-use-environments.md#update-an-existing-environment)
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
Previously updated : 10/21/2021 Last updated : 06/21/2022 # Trigger applications, processes, or CI/CD workflows based on Azure Machine Learning events (preview)
Azure Event Grid allows customers to build de-coupled message handlers, which ca
1. From the left bar, select __Events__ and then select **Event Subscriptions**.
- ![select-events-in-workspace.png](./media/how-to-use-event-grid/select-event.png)
+ :::image type="content" source="./media/how-to-use-event-grid/select-event.png" alt-text="Screenshot showing the Event Subscription selection.":::
1. Select the event type to consume. For example, the following screenshot has selected __Model registered__, __Model deployed__, __Run completed__, and __Dataset drift detected__:
- ![add-event-type](./media/how-to-use-event-grid/add-event-type-updated.png)
+ :::image type="content" source="./media/how-to-use-event-grid/add-event-type-updated.png" alt-text="Screenshot of the Create Event Subscription form.":::
1. Select the endpoint to publish the event to. In the following screenshot, __Event hub__ is the selected endpoint:
Use [Azure Logic Apps](../logic-apps/index.yml) to configure emails for all your
1. In the Azure portal, go to your Azure Machine Learning workspace and select the events tab from the left bar. From here, select __Logic apps__.
- ![Screenshot shows a Machine Learning workspace Events page with Logic Apps.](./media/how-to-use-event-grid/select-logic-ap.png)
+ :::image type="content" source="./media/how-to-use-event-grid/select-logic-ap.png" alt-text="Screenshot showing the Logic Apps selection.":::
1. Sign into the Logic App UI and select Machine Learning service as the topic type.
Use [Azure Logic Apps](../logic-apps/index.yml) to configure emails for all your
1. Select which event(s) to be notified for. For example, the following screenshot __RunCompleted__.
- ![Screenshot shows the When a resource event occurs dialog box with an event type selected.](./media/how-to-use-event-grid/select-event-runcomplete.png)
+ :::image type="content" source="./media/how-to-use-event-grid/select-event-runcomplete.png" alt-text="Screenshot showing the Machine Learning service as the resource type.":::
1. Next, add a step to consume this event and search for email. There are several different mail accounts you can use to receive events. You can also configure conditions on when to send an email alert.
Before you begin, perform the following actions:
In this example, a simple Data Factory pipeline is used to copy files into a blob store and run a published Machine Learning pipeline. For more information on this scenario, see how to set up a [Machine Learning step in Azure Data Factory](../data-factory/transform-data-machine-learning-service.md)
-![Screenshot shows the Training Pipeline in Factory Resources with Copy data1 feeding M L Execute Pipeline1.](./media/how-to-use-event-grid/adf-mlpipeline-stage.png)
1. Start with creating the logic app. Go to the [Azure portal](https://portal.azure.com), search for Logic Apps, and select create.
In this example, a simple Data Factory pipeline is used to copy files into a blo
1. Login and fill in the details for the event. Set the __Resource Name__ to the workspace name. Set the __Event Type__ to __DatasetDriftDetected__.
- ![Screenshot shows the When a resource event occurs with an Event Type Item selected.](./media/how-to-use-event-grid/login-and-add-event.png)
+ :::image type="content" source="./media/how-to-use-event-grid/login-and-add-event.png" alt-text="Screenshot showing the data drift event type item.":::
1. Add a new step, and search for __Azure Data Factory__. Select __Create a pipeline run__.
In this example, a simple Data Factory pipeline is used to copy files into a blo
![Screenshot shows events with the Logic App highlighted.](./media/how-to-use-event-grid/show-logic-app-webhook.png)
-Now the data factory pipeline is triggered when drift occurs. View details on your data drift run and machine learning pipeline on the [new workspace portal](https://ml.azure.com).
+Now the data factory pipeline is triggered when drift occurs. View details on your data drift run and machine learning pipeline in [Azure Machine Learning studio](https://ml.azure.com).
-![Screenshot shows pipeline endpoints.](./media/how-to-use-event-grid/view-in-workspace.png)
### Example: Deploy a model based on tags
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
1. To browse through the cluster nodes, navigate to the cluster resource and open the **Data Center** pane to view them:
- :::image type="content" source="./media/create-cluster-portal/datacenter-1.png" alt-text="View datacenter nodes." lightbox="./media/create-cluster-portal/datacenter-1.png" border="true":::
+ :::image type="content" source="./media/create-cluster-portal/datacenter.png" alt-text="Screenshot of datacenter nodes." lightbox="./media/create-cluster-portal/datacenter.png" border="true":::
+
+## Scale a datacenter
+
+1. Now that you have deployed a cluster with a single data center, you can scale the nodes up or down by highlighting the data center, and selecting the `Scale` button:
+
+ :::image type="content" source="./media/create-cluster-portal/datacenter-scale-1.png" alt-text="Screenshot of scaling datacenter nodes." lightbox="./media/create-cluster-portal/datacenter-scale-1.png" border="true":::
+
+1. Next, move the slider to the desired number, or just edit the value. When finished, hit `Scale`.
+
+ :::image type="content" source="./media/create-cluster-portal/datacenter-scale-2.png" alt-text="Screenshot of selecting number of datacenter nodes." lightbox="./media/create-cluster-portal/datacenter-scale-2.png" border="true":::
+
+ > [!NOTE]
+ > The length of time it takes for nodes to scale depends on various factors and may take several minutes. When Azure notifies you that the scale operation has completed, this does not mean that all your nodes have joined the Cassandra ring. Nodes will be fully commissioned when they all display a status of "healthy", and the datacenter status reads "succeeded".
## Add a datacenter

1. To add another datacenter, click the add button in the **Data Center** pane:
- :::image type="content" source="./media/create-cluster-portal/add-datacenter.png" alt-text="Click on add datacenter." lightbox="./media/create-cluster-portal/add-datacenter.png" border="true":::
+ :::image type="content" source="./media/create-cluster-portal/add-datacenter.png" alt-text="Screenshot of adding a datacenter." lightbox="./media/create-cluster-portal/add-datacenter.png" border="true":::
> [!WARNING] > If you are adding a datacenter in a different region, you will need to select a different virtual network. You will also need to ensure that this virtual network has connectivity to the primary region's virtual network created above (and any other virtual networks that are hosting datacenters within the managed instance cluster). Take a look at [this article](../virtual-network/tutorial-connect-virtual-networks-portal.md#peer-virtual-networks) to learn how to peer virtual networks using Azure portal. You also need to make sure you have applied the appropriate role to your virtual network before attempting to deploy a managed instance cluster, using the below CLI command.
managed-instance-apache-cassandra Dba Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/dba-commands.md
Both will return JSON of the following form:
"exitCode": 0 } ```
+In most cases, you might only need the `commandOutput` or the `exitCode`. Here's an example of getting only the `commandOutput`:
+
+```azurecli-interactive
+ az managed-cassandra cluster invoke-command --query "commandOutput" --resource-group $resourceGroupName --cluster-name $clusterName --host $host --command-name nodetool --arguments getstreamthroughput=""
+```
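+
+Similarly, if you only need the exit code, for example to script a health check, the same `--query` flag applies. A sketch assuming the same variables as above (`status` is just an illustrative nodetool subcommand):
+
+```azurecli-interactive
+  az managed-cassandra cluster invoke-command --query "exitCode" --resource-group $resourceGroupName --cluster-name $clusterName --host $host --command-name nodetool --arguments status=""
+```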
## How to run an `sstable` command
For more information on each command, see https://cassandra.apache.org/doc/lates
* [Create a managed instance cluster from the Azure portal](create-cluster-portal.md) * [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
-* [Management operations in Azure Managed Instance for Apache Cassandra](management-operations.md)
+* [Management operations in Azure Managed Instance for Apache Cassandra](management-operations.md)
marketplace Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics.md
description: Access analytic reports to monitor sales, evaluate performance, and
--++ Previously updated : 06/01/2021 Last updated : 06/21/2022 # Access analytic reports for the commercial marketplace in Partner Center
marketplace Azure Container Plan Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-availability.md
description: Set plan availability for an Azure Container offer in Microsoft App
-- Previously updated : 04/21/2021++ Last updated : 6/20/2022 # Set plan availability for an Azure Container offer
marketplace Downloads Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/downloads-dashboard.md
description: Learn how to access download requests for your marketplace offers.
--++ Last updated 09/27/2021
marketplace Iot Edge Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-checklist.md
Previously updated : 05/21/2021 Last updated : 06/20/2022 # Pre-certification checklist for IoT Edge modules
Verify the following:
- [Deploy modules from the commercial marketplace](../iot-edge/how-to-deploy-modules-portal.md#deploy-from-azure-marketplace) - [Publish the Edge Module in Partner Center](./iot-edge-offer-setup.md)-- [Deploy IoT Edge Module](../iot-edge/quickstart-linux.md)
+- [Deploy IoT Edge Module](../iot-edge/quickstart-linux.md)
marketplace Iot Edge Plan Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-plan-availability.md
Previously updated : 05/21/2021 Last updated : 06/20/2022 # Set plan availability for an IoT Edge Module offer
marketplace Iot Edge Plan Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-plan-listing.md
Previously updated : 05/21/2021 Last updated : 06/20/2022 # Set up plan listing details for an IoT Edge Module offer
marketplace Iot Edge Plan Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-plan-setup.md
Previously updated : 05/21/2021 Last updated : 6/20/2022 # Set up plans for an IoT Edge Module offer
marketplace Iot Edge Plan Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-plan-technical-configuration.md
description: Set plan technical configuration for an IoT Edge Module offer on Az
-+ Previously updated : 05/21/2021 Last updated : 6/20/2022 # Set plan technical configuration for an IoT Edge Module offer
marketplace Iot Edge Preview Audience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-preview-audience.md
Previously updated : 05/21/2021 Last updated : 6/20/2022 # Set the preview audience for an IoT Edge Module offer
marketplace Iot Edge Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-properties.md
Previously updated : 05/21/2021 Last updated : 6/20/2022 # Configure IoT Edge Module offer properties
marketplace Iot Edge Technical Asset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-technical-asset.md
Previously updated : 05/21/2021 Last updated : 06/20/2022 # Prepare IoT Edge module technical assets
marketplace Marketplace Apis Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-apis-guide.md
description: Align your business with our eCommerce platform (Azure Marketplace)
-- Previously updated : 05/13/2021++ Last updated : 06/21/2022 # Align your business with our e-commerce platform
The activities below are not sequential. The activity you use is dependent on yo
| <center>Activity | ISV sales activities | Corresponding Marketplace API | Corresponding Marketplace UI | | | | | |
-| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
+| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
| <center>**2. Demand Generation**<br><img src="medi)<br>[Co-Sell Connector for SalesForce CRM](/partner-center/connector-salesforce)<br>[Co-Sell Connector for Dynamics 365 CRM](/partner-center/connector-dynamics) | Product Promotion<br>Lead nurturing<br>Eval, trial & PoC<br>Azure Marketplace and AppSource<br>PC Marketplace Insights<br>PC Co-Sell Opportunities | | <center>**3. Negotiation and Quote Creation**<br><img src="medi)<br>[Partner Center '7' API Family](/partner-center/) | T&Cs<br>Pricing<br>Discount approvals<br>Final quote<br>PC → Plans (public or private) | | <center>**4. Sale**<br><img src="medi)<br>[Reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing<br>Revenue Recognition<br>Invoicing<br>Billing<br>Azure portal / Admin Center<br>PC Marketplace Rewards<br>PC Payouts Reports<br>PC Marketplace Analytics<br>PC Co-Sell Closing |
marketplace Pc Saas Fulfillment Subscription Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-subscription-api.md
Returns the list of all existing subscriptions for all offers made by this publi
"endDate": "2020-04-30", "termUnit": "P1Y" },
- "autoRenew": false
+ "autoRenew": false,
"allowedCustomerOperations": ["Read"], "sessionMode": "None", "isFreeTrial": false,
Response body example:
{ "planId": "Platinum001", "displayName": "Private platinum plan for Contoso", // display name of the plan as it appears in the marketplace
- "isPrivate": true //true or false
+ "isPrivate": true, //true or false
"description": "plan description", "minQuantity": 5, "maxQuantity": 100,
Response body example:
{ "planId": "gold", "displayName": "Gold plan for Contoso",
- "isPrivate": false //true or false,
+ "isPrivate": false, //true or false
"description": "gold plan details.", "minQuantity": 1, "maxQuantity": 5,
marketplace Policies Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/policies-terms.md
description: Microsoft commercial marketplace policies and terms apply to all pu
-- Previously updated : 04/16/2021++ Last updated : 06/21/2022 # Commercial marketplace policies and terms
The Microsoft Publisher Agreement describes the relationship for publishing offe
## Next steps -- [What is the Microsoft commercial marketplace?](overview.md)
+- [What is the Microsoft commercial marketplace?](overview.md)
marketplace Power Bi App Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-app-offer-listing.md
description: Configure Power BI app offer listing details on Microsoft AppSource
-- Previously updated : 05/26/2021++ Last updated : 6/20/2022 # Configure Power BI app offer listing details
marketplace Power Bi App Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-app-technical-configuration.md
description: Set up Power BI app offer technical configuration on Microsoft AppS
-- Previously updated : 05/26/2021++ Last updated : 06/20/2022 # Set up Power BI app offer technical configuration
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
The first step of migration is to set up the replication appliance. To set up th
8. Copy the appliance setup file and key file to the Windows Server 2016 or Windows Server 2012 AWS VM you created for the replication appliance. 9. Run the replication appliance setup file, as described in the next procedure.
- 9.1. Under **Before You Begin**, select **Install the configuration server and process server**, and then select **Next**.
- 9.2 In **Third-Party Software License**, select **I accept the third-party license agreement**, and then select **Next**.
- 9.3 In **Registration**, select **Browse**, and then go to where you put the vault registration key file. Select **Next**.
- 9.4 In **Internet Settings**, select **Connect to Azure Site Recovery without a proxy server**, and then select **Next**.
- 9.5 The **Prerequisites Check** page runs checks for several items. When it's finished, select **Next**.
- 9.6 In **MySQL Configuration**, provide a password for the MySQL DB, and then select **Next**.
- 9.7 In **Environment Details**, select **No**. You don't need to protect your VMs. Then, select **Next**.
- 9.8 In **Install Location**, select **Next** to accept the default.
- 9.9 In **Network Selection**, select **Next** to accept the default.
- 9.10 In **Summary**, select **Install**.
- 9.11 **Installation Progress** shows you information about the installation process. When it's finished, select **Finish**. A window displays a message about a reboot. Select **OK**.
- 9.12 Next, a window displays a message about the configuration server connection passphrase. Copy the passphrase to your clipboard and save the passphrase in a temporary text file on the source VMs. You'll need this passphrase later, during the mobility service installation process.
+ 1. Under **Before You Begin**, select **Install the configuration server and process server**, and then select **Next**.
+ 2. In **Third-Party Software License**, select **I accept the third-party license agreement**, and then select **Next**.
+ 3. In **Registration**, select **Browse**, and then go to where you put the vault registration key file. Select **Next**.
+ 4. In **Internet Settings**, select **Connect to Azure Site Recovery without a proxy server**, and then select **Next**.
+ 5. The **Prerequisites Check** page runs checks for several items. When it's finished, select **Next**.
+ 6. In **MySQL Configuration**, provide a password for the MySQL DB, and then select **Next**.
+ 7. In **Environment Details**, select **No**. You don't need to protect your VMs. Then, select **Next**.
+ 8. In **Install Location**, select **Next** to accept the default.
+ 9. In **Network Selection**, select **Next** to accept the default.
+ 10. In **Summary**, select **Install**.
+ 11. **Installation Progress** shows you information about the installation process. When it's finished, select **Finish**. A window displays a message about a reboot. Select **OK**.
+ 12. Next, a window displays a message about the configuration server connection passphrase. Copy the passphrase to your clipboard and save the passphrase in a temporary text file on the source VMs. You'll need this passphrase later, during the mobility service installation process.
10. After the installation completes, the Appliance configuration wizard will be launched automatically (You can also launch the wizard manually by using the cspsconfigtool shortcut that is created on the desktop of the appliance). In this tutorial, we'll be manually installing the Mobility Service on source VMs to be replicated, so create a dummy account in this step and proceed. You can provide the following details for creating the dummy account - "guest" as the friendly name, "username" as the username, and "password" as the password for the account. You will be using this dummy account in the Enable Replication stage.
A Mobility service agent must be installed on the source AWS VMs to be migrated.
6. In **Virtual Machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**. 7. Check each VM you want to migrate. Then click **Next: Target settings**.
- ![Select VMs](./media/tutorial-migrate-physical-virtual-machines/select-vms.png)
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/select-vms-inline.png" alt-text="Screenshot on selecting VMs." lightbox="./media/tutorial-migrate-physical-virtual-machines/select-vms-expanded.png":::
8. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. 9. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
A Mobility service agent must be installed on the source AWS VMs to be migrated.
- **Availability Zone**: Specify the Availability Zone to use. - **Availability Set**: Specify the Availability Set to use.
-![Compute settings](./media/tutorial-migrate-physical-virtual-machines/compute-settings.png)
- 15. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**. - You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration.
- ![Disk settings](./media/tutorial-migrate-physical-virtual-machines/disks.png)
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/disks-inline.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-physical-virtual-machines/disks-expanded.png":::
+
+1. In **Tags**, choose to add tags to your Virtual machines, Disks, and NICs.
+
+ :::image type="content" source="./media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot shows the tags tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-vmware/tags-expanded.png":::
16. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
A Mobility service agent must be installed on the source GCP VMs to be migrated.
6. In **Virtual Machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**. 7. Check each VM you want to migrate. Then click **Next: Target settings**.
- ![Select VMs](./media/tutorial-migrate-physical-virtual-machines/select-vms.png)
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/select-vms-inline.png" alt-text="Screenshot on selecting VMs." lightbox="./media/tutorial-migrate-physical-virtual-machines/select-vms-expanded.png":::
8. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. 9. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
A Mobility service agent must be installed on the source GCP VMs to be migrated.
- **Availability Zone**: Specify the Availability Zone to use. - **Availability Set**: Specify the Availability Set to use.
-![Compute settings](./media/tutorial-migrate-physical-virtual-machines/compute-settings.png)
- 15. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**. - You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration.
- ![Disk settings](./media/tutorial-migrate-physical-virtual-machines/disks.png)
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/disks-inline.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-physical-virtual-machines/disks-expanded.png":::
+
+1. In **Tags**, choose to add tags to your Virtual machines, Disks, and NICs.
+
+ :::image type="content" source="./media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot shows the tags tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-vmware/tags-expanded.png":::
16. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
ms. Previously updated : 03/18/2021 Last updated : 06/20/2022 # Migrate Hyper-V VMs to Azure
-This article shows you how to migrate on-premises Hyper-V VMs to Azure with the [Azure Migrate:Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool.
+This article shows you how to migrate on-premises Hyper-V VMs to Azure with the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool.
This tutorial is the third in a series that demonstrates how to assess and migrate machines to Azure.
This tutorial is the third in a series that demonstrates how to assess and migra
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Add the Azure Migrate:Server Migration tool.
+> * Add the Azure Migrate: Server Migration tool.
> * Discover VMs you want to migrate. > * Start replicating VMs. > * Run a test migration to make sure everything's working as expected.
Before you begin this tutorial, you should:
## Download the provider
-For migrating Hyper-V VMs, Azure Migrate:Server Migration installs software providers (Microsoft Azure Site Recovery provider and Microsoft Azure Recovery Service agent) on Hyper-V Hosts or cluster nodes. Note that the [Azure Migrate appliance](migrate-appliance.md) isn't used for Hyper-V migration.
+For migrating Hyper-V VMs, Azure Migrate: Server Migration installs software providers (Microsoft Azure Site Recovery provider and Microsoft Azure Recovery Service agent) on Hyper-V Hosts or cluster nodes. Note that the [Azure Migrate appliance](migrate-appliance.md) isn't used for Hyper-V migration.
1. In the Azure Migrate project > **Servers**, in **Azure Migrate: Server Migration**, click **Discover**. 1. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. 1. In **Target region**, select the Azure region to which you want to migrate the machines. 1. Select **Confirm that the target region for migration is region-name**. 1. Click **Create resources**. This creates an Azure Site Recovery vault in the background.
- - If you've already set up migration with Azure Migrate Server Migration, this option won't appear since resources were set up previously.
+ - If you've already set up migration with Azure Migrate: Server Migration, this option won't appear since resources were set up previously.
- You can't change the target region for this project after clicking this button. - All subsequent migrations are to this region. 1. In **Prepare Hyper-V host servers**, download the Hyper-V Replication provider, and the registration key file.
- - The registration key is needed to register the Hyper-V host with Azure Migrate Server Migration.
+ - The registration key is needed to register the Hyper-V host with Azure Migrate: Server Migration.
- The key is valid for five days after you generate it. ![Download provider and key](./media/tutorial-migrate-hyper-v/download-provider-hyper-v.png)
Run the following commands on each host, as described below:
1. Register the Hyper-V host to Azure Migrate.
+ > [!NOTE]
+ > If your Hyper-V host was previously registered with another Azure Migrate project that you are no longer using or have deleted, you'll need to de-register it from that project and register it in the new one. Follow the [Remove servers and disable protection](https://docs.microsoft.com/azure/site-recovery/site-recovery-manage-registration-and-protection?WT.mc_id=modinfra-39236-thmaure#unregister-a-connected-configuration-server) guide to do so.
+ ``` "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Credentials <key file path> ```
After installing the provider on hosts, go to the Azure portal and in **Discover
![Finalize registration](./media/tutorial-migrate-hyper-v/finalize-registration.png)
-It can take up to 15 minutes after finalizing registration until discovered VMs appear in Azure Migrate Server Migration. As VMs are discovered, the **Discovered servers** count rises.
+It can take up to 15 minutes after finalizing registration until discovered VMs appear in Azure Migrate: Server Migration. As VMs are discovered, the **Discovered servers** count rises.
![Discovered servers](./media/tutorial-migrate-hyper-v/discovered-servers.png)
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
1. In **Virtual machines**, search for VMs as needed, and check each VM you want to migrate. Then, click **Next: Target settings**.
- ![Select VMs](./media/tutorial-migrate-hyper-v/select-vms.png)
+ :::image type="content" source="./media/tutorial-migrate-hyper-v/select-vms-inline.png" alt-text="Screenshot shows the selected VMs in the Replicate dialog box." lightbox="./media/tutorial-migrate-hyper-v/select-vms-expanded.png":::
1. In **Target settings**, select the target region to which you'll migrate, the subscription, and the resource group in which the Azure VMs will reside after migration. 1. In **Replication Storage Account**, select the Azure Storage account in which replicated data will be stored in Azure.
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
- **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer. - **Availability Set**: If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.
- ![VM compute settings](./media/tutorial-migrate-hyper-v/compute-settings.png)
- 1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then click **Next**. - You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Now, select machines for migration.
- **Availability Zone**: Specify the Availability Zone to use. - **Availability Set**: Specify the Availability Set to use.
-![Compute settings](./media/tutorial-migrate-physical-virtual-machines/compute-settings.png)
- 15. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**. - You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration.
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
ms. Previously updated : 06/09/2020 Last updated : 06/20/2022 # Migrate VMware VMs to Azure (agent-based)
-This article shows you how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate:Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agent-based migration. You can also migrate VMware VMs using agentless migration. [Compare](server-migrate-overview.md#compare-migration-methods) the methods.
+This article shows you how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agent-based migration. You can also migrate VMware VMs using agentless migration. [Compare](server-migrate-overview.md#compare-migration-methods) the methods.
In this tutorial, you learn how to: > [!div class="checklist"] > * Prepare Azure to work with Azure Migrate. > * Prepare for agent-based migration. Set up a VMware account so that Azure Migrate can discover machines for migration. Set up an account so that the Mobility service agent can install on machines you want to migrate, and prepare a machine to act as the replication appliance.
-> * Add the Azure Migrate:Server Migration tool
+> * Add the Azure Migrate: Server Migration tool
> * Set up the replication appliance. > * Replicate VMs. > * Run a test migration to make sure everything's working as expected.
Verify support requirements and permissions, and prepare to deploy a replicatio
### Prepare an account to discover VMs
-Azure Migrate Server Migration needs access to VMware servers to discover VMs you want to migrate. Create the account as follows:
+Azure Migrate: Server Migration needs access to VMware servers to discover VMs you want to migrate. Create the account as follows:
1. To use a dedicated account, create a role at the vCenter level. Give the role a name such as **Azure_Migrate**.
The Mobility service must be installed on machines you want to replicate.
- The Azure Migrate replication appliance can do a push installation of this service when you enable replication for a machine, or you can install it manually, or using installation tools. - In this tutorial, we're going to install the Mobility service with the push installation.-- For push installation, you need to prepare an account that Azure Migrate Server Migration can use to access the VM. This account is used only for the push installation, if you don't install the Mobility service manually.
+- For push installation, you need to prepare an account that Azure Migrate: Server Migration can use to access the VM. This account is used only for the push installation, if you don't install the Mobility service manually.
Prepare the account as follows:
Make sure VMware servers and VMs comply with requirements for migration to Azure
- Review [Windows](prepare-for-migration.md#windows-machines) and [Linux](prepare-for-migration.md#linux-machines) changes you need to make. > [!NOTE]
-> Agent-based migration with Azure Migrate Server Migration is based on features of the Azure Site Recovery service. Some requirements might link to Site Recovery documentation.
+> Agent-based migration with Azure Migrate: Server Migration is based on features of the Azure Site Recovery service. Some requirements might link to Site Recovery documentation.
## Set up the replication appliance
Select VMs for migration.
16. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements). - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
- - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
- - **Availability Zone**: Specify the Availability Zone to use.
- - **Availability Set**: Specify the Availability Set to use.
+ - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
+ - **Availability Zone**: Specify the Availability Zone to use.
+ - **Availability Set**: Specify the Availability Set to use.
17. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**. - You can exclude disks from replication.
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
$job = Get-AzMigrateJob -InputObject $job
## 10. Update properties of a replicating VM
-[Azure Migrate:Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) allows you to change target properties, such as name, size, resource group, NIC configuration and so on, for a replicating VM.
+[Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) allows you to change target properties, such as name, size, resource group, NIC configuration and so on, for a replicating VM.
The following properties can be updated for a VM.
migrate Tutorial Migrate Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware.md
ms. Previously updated : 06/09/2020 Last updated : 06/20/2022
Enable replication as follows:
:::image type="content" source="./media/tutorial-migrate-vmware/target-settings.png" alt-text="Screenshot on target settings.":::
-11. In **Compute**, In Compute, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements).
+11. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements).
- **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**. - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
+
+ Title: Version support policy - Azure Database for MySQL - Single Server and Flexible Server
+description: Describes the policy around MySQL major and minor versions in Azure Database for MySQL
++++++ Last updated : 06/21/2022++
+# Azure Database for MySQL version support policy
++
+This page describes the Azure Database for MySQL versioning policy, and is applicable to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server (Preview) deployment modes.
+
+## Supported MySQL versions
+
+Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports all the current major versions supported by the community, namely MySQL 5.6, 5.7, and 8.0. MySQL uses the X.Y.Z naming scheme, where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
+
+Azure Database for MySQL currently supports the following major and minor versions of MySQL:
+
+| Version | [Single Server](single-server/overview.md) <br/> Current minor version |[Flexible Server](flexible-server/overview.md) <br/> Current minor version |
+|:-|:-|:-|
+|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
+|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
+|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
+
+> [!NOTE]
+> In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application requires a connection to a specific major version, say v5.7 or v8.0, you can change the port in your server connection string as explained in [our documentation](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version).
+
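+To see the engine version your server instance is actually running, you can issue the documented query from any MySQL client. A sketch with placeholder server and admin names (Single Server uses the `user@servername` login format):
+
+```bash
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p -e "SELECT VERSION();"
+```
+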
+> [!IMPORTANT]
+> MySQL v5.6 was retired on Single Server in February 2021. As of September 1, 2021, you can no longer create new v5.6 servers with the Azure Database for MySQL - Single Server deployment option. However, you can still perform point-in-time recoveries and create read replicas for your existing servers.
+
+Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql).
+
+## Major version support
+
+Each major version of MySQL will be supported by Azure Database for MySQL from the date on which Azure begins supporting the version until the version is retired by the MySQL community, as provided in the [versioning policy](https://www.mysql.com/support/eol-notice.html).
+
+## Minor version support
+
+Azure Database for MySQL automatically performs minor version upgrades to the Azure preferred MySQL version as part of periodic maintenance.
+
+## Major version retirement policy
+
+The table below provides the retirement details for MySQL major versions. The dates follow the [MySQL versioning policy](https://www.mysql.com/support/eol-notice.html).
+
+| Version | What's New | Azure support start date | Retirement date|
+| - | - | | -- |
+| [MySQL 5.6](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/)| [Features](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-49.html) | March 20, 2018 | February 2021
+| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 | October 2023
+| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | April 2026
+
+## Retired MySQL engine versions not supported in Azure Database for MySQL
+
+After the retirement date for each MySQL database version, if you continue running the retired version, note the following restrictions:
+
+- As the community will not be releasing any further bug fixes or security fixes, Azure Database for MySQL will not patch the retired database engine for any bugs or security issues or otherwise take security measures with regard to the retired database engine. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- If any support issue you may experience relates to the MySQL database, we may not be able to provide you with support. In such cases, you will have to upgrade your database in order for us to provide you with any support.
+- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
+- New service capabilities developed by Azure Database for MySQL may only be available to supported database server versions.
+- Uptime SLAs will apply solely to Azure Database for MySQL service-related issues and not to any downtime caused by database engine-related bugs.
+- In the extreme event of a serious threat to the service caused by a MySQL database engine vulnerability in the retired database version, Azure may choose to stop the compute node of your database server to secure the service first. You will be asked to upgrade the server before bringing the server online. During the upgrade process, your data will always be protected using automatic backups performed on the service, which can be used to restore back to the older version if desired.
+
+## Next steps
+
+- See MySQL [dump and restore](./concepts-migrate-dump-restore.md) to perform upgrades.
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
+
+ Title: Azure Database for MySQL - Flexible Server service tiers
+description: This article describes the compute and storage options in Azure Database for MySQL - Flexible Server.
++++++ Last updated : 05/24/2022++
+# Azure Database for MySQL - Flexible Server service tiers
++
+You can create an Azure Database for MySQL Flexible Server in one of three different service tiers: Burstable, General Purpose, and Business Critical. The service tiers are differentiated by the underlying VM SKU used: B-series, D-series, and E-series. The choice of compute tier and size determines the memory and vCores available on the server. The same storage technology is used across all service tiers. All resources are provisioned at the MySQL server level. A server can have one or many databases.
+
+| Resource / Tier | **Burstable** | **General Purpose** | **Business Critical** |
+|:|:-|:--|:|
+| VM series| B-series | Ddsv4-series | Edsv4/v5-series*|
+| vCores | 1, 2, 4, 8, 12, 16, 20 | 2, 4, 8, 16, 32, 48, 64 | 2, 4, 8, 16, 32, 48, 64, 80, 96 |
+| Memory per vCore | Variable | 4 GiB | 8 GiB * |
+| Storage size | 20 GiB to 16 TiB | 20 GiB to 16 TiB | 20 GiB to 16 TiB |
+| Database backup retention period | 1 to 35 days | 1 to 35 days | 1 to 35 days |
+
+\* With the exception of the E64ds_v4 (Business Critical) SKU, which has 504 GiB of memory.
+
+\* Only a few regions have Edsv5 compute availability.
+
+To choose a compute tier, use the following table as a starting point.
+
+| Compute tier | Target workloads |
+|:-|:--|
+| Burstable | Best for workloads that don't need the full CPU continuously. |
+| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.|
+| Business Critical | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.|
+
+After you create a server, you can change the compute tier, compute size, and storage size. Compute scaling requires a restart and takes between 60 and 120 seconds, while storage scaling doesn't require a restart. You can also independently adjust the backup retention period up or down. For more information, see the [Scale resources](#scale-resources) section.
+
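+For example, both compute and storage can be adjusted with a single CLI call. A sketch, assuming hypothetical resource names and that your Azure CLI version supports these `az mysql flexible-server update` parameters:
+
+```azurecli-interactive
+# Move the server to a 4-vCore General Purpose size (triggers a restart)
+az mysql flexible-server update --resource-group myresourcegroup --name mydemoserver --sku-name Standard_D4ds_v4 --tier GeneralPurpose
+```
+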
+## Service tiers, size, and server types
+
+Compute resources can be selected based on the tier and size. This determines the vCores and memory size. vCores represent the logical CPU of the underlying hardware.
+
+The detailed specifications of the available server types are as follows:
+
+| Compute size | vCores | Memory Size (GiB) | Max Supported IOPS | Max Connections |
+|-|-|-|-|-|
+|**Burstable**
+|Standard_B1s | 1 | 1 | 320 | 171
+|Standard_B1ms | 1 | 2 | 640 | 341
+|Standard_B2s | 2 | 4 | 1280 | 683
+|Standard_B2ms | 2 | 8 | 1700 | 1365
+|Standard_B4ms | 4 | 16 | 2400 | 2731
+|Standard_B8ms | 8 | 32 | 3100 | 5461
+|Standard_B12ms | 12 | 48 | 3800 | 8193
+|Standard_B16ms | 16 | 64 | 4300 | 10923
+|Standard_B20ms | 20 | 80 | 5000 | 13653
+|**General Purpose**|
+|Standard_D2ds_v4 |2 |8 |3200 |1365
+|Standard_D4ds_v4 |4 |16 |6400 |2731
+|Standard_D8ds_v4 |8 |32 |12800 |5461
+|Standard_D16ds_v4 |16 |64 |20000 |10923
+|Standard_D32ds_v4 |32 |128 |20000 |21845
+|Standard_D48ds_v4 |48 |192 |20000 |32768
+|Standard_D64ds_v4 |64 |256 |20000 |43691
+|**Business Critical** |
+|Standard_E2ds_v4 | 2 | 16 | 5000 | 2731
+|Standard_E4ds_v4 | 4 | 32 | 10000 | 5461
+|Standard_E8ds_v4 | 8 | 64 | 18000 | 10923
+|Standard_E16ds_v4 | 16 | 128 | 28000 | 21845
+|Standard_E32ds_v4 | 32 | 256 | 38000 | 43691
+|Standard_E48ds_v4 | 48 | 384 | 48000 | 65536
+|Standard_E64ds_v4 | 64 | 504 | 48000 | 86016
+|Standard_E80ids_v4 | 80 | 504 | 48000 | 86016
+|Standard_E2ds_v5 | 2 | 16 | 5000 | 2731
+|Standard_E4ds_v5 | 4 | 32 | 10000 | 5461
+|Standard_E8ds_v5 | 8 | 64 | 18000 | 10923
+|Standard_E16ds_v5 | 16 | 128 | 28000 | 21845
+|Standard_E32ds_v5 | 32 | 256 | 38000 | 43691
+|Standard_E48ds_v5 | 48 | 384 | 48000 | 65536
+|Standard_E64ds_v5 | 64 | 512 | 48000 | 87383
+|Standard_E96ds_v5 | 96 | 672 | 48000 | 100000
+
+For more details about the available compute series, see the Azure VM documentation for [Burstable (B-series)](../../virtual-machines/sizes-b-series-burstable.md), [General Purpose (Ddsv4-series)](../../virtual-machines/ddv4-ddsv4-series.md), and Business Critical ([Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)/[Edsv5-series](../../virtual-machines/edv5-edsv5-series.md)).
+
+>[!NOTE]
+>For the [Burstable (B-series) compute tier](../../virtual-machines/sizes-b-series-burstable.md), if the VM is stopped/started or restarted, the credits may be lost. For more information, see [Burstable (B-Series) FAQ](../../virtual-machines/sizes-b-series-burstable.md#q-why-is-my-remaining-credit-set-to-0-after-a-redeploy-or-a-stopstart).
+
+## Storage
+
+The storage you provision is the amount of storage capacity available to your flexible server. Storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. In all service tiers, the minimum storage supported is 20 GiB and maximum is 16 TiB. Storage is scaled in 1 GiB increments and can be scaled up after the server is created.
+
+>[!NOTE]
+> Storage can only be scaled up, not down.
+
+You can monitor your storage consumption in the Azure portal (with Azure Monitor) using the storage limit, storage percentage, and storage used metrics. Refer to the [monitoring article](./concepts-monitoring.md) to learn about metrics.
+
+### Reaching the storage limit
+
+When the storage consumed on the server is close to the provisioned limit, the server is put into read-only mode to protect against lost writes. Servers with 100 GiB or less of provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GiB of provisioned storage are marked read-only when the free storage is less than 5 GiB.
+
+For example, if you have provisioned 110 GiB of storage, and the actual utilization goes over 105 GiB, the server is marked read-only. Alternatively, if you have provisioned 5 GiB of storage, the server is marked read-only when the free storage reaches less than 256 MB.
+
+While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions will continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries will continue to work uninterrupted.
+
+To get the server out of read-only mode, increase the provisioned storage on the server. You can do this in the Azure portal or with the Azure CLI, as sketched below. Once the storage is increased, the server is ready to accept write transactions again.
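+
+For example, here's a minimal Azure CLI sketch, assuming a hypothetical server named `mydemoserver` in resource group `myresourcegroup`:
+
+```bash
+# Increase provisioned storage to 256 GiB to bring the server out of read-only mode.
+# Remember that storage can only be scaled up, not down.
+az mysql flexible-server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --storage-size 256
+```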
+
+We recommend that you set up an alert to notify you when your server storage is approaching the threshold so that you can avoid getting into the read-only state. For more information, see [how to set up an alert](how-to-alert-on-metric.md). To learn about the available metrics, see the [monitoring article](./concepts-monitoring.md).
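+
+As a sketch, such an alert can be created with the Azure CLI; the server name, resource group, and the 80% threshold below are illustrative assumptions:
+
+```bash
+# Alert when average storage consumption exceeds 80% of the provisioned limit.
+# Assumption: the storage metric is named "storage_percent"; confirm the exact
+# name with `az monitor metrics list-definitions` for your server.
+az monitor metrics alert create \
+  --name mysql-storage-alert \
+  --resource-group myresourcegroup \
+  --scopes $(az mysql flexible-server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv) \
+  --condition "avg storage_percent > 80" \
+  --description "Storage approaching the read-only threshold"
+```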
+
+### Storage auto-grow
+
+Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage grows automatically without impacting the workload. Storage auto-grow is enabled by default for all newly created servers. For servers with 100 GB or less of provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. The maximum storage limits specified above apply. Refresh the server instance to see the updated provisioned storage under **Settings** on the **Compute + Storage** page.
+
+For example, if you have provisioned 1000 GB of storage and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increased to 15 GB when less than 1 GB of storage is free.
+
+Remember that once storage is scaled up, whether automatically or manually, it can't be scaled down.
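+
+If auto-grow was disabled on your server, the following Azure CLI sketch (hypothetical server and resource group names) re-enables it:
+
+```bash
+# Enable storage auto-grow on an existing flexible server.
+az mysql flexible-server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --storage-auto-grow Enabled
+```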
+
+## IOPS
+
+Azure Database for MySQL - Flexible Server lets you provision additional IOPS above the complimentary IOPS limit. You can increase or decrease the number of provisioned IOPS at any time, based on your workload requirements.
+
+The minimum IOPS is 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size, refer to the [table](#service-tiers-size-and-server-types).
+
+The maximum IOPS depends on the maximum available IOPS per compute size. Refer to the column *Max uncached disk throughput: IOPS/MBps* in the [B-series](../../virtual-machines/sizes-b-series-burstable.md), [Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md), and [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)/[Edsv5-series](../../virtual-machines/edv5-edsv5-series.md) documentation.
+
+> [!Important]
+> **Complimentary IOPS** are equal to MINIMUM("Max uncached disk throughput: IOPS/MBps" of compute size, 300 + storage provisioned in GiB * 3)<br>
+> **Minimum IOPS** is 360 across all compute sizes<br>
+> **Maximum IOPS** is determined by the selected compute size.
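+
+For example, applying the formula above to a hypothetical Standard_D4ds_v4 server (6400 max supported IOPS per the table above) with 200 GiB of provisioned storage: complimentary IOPS = MIN(6400, 300 + 200 * 3) = 900.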
+
+You can monitor your I/O consumption in the Azure portal (with Azure Monitor) using the [IO percent](./concepts-monitoring.md) metric. If you need more IOPS than the maximum available for your compute size, you need to scale up your server's compute.
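+
+As a sketch, you can confirm the exact metric names exposed by your server with the Azure CLI (server and resource group names below are hypothetical):
+
+```bash
+# List the metric definitions available on the server, including the IO metric.
+az monitor metrics list-definitions \
+  --resource $(az mysql flexible-server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)
+```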
+
+## Backup
+
+The service automatically takes backups of your server. You can select a retention period from a range of 1 to 35 days. Learn more about backups in the [backup and restore concepts article](concepts-backup-restore.md).
+
+## Scale resources
+
+After you create your server, you can independently change the compute tier, the compute size (vCores and memory), the amount of storage, and the backup retention period. The compute size can be scaled up or down. The backup retention period can be scaled up or down from 1 to 35 days. The storage size can only be increased. You can scale resources through the Azure portal or the Azure CLI, as sketched at the end of this section.
+
+> [!NOTE]
+> The storage size can only be increased. You cannot go back to a smaller storage size after the increase.
+
+When you change the compute tier or compute size, the server is restarted for the new server type to take effect. While the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back. This window varies, but in most cases is between 60-120 seconds.
+
+Scaling storage and changing the backup retention period are online operations and do not require a server restart.
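+
+For example, here's a minimal Azure CLI sketch of these operations, assuming a hypothetical server named `mydemoserver` in resource group `myresourcegroup`:
+
+```bash
+# Scale compute to the General Purpose tier with 4 vCores (restarts the server).
+az mysql flexible-server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --tier GeneralPurpose \
+  --sku-name Standard_D4ds_v4
+
+# Change the backup retention period (online operation, no restart).
+az mysql flexible-server update \
+  --resource-group myresourcegroup \
+  --name mydemoserver \
+  --backup-retention 14
+```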
+
+## Pricing
+
+For the most up-to-date pricing information, see the service [pricing page](https://azure.microsoft.com/pricing/details/MySQL/). To see the cost for the configuration you want, the [Azure portal](https://portal.azure.com/#create/Microsoft.MySQLServer/flexibleServers) shows the monthly cost on the **Compute + storage** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, choose **Azure Database for MySQL**, and **Flexible Server** as the deployment type to customize the options.
+
+To optimize server cost, consider the following tips:
+
+- Scale down your compute tier or compute size (vCores) if compute is underutilized.
+- Consider switching to the Burstable compute tier if your workload doesn't need the full compute capacity continuously from the General Purpose and Business Critical tiers.
+- Stop the server when not in use.
+- Reduce the backup retention period if longer backup retention isn't required.
+
+## Next steps
+
+- Learn how to [create a MySQL server in the portal](quickstart-create-server-portal.md).
+- Learn about [service limitations](concepts-limitations.md).
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| Norway East | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/app-development-best-practices.md
Title: App development best practices - Azure Database for MySQL description: Learn about best practices for building an app by using Azure Database for MySQL.-- Previously updated : 08/11/2020++ Last updated : 06/20/2022 # Best practices for building an application with Azure Database for MySQL
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-monitoring-best-practices.md
Title: Monitoring best practices - Azure Database for MySQL description: This article describes the best practices to monitor your Azure Database for MySQL.-- ++ Previously updated : 11/23/2020 Last updated : 06/20/2022 # Best practices for monitoring Azure Database for MySQL - Single server
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-operation-excellence-best-practices.md
Title: MySQL server operational best practices - Azure Database for MySQL description: This article describes the best practices to operate your MySQL database on Azure.-- Previously updated : 11/23/2020++ Last updated : 06/20/2022 # Best practices for server operations on Azure Database for MySQL -Single server
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-performance-best-practices.md
Title: Performance best practices - Azure Database for MySQL description: This article describes some recommendations to monitor and tune performance for your Azure Database for MySQL.-- Previously updated : 1/28/2021++ Last updated : 06/20/2022 # Best practices for optimal performance of your Azure Database for MySQL - Single server
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-reserved-pricing.md
Title: Prepay for compute with reserved capacity - Azure Database for MySQL description: Prepay for Azure Database for MySQL compute resources with reserved capacity-- Previously updated : 10/06/2021++ Last updated : 06/20/2022 # Prepay for Azure Database for MySQL compute resources with reserved instances
Azure Database for MySQL now helps you save money by prepaying for compute resou
>[!NOTE] >The Reserved instances (RI) feature in Azure Database for MySQL ΓÇô Flexible server is not working properly for the Business Critical service tier, after its rebranding > from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue. - ## How does the instance reservation work? You do not need to assign the reservation to specific Azure Database for MySQL servers. An already running Azure Database for MySQL or ones that are newly deployed, will automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the MySQL Database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MySQL are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). </br>
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-aks.md
Title: Connect to Azure Kubernetes Service - Azure Database for MySQL description: Learn about connecting Azure Kubernetes Service with Azure Database for MySQL-- Previously updated : 07/14/2020++ Last updated : 06/20/2022 - # Best practices for Azure Kubernetes Service and Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-audit-logs.md
Title: Audit logs - Azure Database for MySQL description: Describes the audit logs available in Azure Database for MySQL, and the available parameters for enabling logging levels.-- Previously updated : 6/24/2020++ Last updated : 06/20/2022 # Audit Logs in Azure Database for MySQL
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md
Title: Active Directory authentication - Azure Database for MySQL description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for MySQL-- Previously updated : 07/23/2020++ Last updated : 06/20/2022 # Use Azure Active Directory for authenticating with MySQL
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for MySQL description: Learn about Azure Advisor recommendations for MySQL.-- Previously updated : 06/03/2022++ Last updated : 06/20/2022 + # Azure Advisor for MySQL [!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)]
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md
Title: Backup and restore - Azure Database for MySQL description: Learn about automatic backups and restoring your Azure Database for MySQL server.-- Previously updated : 3/27/2020++ Last updated : 06/20/2022 # Backup and restore in Azure Database for MySQL
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-business-continuity.md
Title: Business continuity - Azure Database for MySQL description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MySQL service.-- Previously updated : 7/7/2020++ Last updated : 06/20/2022 # Overview of business continuity with Azure Database for MySQL - Single Server
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md
Title: Certificate rotation for Azure Database for MySQL description: Learn about the upcoming changes of root certificate changes that will affect Azure Database for MySQL-- Previously updated : 04/08/2021++ Last updated : 06/20/2022 # Understanding the changes in the Root CA change for Azure Database for MySQL Single Server
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md
Title: Driver and tools compatibility - Azure Database for MySQL description: This article describes the MySQL drivers and management tools that are compatible with Azure Database for MySQL. -- Previously updated : 11/4/2021++ Last updated : 06/20/2022 + # MySQL drivers and management tools compatible with Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Connect To A Gateway Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md
+
+ Title: Azure Database for MySQL managing updates and upgrades
+description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL service.
++++++ Last updated : 06/20/2022++
+# Connect to a gateway node for a specific MySQL version
++
+In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in Azure Database for MySQL service architecture.
+
+Azure Database for MySQL supports major versions v5.7 and v8.0, so the default port 3306 for connecting to Azure Database for MySQL runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application needs to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string.
+
+In the Azure Database for MySQL service, gateway nodes listen on port 3308 for v5.7 clients and port 3309 for v8.0 clients. In other words, to connect through the v5.7 gateway, use your fully qualified server name and port 3308; to connect through the v8.0 gateway, use your fully qualified server name and port 3309. See the following example for further clarity.
++
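+
+For example, here's a minimal sketch using the mysql command-line client; the server name `mydemoserver` and admin user `myadmin` are hypothetical (on Single Server, the user name takes the `user@servername` form):
+
+```bash
+# Connect through the v5.7 gateway (port 3308).
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port 3308
+
+# Connect through the v8.0 gateway (port 3309).
+mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port 3309
+```
+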
+> [!NOTE]
+> Connecting to Azure Database for MySQL via ports 3308 and 3309 is only supported for public connectivity. Private Link and VNet service endpoints can only be used with port 3306.
+
+Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql).
+
+## Managing updates and upgrades
+
+The service automatically manages patching for bug fix version updates. For example, 5.7.20 to 5.7.21.
+
+Major version upgrade is currently supported by the service for upgrades from MySQL v5.6 to v5.7. For more details, refer to [how to perform major version upgrades](how-to-major-version-upgrade.md). If you'd like to upgrade from 5.7 to 8.0, we recommend that you perform a [dump and restore](./concepts-migrate-dump-restore.md) to a server created with the new engine version, as sketched below.
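+
+For example, here's a minimal dump-and-restore sketch using the MySQL client tools; the server names `myserver57`/`myserver80`, admin user `myadmin`, and database `mydb` are hypothetical:
+
+```bash
+# Dump the database from the v5.7 server.
+mysqldump -h myserver57.mysql.database.azure.com -u myadmin@myserver57 -p \
+  --databases mydb > mydb_dump.sql
+
+# Restore it to a new server created with engine version 8.0.
+mysql -h myserver80.mysql.database.azure.com -u myadmin@myserver80 -p < mydb_dump.sql
+```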
+
+## Next steps
+
+- To see supported versions, visit [Azure Database for MySQL version support policy](../concepts-version-policy.md)
+- For details around Azure Database for MySQL versioning policy, see [this document](concepts-version-policy.md).
+- For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](./concepts-pricing-tiers.md)
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md
Title: Connection libraries - Azure Database for MySQL description: This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL.-- Previously updated : 8/3/2020++ Last updated : 06/20/2022 # Connection libraries for Azure Database for MySQL
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
Title: Connectivity architecture - Azure Database for MySQL description: Describes the connectivity architecture for your Azure Database for MySQL server.-- ++ Previously updated : 10/15/2021 Last updated : 06/20/2022 # Connectivity architecture in Azure Database for MySQL
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity.md
Title: Transient connectivity errors - Azure Database for MySQL description: Learn how to handle transient connectivity errors and connect efficiently to Azure Database for MySQL. keywords: mysql connection,connection string,connectivity issues,transient error,connection error,connect efficiently-- Previously updated : 3/18/2020++ Last updated : 06/20/2022 # Handle transient errors and connect efficiently to Azure Database for MySQL
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-and-security-vnet.md
Title: VNet service endpoints - Azure Database for MySQL description: 'Describes how VNet service endpoints work for your Azure Database for MySQL server.'-- Previously updated : 7/17/2020++ Last updated : 06/20/2022 + # Use Virtual Network service endpoints and rules for Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-security-private-link.md
Title: Private Link - Azure Database for MySQL description: Learn how Private link works for Azure Database for MySQL.-- Previously updated : 03/10/2020++ Last updated : 06/20/2022 # Private Link for Azure Database for MySQL
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-encryption-mysql.md
Title: Data encryption with customer-managed key - Azure Database for MySQL description: Azure Database for MySQL data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.-- Previously updated : 01/13/2020++ Last updated : 06/20/2022 # Azure Database for MySQL data encryption with a customer-managed key
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md
Title: Data-in Replication - Azure Database for MySQL description: Learn about using Data-in Replication to synchronize from an external server into the Azure Database for MySQL service.-- Previously updated : 04/08/2021++ Last updated : 06/20/2022 # Replicate data into Azure Database for MySQL
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-database-application-development.md
Title: Application development - Azure Database for MySQL description: Introduces design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL -- Previously updated : 3/18/2020++ Last updated : 06/20/2022 # Application development overview for Azure Database for MySQL
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-firewall-rules.md
Title: Firewall rules - Azure Database for MySQL description: Learn about using firewall rules to enable connections to your Azure Database for MySQL server.-- Previously updated : 07/17/2020++ Last updated : 06/20/2022 # Azure Database for MySQL server firewall rules
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-high-availability.md
Title: High availability - Azure Database for MySQL description: This article provides information on high availability in Azure Database for MySQL-- Previously updated : 7/7/2020++ Last updated : 06/20/2022 # High availability in Azure Database for MySQL
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-infrastructure-double-encryption.md
Title: Infrastructure double encryption - Azure Database for MySQL description: Learn about using Infrastructure double encryption to add a second layer of encryption with a service managed keys.-- ++ Previously updated : 6/30/2020 Last updated : 06/20/2022 # Azure Database for MySQL Infrastructure double encryption
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-limits.md
Title: Limitations - Azure Database for MySQL description: This article describes limitations in Azure Database for MySQL, such as number of connection and storage engine options.-- Previously updated : 10/1/2020++ Last updated : 06/20/2022 + # Limitations in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dbforge-studio-for-mysql.md
Title: Use dbForge Studio for MySQL to migrate a MySQL database to Azure Database for MySQL description: The article demonstrates how to migrate to Azure Database for MySQL by using dbForge Studio for MySQL.-- Previously updated : 03/03/2021++ Last updated : 06/20/2022 + # Migrate data to Azure Database for MySQL with dbForge Studio for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dump-restore.md
Title: Migrate using dump and restore - Azure Database for MySQL description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin.-- Previously updated : 10/30/2020++ Last updated : 06/20/2022 # Migrate your MySQL database to Azure Database for MySQL using dump and restore
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-import-export.md
Previously updated : 10/30/2020 Last updated : 06/20/2022 # Migrate your MySQL database by using import and export
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-mydumper-myloader.md
Previously updated : 06/18/2021 Last updated : 06/20/2022 # Migrate large databases to Azure Database for MySQL using mydumper/myloader
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-monitoring.md
Title: Monitoring - Azure Database for MySQL description: This article describes the metrics for monitoring and alerting for Azure Database for MySQL, including CPU, storage, and connection statistics.-- ++ Previously updated : 10/21/2020 Last updated : 06/20/2022 + # Monitoring in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-performance-recommendations.md
Title: Performance recommendations - Azure Database for MySQL description: This article describes the Performance Recommendation feature in Azure Database for MySQL-- Previously updated : 6/3/2020++ Last updated : 06/20/2022 + # Performance Recommendations in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for MySQL - Single Server description: This article describes the Planned maintenance notification feature in Azure Database for MySQL - Single Server-- Previously updated : 10/21/2020++ Last updated : 06/20/2022 + # Planned maintenance notification in Azure Database for MySQL - Single Server [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md
Title: Pricing tiers - Azure Database for MySQL
-description: Learn about the various pricing tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods.
--
+ Title: Azure Database for MySQL - Single Server service tiers
+description: Learn about the various service tiers for Azure Database for MySQL including compute generations, storage types, storage size, vCores, memory, and backup retention periods.
Previously updated : 02/07/2022++ Last updated : 06/20/2022
-# Azure Database for MySQL pricing tiers
+# Azure Database for MySQL - Single Server service tiers
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-You can create an Azure Database for MySQL server in one of three different pricing tiers: Basic, General Purpose, and Memory Optimized. The pricing tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases.
+You can create an Azure Database for MySQL server in one of three different service tiers: Basic, General Purpose, and Memory Optimized. The service tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases.
-| Attribute | **Basic** | **General Purpose** | **Memory Optimized** |
+| Attribute | **Basic** | **General Purpose** | **Memory Optimized** |
|:|:-|:--|:| | Compute generation | Gen 4, Gen 5 | Gen 4, Gen 5 | Gen 5 | | vCores | 1, 2 | 2, 4, 8, 16, 32, 64 |2, 4, 8, 16, 32 |
You can create an Azure Database for MySQL server in one of three different pric
To choose a pricing tier, use the following table as a starting point.
-| Pricing tier | Target workloads |
+| Service tier | Target workloads |
|:-|:--| | Basic | Workloads that require light compute and I/O performance. Examples include servers used for development or testing or small-scale infrequently used applications. | | General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications.| | Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps.| > [!NOTE]
-> Dynamic scaling to and from the Basic pricing tiers is currently not supported. Basic Tier SKUs servers can't be scaled up to General Purpose or Memory Optimized Tiers.
+> Dynamic scaling to and from the Basic service tier is currently not supported. Basic tier servers can't be scaled up to the General Purpose or Memory Optimized tiers.
After you create a General Purpose or Memory Optimized server, the number of vCores, hardware generation, and pricing tier can be changed up or down within seconds. You also can independently adjust the amount of storage up and the backup retention period up or down with no application downtime. You can't change the backup storage type after a server is created. For more information, see the [Scale resources](#scale-resources) section.
General purpose storage v2 is supported in the following Azure regions:
> *For these Azure regions, you will have an option to create server in both General purpose storage v1 and v2. For the servers created with General purpose storage v2 in public preview, following are the limitations, <br /> > * Geo-Redundant Backup will not be supported<br /> > * The replica server should be in the regions which support General purpose storage v2. <br />
-
### How can I determine which storage type my server is running on?
-You can find the storage type of your server by going in the Pricing tier blade in portal.
+You can find the storage type of your server by going to the **Settings** > **Compute + storage** page.
* If the server is provisioned using Basic SKU, the storage type is Basic storage. * If the server is provisioned using General Purpose or Memory Optimized SKU, the storage type is General Purpose storage. * If the maximum storage that can be provisioned on your server is up to 4 TB, the storage type is General Purpose storage v1.
Yes, migration to general purpose storage v2 from v1 is supported if the underly
### Can I grow storage size after server is provisioned? You can add additional storage capacity during and after the creation of the server, and allow the system to grow storage automatically based on the storage consumption of your workload.
->[!IMPORTANT]
+> [!IMPORTANT]
> Storage can only be scaled up, not down. ### Monitoring IO consumption
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for MySQL description: This article describes the Query Performance Insight feature in Azure Database for MySQL-- Previously updated : 01/12/2022++ Last updated : 06/20/2022 + # Query Performance Insight in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md
Title: Query Store - Azure Database for MySQL description: Learn about the Query Store feature in Azure Database for MySQL to help you track performance over time.-- Previously updated : 5/12/2020++ Last updated : 06/20/2022 + # Monitor Azure Database for MySQL performance with Query Store [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md
Title: Read replicas - Azure Database for MySQL description: 'Learn about read replicas in Azure Database for MySQL: choosing regions, creating replicas, connecting to replicas, monitoring replication, and stopping replication.'-- Previously updated : 06/17/2021++ Last updated : 06/20/2022 # Read replicas in Azure Database for MySQL
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-security.md
Title: Security - Azure Database for MySQL description: An overview of the security features in Azure Database for MySQL.-- Previously updated : 3/18/2020++ Last updated : 06/20/2022 # Security in Azure Database for MySQL
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-logs.md
Title: Slow query logs - Azure Database for MySQL description: Describes the slow query logs available in Azure Database for MySQL, and the available parameters for enabling different logging levels.-- Previously updated : 11/6/2020++ Last updated : 06/20/2022 + # Slow query logs in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md
Title: Server parameters - Azure Database for MySQL description: This topic provides guidelines for configuring server parameters in Azure Database for MySQL.-- ++ Previously updated : 1/26/2021 Last updated : 06/20/2022 + # Server parameters in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-servers.md
Title: Server concepts - Azure Database for MySQL description: This topic provides considerations and guidelines for working with Azure Database for MySQL servers.-- Previously updated : 3/18/2020++ Last updated : 06/20/2022 + # Server concepts in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-ssl-connection-security.md
Title: SSL/TLS connectivity - Azure Database for MySQL description: Information for configuring Azure Database for MySQL and associated applications to properly use SSL connections-- ++ Previously updated : 07/09/2020 Last updated : 06/20/2022 # SSL/TLS connectivity in Azure Database for MySQL
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md
Title: 'Quickstart: Connect using C++ - Azure Database for MySQL' description: This quickstart provides a C++ code sample you can use to connect and query data from Azure Database for MySQL.-- - Previously updated : 5/26/2020
-adobe-target: true
+ms.devlang: cpp
+++ Last updated : 06/20/2022 # Quickstart: Use Connector/C++ to connect and query data in Azure Database for MySQL
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md
Title: 'Quickstart: Connect using C# - Azure Database for MySQL' description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for MySQL."-- - Previously updated : 10/18/2020
+ms.devlang: csharp
+++ Last updated : 06/20/2022 # Quickstart: Use .NET (C#) to connect and query data in Azure Database for MySQL
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md
Title: 'Quickstart: Connect using Go - Azure Database for MySQL' description: This quickstart provides several Go code samples you can use to connect and query data from Azure Database for MySQL.-- +++ ms.devlang: golang- Previously updated : 5/26/2020 Last updated : 06/20/2022 # Quickstart: Use Go language to connect and query data in Azure Database for MySQL
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for MySQL' description: Learn how to use Java and JDBC with an Azure Database for MySQL database.-- - ms.devlang: java Previously updated : 08/17/2020+++ Last updated : 06/20/2022 # Quickstart: Use Java and JDBC with Azure Database for MySQL
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL' description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL.-- - Previously updated : 12/11/2020
+ms.devlang: javascript
+++ Last updated : 06/20/2022 + # Quickstart: Use Node.js to connect and query data in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md
Title: 'Quickstart: Connect using PHP - Azure Database for MySQL' description: This quickstart provides several PHP code samples you can use to connect and query data from Azure Database for MySQL.-- - Previously updated : 10/28/2020+++ Last updated : 06/20/2022 # Quickstart: Use PHP to connect and query data in Azure Database for MySQL
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md
Title: 'Quickstart: Connect using Python - Azure Database for MySQL' description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for MySQL.-- ++ ms.devlang: python Previously updated : 10/28/2020 Last updated : 06/20/2022 # Quickstart: Use Python to connect and query data in Azure Database for MySQL
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md
Title: 'Quickstart: Connect using Ruby - Azure Database for MySQL' description: This quickstart provides several Ruby code samples you can use to connect and query data from Azure Database for MySQL.-- ++ ms.devlang: ruby Previously updated : 5/26/2020 Last updated : 06/20/2022 # Quickstart: Use Ruby to connect and query data in Azure Database for MySQL
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-workbench.md
Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL' description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL.-- ++ Previously updated : 5/26/2020 Last updated : 06/20/2022 # Quickstart: Use MySQL Workbench to connect and query data in Azure Database for MySQL
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-alert-on-metric.md
Title: Configure metric alerts - Azure portal - Azure Database for MySQL description: This article describes how to configure and access metric alerts for Azure Database for MySQL from the Azure portal.-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 # Use the Azure portal to set up alerts on metrics for Azure Database for MySQL
mysql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md
Title: Auto grow storage - Azure CLI - Azure Database for MySQL description: This article describes how you can enable auto grow storage using the Azure CLI in Azure Database for MySQL.-- Previously updated : 3/18/2020 ++ Last updated : 06/20/2022 + # Auto-grow Azure Database for MySQL storage using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for MySQL description: This article describes how you can enable auto grow storage for Azure Database for MySQL using Azure portal-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 + # Auto grow storage in Azure Database for MySQL using the Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
Title: Auto grow storage - Azure PowerShell - Azure Database for MySQL description: This article describes how you can enable auto grow storage using PowerShell in Azure Database for MySQL.-- Previously updated : 4/28/2020 ++ Last updated : 06/20/2022 # Auto grow storage in Azure Database for MySQL server using PowerShell
mysql How To Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md
Title: Access audit logs - Azure CLI - Azure Database for MySQL description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure CLI.-- ++ Previously updated : 6/24/2020 Last updated : 06/20/2022 # Configure and access audit logs in the Azure CLI
mysql How To Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-portal.md
Title: Access audit logs - Azure portal - Azure Database for MySQL description: This article describes how to configure and access the audit logs in Azure Database for MySQL from the Azure portal.-- ++ Previously updated : 9/29/2020 Last updated : 06/20/2022 # Configure and access audit logs for Azure Database for MySQL in the Azure portal
mysql How To Configure Private Link Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md
Title: Private Link - Azure CLI - Azure Database for MySQL description: Learn how to configure private link for Azure Database for MySQL from Azure CLI-- ++ Previously updated : 01/09/2020 Last updated : 06/20/2022 # Create and manage Private Link for Azure Database for MySQL using CLI
mysql How To Configure Private Link Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-portal.md
Title: Private Link - Azure portal - Azure Database for MySQL description: Learn how to configure private link for Azure Database for MySQL from Azure portal-- ++ Previously updated : 01/09/2020 Last updated : 06/20/2022 # Create and manage Private Link for Azure Database for MySQL using Portal
mysql How To Configure Server Logs In Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-cli.md
Title: Access slow query logs - Azure CLI - Azure Database for MySQL description: This article describes how to access the slow query logs in Azure Database for MySQL by using the Azure CLI.-- Previously updated : 4/13/2020 ++
+ms.devlang: azurecli
Last updated : 06/20/2022 + # Configure and access slow query logs by using Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-portal.md
Title: Access slow query logs - Azure portal - Azure Database for MySQL description: This article describes how to configure and access the slow logs in Azure Database for MySQL from the Azure portal.-- ++ Previously updated : 3/15/2021 Last updated : 06/20/2022 # Configure and access slow query logs from the Azure portal
mysql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-cli.md
Title: Configure server parameters - Azure CLI - Azure Database for MySQL description: This article describes how to configure the service parameters in Azure Database for MySQL using the Azure CLI command line utility.-- ++ ms.devlang: azurecli Previously updated : 10/1/2020 Last updated : 06/20/2022 + # Configure server parameters in Azure Database for MySQL using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md
Title: Configure server parameters - Azure PowerShell - Azure Database for MySQL description: This article describes how to configure the service parameters in Azure Database for MySQL using PowerShell.-- ++ ms.devlang: azurepowershell Previously updated : 10/1/2020 Last updated : 06/20/2022
mysql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
Title: Use Azure Active Directory - Azure Database for MySQL description: Learn about how to set up Azure Active Directory (Azure AD) for authentication with Azure Database for MySQL-- Previously updated : 07/23/2020 ++ Last updated : 06/20/2022 # Use Azure Active Directory for authentication with MySQL
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
Title: Configure SSL - Azure Database for MySQL description: Instructions for how to properly configure Azure Database for MySQL and associated applications to correctly use SSL connections-- ++ ms.devlang: csharp, golang, java, javascript, php, python, ruby Previously updated : 07/08/2020 Last updated : 06/20/2022 + # Configure SSL connectivity in your application to securely connect to Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Connect Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-overview-single-server.md
Previously updated : 09/22/2020 Last updated : 06/20/2022 # Connect and query overview for Azure database for MySQL- Single Server
mysql How To Connect Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-webapp.md
Title: Connect to Azure App Service - Azure Database for MySQL description: Instructions for how to properly connect an existing Azure App Service to Azure Database for MySQL-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 # Connect an existing Azure App Service to Azure Database for MySQL server
mysql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md
Title: Connect with Managed Identity - Azure Database for MySQL description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for MySQL-- Previously updated : 05/19/2020++ Last updated : 06/20/2022 # Connect with Managed Identity to Azure Database for MySQL
mysql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string-powershell.md
Title: Generate a connection string with PowerShell - Azure Database for MySQL description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for MySQL.-- ++ Previously updated : 8/5/2020 Last updated : 06/20/2022 # How to generate an Azure Database for MySQL connection string with PowerShell
mysql How To Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string.md
Title: Connection strings - Azure Database for MySQL description: This document lists the currently supported connection strings for applications to connect with Azure Database for MySQL, including ADO.NET (C#), JDBC, Node.js, ODBC, PHP, Python, and Ruby.-- ++ Previously updated : 3/18/2020-+ Last updated : 06/20/2022+ # How to connect applications to Azure Database for MySQL
mysql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-manage-server-portal.md
Title: Manage server - Azure portal - Azure Database for MySQL description: Learn how to manage an Azure Database for MySQL server from the Azure portal.-- ++ Previously updated : 1/26/2021 Last updated : 06/20/2022 # Manage an Azure Database for MySQL server using the Azure portal
mysql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-users.md
Title: How to create users for Azure Database for MySQL description: This article describes how to create new user accounts to interact with an Azure Database for MySQL server.-- ++ Previously updated : 02/17/2022 Last updated : 06/20/2022 # Create users in Azure Database for MySQL
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-cli.md
Title: Data encryption - Azure CLI - Azure Database for MySQL description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure CLI.-- ++ Previously updated : 03/30/2020 Last updated : 06/20/2022
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-portal.md
Title: Data encryption - Azure portal - Azure Database for MySQL description: Learn how to set up and manage data encryption for your Azure Database for MySQL by using the Azure portal.-- ++ Previously updated : 01/13/2020 Last updated : 06/20/2022 # Data encryption for Azure Database for MySQL by using the Azure portal
mysql How To Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-troubleshoot.md
Title: Troubleshoot data encryption - Azure Database for MySQL description: Learn how to troubleshoot data encryption in Azure Database for MySQL-- ++ Previously updated : 02/13/2020 Last updated : 06/20/2022 # Troubleshoot data encryption in Azure Database for MySQL
mysql How To Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-validation.md
Title: How to ensure validation of the Azure Database for MySQL - Data encryption description: Learn how to validate the encryption of the Azure Database for MySQL - Data encryption using the customers managed key.-- ++ Previously updated : 04/28/2020 Last updated : 06/20/2022 # Validating data encryption for Azure Database for MySQL
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md
Title: Configure Data-in Replication - Azure Database for MySQL description: This article describes how to set up Data-in Replication for Azure Database for MySQL.-- ++ Previously updated : 04/08/2021 Last updated : 06/20/2022 # How to configure Azure Database for MySQL Data-in Replication
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-decide-on-right-migration-tools.md
Previously updated : 10/12/2021 Last updated : 06/20/2022 # Select the right tools for migration to Azure Database for MySQL
mysql How To Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-deny-public-network-access.md
Title: Deny Public Network Access - Azure portal - Azure Database for MySQL description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for MySQL -- ++ Previously updated : 03/10/2020 Last updated : 06/20/2022 # Deny Public Network Access in Azure Database for MySQL using Azure portal
mysql How To Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-double-encryption.md
Title: Infrastructure double encryption - Azure portal - Azure Database for MySQL description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL.-- ++ Previously updated : 06/30/2020 Last updated : 06/20/2022 # Infrastructure double encryption for Azure Database for MySQL
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-fix-corrupt-database.md
Title: Resolve database corruption - Azure Database for MySQL description: In this article, you'll learn how to fix database corruption problems in Azure Database for MySQL.-- ++ Previously updated : 09/21/2020 Last updated : 06/20/2022 # Troubleshoot database corruption in Azure Database for MySQL
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-major-version-upgrade.md
Title: Major version upgrade in Azure Database for MySQL - Single Server description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server -- ++ Previously updated : 1/28/2021 Last updated : 06/20/2022 + # Major version upgrade in Azure Database for MySQL Single Server [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] > [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
->
> [!IMPORTANT] > Major version upgrade for Azure database for MySQL Single Server is in public preview.
The GA of this feature is planned before MySQL v5.6 retirement. However, the fea
Yes, the server is unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the storage size provisioned (IOPS provisioned), and the number of tables in the database. The upgrade time is directly proportional to the number of tables on the server. Upgrades of Basic SKU servers are expected to take longer because they run on the standard storage platform. To estimate the downtime for your server environment, we recommend first performing the upgrade on a restored copy of the server. Consider [performing a minimal-downtime major version upgrade from MySQL 5.6 to MySQL 5.7 by using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-56-to-mysql-57-using-read-replicas)
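One practical way to estimate this is to restore a point-in-time copy of the server and rehearse the upgrade on that copy. A minimal Azure CLI sketch, with placeholder resource group and server names:

```bash
# Restore a point-in-time copy of the production server (placeholder names).
az mysql server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-06-20T13:10:00Z"

# Run the major version upgrade on the restored copy and time it to estimate downtime.
az mysql server upgrade \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --target-server-version 5.7
```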
-### What will happen if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021?
+### What happens if we do not choose to upgrade our MySQL v5.6 server before February 5, 2021?
-You can still continue running your MySQL v5.6 server as before. Azure **will never** perform force upgrade on your server. However, the restrictions documented in [Azure Database for MySQL versioning policy](concepts-version-policy.md) will apply.
+You can continue running your MySQL v5.6 server as before. Azure **will never** force an upgrade on your server. However, the restrictions documented in [Azure Database for MySQL versioning policy](../concepts-version-policy.md) will apply.
## Next steps
-Learn about [Azure Database for MySQL versioning policy](concepts-version-policy.md).
+Learn about [Azure Database for MySQL versioning policy](../concepts-version-policy.md).
mysql How To Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for MySQL description: This article describes how to create and manage Azure Database for MySQL firewall rules using Azure CLI command-line.-- ++ ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 06/20/2022
mysql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for MySQL description: Create and manage Azure Database for MySQL firewall rules using the Azure portal-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 + # Create and manage Azure Database for MySQL firewall rules by using the Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-single-server-cli.md
Title: Manage server - Azure CLI - Azure Database for MySQL description: Learn how to manage an Azure Database for MySQL server from the Azure CLI.-- ++ Previously updated : 9/22/2020 Last updated : 06/20/2022 # Manage an Azure Database for MySQL Single server using the Azure CLI
mysql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-cli.md
Title: Manage VNet endpoints - Azure CLI - Azure Database for MySQL description: This article describes how to create and manage Azure Database for MySQL VNet service endpoints and rules using Azure CLI command line.-- ++ ms.devlang: azurecli Previously updated : 02/10/2022 Last updated : 06/20/2022 + # Create and manage Azure Database for MySQL VNet service endpoints using Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-portal.md
Title: Manage VNet endpoints - Azure portal - Azure Database for MySQL description: Create and manage Azure Database for MySQL VNet service endpoints and rules using the Azure portal-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 + # Create and manage Azure Database for MySQL VNet service endpoints and VNet rules by using the Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-online.md
Title: Minimal-downtime migration - Azure Database for MySQL description: This article describes how to perform a minimal-downtime migration of a MySQL database to Azure Database for MySQL.-- ++ Previously updated : 6/19/2021 Last updated : 06/20/2022 # Minimal-downtime migration to Azure Database for MySQL
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-data-in-replication.md
Previously updated : 09/24/2021 Last updated : 06/20/2022 # Migrate Amazon RDS for MySQL to Azure Database for MySQL using Data-in Replication
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-workbench.md
Previously updated : 05/21/2021 Last updated : 06/20/2022 # Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-single-flexible-minimum-downtime.md
Previously updated : 06/18/2021 Last updated : 06/20/2022 # Tutorial: Migrate Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server with minimal downtime
mysql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-move-regions-portal.md
Title: Move Azure regions - Azure portal - Azure Database for MySQL description: Move an Azure Database for MySQL server from one Azure region to another using a read replica and the Azure portal.-- ++ Previously updated : 06/26/2020 Last updated : 06/20/2022 #Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-cli.md
Title: Manage read replicas - Azure CLI, REST API - Azure Database for MySQL description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure CLI or REST API.-- ++ Previously updated : 06/17/2020 Last updated : 06/20/2022
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal - Azure Database for MySQL description: Learn how to set up and manage read replicas in Azure Database for MySQL using the Azure portal.-- ++ Previously updated : 06/17/2020 Last updated : 06/20/2022 # How to create and manage read replicas in Azure Database for MySQL using the Azure portal
mysql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md
Title: Manage read replicas - Azure PowerShell - Azure Database for MySQL description: Learn how to set up and manage read replicas in Azure Database for MySQL using PowerShell.-- ++ Previously updated : 06/17/2020 Last updated : 06/20/2022
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
Title: Connect with redirection - Azure Database for MySQL description: This article describes how you can configure your application to connect to Azure Database for MySQL with redirection.-- ++ Previously updated : 6/8/2020 Last updated : 06/20/2022 # Connect to Azure Database for MySQL with redirection
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
Title: Restart server - Azure CLI - Azure Database for MySQL description: This article describes how you can restart an Azure Database for MySQL server using the Azure CLI.-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for MySQL description: This article describes how you can restart an Azure Database for MySQL server using the Azure portal.-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 # Restart Azure Database for MySQL server using Azure portal
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
Title: Restart server - Azure PowerShell - Azure Database for MySQL description: This article describes how you can restart an Azure Database for MySQL server using PowerShell.-- ++ Previously updated : 4/28/2020 Last updated : 06/20/2022
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-dropped-server.md
Title: Restore a deleted Azure Database for MySQL server description: This article describes how to restore a deleted server in Azure Database for MySQL using the Azure portal.-- ++ Previously updated : 10/09/2020 Last updated : 06/20/2022 # Restore a deleted Azure Database for MySQL server
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-cli.md
Title: Backup and restore - Azure CLI - Azure Database for MySQL description: Learn how to back up and restore a server in Azure Database for MySQL by using the Azure CLI.-- ++ ms.devlang: azurecli Previously updated : 3/27/2020 Last updated : 06/20/2022 + # How to back up and restore a server in Azure Database for MySQL using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-portal.md
Title: Backup and restore - Azure portal - Azure Database for MySQL description: This article describes how to restore a server in Azure Database for MySQL using the Azure portal.-- ++ Previously updated : 6/30/2020 Last updated : 06/20/2022 # How to back up and restore a server in Azure Database for MySQL using the Azure portal
mysql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md
Title: Backup and restore - Azure PowerShell - Azure Database for MySQL description: Learn how to back up and restore a server in Azure Database for MySQL by using Azure PowerShell.-- ++ ms.devlang: azurepowershell Previously updated : 4/28/2020 Last updated : 06/20/2022 + # How to back up and restore an Azure Database for MySQL server using PowerShell [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql How To Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-server-parameters.md
Title: Configure server parameters - Azure portal - Azure Database for MySQL description: This article describes how to configure MySQL server parameters in Azure Database for MySQL using the Azure portal.-- ++ Previously updated : 10/1/2020 Last updated : 06/20/2022 # Configure server parameters in Azure Database for MySQL using the Azure portal
mysql How To Stop Start Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-stop-start-server.md
Title: Stop/start - Azure portal - Azure Database for MySQL server description: This article describes how to stop/start operations in Azure Database for MySQL.-- ++ Previously updated : 09/21/2020 Last updated : 06/20/2022 # Stop/Start an Azure Database for MySQL
mysql How To Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-tls-configurations.md
Title: TLS configuration - Azure portal - Azure Database for MySQL description: Learn how to set TLS configuration using Azure portal for your Azure Database for MySQL -- ++ Previously updated : 06/02/2020 Last updated : 06/20/2022 # Configuring TLS settings in Azure Database for MySQL using Azure portal
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-connection-issues.md
Title: Troubleshoot connection issues - Azure Database for MySQL description: Learn how to troubleshoot connection issues to Azure Database for MySQL, including transient errors requiring retries, firewall issues, and outages. keywords: mysql connection,connection string,connectivity issues,transient error,connection error-- ++ Previously updated : 3/18/2020 Last updated : 06/20/2022 # Troubleshoot connection issues to Azure Database for MySQL
mysql How To Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-errors.md
Previously updated : 5/21/2021 Last updated : 06/20/2022 # Troubleshoot errors commonly encountered during or post migration to Azure Database for MySQL
mysql How To Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-high-cpu-utilization.md
Title: Troubleshoot high CPU utilization in Azure Database for MySQL description: Learn how to troubleshoot high CPU utilization in Azure Database for MySQL.-- ++ Previously updated : 4/27/2022 Last updated : 06/20/2022 # Troubleshoot high CPU utilization in Azure Database for MySQL
mysql How To Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-low-memory-issues.md
Title: Troubleshoot low memory issues in Azure Database for MySQL description: Learn how to troubleshoot low memory issues in Azure Database for MySQL.-- ++ Previously updated : 4/22/2022 Last updated : 06/20/2022 # Troubleshoot low memory issues in Azure Database for MySQL
mysql How To Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance-new.md
Title: Troubleshoot query performance in Azure Database for MySQL description: Learn how to troubleshoot query performance in Azure Database for MySQL.-- ++ Previously updated : 4/22/2022 Last updated : 06/20/2022 # Troubleshoot query performance in Azure Database for MySQL
mysql How To Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance.md
Title: Profile query performance - Azure Database for MySQL description: Learn how to profile query performance in Azure Database for MySQL by using EXPLAIN.-- ++ Previously updated : 3/30/2022 Last updated : 06/20/2022 # Profile query performance in Azure Database for MySQL using EXPLAIN
mysql How To Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-replication-latency.md
Title: Troubleshoot replication latency - Azure Database for MySQL description: Learn how to troubleshoot replication latency by using Azure Database for MySQL read replicas. keywords: mysql, troubleshoot, replication latency in seconds-- ++ Previously updated : 01/13/2021 Last updated : 06/20/2022 + # Troubleshoot replication latency in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)]
mysql How To Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-sys-schema.md
Title: Use the sys_schema - Azure Database for MySQL description: Learn how to use the sys_schema to find performance issues and maintain databases in Azure Database for MySQL.-- ++ Previously updated : 3/10/2022 Last updated : 06/20/2022 # Tune performance and maintain databases in Azure Database for MySQL using the sys_schema
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/overview.md
Previously updated : 3/18/2020 Last updated : 06/20/2022 # What is Azure Database for MySQL?
mysql Partners Migration Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/partners-migration-mysql.md
Previously updated : 08/18/2021 Last updated : 06/20/2022 # Azure Database for MySQL migration partners
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Title: Built-in policy definitions for Azure Database for MySQL description: Lists Azure Policy built-in policy definitions for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 05/12/2022++ - Last updated : 06/20/2022 + # Azure Policy built-in definitions for Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
Title: 'Quickstart: Create an Azure DB for MySQL - ARM template' description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration, by using an Azure Resource Manager template.-- ++ Previously updated : 05/19/2020 Last updated : 06/20/2022 # Quickstart: Use an ARM template to create an Azure Database for MySQL server
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for MySQL' description: This quickstart describes how to use the Azure CLI to create an Azure Database for MySQL server in an Azure resource group.-- ++ ms.devlang: azurecli Previously updated : 07/15/2020 Last updated : 06/20/2022
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-portal.md
Title: 'Quickstart: Create a server - Azure portal - Azure Database for MySQL' description: This article walks you through using the Azure portal to create a sample Azure Database for MySQL server in about five minutes.-- ++ Previously updated : 11/04/2020 Last updated : 06/20/2022 # Quickstart: Create an Azure Database for MySQL server by using the Azure portal
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md
Title: 'Quickstart: Create a server - Azure PowerShell - Azure Database for MySQL' description: This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group.-- ++ ms.devlang: azurepowershell Previously updated : 04/28/2020 Last updated : 06/20/2022
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-server-up-azure-cli.md
Title: 'Quickstart: Create Azure Database for MySQL using az mysql up' description: Quickstart guide to create Azure Database for MySQL server using Azure CLI (command line interface) up command.-- ++ ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 06/20/2022
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
Previously updated : 05/09/2022 Last updated : 06/20/2022 # Quickstart: Use GitHub Actions to connect to Azure MySQL
mysql Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/reference-stored-procedures.md
Title: Management stored procedures - Azure Database for MySQL description: Learn which stored procedures in Azure Database for MySQL are useful to help you configure data-in replication, set the timezone, and kill queries.-- Previously updated : 3/18/2020++ Last updated : 06/20/2022 # Azure Database for MySQL management stored procedures
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-azure-cli.md
Title: Azure CLI samples - Azure Database for MySQL | Microsoft Docs description: This article lists the Azure CLI code samples available for interacting with Azure Database for MySQL.-- +
+ms.devlang: azurecli
++ Previously updated : 09/17/2021
-keywords: azure cli samples, azure cli code samples, azure cli script samples
Last updated : 06/20/2022 + # Azure CLI samples for Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md
Previously updated : 02/28/2018 Last updated : 06/20/2022 + # Java sample to illustrate connection pooling [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 06/16/2022 -- ++ Last updated : 06/20/2022 + # Azure Policy Regulatory Compliance controls for Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/select-right-deployment-type.md
Title: Selecting the right deployment type - Azure Database for MySQL description: This article describes what factors to consider before you deploy Azure Database for MySQL as either infrastructure as a service (IaaS) or platform as a service (PaaS).-- Previously updated : 08/26/2020++ Last updated : 06/20/2022 # Choose the right MySQL Server option in Azure
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-overview.md
Previously updated : 6/19/2021 Last updated : 06/20/2022 + # Azure Database for MySQL Single Server [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-whats-new.md
Previously updated : 06/17/2021 Last updated : 06/20/2022 + # What's new in Azure Database for MySQL - Single Server? [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md
Title: 'Tutorial: Design a server - Azure CLI - Azure Database for MySQL' description: This tutorial explains how to create and manage Azure Database for MySQL server and database using Azure CLI from the command line.-- Previously updated : 12/02/2019++
+ms.devlang: azurecli
Last updated : 06/20/2022 # Tutorial: Design an Azure Database for MySQL using Azure CLI
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-portal.md
Title: 'Tutorial: Design a server - Azure portal - Azure Database for MySQL' description: This tutorial explains how to create and manage Azure Database for MySQL server and database using Azure portal.-- Previously updated : 3/20/2020++ Last updated : 06/20/2022 # Tutorial: Design an Azure Database for MySQL database using the Azure portal
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md
Title: 'Tutorial: Design a server - Azure PowerShell - Azure Database for MySQL' description: This tutorial explains how to create and manage Azure Database for MySQL server and database using PowerShell.-- Previously updated : 04/29/2020++
+ms.devlang: azurepowershell
Last updated : 06/20/2022 # Tutorial: Design an Azure Database for MySQL using PowerShell
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md
Title: 'Tutorial: Create Azure Database for MySQL - Azure Resource Manager template' description: This tutorial explains how to provision and automate Azure Database for MySQL server deployments using Azure Resource Manager template.-- ++ Previously updated : 12/02/2019 Last updated : 06/20/2022
network-watcher Traffic Analytics Schema Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema-update.md
 Title: Azure Traffic Analytics schema update - March 2020 | Microsoft Docs
-description: Sample queries with new fields in the Traffic Analytics schema.
+ Title: Azure Traffic Analytics schema update - March 2020
+description: Sample queries with new fields in the Traffic Analytics schema. Use these three examples to replace the deprecated fields with the new ones.
documentationcenter: na--++ editor: -++ na Previously updated : 01/07/2021- Last updated : 06/13/2022+
-# Sample queries with new fields in the Traffic Analytics schema (August 2019 schema update)
+# Sample queries with new fields in the Traffic Analytics schema (March 2020 schema update)
-The [Traffic Analytics log schema](./traffic-analytics-schema.md) includes the following new fields: **SrcPublicIPs_s**, **DestPublicIPs_s**, **NSGRule_s**. The new fields provide information about source and destination IPs, and they simplify queries.
+The [Traffic Analytics log schema](./traffic-analytics-schema.md) includes the following new fields:
-In the next few months, the following older fields will be deprecated: **VMIP_s**, **Subscription_g**, **Region_s**, **NSGRules_s**, **Subnet_s**, **VM_s**, **NIC_s**, **PublicIPs_s**, **FlowCount_d**.
+- `SrcPublicIPs_s`
+- `DestPublicIPs_s`
+- `NSGRule_s`
+
+The new fields provide information about source and destination IPs, and they simplify queries.
+
+The following older fields will be deprecated in future:
+
+- `VMIP_s`
+- `Subscription_g`
+- `Region_s`
+- `NSGRules_s`
+- `Subnet_s`
+- `VM_s`
+- `NIC_s`
+- `PublicIPs_s`
+- `FlowCount_d`
The following three examples show how to replace the old fields with the new ones. ## Example 1: VMIP_s, Subscription_g, Region_s, Subnet_s, VM_s, NIC_s, and PublicIPs_s fields
-We don't have to infer source and destination cases from the **FlowDirection_s** field for AzurePublic and ExternalPublic flows. It can also be inappropriate to use the **FlowDirection_s** field for a network virtual appliance.
+The schema doesn't have to infer source and destination cases from the `FlowDirection_s` field for AzurePublic and ExternalPublic flows. It can also be inappropriate to use the `FlowDirection_s` field for a network virtual appliance.
+
+Previous Kusto query:
-```Old Kusto query
+```kusto
AzureNetworkAnalytics_CL | where SubType_s == "FlowLog" and FASchemaVersion_s == "1" | extend isAzureOrExternalPublicFlows = FlowType_s in ("AzurePublic", "ExternalPublic")
SourcePublicIPsAggregated = iif(isAzureOrExternalPublicFlows and FlowDirection_s
DestPublicIPsAggregated = iif(isAzureOrExternalPublicFlows and FlowDirection_s == 'O', PublicIPs_s, "N/A") ```
+New Kusto query:
-```New Kusto query
+```kusto
AzureNetworkAnalytics_CL | where SubType_s == "FlowLog" and FASchemaVersion_s == "2" | extend SourceAzureVM = iif(isnotempty(VM1_s), VM1_s, "N/A"),
DestPublicIPsAggregated = iif(isnotempty(DestPublicIPs_s), DestPublicIPs_s, "N/A
## Example 2: NSGRules_s field
-The old field used the format:
+The old field used the following format:
-`<Index value 0)>|<NSG_ RuleName>|<Flow Direction>|<Flow Status>|<FlowCount ProcessedByRule>`
+```
+<Index value 0>|<NSG_RuleName>|<Flow Direction>|<Flow Status>|<FlowCount ProcessedByRule>
+```
+
+The schema no longer aggregates data across a network security group (NSG). In the updated schema, `NSGList_s` contains only one NSG. Also, `NSGRules` contains only one rule. The complicated formatting has been removed here and in other fields, as shown in the following example.
-We no longer aggregate data across a network security group (NSG). In the updated schema, **NSGList_s** contains only one NSG. Also **NSGRules** contains only one rule. We removed the complicated formatting here and in other fields as shown in the example.
+Previous Kusto query:
-```Old Kusto query
+```kusto
AzureNetworkAnalytics_CL | where SubType_s == "FlowLog" and FASchemaVersion_s == "1" | extend NSGRuleComponents = split(NSGRules_s, "|")
AzureNetworkAnalytics_CL
| project NSGName, NSGRuleName, FlowDirection, FlowStatus, FlowCountProcessedByRule ```
-```New Kusto query
+New Kusto query:
+
+```kusto
AzureNetworkAnalytics_CL | where SubType_s == "FlowLog" and FASchemaVersion_s == "2" | extend NSGRuleComponents = split(NSGRules_s, "|")
FlowCountProcessedByRule = AllowedInFlows_d + DeniedInFlows_d + AllowedOutFlows_
## Example 3: FlowCount_d field
-Because we do not club data across the NSG, the **FlowCount_d** is simply:
+Because the schema doesn't aggregate data across NSGs, the `FlowCount_d` value is simply:
-**AllowedInFlows_d** + **DeniedInFlows_d** + **AllowedOutFlows_d** + **DeniedOutFlows_d**
+`AllowedInFlows_d` + `DeniedInFlows_d` + `AllowedOutFlows_d` + `DeniedOutFlows_d`
-Only one of the four fields will be nonzero. The other three fields will be zero. The fields populate to indicate the status and count in the NIC where the flow was captured.
+Only one of the four fields is nonzero. The other three fields are zero. The fields populate to indicate the status and count in the NIC where the flow was captured.
To illustrate these conditions: -- If the flow was allowed, one of the "Allowed" prefixed fields will be populated.-- If the flow was denied, one of the "Denied" prefixed fields will be populated.-- If the flow was inbound, one of the "InFlows_d" suffixed fields will be populated.-- If the flow was outbound, one of the "OutFlows_d" suffixed fields will be populated.
+- If the flow was allowed, one of the `Allowed` prefixed fields is populated.
+- If the flow was denied, one of the `Denied` prefixed fields is populated.
+- If the flow was inbound, one of the `InFlows_d` suffixed fields is populated.
+- If the flow was outbound, one of the `OutFlows_d` suffixed fields is populated.
-Depending on the conditions, we know which one of the four fields will be populated.
+Depending on the conditions, it's clear which of the four fields is populated.
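+For example, when only the per-NIC total matters, the sum can be computed directly in a query. A minimal sketch against the version-2 schema:
+
+```kusto
+AzureNetworkAnalytics_CL
+| where SubType_s == "FlowLog" and FASchemaVersion_s == "2"
+| extend FlowCount = AllowedInFlows_d + DeniedInFlows_d + AllowedOutFlows_d + DeniedOutFlows_d
+| project NSGList_s, NSGRules_s, FlowCount
+```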
## Next steps - To get answers to frequently asked questions, see [Traffic Analytics FAQ](traffic-analytics-faq.yml).-- To see details about functionality, see [Traffic Analytics documentation](traffic-analytics.md).
+- To see details about functionality, see [Traffic Analytics documentation](traffic-analytics.md).
openshift Howto Create Private Cluster 3X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-3x.md
Title: Create a private cluster with Azure Red Hat OpenShift 3.11 | Microsoft Docs
-description: Create a private cluster with Azure Red Hat OpenShift 3.11
+ Title: Create a private cluster with Azure Red Hat OpenShift 3.11
+description: Learn how to create a private cluster with Azure Red Hat OpenShift 3.11 and about the benefits of private clusters.
- Previously updated : 03/02/2020+ Last updated : 06/02/2022+ keywords: aro, openshift, private cluster, red hat #Customer intent: As a customer, I want to create a private cluster on ARO OpenShift.
keywords: aro, openshift, private cluster, red hat
# Create a private cluster with Azure Red Hat OpenShift 3.11 > [!IMPORTANT]
-> Azure Red Hat OpenShift 3.11 will be retired 30 June 2022. Support for creation of new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
->
+> Azure Red Hat OpenShift 3.11 will be retired. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
+>
> Follow this guide to [create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md).
-> If you have specific questions, [please contact us](mailto:arofeedback@microsoft.com).
+> If you have specific questions, [contact us](mailto:arofeedback@microsoft.com).
Private clusters provide the following benefits:
-* Private clusters don't expose cluster control plane components (such as the API servers) on a public IP address.
-* The virtual network of a private cluster is configurable by customers, allowing you to set up networking to allow peering with other virtual networks, including ExpressRoute environments. You can also configure custom DNS on the virtual network to integrate with internal services.
+* Private clusters don't expose cluster control plane components, such as the API servers, on a public IP address.
+* The virtual network of a private cluster is configurable by customers. You can set up networking to allow peering with other virtual networks, including ExpressRoute environments. You can also configure custom DNS on the virtual network to integrate with internal services.
## Before you begin
properties:
privateApiServer: true ```
-A private cluster can be deployed using the sample scripts provided below. Once the cluster is deployed, execute the `cluster get` command and view the `properties.FQDN` property to determine the private IP address of the OpenShift API server.
+A private cluster can be deployed using the sample scripts provided below. Once the cluster is deployed, run the `cluster get` command and view the `properties.FQDN` property to determine the private IP address of the OpenShift API server.
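+For example, with the Azure CLI's `az openshift` command group for Azure Red Hat OpenShift 3.11 (a sketch; adjust if you manage clusters with other tooling), you can read the FQDN as follows:
+
+```bash
+az openshift show \
+  --resource-group $RESOURCE_GROUP \
+  --name $CLUSTER_NAME \
+  --query "fqdn"
+```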
-The cluster virtual network will have been created with permissions so that you can modify it. You can then set up networking to access the virtual network (ExpressRoute, VPN, virtual network peering) as required for your needs.
+The cluster virtual network is created with permissions so that you can modify it. You can set up networking to access the virtual network, such as ExpressRoute, VPN, and virtual network peering.
-If you change the DNS nameservers on the cluster virtual network, then you will need to issue an update on the cluster with the `properties.RefreshCluster` property set to `true` so that the VMs can be reimaged. This update will allow them to pick up the new nameservers.
+If you change the DNS nameservers on the cluster virtual network, issue an update on the cluster with the `properties.RefreshCluster` property set to `true` so that the virtual machines can be reimaged. This update allows them to pick up the new nameservers.
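+A hypothetical fragment of that update payload, assuming the same JSON shape as the `privateApiServer` example above (field casing may differ in your API version):
+
+```json
+{
+  "properties": {
+    "refreshCluster": true
+  }
+}
+```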
## Sample configuration scripts
Use the sample scripts in this section to set up and deploy your private cluster
Fill in the environment variables below using your own values. > [!NOTE]
-> The location must be set to `eastus2` as this is currently the only supported location for private clusters.
+> The location must be set to `eastus2` because this is currently the only supported location for private clusters.
``` bash export CLUSTER_NAME=
export SECRET=
### private-cluster.json
-Using the environment variables defined above, here is a sample cluster configuration with private cluster enabled.
+This sample is a cluster configuration with private cluster enabled. It uses the environment variables defined above.
```json {
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-monitoring.md
Previously updated : 12/06/2021 Last updated : 06/22/2022 # Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus)
These metrics are available for Hyperscale (Citus) nodes:
|Metric|Metric Display Name|Unit|Description| ||||| |active_connections|Active Connections|Count|The number of active connections to the server.|
+|apps_reserved_memory_percent|Reserved Memory Percent|Percent|Calculated from the ratio of Committed_AS/CommitLimit as shown in /proc/meminfo.|
|cpu_percent|CPU percent|Percent|The percentage of CPU in use.| |iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Hyperscale (Citus) throughput](resources-compute.md)| |memory_percent|Memory percent|Percent|The percentage of memory in use.|
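These metric names can be used wherever Azure Monitor metrics are consumed. For example, here's a sketch using the generic `az monitor metrics list` command; it assumes `$RESOURCE_ID` holds your server group's resource ID:

```bash
# Retrieve recent CPU utilization for a Hyperscale (Citus) node at 5-minute granularity.
az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric cpu_percent \
  --interval PT5M
```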
postgresql Howto App Stacks Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-csharp.md
+
+ Title: C# app to connect and query Hyperscale (Citus)
+description: Learn to query Hyperscale (Citus) using C#
+++++ Last updated : 06/20/2022++
+# C# app to connect and query Hyperscale (Citus)
++
+In this document, you'll learn how to connect to a Hyperscale (Citus) database using a C# application. You'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using C#, and are new to working with Hyperscale (Citus).
+
+> [!TIP]
+>
+> The process of creating a C# app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+* Create a Hyperscale (Citus) server group. See [Create a Hyperscale (Citus) server group](quickstart-create-portal.md).
+* Install the [.NET SDK](https://dotnet.microsoft.com/download) for your platform (Windows, Ubuntu Linux, or macOS).
+* Install [Visual Studio](https://www.visualstudio.com/downloads/) to build your project.
+* Install the [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio.
+
+## Get database connection information
+
+To get the database credentials, use the **Connection strings** tab in the Azure portal, as shown in the following screenshot.
+
+![Diagram showing C# connection string.](../media/howto-app-stacks/03-csharp-connection-string.png)
+
+## Step 1: Connect, create table, and insert data
+
+Use the following code to connect and load the data by using CREATE TABLE and INSERT INTO SQL statements. The code uses the following Npgsql methods:
+
+* [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Hyperscale (Citus).
+* [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) to set the CommandText property.
+* [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run database commands.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresCreate
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("DROP TABLE IF EXISTS pharmacy;", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished dropping table (if existed)");
+ }
+ using (var command = new NpgsqlCommand("CREATE TABLE pharmacy (pharmacy_id integer ,pharmacy_name text,city text,state text,zip_code integer);", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished creating table");
+ }
+ using (var command = new NpgsqlCommand("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished creating index");
+ }
+ using (var command = new NpgsqlCommand("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (@n1, @q1, @a, @b, @c)", conn))
+ {
+ command.Parameters.AddWithValue("n1", 0);
+ command.Parameters.AddWithValue("q1", "Target");
+ command.Parameters.AddWithValue("a", "Sunnyvale");
+ command.Parameters.AddWithValue("b", "California");
+ command.Parameters.AddWithValue("c", 94001);
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows inserted={0}", nRows));
+ }
+
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Step 2: Use the super power of distributed tables
+
+Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!TIP]
+>
+> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresCreate
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("select create_distributed_table('pharmacy','pharmacy_id');", conn))
+ {
+ command.ExecuteNonQuery();
+ Console.Out.WriteLine("Finished distributing the table");
+ }
+
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
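+To verify that the table was distributed, you can query the `citus_tables` metadata view from any SQL client. This sketch assumes a Citus version recent enough to ship that view:
+
+```sql
+-- Lists distributed tables and their distribution columns.
+SELECT table_name, distribution_column FROM citus_tables;
+```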
+## Step 3: Read data
+
+Use the following code to connect and read the data by using a SELECT SQL statement. The code uses the following Npgsql methods:
+
+* [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Hyperscale (Citus).
+* [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
+* [Read()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_Read) to advance to the record in the results.
+* [GetInt32()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetInt32_System_Int32_) and [GetString()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlDataReader.html#Npgsql_NpgsqlDataReader_GetString_System_Int32_) to parse the values in the record.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class read
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("SELECT * FROM pharmacy", conn))
+ {
+ var reader = command.ExecuteReader();
+ while (reader.Read())
+ {
+ Console.WriteLine(
+ string.Format(
+ "Reading from table=({0}, {1}, {2}, {3}, {4})",
+ reader.GetInt32(0).ToString(),
+ reader.GetString(1),
+ reader.GetString(2),
+ reader.GetString(3),
+ reader.GetInt32(4).ToString()
+ )
+ );
+ }
+ reader.Close();
+ }
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Step 4: Update data
+
+Use the following code to connect and update data using an UPDATE SQL statement.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresUpdate
+ {
+ static void Main(string[] args)
+ {
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("UPDATE pharmacy SET city = @q WHERE pharmacy_id = @n", conn))
+ {
+ command.Parameters.AddWithValue("n", 0);
+ command.Parameters.AddWithValue("q", "guntur");
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows updated={0}", nRows));
+ }
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## Step 5: Delete data
+
+Use the following code to connect and delete data using a DELETE SQL statement.
+
+```csharp
+using System;
+using Npgsql;
+namespace Driver
+{
+ public class AzurePostgresDelete
+ {
+
+ static void Main(string[] args)
+ {
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+
+ Console.Out.WriteLine("Opening connection");
+ conn.Open();
+ using (var command = new NpgsqlCommand("DELETE FROM pharmacy WHERE pharmacy_id = @n", conn))
+ {
+ command.Parameters.AddWithValue("n", 0);
+ int nRows = command.ExecuteNonQuery();
+ Console.Out.WriteLine(String.Format("Number of rows deleted={0}", nRows));
+ }
+ }
+ Console.WriteLine("Press RETURN to exit");
+ Console.ReadLine();
+ }
+ }
+}
+```
+
+## COPY command for super fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data from files, or from micro-batches of data held in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code is an example for copying data from a CSV file to a database table.
+
+It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```csharp
+using System;
+using System.IO;
+using Npgsql;
+public class csvtotable
+{
+
+ static void Main(string[] args)
+ {
+ String sDestinationSchemaAndTableName = "pharmacy";
+ String sFromFilePath = "C:\\Users\\Documents\\pharmacies.csv";
+
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ NpgsqlConnection conn = new NpgsqlConnection(connStr.ToString());
+ NpgsqlCommand cmd = new NpgsqlCommand();
+
+ conn.Open();
+
+ if (File.Exists(sFromFilePath))
+ {
+ using (var writer = conn.BeginTextImport("COPY " + sDestinationSchemaAndTableName + " FROM STDIN WITH(FORMAT CSV, HEADER true,NULL ''); "))
+ {
+ foreach (String sLine in File.ReadAllLines(sFromFilePath))
+ {
+ writer.WriteLine(sLine);
+ }
+ }
+ Console.WriteLine("csv file data copied sucessfully");
+ }
+ }
+}
+```
+
+### COPY command to load data in-memory
+
+The following code is an example of copying in-memory data to a table.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Npgsql;
+using NpgsqlTypes;
+namespace Driver
+{
+ public class InMemory
+ {
+
+ static async Task Main(string[] args)
+ {
+
+ // Replace below argument with connection string from portal.
+ var connStr = new NpgsqlConnectionStringBuilder("Server = <host>; Database = citus; Port = 5432; User Id = citus; Password = {your password}; Ssl Mode = Require;");
+
+ connStr.TrustServerCertificate = true;
+
+ using (var conn = new NpgsqlConnection(connStr.ToString()))
+ {
+ conn.Open();
+ var text = new dynamic[] { 0, "Target", "Sunnyvale", "California", 94001 };
+ using (var writer = conn.BeginBinaryImport("COPY pharmacy FROM STDIN (FORMAT BINARY)"))
+ {
+ writer.StartRow();
+ foreach (var item in text)
+ {
+ writer.Write(item);
+ }
+ writer.Complete();
+ }
+ Console.WriteLine("in-memory data copied sucessfully");
+ }
+ }
+ }
+}
+```
+
+## Next steps
+
+Learn to [build scalable applications](howto-build-scalable-apps-overview.md)
+with Hyperscale (Citus).
postgresql Howto App Stacks Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-java.md
+
+ Title: Java app to connect and query Hyperscale (Citus)
+description: Learn how to build a simple app on Hyperscale (Citus) using Java
+++++ Last updated : 06/20/2022++
+# Java app to connect and query Hyperscale (Citus)
++
+In this document, you'll learn how to connect to a Hyperscale (Citus) server group using a Java application. You'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity), and are new to working with Hyperscale (Citus).
+
+> [!TIP]
+>
+> The process of creating a Java app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+* Create a Hyperscale (Citus) server group. See [Create a Hyperscale (Citus) server group](quickstart-create-portal.md).
+* A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
+* The [Apache Maven](https://maven.apache.org/) build tool.
+
+## Setup
+
+### Get database connection information
+
+To get the database credentials, use the **Connection strings** tab in the Azure portal, and replace the password placeholder with the actual password, as shown in the following screenshot.
+
+![Diagram showing Java connection string.](../media/howto-app-stacks/02-java-connection-string.png)
+
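+The PostgreSQL JDBC driver also accepts credentials as URL query parameters if you prefer a single connection URL over separate properties. As a placeholder sketch:
+
+```
+jdbc:postgresql://<host>:5432/citus?user=citus&password=<password>&ssl=true&sslmode=require
+```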
+### Create a new Java project
+
+Using your favorite IDE, create a new Java project with groupId **test** and artifactId **crud**. Add a `pom.xml` file in its root directory:
+
+```XML
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+
+ <groupId>test</groupId>
+ <artifactId>crud</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <packaging>jar</packaging>
+
+ <name>crud</name>
+ <url>http://www.example.com</url>
+
+ <properties>
+ <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter-engine</artifactId>
+ <version>5.7.1</version>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.postgresql</groupId>
+ <artifactId>postgresql</artifactId>
+ <version>42.2.12</version>
+ </dependency>
+ <dependency>
+ <groupId>org.junit.jupiter</groupId>
+ <artifactId>junit-jupiter-params</artifactId>
+ <version>5.7.1</version>
+ <scope>test</scope>
+ </dependency>
+ </dependencies>
+
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <version>3.0.0-M5</version>
+ </plugin>
+ </plugins>
+ </build>
+</project>
+```
+
+This file configures [Apache Maven](https://maven.apache.org/) to use:
+
+* Java 8
+* A recent PostgreSQL driver for Java
+
+### Prepare a configuration file to connect to Hyperscale (Citus)
+
+Create a `src/main/resources/application.properties` file, and add:
+
+``` properties
+url=jdbc:postgresql://<host>:5432/citus?ssl=true&sslmode=require
+user=citus
+password=<password>
+```
+
+Replace the \<host\> using the Connection string that you gathered previously. Replace \<password\> with the password that you set for the database.
+
+> [!NOTE]
+>
+> We append `?ssl=true&sslmode=require` to the configuration property url, to tell the JDBC driver to use TLS (Transport Layer Security) when connecting to the database. It's mandatory to use TLS with Hyperscale (Citus), and it is a good security practice.
+
+## Create tables in Hyperscale (Citus)
+
+### Create an SQL file to generate the database schema
+
+We'll use a `src/main/resources/schema.sql` file in order to create a database schema. Create that file, with the following content:
+
+``` SQL
+DROP TABLE IF EXISTS public.pharmacy;
+CREATE TABLE public.pharmacy(pharmacy_id integer,pharmacy_name text ,city text ,state text ,zip_code integer);
+CREATE INDEX idx_pharmacy_id ON public.pharmacy(pharmacy_id);
+```
+
+### Use the super power of distributed tables
+
+Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+
+> [!TIP]
+>
+> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
+
+If you want to distribute your table, append the following command to the `schema.sql` file from the previous section:
+
+```SQL
+select create_distributed_table('public.pharmacy','pharmacy_id');
+```
+
+### Connect to the database and create the schema
+
+Next, add the Java code that will use JDBC to store and retrieve data from your Hyperscale (Citus) server group.
+
+Create a `src/main/java/test/crud/DemoApplication.java` file that contains:
+
+``` java
+package test.crud;
+
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.Reader;
+import java.io.StringReader;
+import java.sql.*;
+import java.util.*;
+import java.util.logging.Logger;
+
+import org.postgresql.copy.CopyManager;
+import org.postgresql.core.BaseConnection;
+
+public class DemoApplication {
+
+    private static final Logger log;
+
+    static {
+        System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
+        log = Logger.getLogger(DemoApplication.class.getName());
+    }
+
+    public static void main(String[] args) throws Exception {
+        log.info("Loading application properties");
+        Properties properties = new Properties();
+        properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
+
+        log.info("Connecting to the database");
+        Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
+        log.info("Database connection test: " + connection.getCatalog());
+
+        // Execute schema.sql statement by statement: create the table, create the
+        // index, and (if you appended it) distribute the table.
+        log.info("Creating table");
+        log.info("Creating index");
+        log.info("Distributing table");
+        Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
+        Statement statement = connection.createStatement();
+        while (scanner.hasNextLine()) {
+            statement.execute(scanner.nextLine());
+        }
+
+        log.info("Closing database connection");
+        connection.close();
+    }
+}
+```
+
+This code uses the **application.properties** and **schema.sql** files to connect to Hyperscale (Citus) and create the schema.
+
+> [!NOTE]
+>
+> The database credentials are stored in the user and password properties of the application.properties file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
+
+You can now execute this main class with your favorite tool:
+
+* Using your IDE, you should be able to right-click the `DemoApplication` class and execute it.
+* Using Maven, you can run the application by executing `mvn exec:java -Dexec.mainClass="test.crud.DemoApplication"` (see the optional plugin configuration below).
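+
+If you'd like `mvn exec:java` to pick up the main class automatically, you can optionally declare the exec plugin in the `<plugins>` section of your `pom.xml`. This is a minimal sketch; the plugin version shown is an assumption, so adjust it as needed:
+
+```XML
+<plugin>
+  <groupId>org.codehaus.mojo</groupId>
+  <artifactId>exec-maven-plugin</artifactId>
+  <version>3.0.0</version>
+  <configuration>
+    <!-- Lets you run `mvn exec:java` without passing -Dexec.mainClass -->
+    <mainClass>test.crud.DemoApplication</mainClass>
+  </configuration>
+</plugin>
+```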
+
+The application should connect to Hyperscale (Citus), create the database schema, and then close the connection, as shown in the console logs:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Closing database connection
+```
+
+## Create a domain class
+
+Create a new `Pharmacy` Java class, next to the `DemoApplication` class, and add the following code:
+
+``` Java
+package test.crud;
+
+public class Pharmacy {
+ private Integer pharmacy_id;
+ private String pharmacy_name;
+ private String city;
+ private String state;
+ private Integer zip_code;
+ public Pharmacy() { }
+ public Pharmacy(Integer pharmacy_id, String pharmacy_name, String city,String state,Integer zip_code)
+ {
+ this.pharmacy_id = pharmacy_id;
+ this.pharmacy_name = pharmacy_name;
+ this.city = city;
+ this.state = state;
+ this.zip_code = zip_code;
+ }
+
+ public Integer getpharmacy_id() {
+ return pharmacy_id;
+ }
+
+ public void setpharmacy_id(Integer pharmacy_id) {
+ this.pharmacy_id = pharmacy_id;
+ }
+
+ public String getpharmacy_name() {
+ return pharmacy_name;
+ }
+
+ public void setpharmacy_name(String pharmacy_name) {
+ this.pharmacy_name = pharmacy_name;
+ }
+
+ public String getcity() {
+ return city;
+ }
+
+ public void setcity(String city) {
+ this.city = city;
+ }
+
+ public String getstate() {
+ return state;
+ }
+
+ public void setstate(String state) {
+ this.state = state;
+ }
+
+ public Integer getzip_code() {
+ return zip_code;
+ }
+
+ public void setzip_code(Integer zip_code) {
+ this.zip_code = zip_code;
+ }
+ @Override
+ public String toString() {
+ return "TPharmacy{"
+ "pharmacy_id=" + pharmacy_id
+ ", pharmacy_name='" + pharmacy_name + '\''
+ ", city='" + city + '\''
+ ", state='" + state + '\''
+ ", zip_code='" + zip_code + '\''
+ '}';
+ }
+}
+```
+
+This class is a domain model mapped to the `pharmacy` table that you created when you ran the `schema.sql` script.
+
+## Insert data into Hyperscale (Citus)
+
+In the `src/main/java/test/crud/DemoApplication.java` file, after the `main` method, add the following method to insert data into the database:
+
+``` Java
+private static void insertData(Pharmacy todo, Connection connection) throws SQLException {
+ log.info("Insert data");
+ PreparedStatement insertStatement = connection
+ .prepareStatement("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (?, ?, ?, ?, ?);");
+
+ insertStatement.setInt(1, todo.getpharmacy_id());
+ insertStatement.setString(2, todo.getpharmacy_name());
+ insertStatement.setString(3, todo.getcity());
+ insertStatement.setString(4, todo.getstate());
+ insertStatement.setInt(5, todo.getzip_code());
+
+ insertStatement.executeUpdate();
+}
+```
+
+You can now add the following two lines in the `main` method:
+
+```java
+Pharmacy todo = new Pharmacy(0,"Target","Sunnyvale","California",94001);
+insertData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Insert data
+[INFO ] Closing database connection
+```
+
+## Read data from Hyperscale (Citus)
+
+Let's read the data previously inserted, to validate that our code works correctly.
+
+In the `src/main/java/test/crud/DemoApplication.java` file, after the `insertData` method, add the following method to read data from the database:
+
+``` java
+private static Pharmacy readData(Connection connection) throws SQLException {
+ log.info("Read data");
+ PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM pharmacy;");
+ ResultSet resultSet = readStatement.executeQuery();
+ if (!resultSet.next()) {
+ log.info("There is no data in the database!");
+ return null;
+ }
+ Pharmacy todo = new Pharmacy();
+ todo.setpharmacy_id(resultSet.getInt("pharmacy_id"));
+ todo.setpharmacy_name(resultSet.getString("pharmacy_name"));
+ todo.setcity(resultSet.getString("city"));
+ todo.setstate(resultSet.getString("state"));
+ todo.setzip_code(resultSet.getInt("zip_code"));
+ log.info("Data read from the database: " + todo.toString());
+ return todo;
+}
+```
+
+You can now add the following line in the main method:
+
+``` java
+todo = readData(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Closing database connection
+```
+
+## Update data in Hyperscale (Citus)
+
+Let's update the data we previously inserted.
+
+Still in the `src/main/java/test/crud/DemoApplication.java` file, after the `readData` method, add the following method to update data inside the database:
+
+``` java
+private static void updateData(Pharmacy todo, Connection connection) throws SQLException {
+ log.info("Update data");
+ PreparedStatement updateStatement = connection
+ .prepareStatement("UPDATE pharmacy SET city = ? WHERE pharmacy_id = ?;");
+
+ updateStatement.setString(1, todo.getcity());
+
+ updateStatement.setInt(2, todo.getpharmacy_id());
+ updateStatement.executeUpdate();
+ readData(connection);
+}
+
+```
+
+You can now add the following two lines in the `main` method:
+
+``` java
+todo.setcity("Guntur");
+updateData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Closing database connection
+```
+
+## Delete data in Hyperscale (Citus)
+
+Finally, let's delete the data we previously inserted.
+
+Still in the `src/main/java/test/crud/DemoApplication.java` file, after the `updateData` method, add the following method to delete data inside the database:
+
+``` java
+private static void deleteData(Pharmacy todo, Connection connection) throws SQLException {
+ log.info("Delete data");
+ PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM pharmacy WHERE pharmacy_id = ?;");
+ deleteStatement.setInt(1, todo.getpharmacy_id());
+ deleteStatement.executeUpdate();
+ readData(connection);
+}
+```
+
+You can now add the following line in the main method:
+
+``` java
+deleteData(todo, connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Closing database connection
+```
+
+## COPY command for super-fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data from files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code is an example of copying data from a CSV file to a database table.
+
+It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```java
+public static long copyFromFile(Connection connection, String filePath, String tableName)
+ throws SQLException, IOException
+{
+ long count = 0;
+ FileInputStream fileInputStream = null;
+
+ try {
+ CopyManager copyManager = new CopyManager((BaseConnection) connection);
+ fileInputStream = new FileInputStream(filePath);
+ count = copyManager.copyIn("COPY " + tableName + " FROM STDIN delimiter ',' csv", fileInputStream);
+ } finally {
+ if (fileInputStream != null) {
+ try {
+ fileInputStream.close();
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ }
+ }
+ return count;
+}
+```
+
+You can now add the following line in the main method:
+
+``` java
+long c = copyFromFile(connection, "C:\\Users\\pharmacies.csv", "pharmacy");
+log.info("Copied " + c + " rows using COPY command");
+```
+
+Executing the `main` class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Copied 5000 rows using COPY command
+[INFO ] Closing database connection
+```
+
+### COPY command to load data in-memory
+
+The following code is an example of copying in-memory data to a table.
+
+```java
+private static void inMemory(Connection connection) throws SQLException, IOException {
+    log.info("Copying in-memory data into table");
+    String[] input = {"5000,Target,Sunnyvale,California,94001"};
+
+    CopyManager copyManager = new CopyManager((BaseConnection) connection);
+    String copyCommand = "COPY pharmacy FROM STDIN with csv";
+
+    // Feed each in-memory CSV row to the COPY command through a Reader.
+    for (String row : input) {
+        Reader reader = new StringReader(row);
+        copyManager.copyIn(copyCommand, reader);
+    }
+}
+```
+
+You can now add the following line in the main method:
+
+``` java
+inMemory(connection);
+```
+
+Executing the main class should now produce the following output:
+
+```
+[INFO ] Loading application properties
+[INFO ] Connecting to the database
+[INFO ] Database connection test: citus
+[INFO ] Creating table
+[INFO ] Creating index
+[INFO ] Distributing table
+[INFO ] Insert data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Sunnyvale', state='California', zip_code='94001'}
+[INFO ] Update data
+[INFO ] Read data
+[INFO ] Data read from the database: Pharmacy{pharmacy_id=0, pharmacy_name='Target', city='Guntur', state='California', zip_code='94001'}
+[INFO ] Delete data
+[INFO ] Read data
+[INFO ] There is no data in the database!
+[INFO ] Copied 5000 rows using COPY command
+[INFO ] Copying in-memory data into table
+[INFO ] Closing database connection
+```
+
+## Next steps
+
+Learn to [build scalable applications](howto-build-scalable-apps-overview.md)
+with Hyperscale (Citus).
postgresql Howto App Stacks Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-nodejs.md
+
+ Title: Node.js app to connect and query Hyperscale (Citus)
+description: Learn to query Hyperscale (Citus) using Node.js
+Last updated : 06/20/2022
+# Node.js app to connect and query Hyperscale (Citus)
+
+In this article, you'll connect to a Hyperscale (Citus) server group using a Node.js application. We'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using Node.js, and are new to working with Hyperscale (Citus).
+
+> [!TIP]
+>
+> The process of creating a Node.js app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
+
+## Setup
+
+### Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+* A Hyperscale (Citus) server group, which you can set up by following [Create Hyperscale (Citus) server group](quickstart-create-portal.md)
+* [Node.js](https://nodejs.org/)
+
+Install [pg](https://www.npmjs.com/package/pg), a PostgreSQL client for Node.js,
+by running the node package manager (npm) from your command line:
+
+```bash
+npm install pg
+```
+
+Verify the installation by listing the packages installed.
+
+```bash
+npm list
+```
+
+### Get database connection information
+
+To get the database credentials, use the **Connection strings** tab in the Azure portal, as shown in the following screenshot.
+
+![Diagram showing the Node.js connection string.](../media/howto-app-stacks/01-python-connection-string.png)
+
+### Running JavaScript code in Node.js
+
+You can launch Node.js from the Bash shell, Terminal, or Windows Command Prompt by typing `node`, and then run the example JavaScript code interactively by copying and pasting it at the prompt. Alternatively, you can save the JavaScript code in a file and run `node filename.js` with the file name as a parameter.
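+
+For example, if you save the first code example below as `create.js`, you can run it like this (assuming Node.js is on your PATH):
+
+```bash
+node create.js
+```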
+
+## Connect, create table, insert data
+
+All examples in this article need to connect to the database. Let's put the
+connection logic into its own module for reuse. We'll use the
+[pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object to
+interface with the PostgreSQL server.
+
+Create a `citus.js` file with the common connection code:
+
+```javascript
+// citus.js
+
+module.exports = {
+ // fill in your server group's hostname and password below
+ //
+ // the user and database names must be "citus"
+
+ client: function() {
+ const pg = require('pg');
+ return new pg.Client({
+ host: 'c.<servergroup>.postgres.database.azure.com',
+ user: 'citus',
+ password: '<password>',
+ database: 'citus',
+ port: 5432,
+ ssl: true
+ });
+ }
+};
+```
+
+Next, use the following code to connect and load the data using CREATE TABLE
+and INSERT INTO SQL statements. The
+[pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect)
+function is used to establish the connection to the server. The
+[pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query)
+function is used to execute the SQL query against PostgreSQL database.
+
+```javascript
+// create.js
+
+const client = require('./citus').client();
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ const q = `
+ DROP TABLE IF EXISTS pharmacy;
+ CREATE TABLE pharmacy (pharmacy_id integer,pharmacy_name text,city text,state text,zip_code integer);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);
+ CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);
+ `;
+
+ client
+ .query(q)
+ .then(() => {
+ console.log('Created tables and inserted rows');
+ client.end(console.log('Closed client connection'));
+ })
+ .catch(err => console.log(err))
+ .then(() => {
+ console.log('Finished execution, exiting now');
+ process.exit();
+ });
+}
+```
+
+## Use the super power of distributed tables
+
+Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The following command distributes a table. You can learn more about `create_distributed_table` and the [distribution column](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key), also known as the shard key.
+
+> [!TIP]
+>
+> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
+
+Use the following code to connect to the database and distribute the table.
+
+```javascript
+const client = require('./citus').client();
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ const q = `
+ select create_distributed_table('pharmacy','pharmacy_id');
+ `;
+ client
+ .query(q)
+ .then(() => {
+ console.log('Distributed pharmacy table');
+ client.end(console.log('Closed client connection'));
+ })
+ .catch(err => console.log(err))
+ .then(() => {
+ console.log('Finished execution, exiting now');
+ process.exit();
+ });
+}
+```
+
+## Read data
+
+Use the following code to connect and read the data using a SELECT SQL statement.
+
+```javascript
+// read.js
+
+const client = require('./citus').client();
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ console.log('Querying PostgreSQL server');
+ const query = 'SELECT * FROM pharmacy';
+ client.query(query)
+ .then(res => {
+ const rows = res.rows;
+ rows.map(row => {
+ console.log(`Read: ${JSON.stringify(row)}`);
+ });
+ process.exit();
+ })
+ .catch(err => {
+ console.log(err);
+ });
+}
+```
+
+## Update data
+
+Use the following code to connect and update the data using an UPDATE SQL statement.
+
+```javascript
+// update.js
+
+const client = require('./citus').client();
+
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ const query = `
+ UPDATE pharmacy SET city = 'guntur'
+ WHERE pharmacy_id = 1 ;
+ `;
+ client
+ .query(query)
+ .then(result => {
+ console.log('Update completed');
+ console.log(`Rows affected: ${result.rowCount}`);
+ process.exit();
+ })
+ .catch(err => {
+ console.log(err);
+ throw err;
+ });
+}
+```
+
+## Delete data
+
+Use the following code to connect and delete the data using a DELETE SQL statement.
+
+```javascript
+// delete.js
+
+const client = require('./citus').client();
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ const q = `
+ DELETE FROM pharmacy WHERE pharmacy_name = 'Target';
+ `;
+ client
+ .query(q)
+ .then(result => {
+ console.log('Delete completed');
+ console.log(`Rows affected: ${result.rowCount}`);
+ })
+ .catch(err => {
+ console.log(err);
+ throw err;
+ })
+ .then(() => {
+ console.log('Finished execution, exiting now');
+ process.exit();
+ });
+}
+```
+
+## COPY command for super-fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data from files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+Before running the code below, install
+[pg-copy-streams](https://www.npmjs.com/package/pg-copy-streams) by running
+npm from your command line:
+
+```bash
+npm install pg-copy-streams
+```
+
+The following code is an example of copying data from a CSV file to a database table.
+It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```javascript
+// copy.js
+
+const inputFile = require('path').join(__dirname, '/pharmacies.csv')
+const copyFrom = require('pg-copy-streams').from;
+const client = require('./citus').client();
+
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ const q = `
+ COPY pharmacy FROM STDIN WITH (FORMAT CSV, HEADER true, NULL '');
+ `;
+
+ var fileStream = require('fs').createReadStream(inputFile)
+ fileStream.on('error', (error) =>{
+ console.log(`Error in reading file: ${error}`)
+ process.exit();
+ });
+
+ var stream = client
+ .query(copyFrom(q))
+ .on('error', (error) => {
+ console.log(`Error in copy command: ${error}`)
+ })
+ // the writable copy-from stream emits 'finish' when the COPY completes
+ .on('finish', () => {
+ console.log(`Completed loading data into pharmacy`)
+ client.end()
+ process.exit();
+ });
+
+ console.log('Copying from CSV...');
+ fileStream.pipe(stream);
+}
+```
+
+### COPY command to load data in-memory
+
+Before running the code below, install
+[through2](https://www.npmjs.com/package/through2), which allows pipe
+chaining. Install it with npm from your command line:
+
+```bash
+npm install through2
+```
+
+The following code is an example of copying in-memory data to a table.
+
+```javascript
+// copymem.js
+
+const through2 = require('through2');
+const copyFrom = require('pg-copy-streams').from;
+const client = require('./citus').client();
+
+client.connect(err => {
+ if (err)
+ throw err;
+ else
+ queryDatabase();
+});
+
+function queryDatabase() {
+ var stream = client.query(copyFrom('COPY pharmacy FROM STDIN '));
+
+ var interndataset = [['0','Target','Sunnyvale','California','94001'],
+ ['1','CVS','San Francisco','California','94002']];
+
+ var started = false;
+ var internmap = through2.obj(function(arr, enc, cb) {
+ var rowText = (started ? '\n' : '') + arr.join('\t');
+ started = true;
+ console.log(rowText);
+ cb(null, rowText);
+ });
+ interndataset.forEach(function(r) { internmap.write(r); })
+
+ internmap.end();
+ // wait for the copy stream to finish before exiting
+ internmap.pipe(stream).on('finish', () => {
+ console.log('Copied in-memory data into table successfully');
+ client.end();
+ process.exit();
+ });
+}
+```
+
+## Next steps
+
+Learn to [build scalable applications](howto-build-scalable-apps-overview.md)
+with Hyperscale (Citus).
postgresql Howto App Stacks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-overview.md
+
+ Title: Writing apps to connect and query Hyperscale (Citus)
+description: Learn to query Hyperscale (Citus) in multiple languages
+Last updated : 06/20/2022
+# Query Hyperscale (Citus) from your app stack
+
+Select your development language to learn how to connect to Hyperscale (Citus)
+to create, read, update, and delete data.
+
+* [Python](howto-app-stacks-python.md)
+* [Node.js](howto-app-stacks-nodejs.md)
+* [C#](howto-app-stacks-csharp.md)
+* [Java](howto-app-stacks-java.md)
+* [Ruby](howto-app-stacks-ruby.md)
+
+## Next steps
+
+Learn to [build scalable applications](howto-build-scalable-apps-overview.md)
+with Hyperscale (Citus).
postgresql Howto App Stacks Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-python.md
+
+ Title: Python app to connect and query Hyperscale (Citus)
+description: Learn to query Hyperscale (Citus) using Python
+Last updated : 06/20/2022
+# Python app to connect and query Hyperscale (Citus)
+
+In this article, you'll learn how to connect to your Hyperscale (Citus) database and run SQL statements to query it by using Python on macOS, Ubuntu Linux, or Windows.
+
+> [!TIP]
+>
+> The process of creating a Python app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
+
+## Setup
+
+### Prerequisites
+
+For this article you need:
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* A Hyperscale (Citus) server group, which you can set up by following [Create Hyperscale (Citus) server group](quickstart-create-portal.md).
+* [Python](https://www.python.org/downloads/) 2.7 or 3.6+.
+* The latest [pip](https://pip.pypa.io/en/stable/installing/) package installer.
+* Install [psycopg2](https://pypi.python.org/pypi/psycopg2-binary/) using pip in a terminal or command prompt window, as shown after this list. For more information, see [how to install psycopg2](https://www.psycopg.org/docs/install.html).
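+
+For example, a typical installation uses the `psycopg2-binary` package (the precompiled distribution that the link above points to):
+
+```bash
+pip install psycopg2-binary
+```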
+
+### Get database connection information
+
+To get the database credentials, you can use the **Connection strings** tab in the Azure portal:
+
+![Diagram showing python connection string.](../media/howto-app-stacks/01-python-connection-string.png)
+
+In the code examples that follow:
+
+* Replace \<host\> with the value you copied from the Azure portal.
+* Replace \<password\> with the server password you created.
+* Use the default admin user, which is `citus`.
+* Use the default database, which is `citus`.
+
+## Step 1: Connect, create table, and insert data
+
+The following code example connects to your Hyperscale (Citus) database using
+the [psycopg2.connect](https://www.psycopg.org/docs/connection.html) function,
+and loads data with a SQL INSERT statement. The
+[cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) function
+executes the SQL query against the database.
+
+```python
+import psycopg2
+
+# NOTE: fill in these variables for your own server group
+host = "<host>"
+dbname = "citus"
+user = "citus"
+password = "<password>"
+sslmode = "require"
+
+# now we'll build a connection string from the variables
+conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
+
+conn = psycopg2.connect(conn_string)
+print("Connection established")
+
+cursor = conn.cursor()
+
+# Drop previous table of same name if one exists
+cursor.execute("DROP TABLE IF EXISTS pharmacy;")
+print("Finished dropping table (if existed)")
+
+# Create a table
+cursor.execute("CREATE TABLE pharmacy (pharmacy_id integer, pharmacy_name text, city text, state text, zip_code integer);")
+print("Finished creating table")
+
+# Create an index
+cursor.execute("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);")
+print("Finished creating index")
+
+# Insert some data into the table
+cursor.execute("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (%s, %s, %s, %s,%s);", (1,"Target","Sunnyvale","California",94001))
+cursor.execute("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (%s, %s, %s, %s,%s);", (2,"CVS","San Francisco","California",94002))
+print("Inserted 2 rows of data")
+
+# Clean up
+conn.commit()
+cursor.close()
+conn.close()
+```
+
+When the code runs successfully, it produces the following output:
+
+```
+Connection established
+Finished dropping table (if existed)
+Finished creating table
+Finished creating index
+Inserted 2 rows of data
+```
+
+## Step 2: Use the super power of distributed tables
+
+Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The following command distributes a table. You can learn more about `create_distributed_table` and the [distribution column](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key), also known as the shard key.
+
+> [!TIP]
+>
+> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
+
+```python
+# Distribute the table
+cursor.execute("select create_distributed_table('pharmacy','pharmacy_id');")
+print("Finished distributing the table")
+```
+
+## Step 3: Read data
+
+The following code example uses these APIs:
+
+* [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL SELECT statement to read data.
+* [cursor.fetchall()](https://www.psycopg.org/docs/cursor.html#cursor.fetchall) to fetch all the remaining rows of the query result
+
+```python
+# Fetch all rows from table
+cursor.execute("SELECT * FROM pharmacy;")
+rows = cursor.fetchall()
+
+# Print all rows
+for row in rows:
+ print("Data row = (%s, %s)" %(str(row[0]), str(row[1])))
+```
+
+## Step 4: Update data
+
+The following code example uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL UPDATE statement to update data.
+
+```python
+# Update a data row in the table
+cursor.execute("UPDATE pharmacy SET city = %s WHERE pharmacy_id = %s;", ("guntur",1))
+print("Updated 1 row of data")
+```
+
+## Step 5: Delete data
+
+The following code example runs [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL DELETE statement to delete the data.
+
+```python
+# Delete data row from table
+cursor.execute("DELETE FROM pharmacy WHERE pharmacy_name = %s;", ("Target",))
+print("Deleted 1 row of data")
+```
+
+## COPY command for super-fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data from files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code is an example of copying data from a CSV file to a database table.
+
+It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```python
+with open('pharmacies.csv', 'r') as f:
+ # Notice that we don't need the `csv` module.
+ next(f) # Skip the header row.
+ cursor.copy_from(f, 'pharmacy', sep=',')
+print("copying data completed")
+```
+
+### COPY command to load data in-memory
+
+The following code is an example of copying in-memory data to a table.
+
+```python
+import csv
+import io
+
+data = [[3,"Walgreens","Sunnyvale","California",94006], [4,"Target","Sunnyvale","California",94016]]
+buf = io.StringIO()
+writer = csv.writer(buf)
+writer.writerows(data)
+
+buf.seek(0)
+with conn.cursor() as cur:
+ cur.copy_from(buf, "pharmacy", sep=",")
+
+conn.commit()
+conn.close()
+```
+
+## Next steps
+
+Learn to [build scalable applications](howto-build-scalable-apps-overview.md)
+with Hyperscale (Citus).
postgresql Howto App Stacks Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-app-stacks-ruby.md
+
+ Title: Ruby app to connect and query Hyperscale (Citus)
+description: Learn to query Hyperscale (Citus) using Ruby
+Last updated : 06/20/2022
+# Ruby app to connect and query Hyperscale (Citus)
+
+In this how-to article, you'll connect to a Hyperscale (Citus) server group using a Ruby application. We'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using Ruby, and are new to working with Hyperscale (Citus).
+
+> [!TIP]
+>
+> The process of creating a Ruby app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
+
+## Setup
+
+### Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+* A Hyperscale (Citus) server group, which you can set up by following [Create Hyperscale (Citus) server group](quickstart-create-portal.md)
+* [Ruby](https://www.ruby-lang.org/en/downloads/)
+* [Ruby pg](https://rubygems.org/gems/pg/), the PostgreSQL module for Ruby, which you can install as shown below
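+
+For example, you can install the pg gem from your command line:
+
+```bash
+gem install pg
+```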
+
+### Get database connection information
+
+To get the database credentials, use the **Connection strings** tab in the Azure portal, as shown in the following screenshot.
+
+![Diagram showing the Ruby connection string.](../media/howto-app-stacks/01-python-connection-string.png)
+
+## Connect, create table, insert data
+
+Use the following code to connect and create a table by using a CREATE TABLE SQL statement, followed by INSERT INTO SQL statements to add rows to the table.
+
+The code uses a `PG::Connection` object with the constructor `new` to connect to Hyperscale (Citus). It then calls the `exec()` method to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors by using the `PG::Error` class. It then calls the `close()` method to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ # Drop previous table of same name if one exists
+ connection.exec('DROP TABLE IF EXISTS pharmacy;')
+ puts 'Finished dropping table (if existed).'
+
+ # Create a table.
+ connection.exec('CREATE TABLE pharmacy (pharmacy_id integer, pharmacy_name text, city text, state text, zip_code integer);')
+ puts 'Finished creating table.'
+
+ # Insert some data into table.
+ connection.exec("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);")
+ connection.exec("INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);")
+ puts 'Inserted 2 rows of data.'
+
+ # Create index
+ connection.exec("CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);")
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Use the super power of distributed tables
+
+Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The following command distributes a table. You can learn more about `create_distributed_table` and the [distribution column](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key), also known as the shard key.
+
+> [!TIP]
+>
+> Distributing your tables is optional if you are using the Basic Tier of Hyperscale (Citus), which is a single-node server group.
+
+Use the following code to connect to the database and distribute the table:
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ # Super power of Distributed Tables.
+ connection.exec("select create_distributed_table('pharmacy','pharmacy_id');")
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Read data
+
+Use the following code to connect and read the data using a SELECT SQL statement.
+
+The code uses a `PG::Connection` object with the constructor `new` to connect to Hyperscale (Citus). It then calls the `exec()` method to run the SELECT command, keeping the results in a result set. The result set collection is iterated by using the `resultSet.each` loop, keeping the current row values in the `row` variable. The code checks for errors by using the `PG::Error` class. It then calls the `close()` method to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ resultSet = connection.exec('SELECT * from pharmacy')
+ resultSet.each do |row|
+ puts 'Data row = (%s, %s, %s, %s, %s)' % [row['pharmacy_id'], row['pharmacy_name'], row['city'], row['state'], row['zip_code']]
+ end
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Update data
+
+Use the following code to connect and update the data by using an UPDATE SQL statement.
+
+The code uses a `PG::Connection` object with the constructor `new` to connect to Hyperscale (Citus). It then calls the `exec()` method to run the UPDATE command. The code checks for errors by using the `PG::Error` class. It then calls the `close()` method to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ # Modify some data in table.
+ connection.exec('UPDATE pharmacy SET city = %s WHERE pharmacy_id = %d;' % ['\'guntur\'', 1])
+ puts 'Updated 1 row of data.'
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Delete data
+
+Use the following code to connect and delete the data by using a DELETE SQL statement.
+
+The code uses a `PG::Connection` object with the constructor `new` to connect to Hyperscale (Citus). It then calls the `exec()` method to run the DELETE command. The code checks for errors by using the `PG::Error` class. It then calls the `close()` method to close the connection before terminating. For more information about these classes and methods, see the [Ruby pg reference documentation](https://rubygems.org/gems/pg).
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ # Delete some data in table.
+ connection.exec('DELETE FROM pharmacy WHERE city = %s;' % ['\'guntur\''])
+ puts 'Deleted 1 row of data.'
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## COPY command for super-fast ingestion
+
+The COPY command can yield [tremendous throughput](https://www.citusdata.com/blog/2016/06/15/copy-postgresql-distributed-tables) while ingesting data into Hyperscale (Citus). The COPY command can ingest data from files, or from micro-batches of data in memory for real-time ingestion.
+
+### COPY command to load data from a file
+
+The following code is an example of copying data from a CSV file to a database table.
+
+It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).
+
+```ruby
+require 'pg'
+begin
+ filename = 'pharmacies.csv'
+
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ # Copy the data from the CSV file to the table.
+ result = connection.copy_data "COPY pharmacy FROM STDIN with csv" do
+ File.open(filename , 'r').each do |line|
+ connection.put_copy_data line
+ end
+ puts 'Copied CSV data successfully.'
+ end
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+### COPY command to load data in-memory
+
+The following code is an example of copying in-memory data to a table.
+
+```ruby
+require 'pg'
+begin
+ # NOTE: Replace the host and password arguments in the connection string.
+ # (The connection string can be obtained from the Azure portal)
+ connection = PG::Connection.new("host=<server name> port=5432 dbname=citus user=citus password={your password} sslmode=require")
+ puts 'Successfully created connection to database'
+
+ enco = PG::TextEncoder::CopyRow.new
+ connection.copy_data "COPY pharmacy FROM STDIN", enco do
+ connection.put_copy_data [5000,'Target','Sunnyvale','California','94001']
+ connection.put_copy_data [5001, 'CVS','San Francisco','California','94002']
+ puts 'Copied in-memory data successfully.'
+ end
+rescue PG::Error => e
+ puts e.message
+ensure
+ connection.close if connection
+end
+```
+
+## Next steps
+
+Learn to [build scalable applications](howto-build-scalable-apps-overview.md)
+with Hyperscale (Citus).
postgresql Howto Build Scalable Apps Model High Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-high-throughput.md
When building a high-throughput app, keep some optimizations in mind.
We've completed the how-to for building scalable apps.
+* Learn how to use specific [app stacks](howto-app-stacks-overview.md) with Hyperscale (Citus).
* You may now want to know how to [scale a server group](howto-scale-grow.md) to give your app more nodes and hardware capacity.
postgresql Howto Build Scalable Apps Model Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-multi-tenant.md
it easy to include a tenant ID in queries. Here are instructions:
## Next steps
-If you're migrating an existing multi-tenant app to Hyperscale (Citus), see
-this highly detailed guide:
+We've completed the how-to for building scalable apps.
-> [!div class="nextstepaction"]
-> [Migrating an existing app (external) >](https://docs.citusdata.com/en/stable/develop/migration.html#transitioning-mt)
+* Learn how to use specific [app stacks](howto-app-stacks-overview.md) with Hyperscale (Citus).
+* You may now want to know how to [scale a server group](howto-scale-grow.md)
+ to give your app more nodes and hardware capacity.
+* To migrate an existing multi-tenant app to Hyperscale (Citus), see
+ [Migrating an existing app (external) >](https://docs.citusdata.com/en/stable/develop/migration.html#transitioning-mt)
postgresql Howto Build Scalable Apps Model Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-model-real-time.md
SELECT create_reference_table('countries');
We've completed the how-to for building scalable apps.
+* Learn how to use specific [app stacks](howto-app-stacks-overview.md) with Hyperscale (Citus).
* You may now want to know how to [scale a server group](howto-scale-grow.md) to give your app more nodes and hardware capacity.
postgresql Howto Build Scalable Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-build-scalable-apps-overview.md
Last updated 04/28/2022
This series covers how to build scalable relational apps with Hyperscale (Citus).
-If you're building an app that a single node database node (8vcore, 32-GB RAM
+If you're building an app that a single database node (64 vCores, 256-GB RAM
and 512-GB storage) can handle for the near future (~6 months), then you can start with the Hyperscale (Citus) **Basic Tier**. Later, you can add more nodes, rebalance your data, and scale out seamlessly.
postgresql Howto Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-connect.md
Hyperscale (Citus).
# [pgAdmin](#tab/pgadmin) - [pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source administration and development platform for PostgreSQL.
# [psql](#tab/psql) - The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
Last updated 05/11/2022
-# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview)
+# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (preview)
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
->[!NOTE]
-> Single Server to Flexible Server migration tool is in private preview.
+Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. You can use the available migration tool to move your databases from Single Server to Flexible Server. To understand the differences between the two deployment options, see [this comparison chart](../flexible-server/concepts-compare-single-server-flexible-server.md).
-Azure Database for PostgreSQL Flexible Server provides zone redundant high availability, control over price, and control over maintenance window. Single to Flexible Server Migration tool enables customers to migrate their databases from Single server to Flexible. See this [documentation](../flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration tool. This tool automates most of the steps needed to do the migration and thus making the migration journey across Azure platforms as seamless as possible. The tool is provided free of cost for customers.
+By using the migration tool, you can initiate migrations for multiple servers and databases in a repeatable way. The tool automates most of the migration steps to make the migration journey across Azure platforms as seamless as possible. The tool is free for customers.
-Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
+>[!NOTE]
+> The migration tool is in private preview.
+>
+> Migration from Single Server to Flexible Server is enabled in preview in these regions: Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
## Overview
-Single to Flexible server migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
+The migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
-You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration tool automates the following steps:
+You choose the source server and can select up to eight databases from it. This limitation is per migration task. The migration tool automates the following steps:
-1. Creates the migration infrastructure in the region of the target flexible server
-2. Creates public IP address and attaches it to the migration infrastructure
-3. Allow-listing of migration infrastructure's IP address on the firewall rules of both source and target servers
-4. Creates a migration project with both source and target types as Azure database for PostgreSQL
-5. Creates a migration activity to migrate the databases specified by the user from source to target.
-6. Migrates schema from source to target
-7. Creates databases with the same name on the target Flexible server
-8. Migrates data from source to target
+1. Creates the migration infrastructure in the region of the target server.
+2. Creates a public IP address and attaches it to the migration infrastructure.
+3. Adds the migration infrastructure's IP address to the allowlist on the firewall rules of both the source and target servers.
+4. Creates a migration project with both source and target types as Azure Database for PostgreSQL.
+5. Creates a migration activity to migrate the databases specified by the user from the source to the target.
+6. Migrates schemas from the source to the target.
+7. Creates databases with the same name on the Flexible Server target.
+8. Migrates data from the source to the target.
-Following is the flow diagram for Single to Flexible migration tool.
+The following diagram shows the process flow for migration from Single Server to Flexible Server via the migration tool.
+
-**Steps:**
-1. Create a Flex PG server
-2. Invoke migration
-3. Migration infrastructure provisioned (DMS)
-4. Initiates migration – (4a) Initial dump/restore (online & offline) (4b) streaming the changes (online only)
-5. Cutover to the target
+The steps in the process are:
+
+1. Create a Flexible Server target.
+2. Invoke migration.
+3. Provision the migration infrastructure by using Azure Database Migration Service.
+4. Start the migration.
+ 1. Initial dump/restore (online and offline)
+ 1. Streaming the changes (online only)
+5. Cut over to the target.
-The migration tool is exposed through **Azure Portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations
+The migration tool is exposed through the Azure portal and through easy-to-use Azure CLI commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations.
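+
+For example, you can explore the migration commands from the Azure CLI before you start (a quick sketch; the exact command group and parameters are covered in the CLI guide linked later in this article):
+
+```azurecli
+az postgres flexible-server migration --help
+```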
-## Migration modes comparison
+## Comparison of migration modes
-Single to Flexible Server migration supports online and offline mode of migrations. Online option provides reduced downtime migration with logical replication restrictions while the offline option offers a simple migration but may incur extended downtime depending on the size of databases.
+The tool supports two modes for migration from Single Server to Flexible Server. The *online* option provides reduced downtime for the migration, with logical replication restrictions. The *offline* option offers a simple migration but might incur extended downtime, depending on the size of databases.
-The following table summarizes the differences between these two modes of migration.
+The following table summarizes the differences between the migration modes.
| Capability | Online | Offline |
|:|:-|:--|
| Database availability for reads during migration | Available | Available |
-| Database availability for writes during migration | Available | Generally, not recommended. Any writes initiated after the migration is not captured or migrated |
-| Application Suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
-| Environment Suitability | Production environments | Usually Development, Testing environments and some production that can afford downtime |
-| Suitability for Write-heavy workloads | Suitable but expected to reduce the workload during migration | Not Applicable. Writes at source after migration begins are not replicated to target. |
-| Manual Cutover | Required | Not required |
-| Downtime Required | Less | More |
-| Logical replication limitations | Applicable | Not Applicable |
-| Migration time required | Depends on Database size and the write activity until cutover | Depends on Database size |
+| Database availability for writes during migration | Available | Generally not recommended, because any writes initiated after the migration are not captured or migrated |
+| Application suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
+| Environment suitability | Production environments | Usually development environments, testing environments, and some production environments that can afford downtime |
+| Suitability for write-heavy workloads | Suitable but expected to reduce the workload during migration | Not applicable, because writes at the source after migration begins are not replicated to the target |
+| Manual cutover | Required | Not required |
+| Downtime required | Less | More |
+| Logical replication limitations | Applicable | Not applicable |
+| Migration time required | Depends on the database size and the write activity until cutover | Depends on the database size |
+
+Based on those differences, pick the mode that best works for your workloads.
-**Migration steps involved for Offline mode** = Dump of the source Single Server database followed by the Restore at the target Flexible server.
+### Migration considerations for offline mode
-The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
+The migration process for offline mode entails a dump of the source Single Server database, followed by a restore at the Flexible Server target.
+
+The following table shows the approximate time for performing offline migrations for databases of various sizes.
>[!NOTE]
-> Add ~15 minutes for the migration infrastructure to get deployed for each migration task, where each task can migrate up to 8 databases.
+> Add about 15 minutes for the migration infrastructure to be deployed for each migration task. Each task can migrate up to eight databases.
-| Database Size | Approximate Time Taken (HH:MM) |
+| Database size | Approximate time taken (HH:MM) |
|:|:-|
| 1 GB | 00:01 |
| 5 GB | 00:05 |
The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
| 50 GB | 00:45 |
| 100 GB | 06:00 |
| 500 GB | 08:00 |
-| 1000 GB | 09:30 |
-
-**Migration steps involved for Online mode** = Dump of the source Single Server database(s), Restore of that dump in the target Flexible server, followed by Replication of ongoing changes (change data capture using logical decoding).
+| 1,000 GB | 09:30 |
-The time taken for an online migration to complete is dependent on the incoming writes to the source server. The higher the write workload is on the source, the more time it takes for the data to the replicated to the target flexible server.
-
-Based on the above differences, pick the mode that best works for your workloads.
+### Migration considerations for online mode
+The migration process for online mode entails a dump of the Single Server source database, a restore of that dump in the Flexible Server target, and then replication of ongoing changes. You capture change data by using logical decoding.
+The time for completing an online migration depends on the incoming writes to the source server. The higher the write workload is on the source, the more time it takes for the data to be replicated to Flexible Server.
## Migration steps
-### Pre-requisites
+### Prerequisites
-Follow the steps provided in this section before you get started with the single to flexible server migration tool.
+Before you start using the migration tool:
-- **Target Server Creation** - You need to create the target PostgreSQL Flexible Server before using the migration tool. Use the creation [QuickStart guide](../flexible-server/quickstart-create-server-portal.md) to create one.
+- [Create an Azure Database for PostgreSQL flexible server](../flexible-server/quickstart-create-server-portal.md) to serve as the migration target.
-- **Source Server pre-requisites** - You must [enable logical replication](../single-server/concepts-logical.md) on the source server.
+- [Enable logical replication](../single-server/concepts-logical.md) on the source server. (A hedged CLI sketch follows this list.)
- :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in the Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
->[!NOTE]
-> Enabling logical replication will require a server reboot for the change to take effect.
+ >[!NOTE]
+ > Enabling logical replication will require a server restart for the change to take effect.
-- **Azure Active Directory App set up** - It is a critical component of the migration tool. Azure AD App helps with role-based access control as the migration tool needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-azure-ad-app-portal.md) for step-by-step process.
+- [Set up an Azure Active Directory (Azure AD) app](./how-to-setup-azure-ad-app-portal.md). An Azure AD app is a critical component of the migration tool. It helps with role-based access control as the migration tool accesses both the source and target servers.
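For the logical replication prerequisite, here's a hedged CLI sketch. The `azure.replication_support` parameter name follows the Single Server logical decoding documentation, and `<rg>` and `<single-server>` are placeholders:

```azurecli
# Sketch: switch the Single Server source to logical replication support ...
az postgres server configuration set --resource-group <rg> --server-name <single-server> --name azure.replication_support --value logical

# ... then restart the server so the change takes effect.
az postgres server restart --resource-group <rg> --name <single-server>
```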
### Data and schema migration
-Once all these pre-requisites are taken care of, you can do the migration. This automated step involves schema and data migration using Azure portal or Azure CLI.
+After you finish the prerequisites, migrate the data and schemas by using one of these methods:
-- [Migrate using Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)
-- [Migrate using Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
+- [Migrate by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)
+- [Migrate by using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
-### Post migration
+### Post-migration considerations
-- All the resources created by this migration tool will be automatically cleaned up irrespective of whether the migration has **succeeded/failed/cancelled**. There is no action required from you.
+- All the resources that the migration tool creates will be automatically cleaned up, whether the migration succeeds, fails, or is canceled. No action is required from you.
-- If your migration has failed and you want to retry the migration, then you need to create a new migration task with a different name and retry the operation.
+- If your migration fails, you can create a new migration task with a different name and retry the operation.
-- If you have more than eight databases on your single server and if you want to migrate them all, then it is recommended to create multiple migration tasks with each task migrating up to eight databases.
+- If you have more than eight databases on your Single Server source and you want to migrate them all, we recommend that you create multiple migration tasks. Each task can migrate up to eight databases.
-- The migration does not move the database users and roles of the source server. This has to be manually created and applied to the target server post migration.
+- The migration does not move the database users and roles of the source server. You have to manually create these and apply them to the target server after migration. (A sketch follows this list.)
-- For security reasons, it is highly recommended to delete the Azure Active Directory app once the migration completes.
+- For security reasons, we highly recommend that you delete the Azure AD app after the migration finishes.
-- Post data validations and making your application point to flexible server, you can consider deleting your single server.
+- After you validate your data and make your application point to Flexible Server, you can consider deleting your Single Server source.
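Because users and roles aren't migrated, one hedged way to carry role definitions over is standard PostgreSQL tooling. This sketch assumes placeholder server names; role passwords aren't exported, so they must be reset on the target:

```bash
# Sketch: export role definitions from the Single Server source (no passwords) ...
pg_dumpall --roles-only --no-role-passwords \
  --host=<single-server>.postgres.database.azure.com \
  --username=<admin>@<single-server> > roles.sql

# ... review roles.sql, then apply it to the Flexible Server target.
psql "host=<flexible-server>.postgres.database.azure.com user=<admin> dbname=postgres sslmode=require" -f roles.sql
```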
## Limitations
-### Size limitations
+### Size
-- Databases of sizes up to 1TB can be migrated using this tool. To migrate larger databases or heavy write workloads, reach out to your account team or reach us @ AskAzureDBforPGS2F@microsoft.com.
+- You can migrate databases of sizes up to 1 TB by using this tool. To migrate larger databases or heavy write workloads, contact your account team or [contact us](mailto:AskAzureDBforPGS2F@microsoft.com).
-- In one migration attempt, you can migrate up to eight user databases from a single server to flexible server. In case you have more databases to migrate, you can create multiple migrations between the same single and flexible servers.
+- In one migration attempt, you can migrate up to eight user databases from Single Server to Flexible Server. If you have more databases to migrate, you can create multiple migrations between the same Single Server source and Flexible Server target.
-### Performance limitations
+### Performance
-- The migration infrastructure is deployed on a 4 vCore VM which may limit the migration performance.
+- The migration infrastructure is deployed on a four-vCore virtual machine, which might limit migration performance.
-- The deployment of migration infrastructure takes ~10-15 minutes before the actual data migration starts - irrespective of the size of data or the migration mode (online or offline).
+- The deployment of migration infrastructure takes 10 to 15 minutes before the actual data migration starts, regardless of the size of data or the migration mode (online or offline).
-### Replication limitations
+### Replication
-- Single to Flexible Server migration tool uses logical decoding feature of PostgreSQL to perform the online migration and it comes with the following limitations. See PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
- - **DDL commands** are not replicated.
- - **Sequence** data is not replicated.
- - **Truncate** commands are not replicated.(**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables)
+- The migration tool uses a logical decoding feature of PostgreSQL to perform the online migration. The decoding feature has the following limitations. For more information about logical replication limitations, see the [PostgreSQL documentation](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
+ - Data Definition Language (DDL) commands are not replicated.
+ - Sequence data is not replicated.
+ - Truncate commands are not replicated.
+
+ To work around this limitation, use `DELETE` instead of `TRUNCATE`. To avoid accidental `TRUNCATE` invocations, you can revoke the `TRUNCATE` privilege from tables. (A psql sketch follows this list.)
- - Views, Materialized views, partition root tables and foreign tables will not be migrated.
+ - Views, materialized views, partition root tables, and foreign tables are not migrated.
-- Logical decoding will use resources in the source single server. Consider reducing the workload or plan to scale CPU/memory resources at the Source Single Server during the migration.
+- Logical decoding will use resources in the Single Server source. Consider reducing the workload, or plan to scale CPU/memory resources at the Single Server source during the migration.
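As an illustration of the `TRUNCATE` workaround noted above, here's a hedged psql sketch; the table, role, database, and server names are placeholders:

```bash
# Sketch: revoke TRUNCATE so an accidental, non-replicated truncate can't break the migration,
# and clear rows with DELETE, which logical decoding does capture.
psql "host=<single-server>.postgres.database.azure.com user=<admin>@<single-server> dbname=<db> sslmode=require" \
  -c "REVOKE TRUNCATE ON public.orders FROM app_user;" \
  -c "DELETE FROM public.orders;"
```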
### Other limitations

-- The migration tool migrates only data and schema of the single server databases to flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles and permissions. In other words, everything except data and schema must be manually configured in the target flexible server.
+- The migration tool migrates only data and schemas of the Single Server databases to Flexible Server. It does not migrate other features, such as server parameters, connection security details, firewall rules, users, roles, and permissions. In other words, everything except data and schemas must be manually configured in the Flexible Server target.
-- It does not validate the data in flexible server post migration. The customers must manually do this.
+- The migration tool does not validate the data in the Flexible Server target after migration. You must do this validation manually.
-- The migration tool only migrates user databases including Postgres database and not system/maintenance databases.
+- The migration tool migrates only user databases, including the `postgres` database. It doesn't migrate system or maintenance databases.
-- For failed migrations, there is no option to retry the same migration task. A new migration task with a unique name has to be created.
+- If migration fails, there is no option to retry the same migration task. You have to create a new migration task with a unique name.
-- The migration tool does not include assessment of your single server.
+- The migration tool does not include an assessment of your Single Server source.
## Best practices

-- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
-- Plan the mode of migration for each database. For less complex migrations and smaller databases, consider offline mode of migrations.
-- Batch similar sized databases in a migration task.
+- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
+- Plan the mode of migration for each database. For simpler migrations and smaller databases, consider offline mode.
+- Batch similar-sized databases in a migration task.
- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
-- Perform test migrations before migrating for production.
- - **Testing migrations** is a very important aspect of database migration to ensure that all aspects of the migration are taken care of, including application testing. The best practice is to begin by running a migration entirely for testing purposes. Start a migration, and after it enters the continuous replication (CDC) phase with minimal lag, make your flexible server as the primary database server and use it for testing the application to ensure expected performance and results. If you are doing migration to a higher Postgres version, test for your application compatibility.
+- Perform test migrations before migrating for production:
+ - Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing.
+
+ The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the continuous replication (CDC) phase with minimal lag, make your Flexible Server target the primary database server. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher Postgres version, test for application compatibility.
- - **Production migrations** - Once testing is completed, you can migrate the production databases. At this point you need to finalize the day and time of production migration. Ideally, there is low application use at this time. In addition, all stakeholders that need to be involved should be available and ready. The production migration would require close monitoring. It is important that for an online migration, the replication is completed before performing the cutover to prevent data loss.
+ - After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready.
+
+ The production migration requires close monitoring. For an online migration, the replication must be completed before you perform the cutover, to prevent data loss.
-- Cut over all dependent applications to access the new primary database and open the applications for production usage.
-- Once the application starts running on flexible server, monitor the database performance closely to see if performance tuning is required.
+- Cut over all dependent applications to access the new primary database, and open the applications for production usage.
+- After the application starts running on the Flexible Server target, monitor the database performance closely to see if performance tuning is required.
## Next steps

-- [Migrate to Flexible Server using Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md).
-- [Migrate to Flexible Server using Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
+- [Migrate to Flexible Server by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)
+- [Migrate to Flexible Server by using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure CLI"
+ Title: "Migrate from Single Server to Flexible Server by using the Azure CLI"
-description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using CLI.
+description: Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server by using the Azure CLI.
Last updated 05/09/2022
-# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI
+# Migrate from Single Server to Flexible Server by using the Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
->[!NOTE]
-> Single Server to Flexible Server migration tool is in private preview.
+This article shows you how to use the migration tool in the Azure CLI to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
-This quick start article shows you how to use Single to Flexible Server migration tool to migrate databases from Azure database for PostgreSQL Single server to Flexible server.
+>[!NOTE]
+> The migration tool is in private preview.
-## Before you begin
+## Prerequisites
-1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
-2. Register your subscription for Azure Database Migration Service (DMS). If you have already done it, you can skip this step. Go to Azure portal homepage and navigate to your subscription as shown below.
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
+2. Register your subscription for Azure Database Migration Service. (If you've already done it, you can skip this step. A CLI sketch for this step and the next one follows this list.)
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png" alt-text="Screenshot of C L I Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png":::
+ 1. On the Azure portal, go to your subscription.
-3. In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for "**Microsoft.DataMigration**"; as shown below and click on **Register**.
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png" alt-text="Screenshot of Azure Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png":::
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png" alt-text="Screenshot of C L I Database Migration Service register button." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png":::
+ 1. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
-## Pre-requisites
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png" alt-text="Screenshot of the Register button for Azure Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png":::
-### Setup Azure CLI
+3. Install the latest Azure CLI for your operating system from the [Azure CLI installation page](/cli/azure/install-azure-cli).
-1. Install the latest Azure CLI for your corresponding operating system from the [Azure CLI install page](/cli/azure/install-azure-cli)
-2. In case Azure CLI is already installed, check the version by issuing **az version** command. The version should be **2.28.0 or above** to use the migration CLI commands. If not, update your Azure CLI using this [link](/cli/azure/update-azure-cli).
-3. Once you have the right Azure CLI version, run the **az login** command. A browser page is opened with Azure sign-in page to authenticate. Provide your Azure credentials to do a successful authentication. For other ways to sign with Azure CLI, visit this [link](/cli/azure/authenticate-azure-cli).
+ If the Azure CLI is already installed, check the version by using the `az version` command. The version should be 2.28.0 or later to use the migration CLI commands. If not, [update your Azure CLI version](/cli/azure/update-azure-cli).
+4. Run the `az login` command:
- ```bash
- az login
- ```
-4. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites) which are necessary to get started with the Single to Flexible migration tool.
+ ```bash
+ az login
+ ```
+
+ A browser window opens with the Azure sign-in page. Provide your Azure credentials to authenticate. For other ways to sign in with the Azure CLI, see [this article](/cli/azure/authenticate-azure-cli).
+5. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#prerequisites). You need them to get started with the migration tool.
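If you prefer the CLI for steps 2 and 3, here's a minimal sketch that uses standard Azure CLI commands:

```azurecli
# Register the resource provider (the CLI equivalent of the portal steps in step 2)
az provider register --namespace Microsoft.DataMigration
az provider show --namespace Microsoft.DataMigration --query registrationState

# Confirm the Azure CLI version is 2.28.0 or later, and upgrade if needed (step 3)
az version
az upgrade
```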
## Migration CLI commands
-Single to Flexible Server migration tool comes with a list of easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **help** parameter to help you with understanding the various options associated with a command and in framing the right syntax for the same.
+The migration tool comes with easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with `az postgres flexible-server migration`.
+
+For help with understanding the options associated with a command and with framing the right syntax, you can use the `help` parameter:
```azurecli-interactive
az postgres flexible-server migration --help
```
- gives you the following output.
+That command gives you the following output:
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-help.png" alt-text="Screenshot of C L I help." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-help.png":::
-It lists the set of migration commands that are supported along with their actions. Let us look into these commands in detail.
+The output lists the supported migration commands, along with their actions. Let's look at these commands in detail.
-### Create migration
+### Create a migration
-The create migration command helps in creating a migration from a source server to a target server
+The `create` command creates a migration from a source server to a target server:
```azurecli-interactive
az postgres flexible-server migration create --help
```
-gives the following result
+That command gives the following result:
-It calls out the expected arguments and has an example syntax that needs to be used to create a successful migration from the source to target server. The CLI command to create a migration is given below
+It calls out the expected arguments and has an example syntax for creating a successful migration from the source server to the target server. Here's the CLI command to create a migration:
```azurecli
az postgres flexible-server migration create [--subscription]
az postgres flexible-server migration create [--subscription]
| Parameter | Description |
| - | - |
-|**subscription** | Subscription ID of the target flexible server |
-| **resource-group** | Resource group of the target flexible server |
-| **name** | Name of the target flexible server |
-| **migration-name** | Unique identifier to migrations attempted to the flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **-**. The name cannot start with a **-** and no two migrations to a flexible server can have the same name. |
-| **properties** | Absolute path to a JSON file, that has the information about the source single server |
+|`subscription` | Subscription ID of the Flexible Server target. |
+|`resource-group` | Resource group of the Flexible Server target. |
+|`name` | Name of the Flexible Server target. |
+|`migration-name` | Unique identifier to migrations attempted to Flexible Server. This field accepts only alphanumeric characters and does not accept any special characters, except a hyphen (`-`). The name can't start with `-`, and no two migrations to a Flexible Server target can have the same name. |
+|`properties` | Absolute path to a JSON file that has the information about the Single Server source. |
-**For example:**
+For example:
```azurecli-interactive
az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
```
-The **migration-name** argument used in **create migration** command will be used in other CLI commands such as **update, delete, show** to uniquely identify the migration attempt and to perform the corresponding actions.
+The `migration-name` argument used in the `create` command will be used in other CLI commands, such as `update`, `delete`, and `show`. In all those commands, it uniquely identifies the migration attempt for the corresponding action.
-The migration tool offers online and offline mode of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md)
+The migration tool offers online and offline modes of migration. To know more about the migration modes and their differences, see [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
-Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument.
+Create a migration between source and target servers by using the migration mode of your choice. The `create` command needs a JSON file to be passed as part of its `properties` argument.
-The structure of the JSON is given below.
+The structure of the JSON is:
```bash
{
The structure of the JSON is given below.
```
-Create migration parameters:
+Here are the `create` parameters:
| Parameter | Type | Description |
| - | - | - |
-| **SourceDBServerResourceId** | Required | Resource ID of the single server and is mandatory. |
-| **SourceDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. |
-| **TargetDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property. <br> **_SourceDBServerFullyQualifiedDomainName_**, **_TargetDBServerFullyQualifiedDomainName_** should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure provided DNS. Otherwise, these parameters should not be included as a part of the JSON file. |
-| **SecretParameters** | Required | Passwords for admin user for both single server and flexible server along with the Azure AD app credentials. They help to authenticate against the source and target servers and help in checking proper authorization access to the resources.
-| **MigrationResourceGroup** | optional | This section consists of two properties. <br> **ResourceID (optional)** : The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to target. By default, all the components created by this tool are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, then you can assign the resource ID of that resource group to this property. <br> **SubnetResourceID (optional)** : In case if your source has public access turned OFF or if your target server is deployed inside a VNet, then specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
-| **DBsToMigrate** | Required | Specify the list of databases you want to migrate to the flexible server. You can include a maximum of 8 database names at a time. |
-| **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart with a downtime of few minutes (~ 2-3 mins). |
-| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration tool, permission to automatically overwrite databases by setting the value of this property to **true** |
+| `SourceDBServerResourceId` | Required | The resource ID of the Single Server source. |
+| `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
+| `TargetDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution inside a virtual network. Provide the FQDN of the Flexible Server target according to the custom DNS server. <br> `SourceDBServerFullyQualifiedDomainName` and `TargetDBServerFullyQualifiedDomainName` should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters as a part of the JSON file. |
+| `SecretParameters` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target, along with the Azure Active Directory app credentials. These passwords help to authenticate against the source and target servers. They also help in checking proper authorization access to the resources. |
+| `MigrationResourceGroup` | Optional | This section consists of two properties: <br><br> `ResourceID` (optional): The migration infrastructure and other network infrastructure components are created to migrate data and schemas from the source to the target. By default, all the components that this tool creates are provisioned under the resource group of the target server. If you want to deploy them under a different resource group, you can assign the resource ID of that resource group to this property. <br><br> `SubnetResourceID` (optional): If your source has public access turned off, or if your target server is deployed inside a virtual network, specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
+| `DBsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
+| `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. |
+| `OverwriteDBsinTarget` | Optional | If the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration will pause until you acknowledge that overwrites in the target databases are allowed. You can avoid this pause by setting the value of this property to `true`, which gives the migration tool permission to automatically overwrite databases. |
-### Mode of migrations
+### Choose a migration mode
-The default migration mode for migrations created using CLI commands is **online**. With the above properties filled out in your JSON file, an online migration would be created from your single server to flexible server.
+The default migration mode for migrations created through CLI commands is *online*. Filling out the preceding properties in your JSON file would create an online migration from your Single Server source to the Flexible Server target.
-If you want to migrate in **offline** mode, you need to add an additional property **"TriggerCutover":"true"** to your properties JSON file before initiating the create command.
+If you want to migrate in offline mode, you need to add another property (`"TriggerCutover":"true"`) to your JSON file before you initiate the `create` command.
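For illustration, here's a minimal sketch of an offline-mode properties file. Every value is a placeholder, and the `SecretParameters` sub-fields shown are illustrative assumptions; match the exact structure of the JSON template shown earlier in this article:

```bash
# Sketch: write a minimal offline-mode properties file (all values are placeholders).
cat > migrationBody.json <<'EOF'
{
  "properties": {
    "SourceDBServerResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/servers/<single-server>",
    "SecretParameters": {
      "AdminCredentials": {
        "SourceServerPassword": "<source-admin-password>",
        "TargetServerPassword": "<target-admin-password>"
      },
      "AADApp": {
        "ClientId": "<app-client-id>",
        "TenantId": "<tenant-id>",
        "AadSecret": "<app-client-secret>"
      }
    },
    "DBsToMigrate": ["db1", "db2"],
    "OverwriteDBsinTarget": "true",
    "TriggerCutover": "true"
  }
}
EOF
```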
### List migrations
-The **list command** shows the migration attempts that were made to a flexible server. The CLI command to list migrations is given below
+The `list` command shows the migration attempts that were made to a Flexible Server target. Here's the CLI command to list migrations:
```azurecli
az postgres flexible-server migration list [--subscription]
az postgres flexible-server migration list [--subscription]
[--filter]
```
-There is a parameter called **filter** and it can take **Active** and **All** as values.
+The `filter` parameter can take these values:
+
+- `Active`: Lists the current active migration attempts for the target server. It does not include the migrations that have reached a failed, canceled, or succeeded state.
+- `All`: Lists all the migration attempts to the target server. This includes both the active and past migrations, regardless of the state.
-- **Active** – Lists down the current active migration attempts for the target server. It does not include the migrations that have reached a failed/canceled/succeeded state.
-- **All** – Lists down all the migration attempts to the target server. This includes both the active and past migrations irrespective of the state.
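For example, to list all migration attempts regardless of state (reusing the sample values from the `create` example):

```azurecli-interactive
az postgres flexible-server migration list --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --filter All
```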
+For more information about this command, use the `help` parameter:
```azurecli-interactive
az postgres flexible-server migration list --help
```
-For any additional information.
-
-### Show Details
+### Show details
-The **list** gets the details of a specific migration. This includes information on the current state and substate of the migration. The CLI command to show the details of a migration is given below:
+Use the following `list` command to get the details of a specific migration. These details include information on the current state and substate of the migration.
```azurecli
az postgres flexible-server migration list [--subscription]
az postgres flexible-server migration list [--subscription]
[--migration-name]
```
-The **migration_name** is the name assigned to the migration during the **create migration** command. Here is a snapshot of the sample response from the **Show Details** CLI command.
+The `migration-name` parameter is the name assigned to the migration during the `create` command. Here's a snapshot of the sample response from the CLI command for showing details:
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-migration-name.png" alt-text="Screenshot of C L I migration name." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-migration-name.png":::
-Some important points to note on the command response:
+Note these important points for the command response:
-- As soon as the **create** migration command is triggered, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and to perform a few maintenance tasks.
-- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
-- Each DB being migrated has its own section with all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
-- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
-- For **Offline** mode, the migration moves to **Succeeded** state as soon as the **Migrating Data** sub state completes successfully. If there is an issue at the **Migrating Data** substate, the migration moves into a **Failed** state.
-- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and a substate of **WaitingForCutoverTrigger** after the **Migrating Data** state completes successfully. The details of **WaitingForUserAction** state are covered in detail in the next section.
+- As soon as the `create` command is triggered, the migration moves to the `InProgress` state and the `PerformingPreRequisiteSteps` substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and perform a few maintenance tasks.
+- After the `PerformingPreRequisiteSteps` substate is completed, the migration moves to the substate of `Migrating Data`, where the dump and restore of the databases take place.
+- Each database being migrated has its own section with all migration details, such as table count, incremental inserts, deletions, and pending bytes.
+- The time that the `Migrating Data` substate takes to finish depends on the size of databases that are being migrated.
+- For offline mode, the migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state.
+- For online mode, the migration moves to the state of `WaitingForUserAction` and a substate of `WaitingForCutoverTrigger` after the `Migrating Data` state finishes successfully. The next section covers the details of the `WaitingForUserAction` state.
+
+For more information about this command, use the `help` parameter:
```azurecli-interactive
az postgres flexible-server migration show --help
```
-for any additional information.
+### Update a migration
+
+As soon as the infrastructure setup is complete, the migration activity will pause. Messages in the response for the CLI command will show details if some prerequisites are missing or if the migration is at a state to perform a cutover. At this point, the migration goes into a state called `WaitingForUserAction`.
+
+You use the `update` command to set values for parameters, which helps the migration move to the next stage in the process. Let's look at each of the substates.
-### Update migration
+#### WaitingForLogicalReplicationSetupRequestOnSourceDB
-As soon as the infrastructure setup is complete, the migration activity will pause with appropriate messages seen in the **show details** CLI command response if some pre-requisites are missing or if the migration is at a state to perform a cutover. At this point, the migration goes into a state called **WaitingForUserAction**. The **update migration** command is used to set values for parameters, which helps the migration to move to the next stage in the process. Let us look at each of the sub-states.
+If logical replication is not enabled at the source server, or if it was not included as a part of the JSON file, the migration will wait for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to `Logical` on the portal. This change requires a server restart.
-- **WaitingForLogicalReplicationSetupRequestOnSourceDB** - If the logical replication is not set at the source server or if it was not included as a part of the JSON file, the migration will wait for logical replication to be enabled at the source. A user can enable the logical replication setting manually by changing the replication flag to **Logical** on the portal. This would require a server restart. This can also be enabled by the following CLI command
+You can also enable the logical replication setting by using the following CLI command:
```azurecli
az postgres flexible-server migration update [--subscription]
az postgres flexible-server migration update [--subscription]
[--initiate-data-migration]
```
-You need to pass the value **true** to the **initiate-data-migration** property to set logical replication on your source server.
-
-**For example:**
+To set logical replication on your source server, pass the value `true` to the `initiate-data-migration` property. For example:
```azurecli-interactive
az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true
```
-In case you have enabled it manually, **you would still need to issue the above update command** for the migration to move out of the **WaitingForUserAction** state. The server does not need a reboot again since it was already done via the portal action.
+If you enable it manually, *you still need to issue the preceding `update` command* for the migration to move out of the `WaitingForUserAction` state. The server doesn't need to restart again because that already happened via the portal action.
+
+#### WaitingForTargetDBOverwriteConfirmation
-- **WaitingForTargetDBOverwriteConfirmation** - This is the state where migration is waiting for confirmation on target overwrite as data is already present in the target server for the database that is being migrated. This can be enabled by the following CLI command.
+`WaitingForTargetDBOverwriteConfirmation` is the state where migration is waiting for confirmation on target overwrite, because data is already present in the target server for the database that's being migrated. You can enable it by using the following CLI command:
```azurecli
az postgres flexible-server migration update [--subscription]
az postgres flexible-server migration update [--subscription]
[--overwrite-dbs]
```
-You need to pass the value **true** to the **overwrite-dbs** property to give the permissions to the migration to overwrite any existing data in the target server.
-
-**For example:**
+To give the migration permissions to overwrite any existing data in the target server, you need to pass the value `true` to the `overwrite-dbs` property. For example:
```azurecli-interactive
az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true
```

-- **WaitingForCutoverTrigger** - Migration gets to this state when the dump and restore of the databases have been completed and the ongoing writes at your source single server is being replicated to the target flexible server. You should wait for the replication to complete so that the target is in sync with the source. You can monitor the replication lag by using the response from the **show migration** command. There is a metric called **Pending Bytes** associated with each database that is being migrated and this gives you indication of the difference between the source and target database in bytes. This should be nearing zero over time. Once it reaches zero for all the databases, stop any further writes to your single server. This should be followed by the validation of data and schema on your flexible server to make sure it matches exactly with the source server. After completing the above steps, you can trigger **cutover** by using the following CLI command.
+#### WaitingForCutoverTrigger
+
+Migration gets to the `WaitingForCutoverTrigger` state when the dump and restore of the databases have finished and the ongoing writes at your Single Server source are being replicated to the Flexible Server target. You should wait for the replication to finish so that the target is in sync with the source.
+
+You can monitor the replication lag by using the response from the `show` command. A metric called **Pending Bytes** is associated with each database that's being migrated. This metric gives you an indication of the difference between the source and target databases in bytes. This number should be nearing zero over time. After the number reaches zero for all the databases, stop any further writes to your Single Server source. Then, validate the data and schema on your Flexible Server target to make sure they match exactly with the source server.
+
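A hedged shell sketch for watching the lag, reusing the details command shown earlier (the polling loop and 60-second interval are arbitrary choices):

```bash
# Sketch: poll the migration details every 60 seconds and inspect the Pending Bytes values.
while true; do
  az postgres flexible-server migration list \
    --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 \
    --resource-group my-learning-rg \
    --name myflexibleserver \
    --migration-name migration1
  sleep 60
done
```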
+After you complete the preceding steps, you can trigger a cutover by using the following CLI command:
```azurecli
az postgres flexible-server migration update [--subscription]
az postgres flexible-server migration update [--subscription]
[--cutover]
```
-**For example:**
+For example:
```azurecli-interactive
az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover
```
-After issuing the above command, use the **show details** command to monitor if the cutover has completed successfully. Upon successful cutover, migration will move to **Succeeded** state. Update your application to point to the new target flexible server.
+After you use the preceding command, use the command for showing details to monitor if the cutover has finished successfully. Upon successful cutover, migration will move to a `Succeeded` state. Update your application to point to the new Flexible Server target.
+
+For more information about this command, use the `help` parameter:
```azurecli-interactive
az postgres flexible-server migration update --help
```
-for any additional information.
-
-### Delete/Cancel Migration
+### Delete or cancel a migration
-Any ongoing migration attempts can be deleted or canceled using the **delete migration** command. This command stops all migration activities in that task, but does not drop or rollback any changes on your target server. Below is the CLI command to delete a migration
+You can delete or cancel any ongoing migration attempts by using the `delete` command. This command stops all migration activities in that task, but it doesn't drop or roll back any changes on your target server. Here's the CLI command to delete a migration:
```azurecli
az postgres flexible-server migration delete [--subscription]
az postgres flexible-server migration delete [--subscription]
[--migration-name]
```
-**For example:**
+For example:
```azurecli-interactive
az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
```
+For more information about this command, use the `help` parameter:
+
```azurecli-interactive
az postgres flexible-server migration delete --help
```
-for any additional information.
-
-## Monitoring Migration
+## Monitoring migration
-The **create migration** command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into the **completed** state. The **show command** helps to monitor ongoing migrations since it gives the current state and substate of the migration.
+The `create` command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into the `completed` state. The `show` command helps you monitor ongoing migrations, because it gives the current state and substate of the migration.
-Migration **states**:
+The following tables describe the migration states and substates.
-| Migration State | Description |
+| Migration state | Description |
| - | - |
-| **InProgress** | The migration infrastructure is being setup, or the actual data migration is in progress. |
-| **Canceled** | The migration has been canceled or deleted. |
-| **Failed** | The migration has failed. |
-| **Succeeded** | The migration has succeeded and is complete. |
-| **WaitingForUserAction** | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
-
-Migration **substates**:
+| `InProgress` | The migration infrastructure is being set up, or the actual data migration is in progress. |
+| `Canceled` | The migration has been canceled or deleted. |
+| `Failed` | The migration has failed. |
+| `Succeeded` | The migration has succeeded and is complete. |
+| `WaitingForUserAction` | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
-| Migration substates | Description |
+| Migration substate | Description |
| - | - |
-| **PerformingPreRequisiteSteps** | Infrastructure is being set up and is being prepped for data migration. |
-| **MigratingData** | Data is being migrated. |
-| **CompletingMigration** | Migration cutover in progress. |
-| **WaitingForLogicalReplicationSetupRequestOnSourceDB** | Waiting for logical replication enablement. You can manually enable this manually or enable via the update migration CLI command covered in the next section. |
-| **WaitingForCutoverTrigger** | Migration is ready for cutover. You can start the cutover when ready. |
-| **WaitingForTargetDBOverwriteConfirmation** | Waiting for confirmation on target overwrite as data is present in the target server being migrated into. <br> You can enable this via the **update migration** CLI command. |
-| **Completed** | Cutover was successful, and migration is complete. |
--
-## How to find if custom DNS is used for name resolution?
-Navigate to your Virtual network where you deployed your source or the target server and click on **DNS server**. It should indicate if it is using a custom DNS server or default Azure provided DNS server.
+| `PerformingPreRequisiteSteps` | Infrastructure is being set up and is being prepped for data migration. |
+| `MigratingData` | Data is being migrated. |
+| `CompletingMigration` | Migration cutover is in progress. |
+| `WaitingForLogicalReplicationSetupRequestOnSourceDB` | Waiting for logical replication enablement. You can enable logical replication manually or by using the `update` CLI command covered earlier in this article. |
+| `WaitingForCutoverTrigger` | Migration is ready for cutover. You can start the cutover when ready. |
+| `WaitingForTargetDBOverwriteConfirmation` | Waiting for confirmation on target overwrite. Data is present in the target server. <br> You can confirm the overwrite via the `update` CLI command. |
+| `Completed` | Cutover was successful, and migration is complete. |
+## Custom DNS for name resolution
-## Post Migration Steps
+To find out if custom DNS is used for name resolution, go to the virtual network where you deployed your source or target server, and then select **DNS server**. The virtual network should indicate if it's using a custom DNS server or the default Azure-provided DNS server.
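As a hedged CLI alternative to the portal check, the standard `az network vnet` command shows the configured DNS servers; an empty result means the default Azure-provided DNS is in use (`<rg>` and `<vnet-name>` are placeholders):

```azurecli
az network vnet show --resource-group <rg> --name <vnet-name> --query dhcpOptions.dnsServers
```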
-Make sure the post migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end to end migration.
## Next steps

-- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
+- For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
Title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure portal"
+ Title: "Migrate from Single Server to Flexible Server by using the Azure portal"
-description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using Portal.
+description: Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server by using the Azure portal.
Last updated 05/09/2022
-# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
-
+# Migrate from Single Server to Flexible Server by using the Azure portal
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
->[!NOTE]
-> Single Server to Flexible Server migration tool is in private preview.
-
-This guide shows you how to use Single to Flexible server migration tool to migrate databases from Azure database for PostgreSQL Single server to Flexible server.
+This article shows you how to use the migration tool in the Azure portal to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
-## Before you begin
+>[!NOTE]
+> The migration tool is in private preview.
-1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
-2. Register your subscription for the Azure Database Migration Service
+## Prerequisites
-Go to Azure portal homepage and navigate to your subscription as shown below.
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
+2. Register your subscription for Azure Database Migration Service:
+ 1. On the Azure portal, go to your subscription.
-In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration**; as shown below and click on **Register**.
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png" alt-text="Screenshot of Azure portal subscription details." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png":::
+ 1. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
-## Pre-requisites
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png" alt-text="Screenshot of the Register button for Azure Data Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png":::
-Take care of the [pre-requisites](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration tool.
+3. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#prerequisites). You need them to get started with the migration tool.
-## Configure migration task
+## Configure the migration task
-Single to Flexible server migration tool comes with a simple, wizard-based portal experience. Let us get started to know the steps needed to consume the tool from portal.
+The migration tool comes with a simple, wizard-based experience on the Azure portal. Here's how to start:
-1. **Sign into the Azure portal -** Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
+1. Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
-2. Navigate to your Azure database for PostgreSQL flexible server.If you have not created an Azure database for PostgreSQL flexible server, create one using this [link](../flexible-server/quickstart-create-server-portal.md).
+2. Go to your Azure Database for PostgreSQL Flexible Server target. If you haven't created a Flexible Server target, [create one now](../flexible-server/quickstart-create-server-portal.md).
-3. In the **Overview** tab of your flexible server, use the left navigation menu and scroll down to the option of **Migration (preview)** and click on it.
+3. On the **Overview** tab for Flexible Server, on the left menu, scroll down to **Migration (preview)** and select it.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of Migration Preview Tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of Migration tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
-4. Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration tool, you will see an empty grid with a prompt to begin your first migration.
+4. Select the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of the Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
- If you have already created migrations to your flexible server, you should see the grid populated with information of the list of migrations that were attempted to this flexible server from single servers.
+ If you've already created migrations to your Flexible Server target, the grid is populated with information about migrations that were attempted to this target from Single Server sources.
-5. Click on the **Migrate from Single Server** button. You will be taken through a wizard-based user interface to create a migration to this flexible server from any single server.
+5. Select the **Migrate from Single Server** button. You'll go through a wizard-based series of tabs to create a migration to this Flexible Server target from any Single Server source.
### Setup tab
-The first is the setup tab which has basic information about the migration and the list of pre-requisites that need to be taken care of to get started with migrations. The list of pre-requisites is the same as the ones listed in the pre-requisites section [here](./concepts-single-to-flexible.md). Click on the provided link to know more about the same.
+The first tab is **Setup**. It has basic information about the migration and the list of prerequisites for getting started with migrations. These prerequisites are the same as the ones listed in the [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md) article.
-- The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **&#39;-&#39;**. The name cannot start with a **&#39;-&#39;** and should be unique for a target server. No two migrations to the same flexible server can have the same name.
-- The **Migration resource group** is where all the migration-related components will be created by this migration tool.
+**Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and does not accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name.
-By default, it is resource group of the target flexible server and all the components will be cleaned up automatically once the migration completes. If you want to create a temporary resource group for migration-related purposes, create a resource group and select the same from the dropdown.
+**Migration resource group** is where the migration tool will create all the migration-related components. By default, the resource group of the Flexible Server target and all the components will be cleaned up automatically after the migration finishes. If you want to create a temporary resource group for the migration, create it and then select it from the dropdown list.
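If you prefer to prepare that temporary resource group from a script, here's a minimal Az PowerShell sketch, assuming the Az module is installed and you're signed in; the group name and region are placeholder values:

```powershell
# Create a temporary resource group to hold the migration-related components.
# 'pg-migration-temp-rg' and 'eastus' are example values; substitute your own.
New-AzResourceGroup -Name 'pg-migration-temp-rg' -Location 'eastus'
```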
-- For the **Azure Active Directory App**, click the **select** option and pick the app that was created as a part of the pre-requisite step. Once the Azure AD App is chosen, paste the client secret that was generated for the Azure AD app to the **Azure Active Directory Client Secret** field.
+For **Azure Active Directory App**, click the **select** option and choose the Azure Active Directory app that you created for the prerequisite step. Then, in the **Azure Active Directory Client Secret** box, paste the client secret that was generated for that app.
-Click on the **Next** button.
+Select the **Next** button.
### Source tab
+The **Source** tab prompts you for details about the Single Server source that databases need to be migrated from.
-The source tab prompts you to give details related to the source single server from which databases needs to be migrated. As soon as you pick the **Subscription** and **Resource Group**, the dropdown for server names will have the list of single servers under that resource group across regions. It is recommended to migrate databases from a single server to flexible server in the same region.
-
-Choose the single server from which you want to migrate databases from, in the drop down.
-Once the single server is chosen, the fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration tool to login into the single server to initiate the dump and migration.
+After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Server sources under that resource group across regions. Select the source that you want to migrate databases from. We recommend that you migrate databases from a Single Server source to a Flexible Server target in the same region.
-You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers.
+After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are automatically populated. The server admin login name is the admin username that was used to create the Single Server source. In the **Password** box, enter the password for that admin login name. It will enable the migration tool to log in to the Single Server source to initiate the dump and migration.
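If you want to confirm those credentials before starting the wizard, you can test them with `psql`. This is a hedged sketch, assuming `psql` is installed locally and your client IP is allowed through the Single Server firewall; the server and user names are placeholders. Note that Single Server logins use the `user@servername` form:

```powershell
# Verify the Single Server admin login before creating the migration.
# Replace the server name, admin user, and password with your own values.
$env:PGPASSWORD = '<admin-password>'
psql "host=mysingleserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mysingleserver sslmode=require" -c "SELECT version();"
```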
-The final property in the source tab is migration mode. The migration tool offers online and offline mode of migration. The concepts page talks more about the [migration modes and their differences](./concepts-single-to-flexible.md).
+Under **Choose databases to migrate**, there's a list of user databases inside the Single Server source. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations by using the same experience between the source and target servers.
-Once you pick the migration mode, the restrictions associated with the mode are displayed.
+The final property on the **Source** tab is **Migration mode**. The migration tool offers online and offline modes of migration. The [concepts article](./concepts-single-to-flexible.md) talks more about the migration modes and their differences. After you choose the migration mode, the restrictions that are associated with that mode appear.
-After filling out all the fields, please click the **Next** button.
+When you're finished filling out all the fields, select the **Next** button.
### Target tab
+The **Target** tab displays metadata for the Flexible Server target, like subscription name, resource group, server name, location, and PostgreSQL version.
+ :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-target.png" alt-text="Screenshot of target database server details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-target.png":::
-This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays **server admin login name** which is the admin username that was used during the creation of the flexible server.Enter the corresponding password for the admin user. This is required for the migration tool to login into the flexible server to perform restore operations.
+For **Server admin login name**, the tab displays the admin username that was used during the creation of the Flexible Server target. Enter the corresponding password for the admin user. This password is required for the migration tool to log in to the Flexible Server target and perform restore operations.
-Choose an option **yes/no** for **Authorize DB overwrite**.
+For **Authorize DB overwrite**:
-- If you set the option to **Yes**, you give this migration service permission to overwrite existing data in case when a database that is being migrated to flexible server is already present.
-- If set to **No**, it goes into a waiting state and asks you for permission either to overwrite the data or to cancel the migration.
+- If you select **Yes**, you give this migration service permission to overwrite existing data if a database that's being migrated to Flexible Server is already present.
+- If you select **No**, the migration service goes into a waiting state and asks you for permission to either overwrite the data or cancel the migration.
-Click on the **Next** button
+Select the **Next** button.
### Networking tab
-The content on the Networking tab depends on the networking topology of your source and target servers.
-
-- If both source and target servers are in public access, then you are going to see the message below.
+The content on the **Networking** tab depends on the networking topology of your source and target servers. If both source and target servers are in public access, the following message appears.
-In this case, you need not do anything and can just click on the **Next** button.
+In this case, you don't need to do anything and can select the **Next** button.
-- If either the source or target server is configured in private access, then the content of the networking tab is going to be different. Let us try to understand what does private access mean for single server and flexible server:
+If either the source or target server is configured in private access, the content of the **Networking** tab is different.
-- **Single Server Private Access** – **Deny public network access** set to **Yes** and a private end point configured
-- **Flexible Server Private Access** – When flexible server is deployed inside a Vnet.
-If either source or target is configured in private access, then the networking tab looks like the following
+Let's try to understand what private access means for Single Server and Flexible Server:
+- **Single Server Private Access**: **Deny public network access** is set to **Yes**, and a private endpoint is configured.
+- **Flexible Server Private Access**: A Flexible Server target is deployed inside a virtual network.
-All the fields will be automatically populated with subnet details. This is the subnet in which the migration tool will deploy Azure DMS to move data between the source and target.
+For private access, all the fields are automatically populated with subnet details. This is the subnet in which the migration tool will deploy Azure Database Migration Service to move data between the source and the target.
-You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers.
+You can use the suggested subnet or choose a different one. But make sure that the selected subnet can connect to both the source and target servers.
-After picking a subnet, click on **Next** button
+After you choose a subnet, select the **Next** button.
### Review + create tab
-This tab gives a summary of all the details for creating the migration. Review the details and click on the **Create** button to start the migration.
+The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration.
-## Monitoring migrations
+## Monitor migrations
-After clicking on the **Create** button, you should see a notification in a few seconds saying the migration was successfully created.
+After you select the **Create** button, a notification appears in a few seconds to say that the migration was successfully created.
-You should automatically be redirected to **Migrations (Preview)** page of flexible server that will have a new entry of the recently created migration
+You should automatically be redirected to the **Migration (Preview)** page of Flexible Server. That page has a new entry for the recently created migration.
-The grid displaying the migrations has various columns including **Name**, **Status**, **Source server name**, **Region**, **Version**, **Database names**, and the **Migration start time**. By default, the grid shows the list of migrations in the decreasing order of migration start time. In other words, recent migrations appear on top of the grid.
+The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Version**, **Databases**, and **Start time**. By default, the grid shows the list of migrations in descending order of migration start times. In other words, recent migrations appear on top of the grid.
You can use the refresh button to refresh the status of the migrations.
-You can click on the migration name in the grid to see the details of that migration.
+You can also select the migration name in the grid to see the details of that migration.
-- As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate since it takes time to create and deploy DMS, add its IP on firewall list of source and target servers and to perform a few maintenance tasks.
-- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place.
-- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated.
-- You can click on each of the DBs that are being migrated and a fan-out blade appears that has all migration details such as table count, incremental inserts, deletes, pending bytes, etc.
-- For **Offline** mode, the migration moves to **Succeeded** state as soon as the **Migrating Data** state completes successfully. If there is an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
-- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and **WaitingForCutOver** substate after the **Migrating Data** substate completes successfully.
+As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate. The reason is that it takes time to create and deploy Database Migration Service, add the IP address on the firewall list of source and target servers, and perform maintenance tasks.
+After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place. The time that the **Migrating Data** substate takes to finish depends on the size of databases that you're migrating.
-You can click on the migration name to go into the migration details page and should see the substate of **WaitingForCutover**.
+When you select each of the databases that are being migrated, a fan-out pane appears. It has all the migration details, such as table count, incremental inserts, deletes, and pending bytes.
+For offline mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** state finishes successfully. If there's an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
-At this stage, the ongoing writes at your source single server will be replicated to the target flexible server using the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source. You can monitor the replication lag by clicking on each of the databases that are being migrated. It opens a fan-out blade with a bunch of metrics. Look for the value of **Pending Bytes** metric and it should be nearing zero over time. Once it reaches to a few MB for all the databases, stop any further writes to your single server and wait until the metric reaches 0. This should be followed by the validation of data and schema on your flexible server to make sure it matches exactly with the source server.
+For online mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutOver** substate after the **Migrating Data** substate finishes successfully.
-After completing the above steps, click on the **Cutover** button. You should see the following message
+Select the migration name to open the migration details page. There, you should see the substate of **WaitingForCutover**.
-Click on the **Yes** button to start cutover.
-In a few seconds after starting cutover, you should see the following notification
+At this stage, the ongoing writes at your Single Server source are replicated to the Flexible Server target via the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source.
+You can monitor the replication lag by selecting each database that's being migrated. That opens a fan-out pane with metrics. The value of the **Pending Bytes** metric should be nearing zero over time. After it reaches a few megabytes for all the databases, stop any further writes to your Single Server source and wait until the metric reaches 0. Then, validate the data and schemas on your Flexible Server target to make sure that they match exactly with the source server.
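If you also want to approximate the **Pending Bytes** metric directly on the source, you can query the logical replication slots there. This is a hedged sketch, assuming PostgreSQL 10 or later, `psql` access to the Single Server source, and placeholder server and user names; the slot names that the tool creates are internal and may differ:

```powershell
# Approximate bytes not yet confirmed by the target, per logical replication slot.
$query = @"
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS pending_bytes
FROM pg_replication_slots
WHERE slot_type = 'logical';
"@
psql "host=mysingleserver.postgres.database.azure.com dbname=postgres user=myadmin@mysingleserver sslmode=require" -c $query
```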
-Once the cutover is complete, the migration moves to **Succeeded** state and migration of schema data from your single server to flexible server is now complete. You can use the refresh button in the page to check if the cutover was successful.
+After you complete the preceding steps, select the **Cutover** button. The following message appears.
-After completing the above steps, you can make changes to your application code to point database connection strings to the flexible server and start using it as the primary database server.
-Possible migration states include
+Select the **Yes** button to start cutover.
-- **InProgress**: The migration infrastructure is being setup, or the actual data migration is in progress.
+A few seconds after you start cutover, the following notification appears.
++
+When the cutover is complete, the migration moves to the **Succeeded** state. Migration of schema data from your Single Server source to your Flexible Server target is now complete. You can use the refresh button on the page to check if the cutover was successful.
+
+After you complete the preceding steps, you can change your application code to point database connection strings to Flexible Server. You can then start using the target as the primary database server.
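One detail worth calling out when you repoint connection strings: Flexible Server doesn't use the `user@servername` login convention that Single Server does. A hedged before-and-after sketch, where the server names, user, and database are placeholder values:

```powershell
# Single Server (before): the user name carries the @servername suffix.
#   host=mysingleserver.postgres.database.azure.com user=myadmin@mysingleserver

# Flexible Server (after): plain user name and the new host name.
psql "host=myflexibleserver.postgres.database.azure.com port=5432 user=myadmin dbname=app sslmode=require" -c "SELECT 1;"
```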
+
+Possible migration states include:
+
+- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.
- **Canceled**: The migration has been canceled or deleted.
- **Failed**: The migration has failed.
- **Succeeded**: The migration has succeeded and is complete.
-- **WaitingForUserAction**: Migration is waiting on a user action..
+- **WaitingForUserAction**: The migration is waiting for a user action.
-Possible migration substates include
+Possible migration substates include:
-- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration
-- **MigratingData**: Data is being migrated
-- **CompletingMigration**: Migration cutover in progress
+- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration.
+- **MigratingData**: Data is being migrated.
+- **CompletingMigration**: Migration cutover is in progress.
- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.
- **WaitingForCutoverTrigger**: Migration is ready for cutover.
-- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite as data is present in the target server being migrated into.
+- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite. Data is present in the target server that you're migrating into.
- **Completed**: Cutover was successful, and migration is complete.

## Cancel migrations
-You also have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in **InProgress** or **WaitingForUserAction** state. You cannot cancel a migration that has either already **Succeeded** or **Failed**.
-
-You can choose multiple ongoing migrations at once and can cancel them.
+You have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in the **InProgress** or **WaitingForUserAction** state. You can't cancel a migration that's in the **Succeeded** or **Failed** state.
+You can choose multiple ongoing migrations at once and cancel them.
-Note that **cancel migration** just stops any more further migration activity on your target server. It will not drop or roll back any changes on your target server that were done by the migration attempts. Make sure to drop the databases involved in a canceled migration on your target server.
-## Post migration steps
-
-Make sure the post migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end to end migration.
+Canceling a migration stops further migration activity on your target server. It doesn't drop or roll back any changes on your target server from the migration attempts. Be sure to drop the databases involved in a canceled migration on your target server.
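For example, here's a hedged cleanup sketch that drops one such database on the target, assuming `psql` access to the Flexible Server target; the server, user, and database names are placeholders:

```powershell
# Drop a partially migrated database left behind by a canceled migration.
psql "host=myflexibleserver.postgres.database.azure.com user=myadmin dbname=postgres sslmode=require" -c "DROP DATABASE IF EXISTS mydb;"
```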
## Next steps

-- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
+Follow the [post-migration steps](./concepts-single-to-flexible.md) for a successful end-to-end migration.
postgresql How To Setup Azure Ad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-setup-azure-ad-app-portal.md
Title: "Set up Azure AD app to use with Single to Flexible migration"
+ Title: "Set up an Azure AD app to use with Single Server to Flexible Server migration"
-description: Learn about setting up Azure AD App to be used with Single to Flexible Server migration feature.
+description: Learn about setting up an Azure AD app to be used with the feature that migrates from Single Server to Flexible Server.
Last updated 05/09/2022
-# Set up Azure AD app to use with Single to Flexible server Migration
+# Set up an Azure AD app to use with migration from Single Server to Flexible Server
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This quick start article shows you how to set up Azure Active Directory (Azure AD) app to use with Single to Flexible server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../../active-directory/develop/howto-create-service-principal-portal.md) for details. Azure AD App helps with role-based access control (RBAC) as the migration infrastructure requires access to both the source and target servers, and is restricted by the roles assigned to the Azure Active Directory App. The Azure AD app instance once created, can be used to manage multiple migrations. To get started, create a new Azure Active Directory Enterprise App by doing the following steps:
+This article shows you how to set up an [Azure Active Directory (Azure AD) app](../../active-directory/develop/howto-create-service-principal-portal.md) to use with a migration from Azure Database for PostgreSQL Single Server to Flexible Server.
-## Create Azure AD App
+An Azure AD app helps with role-based access control (RBAC). The migration infrastructure requires access to both the source and target servers, and it's restricted by the roles assigned to the Azure AD app. After you create the Azure AD app, you can use it to manage multiple migrations.
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings.
-2. Search for Azure Active Directory in the search bar on the top in the portal.
-3. Within the Azure Active Directory portal, under **Manage** on the left, choose **App Registrations**.
-4. Click on **New Registration**
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png" alt-text="New Registration for Azure Active Directory App." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png":::
+## Create an Azure AD app
+
+1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
+2. In the Azure portal, enter **Azure Active Directory** in the search box.
+3. On the page for Azure Active Directory, under **Manage** on the left, select **App registrations**.
+4. Select **New registration**.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png" alt-text="Screenshot that shows selections for creating a new registration for an Azure Active Directory app." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png":::
-5. Give the app registration a name, choose an option that suits your needs for account types and click register
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png" alt-text="Azure AD App Name screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png":::
+5. Give the app registration a name, choose an option that suits your needs for account types, and then select **Register**.
-6. Once the app is created, you can copy the client ID and tenant ID required for later steps in the migration. Next, click on **Add a certificate or secret**.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png" alt-text="Add a certificate screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png":::
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png" alt-text="Screenshot that shows selections for naming and registering an Azure Active Directory app." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png":::
-7. In the next screen, click on **New client secret**.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png" alt-text="New Client Secret screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png":::
+6. After the app is created, copy the client ID and tenant ID and store them. You'll need them for later steps in the migration. Then, select **Add a certificate or secret**.
-8. In the fan-out blade that opens, add a description, and select the drop-down to pick the life span of your Azure Active Directory App. Once all the migrations are complete, the Azure Active Directory App that was created for Role Based Access Control can be deleted. The default option is six months. If you don't need Azure Active Directory App for six months, choose three months and click **Add**.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png" alt-text="Client Secret Description." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png":::
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png" alt-text="Screenshot that shows essential information about an Azure Active Directory app, along with the button for adding a certificate or secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png":::
-9. In the next screen, copy the **Value** column that has the details of the Azure Active Directory App secret. This can be copied only while creation. If you miss copying the secret, you will need to delete the secret and create another one for future tries.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png" alt-text="Copying client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png":::
+7. On the **Certificates & secrets** page, on the **Client secrets** tab, select **New client secret**.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png" alt-text="Screenshot that shows the button for creating a new client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png":::
-10. Once Azure Active Directory App is created, you will need to add contributor privileges for this Azure Active Directory app to the following resources:
+8. On the fan-out pane, add a description, and then use the drop-down list to select the life span of your Azure AD app.
- | Resource | Type | Description |
- | - | - | - |
- | Single Server | Required | Source single server you're migrating from. |
- | Flexible Server | Required | Target flexible server you're migrating into. |
- | Azure Resource Group | Required | Resource group for the migration. By default, this is the target flexible server resource group. If you're using a temporary resource group to create the migration infrastructure, the Azure Active Directory App will require contributor privileges to this resource group. |
- | VNET | Required (if used) | If the source or the target happens to have private access, then the Azure Active Directory App will require contributor privileges to corresponding VNet. If you're using public access, you can skip this step. |
+ After all the migrations are complete, you can delete the Azure AD app that you created for RBAC. The default option is **6 months**. If you don't need the Azure AD app for six months, select **3 months**. Then select **Add**.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png" alt-text="Screenshot that shows adding a description and selecting a life span for a client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png":::
+9. In the **Value** column, copy the Azure AD app secret. You can copy the secret only during creation. If you miss this step, you'll need to delete the secret and create another one for future tries.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png" alt-text="Screenshot of copying a client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png":::
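If you'd rather script the registration than click through the portal, a rough Az PowerShell equivalent follows. Treat it as a sketch: the `New-AzADApplication` and `New-AzADAppCredential` parameters have changed across Az module versions, and the display name is a placeholder.

```powershell
# Register the app and create a client secret that expires in three months.
$app = New-AzADApplication -DisplayName 'pg-migration-app'
$secret = New-AzADAppCredential -ObjectId $app.Id -EndDate (Get-Date).AddMonths(3)

# Record these values for the migration wizard; the secret is shown only once.
$app.AppId                 # client ID
(Get-AzContext).Tenant.Id  # tenant ID
$secret.SecretText         # client secret
```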
## Add contributor privileges to an Azure resource
-Repeat the steps listed below for source single server, target flexible server, resource group and Vnet (if used).
+After you create the Azure AD app, you need to add contributor privileges for it to the following resources.
+
+| Resource | Type | Description |
+| - | - | - |
+| Single Server | Required | Single Server source that you're migrating from. |
+| Flexible Server | Required | Flexible Server target that you're migrating into. |
+| Azure resource group | Required | Resource group for the migration. By default, this is the resource group for the Flexible Server target. If you're using a temporary resource group to create the migration infrastructure, the Azure AD app will require contributor privileges to this resource group. |
+| Virtual network | Required (if used) | If the source or the target has private access, the Azure AD app will require contributor privileges to the corresponding virtual network. If you're using public access, you can skip this step. |
+
+The following steps add contributor privileges to a Flexible Server target. Repeat the steps for the Single Server source, resource group, and virtual network (if used).
-1. For the target flexible server, select the target flexible server in the Azure portal. Click on Access Control (IAM) on the top left.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png" alt-text="Access Control I A M screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png":::
+1. In the Azure portal, select the Flexible Server target. Then select **Access Control (IAM)** on the upper left.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png" alt-text="Screenshot of the Access Control I A M page." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png":::
-2. Click **Add** and choose **Add role assignment**.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png" alt-text="Add role assignment here." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png":::
+2. Select **Add** > **Add role assignment**.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png" alt-text="Screenshot that shows selections for adding a role assignment." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png":::
-> [!NOTE]
-> The Add role assignment capability is only enabled for users in the subscription with role type as **Owners**. Users with other roles do not have permission to add role assignments.
+ > [!NOTE]
+ > The **Add role assignment** capability is enabled only for users in the subscription who have a role type of **Owners**. Users who have other roles don't have permission to add role assignments.
-3. Under the **Role** tab, click on **Contributor** and click Next button
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png" alt-text="Choosing Contributor Screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png":::
+3. On the **Role** tab, select **Contributor** > **Next**.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png" alt-text="Screenshot of the selections for choosing the contributor role." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png":::
-4. Under the Members tab, keep the default option of **Assign access to** User, group or service principal and click **Select Members**. Search for your Azure Active Directory App and click on **Select**.
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png" alt-text="Review and Assign Screen." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png":::
+4. On the **Members** tab, keep the default option of **User, group, or service principal** for **Assign access to**. Select **Select members**, search for your Azure AD app, and then choose **Select**.
+
+ :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png" alt-text="Screenshot of the Members tab." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png":::
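The same assignment can be scripted. A hedged Az PowerShell sketch, assuming you hold Owner rights on the scope; the application (client) ID and resource ID are placeholders, and you'd repeat the call for the Single Server source, resource group, and virtual network:

```powershell
# Grant the Azure AD app Contributor rights on the Flexible Server target.
New-AzRoleAssignment -ApplicationId '<app-client-id>' `
    -RoleDefinitionName 'Contributor' `
    -Scope '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server>'
```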
## Next steps

-- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md)
-- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flexible-portal.md)
-- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
+- [Single Server to Flexible Server migration concepts](./concepts-single-to-flexible.md)
+- [Migrate from Single Server to Flexible Server by using the Azure portal](./how-to-migrate-single-to-flexible-portal.md)
+- [Migrate from Single Server to Flexible Server by using the Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status |
|:-|:--|:-|:--|
|Azure Container Registry | All public regions<br/> All Government regions | Supported with premium tier of container registry. [Select for tiers](../container-registry/container-registry-skus.md)| GA <br/> [Learn how to create a private endpoint for Azure Container Registry.](../container-registry/container-registry-private-link.md) |
-|Azure Kubernetes Service - Kubernetes API | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Kubernetes Service.](../aks/private-clusters.md) |
+|Azure Kubernetes Service - Kubernetes API | All public regions <br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Kubernetes Service.](../aks/private-clusters.md) |
### Databases
private-link Tutorial Private Endpoint Cosmosdb Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-cosmosdb-portal.md
Previously updated : 06/16/2022 Last updated : 06/22/2022
The bastion host will be used to connect securely to the virtual machine for tes
||--|
| **Project Details** |  |
| Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
+ | Resource Group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
| **Instance details** |  |
| Name | Enter **myVNet**. |
| Region | Select **East US**. |
private-link Tutorial Private Endpoint Sql Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-portal.md
- Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Portal'
-description: Use this tutorial to learn how to create an Azure SQL server with a private endpoint using the Azure portal.
+ Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Azure portal'
+description: Get started with this tutorial to learn how to connect to a storage account privately via Azure Private Endpoint using Azure portal.
-# Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
Previously updated : 10/20/2020 Last updated : 06/22/2022
+# Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
# Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Azure portal
-Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to communicate with Private Link resources privately.
+Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as Azure SQL server.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Create an Azure SQL server and private endpoint.
> * Test connectivity to the SQL server private endpoint.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure subscription
## Sign in to Azure
-Sign in to the Azure portal at https://portal.azure.com.
-
+Sign in to the [Azure portal](https://portal.azure.com).
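If you'd like to script any of the later steps instead of using the portal, sign in with Az PowerShell too. A minimal sketch, assuming the Az module is installed:

```powershell
# Sign in interactively, then pick the subscription to work in.
Connect-AzAccount
Set-AzContext -Subscription '<subscription-name-or-id>'
```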
## Create a virtual network and bastion host
The bastion host will be used to connect securely to the virtual machine for tes
2. In **Create virtual network**, enter or select this information in the **Basics** tab:
- | **Setting** | **Value** |
+ | Setting | Value |
||--|
| **Project Details** |  |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreateSQLEndpointTutorial-rg** |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter **CreateSQLEndpointTutorial** in **Name**. </br> Select **OK**. |
| **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **East US** |
+ | Name | Enter **myVNet**. |
+ | Region | Select **East US**. |
3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
The bastion host will be used to connect securely to the virtual machine for tes
| Setting | Value |
|--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
+ | IPv4 address space | Enter **10.1.0.0/16**. |
5. Under **Subnet name**, select the word **default**.
The bastion host will be used to connect securely to the virtual machine for tes
| Setting | Value |
|--|-|
- | Subnet name | Enter **mySubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
+ | Subnet name | Enter **mySubnet**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
7. Select **Save**.
The bastion host will be used to connect securely to the virtual machine for tes
| Setting | Value |
|--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
+ | Bastion name | Enter **myBastionHost**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
| Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
In this section, you'll create a virtual machine that will be used to test the p
1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+2. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
| Setting | Value |
|--|-|
| **Project Details** |  |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreateSQLEndpointTutorial** |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **CreateSQLEndpointTutorial**. |
| **Instance details** | |
- | Virtual machine name | Enter **myVM** |
- | Region | Select **East US** |
- | Availability Options | Select **No infrastructure redundancy required** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** |
- | Azure Spot instance | Select **No** |
- | Size | Choose VM size or take default setting |
+ | Virtual machine name | Enter **myVM**. |
+ | Region | Select **(US) East US**. |
+ | Availability Options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
+ | Azure Spot instance | Select **No**. |
+ | Size | Choose VM size or take default setting. |
| **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-4. In the Networking tab, select or enter:
+4. In the **Networking** tab, enter or select this information:
| Setting | Value |
|-|-|
| **Network interface** |  |
- | Virtual network | **myVNet** |
- | Subnet | **mySubnet** |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet**. |
| Public IP | Select **None**. |
- | NIC network security group | **Basic**|
+ | NIC network security group | Select **Basic**. |
| Public inbound ports | Select **None**. |

5. Select **Review + create**.
In this section, you'll create a SQL server in Azure.
1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **SQL database**.
-1. In the **Basics** tab of **Create SQL database**, enter, or select this information:
+1. In the **Basics** tab of **Create SQL database**, enter or select this information:
| Setting | Value |
| - | -- |
In this section, you'll create a SQL server in Azure.
| Subscription | Select your subscription. |
| Resource group | Select **CreateSQLEndpointTutorial**. You created this resource group in the previous section.|
| **Database details** |  |
- | Database name | Enter **mysqldatabase**. If this name is taken, create a unique name. |
+ | Database name | Enter **mysqldatabase**. |
| Server | Select **Create new**. |
-6. In **New server**, enter or select this information:
+1. In **Create SQL Database Server**, enter or select this information:
| Setting | Value |
| - | -- |
+ | **Server details** | |
| Server name | Enter **mysqlserver**. If this name is taken, create a unique name.|
+ | Location | Select **(US) East US**. |
+ | **Authentication** | |
+ | Authentication method | Select **Use SQL authentication**. |
| Server admin login | Enter an administrator name of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 8 characters long and meet the defined requirements. |
- | Location | Select **East US** |
+ | Password | Enter a password of your choosing. The password must be at least eight characters long and meet the defined requirements. |
+ | Confirm password | Reenter password. |
-7. Select **OK**.
+1. Select **OK**.
-8. Select the **Networking** tab or select the **Next: Networking** button.
+1. In the **Basics** tab, enter or select this information after creating the SQL database server:
-9. In the **Networking** tab, enter or select this information:
+ | Setting | Value |
+ | - | -- |
+ | **Database details** | |
+ | Want to use SQL elastic pool? | Select **No**. |
+ | Compute + Storage | Take default settings or select **Configure database** to configure compute and storage settings. |
+ | **Backup storage redundancy** | |
+ | Backup storage redundancy | Select **Locally-redundant backup storage**. |
+
+ :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/create-sql-database-basics-tab-inline.png" alt-text="Screenshot of Create S Q L Database page showing the settings used." lightbox="./media/tutorial-private-endpoint-sql-portal/create-sql-database-basics-tab-expanded.png":::
+
+1. Select the **Networking** tab or select the **Next: Networking** button.
+
+1. In the **Networking** tab, enter or select this information:
| Setting | Value |
| - | -- |
| **Network connectivity** |  |
| Connectivity method | Select **Private endpoint**. |
-10. Select **+ Add private endpoint** in **Private endpoints**.
+1. Select **+ Add private endpoint** in **Private endpoints**.
-11. In **Create private endpoint**, enter or select this information:
+1. In **Create private endpoint**, enter or select this information:
| Setting | Value |
| - | -- |
In this section, you'll create a SQL server in Azure.
| Resource group | Select **CreateSQLEndpointTutorial**. |
| Location | Select **East US**. |
| Name | Enter **myPrivateSQLendpoint**. |
- | Target sub-resource | Select **SQLServer**. |
+ | Target sub-resource | Select **SqlServer**. |
| **Networking** |  |
| Virtual network | Select **myVNet**. |
| Subnet | Select **mySubnet**. |
In this section, you'll create a SQL server in Azure.
| Integrate with private DNS zone | Leave the default **Yes**. |
| Private DNS Zone | Leave the default **(New) privatelink.database.windows.net**. |
-12. Select **OK**.
+1. Select **OK**.
+
+ :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/create-private-endpoint-sql-inline.png" alt-text="Screenshot of Create private endpoint page showing the settings used." lightbox="./media/tutorial-private-endpoint-sql-portal/create-private-endpoint-sql-expanded.png":::
-13. Select **Review + create**.
+1. Select **Review + create**.
-14. Select **Create**.
+1. Select **Create**.
> [!IMPORTANT]
-> When adding a Private endpoint connection, public routing to your Azure SQL logical server is not blocked by default. The setting "Deny public network access" under the "Firewall and virtual networks" blade is left unchecked by default. To disable public network access ensure this is checked.
+> When adding a Private endpoint connection, public routing to your Azure SQL server is not blocked by default. The setting "Deny public network access" under the "Firewall and virtual networks" blade is left unchecked by default. To disable public network access ensure this is checked.
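For reference, the private endpoint that the wizard creates can also be built with Az PowerShell. This is a sketch under the tutorial's example names, assuming the SQL server and virtual network already exist; it doesn't create the private DNS zone that the portal sets up for you:

```powershell
# Look up the SQL server and the subnet that will host the private endpoint.
$server = Get-AzSqlServer -ResourceGroupName 'CreateSQLEndpointTutorial' -ServerName 'mysqlserver'
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'CreateSQLEndpointTutorial' -Name 'myVNet'
$subnet = $vnet.Subnets | Where-Object Name -eq 'mySubnet'

# Private-link connection targeting the sqlServer sub-resource.
$connection = New-AzPrivateLinkServiceConnection -Name 'myConnection' `
    -PrivateLinkServiceId $server.ResourceId -GroupId 'sqlServer'

New-AzPrivateEndpoint -ResourceGroupName 'CreateSQLEndpointTutorial' `
    -Name 'myPrivateSQLendpoint' -Location 'eastus' `
    -Subnet $subnet -PrivateLinkServiceConnection $connection
```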
## Disable public access to Azure SQL logical server
-For this scenario, assume you would like to disable all public access to your Azure SQL Logical server, and only allow connections from your virtual network.
+For this scenario, assume you would like to disable all public access to your Azure SQL server, and only allow connections from your virtual network.
-1. Ensure your Private endpoint connection(s) are enabled and configured.
-2. Disable public access:
- 1. Navigate to the "Firewalls and virtual network" blade of your Azure SQL Logical Server
- 2. Click the box to check mark "Deny public network access"
+1. In the Azure portal search box, enter **mysqlserver** or the server name you entered in the previous steps, select your SQL server from the results, and then select **Networking**.
+2. On the **Networking** page, select **Public access** tab, then select **Disable** for **Public network access**.
- :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/pec-deny-public-access.png" alt-text="Deny public network access option":::
+ :::image type="content" source="./media/tutorial-private-endpoint-sql-portal/disable-sql-server-public-access-inline.png" alt-text="Screenshot of the S Q L server Networking page showing how to disable public access." lightbox="./media/tutorial-private-endpoint-sql-portal/disable-sql-server-public-access-expanded.png":::
- 3. Click the Save icon to enable.
+3. Select **Save**.
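The same setting can be changed with Az PowerShell. A small sketch, assuming the Az.Sql module and the tutorial's example names:

```powershell
# Deny all public network access; only private endpoint traffic is allowed.
Set-AzSqlServer -ResourceGroupName 'CreateSQLEndpointTutorial' `
    -ServerName 'mysqlserver' -PublicNetworkAccess 'Disabled'
```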
## Test connectivity to private endpoint
-In this section, you'll use the virtual machine you created in the previous step to connect to the SQL server across the private endpoint.
+In this section, you'll use the virtual machine you created in the previous steps to connect to the SQL server across the private endpoint.
1. Select **Resource groups** in the left-hand navigation pane.
In this section, you'll use the virtual machine you created in the previous step
4. On the overview page for **myVM**, select **Connect** then **Bastion**.
-5. Select the blue **Use Bastion** button.
+5. Enter the username and password that you entered during the virtual machine creation.
-6. Enter the username and password that you entered during the virtual machine creation.
+6. Select **Connect** button.
7. Open Windows PowerShell on the server after you connect.
-8. Enter `nslookup <sqlserver-name>.database.windows.net`. Replace **\<sqlserver-name>** with the name of the SQL server you created in the previous steps. You'll receive a message similar to what is displayed below:
+8. Enter `nslookup <sqlserver-name>.database.windows.net`. Replace **\<sqlserver-name>** with the name of the SQL server you created in the previous steps. You'll receive a message similar to what is displayed below (a PowerShell alternative to this check appears after these steps):
    ```powershell
    Server:  UnKnown
    Address:  168.63.129.16

    Non-authoritative answer:
- Name: mysqlserver8675.privatelink.database.windows.net
+ Name: mysqlserver.privatelink.database.windows.net
Address: 10.1.0.5
- Aliases: mysqlserver8675.database.windows.net
+ Aliases: mysqlserver.database.windows.net
    ```
-
- A private IP address of **10.1.0.5** is returned for the SQL server name. This address is in the subnet of the virtual network you created previously.
-
+ A private IP address of **10.1.0.5** is returned for the SQL server name. This address is in the **mySubnet** subnet of the **myVNet** virtual network you created previously.
9. Install [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?preserve-view=true&view=sql-server-2017) on **myVM**.
In this section, you'll use the virtual machine you created in the previous step
| Setting | Value |
| - | -- |
| Server type | Select **Database Engine**.|
- | Server name | Enter **\<sqlserver-name>.database.windows.net** |
+ | Server name | Enter **\<sqlserver-name>.database.windows.net**. |
| Authentication | Select **SQL Server Authentication**. |
- | User name | Enter the username you entered during server creation |
- | Password | Enter the password you entered during server creation |
+ | User name | Enter the username you entered during server creation. |
+ | Password | Enter the password you entered during server creation. |
| Remember password | Select **Yes**. |

1. Select **Connect**.
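As an alternative to the `nslookup` check in step 8, you can resolve the name natively in PowerShell on the VM. A small sketch using the tutorial's example server name:

```powershell
# Should resolve through the privatelink zone to the private IP (10.1.0.5).
Resolve-DnsName -Name 'mysqlserver.database.windows.net'
```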
In this section, you'll use the virtual machine you created in the previous step
When you're done using the private endpoint, SQL server, and the VM, delete the resource group and all of the resources it contains:

1. Enter **CreateSQLEndpointTutorial** in the **Search** box at the top of the portal and select **CreateSQLEndpointTutorial** from the search results.
2. Select **Delete resource group**.
-3. Enter CreateSQLEndpointTutorial for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+3. Enter *CreateSQLEndpointTutorial* for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
## Next steps
-In this tutorial, you created a:
+In this tutorial, you learned how to create:
* Virtual network and bastion host.
* Virtual machine.
* Azure SQL server with private endpoint.
-You used the virtual machine to test connectivity securely to the SQL server across the private endpoint.
+You used the virtual machine to test connectivity privately and securely to the SQL server across the private endpoint.
As a next step, you may also be interested in the **Web app with private connectivity to Azure SQL Database** architecture scenario, which connects a web application outside of the virtual network to the private endpoint of a database.

> [!div class="nextstepaction"]
private-link Tutorial Private Endpoint Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-storage-portal.md
Previously updated : 06/16/2022 Last updated : 06/22/2022
The bastion host will be used to connect securely to the virtual machine for tes
||| | **Project Details** | | | Subscription | Select your Azure subscription. |
- | Resource Group | Select **myResourceGroup**. |
+ | Resource Group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
| **Instance details** |  |
| Name | Enter **myVNet**. |
| Region | Select **East US**. |
private-link Tutorial Private Endpoint Webapp Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-webapp-portal.md
Previously updated : 10/19/2020 Last updated : 06/22/2022

# Tutorial: Connect to a web app using an Azure Private Endpoint
-Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to communicate with Private Link resources privately.
+Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as a web app.
In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create a virtual network and bastion host.
> * Create a virtual machine.
-> * Create a webapp.
+> * Create a web app.
> * Create a private endpoint.
-> * Test connectivity to web app private endpoint.
+> * Test connectivity to the web app private endpoint.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
The bastion host will be used to connect securely to the virtual machine for tes
2. In **Create virtual network**, enter or select this information in the **Basics** tab:
- | **Setting** | **Value** |
+ | Setting | Value |
||--|
| **Project Details** |  |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **myResourceGroup** |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
| **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **West Europe** |
+ | Name | Enter **myVNet**. |
+ | Region | Select **East US**. |
3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
The bastion host will be used to connect securely to the virtual machine for tes
| Setting | Value |
|--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
+ | IPv4 address space | Enter **10.1.0.0/16**. |
5. Under **Subnet name**, select the word **default**.
The bastion host will be used to connect securely to the virtual machine for tes
| Setting | Value |
|--|-|
- | Subnet name | Enter **mySubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
+ | Subnet name | Enter **mySubnet**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
7. Select **Save**.
The bastion host will be used to connect securely to the virtual machine for tes
| Setting | Value |
|--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
+ | Bastion name | Enter **myBastionHost**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
| Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |

8. Select the **Review + create** tab or select the **Review + create** button.

9. Select **Create**.
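If you prefer to script this network setup, a hedged Az PowerShell sketch follows. It mirrors the names and address ranges above and assumes the Az.Network module; parameter sets for `New-AzBastion` vary by module version.

```powershell
# Virtual network with the workload subnet and the dedicated bastion subnet.
$workload = New-AzVirtualNetworkSubnetConfig -Name 'mySubnet' -AddressPrefix '10.1.0.0/24'
$bastion  = New-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' -AddressPrefix '10.1.1.0/24'
$vnet = New-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus' -AddressPrefix '10.1.0.0/16' -Subnet $workload, $bastion

# Standard public IP and the bastion host for secure VM access.
New-AzPublicIpAddress -Name 'myBastionIP' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus' -Sku 'Standard' -AllocationMethod 'Static'
New-AzBastion -ResourceGroupName 'myResourceGroup' -Name 'myBastionHost' `
    -PublicIpAddressRgName 'myResourceGroup' -PublicIpAddressName 'myBastionIP' `
    -VirtualNetworkRgName 'myResourceGroup' -VirtualNetworkName 'myVNet'
```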
The bastion host will be used to connect securely to the virtual machine for tes
In this section, you'll create a virtual machine that will be used to test the private endpoint.

1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.

2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
| Setting | Value | |--|-| | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **myResourceGroup** |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **myResourceGroup**. |
| **Instance details** | |
- | Virtual machine name | Enter **myVM** |
- | Region | Select **West Europe** |
- | Availability Options | Select **No infrastructure redundancy required** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** |
- | Azure Spot instance | Select **No** |
- | Size | Choose VM size or take default setting |
+ | Virtual machine name | Enter **myVM**. |
+ | Region | Select **(US) East US**. |
+ | Availability Options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
+ | Azure Spot instance | Select **No**. |
+ | Size | Choose a VM size or take the default setting. |
| **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-4. In the Networking tab, select or enter:
+4. In the **Networking** tab, enter or select the following information:
| Setting | Value | |-|-| | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **mySubnet** |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet**. |
| Public IP | Select **None**. |
- | NIC network security group | **Basic**|
+ | NIC network security group | Select **Basic**. |
| Public inbound ports | Select **None**. |

5. Select **Review + create**.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
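A condensed sketch of the same VM with `azure-mgmt-compute` follows; it assumes the credential pattern from the earlier sketch, and the Gen2 image SKU and VM size shown are assumptions to confirm for your subscription.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, subscription_id)
compute = ComputeManagementClient(credential, subscription_id)

# NIC in mySubnet with no public IP, matching the Networking tab above.
subnet = network.subnets.get("myResourceGroup", "myVNet", "mySubnet")
nic = network.network_interfaces.begin_create_or_update(
    "myResourceGroup", "myVMNic",
    {
        "location": "eastus",
        "ip_configurations": [{"name": "ipconfig1", "subnet": {"id": subnet.id}}],
    },
).result()

compute.virtual_machines.begin_create_or_update(
    "myResourceGroup", "myVM",
    {
        "location": "eastus",
        "hardware_profile": {"vm_size": "Standard_DS1_v2"},  # assumed size
        "storage_profile": {
            "image_reference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2019-datacenter-gensecond",  # assumed Gen2 SKU name
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "myVM",
            "admin_username": "<username>",  # placeholder
            "admin_password": "<password>",  # placeholder
        },
        "network_profile": {"network_interfaces": [{"id": nic.id}]},
    },
).result()
```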
-## Create web app
+## Create a web app
In this section, you'll create a web app.
-1. In the left-hand menu, select **Create a resource** > **Storage** > **Web App**, or search for **Web App** in the search box.
+1. In the left-hand menu, select **Create a resource** > **Web** > **Web App**, or search for **Web App** in the search box.
2. In the **Basics** tab of **Create Web App**, enter or select the following information: | Setting | Value | |--|-| | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **myResourceGroup** |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **myResourceGroup**. |
| **Instance details** | | | Name | Enter **mywebapp**. If the name is unavailable, enter a unique name. | | Publish | Select **Code**. | | Runtime stack | Select **.NET Core 3.1 (LTS)**. | | Operating System | Select **Windows**. |
- | Region | Select **West Europe** |
+ | Region | Select **East US**. |
| **App Service Plan** | |
- | Windows Plan (West Europe) | Select **Create new**. </br> Enter **myServicePlan** in **Name**. |
- | Sku and size | Select **Change size**. </br> Select **P2V2** in the **Spec Picker** screen. </br> Select **Apply**. |
+ | Windows Plan (East US) | Select **Create new**. </br> Enter **myServicePlan** in **Name**. </br> Select **OK**. |
+ | Sku and size | Select **Change size**. </br> Select **P2V2** on the **Spec Picker** page. </br> Select **Apply**. |
+ | **Zone redundancy** | |
+ | Zone redundancy | Select **Disabled**. |
3. Select **Review + create**. 4. Select **Create**.
- :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/create-web-app.png" alt-text="Basics tab of create web app in Azure portal." border="true":::
+ :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/create-web-app-inline.png" alt-text="Screenshot of Create Web App page showing the settings used in the Basics tab to create the web app." lightbox="./media/tutorial-private-endpoint-webapp-portal/create-web-app-expanded.png":::
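If you prefer code, a hedged sketch of the plan and app with `azure-mgmt-web` is below; `mywebapp` is a placeholder that must be replaced with a globally unique name, and the runtime-stack configuration from the table isn't reproduced here.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

subscription_id = "<subscription-id>"  # placeholder
web = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

# Premium V2 Windows plan, matching the P2V2 pick in the table above.
plan = web.app_service_plans.begin_create_or_update(
    "myResourceGroup", "myServicePlan",
    {"location": "eastus", "sku": {"name": "P2v2", "tier": "PremiumV2"}},
).result()

# The app itself, bound to the plan via its resource ID.
web.web_apps.begin_create_or_update(
    "myResourceGroup", "mywebapp",  # replace with a globally unique name
    {"location": "eastus", "server_farm_id": plan.id},
).result()
```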
## Create private endpoint
-1. In the left-hand menu, select **All Resources** > **mywebapp** or the name you chose during creation.
+1. In the left-hand menu, select **All Resources** > **mywebapp** or the name you chose during web app creation.
2. In the web app overview, select **Settings** > **Networking**.
-3. In **Networking**, select **Configure your private endpoint connections**.
+3. In **Networking**, select **Private endpoints**.
-4. Select **+ Add** in the **Private Endpoint connections** screen.
+4. Select **+ Add** in the **Private Endpoint connections** page.
-5. Enter or select the following information in the **Add Private Endpoint** screen:
+5. Enter or select the following information in the **Add Private Endpoint** page:
| Setting | Value | | - | -- | | Name | Enter **mywebappendpoint**. |
- | Subscription | Select your subscription. |
+ | Subscription | Select your Azure subscription. |
| Virtual network | Select **myVNet**. | | Subnet | Select **mySubnet**. | | Integrate with private DNS zone | Select **Yes**. | 6. Select **OK**.
-
+
+ :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/add-private-endpoint-inline.png" alt-text="Screenshot of Add Private Endpoint page showing the settings used to create the private endpoint." lightbox="./media/tutorial-private-endpoint-webapp-portal/add-private-endpoint-expanded.png":::
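A sketch of the same private endpoint with `azure-mgmt-network` follows; `group_ids: ["sites"]` is the App Service sub-resource, and the private DNS zone integration that the portal's **Integrate with private DNS zone** option performs is not reproduced here.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"  # placeholder
network = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Resource ID of the web app created above ("mywebapp" is a placeholder).
webapp_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Web/sites/mywebapp"
)
subnet = network.subnets.get("myResourceGroup", "myVNet", "mySubnet")

network.private_endpoints.begin_create_or_update(
    "myResourceGroup", "mywebappendpoint",
    {
        "location": "eastus",
        "subnet": {"id": subnet.id},
        "private_link_service_connections": [{
            "name": "mywebappendpoint",
            "private_link_service_id": webapp_id,
            "group_ids": ["sites"],  # App Service sub-resource
        }],
    },
).result()
```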
## Test connectivity to private endpoint
In this section, you'll use the virtual machine you created in the previous steps to test connectivity to the private endpoint.
4. On the overview page for **myVM**, select **Connect** then **Bastion**.
-5. Select the blue **Use Bastion** button.
+5. Enter the username and password that you entered during the virtual machine creation.
-6. Enter the username and password that you entered during the virtual machine creation.
+6. Select the **Connect** button.
7. Open Windows PowerShell on the server after you connect.
Address: 168.63.129.16

Non-authoritative answer:
- Name: mywebapp8675.privatelink.azurewebsites.net
+ Name: mywebapp.privatelink.azurewebsites.net
Address: 10.1.0.5
- Aliases: mywebapp8675.azurewebsites.net
+ Aliases: mywebapp.azurewebsites.net
```
- A private IP address of **10.1.0.5** is returned for the web app name. This address is in the subnet of the virtual network you created previously.
-
-9. Open a web browser on your local computer and enter the external URL of your web app, **https://\<webapp-name>.azurewebsites.net**.
+ A private IP address of **10.1.0.5** is returned for the web app name. This address is in the **mySubnet** subnet of the **myVNet** virtual network that you created previously.
-10. Verify that you receive a **403** page. This page indicates that the web app isn't accessible externally.
+9. Open Internet Explorer, and enter the URL of your web app, `https://<webapp-name>.azurewebsites.net`.
- :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/web-app-ext-403.png" alt-text="403 page for external web app address." border="true":::
+10. Verify you receive the default web app page.
-11. In the bastion connection to **myVM**, open Internet Explorer.
+ :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/web-app-default-page.png" alt-text="Screenshot of Internet Explorer showing default web app page." border="true":::
-12. Enter the url of your web app, **https://\<webapp-name>.azurewebsites.net**.
+11. Close the connection to **myVM**.
-13. Verify you receive the default web app page.
+12. Open a web browser on your local computer and enter the URL of your web app, `https://<webapp-name>.azurewebsites.net`.
- :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/web-app-default-page.png" alt-text="Default web app page." border="true":::
+13. Verify that you receive a **403** page. This page indicates that the web app isn't accessible externally.
-18. Close the connection to **myVM**.
+ :::image type="content" source="./media/tutorial-private-endpoint-webapp-portal/web-app-ext-403.png" alt-text="Screenshot of web browser showing a blue page with Error 403 for external web app address." border="true":::
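As an optional scripted check, the short sketch below reproduces both tests: name resolution and an HTTPS probe. The app name is a placeholder; run inside **myVM**, the name should resolve to the private address and the request should succeed, while from your local machine the request should return 403.

```python
import socket
import urllib.error
import urllib.request

host = "mywebapp.azurewebsites.net"  # placeholder web app name

# From inside the VNet this resolves to 10.1.0.5; externally, to a public IP.
print(socket.gethostbyname(host))

try:
    with urllib.request.urlopen(f"https://{host}") as resp:
        print(resp.status)  # expected 200 from inside the VNet
except urllib.error.HTTPError as err:
    print(err.code)         # expected 403 from outside the VNet
```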
## Clean up resources
If you're not going to continue to use this application, delete the virtual network.
## Next steps
-Learn how to create a Private Link service:
+Learn how to connect to an Azure SQL server using an Azure Private Endpoint:
> [!div class="nextstepaction"]
-> [Create a Private Link service](create-private-link-service-portal.md)
+> [Connect to Azure SQL server using Private Endpoint](tutorial-private-endpoint-sql-portal.md)
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
Title: Microsoft Purview supported data sources and file types
-description: This article provides details about supported data sources, file types, and functionalities in Microsoft Purview.
+ Title: Microsoft Purview Data Map supported data sources and file types
+description: This article provides details about supported data sources, file types, and functionalities in the Microsoft Purview Data Map.
# Supported data sources and file types
-This article discusses currently supported data sources, file types, and scanning concepts in Microsoft Purview.
+This article discusses currently supported data sources, file types, and scanning concepts in the Microsoft Purview Data Map.
-## Microsoft Purview data sources
+## Microsoft Purview Data Map available data sources
The table below shows the supported capabilities for each data source. Select the data source, or the feature, to learn more.
\* Besides the lineage on assets within the data source, lineage is also supported if the dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or a [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).

> [!NOTE]
-> Currently, Microsoft Purview can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
+> Currently, the Microsoft Purview Data Map can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
## Scan regions
-The following is a list of all the Azure data source (data center) regions where the Microsoft Purview scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Microsoft Purview instance.
+The following is a list of all the Azure data source (data center) regions where the Microsoft Purview Data Map scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Microsoft Purview instance.
-### Microsoft Purview scanner regions
+### Microsoft Purview Data Map scanner regions
- Australia East - Australia Southeast
The following file types are supported for scanning, for schema extraction, and for classification:
- Structured file formats supported by extension: AVRO, ORC, PARQUET, CSV, JSON, PSV, SSV, TSV, TXT, XML, GZIP > [!Note]
- > * Microsoft Purview scanner only supports schema extraction for the structured file types listed above.
- > * For AVRO, ORC, and PARQUET file types, Microsoft Purview scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
- > * Microsoft Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
+ > * The Microsoft Purview Data Map scanner only supports schema extraction for the structured file types listed above.
+ > * For AVRO, ORC, and PARQUET file types, the scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
+ > * The scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
> * For GZIP file types, the GZIP must be mapped to a single csv file within. Gzip files are subject to System and Custom Classification rules. We currently don't support scanning a gzip file mapped to multiple files within, or any file type other than csv.
> * For delimited file types (CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns.

- Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT
-- Microsoft Purview also supports custom file extensions and custom parsers.
+- The Microsoft Purview Data Map also supports custom file extensions and custom parsers.
## Nested data
Nested data, or nested schema parsing, isn't supported in SQL. A column with nes
## Sampling within a file
-In Microsoft Purview terminology,
+In Microsoft Purview Data Map terminology,
- L1 scan: Extracts basic information and metadata like file name, size, and fully qualified name
- L2 scan: Extracts schema for structured file types and database tables
- L3 scan: Extracts schema where applicable and subjects the sampled file to system and custom classification rules
-For all structured file formats, Microsoft Purview scanner samples files in the following way:
+For all structured file formats, the Microsoft Purview Data Map scanner samples files in the following way:
- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower.
- For document file formats, it samples the first 20 MB of each file.
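To make the rule concrete, here's an illustrative Python sketch (not Purview's actual implementation) of the structured-file case: collect up to 128 rows, stopping early once roughly 1 MB has been read.

```python
import csv

def sample_structured_file(path, max_rows=128, max_bytes=1_000_000):
    """Illustrative only: top-N-row sampling capped by an approximate byte budget."""
    rows, consumed = [], 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            consumed += sum(len(cell) for cell in row)  # rough size estimate
            if consumed > max_bytes:
                break
            rows.append(row)
            if len(rows) >= max_rows:
                break
    return rows
```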
## Resource set file sampling
-A folder or group of partition files is detected as a *resource set* in Microsoft Purview, if it matches with a system resource set policy or a customer defined resource set policy. If a resource set is detected, then Microsoft Purview will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
+A folder or group of partition files is detected as a *resource set* in the Microsoft Purview Data Map if it matches a system resource set policy or a customer-defined resource set policy. If a resource set is detected, then the scanner will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
File sampling for resource sets by file types:
## Classification
-All 208 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (Not the data scan native regex patterns, bloom filter-based detection). For more information on supported classifications, see [Supported classifications in Microsoft Purview](supported-classifications.md).
+All 208 system classification rules apply to structured file formats. Only the MCE classification rules apply to document file types (not the data scan native regex patterns or bloom filter-based detection). For more information on supported classifications, see [Supported classifications in the Microsoft Purview Data Map](supported-classifications.md).
## Next steps
purview Concept Best Practices Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-accounts.md
Title: Microsoft Purview accounts architecture and best practices
-description: This article provides examples of Microsoft Purview accounts architectures and describes best practices.
+ Title: Microsoft Purview (formerly Azure Purview) accounts architecture and best practices
+description: This article provides examples of accounts architectures and describes best practices for deploying Microsoft Purview (formerly Azure Purview).
Last updated 10/12/2021
# Microsoft Purview accounts architectures and best practices
-Microsoft Purview is a unified data governance solution. You deploy a Microsoft Purview account to centrally manage data governance across your data estate, spanning both cloud and on-prem environments. To use Microsoft Purview as your centralized data governance solution, you need to deploy one or more Microsoft Purview accounts inside your Azure subscription. We recommend keeping the number of Microsoft Purview instances as minimum, however, in some cases more Microsoft Purview instances are needed to fulfill business security and compliance requirements.
+To enable Microsoft Purview governance solutions, like Microsoft Purview Data Map and Data Catalog, in your environment, you'll deploy a Microsoft Purview (formerly Azure Purview) account in the Azure portal. You'll use this account to centrally manage data governance across your data estate, spanning both cloud and on-premises environments. To use Microsoft Purview as your centralized data governance solution, you may need to deploy one or more Microsoft Purview accounts inside your Azure subscription. We recommend keeping the number of Microsoft Purview instances to a minimum; however, in some cases more instances are needed to fulfill business security and compliance requirements.
## Single Microsoft Purview account
-Consider deploying minimum number of Microsoft Purview accounts for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
+Consider deploying the minimum number of Microsoft Purview (formerly Azure Purview) accounts for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
-Use [Microsoft Purview collections hierarchy](./concept-best-practices-collections.md) to lay out your organization's data management structure inside a single Microsoft Purview account. In this scenario, one Microsoft Purview account is deployed in an Azure subscription. Data sources from one or more Azure subscriptions can be registered and scanned inside the Microsoft Purview. You can also register and scan data sources from your on-premises or multi-cloud environments.
+Use the [Microsoft Purview Data Map collections hierarchy](./concept-best-practices-collections.md) to lay out your organization's data management structure inside a single Microsoft Purview account. In this scenario, one account is deployed in an Azure subscription. Data sources from one or more Azure subscriptions can be registered and scanned inside Microsoft Purview. You can also register and scan data sources from your on-premises or multi-cloud environments.
:::image type="content" source="media/concept-best-practices/accounts-single-account.png" alt-text="Screenshot that shows the single Microsoft Purview account."lightbox="media/concept-best-practices/accounts-single-account.png":::
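If you script your deployments, a hedged sketch of creating the single account through the ARM REST API is shown below; the api-version and the minimal request body are assumptions to validate against the current Microsoft.Purview reference.

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

# PUT on the Microsoft.Purview resource provider; api-version and the
# minimal body below are assumptions to check against the ARM reference.
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/resourceGroups/myResourceGroup/providers/Microsoft.Purview"
    "/accounts/mypurviewaccount"
)
resp = requests.put(
    url,
    params={"api-version": "2021-07-01"},
    json={"location": "eastus", "identity": {"type": "SystemAssigned"}},
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.json())
```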
Some organizations may require setting up multiple Microsoft Purview accounts. R
### Testing new features
-It is recommended to create a new instance of Microsoft Purview account when testing scan configurations or classifications in isolated environments. For some scenarios, there is a "versioning" feature in some areas of the platform such as glossary, however, it would be easier to have a "disposable" instance of Microsoft Purview to freely test expected functionality and then plan to roll out the feature into the production instance.
+It's recommended to create a new account when testing scan configurations or classifications in isolated environments. For some scenarios, there's a "versioning" feature in some areas of the platform, such as glossary; however, it would be easier to have a "disposable" instance of Microsoft Purview to freely test expected functionality and then plan to roll out the feature into the production instance.
-Additionally, consider using a test Microsoft Purview account when you cannot perform a rollback. For example, currently you cannot remove a glossary term attribute from a Microsoft Purview instance once it is added to your Microsoft Purview account. In this case, it is recommended using a test Microsoft Purview account first.
+Additionally, consider using a test Microsoft Purview account when you can't perform a rollback. For example, currently you can't remove a glossary term attribute from a Microsoft Purview instance once it's added to your Microsoft Purview account. In this case, it's recommended to use a test Microsoft Purview account first.
### Isolating Production and non-production environments
Optionally, you can register a data source in more than one Microsoft Purview instance.
### Fulfilling compliance requirements
-When you scan data sources in Microsoft Purview, information related to your metadata is ingested and stored inside your Microsoft Purview Data Map in the Azure region where your Microsoft Purview account is deployed. Consider deploying separate instances of Microsoft Purview if you have specific regulatory and compliance requirements that include even having metadata in a specific geographical location.
+When you scan data sources in the Microsoft Purview Data Map, information related to your metadata is ingested and stored inside your data map in the Azure region where your Microsoft Purview account is deployed. Consider deploying separate instances of Microsoft Purview if you have specific regulatory and compliance requirements that include even having metadata in a specific geographical location.
-If your organization has data in multiple geographies and you must keep metadata in the same region as the actual data, you have to deploy multiple Microsoft Purview instances, one for each geography. In this case, data sources from each regions should be registered and scanned in the Microsoft Purview account that corresponds to the data source region or geography.
+If your organization has data in multiple geographies and you must keep metadata in the same region as the actual data, you'll have to deploy multiple Microsoft Purview instances, one for each geography. In this case, data sources from each region should be registered and scanned in the Microsoft Purview account that corresponds to the data source region or geography.
:::image type="content" source="media/concept-best-practices/accounts-multiple-regions.png" alt-text="Screenshot that shows multiple Microsoft Purview accounts based on compliance requirements." lightbox="media/concept-best-practices/accounts-multiple-regions.png":::

### Having data sources distributed across multiple tenants
-Currently, Microsoft Purview doesn't support multi-tenancy. If you have Azure data sources distributed across multiple Azure subscriptions under different Azure Active Directory tenants, it is recommended deploying separate Microsoft Purview accounts under each tenant.
+Currently, Microsoft Purview doesn't support multi-tenancy. If you have Azure data sources distributed across multiple Azure subscriptions under different Azure Active Directory tenants, it's recommended deploying separate Microsoft Purview accounts under each tenant.
An exception applies to VM-based data sources and Power BI tenants. For more information about how to scan and register a cross-tenant Power BI in a single Microsoft Purview account, see [Register and scan a cross-tenant Power BI](./register-scan-power-bi-tenant.md).
### Billing model
-Review [Microsoft Purview Pricing model](https://azure.microsoft.com/pricing/details/azure-purview) when defining budgeting model and designing Microsoft Purview architecture for your organization. One billing is generated for a single Microsoft Purview account in the subscription where Microsoft Purview account is deployed. This model also applies to other Microsoft Purview costs such as scanning and classifying metadata inside Microsoft Purview Data Map.
+Review the [Microsoft Purview pricing model](https://azure.microsoft.com/pricing/details/azure-purview) when defining a budgeting model and designing an architecture for your organization. One bill is generated for a single Microsoft Purview account in the subscription where the Microsoft Purview account is deployed. This model also applies to other Microsoft Purview costs such as scanning and classifying metadata inside the Microsoft Purview Data Map.
-Some organizations often have many business units (BUs) that operate separately, and, in some cases, they don't even share billing with each other. In those cases, the organization will end up creating a Microsoft Purview instance for each BU. This model is not ideal, however, may be necessary, especially because Business Units are often not willing to share Azure billing.
+Some organizations often have many business units (BUs) that operate separately, and, in some cases, they don't even share billing with each other. In those cases, the organization will end up creating a Microsoft Purview instance for each BU.
-For more information about cloud computing cost model in chargeback and showback models, see, [What is cloud accounting?](/azure/cloud-adoption-framework/strategy/cloud-accounting).
+For more information about cloud computing cost models in chargeback and showback scenarios, see [What is cloud accounting?](/azure/cloud-adoption-framework/strategy/cloud-accounting).
-## Additional considerations and recommendations
+## Other considerations and recommendations
-- Keep the number of Microsoft Purview accounts low for simplified administrative overhead. If you plan building multiple Microsoft Purview accounts, you may require creating and managing additional scans, access control model, credentials, and runtimes across your Microsoft Purview accounts. Additionally, you may need to manage classifications and glossary terms for each Microsoft Purview account.
+- Keep the number of Microsoft Purview accounts low for simplified administrative overhead. If you plan building multiple Microsoft Purview accounts, you may require creating and managing extra scans, access control model, credentials, and runtimes across your Microsoft Purview accounts. Additionally, you may need to manage classifications and glossary terms for each Microsoft Purview account.
- Review your budgeting and financial requirements. If possible, use a chargeback or showback model when using Azure services and divide the cost of Microsoft Purview across the organization to keep the number of Microsoft Purview accounts to a minimum.
-- Use [Microsoft Purview collections](concept-best-practices-collections.md) to define metadata access control inside Microsoft Purview Data Map for your organization's business users, data management and governance teams. For more information, see [Access control in Microsoft Purview](./catalog-permissions.md).
+- Use [collections](concept-best-practices-collections.md) to define metadata access control inside Microsoft Purview Data Map for your organization's business users, data management and governance teams. For more information, see [Access control in Microsoft Purview](./catalog-permissions.md).
- Review [Microsoft Purview limits](./how-to-manage-quotas.md#microsoft-purview-limits) before deploying any new Microsoft Purview accounts. Currently, the default limit of Microsoft Purview accounts per region, per tenant (all subscriptions combined) is 3. You may need to contact Microsoft support to increase this limit in your subscription or tenant before deploying extra instances of Microsoft Purview.
purview Deployment Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/deployment-best-practices.md
Title: 'Deployment best practices'
-description: This article provides best practices for deploying Microsoft Purview. Microsoft Purview enables any user to register, discover, understand, and consume data sources.
+ Title: 'Deployment best practices for Microsoft Purview (formerly Azure Purview)'
+description: This article provides best practices for deploying Microsoft Purview (formerly Azure Purview). The Microsoft Purview Data Map and governance portal enable any user to register, discover, understand, and consume data sources.
Last updated 11/23/2020
-# Microsoft Purview deployment best practices
+# Microsoft Purview (formerly Azure Purview) deployment best practices
-This article identifies common tasks that can help you deploy Microsoft Purview into production. These tasks can be completed in phases, over the course of a month or more. Even organizations who have already deployed Microsoft Purview can use this guide to ensure they're getting the most out of their investment.
+This article identifies common tasks that can help you deploy Microsoft Purview (formerly Azure Purview) into production. These tasks can be completed in phases, over the course of a month or more. Even organizations who have already deployed Microsoft Purview can use this guide to ensure they're getting the most out of their investment.
A well-planned deployment of a data governance platform (such as Microsoft Purview) can give the following benefits:
- Access to data sources such as Azure Data Lake Storage or Azure SQL in test, development, or production environments
- For Data Lake Storage, the required role to scan is Reader Role
- For SQL, the identity must be able to query tables for sampling of classifications
-- Access to Microsoft Defender for Cloud or ability to collaborate with Defender for Cloud Admin for data labeling
+- Access to Microsoft Defender for Cloud or the ability to collaborate with the Defender for Cloud Admin for data labeling
## Identify objectives and goals
The general approach is to break down those overarching objectives into various
|Consumption|The business users should be able to find information about each asset for both business and technical metadata.|
|Lineage|Each asset must show a graphical view of underlying datasets so that the users understand the original sources and what changes have been made.|
|Collaboration|The platform must allow users to collaborate by providing additional information about each data asset.|
-|Reporting|The users must be able to view reporting on the data estate including sensitive data and data that needs additional enrichment.|
+|Reporting|The users must be able to view reporting on the data estate including sensitive data and data that needs extra enrichment.|
|Data governance|The platform must allow the admin to define policies for access control and automatically enforce the data access based on each user.|
-|Workflow|The platform must have the ability to create and modify workflow so that it is easy to scale out and automate various tasks within the platform.|
+|Workflow|The platform must have the ability to create and modify workflow so that it's easy to scale out and automate various tasks within the platform.|
|Integration|Other third-party technologies such as ticketing or orchestration must be able to integrate into the platform via script or REST APIs.|

## Top questions to ask
Once your organization agrees on the high-level objectives and goals, there will be many questions from multiple groups. It's crucial to gather these questions in order to craft a plan to address all of the concerns. Some example questions that you may run into during the initial phase:

1. What are the organization's main data sources and data systems?
-2. For data sources that are not supported yet by Microsoft Purview, what are my options?
+2. For data sources that aren't supported yet by Microsoft Purview, what are my options?
3. How many Microsoft Purview instances do we need?
4. Who are the users?
5. Who can scan new data sources?
While you might not have the answer to most of these questions right away, it ca
## Include the right stakeholders
-To ensure the success of implementing Microsoft Purview for the entire enterprise, it's important to involve the right stakeholders. Only a few people are involved in the initial phase. However, as the scope expands, you will require additional personas to contribute to the project and provide feedback.
+To ensure the success of implementing Microsoft Purview for the entire enterprise, it's important to involve the right stakeholders. Only a few people are involved in the initial phase. However, as the scope expands, you'll require more personas to contribute to the project and provide feedback.
Some key stakeholders that you may want to include:
## Identify key scenarios
-Microsoft Purview can be used to centrally manage data governance across an organization's data estate spanning cloud and on-premises environments. To have a successful implementation, you must identify key scenarios that are critical to the business. These scenarios can cross business unit boundaries or impact multiple user personas either upstream or downstream.
+Microsoft Purview governance services can be used to centrally manage data governance across an organization's data estate spanning cloud and on-premises environments. To have a successful implementation, you must identify key scenarios that are critical to the business. These scenarios can cross business unit boundaries or impact multiple user personas either upstream or downstream.
These scenarios can be written up in various ways, but you should include at least these five dimensions:
If you have only one small group using Microsoft Purview with basic consumption
### Determine the number of Microsoft Purview instances
-In most cases, there should only be one Microsoft Purview account for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
+In most cases, there should only be one Microsoft Purview (formerly Azure Purview) account for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
However, there are exceptions to this pattern:
-1. **Testing new configurations** – Organizations may want to create multiple instances for testing out scan configurations or classifications in isolated environments. Although there is a "versioning" feature in some areas of the platform such as glossary, it would be easier to have a "disposable" instance to freely test.
-2. **Separating Test, Pre-production and Production** – Organizations want to create different platforms for different kinds of data stored in different environments. It is not recommended as those kinds of data are different content types. You could use glossary term at the top hierarchy level or category to segregate content types.
-3. **Conglomerates and federated model** – Conglomerates often have many business units (BUs) that operate separately, and, in some cases, they won't even share billing with each other. In those cases, the organization will end up creating a Microsoft Purview instance for each BU. This model is not ideal, but may be necessary, especially because BUs are often not willing to share billing.
+1. **Testing new configurations** – Organizations may want to create multiple instances for testing out scan configurations or classifications in isolated environments. Although there's a "versioning" feature in some areas of the platform such as glossary, it would be easier to have a "disposable" instance to freely test.
+2. **Separating Test, Pre-production and Production** – Organizations want to create different platforms for different kinds of data stored in different environments. It isn't recommended as those kinds of data are different content types. You could use a glossary term at the top hierarchy level or category to segregate content types.
+3. **Conglomerates and federated model** – Conglomerates often have many business units (BUs) that operate separately, and, in some cases, they won't even share billing with each other. In those cases, the organization will end up creating a Microsoft Purview instance for each BU. This model isn't ideal, but may be necessary, especially because BUs are often not willing to share billing.
4. **Compliance** – There are some strict compliance regimes, which treat even metadata as sensitive and require it to be in a specific geography. If a company has multiple geographies, the only solution is to have multiple Microsoft Purview instances, one for each geography.

### Create a process to move to production
Some organizations may decide to keep things simple by working with a single pro
Another important aspect to include in your production process is how classifications and labels can be migrated. Microsoft Purview has over 90 system classifiers. You can apply system or custom classifications on file, table, or column assets. Classifications are like subject tags and are used to mark and identify content of a specific type found within your data estate during scanning. Sensitivity labels are used to identify the categories of classification types within your organizational data, and then group the policies you wish to apply to each category. It makes use of the same sensitive information types as Microsoft 365, allowing you to stretch your existing security policies and protection across your entire content and data estate. It can scan and automatically classify documents. For example, if you have a file named multiple.docx and it has a National ID number in its content, Microsoft Purview will add classification such as EU National Identification Number in the Asset Detail page.
-In Microsoft Purview, there are several areas where the Catalog Administrators need to ensure consistency and maintenance best practices over its life cycle:
+In the Microsoft Purview Data Map, there are several areas where the Catalog Administrators need to ensure consistency and maintenance best practices over its life cycle:
* **Data assets** – Data sources will need to be rescanned across environments. It's not recommended to scan only in development and then regenerate them using APIs in Production. The main reason is that the Microsoft Purview scanners do a lot more "wiring" behind the scenes on the data assets, which could be complex to move them to a different Microsoft Purview instance. It's much easier to just add the same data source in production and scan the sources again. The general best practice is to have documentation of all scans, connections, and authentication mechanisms being used.
* **Scan rule sets** – This is your collection of rules assigned to a specific scan, such as file types and classifications to detect. If you don't have that many scan rule sets, it's possible to just re-create them manually again via Production. This will require an internal process and good documentation. However, if your rule sets change on a daily or weekly basis, this could be addressed by exploring the REST API route.
-* **Custom classifications** – Your classifications may not also change on a regular basis. During the initial phase of deployment, it may take some time to understand various requirements to come up with custom classifications. However, once settled, this will require little change. So the recommendation here is to manually migrate any custom classifications over or use the REST API.
+* **Custom classifications** – Your classifications may not change regularly. During the initial phase of deployment, it may take some time to understand various requirements to come up with custom classifications. However, once settled, this will require little change. So the recommendation here is to manually migrate any custom classifications over or use the REST API.
* **Glossary** – It's possible to export and import glossary terms via the UX. For automation scenarios, you can also use the REST API.
-* **Resource set pattern policies** – This functionality is very advanced for any typical organizations to apply. In some cases, your Azure Data Lake Storage has folder naming conventions and specific structure that may cause problems for Microsoft Purview to generate the resource set. Your business unit may also want to change the resource set construction with additional customizations to fit the business needs. For this scenario, it's best to keep track of all changes via REST API, and document the changes through external versioning platform.
-* **Role assignment** – This is where you control who has access to Microsoft Purview and which permissions they have. Microsoft Purview also has REST API to support export and import of users and roles but this is not Atlas API-compatible. The recommendation is to assign an Azure Security Group and manage the group membership instead.
+* **Resource set pattern policies** – This functionality is advanced for any typical organization to apply. In some cases, your Azure Data Lake Storage has folder naming conventions and a specific structure that may cause problems for Microsoft Purview to generate the resource set. Your business unit may also want to change the resource set construction with more customizations to fit the business needs. For this scenario, it's best to keep track of all changes via the REST API, and document the changes through an external versioning platform.
+* **Role assignment** – This is where you control who has access to Microsoft Purview and which permissions they have. Microsoft Purview also has a REST API to support export and import of users and roles, but this isn't Atlas API-compatible. The recommendation is to assign an Azure Security Group and manage the group membership instead.
### Plan and implement different integration points with Microsoft Purview
-It's likely that a mature organization already has an existing data catalog. The key question is whether to continue to use the existing technology and sync with Microsoft Purview or not. To handle syncing with existing products in an organization, Microsoft Purview provides Atlas REST APIs. Atlas APIs provide a powerful and flexible mechanism handling both push and pull scenarios. Information can be published to Microsoft Purview using Atlas APIs for bootstrapping or to push latest updates from another system into Microsoft Purview. The information available in Microsoft Purview can also be read using Atlas APIs and then synced back to existing products.
+It's likely that a mature organization already has an existing data catalog. The key question is whether to continue to use the existing technology and sync with the Microsoft Purview Data Map and Data Catalog or not. To handle syncing with existing products in an organization, Microsoft Purview provides Atlas REST APIs. Atlas APIs provide a powerful and flexible mechanism handling both push and pull scenarios, as shown in the sketch below. Information can be published to Microsoft Purview using Atlas APIs for bootstrapping or to push the latest updates from another system into Microsoft Purview. The information available in Microsoft Purview can also be read using Atlas APIs and then synced back to existing products.
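As a small pull-scenario illustration, the sketch below reads one entity from the Atlas v2 surface that Microsoft Purview exposes; the account name and entity GUID are placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

endpoint = "https://<purview-account>.purview.azure.com"  # placeholder
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Pull scenario: read a single entity through the Atlas v2 API by GUID.
guid = "<entity-guid>"  # placeholder
entity = requests.get(
    f"{endpoint}/catalog/api/atlas/v2/entity/guid/{guid}", headers=headers
).json()
print(entity.get("entity", {}).get("attributes"))
```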
For other integration scenarios such as ticketing, custom user interface, and orchestration, you can use Atlas APIs and Kafka endpoints. In general, there are four integration points with Microsoft Purview:

* **Data Asset** – This enables Microsoft Purview to scan a store's assets in order to enumerate what those assets are and collect any readily available metadata about them. So for SQL this could be a list of DBs, tables, stored procedures, views and config data about them kept in places like `sys.tables`. For something like Azure Data Factory (ADF) this could be enumerating all the pipelines and getting data on when they were created, last run, current state.
* **Lineage** – This enables Microsoft Purview to collect information from an analysis/data mutation system on how data is moving around. For something like Spark this could be gathering information from the execution of a notebook to see what data the notebook ingested, how it transformed it and where it outputted it. For something like SQL, it could be analyzing query logs to reverse engineer what mutation operations were executed and what they did. We support both push and pull based lineage depending on the needs.
-* **Classification** – This enables Microsoft Purview to take physical samples from data sources and run them through our classification system. The classification system figures out the semantics of a piece of data. For example, we may know that a file is a Parquet file and has three columns and the third one is a string. But the classifiers we run on the samples will tell us that the string is a name, address, or phone number. Lighting up this integration point means that we have defined how Microsoft Purview can open up objects like notebooks, pipelines, parquet files, tables, and containers.
+* **Classification** – This enables Microsoft Purview to take physical samples from data sources and run them through our classification system. The classification system figures out the semantics of a piece of data. For example, we may know that a file is a Parquet file and has three columns and the third one is a string. But the classifiers we run on the samples will tell us that the string is a name, address, or phone number. Lighting up this integration point means that we've defined how Microsoft Purview can open up objects like notebooks, pipelines, parquet files, tables, and containers.
* **Embedded Experience** – Products that have a "studio"-like experience (such as ADF, Synapse, SQL Studio, PBI, and Dynamics) usually want to enable users to discover data they want to interact with and also find places to output data. Microsoft Purview's catalog can help to accelerate these experiences by providing an embedding experience. This experience can occur at the API or the UX level at the partner's option. By embedding a call to Microsoft Purview, the organization can take advantage of Microsoft Purview's map of the data estate to find data assets, see lineage, check schemas, look at ratings, contacts, etc.

## Phase 1: Pilot
-In this phase, Microsoft Purview must be created and configured for a very small set of users. Usually, it is just a group of 2-3 people working together to run through end-to-end scenarios. They are considered the advocates of Microsoft Purview in their organization. The main goal of this phase is to ensure key functionalities can be met and the right stakeholders are aware of the project.
+In this phase, Microsoft Purview must be created and configured for a small set of users. Usually, it's just a group of 2-3 people working together to run through end-to-end scenarios. They're considered the advocates of Microsoft Purview in their organization. The main goal of this phase is to ensure key functionalities can be met and the right stakeholders are aware of the project.
### Tasks to complete

|Task|Detail|Duration|
||||
-|Gather & agree on requirements|Discussion with all stakeholders to gather a full set of requirements. Different personas must participate to agree on a subset of requirements to complete for each phase of the project.|1 Week|
-|Navigating Microsoft Purview|Understand how to use Microsoft Purview from the home page.|1 Day|
-|Configure ADF for lineage|Identify key pipelines and data assets. Gather all information required to connect to an internal ADF account.|1 Day|
-|Scan a data source such as Azure Data Lake Storage|Add the data source and set up a scan. Ensure the scan successfully detects all assets.|2 Day|
-|Search and browse|Allow end users to access Microsoft Purview and perform end-to-end search and browse scenarios.|1 Day|
+|Gather & agree on requirements|Discussion with all stakeholders to gather a full set of requirements. Different personas must participate to agree on a subset of requirements to complete for each phase of the project.|One Week|
+|Navigating Microsoft Purview|Understand how to use Microsoft Purview from the home page.|One Day|
+|Configure ADF for lineage|Identify key pipelines and data assets. Gather all information required to connect to an internal ADF account.|One Day|
+|Scan a data source such as Azure Data Lake Storage|Add the data source and set up a scan. Ensure the scan successfully detects all assets.|Two Days|
+|Search and browse|Allow end users to access Microsoft Purview and perform end-to-end search and browse scenarios.|One Day|
### Acceptance criteria
* Lineage
* Users should be able to assign asset ownership in the asset page.
* Presentation and demo to raise awareness to key stakeholders.
-* Buy-in from management to approve additional resources for MVP phase.
+* Buy-in from management to approve more resources for the MVP phase.
## Phase 2: Minimum viable product
-Once you have the agreed requirements and participated business units to onboard Microsoft Purview, the next step is to work on a Minimum Viable Product (MVP) release. In this phase, you will expand the usage of Microsoft Purview to more users who will have additional needs horizontally and vertically. There will be key scenarios that must be met horizontally for all users such as glossary terms, search, and browse. There will also be in-depth requirements vertically for each business unit or group to cover specific end-to-end scenarios such as lineage from Azure Data Lake Storage to Azure Synapse DW to Power BI.
+Once you have the agreed requirements and participating business units ready to onboard Microsoft Purview, the next step is to work on a Minimum Viable Product (MVP) release. In this phase, you'll expand the usage of Microsoft Purview to more users who will have more needs horizontally and vertically. There will be key scenarios that must be met horizontally for all users such as glossary terms, search, and browse. There will also be in-depth requirements vertically for each business unit or group to cover specific end-to-end scenarios such as lineage from Azure Data Lake Storage to Azure Synapse DW to Power BI.
### Tasks to complete

|Task|Detail|Duration|
||||
-|[Scan Azure Synapse Analytics](register-scan-azure-synapse-analytics.md)|Start to onboard your database sources and scan them to populate key assets|2 Days|
-|[Create custom classifications and rules](create-a-custom-classification-and-classification-rule.md)|Once your assets are scanned, your users may realize that there are additional use cases for more classification beside the default classifications from Microsoft Purview.|2-4 Weeks|
-|[Scan Power BI](register-scan-power-bi-tenant.md)|If your organization uses Power BI, you can scan Power BI in order to gather all data assets being used by Data Scientists or Data Analysts which have requirements to include lineage from the storage layer.|1-2 Weeks|
-|[Import glossary terms](how-to-create-import-export-glossary.md)|In most cases, your organization may already develop a collection of glossary terms and term assignment to assets. This will require an import process into Microsoft Purview via .csv file.|1 Week|
-|Add contacts to assets|For top assets, you may want to establish a process to either allow other personas to assign contacts or import via REST APIs.|1 Week|
+|[Scan Azure Synapse Analytics](register-scan-azure-synapse-analytics.md)|Start to onboard your database sources and scan them to populate key assets|Two Days|
+|[Create custom classifications and rules](create-a-custom-classification-and-classification-rule.md)|Once your assets are scanned, your users may realize that there are other use cases for more classification beside the default classifications from Microsoft Purview.|2-4 Weeks|
+|[Scan Power BI](register-scan-power-bi-tenant.md)|If your organization uses Power BI, you can scan Power BI in order to gather all data assets being used by Data Scientists or Data Analysts that have requirements to include lineage from the storage layer.|1-2 Weeks|
+|[Import glossary terms](how-to-create-import-export-glossary.md)|In most cases, your organization may already develop a collection of glossary terms and term assignment to assets. This will require an import process into Microsoft Purview via .csv file.|One Week|
+|Add contacts to assets|For top assets, you may want to establish a process to either allow other personas to assign contacts or import via REST APIs.|One Week|
|Add sensitive labels and scan|This might be optional for some organizations, depending on the usage of Labeling from Microsoft 365.|1-2 Weeks|
-|Get classification and sensitive insights|For reporting and insight in Microsoft Purview, you can access this functionality to get various reports and provide presentation to management.|1 Day|
-|Onboard additional users using Microsoft Purview managed users|This step will require the Microsoft Purview Admin to work with the Azure Active Directory Admin to establish new Security Groups to grant access to Microsoft Purview.|1 Week|
+|Get classification and sensitive insights|For reporting and insight in Microsoft Purview, you can access this functionality to get various reports and provide presentation to management.|One Day|
+|Onboard more users using Microsoft Purview managed users|This step will require the Microsoft Purview Admin to work with the Azure Active Directory Admin to establish new Security Groups to grant access to Microsoft Purview.|One Week|
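Several of the scan tasks in the table above can be automated. As a heavily hedged sketch, the call below registers an Azure Synapse source on the Purview scanning endpoint with raw REST; the api-version, `kind` value, and property names are assumptions to validate against the Purview Scanning API reference.

```python
import requests
from azure.identity import DefaultAzureCredential

endpoint = "https://<purview-account>.purview.azure.com"  # placeholder
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Register a data source on the scanning plane; payload shape is assumed.
body = {
    "kind": "AzureSynapseWorkspace",  # assumed kind name for Synapse
    "properties": {
        "dedicatedSqlEndpoint": "<workspace>.sql.azuresynapse.net",  # placeholder
        "collection": {"type": "CollectionReference", "referenceName": "<collection>"},
    },
}
resp = requests.put(
    f"{endpoint}/scan/datasources/mySynapseSource",
    params={"api-version": "2022-02-01-preview"},  # assumed api-version
    json=body,
    headers=headers,
)
print(resp.status_code, resp.json())
```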
### Acceptance criteria
## Phase 3: Pre-production
-Once the MVP phase has passed, it's time to plan for pre-production milestone. Your organization may decide to have a separate instance of Microsoft Purview for pre-production and production, or keep the same instance but restrict access. Also in this phase, you may want to include scanning on on-premises data sources such as SQL Server. If there is any gap in data sources not supported by Microsoft Purview, it is time to explore the Atlas API to understand additional options.
+Once the MVP phase has passed, it's time to plan for the pre-production milestone. Your organization may decide to have a separate instance of Microsoft Purview for pre-production and production, or keep the same instance but restrict access. Also in this phase, you may want to include scanning of on-premises data sources such as SQL Server. If there's any gap in data sources not supported by Microsoft Purview, it's time to explore the Atlas API to understand other options.
### Tasks to complete

|Task|Detail|Duration|
||||
-|Refine your scan using scan rule set|Your organization will have a lot of data sources for pre-production. It's important to pre-define key criteria for scanning so that classifications and file extension can be applied consistently across the board.|1-2 Days|
-|Assess region availability for scan|Depending on the region of the data sources and organizational requirements on compliance and security, you may want to consider what regions must be available for scanning.|1 Day|
-|Understand firewall concept when scanning|This step requires some exploration of how the organization configures its firewall and how Microsoft Purview can authenticate itself to access the data sources for scanning.|1 Day|
-|Understand Private Link concept when scanning|If your organization uses Private Link, you must lay out the foundation of network security to include Private Link as a part of the requirements.|1 Day|
+|Refine your scan using scan rule set|Your organization will have many data sources for pre-production. It's important to pre-define key criteria for scanning so that classifications and file extension can be applied consistently across the board.|1-2 Days|
+|Assess region availability for scan|Depending on the region of the data sources and organizational requirements on compliance and security, you may want to consider what regions must be available for scanning.|One Day|
+|Understand firewall concept when scanning|This step requires some exploration of how the organization configures its firewall and how Microsoft Purview can authenticate itself to access the data sources for scanning.|One Day|
+|Understand Private Link concept when scanning|If your organization uses Private Link, you must lay out the foundation of network security to include Private Link as a part of the requirements.|One Day|
|[Scan on-premises SQL Server](register-scan-on-premises-sql-server.md)|This is optional if you have on-premises SQL Server. The scan will require setting up [Self-hosted Integration Runtime](manage-integration-runtimes.md) and adding SQL Server as a data source.|1-2 Weeks|
-|Use Microsoft Purview REST API for integration scenarios|If you have requirements to integrate Microsoft Purview with other 3rd party technologies such as orchestration or ticketing system, you may want to explore REST API area.|1-4 Weeks|
+|Use Microsoft Purview REST API for integration scenarios|If you have requirements to integrate Microsoft Purview with other third-party technologies, such as an orchestration or ticketing system, you may want to explore the REST API area.|1-4 Weeks|
|Understand Microsoft Purview pricing|This step will provide the organization with important financial information for decision making.|1-5 Days|

### Acceptance criteria
Once the MVP phase has passed, it's time to plan for pre-production milestone.
* Successfully onboard at least one business unit with all of its users
* Scan an on-premises data source such as SQL Server
* POC at least one integration scenario using the REST API
-* Complete a plan to go to production which should include key areas on infrastructure and security
+* Complete a plan to go to production, which should include key areas on infrastructure and security
## Phase 4: Production
-The above phases should be followed to create an effective information governance, which is the foundation for better governance programs. Data governance will help your organization prepare for the growing trends such as AI, Hadoop, IoT, and blockchain. It is just the start for many things data and analytics, and there is plenty more that can be discussed. The outcome of this solution would deliver:
+The above phases should be followed to create effective data lifecycle management, which is the foundation for better governance programs. Data governance will help your organization prepare for growing trends such as AI, Hadoop, IoT, and blockchain. It's just the start for data and analytics, and there's plenty more that can be discussed. The outcome of this solution would deliver:
* **Business Focused** - A solution that is aligned to business requirements and scenarios over technical requirements.
* **Future Ready** - A solution that maximizes default features of the platform and uses standardized industry practices for configuration or scripting activities to support the advancements/evolution of the platform.
The above phases should be followed to create an effective information governanc
|Scan production data sources with Firewall enabled|This is optional when a firewall is in place, but it's important to explore options for hardening your infrastructure.|1-5 Days|
|Enable Private Link|This is a must-have criterion when Private Link is used. Otherwise, you can skip this step.|1-5 Days|
|Create automated workflow|Workflow is important to automate processes such as approval, escalation, review, and issue management.|2-3 Weeks|
-|Operation documentation|Data governance is not a one-time project. It is an ongoing program to fuel data-driven decision making and creating opportunities for business. It is critical to document key procedure and business standards.|1 Week|
+|Operation documentation|Data governance isn't a one-time project. It's an ongoing program to fuel data-driven decision making and create opportunities for business. It's critical to document key procedures and business standards.|One Week|
### Acceptance criteria
The above phases should be followed to create an effective information governanc
## Platform hardening
-Additional hardening steps can be taken:
+More hardening steps can be taken:
* Increase security posture by enabling scans on firewalled resources or using Private Link
* Fine-tune the scan scope to improve scan performance
After the move, follow the below steps to clear the old identities, and create n
```azurecli-interactive
az login
```
- Alternatively, you can use the [Azure Cloud Shell](../cloud-shell/overview.md) in the Azure Portal.
+ Alternatively, you can use the [Azure Cloud Shell](../cloud-shell/overview.md) in the Azure portal.
Direct browser link: [https://shell.azure.com](https://shell.azure.com).

1. Obtain an access token by using [az account get-access-token](/cli/azure/account#az-account-get-access-token).
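   For illustration, a minimal sketch of this step (the `--resource` value is an assumption; it targets the Microsoft Purview endpoint, so adjust it to whichever API you're calling):

   ```azurecli-interactive
   # Request a bearer token; the resource URI shown assumes the Microsoft Purview endpoint
   az account get-access-token --resource "https://purview.azure.net"
   ```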
purview How To Data Owner Policies Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-arc-sql-server.md
This how-to guide describes how a data owner can delegate authoring policies in
## Prerequisites

[!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
-- SQL server version 2022 CTP 2.0 or later. [Follow this link](https://www.microsoft.com/sql-server/sql-server-2022)
+- SQL Server version 2022 CTP 2.0 or later, running on Windows. [Follow this link](https://www.microsoft.com/sql-server/sql-server-2022)
- Complete the process to onboard that SQL server with Azure Arc and enable Azure AD Authentication. [Follow this guide to learn how](https://aka.ms/sql-on-arc-AADauth).

**Enforcement of policies for this data source is available only in the following regions for Microsoft Purview**
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Access policies allow a data owner to delegate in Microsoft Purview access manag
Before authoring data policies in the Microsoft Purview governance portal, you'll need to configure the data sources so that they can enforce those policies.
-1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](azure-purview-connector-overview.md#microsoft-purview-data-sources) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections.
+1. Follow any policy-specific prerequisites for your source. Check the [Microsoft Purview supported data sources table](azure-purview-connector-overview.md) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections.
1. Register the data source in Microsoft Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](azure-purview-connector-overview.md) for your resources.
1. [Enable the Data Use Management toggle on the data source](how-to-enable-data-use-management.md#enable-data-use-management). Additional permissions for this step are described in the linked document.
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
Title: Microsoft Purview product glossary
-description: A glossary defining the terminology used throughout Microsoft Purview
+ Title: Microsoft Purview governance portal product glossary
+description: A glossary defining the terminology used throughout the Microsoft Purview governance portal
Last updated 05/27/2022
-# Microsoft Purview product glossary
+# Microsoft Purview governance portal product glossary
-Below is a glossary of terminology used throughout Microsoft Purview.
+Below is a glossary of terminology used throughout the Microsoft Purview governance portal and its documentation.
## Advanced resource sets
-A set of features activated at the Microsoft Purview instance level that, when enabled, enrich resource set assets by computing additional aggregations on the metadata to provide information such as partition counts, total size, and schema counts. Resource set pattern rules are also included.
+A set of features activated at the Microsoft Purview instance level that, when enabled, enrich resource set assets by computing extra aggregations on the metadata to provide information such as partition counts, total size, and schema counts. Resource set pattern rules are also included.
## Annotation
-Information that is associated with data assets in Microsoft Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets.
+Information that is associated with data assets in the Microsoft Purview Data Map, for example, glossary terms and classifications. After they're applied, annotations can be used within Search to aid in the discovery of the data assets.
## Approved
The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.
## Asset
A cloud solution that supports labeling of documents and emails to classify and
## Business glossary
A searchable list of specialized terms that an organization uses to describe key business words and their definitions. Using a business glossary can provide consistent data usage across the organization.
## Capacity unit
-A measure of data map usage. All Microsoft Purview data maps include one capacity unit by default, which provides up to 2GB of metadata storage and has a throughput of 25 data map operations/second.
+A measure of data map usage. All Microsoft Purview Data Maps include one capacity unit by default, which provides up to 2 GB of metadata storage and has a throughput of 25 data map operations/second.
## Classification report
A report that shows key classification details about the scanned data.
## Classification
An asset where Microsoft Purview extracts schema and applies classifications dur
## Collection
An organization-defined grouping of assets, terms, annotations, and sources. Collections allow for easier fine-grained access control and discoverability of assets within a data catalog.
## Collection admin
-A role that can assign roles in Microsoft Purview. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
+A role that can assign roles in the Microsoft Purview governance portal. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
## Column pattern
A regular expression included in a classification rule that represents the column names that you want to match.
## Contact
An individual who is associated with an entity in the data catalog.
## Control plane operation
-An operation that manages resources in your subscription, such as role-based access control and Azure policy, that are sent to the Azure Resource Manager end point. Control plane operations can also apply to resources outside of Azure across on-premises, multicloud, and SaaS sources.
+An operation that manages resources in your subscription, such as role-based access control and Azure policy, sent to the Azure Resource Manager endpoint. Control plane operations can also apply to resources outside of Azure across on-premises, multicloud, and SaaS sources.
## Credential
A verification of identity or tool used in an access control system. Credentials can be used to authenticate an individual or group to grant access to a data asset.
## Data Catalog
A searchable inventory of assets and their associated metadata that allows users
## Data curator
A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
## Data map
-A metadata repository that is the foundation of Microsoft Purview. The data map is a graph that describes assets across a data estate and is populated through scans and other data ingestion processes. This graph helps organizations understand and govern their data by providing rich descriptions of assets, representing data lineage, classifying assets, storing relationships between assets, and housing information at both the technical and semantic layers. The data map is an open platform that can be interacted with and accessed through Apache Atlas APIs or the Microsoft Purview governance portal.
+A metadata repository that is the foundation of the Microsoft Purview governance portal. The data map is a graph that describes assets across a data estate and is populated through scans and other data ingestion processes. This graph helps organizations understand and govern their data by providing rich descriptions of assets, representing data lineage, classifying assets, storing relationships between assets, and housing information at both the technical and semantic layers. The data map is an open platform that can be interacted with and accessed through Apache Atlas APIs or the Microsoft Purview governance portal.
## Data map operation
A create, read, update, or delete action performed on an entity in the data map. For example, creating an asset in the data map is considered a data map operation.
## Data owner
A role that can manage data sources and scans. A user in the Data source admin r
## Data steward
An individual or group responsible for maintaining nomenclature, data quality standards, security controls, compliance requirements, and rules for the associated object.
## Data dictionary
-A list of canonical names of database columns and their corresponding data types. It is often used to describe the format and structure of a database, and the relationship between its elements.
+A list of canonical names of database columns and their corresponding data types. It's often used to describe the format and structure of a database, and the relationship between its elements.
## Discovered asset
-An asset that Microsoft Purview identifies in a data source during the scanning process. The number of discovered assets includes all files or tables before resource set grouping.
+An asset that the Microsoft Purview Data Map identifies in a data source during the scanning process. The number of discovered assets includes all files or tables before resource set grouping.
## Distinct match threshold
The total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. For example, a distinct match threshold of eight for employee ID requires that there are at least eight unique data values among the sampled values in the column that match the data pattern set for employee ID.
## Expert
An entry in the Business glossary that defines a concept specific to an organiza
## Incremental scan
A scan that detects and processes assets that have been created, modified, or deleted since the previous successful scan. To run an incremental scan, at least one full scan must be completed on the source.
## Ingested asset
-An asset that has been scanned, classified (when applicable), and added to the Microsoft Purview data map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
+An asset that has been scanned, classified (when applicable), and added to the Microsoft Purview Data Map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
## Insight reader
A role that provides read-only access to insights reports for collections where the insights reader also has the **Data reader** role.
## Data Estate Insights
The compute infrastructure used to scan in a data source.
## Lineage
How data transforms and flows as it moves from its origin to its destination. Understanding this flow across the data estate helps organizations see the history of their data, and aid in troubleshooting or impact analysis.
## Management
-An area within Microsoft Purview where you can manage connections, users, roles, and credentials. Also referred to as "Management center."
+An area within the Microsoft Purview governance portal where you can manage connections, users, roles, and credentials. Also referred to as "Management center."
## Minimum match threshold
The minimum percentage of matches among the distinct data values in a column that must be found by the scanner for a classification to be applied.
Data that is in a data center controlled by a customer, for example, not in the
## Owner
An individual or group in charge of managing a data asset.
## Pattern rule
-A configuration that overrides how Microsoft Purview groups assets as resource sets and displays them within the catalog.
+A configuration that overrides how the Microsoft Purview Data Map groups assets as resource sets and displays them within the catalog.
## Microsoft Purview instance
-A single Microsoft Purview account.
+A single Microsoft Purview (formerly Azure Purview) account.
## Registered source
A source that has been added to a Microsoft Purview instance and is now managed as a part of the Data catalog.
## Related terms
Glossary terms that are linked to other terms within the organization.
## Resource set
-A single asset that represents many partitioned files or objects in storage. For example, Microsoft Purview stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file.
+A single asset that represents many partitioned files or objects in storage. For example, the Microsoft Purview Data Map stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file.
## Role
Permissions assigned to a user within a Microsoft Purview instance. Roles, such as Microsoft Purview Data Curator or Microsoft Purview Data Reader, determine what can be done within the product.
## Root collection
A system-generated collection that has the same friendly name as the Microsoft Purview account. All assets belong to the root collection by default.
## Scan
-A Microsoft Purview process that discovers and examines metadata in a source or set of sources to populate the data map. A scan automatically connects to a source, extracts metadata, captures lineage, and applies classifications. Scans can be run manually or on a schedule.
+A Microsoft Purview Data Map process that discovers and examines metadata in a source or set of sources to populate the data map. A scan automatically connects to a source, extracts metadata, captures lineage, and applies classifications. Scans can be run manually or on a schedule.
## Scan rule set
A set of rules that define which data types and classifications a scan ingests into a catalog.
## Scan trigger
The scoring of data assets that determine the order search results are returned.
## Self-hosted integration runtime
An integration runtime installed on an on-premises machine or virtual machine inside a private network that is used to connect to data on-premises or in a private network.
## Sensitivity label
-Annotations that classify and protect an organization's data. Microsoft Purview integrates with Microsoft Purview Information Protection for creation of sensitivity labels.
+Annotations that classify and protect an organization's data. The Microsoft Purview Data Map integrates with Microsoft Purview Information Protection for creation of sensitivity labels.
## Sensitivity label report
A summary of which sensitivity labels are applied across the data estate.
## Service
A product that provides standalone functionality and is available to customers by subscription or license.
## Source
-A system where data is stored. Sources can be hosted in various places such as a cloud or on-premises. You register and scan sources so that you can manage them in Microsoft Purview.
+A system where data is stored. Sources can be hosted in various places such as a cloud or on-premises. You register and scan sources so that you can manage them in the Microsoft Purview governance portal.
## Source type
-A categorization of the registered sources used in a Microsoft Purview instance, for example, Azure SQL Database, Azure Blob Storage, Amazon S3, or SAP ECC.
+A categorization of the registered sources used in the Microsoft Purview Data Map, for example, Azure SQL Database, Azure Blob Storage, Amazon S3, or SAP ECC.
## Steward
-An individual who defines the standards for a glossary term. They are responsible for maintaining quality standards, nomenclature, and rules for the assigned entity.
+An individual who defines the standards for a glossary term. They're responsible for maintaining quality standards, nomenclature, and rules for the assigned entity.
## Term template
A definition of attributes included in a glossary term. Users can either use the system-defined term template or create their own to include custom attributes.
## Workflow
An automated process that coordinates the creation and modification of catalog e
## Next steps
-To get started with Microsoft Purview, see [Quickstart: Create a Microsoft Purview account](create-catalog-portal.md).
+To get started with other Microsoft Purview governance services, see [Quickstart: Create a Microsoft Purview (formerly Azure Purview) account](create-catalog-portal.md).
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
Title: Learn about prerequisites to successfully deploy a Microsoft Purview account
-description: This tutorial lists prerequisites to deploy a Microsoft Purview account.
+ Title: Prerequisites to successfully deploy a Microsoft Purview (formerly Azure Purview) account
+description: This tutorial lists a prerequisite checklist to deploy a Microsoft Purview (formerly Azure Purview) account.
Last updated 04/22/2022
# Customer Intent: As a Data and Data Security administrator, I want to deploy Microsoft Purview as a unified data governance solution.
-# Microsoft Purview deployment checklist
+# Microsoft Purview (formerly Azure Purview) deployment checklist
-This article lists prerequisites that help you get started quickly on Microsoft Purview planning and deployment.
+This article lists prerequisites that help you get started quickly on planning and deployment for your Microsoft Purview (formerly Azure Purview) account.
|No. |Prerequisite / Action |Required permission |More guidance and recommendations |
|:|:|:|:|
-|1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required, if you plan to [extend Microsoft 365 Sensitivity Labels to Microsoft Purview for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required, if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> |
+|1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required, if you plan to [extend Microsoft 365 Sensitivity Labels to the Microsoft Purview Data Map for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required, if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> |
|2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Microsoft Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. |
|3 |Define whether you plan to deploy a Microsoft Purview with a managed event hub | N/A |A managed event hub is created as part of Microsoft Purview account creation. You can publish messages to the event hub kafka topic ATLAS_HOOK, and Microsoft Purview will consume and process them. Microsoft Purview will notify entity changes to the event hub kafka topic ATLAS_ENTITIES, and users can consume and process them. |
|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure Subscription that is designated for the Microsoft Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). |
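As a sketch, the resource provider registrations in item 4 can be performed with the Azure CLI (assuming the subscription designated for Microsoft Purview is already selected):

```azurecli-interactive
# Register the resource providers required by Microsoft Purview
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.EventHub   # optional, only if you use the managed event hub
az provider register --namespace Microsoft.Purview
```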
purview Tutorial Azure Purview Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-tools.md
Title: Learn about Microsoft Purview open-source tools and utilities
-description: This tutorial lists various tools and utilities available in Microsoft Purview and discusses their usage.
+ Title: Learn about open-source tools and utilities for Microsoft Purview governance services
+description: This tutorial lists various tools and utilities available for Microsoft Purview governance services and discusses their usage.
Last updated 10/10/2021
# Customer Intent: As a Microsoft Purview administrator, I want to kickstart and be up and running with Microsoft Purview service in a matter of minutes; additionally, I want to perform and set up automations, batch-mode API executions and scripts that help me run Microsoft Purview smoothly and effectively for the long-term on a regular basis.
-# Microsoft Purview open-source tools and utilities
+# Microsoft Purview governance services open-source tools and utilities
-This article lists several open-source tools and utilities (command-line, python, and PowerShell interfaces) that help you get started quickly on Microsoft Purview service in a matter of minutes! These tools have been authored & developed by collective effort of the Microsoft Purview Product Group and the open-source community. The objective of such tools is to make learning, starting up, regular usage, and long-term adoption of Microsoft Purview breezy and super fast.
+This article lists several open-source tools and utilities (command-line, python, and PowerShell interfaces) that help you get started quickly with Microsoft Purview governance services, like Microsoft Purview Data Map, Data Catalog, and Data Estate Insights in a matter of minutes! These tools have been authored & developed by the collective effort of the Microsoft Purview Product Group and the open-source community. The objective of such tools is to make learning, starting up, regular usage, and long-term adoption of Microsoft Purview fast and easy.
### Intended audience

- Microsoft Purview community including customers, developers, ISVs, partners, evangelists, and enthusiasts.
-- Microsoft Purview catalog is based on [Apache Atlas](https://atlas.apache.org/) and extends full support for Apache Atlas APIs. We welcome Apache Atlas community, enthusiasts, and developers to wholeheartedly build on and evangelize Microsoft Purview.
+- The Microsoft Purview Data Catalog is based on [Apache Atlas](https://atlas.apache.org/) and extends full support for Apache Atlas APIs. We welcome Apache Atlas community, enthusiasts, and developers to wholeheartedly build on and evangelize Microsoft Purview.
### Microsoft Purview customer journey stages

-- *Microsoft Purview Learners*: Learners who are starting fresh with Microsoft Purview service and are keen to understand and explore how a multi-cloud unified data governance solution works. A section of learners includes users who want to compare and contrast Microsoft Purview with other competing solutions in the data governance market and try it before adopting for long-term usage.
+- *Microsoft Purview Learners*: Learners who are starting fresh with Microsoft Purview governance services and are keen to understand and explore how a multi-cloud unified data governance solution works. A section of learners includes users who want to compare and contrast Microsoft Purview with other competing solutions in the data governance market and try it before adopting for long-term usage.
-- *Microsoft Purview Innovators*: Innovators who are keen to understand existing and latest features, ideate, and conceptualize features upcoming on Microsoft Purview. They are adept at building and developing solutions for customers, and have futuristic forward-looking ideas for the next-gen cutting-edge data governance product.
+- *Microsoft Purview Innovators*: Innovators who are keen to understand existing and latest features, ideate, and conceptualize features upcoming on Microsoft Purview. They're adept at building and developing solutions for customers, and have futuristic forward-looking ideas for the next-gen cutting-edge data governance product.
- *Microsoft Purview Enthusiasts/Evangelists*: Enthusiasts who are a combination of Learners and Innovators. They have developed solid understanding and knowledge of Microsoft Purview, hence, are upbeat about adoption of Microsoft Purview. They can help evangelize Microsoft Purview as a service and educate several other Microsoft Purview users and probable customers across the globe.
-- *Microsoft Purview Adopters*: Adopters who have migrated from starting up and exploring Microsoft Purview and are smoothly using Microsoft Purview for more than a few months.
+- *Microsoft Purview Adopters*: Adopters who have moved beyond starting up and exploring the Microsoft Purview governance portal, and have been smoothly using Microsoft Purview for more than a few months.
-- *Microsoft Purview Long-Term Regular Users*: Long-term users who have been using Microsoft Purview for more than one year and are now confident and comfortable using most advanced Microsoft Purview use cases on the Azure portal and Microsoft Purview governance portal; furthermore they have near perfect knowledge and awareness of the Microsoft Purview REST APIs and the other use cases supported via Microsoft Purview APIs.
+- *Microsoft Purview Long-Term Regular Users*: Long-term users who have been using the Microsoft Purview governance portal for more than one year and are now confident and comfortable using most advanced use cases on the Azure portal and Microsoft Purview governance portal; furthermore, they have near-perfect knowledge and awareness of the Microsoft Purview REST APIs and the other use cases supported via Microsoft Purview APIs.
## Microsoft Purview open-source tools and utilities list
This article lists several open-source tools and utilities (command-line, python
- **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- **Description**: This utility is based on and covers the entire set of [Microsoft Purview REST API Reference](/rest/api/purview/) Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Microsoft Purview REST APIs through a fast and easy-to-use PowerShell interface. Use and automate Microsoft Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in an automated manner, in batch mode, or as scheduled cron jobs, as opposed to the GUI method of using the Azure portal and Microsoft Purview governance portal. Detailed documentation, a sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
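   For example, a minimal install sketch, assuming the PowerShell Gallery module name matches the GitHub repo (verify the exact name at https://aka.ms/purview-api-ps):

   ```powershell
   # Install the module from the PowerShell Gallery (module name assumed to match the repo)
   Install-Module -Name Azure-Purview-API-PowerShell -Scope CurrentUser
   ```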
-1. [Purview-Starter-Kit](https://aka.ms/PurviewKickstart)
-
- - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
- - **Description**: PowerShell script to perform initial setup of Microsoft Purview account. Useful for anyone looking to set up several fresh new Microsoft Purview account(s) in less than 5 minutes!
-
1. [Microsoft Purview Lab](https://aka.ms/purviewlab)

   - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
This article lists several open-source tools and utilities (command-line, python
1. [Microsoft Purview Demo](https://aka.ms/pvdemo)

   - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
- - **Description**: An Azure Resource Manager (ARM) template-based tool to automatically set up and deploy fresh new Microsoft Purview account quickly and securely at the issue of just one command. It is similar to [Purview-Starter-Kit](https://aka.ms/PurviewKickstart), the extra feature being it deploys a few more pre-configured data sources - Azure SQL Database, Azure Data Lake Storage Gen2 Account, Azure Data Factory, Azure Synapse Analytics Workspace
+ - **Description**: An Azure Resource Manager (ARM) template-based tool to automatically set up and deploy a fresh new Microsoft Purview account quickly and securely with a single command. It's similar to [Purview-Starter-Kit](https://aka.ms/PurviewKickstart), with the extra feature that it also deploys a few pre-configured data sources: an Azure SQL Database, an Azure Data Lake Storage Gen2 account, an Azure Data Factory, and an Azure Synapse Analytics workspace.
1. [PyApacheAtlas: Interface between Microsoft Purview and Apache Atlas](https://github.com/wjohnson/pyapacheatlas) using Atlas APIs
This article lists several open-source tools and utilities (command-line, python
## Feedback and disclaimer
-None of the tools come with an express warranty from Microsoft verifying their efficacy or guarantees of functionality. They are certified to be free of any malicious activity or viruses, and guaranteed to not collect any private or sensitive data.
+None of the tools come with an express warranty from Microsoft verifying their efficacy or guarantees of functionality. They're certified to be free of any malicious activity or viruses, and guaranteed to not collect any private or sensitive data.
For feedback or questions about efficacy and functionality during usage, contact the respective tool owners and authors on the contact details mentioned in the respective GitHub repo.
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Title: 'About Azure Route Server supports for ExpressRoute and Azure VPN'
+ Title: 'Azure Route Server support for ExpressRoute and Azure VPN'
description: Learn about how Azure Route Server interacts with ExpressRoute and Azure VPN gateways.
Last updated 10/01/2021
-# About Azure Route Server support for ExpressRoute and Azure VPN
+# Azure Route Server support for ExpressRoute and Azure VPN
-Azure Route Server supports not only third-party network virtual appliances (NVA) running on Azure but also integrates seamlessly with ExpressRoute and Azure VPN gateways. You don't need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateway and Azure Route Server with a simple [configuration change](quickstart-configure-route-server-powershell.md#route-exchange).
+Azure Route Server supports not only third-party network virtual appliances (NVA) running on Azure but also integrates seamlessly with ExpressRoute and Azure VPN gateways. You don't need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](quickstart-configure-route-server-portal.md#configure-route-exchange) in the Azure portal. If you prefer, you can use [Azure PowerShell](quickstart-configure-route-server-powershell.md#route-exchange) or [Azure CLI](quickstart-configure-route-server-cli.md#configure-route-exchange) to enable the route exchange with the Route Server.
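As a sketch, the equivalent CLI toggle might look like the following (resource names are placeholders):

```azurecli-interactive
# Enable route exchange (branch-to-branch) on an existing Route Server
az network routeserver update \
    --name myRouteServer \
    --resource-group myResourceGroup \
    --allow-b2b-traffic true
```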
## How does it work?
-When you deploy an Azure Route Server along with an ExpressRoute gateway and an NVA in a virtual network by default Azure Route Server doesn't propagate the routes it receives from the NVA and ExpressRoute gateway between each other. Once you enable the route exchange, ExpressRoute and the NVA will learn each other's routes.
+When you deploy an Azure Route Server along with an ExpressRoute gateway and an NVA in a virtual network, by default Azure Route Server doesn't propagate the routes it receives from the NVA and ExpressRoute gateway between each other. Once you enable the route exchange, ExpressRoute and the NVA will learn each other's routes.
For example, in the following diagram:
-* The SDWAN appliance will receive from Azure Route Server the route from "On-prem 2", which is connected to ExpressRoute, along with the virtual network route.
+* The SDWAN appliance will receive from Azure Route Server the route from "On-premises 2", which is connected to ExpressRoute, along with the virtual network route.
-* The ExpressRoute gateway will receive the route from "On-prem 1", which is connected to the SDWAN appliance, along with the virtual network route from Azure Route Server.
+* The ExpressRoute gateway will receive the route from "On-premises 1", which is connected to the SDWAN appliance, along with the virtual network route from Azure Route Server.
![Diagram showing ExpressRoute configured with Route Server.](./media/expressroute-vpn-support/expressroute-with-route-server.png)
route-server Tutorial Configure Route Server With Quagga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-configure-route-server-with-quagga.md
In this tutorial, you learn how to:
> * Configure Route Server peering
> * Check learned routes
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* An Azure subscription
## Create a virtual network

You'll need a virtual network to deploy both the Azure Route Server and the Quagga NVA into. Each deployment will have its own dedicated subnet.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
1. On the top left-hand side of the screen, select **Create a resource** and search for **Virtual Network**. Then select **Create**.

    :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-new-virtual-network.png" alt-text="Screenshot of create a new virtual network resource.":::
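If you'd rather script this step, a rough Azure CLI equivalent might look like the following (the names and address space are placeholders, not the tutorial's exact values):

```azurecli-interactive
# Create a resource group and a virtual network with a dedicated subnet for the NVA
az group create --name myRouteServerRG --location eastus
az network vnet create \
    --resource-group myRouteServerRG \
    --name myVirtualNetwork \
    --address-prefix 10.1.0.0/16 \
    --subnet-name QuaggaSubnet \
    --subnet-prefix 10.1.1.0/24
```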
search Cognitive Search Custom Skill Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-interface.md
If instead your function or app uses Azure managed identities and Azure roles fo
+ Your function or app must be [configured for Azure Active Directory](../app-service/configure-authentication-provider-aad.md).
-+ Your [custom skill definition](cognitive-search-custom-skill-web-api.md) must include an "authResourceId" property. This property takes an application (client) ID, in a [supported format](../active-directory/develop/security-best-practices-for-app-registration.md#appid-uri-configuration): `api://<appId>`.
++ Your [custom skill definition](cognitive-search-custom-skill-web-api.md) must include an "authResourceId" property. This property takes an application (client) ID, in a [supported format](../active-directory/develop/security-best-practices-for-app-registration.md#application-id-uri): `api://<appId>`.

By default, the connection to the endpoint will time out if a response is not returned within a 30-second window. The indexing pipeline is synchronous and indexing will produce a timeout error if a response is not received in that time frame. You can increase the interval to a maximum value of 230 seconds by setting the timeout parameter:
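For illustration, a hypothetical skill definition combining both settings might look like the following (the skillset name, function URL, and input/output shape are placeholders, and a preview API version is assumed for "authResourceId"):

```http
PUT https://[search service name].search.windows.net/skillsets/[skillset name]?api-version=2021-04-30-Preview
{
  "name": "my-skillset",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "description": "Custom skill secured with Azure AD",
      "uri": "https://contoso-skill.azurewebsites.net/api/enrich",
      "authResourceId": "api://<appId>",
      "timeout": "PT60S",
      "context": "/document",
      "inputs": [
        { "name": "text", "source": "/document/content" }
      ],
      "outputs": [
        { "name": "result", "targetName": "enrichedText" }
      ]
    }
  ]
}
```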
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Title: Configure the similarity algorithm
+ Title: Configure BM25 similarity algorithm
-description: Learn how to enable BM25 on older search services, and how BM25 parameters can be modified to better accommodate the content of your indexes.
+description: Enable Okapi BM25 ranking to upgrade the search ranking and relevance behavior on older Azure Search services.
--++ - Previously updated : 03/12/2021+ Last updated : 06/22/2022

# Configure the similarity ranking algorithm in Azure Cognitive Search
-Azure Cognitive Search supports two similarity ranking algorithms:
+Depending on the age of your search service, Azure Cognitive Search supports two [similarity ranking algorithms](index-similarity-and-scoring.md) for scoring relevance on full text search results:
-+ A *classic similarity* algorithm, used by all search services up until July 15, 2020.
-+ An implementation of the *Okapi BM25* algorithm, used in all search services created after July 15.
++ An *Okapi BM25* algorithm, used in all search services created after July 15, 2020
++ A *classic similarity* algorithm, used by all search services created before July 15, 2020
-BM25 ranking is the new default because it tends to produce search rankings that align better with user expectations. It comes with [parameters](#set-bm25-parameters) for tuning results based on factors such as document size.
+BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size.
-For new services created after July 15, 2020, BM25 is used automatically and is the sole similarity algorithm. If you try to set similarity to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
+For search services created after July 2020, BM25 is the sole similarity algorithm. If you try to set similarity to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
-For older services created before July 15, 2020, classic similarity remains the default algorithm. Older services can upgrade to BM25 on a per-index basis, as explained below. If you are switching from classic to BM25, you can expect to see some differences how search results are ordered.
+For older services, classic similarity remains the default algorithm. Older services can [upgrade to BM25](#enable-bm25-scoring-on-older-services) on a per-index basis. When switching from classic to BM25, you can expect to see some differences in how search results are ordered.
-> [!NOTE]
-> Semantic ranking, currently in preview for standard services in selected regions, is an additional step forward in producing more relevant results. Unlike the other algorithms, it is an add-on feature that iterates over an existing result set. For more information, see [Semantic search overview](semantic-search-overview.md) and [Semantic ranking](semantic-ranking.md).
+## Set BM25 parameters
+
+BM25 similarity adds two parameters to control the relevance score calculation. To set "similarity" parameters, issue a [Create or Update Index](/rest/api/searchservice/create-index) request as illustrated by the following example.
+
+Because Cognitive Search won't allow updates to a live index, you'll need to take the index offline so that the parameters can be added. Indexing and query requests will fail while the index is offline. The duration of the outage is the amount of time it takes to update the index, usually no more than several seconds. When the update is complete, the index comes back automatically. To take the index offline, append the "allowIndexDowntime=true" URI parameter on the request that sets the "similarity" property:
+
+```http
+PUT https://[search service name].search.windows.net/indexes/[index name]?api-version=2020-06-30&allowIndexDowntime=true
+{
+ "similarity": {
+ "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
+ "b" : 0.5,
+ "k1" : 1.3
+ }
+}
+```
+
+### BM25 property reference
+
+| Property | Type | Description |
+|-||-|
+| k1 | number | Controls the scaling function between the term frequency of each matching terms to the final relevance score of a document-query pair. Values are usually 0.0 to 3.0, with 1.2 as the default. </br></br>A value of 0.0 represents a "binary model", where the contribution of a single matching term is the same for all matching documents, regardless of how many times that term appears in the text, while a larger k1 value allows the score to continue to increase as more instances of the same term is found in the document. </br></br>Using a higher k1 value can be important in cases where we expect multiple terms to be part of a search query. In those cases, we might want to favor documents that match many of the different query terms being searched over documents that only match a single one, multiple times. For example, when querying the index for documents containing the terms "Apollo Spaceflight", we might want to lower the score of an article about Greek Mythology that contains the term "Apollo" a few dozen times, without mentions of "Spaceflight", compared to another article that explicitly mentions both "Apollo" and "Spaceflight" a handful of times only. |
+| b | number | Controls how the length of a document affects the relevance score. Values are between 0 and 1, with 0.75 as the default. </br></br>A value of 0.0 means the length of the document will not influence the score, while a value of 1.0 means the impact of term frequency on relevance score will be normalized by the document's length. </br></br>Normalizing the term frequency by the document's length is useful in cases where we want to penalize longer documents. In some cases, longer documents (such as a complete novel), are more likely to contain many irrelevant terms, compared to much shorter documents. |
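For reference, both parameters appear in the standard Okapi BM25 formula, where f(q_i, D) is the frequency of query term q_i in document D, |D| is the document length, and avgdl is the average document length in the index:

```latex
\mathrm{score}(D,Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot
  \frac{f(q_i, D)\,(k_1 + 1)}
       {f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}
```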
## Enable BM25 scoring on older services
-If you are running a search service that was created prior to July 15, 2020, you can enable BM25 by setting a Similarity property on new indexes. The property is only exposed on new indexes, so if want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a new Similarity property set to "Microsoft.Azure.Search.BM25Similarity".
+If you are running a search service that was created from March 2014 through July 15, 2020, you can enable BM25 by setting a "similarity" property on new indexes. The property is only exposed on new indexes, so if you want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a "similarity" property set to "Microsoft.Azure.Search.BM25Similarity".
-Once an index exists with a Similarity property, you can switch between BM25Similarity or ClassicSimilarity.
+Once an index exists with a "similarity" property, you can switch between `BM25Similarity` or `ClassicSimilarity`.
The following links describe the Similarity property in the Azure SDKs.
PUT https://[search service name].search.windows.net/indexes/[index name]?api-ve
} ```
-## Set BM25 parameters
-
-BM25 similarity adds two user customizable parameters to control the calculated relevance score. You can set BM25 parameters during index creation, or as an index update if the BM25 algorithm was specified during index creation.
-
-| Property | Type | Description |
-|-||-|
-| k1 | number | Controls the scaling function between the term frequency of each matching terms to the final relevance score of a document-query pair. Values are usually 0.0 to 3.0, with 1.2 as the default. </br></br>A value of 0.0 represents a "binary model", where the contribution of a single matching term is the same for all matching documents, regardless of how many times that term appears in the text, while a larger k1 value allows the score to continue to increase as more instances of the same term is found in the document. </br></br>Using a higher k1 value can be important in cases where we expect multiple terms to be part of a search query. In those cases, we might want to favor documents that match many of the different query terms being searched over documents that only match a single one, multiple times. For example, when querying the index for documents containing the terms "Apollo Spaceflight", we might want to lower the score of an article about Greek Mythology that contains the term "Apollo" a few dozen times, without mentions of "Spaceflight", compared to another article that explicitly mentions both "Apollo" and "Spaceflight" a handful of times only. |
-| b | number | Controls how the length of a document affects the relevance score. Values are between 0 and 1, with 0.75 as the default. </br></br>A value of 0.0 means the length of the document will not influence the score, while a value of 1.0 means the impact of term frequency on relevance score will be normalized by the document's length. </br></br>Normalizing the term frequency by the document's length is useful in cases where we want to penalize longer documents. In some cases, longer documents (such as a complete novel), are more likely to contain many irrelevant terms, compared to much shorter documents. |
-
-### Setting k1 and b parameters
-
-To set or modify b or k1 values, add them to the BM25 similarity object. Setting or changing these values on an existing index will take the index offline for at least a few seconds, causing active indexing and query requests to fail. Consequently, you should set the "allowIndexDowntime=true" parameter of the update request:
-
-```http
-PUT https://[search service name].search.windows.net/indexes/[index name]?api-version=2020-06-30&allowIndexDowntime=true
-{
- "similarity": {
- "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
- "b" : 0.5,
- "k1" : 1.3
- }
-}
-```
## See also

++ [Similarity and scoring in Azure Cognitive Search](index-similarity-and-scoring.md)
+ [REST API Reference](/rest/api/searchservice/)
+ [Add scoring profiles to your index](index-add-scoring-profiles.md)
+ [Create Index API](/rest/api/searchservice/create-index)
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Title: Similarity and scoring overview
+ Title: Similarity and scoring
-description: Explains the concepts of similarity and scoring, and what a developer can do to customize the scoring result.
+description: Explains the concepts of similarity and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result.
Previously updated : 11/30/2021 Last updated : 06/22/2022
-# Similarity and scoring in Azure Cognitive Search
-
-This article describes the similarity ranking algorithms used by Azure Cognitive Search to determine which matching documents are the most relevant in a [full text search query](search-lucene-query-architecture.md). This article also introduces two related features: *scoring profiles* (criteria for boosting the relevance of a specific match) and the *featuresMode* parameter (unpacks a search score to show more detail).
-
-> [!NOTE]
-> A third [semantic re-ranking algorithm](semantic-ranking.md) is currently in public preview. For more information, start with [Semantic search overview](semantic-search-overview.md).
-
-## Similarity ranking algorithms
-Azure Cognitive Search supports two similarity ranking algorithms.
+# Similarity and scoring in Azure Cognitive Search
-| Algorithm | Score | Availability |
-|--|-|--|
-| BM25Similarity | @search.score | Used by all search services created after July 15, 2020. |
-| ClassicSimilarity | @search.score | Used by all search services created from March 2014 through July 15, 2020. Older services that use classic by default can [opt in to BM25](index-ranking-similarity.md). |
+This article describes relevance scoring and the similarity ranking algorithms used to rank search results in Azure Cognitive Search. A relevance score applies to matches returned in a [full text search query](search-lucene-query-architecture.md). Filter queries, autocomplete and suggested queries, and wildcard or fuzzy search queries are not scored or ranked.
-Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research. BM25 also offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms.
+In Azure Cognitive Search, you can tune search relevance and boost search scores through these mechanisms:
-The following video segment fast-forwards to an explanation of the generally available ranking algorithms used in Azure Cognitive Search. You can watch the full video for more background.
-
-> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=322&end=643]
++ Similarity ranking configuration
++ Semantic ranking (in preview, described in [this article](semantic-ranking.md))
++ Scoring profiles
++ Custom scoring logic enabled through the *featuresMode* parameter

## Relevance scoring
-Scoring refers to the computation of a search score for every item returned in search results for full text search queries. The score is an indicator of an item's relevance in the context of the current query. The higher the score, the more relevant the item. In search results, items are rank ordered from high to low, based on the search scores calculated for each item. The score is returned in the response as "@search.score" on every document.
+Relevance scoring refers to the computation of a search score for every item returned in search results for full text search queries. The score is an indicator of an item's relevance in the context of the current query. The higher the score, the more relevant the item.
-By default, the top 50 are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results.
+In search results, items are rank ordered from high to low, based on the search scores calculated for each item. The score is returned in the response as "@search.score" on every document. By default, the top 50 are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results.
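For example, a request for the second page of 10 results might look like the following (service and index names are placeholders):

```http
GET https://[search service name].search.windows.net/indexes/[index name]/docs?search=hotel&$top=10&$skip=10&api-version=2020-06-30
```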
The search score is computed based on statistical properties of the data and the query. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#query-parameters)), favoring documents that contain many instances of the search term. The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF* or term frequency-inverse document frequency.
If you want to break the tie among repeating scores, you can add an **$orderby**
> [!NOTE]
> A `@search.score = 1` indicates an un-scored or un-ranked result set. The score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`, sometimes paired with filters, where the filter is the primary means for returning a match).
+## Similarity ranking algorithms
+
+Azure Cognitive Search provides the `BM25Similarity` ranking algorithm. On older search services, you might be using `ClassicSimilarity`.
+
+Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking results. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research.
+
+BM25 offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms. For more information, see [Configure the similarity ranking algorithm](index-ranking-similarity.md).
+
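As background, the widely published textbook form of the BM25 scoring function shows where those customization options plug in (this is the general formulation, not an excerpt from the service's implementation):

```latex
\mathrm{score}(D, Q) = \sum_{t \in Q} \mathrm{IDF}(t) \cdot
  \frac{f(t, D)\,(k_1 + 1)}{f(t, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}
```

Here `k1` controls how quickly the score saturates as term frequency `f(t, D)` grows, and `b` controls how strongly scores are normalized by document length `|D|` relative to the average (`avgdl`). Both surface as index properties. A minimal sketch, assuming a new index and the commonly cited default values:

```http
PUT https://[service name].search.windows.net/indexes/hotels-sample-index?api-version=2020-06-30
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true },
    { "name": "Description", "type": "Edm.String", "searchable": true }
  ],
  "similarity": {
    "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
    "k1": 1.2,
    "b": 0.75
  }
}
```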
+> [!NOTE]
+> If you're using a search service that was created before July 2020, the similarity algorithm is most likely the previous default, `ClassicSimilarity`, which you can upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
+
+The following video segment fast-forwards to an explanation of the generally available ranking algorithms used in Azure Cognitive Search. You can watch the full video for more background.
+
+> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=322&end=643]
+
<a name="scoring-statistics"></a>

## Scoring statistics and sticky sessions
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
layout: LandingPage Previously updated : 05/27/2022 Last updated : 06/21/2022
Connect to Azure Storage through Azure Files share to extract content serialized
## Data sources from our Partners
-Data source connectors are also provided by third-party Microsoft partners. See our [Terms of Use statement](search-data-sources-terms-of-use.md) and check the partner licensing and usage instructions before using a data source.
+Data source connectors are also provided by third-party Microsoft partners. See our [Terms of Use statement](search-data-sources-terms-of-use.md) and check the partner licensing and usage instructions before using a data source. These connectors are implemented and supported by each partner and aren't part of the Cognitive Search built-in indexers.
:::row::: :::column span="":::
search Search Dotnet Sdk Migration Version 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-11.md
In terms of service version updates, where code changes in version 11 relate to
+ [Ordered results](search-query-odata-orderby.md) for null values have changed in this version, with null values appearing first if the sort is `asc` and last if the sort is `desc`. If you wrote code to handle how null values are sorted, you should review and potentially remove that code if it's no longer necessary.
+Due to these behavior changes, it's likely that you'll see slight variations in ranked results.
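For example, assuming an index with a sortable, nullable Rating field (a hypothetical name), the following query now places documents with a null Rating at the end of the results:

```http
GET https://[service name].search.windows.net/indexes/hotels-sample-index/docs?api-version=2020-06-30&search=*&$orderby=Rating desc
```

With `$orderby=Rating asc`, those same null-valued documents sort first instead.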
+
## Next steps

+ [How to use Azure.Search.Documents in a C# .NET Application](search-howto-dotnet-sdk.md)
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
description: Learn how to set up an indexer connection to a Cosmos DB account us
-+ Previously updated : 02/11/2022 Last updated : 06/20/2022
Before learning more about this feature, it is recommended that you have an unde
* [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Cosmos DB. For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Cosmos DB APIs supported by Cognitive Search.
+* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Cosmos DB. For data reader access, you'll need the **Cosmos DB Account Reader** role assigned to the identity used to make the request. This role works for all Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role. At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Cosmos DB.
The easiest way to test the connection is using the [Import data wizard](search-import-data-portal.md). The wizard supports data source connections for both system and user managed identities.
Check to see if the Cosmos DB account has its access restricted to select networ
* [Azure Cosmos DB indexer using SQL API](search-howto-index-cosmosdb.md) * [Azure Cosmos DB indexer using MongoDB API](search-howto-index-cosmosdb-mongodb.md)
-* [Azure Cosmos DB indexer using Gremlin API](search-howto-index-cosmosdb-gremlin.md)
+* [Azure Cosmos DB indexer using Gremlin API](search-howto-index-cosmosdb-gremlin.md)
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
Previously updated : 02/02/2022 Last updated : 06/21/2022 # Configure IP firewall rules to allow indexer connections from Azure Cognitive Search
This article explains how to find the IP address of your search service and conf
aliases: contoso.search.windows.net ```
+## Get the Azure portal IP address
+
+If you're using the Azure portal or the [Import Data wizard](search-import-data-portal.md) to create an indexer, you'll need an inbound rule for the Azure portal.
+
+To get the portal IP address, perform `nslookup` (or `ping`) on `stamp2.ext.search.windows.net`, which is the domain of the traffic manager.
+
+For nslookup, the IP address will be visible in the "Non-authoritative answer" portion of the response. For ping, the request will time out, but the IP address will be visible in the response. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
+
+Clusters in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region.
+
## Get IP addresses for "AzureCognitiveSearch" service tag

We also require customers to create an inbound rule that allows requests from the [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment) to ensure we optimize the resource availability for search services. This step explains how to get the range of IP addresses needed for this inbound rule.
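One way to retrieve those ranges is the service tag discovery API, sketched below (the subscription ID, region, and bearer token are placeholders you supply). Filter the response for the `AzureCognitiveSearch` entry and keep only the ranges listed for your search service's region:

```http
GET https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Network/locations/{region}/serviceTags?api-version=2020-06-01
Authorization: Bearer {access-token}
```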
Now that you have the necessary IP addresses, you can set up the inbound rule. T
:::image type="content" source="media\search-indexer-howto-secure-access\storage-firewall.png" alt-text="Screenshot of Azure Storage Firewall and virtual networks page" border="true":::
-1. Add the IP addresses obtained previously (one for the search service IP, plus all of the IP ranges for the "AzureCognitiveSearch" service tag) in the address range and select **Save**.
+1. Add the IP addresses obtained previously in the address range and select **Save**. You should have rules for the search service IP, the Azure portal IP (optional), and all of the IP ranges for the "AzureCognitiveSearch" service tag for your region.
:::image type="content" source="media\search-indexer-howto-secure-access\storage-firewall-ip.png" alt-text="Screenshot of the IP address section of the page." border="true":::
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Private endpoints created through Azure Cognitive Search APIs are referred to as
+ Connections from the search client should be programmatic, either REST APIs or an Azure SDK, rather than through the Azure portal. The device must connect using an authorized IP in the Azure PaaS resource's firewall rules.

++ Indexer execution must use the private execution environment that's specific to your search service. Private endpoint connections aren't supported from the multi-tenant environment.

> [!NOTE]
> When using Private Link for data sources, Azure portal access (from Cognitive Search to your content) - such as through the [Import data](search-import-data-portal.md) wizard - is not supported.
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
Last updated 06/06/2022
An *indexer* in Azure Cognitive Search is a crawler that extracts searchable content from cloud data sources and populates a search index using field-to-field mappings between source data and a search index. This approach is sometimes referred to as a 'pull model' because the search service pulls data in without you having to write any code that adds data to an index. Indexers also drive the [AI enrichment](cognitive-search-concept-intro.md) capabilities of Cognitive Search, integrating external processing of content en route to an index.
-Indexers are cloud-only, with individual indexers for [supported data sources](#supported-data-sources). When configuring an indexer, you'll specify a data source (origin) and a search index (destination). Several sources, such as Azure Blob Storage, have additional configuration properties specific to that content type.
+Indexers are cloud-only, with individual indexers for [supported data sources](#supported-data-sources). When configuring an indexer, you'll specify a data source (origin) and a search index (destination). Several sources, such as Azure Blob Storage, have more configuration properties specific to that content type.
You can run indexers on demand or on a recurring data refresh schedule that runs as often as every five minutes. More frequent updates require a ['push model'](search-what-is-data-import.md) that simultaneously updates data in both Azure Cognitive Search and your external data source.

## Indexer scenarios and use cases
-You can use an indexer as the sole means for data ingestion, or as part of a combination of techniques that load and optionally transform or enrich content along the way. The following table summarizes the main scenarios.
+You can use an indexer as the sole means for data ingestion, or in combination with other techniques. The following table summarizes the main scenarios.
| Scenario | Strategy |
|----------|----------|
Indexers crawl data stores on Azure and outside of Azure.
Indexers accept flattened row sets, such as a table or view, or items in a container or folder. In most cases, an indexer creates one search document per row, record, or item.
-Indexer connections to remote data sources can be made using standard Internet connections (public) or encrypted private connections when you use Azure virtual networks for client apps. You can also set up connections to authenticate using a managed identity. For more information about secure connections, see [Granting access via private endpoints](search-indexer-securing-resources.md#granting-access-via-private-endpoints) and [Connect to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
+Indexer connections to remote data sources can be made using standard Internet connections (public) or encrypted private connections when you use Azure virtual networks for client apps. You can also set up connections to authenticate using a managed identity. For more information about secure connections, see [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md) and [Connect to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
## Stages of indexing
Document cracking is the process of opening files and extracting content. Text-b
Depending on the data source, the indexer will try different operations to extract potentially indexable content:
-+ When the document is a file, such as a PDF or other supported file format in [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md#supported-document-formats), the indexer will open the file and extract text, images, and metadata. Indexers can also open files from [SharePoint](search-howto-index-sharepoint-online.md#supported-document-formats) and [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md#supported-document-formats).
++ When the document is a file with embedded images, such as a PDF, the indexer extracts text, images, and metadata. Indexers can open files from [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md#supported-document-formats), [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md#supported-document-formats), and [SharePoint](search-howto-index-sharepoint-online.md#supported-document-formats).

+ When the document is a record in [Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md), the indexer will extract non-binary content from each field in each record.
Field mapping occurs after document cracking, but before transformations, when t
### Stage 3: Skillset execution
-Skillset execution is an optional step that invokes built-in or custom AI processing. You might need it for optical character recognition (OCR) in the form of image analysis if the source data is a binary image, or you might need text translation if content is in different languages.
+Skillset execution is an optional step that invokes built-in or custom AI processing. Skillsets can add optical character recognition (OCR) or other forms of image analysis if the content is binary. Skillsets can also add natural language processing. For example, you can add text translation or key phrase extraction.
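To make that concrete, here's a minimal skillset sketch that adds key phrase extraction (the skillset name is hypothetical, and a real definition typically carries more properties):

```http
PUT https://[service name].search.windows.net/skillsets/my-skillset?api-version=2020-06-30
{
  "name": "my-skillset",
  "description": "Extract key phrases from cracked document text",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
      "context": "/document",
      "inputs": [ { "name": "text", "source": "/document/content" } ],
      "outputs": [ { "name": "keyPhrases", "targetName": "keyPhrases" } ]
    }
  ]
}
```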
Whatever the transformation, skillset execution is where enrichment occurs. If an indexer is a pipeline, you can think of a [skillset](cognitive-search-defining-skillset.md) as a "pipeline within the pipeline". ### Stage 4: Output field mappings
-If you include a skillset, you will need to [specify output field mappings](cognitive-search-output-field-mapping.md) in the indexer definition. The output of a skillset is manifested internally as a tree structure referred to as an *enriched document*. Output field mappings allow you to select which parts of this tree to map into fields in your index.
+If you include a skillset, you'll need to [specify output field mappings](cognitive-search-output-field-mapping.md) in the indexer definition. The output of a skillset is manifested internally as a tree structure referred to as an *enriched document*. Output field mappings allow you to select which parts of this tree to map into fields in your index.
-Despite the similarity in names, output field mappings and field mappings build associations from different sources. Field mappings associate the content of source field to a destination field in a search index. Output field mappings associate the content of an internal enriched document (skill outputs) to destination fields in the index. Unlike field mappings, which are considered optional, you will always need to define an output field mapping for any transformed content that needs to reside in an index.
+Despite the similarity in names, output field mappings and field mappings build associations from different sources. Field mappings associate the content of source field to a destination field in a search index. Output field mappings associate the content of an internal enriched document (skill outputs) to destination fields in the index. Unlike field mappings, which are considered optional, an output field mapping is required for any transformed content that should be in the index.
The next image shows a sample indexer [debug session](cognitive-search-debug-session.md) representation of the indexer stages: document cracking, field mappings, skillset execution, and output field mappings.
Indexers can offer features that are unique to the data source. In this respect,
Indexers require a *data source* object that provides a connection string and possibly credentials. Call the [Create Data Source (REST)](/rest/api/searchservice/create-data-source) or [SearchIndexerDataSourceConnection class](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection) to create the resource.
-Data sources are configured and managed independently of the indexers that use them, which means a data source can be used by multiple indexers to load more than one index at a time.
+Data sources are independent objects. Multiple indexers can use the same data source object to load more than one index at a time.
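A minimal data source sketch for Azure Blob Storage (the object names and connection string are placeholders):

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
{
  "name": "my-blob-datasource",
  "type": "azureblob",
  "credentials": { "connectionString": "{storage-connection-string}" },
  "container": { "name": "my-container" }
}
```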
### Step 2: Create an index
-An indexer will automate some tasks related to data ingestion, but creating an index is generally not one of them. As a prerequisite, you must have a predefined index with fields that match those in your external data source. Fields need to match by name and data type. If not, you can [define field mappings](search-indexer-field-mappings.md) to establish the association. For more information about structuring an index, see [Create an Index (REST)](/rest/api/searchservice/Create-Index) or [SearchIndex class](/dotnet/api/azure.search.documents.indexes.models.searchindex).
+An indexer will automate some tasks related to data ingestion, but creating an index is generally not one of them. As a prerequisite, you must have a predefined index that contains corresponding target fields for any source fields in your external data source. Fields need to match by name and data type. If not, you can [define field mappings](search-indexer-field-mappings.md) to establish the association. For more information about structuring an index, see [Create an Index (REST)](/rest/api/searchservice/Create-Index) or [SearchIndex class](/dotnet/api/azure.search.documents.indexes.models.searchindex).
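For example, a bare-bones index sketch with a key field and one searchable field (field names are hypothetical and should mirror your source):

```http
PUT https://[service name].search.windows.net/indexes/my-index?api-version=2020-06-30
{
  "name": "my-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "content", "type": "Edm.String", "searchable": true }
  ]
}
```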
> [!Tip] > Although indexers cannot generate an index for you, the **Import data** wizard in the portal can help. In most cases, the wizard can infer an index schema from existing metadata in the source, presenting a preliminary index schema which you can edit in-line while the wizard is active. Once the index is created on the service, further edits in the portal are mostly limited to adding new fields. Consider the wizard for creating, but not revising, an index. For hands-on learning, step through the [portal walkthrough](search-get-started-portal.md).
An indexer will automate some tasks related to data ingestion, but creating an i
By default, the first indexer execution occurs when you [create an indexer](/rest/api/searchservice/Create-Indexer) on the search service. You can set the "disabled" property in an indexer to create it without running it.
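A minimal indexer sketch that ties the earlier hypothetical objects together, with `disabled` shown explicitly and a schedule that runs every two hours:

```http
PUT https://[service name].search.windows.net/indexers/my-indexer?api-version=2020-06-30
{
  "name": "my-indexer",
  "dataSourceName": "my-blob-datasource",
  "targetIndexName": "my-index",
  "schedule": { "interval": "PT2H" },
  "disabled": false
}
```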
-During indexer execution is when you'll find out if the data source is accessible or the skillset is valid. Until indexer execution starts, dependent objects such as data sources and skillsets are inactive on the search service.
+Any errors or warnings about data access or skillset validation will occur during indexer execution. Until indexer execution starts, dependent objects such as data sources and skillsets are passive on the search service.
-After the first indexer run, you can re-run it on demand using [Run Indexer](/rest/api/searchservice/run-indexer), or you can [define a recurring schedule](search-howto-schedule-indexers.md).
+After the first indexer run, you can rerun it on demand using [Run Indexer](/rest/api/searchservice/run-indexer), or you can [define a recurring schedule](search-howto-schedule-indexers.md).
You can monitor [indexer status in the portal](search-howto-monitor-indexers.md) or through [Get Indexer Status API](/rest/api/searchservice/get-indexer-status). You should also [run queries on the index](search-query-create.md) to verify the result is what you expected.
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
Previously updated : 03/30/2022 Last updated : 06/20/2022
-# Indexer access to content protected by Azure network security features
+# Indexer access to content protected by Azure network security
-Azure Cognitive Search indexers can make outbound calls to various Azure resources during execution. This article explains the concepts behind indexer access to content that's protected by IP firewalls, private endpoints, or other Azure network-level security mechanisms.
+If your search application requirements include an Azure virtual network, this concept article explains how a search indexer can access content that's protected by network security. It describes the outbound traffic patterns and indexer execution environments. It also covers the network protections supported by Cognitive Search and factors that might influence your approach. Finally, because Azure Storage is used for both data access and persistent storage, this article also covers network considerations that are specific to search and storage connectivity.
+
+Looking for step-by-step instructions instead? See [How to configure firewall rules to allow indexer access](search-indexer-howto-access-ip-restricted.md) or [How to make outbound connections through a private endpoint](search-indexer-howto-access-private.md).
## Resources accessed by indexers
-An indexer makes outbound calls in three situations:
+Azure Cognitive Search indexers can make outbound calls to various Azure resources during execution. An indexer makes outbound calls in three situations:
- Connecting to external data sources during indexing
-- Connecting to external, encapsulated code through a skillset
+- Connecting to external, encapsulated code through a skillset that includes custom skills
- Connecting to Azure Storage during skillset execution to cache enrichments, save debug session state, or write to a knowledge store
-A list of all possible resource types that an indexer might access in a typical run are listed in the table below.
+All possible Azure resource types that an indexer might access in a typical run are listed in the table below.
| Resource | Purpose within indexer run |
|----------|----------------------------|
A list of all possible resource types that an indexer might access in a typical
> [!NOTE] > An indexer also connects to Cognitive Services for built-in skills. However, that connection is made over the internal network and isn't subject to any network provisions under your control.
+## Supported network protections
+
Your Azure resources could be protected using any number of the network isolation mechanisms offered by Azure. Depending on the resource and region, Cognitive Search indexers can make outbound connections through IP firewalls and private endpoints, subject to the limitations indicated in the following table.

| Resource | IP restriction | Private endpoint |
|----------|----------------|------------------|
Your Azure resources could be protected using any number of the network isolatio
| SQL Managed Instance | Supported | N/A |
| Azure Functions | Supported | Supported, only for certain tiers of Azure functions |
-### Access to a network-protected storage account
+## Indexer execution environment
-A search service stores indexes and synonym lists. For other features that require storage, Cognitive Search takes a dependency on Azure Storage. Enrichment caching, debug sessions, and knowledge stores fall into this category. The location of each service, and any network protections in place for storage, will determine your data access strategy.
+Azure Cognitive Search has the concept of an *indexer execution environment* that optimizes processing based on the characteristics of the job. There are two environments. If you're using an IP firewall to control access to Azure resources, knowing about execution environments will help you set up an IP range that is inclusive of both.
-#### Same-region services
+For any given indexer run, Azure Cognitive Search determines the best environment in which to run the indexer. Depending on the number and types of tasks assigned, the indexer will run in one of two environments:
-In Azure Storage, access through a firewall requires that the request originates from a different region. If Azure Storage and Azure Cognitive Search are in the same region, you can bypass the IP restrictions on the storage account by accessing data under the system identity of the search service.
+- A *private execution environment* that's internal to a search service.
-There are two options for supporting data access using the system identity:
+ Indexers running in the private environment share computing resources with other indexing and query workloads on the same search service. Typically, only indexers that perform text-based indexing (without skillsets) run in this environment.
-- Configure search to run as a [trusted service](search-indexer-howto-access-trusted-service-exception.md) and use the [trusted service exception](../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) in Azure Storage.
+- A *multi-tenant environment* that's managed and secured by Microsoft at no extra cost. It isn't subject to any network provisions under your control.
-- Configure a [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances) in Azure Storage that admits inbound requests from an Azure resource.
+ This environment is used to offload computationally intensive processing, leaving service-specific resources available for routine operations. Examples of resource-intensive indexer jobs include attaching skillsets, processing large documents, or processing a high volume of documents.
-The above options depend on Azure Active Directory for authentication, which means that the connection must be made with an Azure AD login. Currently, only a Cognitive Search [system-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity) is supported for same-region connections through a firewall.
+The following section explains the IP configuration for admitting requests from either execution environment.
-#### Services in different regions
+### Setting up IP ranges for indexer execution
-When search and storage are in different regions, you can use the previously mentioned options or set up IP rules that admit requests from your service. Depending on the workload, you might need to set up rules for multiple execution environments as described in the next section.
+If the Azure resource that provides source data exists behind a firewall, you'll need [inbound rules that admit indexer connections](search-indexer-howto-access-ip-restricted.md) for all of the IPs from which an indexer request can originate. The IPs include those used by the search service and the multi-tenant environment.
-## Indexer execution environment
-- To obtain the IP address of the search service (and the private execution environment), you'll run `nslookup` (or `ping`) against the fully qualified domain name (FQDN) of your search service. The FQDN of a search service in the public cloud would be `<service-name>.search.windows.net`.
-Azure Cognitive Search indexers are capable of efficiently extracting content from data sources, adding enrichments to the extracted content, optionally generating projections before writing the results to the search index.
+- To obtain the IP addresses of the multi-tenant environments within which an indexer might run, you'll use the `AzureCognitiveSearch` service tag.
-For optimum processing, a search service will determine an internal execution environment to set up the operation. Depending on the number and types of tasks assigned, the indexer will run in one of two environments:
+ [Azure service tags](../virtual-network/service-tags-overview.md) have a published range of IP addresses for each service. You can find these IPs using the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or a [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). IP ranges are allocated by region, so check your search service region before you start.
-- An environment private to a specific search service. Indexers running in such environments share resources with other workloads (such as other customer-initiated indexing or querying workloads). Typically, only indexers that perform text-based indexing (for example, do not use a skillset) run in this environment.
+When setting the IP rule for the multi-tenant environment, certain SQL data sources support a simple approach for IP address specification. Instead of enumerating all of the IP addresses in the rule, you can create a [Network Security Group rule](../virtual-network/network-security-groups-overview.md) that specifies the `AzureCognitiveSearch` service tag.
-- A multi-tenant environment hosting indexers that are resource intensive - such as indexers with skillsets, indexers processing big documents, indexers processing a lot of documents and so on. This environment is used to offload computationally intensive processing, leaving service-specific resources available for routine operations. This multi-tenant environment is managed and secured by Microsoft, at no extra cost to the customer.
+You can specify the service tag if your data source is either:
-For any given indexer run, Azure Cognitive Search determines the best environment in which to run the indexer. If you're using an IP firewall to control access to Azure resources, knowing about execution environments will help you set up an IP range that is inclusive of both, as discussed in the next section.
+- [SQL Server on Azure virtual machines](./search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md#restrict-access-to-the-azure-cognitive-search)
-## Granting access to indexer IP ranges
+- [SQL Managed Instances](./search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md#verify-nsg-rules)
-If the resource that your indexer pulls data from exists behind a firewall, you'll need [inbound rules that admit indexer connections](search-indexer-howto-access-ip-restricted.md). Make sure that the IP ranges in inbound rules include all of the IPs from which an indexer request can originate. As stated above, there are two possible environments in which indexers run and from which access requests can originate. You'll need to add the IP addresses of **both** environments for indexer access to work.
+Notice that if you specified the service tag for the multi-tenant environment IP rule, you'll still need an explicit inbound rule for the private execution environment (meaning the search service itself), as obtained through `nslookup`.
-- To obtain the IP address of the search service private environment, use `nslookup` (or `ping`) the fully qualified domain name (FQDN) of your search service. The FQDN of a search service in the public cloud would be `<service-name>.search.windows.net`.
+## Choosing a connectivity approach
-- To obtain the IP addresses of the multi-tenant environments within which an indexer might run, use the `AzureCognitiveSearch` service tag. [Azure service tags](../virtual-network/service-tags-overview.md) have a published range of IP addresses for each service. You can find these IPs using the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or a [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). In either case, IP ranges are broken down by region. You should specify only those IP ranges assigned to the region in which your search service is provisioned.
+When integrating Azure Cognitive Search into a solution that runs on a virtual network, consider the following constraints:
-For certain data sources, the service tag itself can be used directly instead of enumerating the list of IP ranges (the IP address of the search service still needs to be used explicitly). These data sources restrict access by means of setting up a [Network Security Group rule](../virtual-network/network-security-groups-overview.md), which natively support adding a service tag, unlike IP rules such as the ones offered by Azure Storage, Cosmos DB, Azure SQL, and so forth. The data sources that support the ability to utilize the `AzureCognitiveSearch` service tag directly in addition to search service IP address are:
+- An indexer can't make a direct connection to a [virtual network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md). Public endpoints with credentials, private endpoints, trusted service, and IP addressing are the only supported methodologies for indexer connections.
-- [SQL Server on Azure virtual machines](./search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md#restrict-access-to-the-azure-cognitive-search)
+- A search service always runs in the cloud and can't be provisioned into a specific virtual network or run natively on a virtual machine. This functionality won't be offered by Azure Cognitive Search.
-- [SQL Managed Instances](./search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md#verify-nsg-rules)
+Given the above constraints, your choices for achieving search integration in a virtual network are:
-## Granting access via private endpoints
+- Configure an inbound firewall rule on your Azure resource that admits indexer requests for data.
-When integrating Azure Cognitive Search into a solution that runs on a virtual network, consider the following constraints:
+- Configure an outbound connection that makes indexer connections using a [private endpoint](../private-link/private-endpoint-overview.md).
-- An indexer can't make a direct connection to a [virtual network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md). Public endpoints with credentials, private endpoints, trusted service, and IP addressing are the only supported methodologies for indexer connections.-- A search service always runs in the cloud and can't be provisioned into a specific virtual network, running natively on a virtual machine. This functionality will not be offered by Azure Cognitive Search.
+ For a private endpoint, the search service connection to your protected resource is through a *shared private link*. A shared private link is an [Azure Private Link](../private-link/private-link-overview.md) resource that's created, managed, and used from within Cognitive Search. If your resources are fully locked down (running on a protected virtual network, or otherwise not available over a public connection), a private endpoint is your only choice.
+
+ Connections through a private endpoint must originate from the search service's private execution environment. To meet this requirement, you'll have to disable multi-tenant execution. This step is described in [Make outbound connections through a private endpoint](search-indexer-howto-access-private.md).
+
+Configuring an IP firewall is free. A private endpoint, which is based on Azure Private Link, has a billing impact.
-To achieve integration, you can use [private endpoints](../private-link/private-endpoint-overview.md) on outbound connections to resources that are locked down (running on a protected virtual network, or just not available over a public connection).
+### Working with a private endpoint
-The mechanism by which a search service connects to your protected resource is through a shared private link. A shared private link is [Azure Private Link](../private-link/private-link-overview.md) resource that's created, managed, and used from within Cognitive Search.
+This section summarizes the main steps for setting up a private endpoint for outbound indexer connections. This summary might help you decide whether a private endpoint is the best choice for your scenario. Detailed steps are covered in [How to make outbound connections through a private endpoint](search-indexer-howto-access-private.md).
-### Billing impact
+#### Billing impact of Azure Private Link
- A shared private link requires a billable search service, where the minimum tier is either Basic for text-based indexing or Standard 2 (S2) for skills-based indexing. See [tier limits on the number of private endpoints](search-limits-quotas-capacity.md#shared-private-link-resource-limits) for details.

- Inbound and outbound connections are subject to [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-### Step 1: Create a private endpoint to the secure resource
+#### Step 1: Create a private endpoint to the secure resource
-In Azure Cognitive Search, you can create a shared private link using either the portal or a [management API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update).
+You'll create a shared private link using either the portal pages of your search service or the [Management API](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update).
-Traffic that goes over this (outbound) private endpoint connection will originate only from the virtual network that's in the search service specific "private" indexer execution environment.
+In Azure Cognitive Search, your search service must be at least the Basic tier for text-based indexers, and S2 for indexers with skillsets.
-Azure Cognitive Search will validate that callers of this API have Azure RBAC role permissions to approve private endpoint connection requests to the secure resource. For example, if you request a private endpoint connection to a storage account with read-only permissions, this call will be rejected.
+A private endpoint connection will accept requests from the private indexer execution environment, but not the multi-tenant environment. You'll need to disable multi-tenant execution as described in step 3 to meet this requirement.
-### Step 2: Approve the private endpoint connection
+#### Step 2: Approve the private endpoint connection
When the (asynchronous) operation that creates a shared private link resource completes, a private endpoint connection will be created in a "Pending" state. No traffic flows over the connection yet.
-The customer is then expected to locate this request on their secure resource and "Approve" it. Typically, this can be done either via the Azure portal or via the [REST API](/rest/api/virtualnetwork/privatelinkservices/updateprivateendpointconnection).
+You'll need to locate and approve this request on your secure resource. Depending on the resource, you can complete this task using the Azure portal. Otherwise, use the [Private Link Service REST API](/rest/api/virtualnetwork/privatelinkservices/updateprivateendpointconnection).
-### Step 3: Force indexers to run in the "private" environment
+#### Step 3: Force indexers to run in the "private" environment
-An approved private endpoint allows outgoing calls from the search service to a resource that has some form of network level access restrictions (for example a storage account data source that is configured to only be accessed from certain virtual networks) to succeed.
+For private endpoint connections, it's mandatory to set the `executionEnvironment` of the indexer to `"Private"`. This step ensures that all indexer execution is confined to the private environment provisioned within the search service.
-This means any indexer that is able to reach out to such a data source over the private endpoint will succeed.
-If the private endpoint isn't approved, or if the indexer doesn't utilize the private endpoint connection then the indexer run will end up in `transientFailure`.
-
-To enable indexers to access resources via private endpoint connections, it's mandatory to set the `executionEnvironment` of the indexer to `"Private"` to ensure that all indexer runs will be able to utilize the private endpoint. This is because private endpoints are provisioned within the private search service-specific environment.
+This setting is scoped to an indexer and not the search service. If you want all indexers to connect over private endpoints, each one must have the following configuration:
```json {
To enable indexers to access resources via private endpoint connections, it's ma
} ```
-These steps are described in greater detail in [Indexer connections through a private endpoint](search-indexer-howto-access-private.md).
-Once you have an approved private endpoint to a resource, indexers that are set to be *private* attempt to obtain access via the private endpoint connection.
+Once you have an approved private endpoint to a resource, indexers that are set to be *private* attempt to obtain access via the private link that was created and approved for the Azure resource.
+
+Azure Cognitive Search will validate that callers of the private endpoint have appropriate Azure RBAC role permissions. For example, if you request a private endpoint connection to a storage account with read-only permissions, this call will be rejected.
+
+If the private endpoint isn't approved, or if the indexer didn't use the private endpoint connection, you'll find a `transientFailure` error message in indexer execution history.
+
+## Access to a network-protected storage account
+
+A search service stores indexes and synonym lists. For other features that require storage, Cognitive Search takes a dependency on Azure Storage. Enrichment caching, debug sessions, and knowledge stores fall into this category. The location of each service, and any network protections in place for storage, will determine your data access strategy.
+
+### Same-region services
+
+In Azure Storage, access through a firewall requires that the request originates from a different region. If Azure Storage and Azure Cognitive Search are in the same region, you can bypass the IP restrictions on the storage account by accessing data under the system identity of the search service.
+
+There are two options for supporting data access using the system identity:
+
+- Configure search to run as a [trusted service](search-indexer-howto-access-trusted-service-exception.md) and use the [trusted service exception](../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) in Azure Storage.
+
+- Configure a [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances) in Azure Storage that admits inbound requests from an Azure resource.
+
+The above options depend on Azure Active Directory for authentication, which means that the connection must be made with an Azure AD login. Currently, only a Cognitive Search [system-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity) is supported for same-region connections through a firewall.
+
+### Services in different regions
+
+When search and storage are in different regions, you can use the previously mentioned options or set up IP rules that admit requests from your service. Depending on the workload, you might need to set up rules for multiple execution environments as described in the next section.
## Next steps

-- [Indexer connections through IP firewalls](search-indexer-howto-access-ip-restricted.md)
-- [Indexer connections using the trusted service exception](search-indexer-howto-access-trusted-service-exception.md)
-- [Indexer connections to a private endpoint](search-indexer-howto-access-private.md)
+Now that you're familiar with indexer data access options for solutions deployed in an Azure virtual network, review either of the following how-to articles as your next step:
+
+- [How to make indexer connections to a private endpoint](search-indexer-howto-access-private.md)
+- [How to make indexer connections through an IP firewall](search-indexer-howto-access-ip-restricted.md)
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
Previously updated : 05/23/2022 Last updated : 06/20/2022 # Indexer troubleshooting guidance for Azure Cognitive Search
api-key: [admin key]
Azure Cognitive Search has an implicit dependency on Cosmos DB indexing. If you turn off automatic indexing in Cosmos DB, Azure Cognitive Search returns a successful state, but fails to index container contents. For instructions on how to check settings and turn on indexing, see [Manage indexing in Azure Cosmos DB](../cosmos-db/how-to-manage-indexing-policy.md#use-the-azure-portal). +
+## Indexer reflects a different document count than data source or index
+
+At any given point in time, the indexer can show a different document count than the data source, the index, or the count in your own code, depending on specific circumstances. Here are some possible causes:
+
+- The indexer has a deleted document policy. Deleted documents are counted by the indexer if they're indexed before they're deleted.
+- The ID column in the data source isn't unique. This applies to data sources that have the concept of a column, such as Cosmos DB.
+- The data source definition has a different query than the one you're using to estimate the number of records. For example, your query counts all the records in the database, while the data source definition query selects just a subset of records to index.
+- The counts are being checked at different intervals for each component of the pipeline: data source, indexer, and index.
+- The index can take a few minutes to reflect the actual document count.
+- The data source has a file that's mapped to many documents. This condition can occur when [indexing blobs](search-howto-index-json-blobs.md) and "parsingMode" is set to **`jsonArray`** or **`jsonLines`** (see the sketch after this list).
+- Documents were [processed multiple times](#documents-processed-multiple-times).
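As a sketch of that one-to-many case, assuming a blob container of JSON array files and the hypothetical objects named earlier, the indexer's parsing mode turns each array element into its own search document:

```http
PUT https://[service name].search.windows.net/indexers/my-json-indexer?api-version=2020-06-30
{
  "name": "my-json-indexer",
  "dataSourceName": "my-blob-datasource",
  "targetIndexName": "my-index",
  "parameters": { "configuration": { "parsingMode": "jsonArray" } }
}
```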
+
+
## Documents processed multiple times

Indexers use a conservative buffering strategy to ensure that every new and changed document in the data source is picked up during indexing. In certain situations, these buffers can overlap, causing an indexer to index a document two or more times, so the processed documents count can exceed the actual number of documents in the data source. This behavior does **not** affect the data stored in the index (documents aren't duplicated); it only means indexing can take longer to reach eventual consistency. This can be especially prevalent if any of the following conditions are true:
If you have [sensitivity labels set on documents](/microsoft-365/compliance/sens
## See also

* [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md)
-* [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
+* [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
search Search Query Fuzzy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-fuzzy.md
Title: Fuzzy search
-description: Implement a "did you mean" search experience to auto-correct a misspelled term or typo.
+description: Implement a fuzzy search query for a "did you mean" search experience. Fuzzy search will auto-correct a misspelled term or typo in the query.
- Previously updated : 03/03/2021+ Last updated : 06/22/2022 # Fuzzy search to correct misspellings and typos
Azure Cognitive Search supports fuzzy search, a type of query that compensates f
## What is fuzzy search?
-It's an expansion exercise that produces a match on terms having a similar composition. When a fuzzy search is specified, the engine builds a graph (based on [deterministic finite automaton theory](https://en.wikipedia.org/wiki/Deterministic_finite_automaton)) of similarly composed terms, for all whole terms in the query. For example, if your query includes three terms "university of washington", a graph is created for every term in the query `search=university~ of~ washington~` (there is no stop-word removal in fuzzy search, so "of" gets a graph).
+It's a query expansion exercise that produces a match on terms having a similar composition. When a fuzzy search is specified, the search engine builds a graph (based on [deterministic finite automaton theory](https://en.wikipedia.org/wiki/Deterministic_finite_automaton)) of similarly composed terms, for all whole terms in the query. For example, if your query includes three terms "university of washington", a graph is created for every term in the query `search=university~ of~ washington~` (there's no stop-word removal in fuzzy search, so "of" gets a graph).
The graph consists of up to 50 expansions, or permutations, of each term, capturing both correct and incorrect variants in the process. The engine then returns the topmost relevant matches in the response. For a term like "university", the graph might have "unversty, universty, university, universe, inverse". Any documents that match on those in the graph are included in results. In contrast with other queries that analyze the text to handle different forms of the same word ("mice" and "mouse"), the comparisons in a fuzzy query are taken at face value without any linguistic analysis on the text. "Universe" and "inverse", which are semantically different, will match because the syntactic discrepancies are small.
-A match succeeds if the discrepancies are limited to two or fewer edits, where an edit is an inserted, deleted, substituted, or transposed character. The string correction algorithm that specifies the differential is the [Damerau-Levenshtein distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) metric, described as the "minimum number of operations (insertions, deletions, substitutions, or transpositions of two adjacent characters) required to change one word into the other".
+A match succeeds if the discrepancies are limited to two or fewer edits, where an edit is an inserted, deleted, substituted, or transposed character. The string correction algorithm that specifies the differential is the [Damerau-Levenshtein distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) metric. It's described as the "minimum number of operations (insertions, deletions, substitutions, or transpositions of two adjacent characters) required to change one word into the other".
In Azure Cognitive Search:
In Azure Cognitive Search:
+ The default distance of an edit is 2. A value of `~0` signifies no expansion (only the exact term is considered a match), but you could specify `~1` for one degree of difference, or one edit.
-+ A fuzzy query can expand a term up to 50 additional permutations. This limit is not configurable, but you can effectively reduce the number of expansions by decreasing the edit distance to 1.
++ A fuzzy query can expand a term up to 50 permutations. This limit isn't configurable, but you can effectively reduce the number of expansions by decreasing the edit distance to 1.

+ Responses consist of documents containing a relevant match (up to 50).
+During query processing, fuzzy queries don't undergo [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis). The query input is added directly to the query tree and expanded to create a graph of terms. The only transformation performed is lower casing.
+
Collectively, the graphs are submitted as match criteria against tokens in the index. As you can imagine, fuzzy search is inherently slower than other query forms. The size and complexity of your index can determine whether the benefits are enough to offset the latency of the response.

> [!NOTE]
Collectively, the graphs are submitted as match criteria against tokens in the i
## Indexing for fuzzy search
-Analyzers are not used during query processing to create an expansion graph, but that doesn't mean analyzers should be ignored in fuzzy search scenarios. After all, analyzers are used during indexing to create tokens against which matching is done, whether the query is free form, filtered search, or a fuzzy search with a graph as input.
+Make sure the index includes text fields that are conducive to fuzzy search, such as names, categories, descriptions, or tags.
-Generally, when assigning analyzers on a per-field basis, the decision to fine-tune the analysis chain is based on the primary use case (a filter or full text search) rather than specialized query forms like fuzzy search. For this reason, there is not a specific analyzer recommendation for fuzzy search.
+Analyzers aren't used to create an expansion graph, but that doesn't mean analyzers should be ignored in fuzzy search scenarios. Analyzers are important for tokenization during indexing, where tokens are used for both full text search and for matching against the graph.
-However, if test queries are not producing the matches you expect, you could try varying the indexing analyzer, setting it to a [language analyzer](index-add-language-analyzers.md), to see if you get better results. Some languages, particularly those with vowel mutations, can benefit from the inflection and irregular word forms generated by the Microsoft natural language processors. In some cases, using the right language analyzer can make a difference in whether a term is tokenized in a way that is compatible with the value provided by the user.
+As always, if test queries aren't producing the matches you expect, you could try varying the indexing analyzer, setting it to a [language analyzer](index-add-language-analyzers.md), to see if you get better results. Some languages, particularly those with vowel mutations, can benefit from the inflection and irregular word forms generated by the Microsoft natural language processors. In some cases, using the right language analyzer can make a difference in whether a term is tokenized in a way that is compatible with the value provided by the user.
## How to use fuzzy search
-Fuzzy queries are constructed using the full Lucene query syntax, invoking the [Lucene query parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html).
+Fuzzy queries are constructed using the full Lucene query syntax, invoking the [full Lucene query parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html).
+
+```http
+POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+{
+ "search": "seatle~2",
+ "queryType": "full",
+ "searchMode": "any",
+ "searchFields": "HotelName, Address/City",
+ "select": "HotelName, Address/City,",
+ "count": "true"
+}
+```
-1. Set the full Lucene parser on the query (`queryType=full`).
+1. Set the query type to the full Lucene syntax (`queryType=full`).
1. Optionally, scope the request to specific fields, using this parameter (`searchFields=<field1,field2>`).
-1. Append the tilde (`~`) operator at the end of the whole term (`search=<string>~`).
+1. Provide the query string. An expansion graph will be created for every term in the query input. Append the tilde (`~`) operator at the end of each whole term (`search=<string>~`).
Include an optional parameter, a number between 0 and 2 (default), if you want to specify the edit distance (`~1`). For example, "blue~" or "blue~1" would return "blue", "blues", and "glue".
-In Azure Cognitive Search, besides the term and distance (maximum of 2), there are no additional parameters to set on the query.
-
-> [!NOTE]
-> During query processing, fuzzy queries do not undergo [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis). The query input is added directly to the query tree and expanded to create a graph of terms. The only transformation performed is lower casing.
+In Azure Cognitive Search, besides the term and distance (maximum of 2), there are no other parameters to set on the query.
## Testing fuzzy search
For simple testing, we recommend [Search explorer](search-explorer.md) or [Postm
When results are ambiguous, [hit highlighting](search-pagination-page-layout.md#hit-highlighting) can help you identify the match in the response.
-> [!Note]
-> The use of hit highlighting to identify fuzzy matches has limitations and only works for basic fuzzy search. If your index has scoring profiles, or if you layer the query with additional syntax, hit highlighting might fail to identify the match.
+> [!NOTE]
+> The use of hit highlighting to identify fuzzy matches has limitations and only works for basic fuzzy search. If your index has scoring profiles, or if you layer the query with more syntax, hit highlighting might fail to identify the match.
### Example 1: fuzzy search with the exact term
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Built-in roles include generally available and preview roles. If these roles are
| Role | Description and availability |
| ---- | ---------------------------- |
-| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. |
-| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. |
+| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default.</br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
+| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. </br></br> (Preview) This role has the same access as the Search Service Contributor role on the data plane. It includes access to all data plane actions except the ability to query the search index or index documents. |
| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: service name, resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage both the service and its content. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role does not give you access to query search indexes or index documents. This role is for search service administrators who need to manage the search service's indexes and other resources. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |

> [!NOTE]
-> Azure resources have the concept of [control plane and data plane](../azure-resource-manager/management/control-plane-and-data-plane.md) categories of operations. In Cognitive Search, "control plane" refers to any operation supported in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. The "data plane" refers to operations against the search service endpoint, such as indexing or queries, or any other operation specified in the [Search REST API](/rest/api/searchservice/) or equivalent client libraries. Most roles apply to just one plane. The exception is Search Service Contributor which supports actions across both.
+> Azure resources have the concept of [control plane and data plane](../azure-resource-manager/management/control-plane-and-data-plane.md) categories of operations. In Cognitive Search, "control plane" refers to any operation supported in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. The "data plane" refers to operations against the search service endpoint, such as indexing or queries, or any other operation specified in the [Search REST API](/rest/api/searchservice/) or equivalent client libraries.
<a name="preview-limitations"></a>
sentinel Anomalies Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/anomalies-reference.md
+
+ Title: Anomalies detected by the Microsoft Sentinel machine learning engine
+description: Learn about the anomalies detected by Microsoft Sentinel's machine learning engines.
+Last updated : 06/13/2022
+# Anomalies detected by the Microsoft Sentinel machine learning engine
+
+This article lists the anomalies that Microsoft Sentinel detects using different machine learning models.
+
+Anomaly detection works by analyzing the behavior of users in an environment over a period of time and constructing a baseline of legitimate activity. Once the baseline is established, any activity outside the normal parameters is considered anomalous and therefore suspicious.
+
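+As a toy illustration of the baseline idea (Microsoft Sentinel's actual models are considerably more sophisticated), the sketch below treats a user's daily activity counts as the baseline and flags a day that deviates sharply from it:
+
+```python
+# Toy illustration of baselining: flag a day whose activity count exceeds
+# the historical mean by more than k standard deviations. This is for
+# intuition only, not Sentinel's algorithm.
+from statistics import mean, stdev
+
+def is_anomalous(history: list[int], today: int, k: float = 3.0) -> bool:
+    """Flag `today` if it sits more than k standard deviations above history."""
+    if len(history) < 2:
+        return False  # not enough data to establish a baseline
+    return today > mean(history) + k * stdev(history)
+
+baseline = [12, 9, 15, 11, 10, 13, 12]   # daily sign-in counts for one user
+print(is_anomalous(baseline, 14))  # False: within normal parameters
+print(is_anomalous(baseline, 60))  # True: far outside the baseline
+```
+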
+Microsoft Sentinel uses two different models to create baselines and detect anomalies.
+
+- [UEBA anomalies](#ueba-anomalies)
+- [Machine learning-based anomalies](#machine-learning-based-anomalies)
+
+> [!NOTE]
+> Anomalies are in **PREVIEW**.
+
+## UEBA anomalies
+
+Microsoft Sentinel UEBA detects anomalies based on dynamic baselines created for each entity across various data inputs. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
+
+- [Anomalous Account Access Removal](#anomalous-account-access-removal)
+- [Anomalous Account Creation](#anomalous-account-creation)
+- [Anomalous Account Deletion](#anomalous-account-deletion)
+- [Anomalous Account Manipulation](#anomalous-account-manipulation)
+- [Anomalous Code Execution (UEBA)](#anomalous-code-execution-ueba)
+- [Anomalous Data Destruction](#anomalous-data-destruction)
+- [Anomalous Defensive Mechanism Modification](#anomalous-defensive-mechanism-modification)
+- [Anomalous Failed Sign-in](#anomalous-failed-sign-in)
+- [Anomalous Password Reset](#anomalous-password-reset)
+- [Anomalous Privilege Granted](#anomalous-privilege-granted)
+- [Anomalous Sign-in](#anomalous-sign-in)
+
+### Anomalous Account Access Removal
+
+**Description:** An attacker may interrupt the availability of system and network resources by blocking access to accounts used by legitimate users. The attacker might delete, lock, or manipulate an account (for example, by changing its credentials) to remove access to it.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Activity logs |
+| **MITRE ATT&CK tactics:** | Impact |
+| **MITRE ATT&CK techniques:** | T1531 - Account Access Removal |
+| **Activity:** | Microsoft.Authorization/roleAssignments/delete<br>Log Out |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Account Creation
+
+**Description:** Adversaries may create an account to maintain access to targeted systems. With a sufficient level of access, creating such accounts may be used to establish secondary credentialed access without requiring persistent remote access tools to be deployed on the system.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory audit logs |
+| **MITRE ATT&CK tactics:** | Persistence |
+| **MITRE ATT&CK techniques:** | T1136 - Create Account |
+| **MITRE ATT&CK sub-techniques:** | Cloud Account |
+| **Activity:** | Core Directory/UserManagement/Add user |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Account Deletion
+
+**Description:** Adversaries may interrupt the availability of system and network resources by inhibiting access to accounts used by legitimate users. Accounts may be deleted, locked, or manipulated (for example, by changing their credentials) to remove access.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory audit logs |
+| **MITRE ATT&CK tactics:** | Impact |
+| **MITRE ATT&CK techniques:** | T1531 - Account Access Removal |
+| **Activity:** | Core Directory/UserManagement/Delete user<br>Core Directory/Device/Delete user |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Account Manipulation
+
+**Description:** Adversaries may manipulate accounts to maintain access to target systems. These actions include adding new accounts to high-privileged groups. Dragonfly 2.0, for example, added newly created accounts to the administrators group to maintain elevated access. This detection surfaces all high-blast-radius users performing "Update user" (name change) to a privileged role, or users who changed other accounts for the first time.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory audit logs |
+| **MITRE ATT&CK tactics:** | Persistence |
+| **MITRE ATT&CK techniques:** | T1098 - Account Manipulation |
+| **Activity:** | Core Directory/UserManagement/Update user |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Code Execution (UEBA)
+
+**Description:** Adversaries may abuse command and script interpreters to execute commands, scripts, or binaries. These interfaces and languages provide ways of interacting with computer systems and are a common feature across many different platforms.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Activity logs |
+| **MITRE ATT&CK tactics:** | Execution |
+| **MITRE ATT&CK techniques:** | T1059 - Command and Scripting Interpreter |
+| **MITRE ATT&CK sub-techniques:** | PowerShell |
+| **Activity:** | Microsoft.Compute/virtualMachines/runCommand/action |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Data Destruction
+
+**Description:** Adversaries may destroy data and files on specific systems or in large numbers on a network to interrupt availability to systems, services, and network resources. Data destruction is likely to render stored data irrecoverable by forensic techniques through overwriting files or data on local and remote drives.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Activity logs |
+| **MITRE ATT&CK tactics:** | Impact |
+| **MITRE ATT&CK techniques:** | T1485 - Data Destruction |
+| **Activity:** | Microsoft.Compute/disks/delete<br>Microsoft.Compute/galleries/images/delete<br>Microsoft.Compute/hostGroups/delete<br>Microsoft.Compute/hostGroups/hosts/delete<br>Microsoft.Compute/images/delete<br>Microsoft.Compute/virtualMachines/delete<br>Microsoft.Compute/virtualMachineScaleSets/delete<br>Microsoft.Compute/virtualMachineScaleSets/virtualMachines/delete<br>Microsoft.Devices/digitalTwins/Delete<br>Microsoft.Devices/iotHubs/Delete<br>Microsoft.KeyVault/vaults/delete<br>Microsoft.Logic/integrationAccounts/delete  <br>Microsoft.Logic/integrationAccounts/maps/delete <br>Microsoft.Logic/integrationAccounts/schemas/delete <br>Microsoft.Logic/integrationAccounts/partners/delete <br>Microsoft.Logic/integrationServiceEnvironments/delete<br>Microsoft.Logic/workflows/delete<br>Microsoft.Resources/subscriptions/resourceGroups/delete<br>Microsoft.Sql/instancePools/delete<br>Microsoft.Sql/managedInstances/delete<br>Microsoft.Sql/managedInstances/administrators/delete<br>Microsoft.Sql/managedInstances/databases/delete<br>Microsoft.Storage/storageAccounts/delete<br>Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete<br>Microsoft.Storage/storageAccounts/fileServices/fileshares/files/delete<br>Microsoft.Storage/storageAccounts/blobServices/containers/delete<br>Microsoft.AAD/domainServices/delete |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Defensive Mechanism Modification
+
+**Description:** Adversaries may disable security tools to avoid possible detection of their tools and activities.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Activity logs |
+| **MITRE ATT&CK tactics:** | Defense Evasion |
+| **MITRE ATT&CK techniques:** | T1562 - Impair Defenses |
+| **MITRE ATT&CK sub-techniques:** | Disable or Modify Tools<br>Disable or Modify Cloud Firewall |
+| **Activity:** | Microsoft.Sql/managedInstances/databases/vulnerabilityAssessments/rules/baselines/delete<br>Microsoft.Sql/managedInstances/databases/vulnerabilityAssessments/delete<br>Microsoft.Network/networkSecurityGroups/securityRules/delete<br>Microsoft.Network/networkSecurityGroups/delete<br>Microsoft.Network/ddosProtectionPlans/delete<br>Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/delete<br>Microsoft.Network/applicationSecurityGroups/delete<br>Microsoft.Authorization/policyAssignments/delete<br>Microsoft.Sql/servers/firewallRules/delete<br>Microsoft.Network/firewallPolicies/delete<br>Microsoft.Network/azurefirewalls/delete |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Failed Sign-in
+
+**Description:** Adversaries with no prior knowledge of legitimate credentials within the system or environment may guess passwords to attempt access to accounts.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory sign-in logs<br>Windows Security logs |
+| **MITRE ATT&CK tactics:** | Credential Access |
+| **MITRE ATT&CK techniques:** | T1110 - Brute Force |
+| **Activity:** | **Azure AD:** Sign-in activity<br>**Windows Security:** Failed login (Event ID 4625) |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Password Reset
+
+**Description:** Adversaries may interrupt the availability of system and network resources by inhibiting access to accounts used by legitimate users. Accounts may be deleted, locked, or manipulated (for example, by changing their credentials) to remove access.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory audit logs |
+| **MITRE ATT&CK tactics:** | Impact |
+| **MITRE ATT&CK techniques:** | T1531 - Account Access Removal |
+| **Activity:** | Core Directory/UserManagement/User password reset |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Privilege Granted
+
+**Description:** Adversaries may add adversary-controlled credentials for Azure Service Principals in addition to existing legitimate credentials to maintain persistent access to victim Azure accounts.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory audit logs |
+| **MITRE ATT&CK tactics:** | Persistence |
+| **MITRE ATT&CK techniques:** | T1098 - Account Manipulation |
+| **MITRE ATT&CK sub-techniques:** | Additional Azure Service Principal Credentials |
+| **Activity:** | Account provisioning/Application Management/Add app role assignment to service principal |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+### Anomalous Sign-in
+
+**Description:** Adversaries may steal the credentials of a specific user or service account using Credential Access techniques, or may capture credentials earlier in their reconnaissance process through social engineering, as a means of gaining Persistence.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | UEBA |
+| **Data sources:** | Azure Active Directory sign-in logs<br>Windows Security logs |
+| **MITRE ATT&CK tactics:** | Persistence |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+| **Activity:** | **Azure AD:** Sign-in activity<br>**Windows Security:** Successful login (Event ID 4624) |
+
+[Back to UEBA anomalies list](#ueba-anomalies)
+
+## Machine learning-based anomalies
+
+Microsoft Sentinel's customizable, machine learning-based anomalies can identify anomalous behavior with analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve detections, investigations, and threat hunting.
+
+- [Anomalous Azure AD sign-in sessions](#anomalous-azure-ad-sign-in-sessions)
+- [Anomalous Azure operations](#anomalous-azure-operations)
+- [Anomalous Code Execution](#anomalous-code-execution)
+- [Anomalous local account creation](#anomalous-local-account-creation)
+- [Anomalous scanning activity](#anomalous-scanning-activity)
+- [Anomalous user activities in Office Exchange](#anomalous-user-activities-in-office-exchange)
+- [Anomalous user/app activities in Azure audit logs](#anomalous-userapp-activities-in-azure-audit-logs)
+- [Anomalous W3CIIS logs activity](#anomalous-w3ciis-logs-activity)
+- [Anomalous web request activity](#anomalous-web-request-activity)
+- [Attempted computer brute force](#attempted-computer-brute-force)
+- [Attempted user account brute force](#attempted-user-account-brute-force)
+- [Attempted user account brute force per login type](#attempted-user-account-brute-force-per-login-type)
+- [Attempted user account brute force per failure reason](#attempted-user-account-brute-force-per-failure-reason)
+- [Detect machine generated network beaconing behavior](#detect-machine-generated-network-beaconing-behavior)
+- [Domain generation algorithm (DGA) on DNS domains](#domain-generation-algorithm-dga-on-dns-domains)
+- [Domain Reputation Palo Alto anomaly](#domain-reputation-palo-alto-anomaly)
+- [Excessive data transfer anomaly](#excessive-data-transfer-anomaly)
+- [Excessive Downloads via Palo Alto GlobalProtect](#excessive-downloads-via-palo-alto-globalprotect)
+- [Excessive uploads via Palo Alto GlobalProtect](#excessive-uploads-via-palo-alto-globalprotect)
+- [Login from an unusual region via Palo Alto GlobalProtect account logins](#login-from-an-unusual-region-via-palo-alto-globalprotect-account-logins)
+- [Multi-region logins in a single day via Palo Alto GlobalProtect](#multi-region-logins-in-a-single-day-via-palo-alto-globalprotect)
+- [Potential data staging](#potential-data-staging)
+- [Potential domain generation algorithm (DGA) on next-level DNS Domains](#potential-domain-generation-algorithm-dga-on-next-level-dns-domains)
+- [Suspicious geography change in Palo Alto GlobalProtect account logins](#suspicious-geography-change-in-palo-alto-globalprotect-account-logins)
+- [Suspicious number of protected documents accessed](#suspicious-number-of-protected-documents-accessed)
+- [Suspicious volume of AWS API calls from Non-AWS source IP address](#suspicious-volume-of-aws-api-calls-from-non-aws-source-ip-address)
+- [Suspicious volume of AWS CloudTrail log events of group user account by EventTypeName](#suspicious-volume-of-aws-cloudtrail-log-events-of-group-user-account-by-eventtypename)
+- [Suspicious volume of AWS write API calls from a user account](#suspicious-volume-of-aws-write-api-calls-from-a-user-account)
+- [Suspicious volume of failed login attempts to AWS Console by each group user account](#suspicious-volume-of-failed-login-attempts-to-aws-console-by-each-group-user-account)
+- [Suspicious volume of failed login attempts to AWS Console by each source IP address](#suspicious-volume-of-failed-login-attempts-to-aws-console-by-each-source-ip-address)
+- [Suspicious volume of logins to computer](#suspicious-volume-of-logins-to-computer)
+- [Suspicious volume of logins to computer with elevated token](#suspicious-volume-of-logins-to-computer-with-elevated-token)
+- [Suspicious volume of logins to user account](#suspicious-volume-of-logins-to-user-account)
+- [Suspicious volume of logins to user account by logon types](#suspicious-volume-of-logins-to-user-account-by-logon-types)
+- [Suspicious volume of logins to user account with elevated token](#suspicious-volume-of-logins-to-user-account-with-elevated-token)
+- [Unusual external firewall alarm detected](#unusual-external-firewall-alarm-detected)
+- [Unusual mass downgrade AIP label](#unusual-mass-downgrade-aip-label)
+- [Unusual network communication on commonly used ports](#unusual-network-communication-on-commonly-used-ports)
+- [Unusual network volume anomaly](#unusual-network-volume-anomaly)
+- [Unusual web traffic detected with IP in URL path](#unusual-web-traffic-detected-with-ip-in-url-path)
+
+### Anomalous Azure AD sign-in sessions
+
+**Description:** The machine learning model groups the Azure AD sign-in logs on a per-user basis. The model is trained on the previous 6 days of user sign-in behavior. It indicates anomalous user sign-in sessions over the past day.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Azure Active Directory sign-in logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts<br>T1566 - Phishing<br>T1133 - External Remote Services |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous Azure operations
+
+**Description:** This detection algorithm collects 21 days' worth of data on Azure operations, grouped by user, to train this ML model. The algorithm then flags users who performed sequences of operations uncommon in their workspaces. The trained ML model scores the operations performed by the user and considers anomalous those whose score is greater than the defined threshold.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Azure Activity logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1190 - Exploit Public-Facing Application |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous Code Execution
+
+**Description:** Attackers may abuse command and script interpreters to execute commands, scripts, or binaries. These interfaces and languages provide ways of interacting with computer systems and are a common feature across many different platforms.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Azure Activity logs |
+| **MITRE ATT&CK tactics:** | Execution |
+| **MITRE ATT&CK techniques:** | T1059 - Command and Scripting Interpreter |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous local account creation
+
+**Description:** This algorithm detects anomalous local account creation on Windows systems. Attackers may create local accounts to maintain access to targeted systems. This algorithm analyzes local account creation activity by users over the prior 14 days. It looks for similar activity on the current day from users who were not previously seen in historical activity. You can specify an allowlist to keep known users from triggering this anomaly.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Persistence |
+| **MITRE ATT&CK techniques:** | T1136 - Create Account |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous scanning activity
+
+**Description:** This algorithm looks for port scanning activity, coming from a single source IP to one or more destination IPs, that is not normally seen in a given environment.
+
+The algorithm takes into account whether the IP is public/external or private/internal, and marks the event accordingly. Only private-to-public or public-to-private activity is considered at this time. Scanning activity can indicate an attacker attempting to determine which services are available in an environment and can potentially be exploited for ingress or lateral movement. A high number of source ports and destination ports from a single source IP to one or more destination IPs can indicate anomalous scanning, as can a high ratio of destination IPs to the single source IP.
+
+Configuration details:
+
+- Job run default is daily, with hourly bins.
+
+The algorithm uses the following configurable defaults to limit the results based on hourly bins (a rough sketch applying them follows the list):
+- Included device actions - accept, allow, start
+- Excluded ports - 53, 67, 80, 8080, 123, 137, 138, 443, 445, 3389
+- Distinct destination port count >= 600
+- Distinct source port count >= 600
+- Distinct source port count divided by distinct destination port, ratio converted to percent >= 99.99
+- Source IP (always 1) divided by destination IP, ratio converted to percent >= 99.99
+
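+The sketch below is a rough illustration of how these defaults could be applied to one hourly bin of flow records for a single source IP; the field names (`action`, `src_port`, `dst_port`, `dst_ip`) are assumptions for illustration, not the actual CommonSecurityLog schema:
+
+```python
+# Illustrative application of the documented defaults to one hourly bin of
+# flow records for a single source IP. Field names are hypothetical.
+EXCLUDED_PORTS = {53, 67, 80, 8080, 123, 137, 138, 443, 445, 3389}
+INCLUDED_ACTIONS = {"accept", "allow", "start"}
+
+def looks_like_scanning(flows: list[dict]) -> bool:
+    flows = [f for f in flows
+             if f["action"] in INCLUDED_ACTIONS
+             and f["dst_port"] not in EXCLUDED_PORTS]
+    if not flows:
+        return False
+    dst_ports = {f["dst_port"] for f in flows}
+    src_ports = {f["src_port"] for f in flows}
+    dst_ips = {f["dst_ip"] for f in flows}
+    # Defaults from the list above: both distinct port counts >= 600 and
+    # both ratios, converted to percentages, >= 99.99.
+    port_ratio_pct = len(src_ports) / len(dst_ports) * 100
+    ip_ratio_pct = 1 / len(dst_ips) * 100  # source IP count is always 1
+    return (len(dst_ports) >= 600 and len(src_ports) >= 600
+            and port_ratio_pct >= 99.99 and ip_ratio_pct >= 99.99)
+```
+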
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN, Zscaler, CEF, CheckPoint, Fortinet) |
+| **MITRE ATT&CK tactics:** | Discovery |
+| **MITRE ATT&CK techniques:** | T1046 - Network Service Scanning |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous user activities in Office Exchange
+
+**Description:** This machine learning model groups the Office Exchange logs on a per-user basis into hourly buckets. We define one hour as a session. The model is trained on the previous 7 days of behavior across all regular (non-admin) users. It indicates anomalous user Office Exchange sessions in the last day.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Office Activity log (Exchange) |
+| **MITRE ATT&CK tactics:** | Persistence<br>Collection |
+| **MITRE ATT&CK techniques:** | **Collection:**<br>T1114 - Email Collection<br>T1213 - Data from Information Repositories<br><br>**Persistence:**<br>T1098 - Account Manipulation<br>T1136 - Create Account<br>T1137 - Office Application Startup<br>T1505 - Server Software Component |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous user/app activities in Azure audit logs
+
+**Description:** This algorithm identifies anomalous user/app Azure sessions in audit logs for the last day, based on the behavior of the previous 21 days across all users and apps. The algorithm checks for sufficient volume of data before training the model.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Azure Active Directory audit logs |
+| **MITRE ATT&CK tactics:** | Collection<br>Discovery<br>Initial Access<br>Persistence<br>Privilege Escalation |
+| **MITRE ATT&CK techniques:** | **Collection:**<br>T1530 - Data from Cloud Storage Object<br><br>**Discovery:**<br>T1087 - Account Discovery<br>T1538 - Cloud Service Dashboard<br>T1526 - Cloud Service Discovery<br>T1069 - Permission Groups Discovery<br>T1518 - Software Discovery<br><br>**Initial Access:**<br>T1190 - Exploit Public-Facing Application<br>T1078 - Valid Accounts<br><br>**Persistence:**<br>T1098 - Account Manipulation<br>T1136 - Create Account<br>T1078 - Valid Accounts<br><br>**Privilege Escalation:**<br>T1484 - Domain Policy Modification<br>T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous W3CIIS logs activity
+
+**Description:** This machine learning algorithm indicates anomalous IIS sessions over the past day. It will capture, for example, an unusually high number of distinct URI queries, user agents, or logs in a session, or of specific HTTP verbs or HTTP statuses in a session. The algorithm identifies unusual W3CIISLog events within an hourly session, grouped by site name and client IP. The model is trained on the previous 7 days of IIS activity. The algorithm checks for sufficient volume of IIS activity before training the model.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | W3CIIS logs |
+| **MITRE ATT&CK tactics:** | Initial Access<br>Persistence |
+| **MITRE ATT&CK techniques:** | **Initial Access:**<br>T1190 - Exploit Public-Facing Application<br><br>**Persistence:**<br>T1505 - Server Software Component |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Anomalous web request activity
+
+**Description:** This algorithm groups W3CIISLog events into hourly sessions grouped by site name and URI stem. The machine learning model identifies sessions with unusually high numbers of requests that triggered 5xx-class response codes in the last day. 5xx-class codes are an indication that some application instability or error condition has been triggered by the request. They can be an indication that an attacker is probing the URI stem for vulnerabilities and configuration issues, performing some exploitation activity such as SQL injection, or leveraging an unpatched vulnerability. This algorithm uses 6 days of data for training.
+
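+As a rough sketch of the sessionization described above (the field names are illustrative, not the actual W3CIISLog schema), requests can be grouped into hourly buckets per site and URI stem and the 5xx responses counted per bucket:
+
+```python
+# Hedged sketch: group requests into hourly sessions keyed by
+# (site, URI stem, hour) and count 5xx-class responses per session.
+from collections import defaultdict
+from datetime import datetime
+
+def five_xx_by_session(requests: list[dict]) -> dict:
+    """`requests`: dicts with 'site', 'uri_stem', 'time' (datetime), 'status'."""
+    counts: dict = defaultdict(int)
+    for r in requests:
+        hour = r["time"].replace(minute=0, second=0, microsecond=0)
+        if 500 <= r["status"] <= 599:
+            counts[(r["site"], r["uri_stem"], hour)] += 1
+    return dict(counts)
+
+reqs = [{"site": "contoso", "uri_stem": "/api", "status": 503,
+         "time": datetime(2022, 6, 1, 10, 15)}]
+print(five_xx_by_session(reqs))
+```
+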
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | W3CIIS logs |
+| **MITRE ATT&CK tactics:** | Initial Access<br>Persistence |
+| **MITRE ATT&CK techniques:** | **Initial Access:**<br>T1190 - Exploit Public-Facing Application<br><br>**Persistence:**<br>T1505 - Server Software Component |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Attempted computer brute force
+
+**Description:** This algorithm detects an unusually high volume of failed login attempts (security event ID 4625) per computer over the past day. The model is trained on the previous 21 days of Windows security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Credential Access |
+| **MITRE ATT&CK techniques:** | T1110 - Brute Force |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Attempted user account brute force
+
+**Description:** This algorithm detects an unusually high volume of failed login attempts (security event ID 4625) per user account over the past day. The model is trained on the previous 21 days of Windows security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Credential Access |
+| **MITRE ATT&CK techniques:** | T1110 - Brute Force |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Attempted user account brute force per login type
+
+**Description:** This algorithm detects an unusually high volume of failed login attempts (security event ID 4625) per user account per logon type over the past day. The model is trained on the previous 21 days of Windows security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Credential Access |
+| **MITRE ATT&CK techniques:** | T1110 - Brute Force |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Attempted user account brute force per failure reason
+
+**Description:** This algorithm detects an unusually high volume of failed login attempts (security event ID 4625) per user account per failure reason over the past day. The model is trained on the previous 21 days of Windows security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Credential Access |
+| **MITRE ATT&CK techniques:** | T1110 - Brute Force |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Detect machine generated network beaconing behavior
+
+**Description:** This algorithm identifies beaconing patterns from network traffic connection logs based on recurrent time-delta patterns. Any network connection towards untrusted public networks at repetitive time deltas is an indication of malware callbacks or data exfiltration attempts. The algorithm calculates the time delta between consecutive network connections between the same source IP and destination IP, as well as the number of connections in a time-delta sequence between the same sources and destinations. The beaconing percentage is calculated as the number of connections in time-delta sequences divided by the total connections in a day, as illustrated in the sketch below.
+
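+The sketch below illustrates the time-delta idea under simplifying assumptions: `timestamps` holds sorted connection times in seconds for one source/destination IP pair, and deltas within a small tolerance count toward a beaconing sequence. It is an illustration, not the production algorithm:
+
+```python
+# Hedged sketch: percentage of connections that fall in runs of
+# near-constant time deltas for one (source IP, destination IP) pair.
+def beaconing_percentage(timestamps: list[float], tolerance: float = 1.0,
+                         min_deltas: int = 2) -> float:
+    if len(timestamps) < 3:
+        return 0.0
+    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
+    in_sequence, run = 0, 1  # `run` counts deltas in the current constant run
+    for prev, cur in zip(deltas, deltas[1:]):
+        if abs(cur - prev) <= tolerance:
+            run += 1
+        else:
+            if run >= min_deltas:
+                in_sequence += run + 1  # a run of n deltas spans n+1 connections
+            run = 1
+    if run >= min_deltas:
+        in_sequence += run + 1
+    return 100.0 * in_sequence / len(timestamps)
+
+# Connections every ~60 seconds score high; irregular gaps score low.
+print(beaconing_percentage([0, 60, 120, 180, 240, 500]))  # ~83%
+```
+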
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN) |
+| **MITRE ATT&CK tactics:** | Command and Control |
+| **MITRE ATT&CK techniques:** | T1071 - Application Layer Protocol<br>T1132 - Data Encoding<br>T1001 - Data Obfuscation<br>T1568 - Dynamic Resolution<br>T1573 - Encrypted Channel<br>T1008 - Fallback Channels<br>T1104 - Multi-Stage Channels<br>T1095 - Non-Application Layer Protocol<br>T1571 - Non-Standard Port<br>T1572 - Protocol Tunneling<br>T1090 - Proxy<br>T1205 - Traffic Signaling<br>T1102 - Web Service |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Domain generation algorithm (DGA) on DNS domains
+
+**Description:** This machine learning model indicates potential DGA domains from the past day in the DNS logs. The algorithm applies to DNS records that resolve to IPv4 and IPv6 addresses.
+
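+The model itself isn't published, but a common intuition is that algorithmically generated labels have higher character entropy than human-chosen names. The heuristic below is for building intuition only; it is not the model Sentinel uses:
+
+```python
+# Toy heuristic: Shannon entropy of a domain label. DGA-generated labels
+# tend to score higher than natural-language names. Not Sentinel's model.
+import math
+from collections import Counter
+
+def shannon_entropy(label: str) -> float:
+    counts = Counter(label)
+    total = len(label)
+    return -sum((n / total) * math.log2(n / total) for n in counts.values())
+
+print(shannon_entropy("microsoft"))         # lower: a natural word
+print(shannon_entropy("xkqwzrtbvnplsdjf"))  # higher: DGA-like randomness
+```
+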
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | DNS Events |
+| **MITRE ATT&CK tactics:** | Command and Control |
+| **MITRE ATT&CK techniques:** | T1568 - Dynamic Resolution |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Domain Reputation Palo Alto anomaly
+
+**Description:** This algorithm evaluates the reputation for all domains seen specifically in Palo Alto firewall (PAN-OS product) logs. A high anomaly score indicates a low reputation, suggesting that the domain has been observed to host malicious content or is likely to do so.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN) |
+| **MITRE ATT&CK tactics:** | Command and Control |
+| **MITRE ATT&CK techniques:** | T1568 - Dynamic Resolution |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Excessive data transfer anomaly
+
+**Description:** This algorithm detects unusually high data transfer observed in network logs. It uses time series analysis to decompose the data into seasonal, trend, and residual components to calculate a baseline. Any sudden large deviation from the historical baseline is considered anomalous activity.
+
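+The decompose-and-threshold technique can be sketched with `statsmodels`, assuming an hourly pandas Series of bytes transferred; Sentinel's actual implementation details are not published:
+
+```python
+# Hedged sketch: seasonal-trend decomposition, then flag hours whose
+# residual deviates strongly. Requires numpy, pandas, and statsmodels.
+import numpy as np
+import pandas as pd
+from statsmodels.tsa.seasonal import seasonal_decompose
+
+rng = np.random.default_rng(0)
+idx = pd.date_range("2022-06-01", periods=24 * 14, freq="H")
+traffic = pd.Series(
+    1_000 + 200 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
+    + rng.normal(0, 20, len(idx)), index=idx)
+traffic.iloc[100] += 5_000  # inject a sudden large transfer
+
+parts = seasonal_decompose(traffic, model="additive", period=24)
+resid = parts.resid.dropna()
+anomalies = resid[resid.abs() > 3 * resid.std()]
+print(anomalies)  # the injected spike dominates the residual component
+```
+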
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN, Zscaler, CEF, CheckPoint, Fortinet) |
+| **MITRE ATT&CK tactics:** | Exfiltration |
+| **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits<br>T1041 - Exfiltration Over C2 Channel<br>T1011 - Exfiltration Over Other Network Medium<br>T1567 - Exfiltration Over Web Service<br>T1029 - Scheduled Transfer<br>T1537 - Transfer Data to Cloud Account |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Excessive Downloads via Palo Alto GlobalProtect
+
+**Description:** This algorithm detects an unusually high volume of downloads per user account through the Palo Alto VPN solution. The model is trained on the previous 14 days of VPN logs. It indicates an anomalously high volume of downloads in the past day.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN VPN) |
+| **MITRE ATT&CK tactics:** | Exfiltration |
+| **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits<br>T1041 - Exfiltration Over C2 Channel<br>T1011 - Exfiltration Over Other Network Medium<br>T1567 - Exfiltration Over Web Service<br>T1029 - Scheduled Transfer<br>T1537 - Transfer Data to Cloud Account |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Excessive uploads via Palo Alto GlobalProtect
+
+**Description:** This algorithm detects an unusually high volume of uploads per user account through the Palo Alto VPN solution. The model is trained on the previous 14 days of VPN logs. It indicates an anomalously high volume of uploads in the past day.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN VPN) |
+| **MITRE ATT&CK tactics:** | Exfiltration |
+| **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits<br>T1041 - Exfiltration Over C2 Channel<br>T1011 - Exfiltration Over Other Network Medium<br>T1567 - Exfiltration Over Web Service<br>T1029 - Scheduled Transfer<br>T1537 - Transfer Data to Cloud Account |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Login from an unusual region via Palo Alto GlobalProtect account logins
+
+**Description:** When a Palo Alto GlobalProtect account signs in from a source region that has rarely been signed in from during the last 14 days, an anomaly is triggered. This anomaly may indicate that the account has been compromised.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN VPN) |
+| **MITRE ATT&CK tactics:** | Credential Access<br>Initial Access<br>Lateral Movement |
+| **MITRE ATT&CK techniques:** | T1133 - External Remote Services |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Multi-region logins in a single day via Palo Alto GlobalProtect
+
+**Description:** This algorithm detects a user account which had sign-ins from multiple non-adjacent regions in a single day through a Palo Alto VPN.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN VPN) |
+| **MITRE ATT&CK tactics:** | Defense Evasion<br>Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Potential data staging
+
+**Description:** This algorithm compares the downloads of distinct files on a per-user basis from the previous week with the downloads for the current day for each user, and an anomaly is triggered when the number of downloads of distinct files exceeds the configured number of standard deviations above the mean. Currently the algorithm only analyzes files commonly seen during exfiltration of documents, images, videos and archives with the extensions `doc`, `docx`, `xls`, `xlsx`, `xlsm`, `ppt`, `pptx`, `one`, `pdf`, `zip`, `rar`, `bmp`, `jpg`, `mp3`, `mp4`, and `mov`.
+
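+The extension filter and per-user comparison might look like the sketch below; the helper names and inputs are hypothetical, and the standard-deviation multiplier stands in for the configurable threshold:
+
+```python
+# Hedged sketch: count distinct downloaded files with monitored extensions,
+# then compare today's count against last week's baseline. Names are
+# hypothetical; the multiplier k models the configurable threshold.
+from statistics import mean, stdev
+
+MONITORED_EXTS = {"doc", "docx", "xls", "xlsx", "xlsm", "ppt", "pptx", "one",
+                  "pdf", "zip", "rar", "bmp", "jpg", "mp3", "mp4", "mov"}
+
+def distinct_monitored(files: list[str]) -> int:
+    """Distinct files whose extension is commonly seen during exfiltration."""
+    return len({f for f in files
+                if f.rsplit(".", 1)[-1].lower() in MONITORED_EXTS})
+
+def staging_anomaly(prev_week: list[int], today: int, k: float = 3.0) -> bool:
+    if len(prev_week) < 2:
+        return False
+    return today > mean(prev_week) + k * stdev(prev_week)
+
+today = distinct_monitored(["q3.xlsx", "plan.docx", "all.zip", "notes.txt"])
+print(today, staging_anomaly([3, 5, 4, 6, 2, 5, 4], 40))
+```
+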
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Office Activity log (Exchange) |
+| **MITRE ATT&CK tactics:** | Collection |
+| **MITRE ATT&CK techniques:** | T1074 - Data Staged |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Potential domain generation algorithm (DGA) on next-level DNS Domains
+
+**Description:** This machine learning model flags unusual next-level domains (third-level and up) among the domain names seen in the last day of DNS logs. These domains could potentially be the output of a domain generation algorithm (DGA). The anomaly applies to DNS records that resolve to IPv4 and IPv6 addresses.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | DNS Events |
+| **MITRE ATT&CK tactics:** | Command and Control |
+| **MITRE ATT&CK techniques:** | T1568 - Dynamic Resolution |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious geography change in Palo Alto GlobalProtect account logins
+
+**Description:** A match indicates that a user logged in remotely from a country that is different from the country of the user's last remote login. This rule might also indicate an account compromise, particularly if the rule matches occur close together in time. This scenario includes impossible travel.
+
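+The impossible-travel intuition can be illustrated with a great-circle speed check. The sketch below is a concept illustration, not Sentinel's algorithm, and the speed threshold is an assumption:
+
+```python
+# Illustrative impossible-travel check: flag two logins whose great-circle
+# distance implies a speed beyond airliner range. Threshold is an assumption.
+import math
+
+def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
+    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
+    h = (math.sin((lat2 - lat1) / 2) ** 2
+         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
+    return 2 * 6371 * math.asin(math.sqrt(h))
+
+def impossible_travel(loc1, loc2, hours_apart: float,
+                      max_kmh: float = 900.0) -> bool:
+    return haversine_km(loc1, loc2) / max(hours_apart, 1e-9) > max_kmh
+
+# Seattle to Sydney two hours apart is not plausible travel.
+print(impossible_travel((47.6, -122.3), (-33.9, 151.2), hours_apart=2))
+```
+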
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN VPN) |
+| **MITRE ATT&CK tactics:** | Initial Access<br>Credential Access |
+| **MITRE ATT&CK techniques:** | T1133 - External Remote Services<br>T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious number of protected documents accessed
+
+**Description:** This algorithm detects a high volume of access to protected documents in Azure Information Protection (AIP) logs. It considers AIP workload records for a given number of days and determines whether the user performed unusual access to protected documents in a day, given their historical behavior.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Azure Information Protection logs |
+| **MITRE ATT&CK tactics:** | Collection |
+| **MITRE ATT&CK techniques:** | T1530 - Data from Cloud Storage Object<br>T1213 - Data from Information Repositories<br>T1005 - Data from Local System<br>T1039 - Data from Network Shared Drive<br>T1114 - Email Collection |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of AWS API calls from Non-AWS source IP address
+
+**Description:** This algorithm detects an unusually high volume of AWS API calls per user account per workspace, from source IP addresses outside of AWS's source IP ranges, within the last day. The model is trained on the previous 21 days of AWS CloudTrail log events by source IP address. This activity may indicate that the user account is compromised.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | AWS CloudTrail logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of AWS CloudTrail log events of group user account by EventTypeName
+
+**Description:** This algorithm detects an unusually high volume of events per group user account, by different event types (AwsApiCall, AwsServiceEvent, AwsConsoleSignIn, AwsConsoleAction), in your AWS CloudTrail log within the last day. The model is trained on the previous 21 days of AWS CloudTrail log events by group user account. This activity may indicate that the account is compromised.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | AWS CloudTrail logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of AWS write API calls from a user account
+
+**Description:** This algorithm detects an unusually high volume of AWS write API calls per user account within the last day. The model is trained on the previous 21 days of AWS CloudTrail log events by user account. This activity may indicate that the account is compromised.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | AWS CloudTrail logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of failed login attempts to AWS Console by each group user account
+
+**Description:** This algorithm detects an unusually high volume of failed login attempts to AWS Console per group user account in your AWS CloudTrail log within the last day. The model is trained on the previous 21 days of AWS CloudTrail log events by group user account. This activity may indicate that the account is compromised.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | AWS CloudTrail logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of failed login attempts to AWS Console by each source IP address
+
+**Description:** This algorithm detects an unusually high volume of failed login events to AWS Console per source IP address in your AWS CloudTrail log within the last day. The model is trained on the previous 21 days of AWS CloudTrail log events by source IP address. This activity may indicate that the IP address is compromised.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | AWS CloudTrail logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of logins to computer
+
+**Description:** This algorithm detects an unusually high volume of successful logins (security event ID 4624) per computer over the past day. The model is trained on the previous 21 days of Windows Security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of logins to computer with elevated token
+
+**Description:** This algorithm detects an unusually high volume of successful logins (security event ID 4624) with administrative privileges, per computer, over the last day. The model is trained on the previous 21 days of Windows Security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of logins to user account
+
+**Description:** This algorithm detects an unusually high volume of successful logins (security event ID 4624) per user account over the past day. The model is trained on the previous 21 days of Windows Security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of logins to user account by logon types
+
+**Description:** This algorithm detects an unusually high volume of successful logins (security event ID 4624) per user account, by different logon types, over the past day. The model is trained on the previous 21 days of Windows Security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Suspicious volume of logins to user account with elevated token
+
+**Description:** This algorithm detects an unusually high volume of successful logins (security event ID 4624) with administrative privileges, per user account, over the last day. The model is trained on the previous 21 days of Windows Security event logs.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Windows Security logs |
+| **MITRE ATT&CK tactics:** | Initial Access |
+| **MITRE ATT&CK techniques:** | T1078 - Valid Accounts |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Unusual external firewall alarm detected
+
+**Description:** This algorithm identifies unusual external firewall alarms, which are threat signatures released by a firewall vendor. It uses the last 7 days' activity to calculate the 10 most-triggered signatures and the 10 hosts that triggered the most signatures. After excluding both types of noisy events, it triggers an anomaly only after exceeding the threshold for the number of signatures triggered in a single day.
+
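+The noise-exclusion step can be sketched as follows, with illustrative record fields (`signature`, `host`) rather than the actual log schema:
+
+```python
+# Hedged sketch of the exclusion step: drop the 10 most-triggered signatures
+# and the 10 noisiest hosts from the 7-day window before thresholding.
+from collections import Counter
+
+def filtered_alarms(alarms: list[dict], top_n: int = 10) -> list[dict]:
+    noisy_sigs = {s for s, _ in
+                  Counter(a["signature"] for a in alarms).most_common(top_n)}
+    noisy_hosts = {h for h, _ in
+                   Counter(a["host"] for a in alarms).most_common(top_n)}
+    return [a for a in alarms
+            if a["signature"] not in noisy_sigs
+            and a["host"] not in noisy_hosts]
+```
+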
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN) |
+| **MITRE ATT&CK tactics:** | Discovery<br>Command and Control |
+| **MITRE ATT&CK techniques:** | **Discovery:**<br>T1046 - Network Service Scanning<br>T1135 - Network Share Discovery<br><br>**Command and Control:**<br>T1071 - Application Layer Protocol<br>T1095 - Non-Application Layer Protocol<br>T1571 - Non-Standard Port |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Unusual mass downgrade AIP label
+
+**Description:** This algorithm detects an unusually high volume of label-downgrade activity in Azure Information Protection (AIP) logs. It considers AIP workload records for a given number of days and examines the sequence of activity performed on documents, together with the labels applied, to classify unusual volumes of downgrade activity.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | Azure Information Protection logs |
+| **MITRE ATT&CK tactics:** | Collection |
+| **MITRE ATT&CK techniques:** | T1530 - Data from Cloud Storage Object<br>T1213 - Data from Information Repositories<br>T1005 - Data from Local System<br>T1039 - Data from Network Shared Drive<br>T1114 - Email Collection |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Unusual network communication on commonly used ports
+
+**Description:** This algorithm identifies unusual network communication on commonly used ports (22, 53, 80, 443, 8080, 8888), comparing daily traffic to a baseline from the previous 7 days. Daily traffic is compared to the mean and standard deviation of several network traffic attributes calculated over the baseline period. The traffic attributes considered are daily total events, daily data transfer, and the number of distinct source IP addresses per port. An anomaly is triggered when the daily values are greater than the configured number of standard deviations above the mean.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN, Zscaler, CheckPoint, Fortinet) |
+| **MITRE ATT&CK tactics:** | Command and Control<br>Exfiltration |
+| **MITRE ATT&CK techniques:** | **Command and Control:**<br>T1071 - Application Layer Protocol<br><br>**Exfiltration:**<br>T1030 - Data Transfer Size Limits |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Unusual network volume anomaly
+
+**Description:** This algorithm detects an unusually high volume of connections in network logs. It uses time series analysis to decompose the data into seasonal, trend, and residual components to calculate a baseline. Any sudden large deviation from the historical baseline is considered anomalous activity.
+
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN, Zscaler, CEF, CheckPoint, Fortinet) |
+| **MITRE ATT&CK tactics:** | Exfiltration |
+| **MITRE ATT&CK techniques:** | T1030 - Data Transfer Size Limits |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+### Unusual web traffic detected with IP in URL path
+
+**Description:** This algorithm identifies unusual web requests listing an IP address as the host. The algorithm finds all web requests with IP addresses in the URL path and compares them with the previous week of data to exclude known benign traffic. After excluding known benign traffic, it triggers an anomaly only after certain configured thresholds are exceeded, such as total web requests, the number of URLs seen with the same host destination IP address, and the number of distinct source IPs within the set of URLs with the same destination IP address. This type of request can indicate an attempt to bypass URL reputation services for malicious purposes.
+
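+The first step named above, finding requests whose URL host is a raw IP address, can be sketched as follows; the regex is IPv4-only and for illustration:
+
+```python
+# Hedged sketch: detect web requests whose URL host is a literal IPv4
+# address, the pattern this anomaly keys on. IPv4-only for brevity.
+import re
+from urllib.parse import urlparse
+
+IPV4 = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")
+
+def has_ip_host(url: str) -> bool:
+    host = urlparse(url).hostname or ""
+    return bool(IPV4.match(host))
+
+print(has_ip_host("http://203.0.113.7/update.exe"))  # True
+print(has_ip_host("https://www.contoso.com/page"))   # False
+```
+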
+| Attribute | Value |
+| -- | -- |
+| **Anomaly type:** | Customizable machine learning |
+| **Data sources:** | CommonSecurityLog (PAN, Zscaler, CheckPoint, Fortinet) |
+| **MITRE ATT&CK tactics:** | Command and Control<br>Initial Access |
+| **MITRE ATT&CK techniques:** | **Command and Control:**<br>T1071 - Application Layer Protocol<br><br>**Initial Access:**<br>T1189 - Drive-by Compromise |
+
+[Back to Machine learning-based anomalies list](#machine-learning-based-anomalies)
+
+## Next steps
+
+- Learn about [machine learning-generated anomalies](soc-ml-anomalies.md) in Microsoft Sentinel.
+
+- Learn how to [work with anomaly rules](work-with-anomaly-rules.md).
+
+- [Investigate incidents](investigate-cases.md) with Microsoft Sentinel.
sentinel Extend Sentinel Across Workspaces Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md
Title: Extend Microsoft Sentinel across workspaces and tenants | Microsoft Docs
description: How to use Microsoft Sentinel to query and analyze data across workspaces and tenants. Previously updated : 11/09/2021 Last updated : 05/03/2022
To address this requirement, Microsoft Sentinel offers multiple-workspace capabi
This model offers significant advantages over a fully centralized model in which all data is copied to a single workspace:
-- Flexible role assignment to the global and local SOCs, or to the MSSP its customers.
+- Flexible role assignment to the global and local SOCs, or to the MSSP and its customers.
- Fewer challenges regarding data ownership, data privacy and regulatory compliance.
sentinel Manage Analytics Rule Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/manage-analytics-rule-templates.md
With the implementation of template version control, you can see and track the v
:::image type="content" source="media/manage-analytics-rule-templates/see-template-versions.png" alt-text="Screenshot of details pane. Scroll down to see template version numbers." border="false":::

The number is in a "1.0.0" format – major version, minor version, and build.
- (For the time being, the build number is not in use and will always be 0.)
- A difference in the *major version* number indicates that something essential in the template was changed, which could affect how the rule detects threats or even its ability to function altogether. This is a change you will want to include in your rules.
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
This integration gives Microsoft 365 security incidents the visibility to be man
- **Microsoft Defender for Office 365** (formerly Office 365 ATP) - **Microsoft Defender for Cloud Apps** (formerly Microsoft Cloud App Security)
-In addition to collecting alerts from these components, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
+Other services whose alerts are collected by Microsoft 365 Defender include:
+
+- **Microsoft Purview Data Loss Prevention (DLP)** ([Learn more](/microsoft-365/security/defender/investigate-dlp))
+
+In addition to collecting alerts from these components and other services, Microsoft 365 Defender generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
### Common use cases and scenarios
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Source** | **Built-in parsers** | **Workspace deployed parsers** | | | | |
+| **AppGate SDP** IP connection logs collected using Syslog |`_ASim_NetworkSession_AppGateSDP` (regular)<br> `_Im_NetworkSession_AppGateSDP` (filtering) | `ASimNetworkSessionAppGateSDP` (regular)<br> `vimNetworkSessionAppGateSDP` (filtering) |
| **AWS VPC logs** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) | | **Azure Firewall logs** |`_ASim_NetworkSession_AzureFirewall` (regular)<br> `_Im_NetworkSession_AzureFirewall` (filtering) | `ASimNetworkSessionAzureFirewall` (regular)<br> `vimNetworkSessionAzureFirewall` (filtering) | | **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
Microsoft Sentinel provides the following out-of-the-box, product-specific Web S
| **Source** | **Built-in parsers** | **Workspace deployed parsers** | | | | | |**Squid Proxy** | `_ASim_WebSession_SquidProxy` (regular) <br> `_Im_WebSession_SquidProxy` (filtering) <br><br> | `ASimWebSessionSquidProxy` (regular) <br>`vimWebSessionSquidProxy` (filtering) <br><br> |
+| **Vectra AI Streams** |`_ASim_WebSession_VectraAI` (regular)<br> `_Im_WebSession_VectraAI` (filtering) | `ASimWebSessionVectraAI` (regular)<br> `vimWebSessionVectraAI` (filtering) |
| **Zscaler ZIA** |`_ASim_WebSessionZscalerZIA` (regular)<br> `_Im_WebSessionZscalerZIA` (filtering) | `ASimWebSessionZscalerZIA` (regular)<br> `vimWebSessionZscalerZIA` (filtering) |
sentinel Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resources.md
We love hearing from our users.
In the TechCommunity space for Microsoft Sentinel: -- [View and comment on recent blog posts](https://techcommunity.microsoft.com/t5/Azure-Sentinel/bg-p/AzureSentinelBlog)-- [Post your own questions about Microsoft Sentinel](https://techcommunity.microsoft.com/t5/Azure-Sentinel/bd-p/AzureSentinel)
+- [View and comment on recent blog posts](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog)
+- [Post your own questions about Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel)
You can also send suggestions for improvements via our [User Voice](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8) program.
Download sample content from the private community GitHub repository to create c
> [Get certified!](/learn/paths/security-ops-sentinel/) > [!div class="nextstepaction"]
-> [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
+> [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
sentinel Soc Ml Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-ml-anomalies.md
With attackers and defenders constantly fighting for advantage in the cybersecur
Anomalies can be powerful tools, but they are notoriously noisy. They typically require a lot of tedious tuning for specific environments or complex post-processing. Microsoft Sentinel customizable anomaly templates are tuned by our data science team to provide out-of-the-box value, but should you need to tune them further, the process is simple and requires no knowledge of machine learning. The thresholds and parameters for many of the anomalies can be configured and fine-tuned through the already familiar analytics rule user interface. The performance of the original threshold and parameters can be compared to the new ones within the interface and further tuned as necessary during a testing, or flighting, phase. Once the anomaly meets the performance objectives, the anomaly with the new threshold or parameters can be promoted to production with the click of a button. Microsoft Sentinel customizable anomalies enable you to get the benefit of anomalies without the hard work.
+## UEBA anomalies
+
+Some of the anomalies detected by Microsoft Sentinel come from its [User and Entity Behavior Analytics (UEBA) engine](identify-threats-with-entity-behavior-analytics.md), which detects anomalies based on dynamic baselines created for each entity across various data inputs. Each entity's baseline behavior is set according to its own historical activities, those of its peers, and those of the organization as a whole. Anomalies can be triggered by the correlation of different attributes such as action type, geo-location, device, resource, ISP, and more.
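
For example, here's a minimal KQL sketch for reviewing the scored behavior data behind these baselines. It assumes UEBA is enabled and writing to the `BehaviorAnalytics` table; the seven-day window and the priority threshold of 5 are arbitrary illustrative values.

```kusto
// Minimal sketch: surface recent activities that UEBA scored as unusual
// relative to the user's own, peers', and organization-wide baselines.
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where InvestigationPriority > 5
| project TimeGenerated, UserPrincipalName, ActivityType,
    SourceIPAddress, InvestigationPriority, ActivityInsights
| order by InvestigationPriority desc
```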
+ ## Next steps In this document, you learned how to take advantage of customizable anomalies in Microsoft Sentinel. - Learn how to [view, create, manage, and fine-tune anomaly rules](work-with-anomaly-rules.md).
+- Learn about [User and Entity Behavior Analytics (UEBA)](identify-threats-with-entity-behavior-analytics.md).
+- See the list of [currently supported anomalies](anomalies-reference.md).
- Learn about [other types of analytics rules](detect-threats-built-in.md).
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-anomaly-rules.md
## View customizable anomaly rule templates
-Microsoft Sentinel's [customizable anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules must be activated before they will generate anomalies, which you can find in the **Anomalies** table in the **Logs** section.
+Microsoft Sentinel's [customizable anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules are enabled, or activated, by default, so they will generate anomalies out-of-the-box. You can find and query these anomalies in the **Anomalies** table in the **Logs** section.
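
For example, to confirm that anomalies are flowing, you can run a query like the following in **Logs** (a minimal sketch; the 48-hour lookback is an arbitrary illustrative value):

```kusto
// Minimal sketch: count recent anomalies by the template that produced them.
Anomalies
| where TimeGenerated > ago(48h)
| summarize AnomalyCount = count() by AnomalyTemplateName
| order by AnomalyCount desc
```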
-1. From the Microsoft Sentinel navigation menu, select **Analytics**.
-
-1. On the **Analytics** page, select the **Rule templates** tab.
-
-1. Filter the list for **Anomaly** templates:
+You can now find anomaly rules displayed in a grid on the **Anomalies** tab of the **Analytics** page. The list can be filtered by the following criteria:
- 1. Select the **Rule type** filter, then the drop-down list that appears below.
+- **Status** - whether the rule is enabled or disabled.
- 1. Unmark **Select all**, then mark **Anomaly**.
+- **Tactics** - the MITRE ATT&CK framework tactics covered by the anomaly.
- 1. If necessary, select the top of the drop-down list to retract it, then select **OK**.
+- **Techniques** - the MITRE ATT&CK framework techniques covered by the anomaly.
-## Activate anomaly rules
+- **Data sources** - the type of logs that need to be ingested and analyzed for the anomaly to be defined.
-When you select one of the rule templates, you will see the following information in the details pane, along with a **Create rule** button:
+When you select a rule, you will see the following information in the details pane:
- **Description** explains how the anomaly works and the data it requires. -- **Data sources** indicates the type of logs that need to be ingested in order to be analyzed.- - **Tactics and techniques** are the MITRE ATT&CK framework tactics and techniques covered by the anomaly. - **Parameters** are the configurable attributes for the anomaly.
When you select one of the rule templates, you will see the following informatio
- **Rule frequency** is the time between log processing jobs that find the anomalies. -- **Anomaly version** shows the version of the template that is used by a rule. If you want to change the version used by a rule that is already active, you must recreate the rule.--- **Template last updated** is the date the anomaly version was changed.-
-Complete the following steps to activate a rule:
-
-1. Choose a rule template that is not already labeled **IN USE**. Select the **Create rule** button to open the rule creation wizard.
-
- The wizard for each rule template will be slightly different, but it has three steps or tabs: **General**, **Configuration**, **Review and create**.
-
- You can't change any of the values in the wizard; you first have to create and activate the rule.
+- **Rule status** tells you whether the rule runs in **Production** or **Flighting** (staging) mode when enabled.
-1. Cycle through the tabs, wait for the "Validation passed" message on the **Review and create** tab, and select the **Create** button.
+- **Anomaly version** shows the version of the template that is used by a rule. If you want to change the version used by a rule that is already active, you must recreate the rule.
- You can only create one active rule from each template. Once you complete the wizard, an active anomaly rule is created in the **Active rules** tab, and the template (in the **Rule templates** tab) will be marked **IN USE**.
+The rules that come with Microsoft Sentinel out of the box cannot be edited or deleted. To customize a rule, you must first create a duplicate of the rule, and then customize the duplicate. [See the complete instructions](#tune-anomaly-rules).
- > [!NOTE]
- > Assuming the required data is available, the new rule may still take up to 24 hours to appear in the **Active rules** tab. To view the new rules, select the Active rules tab and filter it the same way you filtered the Rule templates list above.
+> [!NOTE]
+> **Why is there an Edit button if the rule can't be edited?**
+>
+> While you can't change the configuration of an out-of-the-box anomaly rule, you can do two things:
+>
+> 1. You can toggle the **rule status** of the rule between **Production** and **Flighting**.
+>
+> 1. You can submit feedback to Microsoft on your experience with customizable anomalies.
-Once the anomaly rule is activated, detected anomalies will be stored in the **Anomalies** table in the **Logs** section of your Microsoft Sentinel workspace.
-Each anomaly rule has a training period, and anomalies will not appear in the table until after that training period. You can find the training period in the description of each anomaly rule.
## Assess the quality of anomalies
You can see how well an anomaly rule is performing by reviewing a sample of the
1. From the Microsoft Sentinel navigation menu, select **Analytics**.
-1. On the **Analytics** page, check that the **Active rules** tab is selected.
-
-1. Filter the list for **Anomaly** rules (as above).
+1. On the **Analytics** page, select the **Anomalies** tab.
-1. Select the rule you want to assess, and copy its name from the top of the details pane to the right.
+1. Select the rule you want to assess, and copy its ID from the top of the details pane to the right.
1. From the Microsoft Sentinel navigation menu, select **Logs**.
You can see how well an anomaly rule is performing by reviewing a sample of the
```kusto Anomalies
- | where AnomalyTemplateName contains "________________________________"
+ | where RuleId contains "<RuleId>"
```
- Paste the rule name you copied above in place of the underscores between the quotation marks.
+ Paste the rule ID you copied above in place of `<RuleId>` between the quotation marks.
1. Select **Run**.
The original anomaly rule will keep running until you either disable or delete i
This is by design, to give you the opportunity to compare the results generated by the original configuration and the new one. Duplicate rules are disabled by default. You can only make one customized copy of any given anomaly rule. Attempts to make a second copy will fail.
-1. To change the configuration of an anomaly rule, select the anomaly rule in the **Active rules** tab.
+1. To change the configuration of an anomaly rule, select the rule from the list in the **Anomalies** tab.
-1. Right-click anywhere on the row of the rule, or left-click the ellipsis (...) at the end of the row, then select **Duplicate**.
+1. Right-click anywhere on the row of the rule, or left-click the ellipsis (...) at the end of the row, then select **Duplicate** from the context menu.
-1. The new copy of the rule will have the suffix " - Customized" in the rule name. To actually customize this rule, select this rule and select **Edit**.
+ A new rule will appear in the list, with the following characteristics:
+ - The rule name will be the same as the original, with " - Customized" appended to the end.
+ - The rule's status will be **Disabled**.
+ - The **FLGT** badge will appear at the beginning of the row to indicate that the rule is in Flighting mode.
+
+1. To customize this rule, select the rule and select **Edit** in the details pane, or from the rule's context menu.
1. The rule opens in the Analytics rule wizard. Here you can change the parameters of the rule and its threshold. The parameters that can be changed vary with each anomaly type and algorithm.
This is by design, to give you the opportunity to compare the results generated
1. Enable the customized rule to generate results. Some of your changes may require the rule to run again, so you must wait for it to finish and come back to check the results on the logs page. The customized anomaly rule runs in **Flighting** (testing) mode by default. The original rule continues to run in **Production** mode by default.
-1. To compare the results, go back to the Anomalies table in **Logs** to [assess the new rule as before](#assess-the-quality-of-anomalies), only look for rows with the original rule name as well as the duplicate rule name with " - Customized" appended to it in the **AnomalyTemplateName** column.
+1. To compare the results, go back to the Anomalies table in **Logs** to [assess the new rule as before](#assess-the-quality-of-anomalies), but use the following query instead to look for anomalies generated by both the original rule and the duplicate rule.
+
+ ```kusto
+ Anomalies
+ | where AnomalyTemplateId contains "<RuleId>"
+ ```
+ Paste the rule ID you copied from the original rule in place of `<RuleId>` between the quotation marks. The value of `AnomalyTemplateId` in both the original and duplicate rules is identical to the value of `RuleId` in the original rule.
- If you are satisfied with the results for the customized rule, you can go back to the **Active rules** tab, select on the customized rule, select the **Edit** button and on the **General** tab switch it from **Flighting** to **Production**. The original rule will automatically change to **Flighting** since you can't have two versions of the same rule in production at the same time.
+If you are satisfied with the results for the customized rule, you can go back to the **Anomalies** tab, select the customized rule, select the **Edit** button and on the **General** tab switch it from **Flighting** to **Production**. The original rule will automatically change to **Flighting** since you can't have two versions of the same rule in production at the same time.
## Next steps In this document, you learned how to work with customizable anomaly detection analytics rules in Microsoft Sentinel. - Get some background information about [customizable anomalies](soc-ml-anomalies.md).
+- View the [available anomaly types](anomalies-reference.md) in Microsoft Sentinel.
- Explore other [analytics rule types](detect-threats-built-in.md).
service-bus-messaging Service Bus Queues Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-queues-topics-subscriptions.md
A related benefit is **load-leveling**, which enables producers and consumers to
Using queues to intermediate between message producers and consumers provides an inherent loose coupling between the components. Because producers and consumers aren't aware of each other, a consumer can be upgraded without having any effect on the producer. ### Create queues
-You can create queues using the [Azure portal](service-bus-quickstart-portal.md), [PowerShell](service-bus-quickstart-powershell.md), [CLI](service-bus-quickstart-cli.md), or [Resource Manager templates](service-bus-resource-manager-namespace-queue.md). Then, send and receive messages using clients written in [C#](service-bus-dotnet-get-started-with-queues.md), [Java](service-bus-java-how-to-use-queues.md), [Python](service-bus-python-how-to-use-queues.md), and [JavaScript](service-bus-nodejs-how-to-use-queues.md).
+You can create queues using the [Azure portal](service-bus-quickstart-portal.md), [PowerShell](service-bus-quickstart-powershell.md), [CLI](service-bus-quickstart-cli.md), or [Azure Resource Manager templates (ARM templates)](service-bus-resource-manager-namespace-queue.md). Then, send and receive messages using clients written in [C#](service-bus-dotnet-get-started-with-queues.md), [Java](service-bus-java-how-to-use-queues.md), [Python](service-bus-python-how-to-use-queues.md), and [JavaScript](service-bus-nodejs-how-to-use-queues.md).
### Receive modes You can specify two different modes in which Service Bus receives messages.
A queue allows processing of a message by a single consumer. In contrast to queu
The message-sending functionality of a queue maps directly to a topic and its message-receiving functionality maps to a subscription. Among other things, this feature means that subscriptions support the same patterns described earlier in this section regarding queues: competing consumer, temporal decoupling, load leveling, and load balancing. ### Create topics and subscriptions
-Creating a topic is similar to creating a queue, as described in the previous section. You can create topics and subscriptions using the [Azure portal](service-bus-quickstart-topics-subscriptions-portal.md), [PowerShell](service-bus-quickstart-powershell.md), [CLI](service-bus-tutorial-topics-subscriptions-cli.md), or [Resource Manager templates](service-bus-resource-manager-namespace-topic.md). Then, send messages to a topic and receive messages from subscriptions using clients written in [C#](service-bus-dotnet-how-to-use-topics-subscriptions.md), [Java](service-bus-java-how-to-use-topics-subscriptions.md), [Python](service-bus-python-how-to-use-topics-subscriptions.md), and [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md).
+Creating a topic is similar to creating a queue, as described in the previous section. You can create topics and subscriptions using the [Azure portal](service-bus-quickstart-topics-subscriptions-portal.md), [PowerShell](service-bus-quickstart-powershell.md), [CLI](service-bus-tutorial-topics-subscriptions-cli.md), or [ARM templates](service-bus-resource-manager-namespace-topic.md). Then, send messages to a topic and receive messages from subscriptions using clients written in [C#](service-bus-dotnet-how-to-use-topics-subscriptions.md), [Java](service-bus-java-how-to-use-topics-subscriptions.md), [Python](service-bus-python-how-to-use-topics-subscriptions.md), and [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md).
### Rules and actions In many scenarios, messages that have specific characteristics must be processed in different ways. To enable this processing, you can configure subscriptions to find messages that have desired properties and then perform certain modifications to those properties. While Service Bus subscriptions see all messages sent to the topic, it is possible to only copy a subset of those messages to the virtual subscription queue. This filtering is accomplished using subscription filters. Such modifications are called **filter actions**. When a subscription is created, you can supply a filter expression that operates on the properties of the message. The properties can be both the system properties (for example, **Label**) and custom application properties (for example, **StoreName**.) The SQL filter expression is optional in this case. Without a SQL filter expression, any filter action defined on a subscription will be done on all the messages for that subscription.
service-fabric Service Fabric Connect To Secure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-connect-to-secure-cluster.md
Title: Connect securely to an Azure Service Fabric cluster
description: Describes how to authenticate client access to a Service Fabric cluster and how to secure communication between clients and a cluster. Previously updated : 01/29/2019 Last updated : 06/22/2022 # Connect to a secure cluster
The following example relies on Microsoft.IdentityModel.Clients.ActiveDirectory,
> [!IMPORTANT] > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
-For more information on AAD token acquisition, see [Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client?view=azure-dotnet).
+For more information on AAD token acquisition, see [Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client?view=azure-dotnet&preserve-view=true).
```csharp string tenantId = "C15CFCEA-02C1-40DC-8466-FBD0EE0B05D2";
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
14.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure | |||
-16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1112-azure, 4.15.0-1113-azure |
16.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic <br/>4.8.0-34-generic to 4.8.0-58-generic <br/>4.10.0-14-generic to 4.10.0-42-generic <br/>4.11.0-13-generic to 4.11.0-14-generic <br/>4.13.0-16-generic to 4.13.0-45-generic <br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| 16.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| |||
-18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-107-generic |
+18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-109-generic </br> 5.4.0-110-generic |
18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | |||
-20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic |
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-109-generic </br> 5.4.0-110-generic |
20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic |
Debian 8 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure
Debian 8 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 | |||
-Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 9.1 kernels supported in this release.
-Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-17-amd64
+Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-18-amd64 </br> 4.19.0-0.bpo.19-amd64 </br> 4.19.0-0.bpo.17-cloud-amd64 to 4.19.0-0.bpo.19-cloud-amd64
+Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-16-amd64, 4.9.0-17-amd64
Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 9.1 kernels supported in this release. Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 Debian 9.1 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 |||
-Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 |
+Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 |
Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 10 kernels supported in this release. Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 10 kernels supported in this release. Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new SLES 12 kernels supported in this release. |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.121-default:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.85-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.110-default:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/en-us/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new SLES 12 kernels supported in this release. |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.44](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new SLES 15 kernels supported in this release.
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.3.18-59.5-default:3
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br> SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
You can follow and subscribe to Site Recovery update notifications in the [Azure
For Site Recovery components, we support N-4 versions, where N is the latest released version. These are summarized in the following table.
-**Update** | **Unified Setup** | **Configuration server ova** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent**
+**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent**
| | | | |
-[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6263.1 | 5.1.7207.0 | 9.48.6263.1 | 5.1.7207.0 | 2.0.9245.0
+[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0
[Rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) | 9.47.6219.1 | 5.1.7127.0 | 9.47.6219.1 | 5.1.7127.0 | 2.0.9241.0 [Rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 9.46.6149.1 | 5.1.7029.0 | 9.46.6149.1 | 5.1.7030.0 | 2.0.9239.0 [Rollup 58](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 9.45.6096.1 | 5.1.6952.0 | 9.45.6096.1 | 5.1.6952.0 | 2.0.9237.0
spatial-anchors Setup Unity Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/setup-unity-project.md
If you are developing for HoloLens or Android please follow the additional setup
# [HoloLens](#tab/ExtraConfigurationsHoloLens) #### Configure your Unity project XR settings
-When developing MixedReality apps on HoloLens, you need to set the XR configuration in Unity. For more information, see [Setting up your XR configuration - Mixed Reality | Microsoft Docs](/windows/mixed-reality/develop/unity/xr-project-setup?tabs=openxr) and [Choosing a Unity version and XR plugin - Mixed Reality | Microsoft Docs](/windows/mixed-reality/develop/unity/choosing-unity-version).
+When developing Mixed Reality apps on HoloLens, you need to set the XR configuration in Unity. For more information, see [Setting up your XR configuration - Mixed Reality | Microsoft Docs](/windows/mixed-reality/develop/unity/new-openxr-project-without-mrtk) and [Choosing a Unity version and XR plugin - Mixed Reality | Microsoft Docs](/windows/mixed-reality/develop/unity/choosing-unity-version).
Azure Spatial Anchors SDK versions 2.9.0 or earlier only provide support for the Windows XR plugin (`com.unity.xr.windowsmr`), so the Azure Spatial Anchors windows package has an explicit dependency on the Windows XR Plugin.
spring-cloud Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/vnet-customer-responsibilities.md
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the
## Azure Spring Apps optional FQDN for third-party application performance management
-Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the following configurations:
- | Destination FQDN | Port | Use | | - | - | | | <i>collector*.newrelic.com</i> | TCP:443/80 | Required networks of New Relic APM agents from US region, also see [APM Agents Networks](https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/install-configure/networks/#agents). |
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
# Hot, Cool, and Archive access tiers for blob data
-We sometimes use the first person plural in content.
- Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include: - **Hot tier** - An online tier optimized for storing data that is accessed or modified frequently. The Hot tier has the highest storage costs, but the lowest access costs.
storage Data Lake Storage Migrate On Premises HDFS Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-on-premises-HDFS-cluster.md
Title: Migrate from on-prem HDFS store to Azure Storage with Azure Data Box
description: Migrate data from an on-premises HDFS store into Azure Storage (blob storage or Data Lake Storage Gen2) by using a Data Box device. Previously updated : 02/14/2019 Last updated : 06/16/2022
Follow these steps to copy data via the REST APIs of Blob/Object storage to your
To improve the copy speed:
- - Try changing the number of mappers. (The above example uses `m` = 4 mappers.)
+ - Try changing the number of mappers. (The default number of mappers is 20. The above example uses `m` = 4 mappers.)
+
+ - Try `-D fs.azure.concurrentRequestCount.out=<thread_number>`. Replace `<thread_number>` with the number of threads per mapper. The product of the number of mappers and the number of threads per mapper, `m*<thread_number>`, should not exceed 32.
- Try running multiple `distcp` in parallel. - Remember that large files perform better than small files.
+
+ - If you have files larger than 200 GB, we recommend changing the block size to 100 MB with the following parameters:
+
+ ```
+ hadoop distcp \
+ -libjars $azjars \
+ -Dfs.azure.write.request.size=104857600 \
+ -Dfs.AbstractFileSystem.wasb.Impl=org.apache.hadoop.fs.azure.Wasb \
+ -Dfs.azure.account.key.<blob_service_endpoint>=<account_key> \
+ -strategy dynamic \
+ -Dmapreduce.map.memory.mb=16384 \
+ -Dfs.azure.concurrentRequestCount.out=8 \
+ -Dmapreduce.map.java.opts=-Xmx8196m \
+ -m 4 \
+ -update \
+ /data/bigfile wasb://hadoop@mystorageaccount.blob.core.windows.net/bigfile
+ ```
## Ship the Data Box to Microsoft
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-overview.md
Previously updated : 11/15/2021 Last updated : 06/22/2022
Point-in-time restore requires that the following Azure Storage features be enab
- [Change feed](storage-blob-change-feed.md) - [Blob versioning](versioning-overview.md)
-Enabling these features may result in additional charges. Make sure that you understand the billing implications before enabling point-in-time restore and the prerequisite features.
+To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
+
+> [!CAUTION]
+> After you enable blob versioning for a storage account, every write operation to a blob in that account results in the creation of a new version. For this reason, enabling blob versioning may result in additional costs. To minimize costs, use a lifecycle management policy to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md).
### Retention period for point-in-time restore
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
The following clients are known to be incompatible with SFTP for Azure Blob Stor
- To access the storage account using SFTP, your network must allow traffic on port 22. -- There's a 4-minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
+- There's a 2-minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
## Security
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- Symbolic links aren't supported. -- `ssh-keyscan` isn't supported.- - SSH and SCP commands that aren't SFTP aren't supported. - FTPS and FTP aren't supported.
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
You can use any SFTP client to securely connect and then transfer files. The fol
> [!div class="mx-imgBorder"] > ![Connect with Open SSH](./media/secure-file-transfer-protocol-support-how-to/ssh-connect-and-transfer.png)
+> [!NOTE]
+> The SFTP username is `storage_account_name`.`username`. In the example above, the `storage_account_name` is "contoso4" and the `username` is "contosouser". The combined username becomes `contoso4.contosouser` for the SFTP command.
+ > [!NOTE] > You might be prompted to trust a host key. During the public preview, valid host keys are published [here](secure-file-transfer-protocol-host-keys.md).
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
Previously updated : 02/23/2022 Last updated : 06/22/2022
Blob soft delete is part of a comprehensive data protection strategy for blob da
To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
+> [!CAUTION]
+> After you enable blob versioning for a storage account, every write operation to a blob in that account results in the creation of a new version. For this reason, enabling blob versioning may result in additional costs. To minimize costs, use a lifecycle management policy to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md).
+ ## How blob soft delete works When you enable blob soft delete for a storage account, you specify a retention period for deleted objects of between 1 and 365 days. The retention period indicates how long the data remains available after it's deleted or overwritten. The clock starts on the retention period as soon as an object is deleted or overwritten.
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-container-overview.md
Previously updated : 02/23/2022 Last updated : 06/22/2022
Blob soft delete is part of a comprehensive data protection strategy for blob da
To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
+> [!CAUTION]
+> After you enable blob versioning for a storage account, every write operation to a blob in that account results in the creation of a new version. For this reason, enabling blob versioning may result in additional costs. To minimize costs, use a lifecycle management policy to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md).
+ ## How container soft delete works When you enable container soft delete, you can specify a retention period for deleted containers that is between 1 and 365 days. The default retention period is seven days. During the retention period, you can recover a deleted container by calling the **Restore Container** operation.
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
Previously updated : 05/10/2021 Last updated : 06/22/2022
Blob versioning is part of a comprehensive data protection strategy for blob dat
To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
+> [!CAUTION]
+> After you enable blob versioning for a storage account, every write operation to a blob in that account results in the creation of a new version. For this reason, enabling blob versioning may result in additional costs. To minimize costs, use a lifecycle management policy to automatically delete old versions. For more information about lifecycle management, see [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md).
+ ## How blob versioning works A version captures the state of a blob at a given point in time. Each version is identified with a version ID. When blob versioning is enabled for a storage account, Azure Storage automatically creates a new version with a unique ID when a blob is first created and each time that the blob is subsequently modified.
The following diagram shows how modifying a blob after versioning is disabled cr
## Blob versioning and soft delete
-Microsoft recommends enabling both versioning and blob soft delete for your storage accounts for optimal data protection. For more information about blob soft delete, see [Soft delete for Azure Storage blobs](./soft-delete-blob-overview.md).
+Blob versioning and blob soft delete are part of the recommended data protection configuration for storage accounts. For more information about Microsoft's recommendations for data protection, see [Recommended data protection configuration](#recommended-data-protection-configuration) in this article, as well as [Data protection overview](data-protection-overview.md).
### Overwriting a blob
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
The following table describes the legacy storage account types. These account ty
| Type of legacy storage account | Supported storage services | Redundancy options | Deployment model | Usage | |--|--|--|--|--|
-| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, and Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> |
+| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, and Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic<sup>1</sup> | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md)<sup>1</sup>.</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> |
| Standard Blob Storage | Blob Storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |
+<sup>1</sup> Beginning August 1, 2022, you'll no longer be able to create new storage accounts with the classic deployment model. Resources created prior to that date will continue to be supported through August 31, 2024. For more information, see [Azure classic storage accounts will be retired on 31 August 2024](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024).
+ ## Scalability targets for standard storage accounts [!INCLUDE [azure-storage-account-limits-standard](../../../includes/azure-storage-account-limits-standard.md)]
storage File Sync Extend Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-extend-servers.md
description: Learn how to extend Windows file servers with Azure File Sync, from
Previously updated : 04/13/2021 Last updated : 06/21/2022
-#Customer intent: As an IT Administrator, I want see how to extend Windows file servers with Azure File Sync, so I can evaluate the process for extending storage capacity of my Windows servers.
+#Customer intent: As an IT administrator, I want to see how to extend Windows file servers with Azure File Sync, so I can evaluate the process for extending the storage capacity of my Windows servers.
# Tutorial: Extend Windows file servers with Azure File Sync
-The article demonstrates the basic steps for extending the storage capacity of a Windows server by using Azure File Sync. Although the tutorial features Windows Server as an Azure virtual machine (VM), you would typically do this process for your on-premises servers. You can find instructions for deploying Azure File Sync in your own environment in the [Deploy Azure File Sync](file-sync-deployment-guide.md) article.
+This article demonstrates the basic steps for extending the storage capacity of a Windows server by using Azure File Sync. Although this tutorial features Windows Server as an Azure virtual machine (VM), you would typically do this process for your on-premises servers. You can find instructions for deploying Azure File Sync in your own environment in the [Deploy Azure File Sync](file-sync-deployment-guide.md) article.
> [!div class="checklist"] > - Deploy the Storage Sync Service
Sign in to the [Azure portal](https://portal.azure.com).
For this tutorial, you need to do the following before you can deploy Azure File Sync: - Create an Azure storage account and file share-- Set up a Windows Server 2016 Datacenter VM
+- Set up a Windows Server 2019 Datacenter VM
- Prepare the Windows Server VM for Azure File Sync ### Create a folder and .txt file
On your local computer, create a new folder named *FilesToSync* and add a text f
### Create a file share
-After you deploy an Azure storage account, you create a file share.
+After you deploy an Azure storage account, follow these steps to create a file share.
1. In the Azure portal, select **Go to resource**.
-1. Select **Files** from the storage account pane.
-
- ![Select Files](./media/storage-sync-files-extend-servers/click-files.png)
-
+1. From the menu at the left, select **Data storage** > **File shares**.
1. Select **+ File Share**.
- ![Select the add file share button](./media/storage-sync-files-extend-servers/create-file-share-portal2.png)
+1. Name the new file share *afsfileshare*, leave the tier set to *Transaction optimized*, and then select **Create**. You only need 5 TiB for this tutorial.
-1. Name the new file share *afsfileshare*. Enter "5120" for the **Quota**, and then select **Create**. The quota can be a maximum of 100 TiB, but you only need 5 TiB for this tutorial.
-
- ![Provide a name and quota for the new file share](./media/storage-sync-files-extend-servers/create-file-share-portal3.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/create-file-share-portal.png" alt-text="Screenshot showing how to create a new file share using the Azure portal.":::
1. Select the new file share. On the file share page, select **Upload**.
- ![Upload a file](./media/storage-sync-files-extend-servers/create-file-share-portal5.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/create-file-share-portal5.png" alt-text="Screenshot showing where to find the Upload button for the new file share.":::
-1. Browse to the *FilesToSync* folder where you created your .txt file, select *mytestdoc.txt* and select **Upload**.
+1. Browse to the *FilesToSync* folder on your local machine where you created your .txt file, select *mytestdoc.txt*, and then select **Upload**.
- ![Browse file share](./media/storage-sync-files-extend-servers/create-file-share-portal6.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/create-file-share-portal6.png" alt-text="Screenshot showing how to browse and upload a file to the new file share using the Azure portal.":::
-At this point, you've created a storage account and a file share with one file in it. Next, you deploy an Azure VM with Windows Server 2016 Datacenter to represent the on-premises server in this tutorial.
+At this point, you've created a storage account and a file share with one file in it. Next, you'll deploy an Azure VM with Windows Server 2019 Datacenter to represent the on-premises server in this tutorial.
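If you prefer scripting to clicking through the portal, the same file share can be created with Az PowerShell. This is a minimal sketch, not part of the tutorial's steps; the resource group and storage account names are placeholders, and it assumes you've already signed in with `Connect-AzAccount`:

```azurepowershell-interactive
# Sketch: create the tutorial's file share from PowerShell instead of the portal.
# Replace the placeholder resource group and storage account names.
New-AzRmStorageShare `
    -ResourceGroupName "myexamplegroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "afsfileshare" `
    -AccessTier TransactionOptimized `
    -QuotaGiB 5120
```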
### Deploy a VM and attach a data disk
-1. Go to the Azure portal and expand the menu on the left. Choose **Create a resource** in the upper left-hand corner.
-1. In the search box above the list of **Azure Marketplace** resources, search for **Windows Server 2016 Datacenter** and select it in the results. Choose **Create**.
-1. Go to the **Basics** tab. Under **Project details**, select the resource group you created for this tutorial.
+1. Select **Home** in the Azure portal and under **Azure services**, select **+ Create a resource**.
+1. Under **Popular Azure services**, select **Virtual machine** > **Create**.
+1. Under **Project details**, select your subscription and the resource group you created for this tutorial.
- ![Enter basic information about your VM on the portal blade](./media/storage-sync-files-extend-servers/vm-resource-group-and-subscription.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/vm-project-and-instance-details.png" alt-text="Screenshot showing how to supply project and instance details when creating a V M for this tutorial.":::
1. Under **Instance details**, provide a VM name. For example, use *myVM*.
-1. Don't change the default settings for **Region**, **Availability options**, **Image**, and **Size**.
-1. Under **Administrator account**, provide a **Username** and **Password** for the VM.
-1. Under **Inbound port rules**, choose **Allow selected ports** and then select **RDP (3389)** and **HTTP** from the drop-down menu.
+1. Don't change the default settings for **Region**, **Availability options**, and **Security type**.
+1. Under **Image**, select **Windows Server 2019 Datacenter - Gen2**. Leave **Size** set to the default.
+1. Under **Administrator account**, provide a **Username** and **Password** for the VM. The username must be between 1 and 20 characters long, can't contain the special characters \\/""[]:|<>+=;,?*@&, and can't end in a period. The password must be between 12 and 123 characters long and must contain three of the following: one lowercase character, one uppercase character, one number, and one special character.
+
+ :::image type="content" source="media/storage-sync-files-extend-servers/vm-username-and-password.png" alt-text="Screenshot showing how to set the username, password, and inbound port rules for the V M.":::
+
+1. Under **Inbound port rules**, choose **Allow selected ports** and then select **RDP (3389)** and **HTTP (80)** from the drop-down menu.
1. Before you create the VM, you need to create a data disk.
- 1. Select **Next:Disks**.
+ 1. At the bottom of the page, select **Next: Disks**.
- ![Add data disks](./media/storage-sync-files-extend-servers/vm-add-data-disk.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/vm-add-data-disk.png" alt-text="Screenshot showing how to select the Disks tab.":::
1. On the **Disks** tab, under **Disk options**, leave the defaults.
- 1. Under **DATA DISKS**, select **Create and attach a new disk**.
+ 1. Under **Data disks**, select **Create and attach a new disk**.
- 1. Use the default settings except for **Size (GiB)**, which you can change to **1 GiB** for this tutorial.
+ 1. Use the default settings except for **Size**, which you can change to **4 GiB** for this tutorial by selecting **Change size**.
- ![Data disk details](./media/storage-sync-files-extend-servers/vm-create-new-disk-details.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/create-data-disk.png" alt-text="Screenshot showing how to create a new data disk for your V M.":::
1. Select **OK**. 1. Select **Review + create**.
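If you'd rather script the VM deployment, the following sketch creates a comparable VM and data disk with Az PowerShell. It isn't part of the tutorial's steps; the resource group name is a placeholder, and the image alias assumes the simplified `New-AzVM` parameter set:

```azurepowershell-interactive
# Sketch: create a Windows Server 2019 VM, then attach an empty 4 GiB data disk.
$cred = Get-Credential   # supply the VM username and password

New-AzVM `
    -ResourceGroupName "myexamplegroup" `
    -Name "myVM" `
    -Location "EastUS" `
    -Image "Win2019Datacenter" `
    -Credential $cred `
    -OpenPorts 3389, 80

# Attach the data disk and apply the change to the VM.
$vm = Get-AzVM -ResourceGroupName "myexamplegroup" -Name "myVM"
Add-AzVMDataDisk -VM $vm -Name "myVM-data" -Lun 0 -DiskSizeInGB 4 -CreateOption Empty
Update-AzVM -ResourceGroupName "myexamplegroup" -VM $vm
```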
At this point, you've created a storage account and a file share with one file i
1. After your VM deployment is complete, select **Go to resource**.
- ![Go to resource](./media/storage-sync-files-extend-servers/vm-gotoresource.png)
- At this point, you've created a new virtual machine and attached a data disk. Next you connect to the VM. ### Connect to your VM
-1. In the Azure portal, select **Connect** on the virtual machine properties page.
+1. In the Azure portal, select **Connect** > **RDP** on the VM properties page.
+
+ :::image type="content" source="media/storage-sync-files-extend-servers/connect-vm.png" alt-text="Screenshot showing the Connect button on the Azure portal with R D P highlighted.":::
- ![Connect to an Azure VM from the portal](./media/storage-sync-files-extend-servers/connect-vm.png)
+1. On the **Connect** page, keep the default options to connect by **Public IP address** over port 3389. Select **Download RDP file**.
-1. On the **Connect to virtual machine** page, keep the default options to connect by **IP address** over port 3389. Select **Download RDP file**.
+ :::image type="content" source="media/storage-sync-files-extend-servers/download-rdp.png" alt-text="Screenshot showing how to connect with R D P.":::
- ![Download the RDP file](./media/storage-sync-files-extend-servers/download-rdp.png)
+1. Open the downloaded RDP file and select **Connect** when prompted. You might see a warning that says *The publisher of this remote connection can't be identified*. Select **Connect** anyway.
-1. Open the downloaded RDP file and select **Connect** when prompted.
-1. In the **Windows Security** window, select **More choices** and then **Use a different account**. Type the username as *localhost\username*, enter the password you created for the virtual machine, and then select **OK**.
+1. In the **Windows Security** window that asks you to enter your credentials, select **More choices** and then **Use a different account**. Enter *localhost\username* in the **email address** field, enter the password you created for the VM, and then select **OK**.
- ![More choices](./media/storage-sync-files-extend-servers/local-host2.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/local-host2.png" alt-text="Screenshot showing how to enter your login credentials for the V M.":::
-1. You might receive a certificate warning during the sign-in process. Select **Yes** or **Continue** to create the connection.
+1. You might receive a certificate warning during the sign-in process saying that the identity of the remote computer cannot be verified. Select **Yes** or **Continue** to create the connection.
-### Prepare the Windows server
+### Prepare the Windows Server VM
-For the Windows Server 2016 Datacenter server, disable Internet Explorer Enhanced Security Configuration. This step is required only for initial server registration. You can re-enable it after the server has been registered.
+For the Windows Server 2019 Datacenter VM, disable Internet Explorer Enhanced Security Configuration. This step is required only for initial server registration. You can re-enable it after the server has been registered.
-In the Windows Server 2016 Datacenter VM, Server Manager opens automatically. If Server Manager doesn't open by default, search for it in Start Menu.
+In the Windows Server 2019 Datacenter VM, Server Manager opens automatically. If Server Manager doesn't open by default, search for it in the Start menu.
1. In **Server Manager**, select **Local Server**.
- !["Local Server" on the left side of the Server Manager UI](media/storage-sync-files-extend-servers/prepare-server-disable-ieesc-1.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/prepare-server-disable-ieesc-1.png" alt-text="Screenshot showing how to locate Local Server on the left side of the Server Manager U I.":::
-1. On the **Properties** pane, select the link for **IE Enhanced Security Configuration**.
+1. On the **Properties** pane, find the entry for **IE Enhanced Security Configuration** and select **On**.
- ![The "IE Enhanced Security Configuration" pane in the Server Manager UI](media/storage-sync-files-extend-servers/prepare-server-disable-ieesc-2.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/prepare-server-disable-ieesc-2.png" alt-text="Screenshot showing the Internet Explorer Enhanced Security Configuration pane in the Server Manager UI.":::
-1. In the **Internet Explorer Enhanced Security Configuration** dialog box, select **Off** for **Administrators** and **Users**.
+1. In the **Internet Explorer Enhanced Security Configuration** dialog box, select **Off** for **Administrators** and **Users**, and then select **OK**.
- ![The Internet Explorer Enhanced Security Configuration pop-window with "Off" selected](media/storage-sync-files-extend-servers/prepare-server-disable-ieesc-3.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/prepare-server-disable-ieesc-3.png" alt-text="Screenshot showing the Internet Explorer Enhanced Security Configuration pop-window with Off selected.":::
Now you can add the data disk to the VM. ### Add the data disk
-1. While still in the **Windows Server 2016 Datacenter** VM, select **Files and storage services** > **Volumes** > **Disks**.
+1. While still in the **Windows Server 2019 Datacenter** VM, select **Files and storage services** > **Volumes** > **Disks**.
- ![Data disk](media/storage-sync-files-extend-servers/your-disk.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/your-disk.png" alt-text="Screenshot showing how to bring the data disk online and create a volume." lightbox="media/storage-sync-files-extend-servers/your-disk.png":::
-1. Right-click the 1 GiB disk named **Msft Virtual Disk** and select **New volume**.
+1. Right-click the 4 GiB disk named **Msft Virtual Disk** and select **New volume**.
1. Complete the wizard. Use the default settings and make note of the assigned drive letter. 1. Select **Create**. 1. Select **Close**.
Now you can add the data disk to the VM.
1. Open the **FilesToSync** folder. 1. Right-click and select **New** > **Text Document**. Name the text file *MyTestFile*.
- ![Add a new text file](media/storage-sync-files-extend-servers/new-file.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/new-file.png" alt-text="Screenshot showing how to add a new text file on the V M.":::
1. Close **File Explorer** and **Server Manager**.
-### Download the Azure PowerShell module
+### Install the Azure PowerShell module
-Next, in the Windows Server 2016 Datacenter VM, install the Azure PowerShell module on the server.
+Next, in the Windows Server 2019 Datacenter VM, install the Azure PowerShell module on the server. The `Az` module is a rollup module for the Azure PowerShell cmdlets. Installing it downloads all the available Azure Resource Manager modules and makes their cmdlets available for use.
-1. In the VM, open an elevated PowerShell window.
+1. In the VM, open an elevated PowerShell window (run as administrator).
1. Run the following command: ```powershell
Next, in the Windows Server 2016 Datacenter VM, install the Azure PowerShell mod
1. Answer **Yes** or **Yes to All** to continue with the installation.
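For reference, a minimal sketch of the installation command, assuming the default PowerShell Gallery repository:

```powershell
# Install the Az rollup module, which provides all Azure Resource Manager cmdlets.
Install-Module -Name Az -Repository PSGallery -Force
```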
-The `Az` module is a rollup module for the Azure PowerShell cmdlets. Installing it downloads all the available Azure Resource Manager modules and makes their cmdlets available for use.
+At this point, you've set up your environment for the tutorial. Close the PowerShell window. You're ready to deploy the Storage Sync Service.
-At this point, you've set up your environment for the tutorial. You're ready to deploy the Storage Sync Service.
-
-## Deploy the service
+## Deploy the Storage Sync Service
To deploy Azure File Sync, you first place a **Storage Sync Service** resource into a resource group for your selected subscription. The Storage Sync Service inherits access permissions from its subscription and resource group. 1. In the Azure portal, select **Create a resource** and then search for **Azure File Sync**. 1. In the search results, select **Azure File Sync**.
-1. Select **Create** to open the **Deploy Storage Sync** tab.
+1. Select **Create** to open the **Deploy Azure File Sync** tab.
- ![Deploy Storage Sync](media/storage-sync-files-extend-servers/afs-info.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/deploy-storage-sync-service.png" alt-text="Screenshot showing how to deploy the Storage Sync Service in the Azure portal.":::
On the pane that opens, enter the following information:
To deploy Azure File Sync, you first place a **Storage Sync Service** resource i
| -- | -- | | **Name** | A unique name (per subscription) for the Storage Sync Service.<br><br>Use *afssyncservice02* for this tutorial. | | **Subscription** | The Azure subscription you use for this tutorial. |
- | **Resource group** | The resource group that contains the Storage Sync Service.<br><br>Use *afsresgroup101918* for this tutorial. |
+ | **Resource group** | The resource group that contains the Storage Sync Service.<br><br>Use *myexamplegroup* for this tutorial. |
| **Location** | East US |
-1. When you're finished, select **Create** to deploy the **Storage Sync Service**.
-1. Select the **Notifications** tab > **Go to resource**.
+1. When you're finished, select **Review + Create** and then **Create** to deploy the **Storage Sync Service**. The service will take a few minutes to deploy.
+1. When the deployment is complete, select **Go to resource**.
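The Storage Sync Service can also be deployed from PowerShell. A sketch using the tutorial's names, assuming the Az.StorageSync module is installed:

```azurepowershell-interactive
# Sketch: deploy the Storage Sync Service without the portal.
New-AzStorageSyncService `
    -ResourceGroupName "myexamplegroup" `
    -Name "afssyncservice02" `
    -Location "eastus"
```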
-## Install the agent
+## Install the Azure File Sync agent
The Azure File Sync agent is a downloadable package that enables Windows Server to be synced with an Azure file share.
-1. In the **Windows Server 2016 Datacenter** VM, open **Internet Explorer**.
+1. In the **Windows Server 2019 Datacenter** VM, open **Internet Explorer**.
+
+ > [!IMPORTANT]
+ > You might see a warning telling you to turn on **Internet Explorer Enhanced Security Configuration**. Don't turn this back on until you've finished registering the server in the next step.
+ 1. Go to the [Microsoft Download Center](https://go.microsoft.com/fwlink/?linkid=858257). Scroll down to the **Azure File Sync Agent** section and select **Download**.
- ![Sync agent download](media/storage-sync-files-extend-servers/sync-agent-download.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/sync-agent-download.png" alt-text="Screenshot showing how to download the Azure File Sync agent.":::
-1. Select the check box for **StorageSyncAgent_V3_WS2016.EXE** and select **Next**.
+1. Select the check box for **StorageSyncAgent_WS2019.msi** and select **Next**.
- ![Select agent](media/storage-sync-files-extend-servers/select-agent.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/select-agent.png" alt-text="Screenshot showing how to select the right Azure File Sync agent download.":::
-1. Select **Allow once** > **Run** > **Open**.
-1. If you haven't already done so, close the PowerShell window.
-1. Accept the defaults in the **Storage Sync Agent Setup Wizard**.
+1. Select **Allow once** > **Run**.
+1. Go through the **Storage Sync Agent Setup Wizard** and accept the defaults.
1. Select **Install**. 1. Select **Finish**.
-You've deployed the Azure Sync Service and installed the agent on the Windows Server 2016 Datacenter VM. Now you need to register the VM with the Storage Sync Service.
+You've deployed the Storage Sync Service and installed the agent on the Windows Server VM. Now you need to register the VM with the Storage Sync Service.
## Register Windows Server
Registering your Windows server with a Storage Sync Service establishes a trust
The Server Registration UI should open automatically after you install the Azure File Sync agent. If it doesn't, you can open it manually from its file location: `C:\Program Files\Azure\StorageSyncAgent\ServerRegistration.exe`.
-1. When the Server Registration UI opens in the VM, select **OK**.
-1. Select **Sign-in** to begin.
-1. Sign in with your Azure account credentials and select **Sign-in**.
-1. Provide the following information:
+1. When the Server Registration UI opens in the VM, select **Sign in**.
+
+ :::image type="content" source="media/storage-sync-files-extend-servers/server-registration.png" alt-text="Screenshot showing the Server Registration U I to register with an existing Storage Sync Service.":::
- ![A screenshot of the Server Registration UI](media/storage-sync-files-extend-servers/signin.png)
+1. Sign in with your Azure account credentials.
+1. Provide the following information:
| Value | Description | | -- | -- | | **Azure Subscription** | The subscription that contains the Storage Sync Service for this tutorial. |
- | **Resource Group** | The resource group that contains the Storage Sync Service. Use *afsresgroup101918* for this tutorial. |
+ | **Resource Group** | The resource group that contains the Storage Sync Service. Use *myexamplegroup* for this tutorial. |
| **Storage Sync Service** | The name of the Storage Sync Service. Use *afssyncservice02* for this tutorial. | 1. Select **Register** to complete the server registration.
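Registration can also be scripted on the server with the Az.StorageSync module; a minimal sketch, assuming the agent is already installed and you're signed in:

```powershell
# Sketch: register this server with the Storage Sync Service (run on the VM).
Register-AzStorageSyncServer `
    -ResourceGroupName "myexamplegroup" `
    -StorageSyncServiceName "afssyncservice02"
```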
The Server Registration UI should open automatically after you install the Azure
A sync group defines the sync topology for a set of files. A sync group must contain one cloud endpoint, which represents an Azure file share. A sync group also must contain one or more server endpoints. A server endpoint represents a path on a registered server. To create a sync group:
-1. In the [Azure portal](https://portal.azure.com/), select **+ Sync group** from the Storage Sync Service. Use *afssyncservice02* for this tutorial.
+1. In the [Azure portal](https://portal.azure.com/), select **+ Sync group** from the Storage Sync Service you deployed.
- ![Create a new sync group in the Azure portal](media/storage-sync-files-extend-servers/add-sync-group.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/add-sync-group.png" alt-text="Screenshot showing how to create a new sync group in the Azure portal.":::
1. Enter the following information to create a sync group with a cloud endpoint: | Value | Description | | -- | -- |
- | **Sync group name** | This name must be unique within the Storage Sync Service, but can be any name that is logical for you. Use *afssyncgroup* for this tutorial.|
+ | **Sync group name** | This name must be unique within the Storage Sync Service, but can be any name that is logical for you.|
| **Subscription** | The subscription where you deployed the Storage Sync Service for this tutorial. |
- | **Storage account** | Choose **Select storage account**. On the pane that appears, select the storage account that has the Azure file share you created. Use *afsstoracct101918* for this tutorial. |
- | **Azure file share** | The name of the Azure file share you created. Use *afsfileshare* for this tutorial. |
+ | **Storage account** | Choose **Select storage account**. On the pane that appears, select the storage account that has the Azure file share you created. |
+ | **Azure file share** | The name of the Azure file share you created. |
1. Select **Create**.
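A scripted equivalent for the sync group and its cloud endpoint, assuming the Az.StorageSync module; the storage account and endpoint names are placeholders:

```azurepowershell-interactive
# Sketch: create the sync group, then add the Azure file share as its cloud endpoint.
New-AzStorageSyncGroup `
    -ResourceGroupName "myexamplegroup" `
    -StorageSyncServiceName "afssyncservice02" `
    -Name "afssyncgroup"

$storageAccount = Get-AzStorageAccount -ResourceGroupName "myexamplegroup" -Name "mystorageaccount"

New-AzStorageSyncCloudEndpoint `
    -ResourceGroupName "myexamplegroup" `
    -StorageSyncServiceName "afssyncservice02" `
    -SyncGroupName "afssyncgroup" `
    -Name "afscloudendpoint" `
    -StorageAccountResourceId $storageAccount.Id `
    -AzureFileShareName "afsfileshare"
```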
A server endpoint represents a specific location on a registered server. For exa
1. Select the newly created sync group and then select **Add server endpoint**.
- ![Add a new server endpoint in the sync group pane](media/storage-sync-files-extend-servers/add-server-endpoint.png)
+ :::image type="content" source="media/storage-sync-files-extend-servers/add-server-endpoint.png" alt-text="Screenshot showing how to add a new server endpoint in the sync group pane.":::
1. On the **Add server endpoint** pane, enter the following information to create a server endpoint: | Value | Description | | -- | -- |
- | **Registered server** | The name of the server you created. Use *afsvm101918* for this tutorial. |
- | **Path** | The Windows Server path to the drive you created. Use *f:\filestosync* in this tutorial. |
+ | **Registered server** | The name of the server you created. For example, *myVM*. |
+ | **Path** | The Windows Server path to the drive you created. For example, *f:\filestosync*. |
| **Cloud Tiering** | Leave disabled for this tutorial. | | **Volume Free Space** | Leave blank for this tutorial. |
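The server endpoint can likewise be added from PowerShell; a sketch with placeholder names, run after the server has been registered:

```azurepowershell-interactive
# Sketch: look up the registered server, then add it as a server endpoint.
$registeredServer = Get-AzStorageSyncServer `
    -ResourceGroupName "myexamplegroup" `
    -StorageSyncServiceName "afssyncservice02"

New-AzStorageSyncServerEndpoint `
    -ResourceGroupName "myexamplegroup" `
    -StorageSyncServiceName "afssyncservice02" `
    -SyncGroupName "afssyncgroup" `
    -Name "afsserverendpoint" `
    -ServerResourceId $registeredServer.ResourceId `
    -ServerLocalPath "F:\FilesToSync"
```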
A server endpoint represents a specific location on a registered server. For exa
Your files are now in sync across your Azure file share and Windows Server.
-![Azure Storage successfully synced](media/storage-sync-files-extend-servers/files-synced-in-azurestorage.png)
## Clean up resources
-If you'd like to clean up the resources you created in this tutorial, first remove the endpoints from the storage sync service. Then, unregister the server with your storage sync service, remove the sync groups, and delete the sync service.
+If you'd like to clean up the resources you created in this tutorial, first remove the endpoints from the Storage Sync Service. Then, unregister the server from your Storage Sync Service, remove the sync groups, and delete the Storage Sync Service.
[!INCLUDE [storage-files-clean-up-portal](../../../includes/storage-files-clean-up-portal.md)]
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
The targets listed here might be affected by other variables in your deployment.
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ## Azure Files scale targets
-Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to consider: storage accounts, Azure file shares, and files.
+Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to consider: storage accounts, Azure file shares, and individual files.
### Storage account scale targets
-There are two main types of storage accounts for Azure Files:
+Storage account scale targets apply at the storage account level. There are two main types of storage accounts for Azure Files:
- **General purpose version 2 (GPv2) storage accounts**: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables. File shares can be deployed into the transaction optimized (default), hot, or cool tiers.
There are two main types of storage accounts for Azure Files:
<sup>1</sup> General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact [Azure Support](https://azure.microsoft.com/support/faq/). ### Azure file share scale targets
+Azure file share scale targets apply at the file share level.
+ | Attribute | Standard file shares<sup>1</sup> | Premium file shares | |-|-|-| | Minimum size of a file share | No minimum | 100 GiB (provisioned) |
There are two main types of storage accounts for Azure Files:
<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names. ### File scale targets
+File scale targets apply to individual files stored in Azure file shares.
+ | Attribute | Files in standard file shares | Files in premium file shares | |-|-|-| | Maximum file size | 4 TiB | 4 TiB |
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Once you purchase a capacity reservation, it will automatically be consumed by y
For more information on how to purchase storage reservations, see [Optimize costs for Azure Files with reserved capacity](files-reserve-capacity.md). ## Provisioned model
-Azure Files uses a provisioned model for premium file shares. In a provisioned business model, you proactively specify to the Azure Files service what your storage requirements are, rather than being billed based on what you use. A provisioned model for storage is similar to buying an on-premises storage solution because when you provision an Azure file share with a certain amount of storage capacity, you pay for that storage capacity regardless of whether you use it or not. Unlike purchasing physical media on-premises, provisioned file shares can be dynamically scaled up or down depending on your storage and IO performance characteristics.
+Azure Files uses a provisioned model for premium file shares. In a provisioned billing model, you proactively specify to the Azure Files service what your storage requirements are, rather than being billed based on what you use. A provisioned model for storage is similar to buying an on-premises storage solution because when you provision an Azure file share with a certain amount of storage capacity, you pay for that storage capacity regardless of whether you use it or not. Unlike purchasing physical media on-premises, provisioned file shares can be dynamically scaled up or down depending on your storage and IO performance characteristics.
The provisioned size of the file share can be increased at any time but can be decreased only after 24 hours since the last increase. After waiting for 24 hours without a quota increase, you can decrease the share quota as many times as you like, until you increase it again. IOPS/throughput scale changes will be effective within a few minutes after the provisioned size change.
Share credits have three states:
New file shares start with the full number of credits in their burst buckets. Burst credits won't be accrued if the share IOPS fall below baseline IOPS due to throttling by the server. ## Pay-as-you-go model
-Azure Files uses a pay-as-you-go business model for standard file shares. In a pay-as-you-go business model, the amount you pay is determined by how much you actually use, rather than based on a provisioned amount. At a high level, you pay a cost for the amount of logical data stored, and then an additional set of transactions based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to overprovision to account for future growth or performance requirements. You also don't need to deprovision if your workload and data footprint vary over time. On the other hand, a pay-as-you-go model can also be difficult to plan as part of a budgeting process, because the pay-as-you-go billing model is driven by end-user consumption.
+Azure Files uses a pay-as-you-go billing model for standard file shares. In a pay-as-you-go billing model, the amount you pay is determined by how much you actually use, rather than based on a provisioned amount. At a high level, you pay a cost for the amount of logical data stored, and then an additional set of transactions based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to overprovision to account for future growth or performance requirements. You also don't need to deprovision if your workload and data footprint vary over time. On the other hand, a pay-as-you-go model can also be difficult to plan as part of a budgeting process, because the pay-as-you-go billing model is driven by end-user consumption.
### Differences in standard tiers When you create a standard file share, you pick between the following tiers: transaction optimized, hot, and cool. All three tiers are stored on the exact same standard storage hardware. The main difference for these three tiers is their data at-rest storage prices, which are lower in cooler tiers, and the transaction prices, which are higher in the cooler tiers. This means:
Similarly, if you put a highly accessed workload in the cool tier, you'll pay a
Your workload and activity level will determine the most cost-efficient tier for your standard file share. In practice, the best way to pick the most cost-efficient tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.). For standard file shares, we recommend starting in the transaction optimized tier during the initial migration into Azure Files, and then picking the correct tier based on usage after the migration is complete. Transaction usage during migration is not typically indicative of normal transaction usage. ### What are transactions?
-Transactions are operations or requests against Azure Files to upload, download, or otherwise manipulate the contents of the file share. Every action taken on a file share translates to one or more transactions, and on standard shares that use the pay-as-you-go billing model, that translates to transaction costs.
+When you mount an Azure file share on a computer using SMB, the Azure file share is exposed on your computer as if it were local storage. This means that applications, scripts, and other programs that you have on your computer can access the files and folders on the Azure file share without needing to know that they are stored in Azure.
-There are five basic transaction categories: write, list, read, other, and delete. All operations done via the REST API or SMB are bucketed into one of these categories:
+When you read or write to a file, the application you are using performs a series of API calls to the file system API provided by your operating system. These calls are then interpreted by your operating system into SMB protocol transactions, which are sent over the wire to Azure Files to fulfill. A task that the end user perceives as a single operation, such as reading a file from start to finish, may be translated into multiple SMB transactions served by Azure Files.
+
+As a principle, the pay-as-you-go billing model used by standard file shares bills based on usage. SMB and FileREST transactions made by the applications, scripts, and other programs used by your users represent usage of your file share and show up as part of your bill. The same concept applies to value-added cloud services that you might add to your share, such as Azure File Sync or Azure Backup. Transactions are grouped into five different transaction categories, which have different prices based on their impact on the Azure file share. These categories are: write, list, read, other, and delete.
+
+The following table shows the categorization of each transaction:
| Transaction bucket | Management operations | Data operations | |-|-|-|
synapse-analytics Apache Spark Azure Create Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md
Last updated 04/21/2022 + # Manage Apache Spark configuration In this tutorial, you'll learn how to create an Apache Spark configuration for Synapse Studio. The created Apache Spark configuration can be managed in a standardized manner, and when you create a Notebook or an Apache Spark job definition, you can select the Apache Spark configuration that you want to use with your Apache Spark pool. When you select it, the details of the configuration are displayed.
In this tutorial, you will learn how to create an Apache Spark configuration for
You can create custom configurations from different entry points, such as from the Apache Spark configurations page or from the Apache Spark configuration page of an existing Spark pool.
-### Create custom configurations in Apache Spark configurations
+## Create custom configurations in Apache Spark configurations
Follow the steps below to create an Apache Spark Configuration in Synapse Studio.
Follow the steps below to create an Apache Spark Configuration in Synapse Studio
> > **Upload Apache Spark configuration** feature has been removed, but Synapse Studio will keep your previously uploaded configuration.
-### Create an Apache Spark Configuration in already existing Apache Spark pool
+## Create an Apache Spark Configuration in already existing Apache Spark pool
Follow the steps below to create an Apache Spark configuration in an existing Apache Spark pool.
Follow the steps below to create an Apache Spark configuration in an existing Ap
5. Click the **Apply** button to save your changes.
-### Create an Apache Spark Configuration in the Notebook's configure session
+## Create an Apache Spark Configuration in the Notebook's configure session
If you need to use a custom Apache Spark Configuration when creating a Notebook, you can create and configure it in the **configure session** by following the steps below.
If you need to use a custom Apache Spark Configuration when creating a Notebook,
![Screenshot that shows creating a configuration in the configure session.](./media/apache-spark-azure-create-spark-configuration/create-spark-config-in-configure-session.png)
-### Create an Apache Spark Configuration in Apache Spark job definitions
+## Create an Apache Spark Configuration in Apache Spark job definitions
When you're creating a Spark job definition, you need to select an Apache Spark configuration, which you can create by following the steps below:
When you are creating a spark job definition, you need to use Apache Spark confi
> If you don't specify a custom Apache Spark configuration in the Notebook or Apache Spark job definition, the default configuration will be used when running the job.
+## Import and Export an Apache Spark configuration
+
+You can import a configuration file in .txt, .conf, or .json format, convert it to an artifact, and publish it. You can also export a configuration to one of these three formats.
+
+- Import a .txt/.conf/.json configuration from your local machine.
+
+   ![Screenshot that shows importing a configuration.](./media/apache-spark-azure-create-spark-configuration/import-config.png)
++
+- Export a .txt/.conf/.json configuration to your local machine.
+
+   ![Screenshot that shows exporting a configuration.](./media/apache-spark-azure-create-spark-configuration/export-config.png)
++
+For .txt and .conf config files, you can refer to the following example:
+
+ ```txt
+
+ spark.synapse.key1 sample
+ spark.synapse.key2 true
+ # spark.synapse.key3 sample2
+
+ ```
+
+For a .json config file, you can refer to the following example:
+
+ ```json
+ {
+ "configs": {
+ "spark.synapse.key1": "hello world",
+ "spark.synapse.key2": "true"
+ },
+ "annotations": [
+ "Sample"
+ ]
+ }
+ ```
++++ ## Next steps - [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
synapse-analytics Apache Spark Custom Conda Channel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-custom-conda-channel.md
In the next set of steps, we will create a custom Conda channel.
``` cd ~/privatechannel/
-mkdir -P channel/linux64
+mkdir -p channel/linux64
<Add all .tar.bz2 from https://repo.anaconda.com/pkgs/main/linux-64/> // Note: Add all dependent .tar.bz2 as well
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
Parquet and Delta Lake files contain type descriptions for every column. The fol
| INT64 |INT(64, true) |bigint | | INT64 |INT(64, false) |decimal(20,0) | | INT64 |DECIMAL |decimal |
-| INT64 |TIME (MICROS) |time - TIME(NANOS) is not supported |
-|INT64 |TIMESTAMP (MILLIS / MICROS) |datetime2 - TIMESTAMP(NANOS) is not supported |
+| INT64 |TIME (MICROS) | time |
+| INT64 |TIME (NANOS) | Not supported |
+| INT64 |TIMESTAMP ([normalized to utc](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#instant-semantics-timestamps-normalized-to-utc)) (MILLIS / MICROS) | datetime2 |
+| INT64 |TIMESTAMP ([not normalized to utc](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#local-semantics-timestamps-not-normalized-to-utc)) (MILLIS / MICROS) | bigint - make sure that you explicitly adjust the `bigint` value with the timezone offset before converting it to a datetime value. |
+| INT64 |TIMESTAMP (NANOS) | Not supported |
|[Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#lists) |LIST |varchar(8000), serialized into JSON | |[Complex type](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#maps)|MAP|varchar(8000), serialized into JSON |
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
The key differences between Hadoop and native external tables are presented in t
| Supported formats | Delimited/CSV, Parquet, ORC, Hive RC, and RC | Serverless SQL pool: Delimited/CSV, Parquet, and [Delta Lake](query-delta-lake-format.md)<br/>Dedicated SQL pool: Parquet (preview) | | [Folder partition elimination](#folder-partition-elimination) | No | Only for partitioned tables synchronized from Apache Spark pools in Synapse workspace to serverless SQL pools | | [File elimination](#file-elimination) (predicate pushdown) | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns to enable pushdown. |
-| Custom format for location | No | Yes, using wildcards like `/year=*/month=*/day=*` |
-| Recursive folder scan | Yes | Only in serverless SQL pools when specified `/**` at the end of the location path |
+| Custom format for location | No | Yes, using wildcards like `/year=*/month=*/day=*`. In the serverless SQL pool, you can also use recursive wildcards like `/logs/**`. |
+| Recursive folder scan | Yes | Yes. In serverless SQL pools, `/**` must be specified at the end of the location path. In dedicated SQL pools, folders are always scanned recursively. |
| Storage authentication | Storage Access Key(SAK), AAD passthrough, Managed identity, Custom application Azure AD identity | [Shared Access Signature(SAS)](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), [AAD passthrough](develop-storage-files-storage-access-control.md?tabs=user-identity), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), [Custom application Azure AD identity](develop-storage-files-storage-access-control.md?tabs=service-principal). |
+| Column mapping | Ordinal - the columns in the external table definition are mapped to the columns in the underlying Parquet files by position. | Serverless pool: by name. The columns in the external table definition are mapped to the columns in the underlying Parquet files by column name matching. <br/> Dedicated pool: ordinal matching. The columns in the external table definition are mapped to the columns in the underlying Parquet files by position.|
> [!NOTE] > The native external tables are the recommended solution in the pools where they are generally available. If you need to access external data, always use the native tables in serverless pools. In dedicated pools, you should switch to the native tables for reading Parquet files once they are in GA. Use the Hadoop tables only if you need to access some types that are not supported in native external tables (for example - ORC, RC), or if the native version is not available.
traffic-manager Quickstart Create Traffic Manager Profile Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-bicep.md
+
+ Title: 'Quickstart: Create an Azure Traffic Manager profile - Bicep'
+description: This quickstart article describes how to create an Azure Traffic Manager profile by using Bicep.
+++ Last updated : 06/20/2022+++++
+# Quickstart: Create a Traffic Manager profile using Bicep
+
+This quickstart describes how to use Bicep to create a Traffic Manager profile with external endpoints using the performance routing method.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/traffic-manager-external-endpoint).
++
+One Azure resource is defined in the Bicep file:
+
+* [**Microsoft.Network/trafficManagerProfiles**](/azure/templates/microsoft.network/trafficmanagerprofiles)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters uniqueDnsName=<dns-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -uniqueDnsName "<dns-name>"
+ ```
+
+
+
+ The Bicep file deployment creates a profile with two external endpoints. **Endpoint1** uses a target endpoint of `www.microsoft.com` with the location in **North Europe**. **Endpoint2** uses a target endpoint of `docs.microsoft.com` with the location in **South Central US**.
+
+ > [!NOTE]
+ > **uniqueDnsName** needs to be a globally unique name in order for the Bicep file to deploy successfully.
+
+ When the deployment finishes, you'll see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use Azure CLI or Azure PowerShell to validate the deployment.
+
+1. Determine the DNS name of the Traffic Manager profile.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ az network traffic-manager profile show --name ExternalEndpointExample --resource-group exampleRG
+ ```
+
+ From the output, copy the **fqdn** value. It'll be in the following format: `<relativeDnsName>.trafficmanager.net`. This value is also the DNS name of your Traffic Manager profile.
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell-interactive
+ Get-AzTrafficManagerProfile -Name ExternalEndpointExample -ResourceGroupName exampleRG | Select RelativeDnsName
+ ```
+
+ Copy the **RelativeDnsName** value. The DNS name of your Traffic Manager profile is `<relativeDnsName>.trafficmanager.net`.
+
+
+
+2. Run the following command by replacing the **{relativeDnsName}** variable with `<relativeDnsName>.trafficmanager.net`.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ nslookup -type=cname {relativeDnsName}
+ ```
+
+ You should get a canonical name of either `www.microsoft.com` or `docs.microsoft.com` depending on which region is closer to you.
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```powershell-interactive
+ Resolve-DnsName -Name {relativeDnsName} | Select-Object NameHost | Select -First 1
+ ```
+
+ You should get a NameHost of either `www.microsoft.com` or `docs.microsoft.com` depending on which region is closer to you.
+
+
+
+3. To check if you can resolve to the other endpoint, disable the endpoint for the target you got in the last step. Replace the **{endpointName}** with either **endpoint1** or **endpoint2** to disable the target for `www.microsoft.com` or `docs.microsoft.com` respectively.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ az network traffic-manager endpoint update --name {endpointName} --type externalEndpoints --profile-name ExternalEndpointExample --resource-group exampleRG --endpoint-status "Disabled"
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell-interactive
+ Disable-AzTrafficManagerEndpoint -Name {endpointName} -Type ExternalEndpoints -ProfileName ExternalEndpointExample -ResourceGroupName exampleRG -Force
+ ```
+
+
+
+4. Run the command from Step 2 again in Azure CLI or Azure PowerShell. This time, you should get the other canonical name/NameHost for the other endpoint.
+
+## Clean up resources
+
+When you no longer need the Traffic Manager profile, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group. This removes the Traffic Manager profile and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Traffic Manager profile using Bicep.
+
+To learn more about routing traffic, continue to the Traffic Manager tutorials.
+
+> [!div class="nextstepaction"]
+> [Traffic Manager tutorials](tutorial-traffic-manager-improve-website-response.md)
virtual-desktop Data Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/data-locations.md
description: A brief overview of which locations Azure Virtual Desktop's data an
Previously updated : 06/07/2022 Last updated : 06/22/2022
Storing service-generated data is currently supported in the following geographi
- Europe (EU) - United Kingdom (UK) - Canada (CA)-- Japan (JP) \**in Public Preview*
+- Japan (JP) (preview)
+- Australia (AU) (preview)
In addition, service-generated data is aggregated from all locations where the service infrastructure is, and sent to the US geography. The data sent to the US includes scrubbed data, but not customer data.
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
To create a custom Windows 10 Enterprise multi-session image manually:
1. Make sure your VM has all the latest Windows Updates. Download the updates and restart the VM, if necessary. > [!IMPORTANT]
- > After you install a language pack, you have to reinstall the latest cumulative update that is installed on your image. If you do not reinstall the latest cumulative update, you may encounter errors. If the latest cumulative update is already installed, Windows Update does not offer it again; you have to manually reinstall it. For more information, see [Languages overview](/windows-hardware/manufacture/desktop/languages-overview.md?view=windows-10&preserve-view=true#considerations).
+ > After you install a language pack, you have to reinstall the latest cumulative update that is installed on your image. If you do not reinstall the latest cumulative update, you may encounter errors. If the latest cumulative update is already installed, Windows Update does not offer it again; you have to manually reinstall it. For more information, see [Languages overview](/windows-hardware/manufacture/desktop/languages-overview?view=windows-10&preserve-view=true#considerations).
1. Connect to the language package, FOD, and Inbox Apps file share repository and mount it to a letter drive (for example, drive E).
virtual-desktop Shortpath Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/shortpath-public.md
Many of the NAT gateways are configured to allow the incoming traffic to the soc
After the initial packet exchange, the client and session host may establish one or many data flows. After that, Remote Desktop Protocol chooses the fastest network path. Client then establishes a secure TLS connection with the session host and initiates the RDP Shortpath transport. After RDP establishes the Shortpath, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection move to the new transport.
+## Requirements
+
+To support RDP Shortpath, the Azure Virtual Desktop client needs a direct line of sight to the session host. You can get a direct line of sight by using one of these methods:
+
+- Make sure the remote client machines are running Windows 11, Windows 10, or Windows 7 and have the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop) installed. Currently, non-Windows clients aren't supported.
+- Use [ExpressRoute private peering](../expressroute/expressroute-circuit-peerings.md)
+- Use a [Site-to-Site virtual private network (VPN) (IPsec-based)](../vpn-gateway/tutorial-site-to-site-portal.md)
+- Use a [Point-to-Site VPN (IPsec-based)](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)
+- Use a [public IP address assignment](../virtual-network/ip-services/virtual-network-public-ip-address.md)
+
+If you're using other VPN types to connect to the Azure portal, we recommend using a User Datagram Protocol (UDP)-based VPN. While most Transmission Control Protocol (TCP)-based VPN solutions support nested UDP, they add inherited overhead of TCP congestion control, which slows down RDP performance.
+
+Having a direct line of sight means that the client can connect directly to the session host without being blocked by firewalls.
+ ## Enabling the preview of RDP Shortpath for public networks To participate in the preview of RDP Shortpath, you need to enable the Shortpath functionality. You can configure RDP Shortpath on any number of session hosts used in your environment. There's no requirement to enable RDP Shortpath on all hosts in the pool.
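As a sketch of what enabling looks like on a session host, the preview is controlled by the `ICEControl` registry value; run this from an elevated PowerShell session and restart the session host afterward. The value name and data here reflect the preview documentation, so verify them before relying on this:

```powershell
# Sketch: enable the RDP Shortpath for public networks preview on a session host.
# Assumes the documented ICEControl value; restart the session host afterward.
New-ItemProperty `
    -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" `
    -Name "ICEControl" `
    -PropertyType DWORD `
    -Value 2 `
    -Force
```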
Use the following table for reference when configuring firewalls for RDP Shortpa
| RDP Shortpath Server Endpoint | Client network | 1024-65535 | UDP | Public IP addresses assigned to NAT Gateway or Azure Firewall | Allow | | STUN Access | Client network | 3478 | UDP | 13.107.17.41/32, 13.107.64.0/18, 20.202.0.0/16, 52.112.0.0/14, 52.120.0.0/14 | Allow |
- > [!NOTE]
- > The IP ranges for STUN servers used in preview would change at the feature's release to General Availability.
+> [!NOTE]
+> The IP ranges for STUN servers used in preview will change at the feature's release to General Availability.
### Limiting port range used on the client side
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 05/24/2022 Last updated : 06/20/2022
To enable media optimization for Teams, set the following registry key on the ho
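A sketch of one way to set it from an elevated PowerShell session, assuming the `IsWVDEnvironment` value that Azure Virtual Desktop uses to detect media optimizations:

```powershell
# Sketch: flag the session host as an Azure Virtual Desktop environment so Teams
# loads with media optimizations (assumes the documented IsWVDEnvironment value).
New-Item -Path "HKLM:\SOFTWARE\Microsoft\Teams" -Force
New-ItemProperty `
    -Path "HKLM:\SOFTWARE\Microsoft\Teams" `
    -Name "IsWVDEnvironment" `
    -PropertyType DWORD `
    -Value 1 `
    -Force
```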
### Install the Teams WebSocket Service
-Install the latest version of the [Remote Desktop WebRTC Redirector Service](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWQ1UW) on your VM image. If you encounter an installation error, install the [latest Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) and try again.
+Install the latest version of the [Remote Desktop WebRTC Redirector Service](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4YM8L) on your VM image. If you encounter an installation error, install the [latest Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads) and try again.
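For unattended image builds, the MSI can be installed silently with standard Windows Installer switches; the path and file name below are placeholders for wherever the download is saved:

```powershell
# Sketch: silent install of the WebRTC Redirector Service MSI (placeholder path).
msiexec /i "C:\Temp\MsRdcWebRTCSvc_HostSetup.msi" /quiet /norestart
```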
#### Latest WebSocket Service versions
The following table lists the latest versions of the WebSocket Service:
|Version |Release date | ||--|
+|1.17.2205.23001|06/20/2022 |
|1.4.2111.18001 |12/02/2021 | |1.1.2110.16001 |10/15/2021 | |1.0.2106.14001 |07/29/2021 | |1.0.2006.11001 |07/28/2020 | |0.11.0 |05/29/2020 |
+#### Updates for version 1.17.2205.23001
+
+- Fixed an issue that made the WebRTC redirector service disconnect from Teams on Azure Virtual Desktop.
+- Added further stability and reliability improvements to the service.
+ #### Updates for version 1.4.2111.18001 - Fixed a mute notification problem.
virtual-machine-scale-sets Spot Vm Size Recommendation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-vm-size-recommendation.md
+
+ Title: Spot Virtual Machine Size Recommendation for Virtual Machine Scale Sets
+description: Learn how to pick the right VM size when using Azure Spot for Virtual Machine Scale Sets.
++++++ Last updated : 06/15/2022+++
+# Spot Virtual Machine size recommendation
+
+**Applies to:** :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The Spot VM size recommendations tool is an easy way to view and select alternative VM sizes that are better suited for your stateless, flexible, and fault-tolerant workload needs during the Virtual Machine Scale Set deployment process in the Azure portal. This tool allows Azure to recommend appropriate VM sizes to you after you filter by region, price, and eviction rate. You can further filter the list of recommended VMs by size, type, generation, and disk (premium or ephemeral OS disk).
++
+## Azure portal
+
+You can access Azure's size recommendations through the virtual machine scale sets creation process in the Azure portal. The following steps will instruct you on how to access this tool during that process.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search bar, search for and select **Virtual machine scale sets**.
+1. Select **Create** on the **Virtual machine scale sets** page.
+1. In the **Basics** tab, fill out the required fields.
+1. Under **Instance details**, select **Run with Azure Spot discount**.
+
+ :::image type="content" source="./media/spot-vm-size-recommendation/run-with-azure-spot-discount.png" alt-text="Screenshot of a selected checkbox next to the Run with Azure Spot discount option.":::
+
+1. In the same section, under **Azure Spot configuration**, select **Configure**.
+1. On the **Azure Spot configuration** page, in the **Spot details** tab, go to the **Size** selector.
+1. Expand the **Size** drop-down and select the **See all sizes** option at the bottom of the list.
+
+ :::image type="content" source="./media/spot-vm-size-recommendation/spot-details-see-all-sizes.png" alt-text="Screenshot of the See all sizes option in the Size selector":::
+
+1. On the **Select a VM size** page, click **Add filter**.
+1. You can choose which filters to apply. For this example, we'll apply only **Size** and set it to *Medium (7-16)* for the number of vCPUs.
+
+ :::image type="content" source="./media/spot-vm-size-recommendation/size-filter-medium.png" alt-text="Screenshot of the Medium option selected for the Size filter.":::
+
+1. Click **OK**.
+1. From the resulting list of VMs, select a preferred VM size.
+1. Click **Select** at the bottom to continue.
+1. Back on the **Spot details** tab, click **Next** to go to the next tab.
+1. The **Size recommendations** tab allows you to view and select alternative VM sizes that are better suited to your stateless, flexible, and fault-tolerant workload needs with regard to region, pricing, and eviction rates.
+
+ :::image type="content" source="./media/spot-vm-size-recommendation/size-recommendations-tab.png" alt-text="Screenshot of the Size recommendations tab with a list of alternative VM sizes.":::
+
+1. Make your selection and click **Save**.
+1. Continue through the virtual machine scale set creation process.
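If you'd rather script the Spot settings than use the portal flow, the following sketch shows the core scale set configuration. The location, capacity, and size values are illustrative, and a complete deployment also needs a virtual network, OS profile, and image reference:

```azurepowershell
# Build a scale set configuration that runs instances at Spot priority,
# capped at the pay-as-you-go price (-1) and deallocated on eviction.
$vmssConfig = New-AzVmssConfig `
    -Location "East US 2" `
    -SkuCapacity 2 `
    -SkuName "Standard_D2s_v3" `
    -UpgradePolicyMode "Manual" `
    -Priority "Spot" `
    -MaxPrice -1 `
    -EvictionPolicy "Deallocate"
```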
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about Spot virtual machines](../virtual-machines/spot-vms.md)
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
$hostGroup = New-AzHostGroup `
```
-Add the `-SupportAutomaticPlacement true` parameter to have your VMs and scale set instances automatically placed on hosts, within a host group. For more information, see [Manual vs. automatic placement ](dedicated-hosts.md#manual-vs-automatic-placement).
+Add the `-SupportAutomaticPlacement true` parameter to have your VMs and scale set instances automatically placed on hosts within a host group. For more information, see [Manual vs. automatic placement](dedicated-hosts.md#manual-vs-automatic-placement).
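For context, a sketch of the full call; the resource names and fault domain count are illustrative:

```azurepowershell
# Create a host group that automatically places VMs and scale set instances
# on its hosts, spread across two platform fault domains.
$hostGroup = New-AzHostGroup `
    -Location "East US" `
    -Name "myHostGroup" `
    -ResourceGroupName "myDHResourceGroup" `
    -PlatformFaultDomainCount 2 `
    -SupportAutomaticPlacement true
```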
virtual-machines Image Builder Devops Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-devops-task.md
Image Builder requires a Managed Identity, which it uses to read source custom i
### VNET Support
-Currently the DevOps task does not support specifying an existing Subnet, this is on the roadmap, but if you want to utilize an existing VNET, you can use an ARM template, with an Image Builder template nested inside, please see the Windows Image Builder template examples on how this is achieved, or alternatively use [AZ AIB PowerShell](../windows/image-builder-powershell.md).
+The VM that's created can be configured to join a specific virtual network.
+Provide the resource ID of a pre-existing subnet in the **VNet Configuration (Optional)** input field when configuring the task.
+Omit this field if no specific virtual network needs to be used. For more information, see [Image Builder networking options](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking).
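To look up the subnet resource ID to paste into that field, a sketch (the network names are illustrative):

```azurepowershell
# Retrieve the resource ID of an existing subnet for the task's VNet field.
$vnet = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myRG"
(Get-AzVirtualNetworkSubnetConfig -Name "mySubnet" -VirtualNetwork $vnet).Id
```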
### Source
virtual-machines Migration Classic Resource Manager Community Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-community-tools.md
This is a collection of helper tools created as part of enterprise migrations fr
## migAz
migAz is an additional option to migrate a complete set of classic IaaS resources to Azure Resource Manager IaaS resources. The migration can occur within the same subscription or between different subscriptions and subscription types (ex: CSP subscriptions).
-[Link to the tool documentation](https://github.com/Azure/migAz)
+- [Link to the tool documentation](https://social.technet.microsoft.com/wiki/contents/articles/52069.azure-resources-migration-with-migaz-tool.aspx)
## Next Steps
virtual-machines Monitor Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm.md
For a list of the available metrics, see [Reference: Monitoring Azure virtual ma
## Analyze logs
Data in Azure Monitor Logs is stored in a Log Analytics workspace, where it's separated into tables, each with its own set of unique properties.
-VM insights store the collected data in logs, and the insights provide performance and map views that you can use to interactively analyze the data. You can work directly with this data to drill down further or perform custom analyses. For more information and to get sample queries for this data, see [How to query logs from VM insights](../azure-monitor/vm/vminsights-log-search.md).
+VM insights stores the collected data in logs, and the insights provide performance and map views that you can use to interactively analyze the data. You can work directly with this data to drill down further or perform custom analyses. For more information and to get sample queries for this data, see [How to query logs from VM insights](../azure-monitor/vm/vminsights-log-query.md).
To analyze other log data that you collect from your virtual machines, use [log queries](../azure-monitor/logs/get-started-queries.md) in [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md). Several [built-in queries](../azure-monitor/logs/queries.md) for virtual machines are available to use, or you can create your own. You can interactively work with the results of these queries, include them in a workbook to make them available to other users, or generate alerts based on their results.
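You can also run such queries from PowerShell. A minimal sketch, assuming VM insights is already writing to the `InsightsMetrics` table (the workspace names are illustrative):

```azurepowershell
# Query processor utilization samples collected by VM insights.
$ws = Get-AzOperationalInsightsWorkspace -ResourceGroupName "myRG" -Name "myWorkspace"
$query = 'InsightsMetrics | where Namespace == "Processor" and Name == "UtilizationPercentage" | take 10'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $ws.CustomerId -Query $query).Results
```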
For more information about the various alerts for Azure virtual machines, see th
## Next steps
-For documentation about the logs and metrics that are generated by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).
+For documentation about the logs and metrics that are generated by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/build-image-with-packer.md
Create a service principal with [New-AzADServicePrincipal](/powershell/module/az
```azurepowershell
$sp = New-AzADServicePrincipal -DisplayName "PackerSP$(Get-Random)"
-$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
-$plainPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
+$plainPassword = (New-AzADSpCredential -ObjectId $sp.Id).SecretText
```
Then output the password and application ID.
```powershell
$plainPassword
-$sp.ApplicationId
+$sp.AppId
```
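The template's remaining values come from your subscription context; a minimal sketch, assuming a single subscription is selected:

```azurepowershell
# Capture the tenant and subscription IDs referenced by the Packer template.
$sub = Get-AzSubscription
$sub.TenantId
$sub.SubscriptionId
```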
Create a file named *windows.json* and paste the following content. Enter your o
| Parameter | Where to obtain |
|-|-|
-| *client_id* | View service principal ID with `$sp.applicationId` |
+| *client_id* | View service principal ID with `$sp.AppId` |
| *client_secret* | View the auto-generated password with `$plainPassword` |
| *tenant_id* | Output from `$sub.TenantId` command |
| *subscription_id* | Output from `$sub.SubscriptionId` command |
virtual-machines Tutorial Config Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-config-management.md
For pricing information, see [Automation pricing for Update management](https://
To enable Update Management for your VM:
1. Navigate to your VM in the Azure portal (search for **Virtual machines** in the search bar, then choose a VM from the list).
-1. Select **Guest + host updates** under Operations.
-1. Click on **Go to Update management**.
+1. Select **Updates** under Operations.
+1. Click on **Go to Updates using automation**.
1. The **Enable Update Management** window opens. Validation is done to determine if Update Management is enabled for this VM. Validation includes checks for a Log Analytics workspace, for a linked Automation account, and for whether the solution is in the workspace.
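To check ahead of time whether those prerequisites already exist in your subscription, a sketch like this can help (assuming both cmdlets are run without filters to list everything):

```azurepowershell
# List the Log Analytics workspaces and Automation accounts the validation looks for.
Get-AzOperationalInsightsWorkspace | Select-Object Name, ResourceGroupName, Location
Get-AzAutomationAccount | Select-Object AutomationAccountName, ResourceGroupName, Location
```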
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Resource Logs aren't collected and stored until you create a diagnostic setting
See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Virtual WAN are listed in [Virtual WAN monitoring data reference](monitor-virtual-wan-reference.md).
> [!IMPORTANT]
-> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator.md).
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
The metrics and logs you can collect are discussed in the following sections.
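As one example, a sketch that sends a Virtual WAN VPN gateway's diagnostic logs to a Log Analytics workspace, using the older `Set-AzDiagnosticSetting` syntax (recent Az.Monitor versions replace it with `New-AzDiagnosticSetting`); the resource names and category are illustrative:

```azurepowershell
# Enable one diagnostic log category on a site-to-site VPN gateway in a virtual hub.
$ws = Get-AzOperationalInsightsWorkspace -ResourceGroupName "myRG" -Name "myWorkspace"
$gw = Get-AzVpnGateway -ResourceGroupName "myRG" -Name "myVpnGateway"
Set-AzDiagnosticSetting -ResourceId $gw.Id -WorkspaceId $ws.ResourceId -Enabled $true -Category "GatewayDiagnosticLog"
```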
To create a metric alert, see [Tutorial: Create a metric alert for an Azure reso
## Next steps
* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a reference of the metrics, logs, and other important values created by Virtual WAN.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
No. Virtual WAN doesn't require ExpressRoute from each site. Your sites may be c
### Is there a network throughput or connection limit when using Azure Virtual WAN?
-Network throughput is per service in a virtual WAN hub. In each hub, the VPN aggregate throughput is up to 20 Gbps, the ExpressRoute aggregate throughput is up to 20 Gbps, and the User VPN/point-to-site VPN aggregate throughput is up to 20 Gbps. The router in virtual hub supports up to 50 Gbps for VNet-to-VNet traffic flows and assumes a total of 2000 VM workload across all VNets connected to a single virtual hub. This [limit](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-wan-limits) can be increased opening an online customer support request. For cost implication, see *Routing Infrastructure Unit* cost in the [Azure Virtual WAN Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
+Network throughput is per service in a virtual WAN hub. In each hub, the VPN aggregate throughput is up to 20 Gbps, the ExpressRoute aggregate throughput is up to 20 Gbps, and the User VPN/point-to-site VPN aggregate throughput is up to 20 Gbps. The router in the virtual hub supports up to 50 Gbps for VNet-to-VNet traffic flows and assumes a total workload of 2,000 VMs across all VNets connected to a single virtual hub.
+
+To secure upfront capacity without having to wait for the virtual hub to scale out when more throughput is needed, you can set the minimum capacity or modify it as needed. See [About virtual hub settings - hub capacity](hub-settings.md#capacity). For cost implications, see the *Routing Infrastructure Unit* cost on the [Azure Virtual WAN Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
When VPN sites connect into a hub, they do so with connections. Virtual WAN supports up to 1000 connections or 2000 IPsec tunnels per virtual hub. When remote users connect into the virtual hub, they connect to the P2S VPN gateway, which supports up to 100,000 users depending on the scale unit (bandwidth) chosen for the P2S VPN gateway in the virtual hub.
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
[!INCLUDE [ExpressRoute Performance](../../includes/virtual-wan-expressroute-performance.md)]
-### Why am I seeing a message and button called "Update router to latest software version" in portal?
+### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets-based deployments, which enables the virtual hub router to be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. The Cloud Services infrastructure will be deprecated soon. If you want to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal.
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule
Previously updated : 04/28/2022 Last updated : 06/21/2022
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection|
|932115|Remote Command Execution: Windows Command Injection|
|932120|Remote Command Execution: Windows PowerShell Command Found|
-|932130|Remote Command Execution: Unix Shell Expression Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
|932140|Remote Command Execution: Windows FOR/IF Command Found|
|932150|Remote Command Execution: Direct Unix Command Execution|
|932160|Remote Command Execution: Unix Shell Code Found|
The following rule groups and rules are available when using Web Application Fir
|932110|Remote Command Execution: Windows Command Injection|
|932115|Remote Command Execution: Windows Command Injection|
|932120|Remote Command Execution = Windows PowerShell Command Found|
-|932130|Remote Command Execution = Unix Shell Expression Found|
+|932130|Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found|
|932140|Remote Command Execution = Windows FOR/IF Command Found|
|932150|Remote Command Execution: Direct Unix Command Execution|
|932160|Remote Command Execution = Unix Shell Code Found|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description|
|--|--|
|932120|Remote Command Execution = Windows PowerShell Command Found|
-|932130|Remote Command Execution = Unix Shell Expression Found|
+|932130|**Application Gateway WAF v2**: Remote Command Execution: Unix Shell Expression or Confluence Vulnerability (CVE-2022-26134) Found<br><br>**Application Gateway WAF v1**: Remote Command Execution: Unix Shell Expression|
|932140|Remote Command Execution = Windows FOR/IF Command Found|
|932160|Remote Command Execution = Unix Shell Code Found|
|932170|Remote Command Execution = Shellshock (CVE-2014-6271)|