Updates from: 04/23/2021 03:13:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
Previously updated : 12/16/2020 Last updated : 04/22/2021
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/conditional-access-user-flow.md
Previously updated : 03/03/2021 Last updated : 04/22/2021
The following template can be used to create a Conditional Access policy with di
## Enable multi-factor authentication (optional)
-When adding Conditional Access to a user flow, consider the use of **Multi-factor authentication (MFA)**. Users can use a one-time code via SMS or voice, or a one-time password via email for multi-factor authentication. MFA settings are independent from Conditional Access settings. You can set MFA to **Always On** so that MFA is always required regardless of your Conditional Access setup. Or, you can set MFA to **Conditional** so that MFA is required only when an active Conditional Access Policy requires it.
+When adding Conditional Access to a user flow, consider the use of **Multi-factor authentication (MFA)**. Users can use a one-time code via SMS or voice, or a one-time password via email for multi-factor authentication. MFA settings are independent from Conditional Access settings. You can choose from these MFA options:
+
+ - **Off** - MFA is never enforced during sign-in, and users are not prompted to enroll in MFA during sign-up or sign-in.
+ - **Always on** - MFA is always required regardless of your Conditional Access setup. If users aren't already enrolled in MFA, they're prompted to enroll during sign-in. During sign-up, users are prompted to enroll in MFA.
+ - **Conditional (Preview)** - MFA is required only when an active Conditional Access policy requires it. If the result of the Conditional Access evaluation is an MFA challenge with no risk, MFA is enforced during sign-in. If the result is an MFA challenge due to risk *and* the user is not enrolled in MFA, sign-in is blocked. During sign-up, users aren't prompted to enroll in MFA.
> [!IMPORTANT] > If your Conditional Access policy grants access with MFA but the user hasn't enrolled a phone number, the user may be blocked.
To enable Conditional Access for a user flow, make sure the version supports Con
![Configure MFA and Conditional Access in Properties](media/conditional-access-user-flow/add-conditional-access.png)
-1. In the **Multi-factor authentication** section, select the desired **MFA method**, and then under **MFA enforcement**, select **Conditional (Recommended)**.
+1. In the **Multifactor authentication** section, select the desired **Type of method**, and then under **MFA enforcement**, select **Conditional (Preview)**.
-1. In the **Conditional Access** section, select the **Enforce conditional access policies** check box.
+1. In the **Conditional access (Preview)** section, select the **Enforce conditional access policies** check box.
1. Select **Save**.
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 04/19/2021 Last updated : 04/21/2021 zone_pivot_groups: b2c-policy-type
Under content definitions, still within `<BuildingBlocks>`, add the following [D
The `GenerateOtp` technical profile generates a code for the email address. The `VerifyOtp` technical profile verifies the code associated with the email address. You can change the configuration of the format and the expiration of the one-time password. For more information about OTP technical profiles, see [Define a one-time password technical profile](one-time-password-technical-profile.md). > [!NOTE]
-> OTP codes that are generated by the Web.TPEngine.Providers.OneTimePasswordProtocolProvider protocol are tied to the browser session. This means a user can generate unique OTP codes in different browser sessions that are each valid for their corresponding sessions. By contrast, an OTP code generated by the built-in user flow is independent of the browser session, so if a user generates a new OTP code in a new browser session, it replaces the previous OTP code.
+> OTP codes that are generated by the Web.TPEngine.Providers.OneTimePasswordProtocolProvider protocol are tied to the browser session. This means a user can generate unique OTP codes in different browser sessions that are each valid for their corresponding sessions. By contrast, an OTP code generated by the built-in email provider is independent of the browser session, so if a user generates a new OTP code in a new browser session, it replaces the previous OTP code.
Add the following technical profiles to the `<ClaimsProviders>` element.
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-sendgrid.md
Previously updated : 04/19/2021 Last updated : 04/21/2021 zone_pivot_groups: b2c-policy-type
Under content definitions, still within `<BuildingBlocks>`, add the following [D
The `GenerateOtp` technical profile generates a code for the email address. The `VerifyOtp` technical profile verifies the code associated with the email address. You can change the configuration of the format and the expiration of the one-time password. For more information about OTP technical profiles, see [Define a one-time password technical profile](one-time-password-technical-profile.md). > [!NOTE]
-> OTP codes that are generated by the Web.TPEngine.Providers.OneTimePasswordProtocolProvider protocol are tied to the browser session. This means a user can generate unique OTP codes in different browser sessions that are each valid for their corresponding sessions. By contrast, an OTP code generated by the built-in user flow is independent of the browser session, so if a user generates a new OTP code in a new browser session, it replaces the previous OTP code.
+> OTP codes that are generated by the Web.TPEngine.Providers.OneTimePasswordProtocolProvider protocol are tied to the browser session. This means a user can generate unique OTP codes in different browser sessions that are each valid for their corresponding sessions. By contrast, an OTP code generated by the built-in email provider is independent of the browser session, so if a user generates a new OTP code in a new browser session, it replaces the previous OTP code.
Add the following technical profiles to the `<ClaimsProviders>` element.
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-local.md
Previously updated : 01/19/2021 Last updated : 04/22/2021 zone_pivot_groups: b2c-policy-type
With the user option, users can sign in/up with a username and password:
![Username sign-up or sign-in experience](./media/identity-provider-local/local-account-username-experience.png)
-## Phone sign-in (Preview)
+## Phone sign-in
Passwordless authentication is a type of authentication where a user doesn't need to sign in with their password. With phone sign-up and sign-in, the user can sign up for the app using a phone number as their primary login identifier. The user will have the following experience during sign-up and sign-in:
The following screenshots demonstrate the phone recovery flow:
![Phone recovery user flow](./media/identity-provider-local/local-account-change-phone-flow.png)
-## Phone or email sign-in (Preview)
+## Phone or email sign-in
-You can choose to combine the [phone sign-in](#phone-sign-in-preview), and the [email sign-in](#email-sign-in). In the sign-up or sign-in page, user can type a phone number, or email address. Based on the user input, Azure AD B2C takes the user to the corresponding flow.
+You can choose to combine the [phone sign-in](#phone-sign-in) and the [email sign-in](#email-sign-in). On the sign-up or sign-in page, the user can enter a phone number or an email address. Based on the user input, Azure AD B2C takes the user to the corresponding flow.
![Phone or email sign-up or sign-in experience](./media/identity-provider-local/local-account-phone-and-email-experience.png)
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
Previously updated : 04/19/2021 Last updated : 04/22/2021
To use MS Graph API, and interact with resources in your Azure AD B2C tenant, yo
## User phone number management (beta)
-A phone number that can be used by a user to sign-in using [SMS or voice calls](identity-provider-local.md#phone-sign-in-preview), or [multi-factor authentication](multi-factor-authentication.md). For more information, see [Azure AD authentication methods API](/graph/api/resources/phoneauthenticationmethod).
+A phone number can be used by a user to sign in using [SMS or voice calls](identity-provider-local.md#phone-sign-in) or for [multi-factor authentication](multi-factor-authentication.md). For more information, see [Azure AD authentication methods API](/graph/api/resources/phoneauthenticationmethod).
- [Add](/graph/api/authentication-post-phonemethods)
- [List](/graph/api/authentication-list-phonemethods)
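For illustration, here is a minimal sketch of calling the beta phone methods API with plain `fetch` in JavaScript. The token acquisition, the permission named in the comment, and the phone number are assumptions for the example rather than details from the article:

```javascript
// Sketch: add a mobile phone authentication method for a user through the
// Microsoft Graph beta endpoint. Assumes `accessToken` was acquired separately
// with a permission such as UserAuthenticationMethod.ReadWrite.All, and that
// `userId` is the target user's object ID or user principal name.
async function addPhoneMethod(accessToken, userId) {
  const response = await fetch(
    `https://graph.microsoft.com/beta/users/${userId}/authentication/phoneMethods`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      // phoneType can also be "alternateMobile" or "office".
      body: JSON.stringify({ phoneNumber: "+1 2065555555", phoneType: "mobile" }),
    }
  );
  return response.json(); // the created phoneAuthenticationMethod resource
}
```

Listing a user's existing phone methods is a `GET` to the same URL with no body.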
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
Previously updated : 02/01/2021 Last updated : 04/22/2021
This feature helps applications handle scenarios such as:
1. Select **User flows**. 1. Select the user flow for which you want to enable MFA. For example, *B2C_1_signinsignup*. 1. Select **Properties**.
-1. In the **Multifactor authentication** section, select the desired **MFA method**, and then under **MFA enforcement** select **Always on**, or **Conditional (Recommended)**.
+1. In the **Multifactor authentication** section, select the desired **Type of method**. Then under **MFA enforcement** select an option:
+
+ - **Off** - MFA is never enforced during sign-in, and users are not prompted to enroll in MFA during sign-up or sign-in.
+ - **Always on** - MFA is always required (regardless of any Conditional Access setup). If users aren't already enrolled in MFA, they're prompted to enroll during sign-in. During sign-up, users are prompted to enroll in MFA.
+ - **Conditional (Preview)** - MFA is enforced only when a Conditional Access policy requires it. The policy and sign-in risk determine how MFA is presented to the user:
+ - If no risk is detected, an MFA challenge is presented to the user during sign-in. If the user isn't already enrolled in MFA, they're prompted to enroll during sign-in.
+ - If risk is detected and the user isn't already enrolled in MFA, the sign-in is blocked. During sign-up, users aren't prompted to enroll in MFA.
+ > [!NOTE] >
- > - If you select **Conditional (Recommended)**, you'll also need to [add Conditional Access to user flows](conditional-access-user-flow.md), and specify the apps you want the policy to apply to.
+ > - If you select **Conditional (Preview)**, you'll also need to [add Conditional Access to user flows](conditional-access-user-flow.md), and specify the apps you want the policy to apply to.
> - Multi-factor authentication (MFA) is disabled by default for sign-up user flows. You can enable MFA in user flows with phone sign-up, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor. 1. Select **Save**. MFA is now enabled for this user flow.
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-arkose-labs.md
Previously updated : 02/18/2021 Last updated : 04/22/2021
Learn more about [custom attributes](./user-flow-custom-attributes.md?pivots=b2c
The user flow can be either for **sign-up and sign-in** or just **sign-up**. The Arkose Labs user flow will only be shown during sign-up.
-1. See the [instructions](./tutorial-create-user-flows.md) to create a user flow. If using an existing user flow, it must be of the **Recommended (next-generation preview)** version type.
+1. See the [instructions](./tutorial-create-user-flows.md) to create a user flow. If using an existing user flow, it must be of the **Recommended** version type.
2. In the user flow settings, go to **User attributes** and select the **ArkoseSessionToken** claim.
active-directory-b2c Phone Authentication User Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/phone-authentication-user-flows.md
Previously updated : 02/01/2021 Last updated : 04/22/2021
-# Set up phone sign-up and sign-in for user flows (preview)
-
-> [!NOTE]
-> Phone sign-up and sign-in and recovery email features for user flows are in public preview.
+# Set up phone sign-up and sign-in for user flows
In addition to email and username, you can enable phone number as a sign-up option tenant-wide by adding phone sign-up and sign-in to your local account identity provider. After you enable phone sign-up and sign-in for local accounts, you can add phone sign-up to your user flows.
Setting up phone sign-up and sign-in in a user flow involves the following steps
- [Enable the recovery email prompt (preview)](#enable-the-recovery-email-prompt-preview) to let users specify an email that can be used to recover their account when they don't have their phone.
+- [Display consent information](#enable-consent-information) to the user during the sign-up or sign-in flow. You can display the default consent information or customize your own consent information.
+ Multi-factor authentication (MFA) is disabled by default when you configure a user flow with phone sign-up. You can enable MFA in user flows with phone sign-up, but because a phone number is used as the primary identifier, email one-time passcode is the only option available for the second authentication factor. ## Configure phone sign-up and sign-in tenant-wide
After you've enabled phone sign-up and sign-in and the recovery email prompt in
4. Enter an email address and then select **Send verification code**. Verify that a code is sent to the email inbox you provided. Retrieve the code and enter it in the **Verification code** box. Then select **Verify code**.
+## Enable consent information
+
+We strongly suggest that you include consent information in your sign-up and sign-in flow. Sample text is provided below. Refer to the Short Code Monitoring Handbook on the [CTIA website](https://www.ctia.org/programs), and consult your own legal or compliance experts for guidance on the final text and feature configuration that meet your compliance needs:
+>
+> *By providing your phone number, you consent to receiving a one-time passcode sent by text message to help you sign in to *&lt;insert: your application name&gt;*. Standard message and data rates may apply.*
+>
+> *&lt;insert: a link to your Privacy Statement&gt;*<br/>*&lt;insert: a link to your Terms of Service&gt;*
+
+To enable consent information:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
+3. In the Azure portal, search for and select **Azure AD B2C**.
+4. In Azure AD B2C, under **Policies**, select **User flows**.
+5. Select the user flow from the list.
+6. Under **Customize**, select **Languages**.
+7. To display consent text, select **Enable language customization**.
+
+ ![Enable language customization](./media/phone-authentication-user-flows/enable-language-customization.png)
+
+8. To customize the consent information, select a language in the list.
+9. In the language panel, select **Phone signIn page**.
+10. Select **Download defaults**.
+
+ ![Download default](./media/phone-authentication-user-flows/phone-sign-in-language-override.png)
+
+11. Open the downloaded JSON file. Search for the following text and customize it:
+
+ - **disclaimer_link_1_url**: Change **override** to "true" and add the URL for your privacy information.
+
+   - **disclaimer_link_2_url**: Change **override** to "true" and add the URL for your terms of use.
+
+   - **disclaimer_msg_intro**: Change **override** to "true" and change **value** to your desired disclaimer strings.
+
+12. Save the file. Under **Upload new overrides**, browse for the file and select it. Confirm that you see a “Successfully uploaded overrides” notification.
+
+13. Select **Phone signUp page**, and then repeat steps 10 through 12.
+ ## Next steps - [Add external identity providers](add-identity-provider.md)
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/session-behavior.md
Previously updated : 03/04/2021 Last updated : 04/22/2021
You can enable the KMSI feature for users of your web and native applications wh
KMSI is configurable at the individual user flow level. Before enabling KMSI for your user flows, consider the following: -- KMSI is supported only for the **Recommended** versions of sign-up and sign-in (SUSI), sign-in, and profile editing user flows. If you currently have **Standard** or **Legacy preview - v2** versions of these user flows and want to enable KMSI, you'll need to create new, **Recommended** versions of these user flows.
+- KMSI is supported only for the **Recommended** versions of sign-up and sign-in (SUSI), sign-in, and profile editing user flows. If you currently have **Standard (Legacy)** or **Legacy preview - v2** versions of these user flows and want to enable KMSI, you'll need to create new, **Recommended** versions of these user flows.
- KMSI is not supported with password reset or sign-up user flows. - If you want to enable KMSI for all applications in your tenant, we recommend that you enable KMSI for all user flows in your tenant. Because a user can be presented with multiple policies during a session, it's possible they could encounter one that doesn't have KMSI enabled, which would remove the KMSI cookie from the session. - KMSI should not be enabled on public computers.
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-user-flows.md
Previously updated : 04/08/2021 Last updated : 04/22/2021 zone_pivot_groups: b2c-policy-type
A user flow lets you determine how users interact with your application when the
::: zone pivot="b2c-user-flow" > [!IMPORTANT]
-> We've changed the way we reference user flow versions. Previously, we offered V1 (production-ready) versions, and V1.1 and V2 (preview) versions. Now, we've consolidated user flows into **Recommended** (next-generation preview) and **Standard** (generally available) versions. All V1.1 and V2 legacy preview user flows are on a path to deprecation by **August 1, 2021**. For details, see [User flow versions in Azure AD B2C](user-flow-versions.md).
+> We've changed the way we reference user flow versions. Previously, we offered V1 (production-ready) versions, and V1.1 and V2 (preview) versions. Now, we've consolidated user flows into two versions: **Recommended** user flows with the latest features, and **Standard (Legacy)** user flows. In the public cloud, all legacy preview user flows (V1.1 and V2) are on a path to deprecation by **August 1, 2021**. For details, see [User flow versions in Azure AD B2C](user-flow-versions.md). *These changes apply to the Azure public cloud only. Other environments will continue to use [legacy user flow versioning](user-flow-versions-legacy.md).*
::: zone-end ## Prerequisites
active-directory-b2c User Flow Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-versions.md
Previously updated : 07/30/2020 Last updated : 04/22/2021
User flows in Azure Active Directory B2C (Azure AD B2C) help you to set up commo
> [!IMPORTANT] > We've changed the way we reference user flow versions. Previously, we offered V1 (production-ready) versions, and V1.1 and V2 (preview) versions. Now, we've consolidated user flows into two versions: >
->- **Recommended** user flows are the new preview versions of user flows. They're thoroughly tested and combine all the features of the legacy **V2** and **V1.1** versions. Going forward, the new recommended user flows will be maintained and updated. Once you move to these new recommended user flows, you'll have access to new features as they're released.
->- **Standard** user flows, previously known as **V1**, are generally available, production-ready user flows. If your user flows are mission-critical and depend on highly stable versions, you can continue to use standard user flows, realizing that these versions won't be maintained and updated.
+>- **Recommended** user flows are the generally available, next-generation user flows with the latest features. They combine all the features of the legacy **V1**, **V1.1**, and **V2** versions. Going forward, **Recommended** user flows will be maintained and updated. Once you move to these new recommended user flows, you'll have access to new features as they're released.
+>- **Standard (Legacy)** user flows, previously known as **V1**, are legacy user flows. Unless you have a specific business need, we don't recommend using these versions of user flows because they won't be maintained or updated.
>
->All legacy preview user flows (V1.1 and V2) are on a path to deprecation by **August 1, 2021**. Wherever possible, we highly recommend that you [switch to the new **Recommended** versions](#how-to-switch-to-a-new-recommended-user-flow) as soon as possible so you can always take advantage of the latest features and updates. *These changes apply to the Azure public cloud only. Other environments will continue to use [legacy user flow versioning](user-flow-versions-legacy.md).*
+>All legacy preview user flows (V1.1 and V2) are on a path to deprecation by **August 1, 2021**. Wherever possible, we highly recommend that you [switch to the **Recommended** versions](#how-to-switch-to-a-recommended-user-flow) as soon as possible so you can always take advantage of the latest features and updates. *These changes apply to the Azure public cloud only. Other environments will continue to use [legacy user flow versioning](user-flow-versions-legacy.md).*
## Recommended user flows
-Recommended user flows are preview versions that combine new features with legacy V2 and V1.1 capabilities. Going forward, Recommended user flows will be maintained and updated.
+Recommended user flows are the generally available, next-generation user flows with the latest features. Going forward, Recommended user flows will be maintained and updated.
| User flow | Description | | | -- |
-| Password reset (preview) | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Age gating](age-gating.md)</li><li>[password complexity requirements](password-complexity.md)</li></ul> |
-| Profile editing (preview) | Enables a user to configure their user attributes. Using this user flow, you can configure: <ul><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li></ul> |
-| Sign in (preview) | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>Sign-in page customization</li></ul> |
-| Sign up (preview) | Enables a user to create an account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
-| Sign up and sign in (preview) | Enables a user to create an account or sign in their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
+| Password reset | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Age gating](age-gating.md)</li><li>[password complexity requirements](password-complexity.md)</li></ul> |
+| Profile editing | Enables a user to configure their user attributes. Using this user flow, you can configure: <ul><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li></ul> |
+| Sign in | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>Sign-in page customization</li></ul> |
+| Sign up | Enables a user to create an account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
+| Sign up and sign in | Enables a user to create an account or sign in their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Age gating](age-gating.md)</li><li>[Password complexity requirements](password-complexity.md)</li></ul> |
## Standard user flows
-Standard user flows (previously referred to as V1) are generally available, production-ready user flows. Standard user flows will not be updated going forward.
+Standard (Legacy) user flows, previously known as **V1**, are legacy user flows. Unless you have a specific business need, we don't recommend using these versions of user flows because they won't be updated going forward.
| User flow | Description |
-| | -- | -- |
+| | -- |
| Password reset | Enables a user to choose a new password after verifying their email. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>Token compatibility settings</li><li>[Password complexity requirements](password-complexity.md)</li></ul> | | Profile editing | Enables a user to configure their user attributes. Using this user flow, you can configure: <ul><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li></ul> | | Sign in | Enables a user to sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>Block sign-in</li><li>Force password reset</li><li>Keep Me Signed In (KMSI)</ul><br>You can't customize the user interface with this user flow. |
Standard user flows (previously referred to as V1) are generally available, prod
| Sign up and sign in | Enables a user to create an account or sign in to their account. Using this user flow, you can configure: <ul><li>[Multi-factor authentication](multi-factor-authentication.md)</li><li>[Token lifetime](tokens-overview.md)</li><li>Token compatibility settings</li><li>Session behavior</li><li>[Password complexity requirements](password-complexity.md)</li></ul>|
-## How to switch to a new Recommended user flow
+## How to switch to a Recommended user flow
-To switch from a legacy version of a user flow to the new **Recommended** preview version, follow these steps:
+To switch from a legacy version of a user flow to the **Recommended** version, follow these steps:
1. Create a new user flow policy by following the steps in [Tutorial: Create user flows in Azure Active Directory](tutorial-create-user-flows.md). While creating the user flow, select the **Recommended** version.
You won't be able to create new user flows based on the legacy V2 and V1.1 versi
### Is there any reason to continue using legacy V2 and V1.1 user flows?
-Not really. The new **Recommended** preview versions contain the same functionality as the legacy V2 and V1.1 versions. Nothing has been removed, and in fact they now include additional features.
+Not really. The **Recommended** versions contain the same functionality as the legacy V2 and V1.1 versions. Nothing has been removed, and in fact they now include additional features.
### If I don't switch from legacy V2 and V1.1 policies, how will it impact my application?
-If you're using a legacy V2 or V1.1 user flow, your application won't be affected by this versioning change. But to get access to new features or policy changes going forward, you'll need to switch to the new **Recommended** versions.
+If you're using a legacy V2 or V1.1 user flow, your application won't be affected by this versioning change. But to get access to new features or policy changes going forward, you'll need to switch to the **Recommended** versions.
### Will Microsoft still support my legacy V2 or V1.1 user flow policy?
-The legacy V2 and V1.1 versions of user flows will continue to be fully supported.
+In the public cloud, all legacy preview user flows (V1.1 and V2) are on a path to deprecation by August 1, 2021. Wherever possible, we highly recommend that you [switch to the **Recommended** versions](#how-to-switch-to-a-recommended-user-flow) as soon as possible so you can always take advantage of the latest features and updates. *These changes apply to the Azure public cloud only. Other environments will continue to use [legacy user flow versioning](user-flow-versions-legacy.md).*
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Active Directory Domain Services
+description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 04/22/2021
+# Azure Policy built-in definitions for Azure Active Directory Domain Services
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Active Directory Domain Services. For additional Azure Policy built-ins for
+other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure Active Directory Domain Services
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
# How to use Continuous Access Evaluation enabled APIs in your applications
-[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an emerging industry standard that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation-preview) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens will be proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
+[Continuous Access Evaluation](../conditional-access/concept-continuous-access-evaluation.md) (CAE) is an Azure AD feature that allows access tokens to be revoked based on [critical events](../conditional-access/concept-continuous-access-evaluation.md#critical-event-evaluation) and [policy evaluation](../conditional-access/concept-continuous-access-evaluation.md#conditional-access-policy-evaluation-preview) rather than relying on token expiry based on lifetime. For some resource APIs, because risk and policy are evaluated in real time, this can increase token lifetime up to 28 hours. These long-lived tokens will be proactively refreshed by the Microsoft Authentication Library (MSAL), increasing the resiliency of your applications.
This article shows you how to use CAE-enabled APIs in your applications.
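As a rough sketch of what this looks like with MSAL.js (assuming a version of `@azure/msal-browser` that supports `clientCapabilities`; the client ID and scope are placeholders), the app declares the `CP1` capability so Azure AD knows it can handle CAE claims challenges, and replays any claims challenge it receives when renewing a token:

```javascript
import { PublicClientApplication } from "@azure/msal-browser";

// Declaring the CP1 client capability tells Azure AD this app can handle
// Continuous Access Evaluation claims challenges.
const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "Enter_the_Application_Id_Here",
    clientCapabilities: ["CP1"],
  },
});

// When a CAE-protected API rejects a token (a 401 whose WWW-Authenticate
// header carries a claims challenge), pass the decoded claims back to MSAL
// to get a fresh token. Extracting the claims from the header is not shown.
async function renewWithClaims(account, claimsChallenge) {
  return msalInstance.acquireTokenSilent({
    account,
    scopes: ["User.Read"], // illustrative scope
    claims: claimsChallenge,
  });
}
```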
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-create-service-principal-portal.md
There are two types of authentication available for service principals: password
### Option 1: Upload a certificate
-You can use an existing certificate if you have one. Optionally, you can create a self-signed certificate for *testing purposes only*. To create a self-signed certificate, open PowerShell and run [New-SelfSignedCertificate](/powershell/module/pkiclient/new-selfsignedcertificate) with the following parameters to create the cert in the user certificate store on your computer:
+You can use an existing certificate if you have one. Optionally, you can create a self-signed certificate for *testing purposes only*. To create a self-signed certificate, open PowerShell and run [New-SelfSignedCertificate](/powershell/module/pki/new-selfsignedcertificate) with the following parameters to create the cert in the user certificate store on your computer:
```powershell
$cert=New-SelfSignedCertificate -Subject "CN=DaemonConsoleCert" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature
```
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
-# Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow with PKCE
+# Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow with PKCE
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
See [How the sample works](#how-the-sample-works) for an illustration.
This quickstart uses MSAL.js v2 with the authorization code flow. For a similar
> [!div renderon="docs"] > #### Step 3: Configure your JavaScript app >
-> In the *app* folder, open the *authConfig.js* file and update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
+> In the *app* folder, open the *authConfig.js* file, and then update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
> > ```javascript
-> // Config object to be passed to Msal on creation
+> // Config object to be passed to MSAL on creation
> const msalConfig = {
>   auth: {
>     clientId: "Enter_the_Application_Id_Here",
This quickstart uses MSAL.js v2 with the authorization code flow. For a similar
> [!div renderon="docs"] >
-> Modify the values in the `msalConfig` section as described here:
+> Modify the values in the `msalConfig` section:
> > - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered. > > To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
-> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
-> - `Enter_the_Tenant_info_here` is set to one of the following:
+> - `Enter_the_Cloud_Instance_Id_Here` is the Azure cloud instance. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
+> - `Enter_the_Tenant_info_here` is one of the following:
> - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`. > > To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
This quickstart uses MSAL.js v2 with the authorization code flow. For a similar
> [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run
+>
> We have configured your project with values of your app's properties. > [!div renderon="docs"] >
-> Then, still in the same folder, edit the *graphConfig.js* file and update the `graphMeEndpoint` and `graphMailEndpoint` values in the `apiConfig` object.
+> Next, open the *graphConfig.js* file to update the `graphMeEndpoint` and `graphMailEndpoint` values in the `apiConfig` object.
> > ```javascript > // Add here the endpoints for MS Graph API services you would like to use.
This quickstart uses MSAL.js v2 with the authorization code flow. For a similar
> > [!div renderon="docs"] >
-> `Enter_the_Graph_Endpoint_Here` is the endpoint that API calls will be made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information about Microsoft Graph on national clouds, see [National cloud deployment](/graph/deployments).
+> `Enter_the_Graph_Endpoint_Here` is the endpoint that API calls are made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information about Microsoft Graph on national clouds, see [National cloud deployment](/graph/deployments).
>
-> The `graphMeEndpoint` and `graphMailEndpoint` values in the *graphConfig.js* file should be similar to the following if you're using the main (global) Microsoft Graph API service:
+> If you're using the main (global) Microsoft Graph API service, the `graphMeEndpoint` and `graphMailEndpoint` values in the *graphConfig.js* file should be similar to the following:
> ```javascript
> graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
This quickstart uses MSAL.js v2 with the authorization code flow. For a similar
> > #### Step 4: Run the project
-Run the project with a web server by using Node.js:
+Run the project with a web server by using Node.js.
1. To start the server, run the following commands from within the project directory:

   ```console
   npm install
   npm start
   ```
-1. Browse to `http://localhost:3000/`.
+
+1. Go to `http://localhost:3000/`.
1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information should be displayed on the page.
+ The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
## More information
Run the project with a web server by using Node.js:
![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-### msal.js
+### MSAL.js
The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by Microsoft identity platform. The sample's *index.html* file contains a reference to the library.
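As a minimal sketch of how an app might use the library once it's loaded (assuming the CDN bundle that exposes the `msal` global and an illustrative `User.Read` scope; this is not the sample's exact code):

```javascript
// Initialize MSAL.js v2 with the configuration object from authConfig.js,
// then sign the user in with a popup, requesting a Microsoft Graph scope.
const msalInstance = new msal.PublicClientApplication(msalConfig);

msalInstance
  .loginPopup({ scopes: ["User.Read"] })
  .then((result) => console.log(`Signed in as ${result.account.username}`))
  .catch((error) => console.error(error));
```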
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples show how to configure your application to accept sign-ins
| Platform | Description | Link | | -- | | -- |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | Multi-tenant SPA calls Graph API |[ms-identity-javascript-angular-spa-aspnet-webapi-multitenant](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter1) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | Multi-tenant SPA calls multi-tenant custom Web API |[ms-identity-javascript-angular-spa-aspnet-webapi-multitenant](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter2) |
+| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | Multi-tenant SPA calls multi-tenant custom web API |[ms-identity-javascript-angular-spa-aspnet-webapi-multitenant](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnet-webapi-multitenant/tree/master/Chapter2) |
| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png) [.NET Core (MSAL.NET)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ASP.NET Core MVC web application calls Graph API |[active-directory-aspnetcore-webapp-openidconnect-v2](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-3-Multi-Tenant) | | ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png) [.NET Core (MSAL.NET)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ASP.NET Core MVC web application calls ASP.NET Core Web API |[active-directory-aspnetcore-webapp-openidconnect-v2](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-3-AnyOrg) |
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
> [!TIP] > Try executing this request in Postman! (Don't forget to replace the `code`)
-> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://app.getpostman.com/run-collection/f77994d794bab767596d)
+> [![Try running this request in Postman](./media/v2-oauth2-auth-code-flow/runInPostman.png)](https://www.getpostman.com/collections/dba7e9c2e0870702dfc6)
| Parameter | Required/optional | Description | ||-|-|
active-directory Concept Identity Protection B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/concept-identity-protection-b2b.md
Title: Identity Protection and B2B users - Azure Active Directory
-description: Using Identity Protection with B2B users how it works and known limitations
+description: Using Identity Protection with B2B users
Previously updated : 10/18/2019 Last updated : 04/19/2021
# Identity Protection and B2B users
-With Azure AD B2B collaboration, organizations can enforce risk-based policies for B2B users using Identity Protection. These policies be configured in two ways:
+Identity Protection detects compromised credentials for Azure AD users. If your credential is detected as compromised, it means that someone else may have your password and be using it illegitimately. To prevent further risk to your account, it is important to securely reset your password so that the bad actor can no longer use your compromised password. Identity Protection marks accounts that may be compromised as "at risk."
-- Administrators can configure the built-in Identity Protection risk-based policies, that apply to all apps, that include guest users.-- Administrators can configure their Conditional Access policies, using sign-in risk as a condition, that includes guest users.
+You can use your organizational credentials to sign in to another organization as a guest; this process is referred to as B2B authentication. Organizations can configure policies to block users from signing in if their credentials are at risk. If your account is at risk and you are blocked from signing in to another organization as a guest, you may be able to self-remediate your account using the steps below. If your organization has not enabled self-service password reset, your administrator will need to manually remediate your account.
+
+## How to unblock your account
+
+If you are attempting to sign in to another organization as a guest and are blocked due to risk, you will see the following block message: "Your account is blocked. We've detected suspicious activity on your account."
+
+![Guest account blocked, contact your organization's administrator](./media/concept-identity-protection-b2b/risky-guest-user-blocked.png)
+
+If your organization enables it, you can use self-service password reset to unblock your account and get your credentials back to a safe state.
+1. Go to the [Password reset portal](https://passwordreset.microsoftonline.com/) and initiate the password reset. If self-service password reset is not enabled for your account and you cannot proceed, reach out to your IT administrator with the information [below](#how-to-remediate-a-users-risk-as-an-administrator).
+2. If self-service password reset is enabled for your account, you will be prompted to verify your identity using security methods prior to changing your password. For assistance, see the [Reset your work or school password](../user-help/active-directory-passwords-update-your-own-password.md) article.
+3. Once you have successfully and securely reset your password, your user risk will be remediated. You can now try again to sign in as a guest user.
+
+If after resetting your password you are still blocked as a guest due to risk, reach out to your organization's IT administrator.
+
+## How to remediate a user's risk as an administrator
+
+Identity Protection automatically detects risky users for Azure AD tenants. If you have not previously checked the Identity Protection reports, there may be a large number of users with risk. Since resource tenants can apply user risk policies to guest users, your users can be blocked due to risk even if they were previously unaware of their risky state. If your user reports they have been blocked as a guest user in another tenant due to risk, it is important to remediate the user to protect their account and enable collaboration.
-## How is risk evaluated for B2B collaboration users
+### Reset the user's password
-The user risk for B2B collaboration users is evaluated at their home directory. The real-time sign-in risk for these users is evaluated at the resource directory when they try to access the resource.
+From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu, search for the impacted user using the **User** filter. Select the impacted user in the report, and then select **Reset password** in the top toolbar. The user will be assigned a temporary password that must be changed on the next sign-in. This process will remediate their user risk and bring their credentials back to a safe state.
+
+### Manually dismiss user's risk
+
+If password reset is not an option for you from the Azure AD portal, you can choose to manually dismiss user risk. This process will cause the user to no longer be at risk, but does not have any impact on the existing password. It is important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state.
+
+To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the **User** filter and select the user. Then select the **Dismiss user risk** option in the top toolbar. This action may take a few minutes to complete and update the user risk state in the report.
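For admins who prefer scripting over the portal, here is a hedged sketch of the same dismissal through the Microsoft Graph risky users API; the token acquisition and the permission named in the comment are assumptions for the example:

```javascript
// Sketch: dismiss risk for one or more users via Microsoft Graph.
// Assumes `accessToken` carries a permission such as
// IdentityRiskyUser.ReadWrite.All, and `userIds` is an array of user object IDs.
async function dismissUserRisk(accessToken, userIds) {
  const response = await fetch(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ userIds }),
    }
  );
  return response.ok; // the service returns 204 No Content on success
}
```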
+
+To learn more about Identity Protection, see [What is Identity Protection](overview-identity-protection.md).
+
+## How does Identity Protection work for B2B users?
+
+The user risk for B2B collaboration users is evaluated at their home directory. The real-time sign-in risk for these users is evaluated at the resource directory when they try to access the resource. With Azure AD B2B collaboration, organizations can enforce risk-based policies for B2B users using Identity Protection. These policies can be configured in two ways:
+
+- Administrators can configure the built-in Identity Protection risk-based policies that apply to all apps and include guest users.
+- Administrators can configure their Conditional Access policies, using sign-in risk as a condition, that include guest users.
## Limitations of Identity Protection for B2B collaboration users
The risk evaluation and remediation for B2B users occurs in their home directory
### What do I do if a B2B collaboration user was blocked due to a risk-based policy in my organization?
-If a risky B2B user in your directory is blocked by your risk-based policy, the user will need to remediate that risk in their home directory. Users can remediate their risk by performing a secure password reset in their home directory. If they do not have self-service password reset enabled in their home directory, they will need to contact their own organization's IT Staff to have an administrator manually dismiss their risk or reset their password.
+If a risky B2B user in your directory is blocked by your risk-based policy, the user will need to remediate that risk in their home directory. Users can remediate their risk by performing a secure password reset in their home directory [as outlined above](#how-to-unblock-your-account). If they do not have self-service password reset enabled in their home directory, they will need to contact their own organization's IT Staff to have an administrator manually dismiss their risk or reset their password.
### How do I prevent B2B collaboration users from being impacted by risk-based policies?
-Excluding B2B users from your organization's risk-based Conditional Access policies will prevent B2B users from being impacted or blocked by their risk evaluation. To exclude these B2B users, create a group in Azure AD that contains all of your organization's guest users. Then, add this group as an exclusion for your built-in Identity Protection user risk and sign-in risk policies, as well as any Conditional Access policies that use sign-in risk as a condition.
+Excluding B2B users from your organization's risk-based Conditional Access policies will prevent B2B users from being impacted or blocked by their risk evaluation. To exclude these B2B users, create a group in Azure AD that contains all of your organization's guest users. Then, add this group as an exclusion for your built-in Identity Protection user risk and sign-in risk policies, and any Conditional Access policies that use sign-in risk as a condition.
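One way to build that exclusion group without maintaining it by hand is a dynamic group whose membership rule matches all guests. A hedged sketch via Microsoft Graph (the group name, mail nickname, and token handling are assumptions for the example):

```javascript
// Sketch: create a dynamic security group that automatically contains every
// guest user in the tenant, suitable as a policy exclusion target.
// Assumes `accessToken` carries a permission such as Group.ReadWrite.All.
async function createGuestExclusionGroup(accessToken) {
  const response = await fetch("https://graph.microsoft.com/v1.0/groups", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      displayName: "All guest users (policy exclusion)", // assumed name
      mailEnabled: false,
      mailNickname: "guest-policy-exclusion",
      securityEnabled: true,
      groupTypes: ["DynamicMembership"],
      membershipRule: 'user.userType -eq "Guest"',
      membershipRuleProcessingState: "On",
    }),
  });
  return response.json(); // the created group resource
}
```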
## Next steps See the following articles on Azure AD B2B collaboration: -- [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md)
+- [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md)
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Refer to the following document to reconfigure a managed identity if you have mo
Refer to the following documents to use managed identity with [Azure Automation](../../automation/automation-intro.md):
-* [Automation account authentication overview - Managed identities](../../automation/automation-security-overview.md#managed-identities)
+* [Automation account authentication overview - Managed identities](../../automation/automation-security-overview.md#managed-identities-preview)
* [Enable and use managed identity for Automation](https://docs.microsoft.com/azure/automation/enable-managed-identity-for-automation) ### Azure Blueprints
active-directory Concept Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-audit-logs.md
description: Introduction to the audit activity reports in the Azure Active Dire
documentationcenter: '' -+ editor: '' ms.assetid: a1f93126-77d1-4345-ab7d-561066041161
na Previously updated : 09/17/2020 Last updated : 04/22/2021
You can also access the Microsoft 365 activity logs programmatically by using th
- [Azure AD audit activity reference](reference-audit-activities.md) - [Azure AD reports retention reference](reference-reports-data-retention.md)-- [Azure AD log latencies reference](reference-reports-latencies.md)
+- [Azure AD log latencies reference](reference-reports-latencies.md)
+- [Unknown actors in audit report](https://docs.microsoft.com/troubleshoot/azure/active-directory/unknown-actors-in-audit-reports)
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-sign-ins.md
na Previously updated : 04/19/2021 Last updated : 04/22/2021
You can also access the Microsoft 365 activity logs programmatically by using th
* [Sign-in activity report error codes](reference-sign-ins-error-codes.md) * [Azure AD data retention policies](reference-reports-data-retention.md)
-* [Azure AD report latencies](reference-reports-latencies.md)
+* [Azure AD report latencies](reference-reports-latencies.md)
+* [First party Microsoft applications in sign-ins report](https://docs.microsoft.com/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in#application-ids-for-commonly-used-microsoft-applications)
active-directory Andfrankly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/andfrankly-tutorial.md
Previously updated : 01/17/2019 Last updated : 04/16/2021 # Tutorial: Azure Active Directory integration with &frankly
-In this tutorial, you learn how to integrate &frankly with Azure Active Directory (Azure AD).
-Integrating &frankly with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate &frankly with Azure Active Directory (Azure AD). When you integrate &frankly with Azure AD, you can:
-* You can control in Azure AD who has access to &frankly.
-* You can enable your users to be automatically signed-in to &frankly (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to &frankly.
+* Enable your users to be automatically signed-in to &frankly with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with &frankly, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* &frankly single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* &frankly single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* &frankly supports **SP and IDP** initiated SSO
+* &frankly supports **SP and IDP** initiated SSO.
-## Adding &frankly from the gallery
+## Add &frankly from the gallery
To configure the integration of &frankly into Azure AD, you need to add &frankly from the gallery to your list of managed SaaS apps.
-**To add &frankly from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **&frankly**, select **&frankly** from result panel then click **Add** button to add the application.
-
- ![&frankly in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with &frankly based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in &frankly needs to be established.
-
-To configure and test Azure AD single sign-on with &frankly, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **&frankly** in the search box.
+1. Select **&frankly** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure &frankly Single Sign-On](#configure-frankly-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create &frankly test user](#create-frankly-test-user)** - to have a counterpart of Britta Simon in &frankly that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for &frankly
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with &frankly using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in &frankly.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with &frankly, perform the following steps:
-To configure Azure AD single sign-on with &frankly, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure &frankly SSO](#configure-frankly-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create &frankly test user](#create-frankly-test-user)** - to have a counterpart of B.Simon in &frankly that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **&frankly** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **&frankly** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. In the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://andfrankly.com/saml/simplesaml/www/module.php/saml/sp/metadata.php/<tenant id>`
To configure Azure AD single sign-on with &frankly, perform the following steps:
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://andfrankly.com/saml/okta/?saml_sso=<tenant id>`
To configure Azure AD single sign-on with &frankly, perform the following steps:
![The Certificate download link](common/metadataxml.png) -
-### Configure &frankly single sign-on
-
-To enable single sign-on in &frankly:
-
-1. Log in to &frankly. Go to **Account** > **User Management**.
-1. Change the authentication mechanism from the default to **Enterprise Sign-on (SAML)**.
-1. Upload the **Federation Metadata XML** that you downloaded in step 6 in the preceding section.
-1. Select **Save**.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter a username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
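
If you prefer scripting over the portal, the same test user can be created through Microsoft Graph. This is a hedged sketch, not a step from the tutorial; the domain and password values are placeholders, and a token with `User.ReadWrite.All` is assumed:

```python
import os
import requests

token = os.environ["GRAPH_TOKEN"]  # assumption: token acquired elsewhere
user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "BSimon",
    "userPrincipalName": "B.Simon@contoso.com",  # replace with your own domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "Repl@ceMe-123",             # placeholder password
    },
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token}"},
    json=user,
)
resp.raise_for_status()
print("Created user with object ID:", resp.json()["id"])
```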
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to &frankly.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **&frankly**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to &frankly.
-2. In the applications list, select **&frankly**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **&frankly**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The &frankly link in the Applications list](common/all-applications.png)
+## Configure &frankly SSO
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+To enable single sign-on in &frankly:
- ![The Add Assignment pane](common/add-assign-user.png)
+1. Log in to &frankly. Go to **Account** > **User Management**.
+1. Change the authentication mechanism from the default to **Enterprise Sign-on (SAML)**.
+1. Upload the **Federation Metadata XML** that you downloaded in step 6 in the preceding section.
+1. Select **Save**.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create &frankly test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called B.Simon in &frankly. Work with the [&frankly support team](mailto:help@andfrankly.com) to add the users to the &frankly platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create &frankly test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in &frankly. Work with [&frankly support team](mailto:help@andfrankly.com) to add the users in the &frankly platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect you to the &frankly Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the &frankly Sign-on URL directly and initiate the login flow from there.
-When you click the &frankly tile in the Access Panel, you should be automatically signed in to the &frankly for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the &frankly instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the &frankly tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the &frankly instance for which you set up SSO. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure &frankly, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Browserstack Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/browserstack-single-sign-on-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure BrowserStack Single Sign-on for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to BrowserStack Single Sign-on.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 39999abc-e4a2-4058-81e0-bf88182f8864
+++
+ na
+ms.devlang: na
+ Last updated : 04/22/2021+++
+# Tutorial: Configure BrowserStack Single Sign-on for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both BrowserStack Single Sign-on and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [BrowserStack Single Sign-on](https://www.browserstack.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in BrowserStack Single Sign-on
+> * Remove users in BrowserStack Single Sign-on when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and BrowserStack Single Sign-on
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/browserstack-single-sign-on-tutorial) to BrowserStack Single Sign-on (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in BrowserStack with **Owner** permissions.
+* An [Enterprise plan](https://www.browserstack.com/pricing) with BrowserStack.
+* [Single Sign-on](https://www.browserstack.com/docs/enterprise/single-sign-on/azure-ad) integration with BrowserStack (mandatory).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and BrowserStack Single Sign-on](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure BrowserStack Single Sign-on to support provisioning with Azure AD
+
+1. Log in to [BrowserStack](https://www.browserstack.com/users/sign_in) as a user with **Owner** permissions.
+
+2. Navigate to **Account** -> **Settings & Permissions**. Select the **Security** tab.
+
+3. Under **Auto User Provisioning**, click **Configure**.
+
+ ![Settings](media/browserstack-single-sign-on-provisioning-tutorial/configure.png)
+
+4. Select the user attributes that you want to control via Azure AD and click **Confirm**.
+
+ ![User](media/browserstack-single-sign-on-provisioning-tutorial/attributes.png)
+
+5. Copy the **Tenant URL** and **Secret Token**. These values will be entered in the Tenant URL and Secret Token fields in the Provisioning tab of your BrowserStack Single Sign-on application in the Azure portal. Click **Done**.
+
+ ![Authorization](media/browserstack-single-sign-on-provisioning-tutorial/credential.png)
+
+6. Your provisioning configuration has been saved on BrowserStack. **Enable** user provisioning in BrowserStack only after the provisioning setup in Azure AD is complete, so that inviting new users from the BrowserStack [Account](https://www.browserstack.com/accounts/manage-users) page isn't blocked.
+
+ ![Account](media/browserstack-single-sign-on-provisioning-tutorial/enable.png)
+
+## Step 3. Add BrowserStack Single Sign-on from the Azure AD application gallery
+
+Add BrowserStack Single Sign-on from the Azure AD application gallery to start managing provisioning to BrowserStack Single Sign-on. If you have previously set up BrowserStack Single Sign-on for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users to BrowserStack Single Sign-on, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to BrowserStack Single Sign-on
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in BrowserStack Single Sign-on based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for BrowserStack Single Sign-on in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **BrowserStack Single Sign-on**.
+
+ ![The BrowserStack Single Sign-on link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your BrowserStack Single Sign-on Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to BrowserStack Single Sign-on. If the connection fails, ensure your BrowserStack Single Sign-on account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
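
For intuition, **Test Connection** is roughly equivalent to calling the SCIM service with the secret token. The following hedged sketch probes the standard SCIM 2.0 `ServiceProviderConfig` endpoint; whether BrowserStack's SCIM implementation exposes this exact endpoint is an assumption, and both environment variables are placeholders for the values copied in step 2:

```python
import os
import requests

tenant_url = os.environ["BSTACK_SCIM_TENANT_URL"]  # assumption: the Tenant URL from step 2
secret = os.environ["BSTACK_SCIM_SECRET_TOKEN"]    # assumption: the Secret Token from step 2

resp = requests.get(
    f"{tenant_url.rstrip('/')}/ServiceProviderConfig",
    headers={"Authorization": f"Bearer {secret}"},
)
# 200 suggests the URL and token are valid; 401 points to a bad token
print(resp.status_code)
```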
+
+6. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to BrowserStack Single Sign-on**.
+
+9. Review the user attributes that are synchronized from Azure AD to BrowserStack Single Sign-on in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in BrowserStack Single Sign-on for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the BrowserStack Single Sign-on API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |name.givenName|String||
+ |name.familyName|String||
+ |urn:ietf:params:scim:schemas:extension:Bstack:2.0:User:bstack_role|String||
+ |urn:ietf:params:scim:schemas:extension:Bstack:2.0:User:bstack_team|String||
+ |urn:ietf:params:scim:schemas:extension:Bstack:2.0:User:bstack_product|String||
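
To make the mapping concrete, the following is a hedged illustration of the SCIM user payload shape implied by this table; the role, team, and product values are invented, and BrowserStack's accepted values may differ:

```python
import json

BSTACK_EXT = "urn:ietf:params:scim:schemas:extension:Bstack:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", BSTACK_EXT],
    "userName": "b.simon@contoso.com",
    "name": {"givenName": "B", "familyName": "Simon"},
    # per the connector limitations below, the work email must match userName
    "emails": [{"type": "work", "value": "b.simon@contoso.com", "primary": True}],
    BSTACK_EXT: {
        "bstack_role": "user",     # illustrative value
        "bstack_team": "QA",       # illustrative value
        "bstack_product": "live",  # illustrative value
    },
}
print(json.dumps(user, indent=2))
```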
++
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for BrowserStack Single Sign-on, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users that you would like to provision to BrowserStack Single Sign-on by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+- Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+- Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+- If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
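
The provisioning logs can also be read programmatically. A hedged sketch using the Microsoft Graph provisioning logs API, which at the time of writing is exposed on the beta endpoint and requires `AuditLog.Read.All`; fields are read defensively because the beta schema can change:

```python
import os
import requests

token = os.environ["GRAPH_TOKEN"]
resp = requests.get(
    "https://graph.microsoft.com/beta/auditLogs/provisioning",
    headers={"Authorization": f"Bearer {token}"},
    params={"$top": "10"},
)
resp.raise_for_status()
for entry in resp.json()["value"]:
    # .get() because beta field names may change
    print(entry.get("activityDateTime"), entry.get("provisioningAction"))
```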
+
+## Connector limitations
+
+* BrowserStack Single Sign-on does not support group provisioning.
+* BrowserStack Single Sign-on requires **emails[type eq "work"].value** and **userName** to have the same source value.
+
+## Troubleshooting tips
+
+* Refer to troubleshooting tips [here](https://www.browserstack.com/docs/enterprise/auto-user-provisioning/azure-ad#troubleshooting).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Configuring attribute-mappings in BrowserStack Single Sign-on](https://www.browserstack.com/docs/enterprise/auto-user-provisioning/azure-ad)
+* [Setup and enable auto user provisioning in BrowserStack](https://www.browserstack.com/docs/enterprise/auto-user-provisioning/azure-ad#setup-and-enable-auto-user-provisioning)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Chromeriver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/chromeriver-tutorial.md
Previously updated : 02/14/2019 Last updated : 04/16/2021 # Tutorial: Azure Active Directory integration with Chromeriver
-In this tutorial, you learn how to integrate Chromeriver with Azure Active Directory (Azure AD).
-Integrating Chromeriver with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Chromeriver with Azure Active Directory (Azure AD). When you integrate Chromeriver with Azure AD, you can:
-* You can control in Azure AD who has access to Chromeriver.
-* You can enable your users to be automatically signed-in to Chromeriver (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Chromeriver.
+* Enable your users to be automatically signed-in to Chromeriver with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Chromeriver, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Chromeriver single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Chromeriver single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Chromeriver supports **IDP** initiated SSO
+* Chromeriver supports **IDP** initiated SSO.
-## Adding Chromeriver from the gallery
+## Add Chromeriver from the gallery
To configure the integration of Chromeriver into Azure AD, you need to add Chromeriver from the gallery to your list of managed SaaS apps.
-**To add Chromeriver from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Chromeriver**, select **Chromeriver** from result panel then click **Add** button to add the application.
-
- ![Chromeriver in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Chromeriver** in the search box.
+1. Select **Chromeriver** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Chromeriver based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Chromeriver needs to be established.
+## Configure and test Azure AD SSO for Chromeriver
-To configure and test Azure AD single sign-on with Chromeriver, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Chromeriver using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Chromeriver.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Chromeriver Single Sign-On](#configure-chromeriver-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Chromeriver test user](#create-chromeriver-test-user)** - to have a counterpart of Britta Simon in Chromeriver that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Chromeriver, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Chromeriver SSO](#configure-chromeriver-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Chromeriver test user](#create-chromeriver-test-user)** - to have a counterpart of B.Simon in Chromeriver that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Chromeriver, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Chromeriver** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Chromeriver** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
- ![Chromeriver Domain and URLs single sign-on information](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://<subdomain>.chromeriver.com`
To configure Azure AD single sign-on with Chromeriver, perform the following ste
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Chromeriver Single Sign-On
-
-To configure single sign-on on **Chromeriver** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Chromeriver support team](https://www.chromeriver.com/services/support). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter a username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Chromeriver.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Chromeriver**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Chromeriver.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Chromeriver**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Chromeriver**.
+## Configure Chromeriver SSO
- ![The Chromeriver link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Chromeriver** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Chromeriver support team](https://www.chromeriver.com/services/support). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Chromeriver test user
To enable Azure AD users to log in to Chromeriver, they must be provisioned into
> [!NOTE] > You can use any other Chromeriver user account creation tools or APIs provided by Chromeriver to provision Azure Active Directory user accounts.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Chromeriver tile in the Access Panel, you should be automatically signed in to the Chromeriver for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Chromeriver instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can also use Microsoft My Apps. When you click the Chromeriver tile in My Apps, you should be automatically signed in to the Chromeriver instance for which you set up SSO. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Chromeriver, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Dynatrace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/dynatrace-tutorial.md
Previously updated : 10/22/2019 Last updated : 04/16/2021
In this tutorial, you'll learn how to integrate Dynatrace with Azure Active Dire
* Enable your users to be automatically signed-in to Dynatrace with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Dynatrace supports **SP and IDP** initiated SSO
-* Dynatrace supports **Just In Time** user provisioning
+* Dynatrace supports **SP and IDP** initiated SSO.
+* Dynatrace supports **Just In Time** user provisioning.
> [!NOTE] > The identifier of this application is a fixed string value. Only one instance can be configured in one tenant.
-## Adding Dynatrace from the gallery
+## Add Dynatrace from the gallery
To configure the integration of Dynatrace into Azure AD, you need to add Dynatrace from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications**, and then select **All Applications**. 1. To add a new application, select **New application**. 1. In the **Add from the gallery** section, type **Dynatrace** in the search box. 1. Select **Dynatrace** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Dynatrace
+## Configure and test Azure AD SSO for Dynatrace
Configure and test Azure AD SSO with Dynatrace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Dynatrace. To configure and test Azure AD SSO with Dynatrace, complete the following building blocks: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Dynatrace SSO](#configure-dynatrace-sso)** - to configure the single sign-on settings on application side.
- * **[Create Dynatrace test user](#create-dynatrace-test-user)** - to have a counterpart of B.Simon in Dynatrace that is linked to the Azure AD representation of user.
+ 1. **[Create Dynatrace test user](#create-dynatrace-test-user)** - to have a counterpart of B.Simon in Dynatrace that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Dynatrace** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Dynatrace** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and complete the following step to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://sso.dynatrace.com/` 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML**. Select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Dynatrace**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog box, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog box, click the **Assign** button.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
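
As a scripted alternative to these portal steps, the same assignment can be made with a Microsoft Graph `appRoleAssignment`. This is a hedged sketch; all three IDs below are placeholders you would look up first, and the all-zero GUID denotes the Default Access role:

```python
import os
import requests

token = os.environ["GRAPH_TOKEN"]
user_id = "<user-object-id>"                      # B.Simon's object ID
sp_id = "<service-principal-object-id>"           # the Dynatrace enterprise app
role_id = "00000000-0000-0000-0000-000000000000"  # Default Access role

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{user_id}/appRoleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json={"principalId": user_id, "resourceId": sp_id, "appRoleId": role_id},
)
resp.raise_for_status()
print("Assignment ID:", resp.json()["id"])
```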
## Configure Dynatrace SSO
In this section, a user called B.Simon is created in Dynatrace. Dynatrace suppor
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Dynatrace Sign-on URL, where you can initiate the login flow.
-When you click the Dynatrace tile in the Access Panel, you should be automatically signed in to the Dynatrace, for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Dynatrace Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Dynatrace instance for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Dynatrace tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Dynatrace instance for which you set up SSO. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Dynatrace with Azure AD](https://aad.portal.azure.com/)
+Once you configure Dynatrace, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Logmein Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logmein-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory integration with LogMeIn | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and LogMeIn.
++++++++ Last updated : 04/14/2021++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with LogMeIn
+
+In this tutorial, you'll learn how to integrate LogMeIn with Azure Active Directory (Azure AD). When you integrate LogMeIn with Azure AD, you can:
+
+* Control in Azure AD who has access to LogMeIn.
+* Enable your users to be automatically signed-in to LogMeIn with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* LogMeIn single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* LogMeIn supports **SP and IDP** initiated SSO.
+
+## Add LogMeIn from the gallery
+
+To configure the integration of LogMeIn into Azure AD, you need to add LogMeIn from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **LogMeIn** in the search box.
+1. Select **LogMeIn** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for LogMeIn
+
+Configure and test Azure AD SSO with LogMeIn using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LogMeIn.
+
+To configure and test Azure AD SSO with LogMeIn, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure LogMeIn SSO](#configure-logmein-sso)** - to configure the single sign-on settings on application side.
+ * **[Create LogMeIn test user](#create-logmein-test-user)** - to have a counterpart of B.Simon in LogMeIn that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **LogMeIn** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, you don't have to perform any steps, because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type the URL:
+ `https://authentication.logmeininc.com/login?service=https%3A%2F%2Fmyaccount.logmeininc.com`
++
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+1. In the **Set up LogMeIn** section, copy the appropriate URL(s), based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
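
To sanity-check the metadata URL you copied, you can fetch it and inspect the entity ID and signing certificate. A minimal sketch, assuming the placeholder below is replaced with your copied **App Federation Metadata Url**:

```python
import urllib.request
import xml.etree.ElementTree as ET

# placeholder: paste the App Federation Metadata Url copied from the portal
metadata_url = "<App-Federation-Metadata-Url>"

root = ET.fromstring(urllib.request.urlopen(metadata_url).read())
ns = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}
print("entityID:", root.attrib.get("entityID"))
cert = root.find(".//ds:X509Certificate", ns)
print("Signing certificate (truncated):", cert.text.strip()[:40], "...")
```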
++
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter a username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to LogMeIn.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **LogMeIn**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure LogMeIn SSO
+
+1. In a different browser window, log in to your LogMeIn website as an administrator.
+
+1. Go to the **Identity Provider** tab and, in the **Metadata url** textbox, paste the **App Federation Metadata Url** that you copied from the Azure portal.
+
+ ![Screenshot for Federation Metadata URL.](./media/logmein-tutorial/configuration.png)
+
+1. Click **Save**.
+
+### Create LogMeIn test user
+
+1. In a different browser window, log in to your LogMeIn website as an administrator.
+
+1. Go to the **Users** tab and click **Add a user**.
+
+ ![Screenshot for Add a user button.](./media/logmein-tutorial/add-user.png)
+
+1. Fill in the required fields on the following page, and click **Save**.
+
+ ![Screenshot for user fields.](./media/logmein-tutorial/create-user.png)
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the LogMeIn Sign-on URL, where you can initiate the login flow.
+
+* Go to the LogMeIn Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the LogMeIn instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the LogMeIn tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the LogMeIn instance for which you set up SSO. For more information, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure LogMeIn, you can enforce session controls, which protect against exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-aad).
active-directory Qumucloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/qumucloud-tutorial.md
Previously updated : 04/03/2019 Last updated : 04/16/2021 # Tutorial: Azure Active Directory integration with Qumu Cloud
-In this tutorial, you learn how to integrate Qumu Cloud with Azure Active Directory (Azure AD).
-Integrating Qumu Cloud with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Qumu Cloud with Azure Active Directory (Azure AD). When you integrate Qumu Cloud with Azure AD, you can:
-* You can control in Azure AD who has access to Qumu Cloud.
-* You can enable your users to be automatically signed-in to Qumu Cloud (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Qumu Cloud.
+* Enable your users to be automatically signed-in to Qumu Cloud with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Qumu Cloud, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Qumu Cloud single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Qumu Cloud single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Qumu Cloud supports **SP** and **IDP** initiated SSO
+* Qumu Cloud supports **SP** and **IDP** initiated SSO.
-* Qumu Cloud supports **Just In Time** user provisioning
+* Qumu Cloud supports **Just In Time** user provisioning.
-## Adding Qumu Cloud from the gallery
+## Add Qumu Cloud from the gallery
To configure the integration of Qumu Cloud into Azure AD, you need to add Qumu Cloud from the gallery to your list of managed SaaS apps.
-**To add Qumu Cloud from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Qumu Cloud**, select **Qumu Cloud** from result panel then click **Add** button to add the application.
-
- ![Qumu Cloud in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Qumu Cloud based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Qumu Cloud needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Qumu Cloud** in the search box.
+1. Select **Qumu Cloud** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Qumu Cloud, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Qumu Cloud
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Qumu Cloud Single Sign-On](#configure-qumu-cloud-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Qumu Cloud test user](#create-qumu-cloud-test-user)** - to have a counterpart of Britta Simon in Qumu Cloud that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Qumu Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Qumu Cloud.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Qumu Cloud, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Qumu Cloud SSO](#configure-qumu-cloud-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Qumu Cloud test user](#create-qumu-cloud-test-user)** - to have a counterpart of B.Simon in Qumu Cloud that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Qumu Cloud, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Qumu Cloud** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Qumu Cloud** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://<subdomain>.qumucloud.com/saml/SSO`
To configure Azure AD single sign-on with Qumu Cloud, perform the following step
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<subdomain>.qumucloud.com`
To configure Azure AD single sign-on with Qumu Cloud, perform the following step
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Qumu Cloud Single Sign-On
-
-To configure single sign-on on **Qumu Cloud** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Qumu Cloud support team](mailto:support@qumu.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Qumu Cloud.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Qumu Cloud**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Qumu Cloud.
-2. In the applications list, select **Qumu Cloud**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Qumu Cloud**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The Qumu Cloud link in the Applications list](common/all-applications.png)
+## Configure Qumu Cloud SSO
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Qumu Cloud** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Qumu Cloud support team](mailto:support@qumu.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Qumu Cloud test user
In this section, a user called Britta Simon is created in Qumu Cloud. Qumu Cloud
>[!Note]
>If you need to create a user manually, contact [Qumu Cloud Client support team](mailto:support@qumu.com).
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect you to the Qumu Cloud sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Qumu Cloud Sign-on URL directly and initiate the login flow from there.
-When you click the Qumu Cloud tile in the Access Panel, you should be automatically signed in to the Qumu Cloud for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Qumu Cloud instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Qumu Cloud tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Qumu Cloud instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Qumu Cloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
Take note of the two properties listed below:
## Create a modified rules and display file
-In this section, we use the rules and display files from the Sample issuer app and modify them slightly to create your tenant's first verifiable credential.
+In this section, we use the rules and display files from the [Sample issuer app](https://github.com/Azure-Samples/active-directory-verifiable-credentials/) and modify them slightly to create your tenant's first verifiable credential.
1. Copy both the rules and display json files to a temporary folder and rename them **MyFirstVC-display.json** and **MyFirstVC-rules.json** respectively. You can find both files under **issuer\issuer_config**
Let's make a few modifications so this verifiable credential looks visibly differ
"issuedBy": "Your Issuer Name", "backgroundColor": "#ffffff", "textColor": "#000000",
+ }
```

Save these changes.

## Create a storage account

Before creating our first verifiable credential, we need to create a Blob Storage container that can hold our configuration and rules files.
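One way to create the container is with the Azure CLI; this is a hedged sketch only (the account, group, container, and region names are placeholders, and the portal works just as well):

```powershell
# Create a storage account and a blob container for the rules and display files
# (authentication flags omitted for brevity; names below are placeholders)
az storage account create --name <storageaccount> --resource-group <resource-group> --location <region>
az storage container create --name <container> --account-name <storageaccount>
```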
Now we make modifications to the sample app's issuer code to update it with your
node app.js
```
-6. Using a different command prompt run ngrok to set up a URL on 8081
+6. Using a different command prompt, run ngrok to set up a URL on port 8081. You can install ngrok globally using the [ngrok npm package](https://www.npmjs.com/package/ngrok/); an install sketch follows.
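A global install via the npm package might look like this (assuming Node.js and npm are already available):

```terminal
npm install -g ngrok
```

Then start the tunnel: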
```terminal
ngrok http 8081
```
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Here's a PowerShell script that completes these steps:
```powershell
    # Check for marker file indicating that config has already been done
-   if(Test-Path "$LOCAL_EXPANDED\tomcat\config_done_marker"){
+   if(Test-Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker"){
        return 0
    }
    # Delete previous Tomcat directory if it exists
    # In case previous config could not be completed or a new config should be forcefully installed
-   if(Test-Path "$LOCAL_EXPANDED\tomcat"){
-       Remove-Item "$LOCAL_EXPANDED\tomcat" --recurse
+   if(Test-Path "$Env:LOCAL_EXPANDED\tomcat"){
+       Remove-Item "$Env:LOCAL_EXPANDED\tomcat" -Recurse
    }
    # Copy Tomcat to local
    # Using the environment variable $AZURE_TOMCAT90_HOME uses the 'default' version of Tomcat
-   Copy-Item -Path "$AZURE_TOMCAT90_HOME\*" -Destination "$LOCAL_EXPANDED\tomcat" -Recurse
+   Copy-Item -Path "$Env:AZURE_TOMCAT90_HOME\*" -Destination "$Env:LOCAL_EXPANDED\tomcat" -Recurse
    # Perform the required customization of Tomcat
    {... customization ...}
    # Mark that the operation was a success
-   New-Item -Path "$LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
+   New-Item -Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
```

##### Transforms
This example transform adds a new connector node to `server.xml`. Note the *Iden
               clientAuth="false" sslProtocol="TLS" />
    </xsl:template>
-</xsl:stylesheet>
+ </xsl:stylesheet>
```

###### Function for XSL transform
The following example script copies a custom Tomcat to a local folder, performs
```powershell
    # Locations of xml and xsl files
-   $target_xml="$LOCAL_EXPANDED\tomcat\conf\server.xml"
-   $target_xsl="$HOME\site\server.xsl"
+   $target_xml="$Env:LOCAL_EXPANDED\tomcat\conf\server.xml"
+   $target_xsl="$Env:HOME\site\server.xsl"

    # Define the transform function
    # Useful if transforming multiple files
    function TransformXML{
        param ($xml, $xsl, $output)

        if (-not $xml -or -not $xsl -or -not $output) {
            return 0
        }

        Try {
            $xslt_settings = New-Object System.Xml.Xsl.XsltSettings;
            $XmlUrlResolver = New-Object System.Xml.XmlUrlResolver;
            $xslt_settings.EnableScript = 1;

            $xslt = New-Object System.Xml.Xsl.XslCompiledTransform;
            $xslt.Load($xsl,$xslt_settings,$XmlUrlResolver);
            $xslt.Transform($xml, $output);
        }

        Catch {
            $ErrorMessage = $_.Exception.Message
            $FailedItem = $_.Exception.ItemName
-           Write-Host 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
+           echo 'Error'$ErrorMessage':'$FailedItem':' $_.Exception;
            return 0
        }
        return 1
    }

+   $success = TransformXML -xml $target_xml -xsl $target_xsl -output $target_xml

    # Check for marker file indicating that config has already been done
-   if(Test-Path "$LOCAL_EXPANDED\tomcat\config_done_marker"){
+   if(Test-Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker"){
        return 0
    }

    # Delete previous Tomcat directory if it exists
    # In case previous config could not be completed or a new config should be forcefully installed
-   if(Test-Path "$LOCAL_EXPANDED\tomcat"){
-       Remove-Item "$LOCAL_EXPANDED\tomcat" --recurse
+   if(Test-Path "$Env:LOCAL_EXPANDED\tomcat"){
+       Remove-Item "$Env:LOCAL_EXPANDED\tomcat" -Recurse
    }

+   md -Path "$Env:LOCAL_EXPANDED\tomcat"

    # Copy Tomcat to local
    # Using the environment variable $AZURE_TOMCAT90_HOME uses the 'default' version of Tomcat
-   Copy-Item -Path "$AZURE_TOMCAT90_HOME\*" -Destination "$LOCAL_EXPANDED\tomcat" -Recurse
+   Copy-Item -Path "$Env:AZURE_TOMCAT90_HOME\*" "$Env:LOCAL_EXPANDED\tomcat" -Recurse

    # Perform the required customization of Tomcat
    $success = TransformXML -xml $target_xml -xsl $target_xsl -output $target_xml

    # Mark that the operation was a success if successful
    if($success){
-       New-Item -Path "$LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
+       New-Item -Path "$Env:LOCAL_EXPANDED\tomcat\config_done_marker" -ItemType File
    }
```
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-staging-slots.md
After the setting is saved, the specified percentage of clients is randomly rout
After a client is automatically routed to a specific slot, it's "pinned" to that slot for the life of that client session. On the client browser, you can see which slot your session is pinned to by looking at the `x-ms-routing-name` cookie in your HTTP headers. A request that's routed to the "staging" slot has the cookie `x-ms-routing-name=staging`. A request that's routed to the production slot has the cookie `x-ms-routing-name=self`.

> [!NOTE]
- > Next to the Azure portal, you can also use the [`az webapp traffic-routing set`](/cli/azure/webapp/traffic-routing#az_webapp_traffic_routing_set) command in the Azure CLI to set the routing percentages from CI/CD tools like DevOps pipelines or other automation systems.
- >
+ > You can also use the [`az webapp traffic-routing set`](/cli/azure/webapp/traffic-routing#az_webapp_traffic_routing_set) command in the Azure CLI to set the routing percentages from CI/CD tools like GitHub Actions, DevOps pipelines, or other automation systems.
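For example, this hedged Azure CLI sketch routes 10% of production traffic to a slot named `staging` (the app and resource group names are placeholders):

```powershell
# Route 10% of client requests to the staging slot; setting it back to 0
# returns all automatic routing to production.
az webapp traffic-routing set `
    --resource-group <resource-group-name> `
    --name <app-name> `
    --distribution staging=10
```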
### Route production traffic manually
To let users opt in to your beta app, set the same query parameter to the name o
By default, new slots are given a routing rule of `0%`, shown in grey. When you explicitly set this value to `0%` (shown in black text), your users can access the staging slot manually by using the `x-ms-routing-name` query parameter. But they won't be routed to the slot automatically because the routing percentage is set to 0. This is an advanced scenario where you can "hide" your staging slot from the public while allowing internal teams to test changes on the slot.
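For instance, a tester could opt a session into the hidden slot with a direct request; this is a minimal sketch, assuming an app named `<app-name>` whose slot routing rule is explicitly set to `0%`:

```powershell
# Opt this session into the staging slot via the query parameter;
# the response sets the x-ms-routing-name cookie that pins the session.
Invoke-WebRequest -Uri "https://<app-name>.azurewebsites.net/?x-ms-routing-name=staging"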
+> [!NOTE]
+> There is a known limitation affecting Private Endpoints and traffic routing with slots. As of April 2021, automatic and manual request routing between slots will result in a "403 Access Denied". This limitation will be removed in a future release.
+
<a name="Delete"></a>

## Delete a slot
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-ruby.md Binary files differ
app-service Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-cli.md Binary files differ
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-java-spring-cosmosdb.md Binary files differ
application-gateway Rewrite Http Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/rewrite-http-headers.md
- Title: Rewrite HTTP headers with Azure Application Gateway | Microsoft Docs
-description: This article provides an overview of rewriting HTTP headers in Azure Application Gateway
---- Previously updated : 04/27/2020---
-# Rewrite HTTP headers with Application Gateway
--
-HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/ X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers.
-
-Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and back-end pools. And it allows you to add conditions to ensure that the specified headers are rewritten only when certain conditions are met.
-
-Application Gateway also supports several [server variables](#server-variables) that help you store additional information about requests and responses. This makes it easier for you to create powerful rewrite rules.
-
-> [!NOTE]
->
-> The HTTP header rewrite support is only available for the [Standard_V2 and WAF_v2 SKU](application-gateway-autoscaling-zone-redundant.md).
-
-![Rewriting headers](media/rewrite-http-headers/rewrite-headers.png)
-
-## Supported headers
-
-You can rewrite all headers in requests and responses, except for the Host, Connection, and Upgrade headers. You can also use the application gateway to create custom headers and add them to the requests and responses being routed through it.
-
-## Rewrite conditions
-
-You can use rewrite conditions to evaluate the content of HTTP(S) requests and responses and perform a header rewrite only when one or more conditions are met. The application gateway uses these types of variables to evaluate the content of HTTP(S) requests and responses:
--- HTTP headers in the request.-- HTTP headers in the response.-- Application Gateway server variables.-
-You can use a condition to evaluate whether a specified variable is present, whether a specified variable matches a specific value, or whether a specified variable matches a specific pattern. You use the [Perl Compatible Regular Expressions (PCRE) library](https://www.pcre.org/) to set up regular expression pattern matching in the conditions. To learn about regular expression syntax, see the [Perl regular expressions main page](https://perldoc.perl.org/perlre.html).
-
-## Rewrite actions
-
-You use rewrite actions to specify the request and response headers that you want to rewrite and the new value for the headers. You can either create a new header, modify the value of an existing header, or delete an existing header. The value of a new header or an existing header can be set to these types of values:
--- Text.-- Request header. To specify a request header, you need to use the syntax {http_req_*headerName*}.-- Response header. To specify a response header, you need to use the syntax {http_resp_*headerName*}.-- Server variable. To specify a server variable, you need to use the syntax {var_*serverVariable*}.-- A combination of text, a request header, a response header, and a server variable.-
-## Server variables
-
-Application Gateway uses server variables to store useful information about the server, the connection with the client, and the current request on the connection. Examples of information stored include the client's IP address and the web browser type. Server variables change dynamically, for example, when a new page loads or when a form is posted. You can use these variables to evaluate rewrite conditions and rewrite headers. In order to use the value of server variables to rewrite headers, you will need to specify these variables in the syntax {var_*serverVariable*}
-
-Application gateway supports these server variables:
-
-| Variable name | Description |
-| -- | :-- |
-| add_x_forwarded_for_proxy | The X-Forwarded-For client request header field with the `client_ip` variable (see explanation later in this table) appended to it in the format IP1, IP2, IP3, and so on. If the X-Forwarded-For field isn't in the client request header, the `add_x_forwarded_for_proxy` variable is equal to the `$client_ip` variable. This variable is particularly useful when you want to rewrite the X-Forwarded-For header set by Application Gateway so that the header contains only the IP address without the port information. |
-| ciphers_supported | A list of the ciphers supported by the client. |
-| ciphers_used | The string of ciphers used for an established TLS connection. |
-| client_ip | The IP address of the client from which the application gateway received the request. If there's a reverse proxy before the application gateway and the originating client, *client_ip* will return the IP address of the reverse proxy. |
-| client_port | The client port. |
-| client_tcp_rtt | Information about the client TCP connection. Available on systems that support the TCP_INFO socket option. |
-| client_user | When HTTP authentication is used, the user name supplied for authentication. |
-| host | In this order of precedence: the host name from the request line, the host name from the Host request header field, or the server name matching a request. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, host value will be is *contoso.com* |
-| cookie_*name* | The *name* cookie. |
-| http_method | The method used to make the URL request. For example, GET or POST. |
-| http_status | The session status. For example, 200, 400, or 403. |
-| http_version | The request protocol. Usually HTTP/1.0, HTTP/1.1, or HTTP/2.0. |
-| query_string | The list of variable/value pairs that follows the "?" in the requested URL. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, query_string value will be *id=123&title=fabrikam* |
-| received_bytes | The length of the request (including the request line, header, and request body). |
-| request_query | The arguments in the request line. |
-| request_scheme | The request scheme: http or https. |
-| request_uri | The full original request URI (with arguments). Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, request_uri value will be */article.aspx?id=123&title=fabrikam* |
-| sent_bytes | The number of bytes sent to a client. |
-| server_port | The port of the server that accepted a request. |
-| ssl_connection_protocol | The protocol of an established TLS connection. |
-| ssl_enabled | "On" if the connection operates in TLS mode. Otherwise, an empty string. |
-| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: in the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value will be */article.aspx* |
-
-## Rewrite configuration
-
-To configure HTTP header rewrite, you need to complete these steps.
-
-1. Create the objects that are required for HTTP header rewrite:
-
- - **Rewrite action**: Used to specify the request and request header fields that you want to rewrite and the new value for the headers. You can associate one or more rewrite conditions with a rewrite action.
-
- - **Rewrite condition**: An optional configuration. Rewrite conditions evaluate the content of the HTTP(S) requests and responses. The rewrite action will occur if the HTTP(S) request or response matches the rewrite condition.
-
- If you associate more than one condition with an action, the action occurs only when all the conditions are met. In other words, the operation is a logical AND operation.
-
- - **Rewrite rule**: Contains multiple rewrite action / rewrite condition combinations.
-
- - **Rule sequence**: Helps determine the order in which the rewrite rules execute. This configuration is helpful when you have multiple rewrite rules in a rewrite set. A rewrite rule that has a lower rule sequence value runs first. If you assign the same rule sequence to two rewrite rules, the order of execution is non-deterministic.
-
- - **Rewrite set**: Contains multiple rewrite rules that will be associated with a request routing rule.
-
-2. Attach the rewrite set (*rewriteRuleSet*) to a routing rule. The rewrite configuration is attached to the source listener via the routing rule. When you use a basic routing rule, the header rewrite configuration is associated with a source listener and is a global header rewrite. When you use a path-based routing rule, the header rewrite configuration is defined on the URL path map. In that case, it applies only to the specific path area of a site.
- > [!NOTE]
- > URL Rewrite alter the headers; it does not change the URL for the path.
-
-You can create multiple HTTP header rewrite sets and apply each rewrite set to multiple listeners. But you can apply only one rewrite set to a specific listener.
-
-## Common scenarios
-
-Here are some common scenarios for using header rewrite.
-
-### Remove port information from the X-Forwarded-For header
-
-Application Gateway inserts an X-Forwarded-For header into all requests before it forwards the requests to the backend. This header is a comma-separated list of IP ports. There might be scenarios in which the back-end servers only need the headers to contain IP addresses. You can use header rewrite to remove the port information from the X-Forwarded-For header. One way to do this is to set the header to the add_x_forwarded_for_proxy server variable:
-
-![Remove port](media/rewrite-http-headers/remove-port.png)
-
-### Modify a redirection URL
-
-When a back-end application sends a redirection response, you might want to redirect the client to a different URL than the one specified by the back-end application. For example, you might want to do this when an app service is hosted behind an application gateway and requires the client to do a redirection to its relative path. (For example, a redirect from contoso.azurewebsites.net/path1 to contoso.azurewebsites.net/path2.)
-
-Because App Service is a multitenant service, it uses the host header in the request to route the request to the correct endpoint. App services have a default domain name of *.azurewebsites.net (say contoso.azurewebsites.net) that's different from the application gateway's domain name (say contoso.com). Because the original request from the client has the application gateway's domain name (contoso.com) as the hostname, the application gateway changes the hostname to contoso.azurewebsites.net. It makes this change so that the app service can route the request to the correct endpoint.
-
-When the app service sends a redirection response, it uses the same hostname in the location header of its response as the one in the request it receives from the application gateway. So the client will make the request directly to contoso.azurewebsites.net/path2 instead of going through the application gateway (contoso.com/path2). Bypassing the application gateway isn't desirable.
-
-You can resolve this issue by setting the hostname in the location header to the application gateway's domain name.
-
-Here are the steps for replacing the hostname:
-
-1. Create a rewrite rule with a condition that evaluates if the location header in the response contains azurewebsites.net. Enter the pattern `(https?):\/\/.*azurewebsites\.net(.*)$`.
-1. Perform an action to rewrite the location header so that it has the application gateway's hostname. Do this by entering `{http_resp_Location_1}://contoso.com{http_resp_Location_2}` as the header value.
-
-![Modify location header](media/rewrite-http-headers/app-service-redirection.png)
-
-### Implement security HTTP headers to prevent vulnerabilities
-
-You can fix several security vulnerabilities by implementing necessary headers in the application response. These security headers include X-XSS-Protection, Strict-Transport-Security, and Content-Security-Policy. You can use Application Gateway to set these headers for all responses.
-
-![Security header](media/rewrite-http-headers/security-header.png)
-
-### Delete unwanted headers
-
-You might want to remove headers that reveal sensitive information from an HTTP response. For example, you might want to remove information like the back-end server name, operating system, or library details. You can use the application gateway to remove these headers:
-
-![Deleting header](media/rewrite-http-headers/remove-headers.png)
-
-### Check for the presence of a header
-
-You can evaluate an HTTP request or response header for the presence of a header or server variable. This evaluation is useful when you want to perform a header rewrite only when a certain header is present.
-
-![Checking presence of a header](media/rewrite-http-headers/check-presence.png)
-
-## Limitations
--- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you are using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.--- Rewrites are not supported when the application gateway is configured to redirect the requests or to show a custom error page.--- Rewriting the Connection, Upgrade, and Host headers isn't currently supported.--- Header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27). We don't currently support the underscore (\_) special character in Header names.-
-## Next steps
-
-To learn how to rewrite HTTP headers, see:
--- [Rewrite HTTP headers using Azure portal](./rewrite-http-headers-portal.md)-- [Rewrite HTTP headers using Azure PowerShell](add-http-header-rewrite-rule-powershell.md)
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
Title: Enable a managed identity for your Azure Automation account (preview)
description: This article describes how to set up managed identity for Azure Automation accounts. Previously updated : 04/14/2021-- Last updated : 04/20/2021+ # Enable a managed identity for your Azure Automation account (preview)
This topic shows you how to create a managed identity for an Azure Automation ac
- An Azure account and subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Both the managed identity and the target Azure resources that your runbook manages using that identity must be in the same Azure subscription. -- The latest version of Azure Automation account modules. Currently this is 1.6.0. (See [Az.Automation 1.6.0](https://www.powershellgallery.com/packages/Az.Automation/1.6.0) for details about this version.)
+- The latest version of the Az.Accounts module. Currently this is 2.2.8. (See [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/) for details about this version.) A version-check sketch follows this list.
- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant.
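As a quick check, this hedged sketch verifies the installed module and updates it from the PowerShell Gallery (the minimum version shown assumes the 2.2.8 release named above):

```powershell
# Inspect the installed Az.Accounts version, then update it if it's older than 2.2.8
Get-InstalledModule -Name Az.Accounts
Install-Module -Name Az.Accounts -MinimumVersion 2.2.8 -Force
```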
Write-Output $accessToken.access_token
### Sample runbook to access a SQL database without using Azure cmdlets
+Make sure you've enabled an identity before you try this script. See [Enable system-assigned identity](#enable-system-assigned-identity).
+
+For details on provisioning access to an Azure SQL database, see [Provision Azure AD admin (SQL Database)](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-database).
+ ```powershell
+ $queryParameter = "?resource=https://database.windows.net/"
+ $url = $env:IDENTITY_ENDPOINT + $queryParameter
$conn.Close()
### Sample runbook to access a key vault using Azure cmdlets
+Make sure you've enabled an identity before you try this script. See [Enable system-assigned identity](#enable-system-assigned-identity).
+
+For more information, see [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
+ ```powershell
+ Write-Output "Connecting to azure via Connect-AzAccount -Identity"
+ Connect-AzAccount -Identity
try {
```

### Sample Python runbook to get a token
-
+
+Make sure you've enabled an identity before you try this runbook. See [Enable system-assigned identity](#enable-system-assigned-identity).
+ ```python
+ #!/usr/bin/env python3
+ import os
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md Binary files differ
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md Binary files differ
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md Binary files differ
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-blob-trigger.md
Polling works as a hybrid between inspecting logs and running periodic container
### Event Grid trigger
+> [!NOTE]
+> When using Storage Extensions 5.x and higher, the Blob trigger has built-in support for an Event Grid based Blob trigger. For more information, see the [Storage extension 5.x and higher](#storage-extension-5x-and-higher) section below.
+
The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in support for [blob events](../storage/blobs/storage-blob-event-overview.md). Use Event Grid instead of the Blob storage trigger for the following scenarios:

- **Blob-only storage accounts**: [Blob-only storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts) are supported for blob input and output bindings but not for blob triggers.
The [Event Grid trigger](functions-bindings-event-grid.md) also has built-in sup
See the [Image resize with Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md) tutorial for an Event Grid example.
+#### Storage Extension 5.x and higher
+
+When using the preview storage extension, there is built-in support for Event Grid in the Blob trigger, which requires setting the `source` parameter to Event Grid in your existing Blob trigger.
+
+For more information on how to use the Blob Trigger based on Event Grid, refer to the [Event Grid Blob Trigger guide](./functions-event-grid-blob-trigger.md).
+
### Queue storage trigger

Another approach to processing blobs is to write queue messages that correspond to blobs being created or modified and then use a [Queue storage trigger](./functions-bindings-storage-queue.md) to begin processing.
azure-functions Functions Debug Event Grid Trigger Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-debug-event-grid-trigger-local.md
Then, set a breakpoint on the line that begins with `log.LogInformation`.
Next, **press F5** to start a debugging session.
-## Allow Azure to call your local function
-
-To break into a function being debugged on your machine, you must enable a way for Azure to communicate with your local function from the cloud.
-
-The [ngrok](https://ngrok.com/) utility provides a way for Azure to call the function running on your machine. Start *ngrok* using the following command:
-
-```bash
-ngrok http -host-header=localhost 7071
-```
-As the utility is set up, the command window should look similar to the following screenshot:
-
-![Screenshot that shows the Command Prompt after starting the "ngrok" utility.](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-ngrok.png)
-
-Copy the **HTTPS** URL generated when *ngrok* is run. This value is used when configuring the event grid event endpoint.
-
-## Add a storage event
-
-Open the Azure portal and navigate to a storage account and click on the **Events** option.
-
-![Add storage account event](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-add-event.png)
-
-In the *Events* window, click on the **Event Subscription** button. In the *Event Subscription* window, click on the *Endpoint Type* dropdown and select **Web Hook**.
-
-![Select subscription type](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-event-subscription-type.png)
-
-Once the endpoint type is configured, click on **Select an endpoint** to configure the endpoint value.
-
-![Select endpoint type](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-event-subscription-endpoint.png)
-
-The *Subscriber Endpoint* value is made up from three different values. The prefix is the HTTPS URL generated by *ngrok*. The remainder of the URL comes from the URL found in the function code file, with the function name added at the end. Starting with the URL from the function code file, the *ngrok* URL replaces `http://localhost:7071` and the function name replaces `{functionname}`.
-
-The following screenshot shows how the final URL should look:
-
-![Endpoint selection](./media/functions-debug-event-grid-trigger-local/functions-debug-event-grid-trigger-local-event-subscription-endpoint-selection.png)
-
-Once you've entered the appropriate value, click **Confirm Selection**.
-
-> [!IMPORTANT]
-> Every time you start *ngrok*, the HTTPS URL is regenerated and the value changes. Therefore you must create a new Event Subscription each time you expose your function to Azure via *ngrok*.
-
-## Upload a file
-
-Now you can upload a file to your storage account to trigger an Event Grid event for your local function to handle.
-
-Open [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and connect to the your storage account.
--- Expand **Blob Containers** -- Right-click and select **Create Blob Container**.-- Name the container **test**-- Select the *test* container-- Click the **Upload** button-- Click **Upload Files**-- Select a file and upload it to the blob container ## Debug the function
azure-functions Functions Event Grid Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-event-grid-blob-trigger.md
+
+ Title: Azure Functions Event Grid Blob Trigger
+description: Learn to set up and debug with the Event Grid Blob trigger
+++ Last updated : 3/1/2021+++
+# Azure Function Event Grid Blob Trigger
+
+This article demonstrates how to debug and deploy a local Event Grid Blob triggered function that handles events raised by a storage account.
+
+> [!NOTE]
+> The Event Grid Blob trigger is in preview.
+
+## Prerequisites
+
+- Create or use an existing function app
+- Create or use an existing storage account
+- Have version 5.0+ of the [Microsoft.Azure.WebJobs.Extensions.Storage extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0-beta.2) installed
+- Have version 2.1.0+ of the [Event Grid extension](https://docs.microsoft.com/azure/azure-functions/functions-bindings-event-grid) installed
+- Download [ngrok](https://ngrok.com/) to allow Azure to call your local function
+
+## Create a new function
+
+1. Open your function app in Visual Studio Code.
+
+1. **Press F1** to create a new blob trigger function. Make sure to use the connection string for your storage account.
+
+1. The default URL for your Event Grid Blob trigger is:
+
+ ```http
+ http://localhost:7071/runtime/webhooks/blobs?functionName={functionname}
+ ```
+
+ Note your function app's name and that the trigger type is a blob trigger, which is indicated by `blobs` in the URL. You'll need this when setting up endpoints later in this how-to guide (a tunneled endpoint example appears after these steps).
+
+1. Once the function is created, add the Event Grid source parameter.
+
+ # [C#](#tab/csharp)
+ Add **Source = BlobTriggerSource.EventGrid** to the function parameters.
+
+ ```csharp
+ [FunctionName("BlobTriggerCSharp")]
+ public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "connection")]Stream myBlob, string name, ILogger log)
+ {
+ log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
+ }
+ ```
+
+ # [Python](#tab/python)
+ Add **"source": "EventGrid"** to the function.json binding data.
+
+ ```json
+ {
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "name": "myblob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "samples-workitems/{name}",
+ "source": "EventGrid",
+ "connection": "MyStorageAccountConnectionString"
+ }
+ ]
+ }
+ ```
+
+
+1. Set a breakpoint in your function on the line that handles logging.
+
+1. **Press F5** to start a debugging session.
++
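As noted in the steps above, Azure can't call `localhost` directly while you debug. The ngrok HTTPS forwarding address from the prerequisites typically stands in for the `http://localhost:7071` prefix of the default trigger URL; this is a hedged sketch where the ngrok subdomain is a random placeholder and `BlobTriggerCSharp` is the sample function name used above:

```http
https://<random-id>.ngrok.io/runtime/webhooks/blobs?functionName=BlobTriggerCSharp
```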
+## Debug the function
+Once the Blob trigger recognizes that a new file has been uploaded to the storage container, the breakpoint is hit in your local function.
+
+## Deployment
+
+As you deploy the function app to Azure, update the webhook endpoint from your local endpoint to your deployed app endpoint. To update an endpoint, follow the steps in [Add a storage event](#add-a-storage-event) and use the following value for the webhook URL in step 5. The `<BLOB-EXTENSION-KEY>` is the function key for your blob trigger function.
+
+```http
+https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Function1&code=<BLOB-EXTENSION-KEY>
+```
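One way to look up that key is with the Azure CLI; this is a hedged sketch (the app and resource group names are placeholders, and the exact key name the extension registers may vary):

```powershell
# List the function app's host and system keys to locate the blob extension key
az functionapp keys list --name <FUNCTION-APP-NAME> --resource-group <RESOURCE-GROUP>
```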
+
+## Clean up resources
+
+To clean up the resources created in this article, delete the Event Grid subscription you created in this tutorial.
+
+## Next steps
+
+- [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md)
+- [Event Grid trigger for Azure Functions](./functions-bindings-event-grid.md)
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-custom-overview.md
Again, this limit is not for an individual metric. It's for the sum of all suc
**Metrics with a variable in the name** – Do not use a variable as part of the metric name, use a constant instead. Each time the variable changes its value, Azure Monitor will generate a new metric, quickly hitting the limits on the number of metrics. Generally, when the developers want to include a variable in the metric name, they really want to track multiple timeseries within one metric and should use dimensions instead of variable metric names.
-**High cardinality metric dimensions** - Metrics with too many valid values in a dimension (a "high cardinality") are much more likely to hit the 50k limit. In general, you should never use a constantly changing value in a dimension or metric name. Timestamp, for example, should NEVER be a dimension. Server, customer or productid could be used, but only if you have a smaller number of each of those types. As a test, ask yourself if you would ever chart such data on a graph. If you have 10 or maybe even 100 servers, it might be useful to see them all on a graph for comparison. But if you have 1000, the resulting graph would likely be difficult if not impossible to read. Best practice is to keep it to fewer to 100 valid values. Up to 300 is a grey area. If you need to go over this amount, use Azure Monitor custom logs instead.
+**High cardinality metric dimensions** - Metrics with too many valid values in a dimension (a "high cardinality") are much more likely to hit the 50k limit. In general, you should never use a constantly changing value in a dimension. Timestamp, for example, should NEVER be a dimension. Server, customer or productid could be used, but only if you have a smaller number of each of those types. As a test, ask yourself if you would ever chart such data on a graph. If you have 10 or maybe even 100 servers, it might be useful to see them all on a graph for comparison. But if you have 1000, the resulting graph would likely be difficult if not impossible to read. Best practice is to keep it to fewer than 100 valid values. Up to 300 is a grey area. If you need to go over this amount, use Azure Monitor custom logs instead.
If you have a variable in the name or a high cardinality dimension, the following can occur:
- Metrics become unreliable due to throttling
- Metrics Explorer doesn't work
- Alerting and notifications become unpredictable
-- Costs can increase unexpectedably - Microsoft is not charging while the custom metrics with dimensions are in public preview. However, once charges start in the future, you will incur unexpected charges. The plan is to charge for metrics consumption based on the number of time-series monitored and number of API calls made.
+- Costs can increase unexpectedly - Microsoft is not charging while the custom metrics with dimensions are in public preview. However, once charges start in the future, you will incur unexpected charges. The plan is to charge for metrics consumption based on the number of time-series monitored and number of API calls made.
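As an illustration of that guidance, a custom metric tracking per-queue depth would keep the queue name in a dimension rather than in the metric name. This is a hedged sketch of such a submission, assuming the custom metrics REST payload shape; the names `QueueDepth`, `ContosoQueueProcessing`, and `ImagesToResize` are hypothetical:

```json
{
  "time": "2021-04-22T20:00:00Z",
  "data": {
    "baseData": {
      "metric": "QueueDepth",
      "namespace": "ContosoQueueProcessing",
      "dimNames": [ "QueueName" ],
      "series": [
        {
          "dimValues": [ "ImagesToResize" ],
          "min": 3,
          "max": 20,
          "sum": 28,
          "count": 3
        }
      ]
    }
  }
}
```

Because the queue name lives in `dimValues`, every queue shares one metric; only the number of distinct queue names (the dimension's cardinality) needs to stay small.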
## Next steps

Use custom metrics from different
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 04/20/2021 Last updated : 04/22/2021 # FAQs About Azure NetApp Files
No. Azure NetApp Files is not supported by Azure Storage Explorer.
### How do I determine if a directory is approaching the limit size?
-You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
+See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#directory-limit) for the limit and calculation.
+
+<!-- You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files containing non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
Size: 12288 Blocks: 24 IO Block: 65536 directory
File: 'tmp1' Size: 4096 Blocks: 8 IO Block: 65536 directory ```-
+-->
## Data migration and protection FAQs
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na ms.devlang: na Previously updated : 04/20/2021 Last updated : 04/22/2021 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Maximum size of a single volume | 100 TiB | No |
| Maximum size of a single file | 16 TiB | No |
| Maximum size of directory metadata in a single directory | 320 MB | No |
+| Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
| Maximum number of files ([maxfiles](#maxfiles)) per volume | 100 million | Yes |
| Maximum number of export policy rules per volume | 5 | No |
| Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No |
| Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No |
| Number of cross-region replication data protection volumes (destination volumes) | 5 | Yes |
-To see whether a directory is approaching the maximum size limit for directory metadata (320 MB), see [How do I determine if a directory is approaching the limit size](azure-netapp-files-faqs.md#how-do-i-determine-if-a-directory-is-approaching-the-limit-size).
- For more information, see [Capacity management FAQs](azure-netapp-files-faqs.md#capacity-management-faqs).
+## Determine if a directory is approaching the limit size <a name="directory-limit"></a>
+
+You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
+
+For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files containing non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
+
+Examples:
+
+```console
+[makam@cycrh6rtp07 ~]$ stat bin
+File: 'bin'
+Size: 4096 Blocks: 8 IO Block: 65536 directory
+
+[makam@cycrh6rtp07 ~]$ stat tmp
+File: 'tmp'
+Size: 12288 Blocks: 24 IO Block: 65536 directory
+
+[makam@cycrh6rtp07 ~]$ stat tmp1
+File: 'tmp1'
+Size: 4096 Blocks: 8 IO Block: 65536 directory
+```
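Because `stat` reports the allocated block count and each block is 512 bytes, you can script the comparison against the 320-MB limit. The following is a minimal, unofficial sketch of such a check; the directory path is a placeholder, and GNU `stat` on the Linux client is assumed:

```bash
# Unofficial sketch: warn when a directory's metadata exceeds 75% of the 320-MB limit.
# Assumes GNU stat; %b is the number of allocated 512-byte blocks.
dir="/mnt/anf-volume/some-directory"   # placeholder path
blocks=$(stat --format=%b "$dir")
limit=$((320 * 1024 * 1024))           # 320 MB in bytes
if [ $((blocks * 512)) -gt $((limit * 3 / 4)) ]; then
  echo "$dir is approaching the 320-MB directory metadata limit"
fi
```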
+ ## Maxfiles limits <a name="maxfiles"></a> Azure NetApp Files volumes have a limit called *maxfiles*. The maxfiles limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The maxfiles limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The maxfiles limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
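As a rough, unofficial illustration of that rate (the quota value is a placeholder):

```bash
# Unofficial illustration: maxfiles scales at 20 million files per TiB of provisioned quota.
quota_tib=4   # placeholder quota
echo "Estimated maxfiles for a ${quota_tib}-TiB volume: $((quota_tib * 20000000))"   # 80000000
```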
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-introduction.md
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 04/22/2021
Azure NetApp Files volume replication is supported between various [Azure region
* East US 2 and West US 2 * Australia East and Southeast Asia * Germany West Central and UK South
+* Germany West Central and West Europe
## Service-level objectives
Regular Azure NetApp Files storage capacity charge for Month 2 applies to the de
* [Resize a cross-region replication destination volume](azure-netapp-files-resize-capacity-pools-or-volumes.md#resize-a-cross-region-replication-destination-volume) * [Volume replication metrics](azure-netapp-files-metrics.md#replication) * [Delete volume replications or volumes](cross-region-replication-delete.md)
-* [Troubleshoot cross-region replication](troubleshoot-cross-region-replication.md)
+* [Troubleshoot cross-region replication](troubleshoot-cross-region-replication.md)
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/dynamic-change-volume-service-level.md
na ms.devlang: na Previously updated : 01/14/2021 Last updated : 04/22/2021 # Dynamically change the service level of a volume
The capacity pool that you want to move the volume to must already exist. The ca
The feature to move a volume to another capacity pool is currently in preview. If you are using this feature for the first time, you need to register the feature first.
+If you have multiple Azure subscriptions, ensure that you register the feature under the intended subscription by using the [`Set-AzContext`](/powershell/module/az.accounts/set-azcontext) cmdlet, as shown in the example below. <!-- GitHub #74191 -->
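For example (the subscription ID below is a placeholder):

```azurepowershell-interactive
# Select the subscription in which to register the feature.
# Replace the placeholder with your subscription ID or name.
Set-AzContext -Subscription "00000000-0000-0000-0000-000000000000"
```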
+ 1. Register the feature: ```azurepowershell-interactive
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md Binary files differ
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-configure.md Binary files differ
azure-sql Connect Query Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-nodejs.md
Open a command prompt and create a folder named *sqltest*. Open the folder you c
encrypt: true } };
+
+ /*
+ //Use Azure VM Managed Identity to connect to the SQL database
+ const connection = new Connection({
+ server: process.env["db_server"],
+ authentication: {
+ type: 'azure-active-directory-msi-vm',
+ },
+ options: {
+ database: process.env["db_database"],
+ encrypt: true,
+ port: 1433
+ }
+ });
+ //Use Azure App Service Managed Identity to connect to the SQL database
+ const connection = new Connection({
+ server: process.env["db_server"],
+ authentication: {
+ type: 'azure-active-directory-msi-app-service',
+ },
+ options: {
+ database: process.env["db_database"],
+ encrypt: true,
+ port: 1433
+ }
+ });
+
+ */
const connection = new Connection(config);
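As a minimal, hypothetical sketch (not part of the quickstart), the resulting `connection` object can be used with the tedious `Request` class to run a query; the query text is illustrative only:

```javascript
// Hypothetical sketch using the tedious package; the query is illustrative only.
const { Request } = require("tedious");

connection.on("connect", err => {
    if (err) throw err;
    const request = new Request("SELECT 1 AS ok", (err, rowCount) => {
        if (err) throw err;
        console.log(`Rows returned: ${rowCount}`);
        connection.close();
    });
    connection.execSql(request);
});

// On tedious 8.3 and later, connect() must be called explicitly;
// older versions connect automatically when the object is constructed.
connection.connect();
```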
Open a command prompt and create a folder named *sqltest*. Open the folder you c
} ```
+> [!NOTE]
+> For more information about using managed identity for authentication, complete the tutorial to [access data via managed identity](../../app-service/app-service-web-tutorial-connect-msi.md).
+ > [!NOTE] > The code example uses the **AdventureWorksLT** sample database in Azure SQL Database.
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md Binary files differ
azure-sql Service Tier Hyperscale Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale-frequently-asked-questions-faq.md
- Title: Azure SQL Database Hyperscale FAQ
-description: Answers to common questions customers ask about a database in SQL Database in the Hyperscale service tier - commonly called a Hyperscale database.
-------- Previously updated : 02/03/2021-
-# Azure SQL Database Hyperscale FAQ
-
-This article provides answers to frequently asked questions for customers considering a database in the Azure SQL Database Hyperscale service tier, referred to as just Hyperscale in the remainder of this FAQ. This article describes the scenarios that Hyperscale supports and the features that are compatible with Hyperscale.
--- This FAQ is intended for readers who have a brief understanding of the Hyperscale service tier and are looking to have their specific questions and concerns answered.-- This FAQ isn't meant to be a guidebook or answer questions on how to use a Hyperscale database. For an introduction to Hyperscale, we recommend you refer to the [Azure SQL Database Hyperscale](service-tier-hyperscale.md) documentation.-
-## General questions
-
-### What is a Hyperscale database
-
-A Hyperscale database is a database in SQL Database in the Hyperscale service tier that is backed by the Hyperscale scale-out storage technology. A Hyperscale database supports up to 100 TB of data and provides high throughput and performance, as well as rapid scaling to adapt to the workload requirements. Scaling is transparent to the application – connectivity, query processing, etc. work like any other database in Azure SQL Database.
-
-### What resource types and purchasing models support Hyperscale
-
-The Hyperscale service tier is only available for single databases using the vCore-based purchasing model in Azure SQL Database.
-
-### How does the Hyperscale service tier differ from the General Purpose and Business Critical service tiers
-
-The vCore-based service tiers are differentiated based on database availability and storage type, performance, and maximum size, as described in the following table.
-
-| | Resource type | General Purpose | Hyperscale | Business Critical |
-|:-:|:-:|:-:|:-:|:-:|
-| **Best for** |All|Offers budget oriented balanced compute and storage options.|Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore.|OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Resource type** ||SQL Database / SQL Managed Instance | Single database | SQL Database / SQL Managed Instance |
-| **Compute size**|SQL Database* | 1 to 80 vCores | 1 to 80 vCores* | 1 to 80 vCores |
-| **Compute size**|SQL Managed Instance | 8, 16, 24, 32, 40, 64, 80 vCores | N/A | 8, 16, 24, 32, 40, 64, 80 vCores |
-| **Storage type** | All |Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance) |
-| **Storage size** | SQL Database *| 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
-| **Storage size** | SQL Managed Instance | 32 GB – 8 TB | N/A | 32 GB – 4 TB |
-| **IOPS** | Single database | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS|
-| **IOPS** | SQL Managed Instance | Depends on file size | N/A | 1375 IOPS/vCore |
-|**Availability**|All|1 replica, no Read Scale-out, no local cache | Multiple replicas, up to 4 Read Scale-out, partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
-|**Backups**|All|RA-GRS, 7-35 day retention (7 days by default)| RA-GRS, 7 day retention, constant time point-in-time recovery (PITR) | RA-GRS, 7-35 day retention (7 days by default) |
-
-\* Elastic pools are not supported in the Hyperscale service tier
-
-### Who should use the Hyperscale service tier
-
-The Hyperscale service tier is intended for customers who have large on-premises SQL Server databases and want to modernize their applications by moving to the cloud, or for customers who are already using Azure SQL Database and want to significantly expand the potential for database growth. Hyperscale is also intended for customers who seek both high performance and high scalability. With Hyperscale, you get:
--- Database size up to 100 TB-- Fast database backups regardless of database size (backups are based on storage snapshots)-- Fast database restores regardless of database size (restores are from storage snapshots)-- Higher log throughput regardless of database size and the number of vCores-- Read Scale-out using one or more read-only replicas, used for read offloading and as hot standbys.-- Rapid scale up of compute, in constant time, to be more powerful to accommodate the heavy workload and then scale down, in constant time. This is similar to scaling up and down between a P6 and a P11, for example, but much faster as this is not a size of data operation.-
-### What regions currently support Hyperscale
-
-The Hyperscale service tier is currently available in the regions listed under [Azure SQL Database Hyperscale Overview](service-tier-hyperscale.md#regions).
-
-### Can I create multiple Hyperscale databases per server
-
-Yes. For more information and limits on the number of Hyperscale databases per server, see [SQL Database resource limits for single and pooled databases on a server](resource-limits-logical-server.md).
-
-### What are the performance characteristics of a Hyperscale database
-
-The Hyperscale architecture provides high performance and throughput while supporting large database sizes.
-
-### What is the scalability of a Hyperscale database
-
-Hyperscale provides rapid scalability based on your workload demand.
--- **Scaling Up/Down**-
- With Hyperscale, you can scale up the primary compute size in terms of resources like CPU and memory, and then scale down, in constant time. Because the storage is shared, scaling up and scaling down is not a size of data operation.
-- **Scaling In/Out**-
- With Hyperscale, you also get the ability to provision one or more additional compute replicas that you can use to serve your read requests. This means that you can use these additional compute replicas as read-only replicas to offload your read workload from the primary compute. In addition to read-only, these replicas also serve as hot-standbys in case of a failover from the primary.
-
- Provisioning of each of these additional compute replicas can be done in constant time and is an online operation. You can connect to these additional read-only compute replicas by setting the `ApplicationIntent` argument on your connection string to `ReadOnly`. Any connections with the `ReadOnly` application intent are automatically routed to one of the additional read-only compute replicas.
-
-## Deep dive questions
-
-### Can I mix Hyperscale and single databases in a single server
-
-Yes, you can.
-
-### Does Hyperscale require my application programming model to change
-
-No, your application programming model stays as is. You use your connection string as usual and the other regular ways to interact with your Hyperscale database.
-
-### What transaction isolation level is the default in a Hyperscale database
-
-On the primary replica, the default transaction isolation level is RCSI (Read Committed Snapshot Isolation). On the Read Scale-out secondary replicas, the default isolation level is Snapshot.
-
-### Can I bring my on-premises or IaaS SQL Server license to Hyperscale
-
-Yes, [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) is available for Hyperscale. Every SQL Server Standard core can map to 1 Hyperscale vCores. Every SQL Server Enterprise core can map to 4 Hyperscale vCores. You don't need a SQL license for secondary replicas. The Azure Hybrid Benefit price will be automatically applied to Read Scale-out (secondary) replicas.
-
-### What kind of workloads is Hyperscale designed for
-
-Hyperscale supports all SQL Server workloads, but it is primarily optimized for OLTP. You can bring Hybrid (HTAP) and Analytical (data mart) workloads as well.
-
-### How can I choose between Azure Synapse Analytics and Azure SQL Database Hyperscale
-
-If you are currently running interactive analytics queries using SQL Server as a data warehouse, Hyperscale is a great option because you can host small and mid-size data warehouses (such as a few TB up to 100 TB) at a lower cost, and you can migrate your SQL Server data warehouse workloads to Hyperscale with minimal T-SQL code changes.
-
-If you are running data analytics on a large scale with complex queries and sustained ingestion rates higher than 100 MB/s, or using Parallel Data Warehouse (PDW), Teradata, or other Massively Parallel Processing (MPP) data warehouses, Azure Synapse Analytics may be the best choice.
-
-## Hyperscale compute questions
-
-### Can I pause my compute at any time
-
-Not at this time, however you can scale your compute and number of replicas down to reduce cost during non-peak times.
-
-### Can I provision a compute replica with extra RAM for my memory-intensive workload
-
-No. To get more RAM, you need to upgrade to a higher compute size. For more information, see [Hyperscale storage and compute sizes](resource-limits-vcore-single-databases.md#hyperscaleprovisioned-computegen5).
-
-### Can I provision multiple compute replicas of different sizes
-
-No.
-
-### How many Read Scale-out replicas are supported
-
-The Hyperscale databases are created with one Read Scale-out replica (two replicas including primary) by default. You can scale the number of read-only replicas between 0 and 4 using [Azure portal](https://portal.azure.com) or [REST API](/rest/api/sql/databases/createorupdate).
-
-### For high availability, do I need to provision additional compute replicas
-
-In Hyperscale databases, data resiliency is provided at the storage level. You only need one replica to provide resiliency. When the compute replica is down, a new replica is created automatically with no data loss.
-
-However, if thereΓÇÖs only one replica, it may take some time to build the local cache in the new replica after failover. During the cache rebuild phase, the database fetches data directly from the page servers, resulting in higher storage latency and degraded query performance.
-
-For mission-critical apps that require high availability with minimal failover impact, you should provision at least 2 compute replicas including the primary compute replica. This is the default configuration. That way there is a hot-standby replica available that serves as a failover target.
-
-## Data size and storage questions
-
-### What is the maximum database size supported with Hyperscale
-
-100 TB.
-
-### What is the size of the transaction log with Hyperscale
-
-The transaction log with Hyperscale is practically infinite. You do not need to worry about running out of log space on a system that has a high log throughput. However, log generation rate might be throttled for continuous aggressively writing workloads. The peak sustained log generation rate is 100 MB/s.
-
-### Does my `tempdb` scale as my database grows
-
-Your `tempdb` database is located on local SSD storage and is sized proportionally to the compute size that you provision. Your `tempdb` is optimized to provide maximum performance benefits. `tempdb` size is not configurable and is managed for you.
-
-### Does my database size automatically grow, or do I have to manage the size of data files
-
-Your database size automatically grows as you insert/ingest more data.
-
-### What is the smallest database size that Hyperscale supports or starts with
-
-40 GB. A Hyperscale database is created with a starting size of 10 GB. Then, it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB. Each of these 10 GB chunks is allocated in a different page server in order to provide more IOPS and higher I/O parallelism. Because of this optimization, even if you choose initial database size smaller than 40 GB, the database will grow to at least 40 GB automatically.
-
-### In what increments does my database size grow
-
-Each data file grows by 10 GB. Multiple data files may grow at the same time.
-
-### Is the storage in Hyperscale local or remote
-
-In Hyperscale, data files are stored in Azure standard storage. Data is fully cached on local SSD storage, on page servers that are close to the compute replicas. In addition, compute replicas have data caches on local SSD and in memory, to reduce the frequency of fetching data from remote page servers.
-
-### Can I manage or define files or filegroups with Hyperscale
-
-No. Data files are added automatically. The common reasons for creating additional filegroups do not apply in the Hyperscale storage architecture.
-
-### Can I provision a hard cap on the data growth for my database
-
-No.
-
-### How are data files laid out with Hyperscale
-
-The data files are controlled by page servers, with one page server per data file. As the data size grows, data files and associated page servers are added.
-
-### Is database shrink supported
-
-No.
-
-### Is data compression supported
-
-Yes, including row, page, and columnstore compression.
-
-### If I have a huge table, does my table data get spread out across multiple data files
-
-Yes. The data pages associated with a given table can end up in multiple data files, which are all part of the same filegroup. SQL Server uses [proportional fill strategy](/sql/relational-databases/databases/database-files-and-filegroups#file-and-filegroup-fill-strategy) to distribute data over data files.
-
-## Data migration questions
-
-### Can I move my existing databases in Azure SQL Database to the Hyperscale service tier
-
-Yes. You can move your existing databases in Azure SQL Database to Hyperscale. This is a one-way migration. You can't move databases from Hyperscale to another service tier. For proofs of concept (POCs), we recommend you make a copy of your database and migrate the copy to Hyperscale.
-
-The time required to move an existing database to Hyperscale consists of the time to copy data, and the time to replay the changes made in the source database while copying data. The data copy time is proportional to data size. The time to replay changes will be shorter if the move is done during a period of low write activity.
-
-### Can I move my Hyperscale databases to other service tiers
-
-No. At this time, you can't move a Hyperscale database to another service tier.
-
-### Do I lose any functionality or capabilities after migration to the Hyperscale service tier
-
-Yes. Some Azure SQL Database features are not supported in Hyperscale yet, including but not limited to long term backup retention. After you migrate your databases to Hyperscale, those features stop working. We expect these limitations to be temporary.
-
-### Can I move my on-premises SQL Server database, or my SQL Server database in a cloud virtual machine to Hyperscale
-
-Yes. You can use all existing migration technologies to migrate to Hyperscale, including transactional replication, and any other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS). See also the [Azure Database Migration Service](../../dms/dms-overview.md), which supports many migration scenarios.
-
-### What is my downtime during migration from an on-premises or virtual machine environment to Hyperscale, and how can I minimize it
-
-Downtime for migration to Hyperscale is the same as the downtime when you migrate your databases to other Azure SQL Database service tiers. You can use [transactional replication](replication-to-sql-database.md#data-migration-scenario
-) to minimize downtime migration for databases up to few TB in size. For very large databases (10+ TB), you can consider to migrate data using ADF, Spark, or other data movement technologies.
-
-### How much time would it take to bring in X amount of data to Hyperscale
-
-Hyperscale is capable of consuming 100 MB/s of new/changed data, but the time needed to move data into databases in Azure SQL Database is also affected by available network throughput, source read speed and the target database service level objective.
-
-### Can I read data from blob storage and do fast load (like Polybase in Azure Synapse Analytics)
-
-You can have a client application read data from Azure Storage and load data load into a Hyperscale database (just like you can with any other database in Azure SQL Database). Polybase is currently not supported in Azure SQL Database. As an alternative to provide fast load, you can use [Azure Data Factory](../../data-factory/index.yml), or use a Spark job in [Azure Databricks](/azure/azure-databricks/) with the [Spark connector for SQL](spark-connector.md). The Spark connector to SQL supports bulk insert.
-
-It is also possible to bulk read data from Azure Blob store using BULK INSERT or OPENROWSET: [Examples of Bulk Access to Data in Azure Blob Storage](/sql/relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage#accessing-data-in-a-csv-file-referencing-an-azure-blob-storage-location).
-
-Simple recovery or bulk logging model is not supported in Hyperscale. Full recovery model is required to provide high availability and point-in-time recovery. However, Hyperscale log architecture provides better data ingest rate compared to other Azure SQL Database service tiers.
-
-### Does Hyperscale allow provisioning multiple nodes for parallel ingesting of large amounts of data
-
-No. Hyperscale is a symmetric multi-processing (SMP) architecture and is not a massively parallel processing (MPP) or a multi-master architecture. You can only create multiple replicas to scale out read-only workloads.
-
-### What is the oldest SQL Server version supported for migration to Hyperscale
-
-SQL Server 2005. For more information, see [Migrate to a single database or a pooled database](migrate-to-database-from-sql-server.md#migrate-to-a-single-database-or-a-pooled-database). For compatibility issues, see [Resolving database migration compatibility issues](migrate-to-database-from-sql-server.md#resolving-database-migration-compatibility-issues).
-
-### Does Hyperscale support migration from other data sources such as Amazon Aurora, MySQL, PostgreSQL, Oracle, DB2, and other database platforms
-
-Yes. [Azure Database Migration Service](../../dms/dms-overview.md) supports many migration scenarios.
-
-## Business continuity and disaster recovery questions
-
-### What SLAs are provided for a Hyperscale database
-
-See [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/sql-database/v1_4/). Additional secondary compute replicas increase availability, up to 99.99% for a database with two or more secondary compute replicas.
-
-### Are the database backups managed for me by Azure SQL Database
-
-Yes.
-
-### How often are the database backups taken
-
-There are no traditional full, differential, and log backups for Hyperscale databases. Instead, there are regular storage snapshots of data files. Log that is generated is simply retained as-is for the configured retention period, allowing restore to any point in time within the retention period.
-
-### Does Hyperscale support point-in-time restore
-
-Yes.
-
-### What is the Recovery Point Objective (RPO)/Recovery Time Objective (RTO) for database restore in Hyperscale
-
-The RPO is 0 min. Most restore operations complete within 60 minutes regardless of database size. Restore time may be longer for larger databases, and if the database had experienced significant write activity before and up to the restore point in time.
-
-### Does database backup affect compute performance on my primary or secondary replicas
-
-No. Backups are managed by the storage subsystem, and leverage storage snapshots. They do not impact user workloads.
-
-### Can I perform geo-restore with a Hyperscale database
-
-Yes. Geo-restore is fully supported. Unlike point-in-time restore, geo-restore requires a size-of-data operation. Data files are copied in parallel, so the duration of this operation depends primarily on the size of the largest file in the database, rather than on total database size. Geo-restore time will be significantly shorter if the database is restored in the Azure region that is [paired](../../best-practices-availability-paired-regions.md) with the region of the source database.
-
-### Can I set up geo-replication with Hyperscale database
-
-Not at this time.
-
-### Can I take a Hyperscale database backup and restore it to my on-premises server, or on SQL Server in a VM
-
-No. The storage format for Hyperscale databases is different from any released version of SQL Server, and you don't control backups or have access to them. To take your data out of a Hyperscale database, you can extract data using any data movement technologies, i.e. Azure Data Factory, Azure Databricks, SSIS, etc.
-
-## Cross-feature questions
-
-### Do I lose any functionality or capabilities after migration to the Hyperscale service tier
-
-Yes. Some Azure SQL Database features are not supported in Hyperscale, including but not limited to long term backup retention. After you migrate your databases to Hyperscale, those features stop working.
-
-### Will Polybase work with Hyperscale
-
-No. Polybase is not supported in Azure SQL Database.
-
-### Does Hyperscale have support for R and Python
-
-Not at this time.
-
-### Are compute nodes containerized
-
-No. Hyperscale processes run on [Service Fabric](https://azure.microsoft.com/services/service-fabric/) nodes (VMs), not in containers.
-
-## Performance questions
-
-### How much write throughput can I push in a Hyperscale database
-
-Transaction log throughput cap is set to 100 MB/s for any Hyperscale compute size. The ability to achieve this rate depends on multiple factors, including but not limited to workload type, client configuration, and having sufficient compute capacity on the primary compute replica to produce log at this rate.
-
-### How many IOPS do I get on the largest compute
-
-IOPS and IO latency will vary depending on the workload patterns. If the data being accessed is cached on the compute replica, you will see similar IO performance as with local SSD.
-
-### Does my throughput get affected by backups
-
-No. Compute is decoupled from the storage layer. This eliminates performance impact of backup.
-
-### Does my throughput get affected as I provision additional compute replicas
-
-Because the storage is shared and there is no direct physical replication happening between primary and secondary compute replicas, the throughput on primary replica will not be directly affected by adding secondary replicas. However, we may throttle continuous aggressively writing workload on the primary to allow log apply on secondary replicas and page servers to catch up, to avoid poor read performance on secondary replicas.
-
-### How do I diagnose and troubleshoot performance problems in a Hyperscale database
-
-For most performance problems, particularly the ones not rooted in storage performance, common SQL diagnostic and troubleshooting steps apply. For Hyperscale-specific storage diagnostics, see [SQL Hyperscale performance troubleshooting diagnostics](hyperscale-performance-diagnostics.md).
-
-## Scalability questions
-
-### How long would it take to scale up and down a compute replica
-
-Scaling compute up or down typically takes up to 2 minutes regardless of data size.
-
-### Is my database offline while the scaling up/down operation is in progress
-
-No. The scaling up and down will be online.
-
-### Should I expect connection drop when the scaling operations are in progress
-
-Scaling up or down results in existing connections being dropped when a failover happens at the end of the scaling operation. Adding secondary replicas does not result in connection drops.
-
-### Is the scaling up and down of compute replicas automatic or end-user triggered operation
-
-End-user. Not automatic.
-
-### Does the size of my `tempdb` database and RBPEX cache also grow as the compute is scaled up
-
-Yes. The `tempdb` database and [RBPEX cache](service-tier-hyperscale.md#distributed-functions-architecture) size on compute nodes will scale up automatically as the number of cores is increased.
-
-### Can I provision multiple primary compute replicas, such as a multi-master system, where multiple primary compute heads can drive a higher level of concurrency
-
-No. Only the primary compute replica accepts read/write requests. Secondary compute replicas only accept read-only requests.
-
-## Read scale-out questions
-
-### How many secondary compute replicas can I provision
-
-We create one secondary replica for Hyperscale databases by default. If you want to adjust the number of replicas, you can do so using [Azure portal](https://portal.azure.com) or [REST API](/rest/api/sql/databases/createorupdate).
-
-### How do I connect to these secondary compute replicas
-
-You can connect to these additional read-only compute replicas by setting the `ApplicationIntent` argument on your connection string to `ReadOnly`. Any connections marked with `ReadOnly` are automatically routed to one of the additional read-only compute replicas. For details, see [Use read-only replicas to offload read-only query workloads](read-scale-out.md).
-
-### How do I validate if I have successfully connected to secondary compute replica using SSMS or other client tools?
-
-You can execute the following T-SQL query:
-`SELECT DATABASEPROPERTYEX ('<database_name>', 'Updateability')`.
-The result is `READ_ONLY` if you are connected to a read-only secondary replica, and `READ_WRITE` if you are connected to the primary replica. Note that the database context must be set to the name of the Hyperscale database, not to the `master` database.
-
-### Can I create a dedicated endpoint for a Read Scale-out replica
-
-No. You can only connect to Read Scale-out replicas by specifying `ApplicationIntent=ReadOnly`.
-
-### Does the system do intelligent load balancing of the read workload
-
-No. A new connection with read-only intent is redirected to an arbitrary Read Scale-out replica.
-
-### Can I scale up/down the secondary compute replicas independently of the primary replica
-
-No. The secondary compute replica are also used as high availability failover targets, so they need to have the same configuration as the primary to provide expected performance after failover.
-
-### Do I get different `tempdb` sizing for my primary compute and my additional secondary compute replicas
-
-No. Your `tempdb` database is configured based on the compute size provisioning, your secondary compute replicas are the same size as the primary compute.
-
-### Can I add indexes and views on my secondary compute replicas
-
-No. Hyperscale databases have shared storage, meaning that all compute replicas see the same tables, indexes, and views. If you want additional indexes optimized for reads on secondary, you must add them on the primary.
-
-### How much delay is there going to be between the primary and secondary compute replicas
-
-Data latency from the time a transaction is committed on the primary to the time it is readable on a secondary depends on current log generation rate, transaction size, load on the replica, and other factors. Typical data latency for small transactions is in tens of milliseconds, however there is no upper bound on data latency. Data on a given secondary replica is always transactionally consistent. However, at a given point in time data latency may be different for different secondary replicas. Workloads that need to read committed data immediately should run on the primary replica.
-
-## Next steps
-
-For more information about the Hyperscale service tier, see [Hyperscale service tier](service-tier-hyperscale.md).
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
These are the current limitations to the Hyperscale service tier as of GA. We'r
## Next steps -- For an FAQ on Hyperscale, see [Frequently asked questions about Hyperscale](service-tier-hyperscale-frequently-asked-questions-faq.md).
+- For an FAQ on Hyperscale, see [Frequently asked questions about Hyperscale](service-tier-hyperscale-frequently-asked-questions-faq.yml).
- For information about service tiers, see [Service tiers](purchasing-models.md) - See [Overview of resource limits on a server](resource-limits-logical-server.md) for information about limits at the server and subscription levels. - For purchasing model limits for a single database, see [Azure SQL Database vCore-based purchasing model limits for a single database](resource-limits-vcore-single-databases.md).
azure-sql Service Tiers General Purpose Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-general-purpose-business-critical.md
The following table describes the key differences between service tiers for the
| | SQL Managed Instance | [24 GB per vCore](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | Up to 4 TB - [limited by storage size](../managed-instance/resource-limits.md#service-tier-characteristics) |
| **Log write throughput** | SQL Database | [1.875 MB/s per vCore (max 30 MB/s)](resource-limits-vcore-single-databases.md#general-purposeprovisioned-computegen4) | 100 MB/s | [6 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md#business-criticalprovisioned-computegen4) |
| | SQL Managed Instance | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | [4 MB/s per vcore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) |
-|**Availability**|All| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.md#what-slas-are-provided-for-a-hyperscale-database) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
+|**Availability**|All| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.yml#what-slas-are-provided-for-a-hyperscale-database) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
|**Backups**|All|RA-GRS, 7-35 days (7 days by default). Maximum retention for Basic tier is 7 days. | RA-GRS, 7 days, constant time point-in-time recovery (PITR) | RA-GRS, 7-35 days (7 days by default) |
|**In-memory OLTP** | | N/A | N/A | Available |
|**Read-only replicas**| | 0 built-in <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) | 0 - 4 built-in | 1 built-in, included in price <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) |
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
The following virtual network features are currently *not supported* with SQL Ma
- **AzurePlatformDNS**: Using the AzurePlatformDNS [service tag](../../virtual-network/service-tags-overview.md) to block platform DNS resolution would render SQL Managed Instance unavailable. Although SQL Managed Instance supports customer-defined DNS for DNS resolution inside the engine, there is a dependency on platform DNS for platform operations. - **NAT gateway**: Using [Azure Virtual Network NAT](../../virtual-network/nat-overview.md) to control outbound connectivity with a specific public IP address would render SQL Managed Instance unavailable. The SQL Managed Instance service is currently limited to use of basic load balancer that doesn't provide coexistence of inbound and outbound flows with Virtual Network NAT. - **IPv6 for Azure Virtual Network**: Deploying SQL Managed Instance to [dual stack IPv4/IPv6 virtual networks](../../virtual-network/ipv6-overview.md) is expected to fail. Associating network security group (NSG) or route table (UDR) containing IPv6 address prefixes to SQL Managed Instance subnet, or adding IPv6 address prefixes to NSG or UDR that is already associated with Managed instance subnet, would render SQL Managed Instance unavailable. SQL Managed Instance deployments to a subnet with NSG and UDR that already have IPv6 prefixes are expected to fail.
+- **Azure DNS private zones with a name reserved for Microsoft services**: The following names are reserved: windows.net, database.windows.net, core.windows.net, blob.core.windows.net, table.core.windows.net, management.core.windows.net, monitoring.core.windows.net, queue.core.windows.net, graph.windows.net, login.microsoftonline.com, login.windows.net, servicebus.windows.net, and vault.azure.net. Deploying SQL Managed Instance to a virtual network with an associated [Azure DNS private zone](../../dns/private-dns-privatednszone.md) that uses a name reserved for Microsoft services fails. Associating an Azure DNS private zone with a reserved name with a virtual network that contains a managed instance renders SQL Managed Instance unavailable. Follow [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md) for the proper Private Link configuration.
## Next steps
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md Binary files differ
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
Last updated 03/22/2021
# Azure VMware Solution identity concepts
-Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
+Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
For more information, see [private cloud upgrades concepts article][concepts-upgrades].
You can view the privileges granted to the Azure VMware Solution CloudAdmin role
:::image type="content" source="media/role-based-access-control-cloudadmin-privileges.png" alt-text="How to view the CloudAdmin role privileges in vSphere Client":::
-The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. For more details, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
+The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. For more information, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
| Privilege | Description |
| -- | -- |
To prevent the creation of roles that can't be assigned or deleted, Azure VMware
## NSX-T Manager access and identity >[!NOTE]
->NSX-T 2.5 is currently supported.
+>NSX-T 2.5 is currently supported for all new private clouds.
Use the *administrator* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) Gateways, segments (logical switches), and all services. The privileges give you access to the NSX-T Tier-0 (T0) Gateway. A change to the T0 Gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 Gateway.
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
Azure VMware Solution clusters are based on hyper-converged, bare-metal infrastr
| Host Type | CPU | RAM (GB) | vSAN NVMe cache Tier (TB, raw) | vSAN SSD capacity tier (TB, raw) |
| :- | :-: | :-: | :-: | :-: |
-| AVS36 | dual Intel 18 core 2.3 GHz | 576 | 3.2 | 15.20 |
+| AV36 | dual Intel 18 core 2.3 GHz | 576 | 3.2 | 15.20 |
Hosts used to build or scale clusters come from an isolated pool of hosts. Those hosts have passed hardware tests and have had all data securely deleted.
Private cloud vCenter and NSX-T configurations are on an hourly backup schedule.
Now that you've covered Azure VMware Solution private cloud concepts, you may want to learn about: -- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md).-- [Azure VMware Solution storage concepts](concepts-storage.md).-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md)
+- [Azure VMware Solution storage concepts](concepts-storage.md)
+- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md)
<!-- LINKS - internal --> [concepts-networking]: ./concepts-networking.md
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage
-description: Learn about the key storage capabilities in Azure VMware Solution private clouds.
+description: Learn about storage capacity, storage policies, fault tolerance, and storage integration in Azure VMware Solution private clouds.
Previously updated : 03/13/2021+ Last updated : 04/23/2021
-# Azure VMware Solution storage concepts
+# Azure VMware Solution storage concepts
Azure VMware Solution private clouds provide native, cluster-wide storage with VMware vSAN. All local storage from each host in a cluster is used in a vSAN datastore, and data-at-rest encryption is available and enabled by default. You can use Azure Storage resources to extend storage capabilities of your private clouds. ## vSAN clusters
-Local storage in each cluster host is used as part of a vSAN datastore. All diskgroups use an NVMe cache tier of 1.6 TB with the raw, per host, SSD-based capacity of 15.4 TB. The size of the raw capacity tier of a cluster is the per host capacity times the number of hosts. For example, a four host cluster will provide 61.6-TB raw capacity in the vSAN capacity tier.
+The [AV36 SKU](https://azure.microsoft.com/pricing/details/azure-vmware/) includes two 1.6-TB NVMe cache devices and eight 1.9-TB raw capacity storage devices per host, split into two disk groups. The size of the raw capacity tier of a cluster is the per-host capacity times the number of hosts. For example, a four-host cluster provides 61.6-TB raw capacity in the vSAN capacity tier.
-Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datastores are created as part of a private cloud deployment and are available for use immediately. The cloudadmin user and all users in the CloudAdmin group can manage datastores with these vSAN privileges:
+Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datastores are created as part of private cloud deployment and are available for use immediately. The **cloudadmin** user and all users assigned to the CloudAdmin role can manage datastores with these vSAN privileges:
- Datastore.AllocateSpace - Datastore.Browse
Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datas
- Datastore.FileManagement - Datastore.UpdateVirtualMachineMetadata
-## Data-at-rest encryption
+>[!IMPORTANT]
+>You can't change the name of datastores or clusters.
+
+## Storage policies and fault tolerance
+
+The default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or apply a new policy, the cluster continues to grow with this configuration. In a three-host cluster, FTT-1 accommodates a single host's failure. From an architecture perspective, Microsoft monitors for failures regularly and replaces the hardware when failure events are detected.
-vSAN datastores use data-at-rest encryption by default. The encryption solution is KMS-based and supports vCenter operations for key management. Keys are stored encrypted, wrapped by an Azure Key Vault master key. When a host is removed from a cluster, data on SSDs is invalidated immediately.
-## Scaling
-Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use AVS36 hosts, the raw cluster-wide capacity is increased by 15.4 TB with each added host. Hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
+|Provisioning type |Description |
+|--|--|
+|**Thick** | Reserved or pre-allocated storage space. It protects systems by allowing them to function even if the vSAN datastore is full because the space is already reserved. For example, if you create a 10-GB virtual disk with thick provisioning, the full amount of virtual disk storage capacity is pre-allocated on the physical storage of the virtual disk, consuming all the space allocated to it in the datastore. Other virtual machines (VMs) can't share that space in the datastore. |
+|**Thin** | Consumes only the space that it needs initially and grows as the data space demand in the datastore increases. Outside the default (thick provisioning), you can create VMs with FTT-1 thin provisioning. For a dedupe setup, use thin provisioning for your VM template. |
+
+>[!TIP]
+>If you're unsure whether the cluster will grow to four or more hosts, deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend deploying the extra hosts during deployment. As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2, as shown in the PowerCLI sketch after this tip.
+>
+>:::image type="content" source="media/vsphere-vm-storage-policies-2.png" alt-text="Screenshot ":::
++
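If you script policy changes with VMware PowerCLI, the change described in the tip might look like the following sketch; the VM name and the exact policy name are assumptions to verify in your vCenter:

```powershell
# Hypothetical PowerCLI sketch: assign a different vSAN storage policy to a VM's disks.
# Requires the VMware.PowerCLI module and an existing Connect-VIServer session.
$vm = Get-VM -Name "my-vm"                            # placeholder VM name
$policy = Get-SpbmStoragePolicy -Name "RAID-5 FTT-1"  # policy name as shown in vCenter
Get-HardDisk -VM $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
```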
+## Data-at-rest encryption
+
+vSAN datastores use data-at-rest encryption by default using keys stored in Azure Key Vault. The encryption solution is KMS-based and supports vCenter operations for key management. When a host is removed from a cluster, data on SSDs is invalidated immediately.
## Azure storage integration You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
+## Alerts and monitoring
+
+Microsoft provides alerts when capacity consumption exceeds 75%. You can monitor capacity consumption metrics that are integrated into Azure Monitor. For more information, see [Configure Azure Alerts in Azure VMware Solution](configure-alerts-for-azure-vmware-solution.md).
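As an unofficial sketch, a similar metric alert could be created with the Azure CLI; the metric name (`DiskUsedPercentage`) and the resource ID are assumptions to verify against your environment:

```azurecli
# Hypothetical sketch: alert when vSAN datastore consumption crosses 75 percent.
az monitor metrics alert create \
  --name "avs-vsan-capacity-75" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AVS/privateClouds/myPrivateCloud" \
  --condition "avg DiskUsedPercentage > 75" \
  --description "vSAN capacity consumption exceeded 75 percent"
```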
+ ## Next steps Now that you've covered Azure VMware Solution storage concepts, you may want to learn about: -- [Private cloud identity concepts](concepts-identity.md).-- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md).-- [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
+- [Scale clusters in the private cloud][tutorial-scale-private-cloud]
- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md)
+- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md)
+ <!-- LINKS - external-->
backup Active Directory Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/active-directory-backup-restore.md
This article outlines the proper procedures for backing up and restoring Active
- Make sure at least one domain controller is backed up. If you back up more than one domain controller, make sure all the ones holding the [FSMO (Flexible Single Master Operation) roles](/windows-server/identity/ad-ds/plan/planning-operations-master-role-placement) are backed up. -- Back up Active Directory frequently. The backup should never be more than the tombstone lifetime (by default 60 days), because objects older than the tombstone lifetime will be "tombstoned" and no longer considered valid.
+- Back up Active Directory frequently. A backup should never be older than the tombstone lifetime (TSL), because objects past the TSL will be "tombstoned" and no longer considered valid.
+ - The default TSL, for domains built on Windows Server 2003 SP2 and later, is 180 days.
+ - You can verify the configured TSL by using the following PowerShell script (a follow-on sketch that uses its output appears after this list):
+
+ ```powershell
+ (Get-ADObject $('CN=Directory Service,CN=Windows NT,CN=Services,{0}' -f (Get-ADRootDSE).configurationNamingContext) -Properties tombstoneLifetime).tombstoneLifetime
+ ```
- Have a clear disaster recovery plan that includes instructions on how to restore your domain controllers. To prepare for restoring an Active Directory forest, read the [Active Directory Forest Recovery Guide](/windows-server/identity/ad-ds/manage/ad-forest-recovery-guide).
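As an unofficial follow-on to the TSL script above, you can compare a backup's age to the returned value; the backup timestamp is a placeholder:

```powershell
# Hypothetical sketch: warn if a backup is older than the tombstone lifetime.
# Assumes the TSL query above returned a value (an empty result means the attribute isn't set).
$tsl = (Get-ADObject $('CN=Directory Service,CN=Windows NT,CN=Services,{0}' -f (Get-ADRootDSE).configurationNamingContext) -Properties tombstoneLifetime).tombstoneLifetime
$backupDate = Get-Date '2021-03-01'   # placeholder backup timestamp
if ((New-TimeSpan -Start $backupDate -End (Get-Date)).Days -gt $tsl) {
    Write-Warning "Backup is older than the tombstone lifetime ($tsl days); don't restore it."
}
```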
To restore an on-premises domain controller, follow the directions in for restor
## Next steps -- [Support matrix for Azure Backup](backup-support-matrix.md)
+- [Support matrix for Azure Backup](backup-support-matrix.md)
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql.md Binary files differ
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 09/13/2019 Last updated : 04/21/2021
Here's what's supported if you want to back up Linux machines.
**Action** | **Support** |
-Back up Linux Azure VMs with the Linux Azure VM agent | File consistent backup.<br/><br/> App-consistent backup using [custom scripts](backup-azure-linux-app-consistent.md).<br/><br/> During restore, you can create a new VM, restore a disk and use it to create a VM, or restore a disk and use it to replace a disk on an existing VM. You can also restore individual files and folders.
+Back up Linux Azure VMs with the Linux Azure VM agent | File consistent backup.<br/><br/> App-consistent backup using [custom scripts](backup-azure-linux-app-consistent.md).<br/><br/> During restore, you can create a new VM, restore a disk and use it to create a VM, or restore a disk and use it to replace a disk on an existing VM. You can also restore individual files and folders.
Back up Linux Azure VMs with MARS agent | Not supported.<br/><br/> The MARS agent can only be installed on Windows machines.
Back up Linux Azure VMs with DPM/MABS | Not supported.
Back up Linux Azure VMs with docker mount points | Currently, Azure Backup doesn't support exclusion of docker mount points as these are mounted at different paths every time.
Back up VMs that are deployed from [Azure Marketplace](https://azuremarketplace.
Back up VMs that are deployed from a custom image (third-party) |Supported.<br/><br/> The VM must be running a supported operating system.<br/><br/> When recovering files on the VM, you can restore only to a compatible OS (not an earlier or later OS).
Back up VMs that are migrated to Azure| Supported.<br/><br/> To back up the VM, the VM agent must be installed on the migrated machine.
Back up Multi-VM consistency | Azure Backup doesn't provide data and application consistency across multiple VMs.
-Backup with [Diagnostic Settings](../azure-monitor/essentials/platform-logs-overview.md) | Unsupported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered using [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, then the restore fails.
+Backup with [Diagnostic Settings](../azure-monitor/essentials/platform-logs-overview.md) | Unsupported. <br/><br/> If the restore of the Azure VM with diagnostic settings is triggered using the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, then the restore fails.
Restore of Zone-pinned VMs | Supported (for a VM that's backed-up after Jan 2019 and where [availability zones](https://azure.microsoft.com/global-infrastructure/availability-zones/) are available).<br/><br/>We currently support restoring to the same zone that's pinned in VMs. However, if the zone is unavailable due to an outage, the restore will fail.
Gen2 VMs | Supported <br> Azure Backup supports backup and restore of [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/). When these VMs are restored from Recovery point, they're restored as [Gen2 VMs](https://azure.microsoft.com/updates/generation-2-virtual-machines-in-azure-public-preview/).
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Supported for managed VMs.
[Spot VMs](../virtual-machines/spot-vms.md) | Unsupported. Azure Backup restores Spot VMs as regular Azure VMs.
-[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) | Supported
+[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) | Supported<br></br>When you restore an Azure VM through the [Create New](backup-azure-arm-restore-vms.md#create-a-vm) option, the restore succeeds but the VM isn't placed on the dedicated host. To restore to a dedicated host, we recommend that you [restore as disks](backup-azure-arm-restore-vms.md#restore-disks); then use the template to create a VM on the dedicated host and attach the disks.<br></br>The same applies in the secondary region when you perform a [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore).
Windows Storage Spaces configuration of standalone Azure VMs | Supported
[Azure VM Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for both uniform and flexible orchestration models to back up and restore Single Azure VM.
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 11/11/2020 Last updated : 04/22/2021 # What's new in Azure Backup
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- April 2021
+ - [Enhancements to encryption using customer-managed keys for Azure Backup (in preview)](#enhancements-to-encryption-using-customer-managed-keys-for-azure-backup-in-preview)
- March 2021 - [Azure Disk Backup is now generally available](#azure-disk-backup-is-now-generally-available) - [Backup center is now generally available](#backup-center-is-now-generally-available)
Now, in addition to soft delete support for Azure VMs, SQL Server and SAP HANA w
For more information, see [Soft delete for SQL server in Azure VM and SAP HANA in Azure VM workloads](soft-delete-sql-saphana-in-azure-vm.md).
+## Enhancements to encryption using customer-managed keys for Azure Backup (in preview)
+
+Azure Backup now provides enhanced capabilities (in preview) to manage encryption with customer-managed keys. Azure Backup allows you to bring your own keys to encrypt the backup data in Recovery Services vaults, giving you better control.
+
+- Supports user-assigned managed identities for granting permissions to the keys that manage data encryption in the Recovery Services vault.
+- Enables encryption with customer-managed keys while creating a Recovery Services vault.
+ >[!NOTE]
+ >This feature is currently in limited preview. To sign up, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR0H3_nezt2RNkpBCUTbWEapURDNTVVhGOUxXSVBZMEwxUU5FNDkyQkU4Ny4u), and write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
+- Allows you to use Azure Policies to audit and enforce encryption using customer-managed keys.
+>[!NOTE]
+>- The above capabilities are supported through the Azure portal only; PowerShell is currently not supported.<br>If you're using PowerShell to manage encryption keys for Backup, we don't recommend updating the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key until a PowerShell update that supports the new model is available. However, you can continue updating the key from the Azure portal.
+>- You can use the audit policy for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021.
+>- For vaults with CMK encryption enabled before this date, the policy might fail to apply, or might show false negative results (that is, these vaults may be reported as non-compliant, despite having CMK encryption enabled). [Learn more](encryption-at-rest-with-cmk.md#using-azure-policies-for-auditing-and-enforcing-encryption-utilizing-customer-managed-keys-in-preview).
+
+For more information, see [Encryption for Azure Backup using customer-managed keys](encryption-at-rest-with-cmk.md).
+
## Next steps

- [Azure Backup guidance and best practices](guidance-best-practices.md)
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/disk-encryption.md Binary files differ
blockchain Hyperledger Fabric Consortium Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/templates/hyperledger-fabric-consortium-azure-kubernetes-service.md Binary files differ
cognitive-services Luis Migration Api V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-migration-api-v3.md
Title: Prediction endpoint changes in the V3 API description: The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs. +
+ Previously updated : 06/30/2020 Last updated : 04/21/2021
If you use Bot Framework, Bing Spell Check V7, or want to migrate your LUIS app
If you know that none of your client applications or integrations (Bot Framework and Bing Spell Check V7) are impacted and you're comfortable migrating your LUIS app authoring and your prediction endpoint at the same time, begin using the V3 prediction endpoint. The V2 prediction endpoint will still be available and is a good fallback strategy.
+For information on using the Bing Spell Check API, see [How to correct misspelled words](luis-tutorial-bing-spellcheck.md).
-## Not supported
-
-### Bing Spell Check
-This API is not supported in V3 prediction endpoint - continue to use V2 API prediction endpoint for spelling corrections. If you need spelling correction while using V3 API, have the client application call the [Bing Spell Check](../bing-spell-check/overview.md) API, and change the text to the correct spelling, prior to sending the text to the LUIS API.
+## Not supported
-## Bot Framework and Azure Bot Service client applications
+### Bot Framework and Azure Bot Service client applications
Continue to use the V2 API prediction endpoint until V4.7 of the Bot Framework is released.
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
+## Speech Devices SDK 1.16.0:
+
+- Fixed [GitHub issue #22](https://github.com/Azure-Samples/Cognitive-Services-Speech-Devices-SDK/issues/22).
+- Updated the [Speech SDK](./speech-sdk.md) component to version 1.16.0. For more information, see its [release notes](./releasenotes.md).
+
## Speech Devices SDK 1.15.0:

- Upgraded to new Microsoft Audio Stack (MAS) with improved beamforming and noise reduction for speech.
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Title: How to get facial pose events for lip-sync
-description: The Speech SDK supports viseme event in speech synthesis, which are used to represent the key poses in observed speech, such as the position of the lips, jaw and tongue when producing a particular phoneme.
+description: The Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial pose events

> [!NOTE]
-> Viseme only works for `en-US-AriaNeural` voice for now.
+> Viseme events are only available for the `en-US-AriaNeural` voice for now.
-A viseme is the visual description of a phoneme in spoken language.
+A _viseme_ is the visual description of a phoneme in spoken language.
It defines the position of the face and mouth when speaking a word. Each viseme depicts the key facial poses for a specific set of phonemes. There is no one-to-one correspondence between visemes and phonemes.
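For example, a client can subscribe to viseme events and use the viseme ID and audio offset to drive an avatar. Here's a minimal sketch using the Speech SDK for Python (1.16.0 or later); the subscription key and region are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; viseme events currently require en-US-AriaNeural.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Each event carries a viseme ID and an audio offset (in 100-nanosecond ticks)
# that can be used to time 2D/3D avatar mouth movements.
def on_viseme(evt):
    print(f"Viseme {evt.viseme_id} at {evt.audio_offset / 10000:.0f} ms")

synthesizer.viseme_received.connect(on_viseme)
synthesizer.speak_text_async("Hello, world!").get()
```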
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
With this release, we now support a total of 142 neural voices across 60 languag
**Get facial pose events to animate characters**
-The [Viseme event](how-to-speech-synthesis-viseme.md) is added to Neural TTS, which allows users to get the facial pose sequence and duration from synthesized speech. Viseme can be used to control the movement of 2D and 3D avatar models, perfectly matching mouth movements to synthesized speech. Now, viseme only works for en-US-AriaNeural voice.
+Neural Text-to-speech now includes the [viseme event](how-to-speech-synthesis-viseme.md). Viseme events allow users to get a sequence of facial poses along with synthesized speech. Visemes can be used to control the movement of 2D and 3D avatar models, matching mouth movements to synthesized speech. Viseme events are only available for the `en-US-AriaNeural` voice at this time.
**Add the bookmark element in Speech Synthesis Markup Language (SSML)**
cognitive-services Text Analytics How To Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection.md
A positive score of 1.0 expresses the highest possible confidence level of the a
### Ambiguous content
-In some cases it may be hard to disambiguate languages based on the input. You can use the `countryHint` parameter to specify a 2-letter country/region code. By default the API is using the "US" as the default countryHint, to remove this behavior you can reset this parameter by setting this value to empty string `countryHint = ""` .
+In some cases it may be hard to disambiguate languages based on the input. You can use the `countryHint` parameter to specify an [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) country/region code. By default, the API uses "US" as the countryHint. To remove this behavior, you can reset this parameter by setting the value to an empty string: `countryHint = ""`.
For example, "Impossible" is common to both English and French and if given with limited context the response will be based on the "US" country/region hint. If the origin of the text is known to be coming from France that can be given as a hint.
communication-services Getting Started With Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/getting-started-with-teams-embed.md
zone_pivot_groups: acs-plat-ios-android
Get started with Azure Communication Services by using the Communication Services Teams Embed SDK to add Teams meetings to your app. - ::: zone pivot="platform-android" [!INCLUDE [Teams Embed with Android](./includes/get-started-android.md)] ::: zone-end
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
zone_pivot_groups: acs-azp-java-net-python-csharp-js
## Troubleshooting
-Common questions and issues:
+Common Questions and Issues:
- Purchasing phone numbers is supported in the US only. To purchase phone numbers, ensure that:
  - The associated Azure subscription billing address is located in the United States. You cannot move a resource to another subscription at this time.
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md Binary files differ
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/zone-redundancy.md Binary files differ
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
Using Azure Synapse Link, you can now build no-ETL HTAP solutions by directly li
When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created based on the operational data in your container. This column store is persisted separately from the row-oriented transactional store for that container. The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You don't need the change feed or ETL to sync the data.
-### Column store for analytical workloads on operational data
+## Column store for analytical workloads on operational data
Analytical workloads typically involve aggregations and sequential scans of selected fields. By storing the data in a column-major order, the analytical store allows a group of values for each field to be serialized together. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets.
The following image shows transactional row store vs. analytical column store in
:::image type="content" source="./media/analytical-store-introduction/transactional-analytical-data-stores.png" alt-text="Transactional row store Vs analytical column store in Azure Cosmos DB" border="false":::
-### Decoupled performance for analytical workloads
+## Decoupled performance for analytical workloads
There is no impact on the performance of your transactional workloads due to analytical queries, as the analytical store is separate from the transactional store. Analytical store does not need separate request units (RUs) to be allocated.
-### Auto-Sync
+## Auto-Sync
Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, and deletes to operational data are automatically synced from the transactional store to the analytical store in near real time. Auto-sync latency is usually within 2 minutes. In the case of a shared throughput database with a large number of containers, the auto-sync latency of individual containers could be higher and take up to 5 minutes. We would like to learn more about how this latency fits your scenarios. For that, please reach out to the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com).
-The auto-sync capability along with analytical store provides the following key benefits:
+At the end of each execution of the automatic sync process, your transactional data will be immediately available for Azure Synapse Analytics runtimes:
-### Scalability & elasticity
+* Azure Synapse Analytics Spark pools can read all data, including the most recent updates, through Spark tables, which are updated automatically, or via the `spark.read` command, which always reads the latest state of the data (see the sketch below).
+
+* Azure Synapse Analytics SQL serverless pools can read all data, including the most recent updates, through views, which are updated automatically, or via `SELECT` together with the `OPENROWSET` command, which always reads the latest state of the data.
+
+> [!NOTE]
+> Your transactional data will be synchronized to analytical store even if your transactional TTL is smaller than 2 minutes.
+
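For example, in an Azure Synapse notebook, a Spark pool can load a container's analytical store into a DataFrame. The following is a minimal sketch; the `spark` session is the one provided by the Synapse notebook, and the linked service and container names are placeholders.

```python
# Read the analytical store (not the transactional store) into a DataFrame.
df = spark.read.format("cosmos.olap") \
    .option("spark.synapse.linkedService", "<cosmos-linked-service>") \
    .option("spark.cosmos.container", "<container-name>") \
    .load()

# The DataFrame reflects the latest auto-synced state of the analytical store.
df.printSchema()
print(df.count())
```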
+## Scalability & elasticity
By using horizontal partitioning, Azure Cosmos DB transactional store can elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it is 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store.
-### <a id="analytical-schema"></a>Automatically handle schema updates
+## <a id="analytical-schema"></a>Automatically handle schema updates
Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. With the auto-sync capability, Azure Cosmos DB manages the schema inference over the latest updates from the transactional store. It also manages the schema representation in the analytical store out of the box, which includes handling nested data types. As your schema evolves, and new properties are added over time, the analytical store automatically presents a unionized schema across all historical schemas in the transactional store.
-#### Schema constraints
+### Schema constraints
The following constraints are applicable on the operational data in Azure Cosmos DB when you enable analytical store to automatically infer and represent the schema correctly:
The following constraints are applicable on the operational data in Azure Cosmos
* Spark pools in Azure Synapse will represent these columns as `undefined`.
* SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
-#### Schema representation
+### Schema representation
There are two modes of schema representation in the analytical store. These modes have tradeoffs between the simplicity of a columnar representation, handling the polymorphic schemas, and simplicity of query experience:
There are two modes of schema representation in the analytical store. These mode
* Full fidelity schema representation

> [!NOTE]
-> For SQL (Core) API accounts, when analytical store is enabled, the default schema representation in the analytical store is well-defined. Whereas for Azure Cosmos DB API for MongoDB accounts, the default schema representation in the analytical store is a full fidelity schema representation. If you have scenarios requiring a different schema representation than the default offering for each of these APIs, reach out to the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com) to enable it.
+> For SQL (Core) API accounts, when analytical store is enabled, the default schema representation in the analytical store is well-defined. Whereas for Azure Cosmos DB API for MongoDB accounts, the default schema representation in the analytical store is a full fidelity schema representation.
**Well-defined schema representation**
Here is a map of all the property data types and their suffix representations in
|ObjectId |".objectId" | ObjectId("5f3f7b59330ec25c132623a2")|
|Document |".object" | {"a": "a"}|
-### Cost-effective archival of historical data
+## Cost-effective archival of historical data
Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios, thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to the analytical store with different data layouts. Because the analytical store is optimized for storage cost compared to the transactional store, you can retain much longer horizons of operational data for historical analysis. After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure the 'Transactional Store Time to Live (Transactional TTL)' property to have records automatically deleted from the transactional store after a certain time period. Similarly, the 'Analytical Store Time To Live (Analytical TTL)' property allows you to manage the lifecycle of data retained in the analytical store independently from the transactional store. By enabling analytical store and configuring TTL properties, you can seamlessly tier and define the data retention period for the two stores.
-### Global Distribution
+> [!NOTE]
+> Currently, analytical store doesn't support backup and restore, so your backup policy can't rely on analytical store. For more information, check the limitations section of [this](synapse-link.md#limitations) document. It's important to note that the data in the analytical store has a different schema than what exists in the transactional store. While you can generate snapshots of your analytical store data at no RU cost, we can't guarantee that these snapshots can be used to backfill the transactional store. This process isn't supported.
+
+## Global Distribution
If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions of that account. Any changes to operational data are globally replicated in all regions. You can run analytical queries effectively against the nearest regional copy of your data in Azure Cosmos DB.
-### Security
+## Security
Authentication with the analytical store is the same as with the transactional store for a given database. You can use primary or read-only keys for authentication. You can leverage a linked service in Synapse Studio to avoid pasting the Azure Cosmos DB keys in Spark notebooks. Access to this linked service is available to anyone who has access to the workspace.
-### Support for multiple Azure Synapse Analytics runtimes
+## Support for multiple Azure Synapse Analytics runtimes
The analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute run-times. The storage technology is self-managed to optimize your analytics workloads without manual efforts. By decoupling the analytical storage system from the analytical compute system, data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. As of today, Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. > [!NOTE]
-> You can only read from analytical store using Azure Synapse Analytics run time. You can write the data back to your transactional store as a serving layer.
+> You can only read from analytical store using Azure Synapse Analytics runtimes; the analytical store is read-only from the customer's perspective. You can write data back to the Cosmos DB transactional store by using an Azure Synapse Analytics Spark pool.
## <a id="analytical-store-pricing"></a> Pricing
Some points to consider:
* You can achieve longer retention of your operational data in the analytical store by setting analytical TTL >= transactional TTL at the container level.
* The analytical store can be made to mirror the transactional store by setting analytical TTL = transactional TTL.
-When you enable analytical store on a container:
+How to enable analytical store on a container:
-* From the Azure portal, the analytical TTL option is set to the default value of -1. You can change this value to 'n' seconds, by navigating to container settings under Data Explorer.
+* From the Azure portal, the analytical TTL option, when turned on, is set to the default value of -1. You can change this value to 'n' seconds by navigating to container settings under Data Explorer.
-* From the Azure SDK or PowerShell or CLI, the analytical TTL option can be enabled by setting it to either -1 or 'n'.
+* From the Azure Management SDK, Azure Cosmos DB SDKs, PowerShell, or CLI, the analytical TTL option can be enabled by setting it to either -1 or 'n' seconds, as sketched below.
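As a sketch of the SDK path, the following uses the `azure-mgmt-cosmosdb` Python package to create a SQL API container with analytical TTL enabled. The names are placeholders, and the exact parameter shape should be verified against the SDK version you're using.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.sql_resources.begin_create_update_sql_container(
    resource_group_name="<resource-group>",
    account_name="<account-name>",
    database_name="<database-name>",
    container_name="<container-name>",
    create_update_sql_container_parameters={
        "location": "<region>",
        "properties": {
            "resource": {
                "id": "<container-name>",
                "partitionKey": {"paths": ["/id"], "kind": "Hash"},
                # -1 retains analytical data indefinitely; use 'n' seconds otherwise.
                "analyticalStorageTtl": -1,
            },
            "options": {},
        },
    },
)
poller.result()
```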
To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md#create-analytical-ttl).
To learn more, see the following docs:
* [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
-* [Frequently asked questions about Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.md)
+* [Frequently asked questions about Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.yml)
* [Azure Synapse Link for Azure Cosmos DB Use cases](synapse-link-use-cases.md)
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
To learn more, see the following docs:
* [Azure Cosmos DB analytical store overview.](analytical-store-introduction.md)
-* [Frequently asked questions about Synapse Link for Azure Cosmos DB.](synapse-link-frequently-asked-questions.md)
+* [Frequently asked questions about Synapse Link for Azure Cosmos DB.](synapse-link-frequently-asked-questions.yml)
* [Apache Spark in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-concepts.md).
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet.md Binary files differ
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-consistency.md Binary files differ
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator release notes with a list of fea
## Release notes
+### 2.11.13 (21 April 2021)
+
+ - This release updates the local Data Explorer content to the latest Azure portal version and adds a new MongoDB endpoint configuration, "4.0".
+
### 2.11.11 (22 February 2021)
- This release updates the local Data Explorer content to latest Azure portal version.
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/managed-identity-based-authentication.md
-
+ Title: How to use a system-assigned managed identity to access Azure Cosmos DB data description: Learn how to configure an Azure Active Directory (Azure AD) system-assigned managed identity (managed service identity) to access keys from Azure Cosmos DB.
cosmos-db Mongodb Introduction Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-introduction-experiment.md
+
+ Title: Introduction to Azure Cosmos DB API for MongoDB
+description: Learn how you can use Azure Cosmos DB to store and query massive amounts of data using Azure Cosmos DB's API for MongoDB.
+++ Last updated : 04/20/2021+++++
+# Azure Cosmos DB's API for MongoDB
+
+The Azure Cosmos DB API for MongoDB makes it easy to use Cosmos DB as if it were a MongoDB database. You can leverage your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB account's connection string.
+
+The API for MongoDB has the following added benefits of being built on [Azure Cosmos DB](introduction.md):
+
+* **Instantaneous scalability**: By enabling the [Autoscale](provision-throughput-autoscale.md) feature, your database can scale up/down with zero warmup period.
+* **Fully managed sharding**: Managing database infrastructure is hard and time-consuming. The API for MongoDB manages the infrastructure for you, including sharding, so you don't have to worry about management and can focus your time on building applications for your users.
+* **Up to five 9's of availability**: [99.999% availability](high-availability.md) is easily configurable to ensure your data is always there for you.
+* **Cost efficient, unlimited scalability**: Sharded collections can scale to any size, in a cost-efficient manner, in increments as small as 1/100th of a VM due to economies of scale and resource governance.
+* **Upgrades take seconds**: All API versions are contained within one codebase, making version changes as simple as [flipping a switch](mongodb-version-upgrade.md), with zero downtime.
+* **Synapse Link analytics**: Analyze your real-time data using the fully isolated [Azure Synapse analytical store](synapse-link.md) for fast and cheap analytics queries. A simple checkbox ensures your data is available in Synapse with no ETL (extract-transform-load).
+
+> [!NOTE]
+> [You can use Azure Cosmos DB API for MongoDB for free with the free tier!](how-pricing-works.md) With Azure Cosmos DB free tier, you'll get the first 400 RU/s and 5 GB of storage in your account for free, applied at the account level.
++
+## How the API works
+
+Azure Cosmos DB API for MongoDB implements the wire protocol for MongoDB. This implementation allows transparent compatibility with native MongoDB client SDKs, drivers, and tools. Azure Cosmos DB does not host the MongoDB database engine. Any MongoDB client driver compatible with the API version you are using should be able to connect, with no special configuration.
+
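For example, here's a minimal sketch of connecting with the `pymongo` driver; the connection string placeholder is the one shown for your API for MongoDB account in the Azure portal.

```python
from pymongo import MongoClient

# The account connection string comes from the Azure portal.
client = MongoClient("<your-api-for-mongodb-connection-string>")
db = client["mydatabase"]

# Standard MongoDB operations work unchanged against the wire protocol.
db.items.insert_one({"name": "sample", "qty": 1})
print(db.items.find_one({"name": "sample"}))
```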
+MongoDB feature compatibility:
+
+Azure Cosmos DB API for MongoDB is compatible with the following MongoDB server versions:
+- [Version 4.0](mongodb-feature-support-40.md)
+- [Version 3.6](mongodb-feature-support-36.md)
+- [Version 3.2](mongodb-feature-support.md)
+
+All the API for MongoDB versions run on the same codebase, making upgrades a simple task that can be completed in seconds with zero downtime. Azure Cosmos DB simply flips a few feature flags to go from one version to another. The feature flags also enable continued support for older API versions such as 3.2 and 3.6. You can choose the server version that works best for you.
++
+## What you need to know to get started
+
+* You are not billed for virtual machines in a cluster. [Pricing](how-pricing-works.md) is based on throughput in request units (RUs) configured on a per database or per collection basis. The first 400 RUs per second are free with [Free Tier](how-pricing-works.md).
+
+* There are three ways to deploy Azure Cosmos DB API for MongoDB:
+ * [Provisioned throughput](set-throughput.md): Set a RU/sec number and change it manually. This model best fits consistent workloads.
+ * [Autoscale](provision-throughput-autoscale.md): Set an upper bound on the throughput you need. Throughput instantly scales to match your needs. This model best fits workloads that change frequently and optimizes their costs.
+ * [Serverless](serverless.md) (preview): Only pay for the throughput you use, period. This model best fits dev/test workloads.
+
+* Sharded cluster performance is dependent on the shard key you choose when creating a collection. Choose a shard key carefully to ensure that your data is evenly distributed across shards, as shown in the sketch below.
+
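As a sketch, the following uses `pymongo` to issue the `shardCollection` command when creating a collection; the database, collection, and shard key names are illustrative placeholders.

```python
from pymongo import MongoClient

client = MongoClient("<your-api-for-mongodb-connection-string>")

# Creates the collection with the given shard (partition) key; pick a key
# with high cardinality so data spreads evenly across shards.
client["mydatabase"].command(
    "shardCollection", "mydatabase.orders", key={"user_id": "hashed"}
)
```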
+## Quickstart
+
+* [Migrate an existing MongoDB Node.js web app](create-mongodb-nodejs.md).
+* [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md)
+* [Build a console app using Azure Cosmos DB's API for MongoDB and Java SDK](create-mongodb-java.md)
+
+## Next steps
+
+* Follow the [Connect a MongoDB application to Azure Cosmos DB](connect-mongodb-account.md) tutorial to learn how to get your account connection string information.
+* Follow the [Use Studio 3T with Azure Cosmos DB](mongodb-mongochef.md) tutorial to learn how to create a connection between your Cosmos database and MongoDB app in Studio 3T.
+* Follow the [Import MongoDB data into Azure Cosmos DB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) tutorial to import your data to a Cosmos database.
+* Connect to a Cosmos account using [Robo 3T](mongodb-robomongo.md).
+* Learn how to [Configure read preferences for globally distributed apps](../cosmos-db/tutorial-global-distribution-mongodb.md).
+* Find the solutions to commonly found errors in our [Troubleshooting guide](mongodb-troubleshoot.md)
cosmos-db Mongodb Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-introduction.md
description: Learn how you can use Azure Cosmos DB to store and query massive am
Previously updated : 03/02/2021 Last updated : 04/21/2021
+adobe-target: true
+adobe-target-activity: DocsExp– 396298–A/B–Docs–IntroToCosmosDBAPIforMongoDB-Revamp–FY21Q4
+adobe-target-experience: Experience B
+adobe-target-content: ./mongodb-introduction-experiment
+ # Azure Cosmos DB's API for MongoDB [!INCLUDE[appliesto-mongodb-api](includes/appliesto-mongodb-api.md)]
cosmos-db Synapse Link Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-frequently-asked-questions.md
- Title: Frequently asked questions about Azure Synapse Link for Azure Cosmos DB
-description: Get answers to frequently asked questions about Synapse Link for Azure Cosmos DB in areas such as billing, analytical store, security, time to live on analytical store.
---- Previously updated : 11/30/2020----
-# Frequently asked questions about Azure Synapse Link for Azure Cosmos DB
-
-Azure Synapse Link for Azure Cosmos DB creates a tight integration between Azure Cosmos DB and Azure Synapse Analytics. It enables customers to run near real-time analytics over their operational data with full performance isolation from their transactional workloads and without an ETL pipeline. This article answers commonly asked questions about Synapse Link for Azure Cosmos DB.
-
-## General FAQ
-
-### Is Azure Synapse Link supported for all Azure Cosmos DB APIs?
-
-Azure Synapse Link is supported for the Azure Cosmos DB SQL (Core) API and for the Azure Cosmos DB API for MongoDB.
-
-### Is Azure Synapse Link supported for multi-region Azure Cosmos DB accounts?
-
-Yes, for multi-region Azure Cosmos accounts, the data stored in the analytical store is also globally distributed. Regardless of single write region or multiple write regions, analytical queries performed from Azure Synapse Analytics can be served from the closest local region.
-
-When planning to configure a multi-region Azure Cosmos DB account with analytical store support, it is recommended to have all the necessary regions added at time of account creation.
-
-### Can I choose to enable Azure Synapse Link for only certain region and not all regions in a multi-region account set-up?
-
-When Azure Synapse Link is enabled for a multi-region account, the analytical store is created in all regions. The underlying data is optimized for throughput and transactional consistency in the transactional store.
-
-### Is analytical store supported in all Azure Cosmos DB regions?
-
-Yes.
-
-### Is backup and restore supported for Azure Synapse Link enabled accounts?
-
-For the containers with analytical store turned on, automatic backup and restore of your data in the analytical store is not supported at this time.
-
-When Synapse Link is enabled on a database account, Azure Cosmos DB will continue to automatically [take backups](./online-backup-and-restore.md) of your data in the transactional store (only) of containers at scheduled backup interval, as always. It is important to note that when a container with analytical store turned on is restored to a new account, the container will be restored with only transactional store and no analytical store enabled.
-
-### Can I disable the Azure Synapse Link feature for my Azure Cosmos DB account?
-
-Currently, after the Synapse Link capability is enabled at the account level, you cannot disable it. Understand that you will not have any billing implications if the Synapse Link capability is enabled at the account level and there is no analytical store enabled containers.
-
-If you need to turn off the capability, you have 2 options. The first one is to delete and re-create a new Azure Cosmos DB account, migrating the data if necessary. The second option is to open a support ticket, to get help on a data migration to another account.
-
-### Does analytical store have any impact on Cosmos DB transactional SLAs?
-
-No, there is no impact.
-
-## Azure Cosmos DB analytical store
-
-### Can I enable analytical store on existing containers?
-
-Currently, the analytical store can only be enabled for new containers (both in new and existing accounts).
-
-### Can I disable analytical store on my Azure Cosmos DB containers after enabling it during container creation?
-
-Currently, the analytical store cannot be disabled on an Azure Cosmos DB container after it is enabled during container creation.
-
-### Is analytical store supported for Azure Cosmos DB containers with autoscale provisioned throughput?
-
-Yes, the analytical store can be enabled on containers with autoscale provisioned throughput.
-
-### Is there any effect on Azure Cosmos DB transactional store provisioned RUs?
-
-Azure Cosmos DB guarantees performance isolation between the transactional and analytical workloads. Enabling the analytical store on a container will not impact the RU/s provisioned on the Azure Cosmos DB transactional store. The transactions (read & write) and storage costs for the analytical store will be charged separately. See the [pricing for Azure Cosmos DB analytical store](analytical-store-introduction.md#analytical-store-pricing) for more details.
-
-### Can I restrict network access to Azure Cosmos DB analytical store?
-
-Yes you can configure a [managed private endpoint](analytical-store-private-endpoints.md) and restrict network access of analytical store to Azure Synapse managed virtual network. Managed private endpoints establish a private link to your analytical store.
-
-You can add both transactional store and analytical store private endpoints to the same Azure Cosmos DB account in an Azure Synapse Analytics workspace. If you only want to run analytical queries, you may only want to enable the analytical private endpoint in Synapse Analytics workspace.
-
-### Can I use customer-managed keys with the Azure Cosmos DB analytical store?
-
-You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner.
-To use customer-managed keys with the analytical store, you need to use your Azure Cosmos DB account's system-assigned managed identity in your Azure Key Vault access policy. This is described [here](how-to-setup-cmk.md#using-managed-identity). You should then be able to enable the analytical store on your account.
-
-### Are delete and update operations on the transactional store reflected in the analytical store?
-
-Yes, deletes and updates to the data in the transactional store will be reflected in the analytical store. You can configure the Time to Live (TTL) on the container to include historical data so that the analytical store retains all versions of items that satisfy the analytical TTL criteria. See the [overview of analytical TTL](analytical-store-introduction.md#analytical-ttl) for more details.
-
-### Can I connect to analytical store from analytics engines other than Azure Synapse Analytics?
-
-You can only access and run queries against the analytical store using the various run-times provided by Azure Synapse Analytics. The analytical store can be queried and analyzed using:
-
-* Synapse Spark with full support for Scala, Python, SparkSQL, and C#. Synapse Spark is central to data engineering and science scenarios
-* Serverless SQL pool with T-SQL language and support for familiar BI tools (For example, Power BI Premium, etc.)
-
-### Can I connect to analytical store from Synapse SQL provisioned?
-
-At this time, the analytical store cannot be accessed from Synapse SQL provisioned.
-
-### Can I write back the query aggregation results from Synapse back to the analytical store?
-
-Analytical store is a read-only store in an Azure Cosmos DB container. So, you cannot directly write back the aggregation results to the analytical store, but can write them to the Azure Cosmos DB transactional store of another container, which can later be leveraged as a serving layer.
-
-### Is the autosync replication from transactional store to the analytical store asynchronous or synchronous and what are the latencies?
-
-Auto-sync latency is usually within 2 minutes. In cases of shared throughput database with a large number of containers, auto-sync latency of individual containers could be higher and take up to 5 minutes. We would like to learn more how this latency fits your scenarios. For that, please reach out to the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com).
-
-### Are there any scenarios where the items from the transactional store are not automatically propagated to the analytical store?
-
-If specific items in your container violate the [well-defined schema for analytics](analytical-store-introduction.md#analytical-schema), they will not be included in the analytical store. If you have scenarios blocked by well-defined schema for analytics, email the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com) for help.
-
-### Can I partition the data in analytical store differently from transactional store?
-
-The data in analytical store is partitioned based on the horizontal partitioning of shards in the transactional store. Currently, you cannot choose a different partitioning strategy for the analytical store.
-
-### Can I customize or override the way transactional data is transformed into columnar format in the analytical store?
-
-Currently you can't transform the data items when they are automatically propagated from the transactional store to analytical store. If you have scenarios blocked by this limitation, email the [Azure Cosmos DB team](mailto:cosmosdbsynapselink@microsoft.com).
-
-### Is analytical store supported by Terraform?
-
-Currently Terraform doesn't support analytical store containers. Please check [Terraform GitHub Issues](https://github.com/hashicorp/terraform/issues) for more information.
-
-## Analytical Time to live (TTL)
-
-### Is TTL for analytical data supported at both container and item level?
-
-At this time, TTL for analytical data can only be configured at container level and there is no support to set analytical TTL at item level.
-
-### After setting the container level analytical TTL on an Azure Cosmos DB container, can I change to a different value later?
-
-Yes, analytical TTL can be updated to any valid value. See the [Analytical TTL](analytical-store-introduction.md#analytical-ttl) article for more details about analytical TTL.
-
-### Can I update or delete an item from the analytical store after it has been TTL'd out from the transactional store?
-
-All transactional updates and deletes are copied to the analytical store but if the item has been purged from the transactional store, then it cannot be updated in the analytical store. To learn more, see the [Analytical TTL](analytical-store-introduction.md#analytical-ttl) article.
-
-## Billing
-
-### What is the billing model of Azure Synapse Link for Azure Cosmos DB?
-
-The billing model of Azure Synapse Link includes the costs incurred by using the Azure Cosmos DB analytical store and the Synapse runtime. To learn more, see the [Azure Cosmos DB analytical store pricing](analytical-store-introduction.md#analytical-store-pricing) and [Azure Synapse Analytics pricing](https://azure.microsoft.com/pricing/details/synapse-analytics/) articles.
-
-### What is the billing impact if I enable Synapse Link in my Azure Cosmos DB database account?
-
-None. You will only be charged when you create an analytical store enabled container and start to load data.
--
-## Security
-
-### What are the ways to authenticate with the analytical store?
-
-Authentication with the analytical store is the same as a transactional store. For a given database, you can authenticate with the primary or read-only key. You can leverage linked service in Azure Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. Access to this Linked Service is available for everyone who has access to the workspace.
-
-When using Synapse serverless SQL pools, you can query the Azure Cosmos DB analytical store by pre-creating SQL credentials storing the account keys and referencing these in the OPENROWSET function. To learn more, see [Query with a serverless SQL pool in Azure Synapse Link](../synapse-analytics/sql/query-cosmos-db-analytical-store.md) article.
-
-## Synapse run-times
-
-### What are the currently supported Synapse run-times to access Azure Cosmos DB analytical store?
-
-|Azure Synapse runtime |Current support |
-|||
-|Azure Synapse Spark pools | Read, Write (through transactional store), Table, Temporary View |
-|Azure Synapse serverless SQL pool | Read, View |
-|Azure Synapse SQL Provisioned | Not available |
-
-### Do my Azure Synapse Spark tables sync with my Azure Synapse serverless SQL pool tables the same way they do with Azure Data Lake?
-
-Currently, this feature is not available.
-
-### Can I do Spark structured streaming from analytical store?
-
-Currently Spark structured streaming support for Azure Cosmos DB is implemented using the change feed functionality of the transactional store and it's not yet supported from analytical store.
-
-### Is streaming supported?
-
-We do not support streaming of data from the analytical store.
-
-## Azure Synapse Studio
-
-### In the Azure Synapse Studio, how do I recognize if I'm connected to an Azure Cosmos DB container with the analytics store enabled?
-
-An Azure Cosmos DB container enabled with analytical store has the following icon:
--
-A transactional store container will be represented with the following icon:
-
-
-### How do you pass Azure Cosmos DB credentials from Azure Synapse Studio?
-
-Currently Azure Cosmos DB credentials are passed while creating the linked service by the user who has access to the Azure Cosmos DB databases. Access to that store is available to other users who have access to the workspace.
-
-## Next steps
-
-* Learn about the [benefits of Azure Synapse Link](synapse-link.md#synapse-link-benefits)
-
-* Learn about the [integration between Azure Synapse Link and Azure Cosmos DB](synapse-link.md#synapse-link-integration).
cosmos-db Synapse Link Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-use-cases.md
To learn more, see the following docs:
* [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
-* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.md)
+* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.yml)
* [Apache Spark in Azure Synapse Analytics](../synapse-analytics/spark/apache-spark-concepts.md)
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link.md
To learn more, see the following docs:
* [What is supported in Azure Synapse Analytics run time](../synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md)
-* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.md)
+* [Frequently asked questions about Azure Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.yml)
* [Azure Synapse Link for Azure Cosmos DB Use cases](synapse-link-use-cases.md)
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md Binary files differ
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-export-acm-data.md Binary files differ
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Title: Assign roles to Azure Enterprise Agreement service principal names
-description: This article helps you assign roles to service principal names using PowerShell and REST APIs.
+description: This article helps you assign roles to service principal names by using PowerShell and REST APIs.
tags: billing
# Assign roles to Azure Enterprise Agreement service principal names
-You can manage your Enterprise Agreement (EA) enrollment in the [Azure Enterprise portal](https://ea.azure.com/). You can create different roles to manage your organization, view costs, and create subscriptions. This article helps you automate some of those tasks using Azure PowerShell and REST APIs with Azure service principal names (SPNs).
+You can manage your Enterprise Agreement (EA) enrollment in the [Azure Enterprise portal](https://ea.azure.com/). You can create different roles to manage your organization, view costs, and create subscriptions. This article helps you automate some of those tasks by using Azure PowerShell and REST APIs with Azure service principal names (SPNs).
Before you begin, ensure that you're familiar with the following articles:
Before you begin, ensure that you're familiar with the following articles:
## Create and authenticate your service principal
-To automate EA actions using an SPN, you need to create an Azure Active Directory (Azure AD) application. It can authenticate in an automated manner. Read the following articles and following the steps in them to create and authenticate your service principal.
+To automate EA actions by using an SPN, you need to create an Azure Active Directory (Azure AD) application. It can authenticate in an automated manner.
-1. [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)
-2. [Get tenant and app ID values for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in)
+Follow the steps in these articles to create and authenticate your service principal.
-Here's an example screenshot showing application registration.
+- [Create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)
+- [Get tenant and app ID values for signing in](../../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in)
+
+Here's an example of the application registration page.
:::image type="content" source="./media/assign-roles-azure-service-principals/register-an-application.png" alt-text="Screenshot showing Register an application." lightbox="./media/assign-roles-azure-service-principals/register-an-application.png" :::
-### Find your SPN and Tenant ID
+### Find your SPN and tenant ID
-You also need the Object ID of the SPN and the Tenant ID of the app. You need the information for permission assignment operations in later sections.
+You also need the object ID of the SPN and the tenant ID of the app. You need this information for permission assignment operations later in this article.
-You can find the Tenant ID of the Azure AD app on the overview page for the application. To find it in the Azure portal, navigate to Azure Active Directory and select **Enterprise applications**. Search for the app.
+1. Open Azure Active Directory, and then select **Enterprise applications**.
+1. Find your app in the list.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/enterprise-application.png" alt-text="Screenshot showing an example enterprise application." lightbox="./media/assign-roles-azure-service-principals/enterprise-application.png" :::
-Select the app. Here's an example showing the Application ID and Object ID.
+1. Select the app to find the application ID and object ID:
+ :::image type="content" source="./media/assign-roles-azure-service-principals/application-id-object-id.png" alt-text="Screenshot showing an application ID and object ID for an enterprise application." lightbox="./media/assign-roles-azure-service-principals/application-id-object-id.png" :::
-You can find the Tenant ID on the Microsoft Azure AD Overview page.
+1. Go to the Microsoft Azure AD **Overview** page to find the tenant ID.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/tenant-id.png" alt-text="Screenshot showing the tenant ID." lightbox="./media/assign-roles-azure-service-principals/tenant-id.png" :::
-Your principal tenant ID is also referred to as Principal ID, SPN, and Object ID in various locations. The value of your Azure AD tenant ID looks like a GUID with the following format: `11111111-1111-1111-1111-111111111111`.
+>[!NOTE]
+>Your tenant ID might be called a principal ID, SPN, or object ID in other locations. The value of your Azure AD tenant ID looks like a GUID with the following format: `11111111-1111-1111-1111-111111111111`.
## Permissions that can be assigned to the SPN
-For the next steps, you give permission to the Azure AD app to do actions using an EA role. You can assign only the following roles to the SPN. The role definition ID, exactly as shown, is used later in assignment steps.
+Later in this article, you'll give permission to the Azure AD app to act by using an EA role. You can assign only the following roles to the SPN, and you need the role definition ID, exactly as shown.
| Role | Actions allowed | Role definition ID | | | | |
For the next steps, you give permission to the Azure AD app to do actions using
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a |
| SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |
-
-- An enrollment reader can be assigned to an SPN only by a user with enrollment writer role.
-- A department reader can be assigned to an SPN only by a user that has enrollment writer role or department writer role.
-- A subscription creator role can be assigned to an SPN only by a user that is the Account Owner of the enrollment account. The role isn't shown in the EA portal. It's only created by programmatic means and is only for programmatic use.
-- The EA purchaser role isn't shown in the EA portal. It's only created by programmatic means and is only for programmatic use.
+- An EnrollmentReader role can be assigned to an SPN only by a user who has an enrollment writer role.
+- A DepartmentReader role can be assigned to an SPN only by a user who has an enrollment writer or department writer role.
+- A SubscriptionCreator role can be assigned to an SPN only by a user who is the owner of the enrollment account. The role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
+- The EA purchaser role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
## Assign enrollment account role permission to the SPN
-Read the [Role Assignments - Put](/rest/api/billing/2019-10-01-preview/roleassignments/put) REST API article.
-
-While reading the article, select **Try it** to get started using the SPN.
--
-Sign in with your account into the tenant that has access to the enrollment where you want to assign access.
-
-Provide the following parameters as part of the API request.
-
-**billingAccountName**
+1. Read the [Role Assignments - Put](/rest/api/billing/2019-10-01-preview/roleassignments/put) REST API article. While you read the article, select **Try it** to get started by using the SPN.
-The parameter is the Billing account ID. You can find it in the Azure portal on the Cost Management + Billing overview page.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/put-try-it.png" alt-text="Screenshot showing the Try It option in the Put article." lightbox="./media/assign-roles-azure-service-principals/put-try-it.png" :::
+1. Use your account credentials to sign in to the tenant with the enrollment access that you want to assign.
-**billingRoleAssignmentName**
+1. Provide the following parameters as part of the API request.
-The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command.
+ - `billingAccountName`: This parameter is the **Billing account ID**. You can find it in the Azure portal on the **Cost Management + Billing** overview page.
-Or, you can use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/billing-account-id.png" alt-text="Screenshot showing Billing account ID." lightbox="./media/assign-roles-azure-service-principals/billing-account-id.png" :::
-**api-version**
+ - `billingRoleAssignmentName`: This parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command. You can also use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
-Use the **2019-10-01-preview** version.
+ - `api-version`: Use the **2019-10-01-preview** version. Use the sample request body at [Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/roleassignments/put#examples).
-The request body has JSON code that you need to use.
+ The request body has JSON code with three parameters that you need to use.
-Use the sample request body at [Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/roleassignments/put#examples).
+ | Parameter | Where to find it |
+ | | |
+ | `properties.principalId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
+ | `properties.principalTenantId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` |
-There are three parameters that you need to use as part of the JSON.
+ The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
-| Parameter | Where to find it |
-| | |
-| properties.principalId | See [Find your SPN and Tenant ID](#find-your-spn-and-tenant-id). |
-| properties.principalTenantId | See [Find your SPN and Tenant ID](#find-your-spn-and-tenant-id). |
-| properties.roleDefinitionId | "/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/24f8edb6-1668-4659-b5e2-40bb5f3a7d7e" |
+ Notice that `24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` is a billing role definition ID for an EnrollmentReader.
-The Billing Account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
+1. Select **Run** to start the command.
-Notice that `24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` is a billing role definition ID for a EnrollmentReader.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/roleassignments-put-try-it-run.png" alt-text="Screenshot showing an example role assignment put Try It with example information ready to run." lightbox="./media/assign-roles-azure-service-principals/roleassignments-put-try-it-run.png" :::
-Select **Run** to start the command.
+ A `200 OK` response shows that the SPN was successfully added.
+Now you can use the SPN to automatically access EA APIs. The SPN has the EnrollmentReader role.
-A `200 OK` response shows that the SPN was successfully added.
+## Assign EA Purchaser role permission to the SPN
-Now you can use the SPN (Azure AD App with the object ID) to access EA APIs in an automated manner. The SPN has the EnrollmentReader role.
-
-## Assign EA Purchaser role permission to the SPN
-
-For the EA purchaser role, use the same steps for the enrollment reader. Specify the `roleDefinitionId`, using the following example.
+For the EA purchaser role, follow the same steps that you used for the enrollment reader role. Specify the `roleDefinitionId` using the following example:
`"/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/ da6647fb-7651-49ee-be91-c43c4877f0c4"`
-
-
## Assign the department reader role to the SPN
-Before you begin, read the [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put) REST API article.
-
-While reading the article, select **Try it**.
--
-Sign in with your account into the tenant that has access to the enrollment where you want to assign access.
+1. Read the [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put) REST API article. While you read the article, select **Try it**.
-Provide the following parameters as part of the API request.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" alt-text="Screenshot showing the Try It option in the Enrollment Department Role Assignments Put article." lightbox="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" :::
-**billingAccountName**
+1. Use your account credentials to sign in to the tenant with the enrollment access that you want to assign.
-It's the Billing account ID. You can find it in the Azure portal on the Cost Management + Billing overview page.
+1. Provide the following parameters as part of the API request.
+ - `billingAccountName`: This parameter is the **Billing account ID**. You can find it in the Azure portal on the **Cost Management + Billing** overview page.
-**billingRoleAssignmentName**
+ :::image type="content" source="./media/assign-roles-azure-service-principals/billing-account-id.png" alt-text="Screenshot showing Billing account ID." lightbox="./media/assign-roles-azure-service-principals/billing-account-id.png" :::
-The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command.
+ - `billingRoleAssignmentName`: This parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command. You can also use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
-Or, you can use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
+ - `departmentName`: This parameter is the department ID. You can see department IDs in the Azure portal on the **Cost Management + Billing** > **Departments** page.
-**departmentName**
+ For this example, we used the ACE department. The ID for the example is `84819`.
-It's the Department ID. You can see department IDs in the Azure portal. Navigate to Cost Management + Billing > **Departments**.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/department-id.png" alt-text="Screenshot showing an example department ID." lightbox="./media/assign-roles-azure-service-principals/department-id.png" :::
-For this example, we used the ACE department. The ID for the example is `84819`.
+ - `api-version`: Use the **2019-10-01-preview** version. Use the sample at [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put).
+ The request body has JSON code with three parameters that you need to use.
-**api-version**
+ | Parameter | Where to find it |
+ | | |
+ | `properties.principalId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
+ | `properties.principalTenantId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/db609904-a47f-4794-9be8-9bd86fbffd8a` |
-Use the **2019-10-01-preview** version.
+ The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
-The request body has JSON code that you need to use.
+ The billing role definition ID of `db609904-a47f-4794-9be8-9bd86fbffd8a` is for a department reader.
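For illustration, here's a hedged Azure CLI sketch of the same request. The department-scoped URI pattern is assumed from the linked reference, and all values are placeholders:

```azurecli
# Placeholder values - replace before running. Note the department scope in the URI.
az rest --method put \
  --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/1111111/departments/84819/billingRoleAssignments/$(uuidgen)?api-version=2019-10-01-preview" \
  --body '{
    "properties": {
      "principalId": "<SPN object ID>",
      "principalTenantId": "<tenant ID>",
      "roleDefinitionId": "/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/db609904-a47f-4794-9be8-9bd86fbffd8a"
    }
  }'
```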
-Use the sample at [Enrollment Department Role Assignments - Put](/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put). There are three parameters that you need to use as part of the JSON.
+1. Select **Run** to start the command.
-| Parameter | Where to find it |
-| | |
-| properties.principalId | See [Find your SPN and Tenant ID](#find-your-spn-and-tenant-id). |
-| properties.principalTenantId | See [Find your SPN and Tenant ID](#find-your-spn-and-tenant-id). |
-| properties.roleDefinitionId | "/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/db609904-a47f-4794-9be8-9bd86fbffd8a" |
+ :::image type="content" source="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it-run.png" alt-text="Screenshot showing an example Enrollment Department Role Assignments - Put REST Try It with example information ready to run." lightbox="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it-run.png" :::
-The Billing Account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
+ A `200 OK` response shows that the SPN was successfully added.
-The billing role definition ID of `db609904-a47f-4794-9be8-9bd86fbffd8a` is for a Department Reader.
-
-Select **Run** to start the command.
--
-A `200 OK` response shows that the SPN was successfully added.
-
-Now you can use the SPN (Azure AD App with the object ID) to access EA APIs in an automated manner. The SPN has the DepartmentReader role.
+Now you can use the SPN to automatically access EA APIs. The SPN has the DepartmentReader role.
## Assign the subscription creator role to the SPN
-Read the [Enrollment Account Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) article.
-
-While reading it, select **Try It** to assign the subscription creator role to the SPN.
--
-Sign in with your account into the tenant that has access to the enrollment where you want to assign access.
-
-Provide the following parameters as part of the API request. Read the article at [Enrollment Account Role Assignments - Put - URI Parameters](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put#uri-parameters).
-
-**billingAccountName**
-
-The parameter is the Billing account ID. You can find it in the Azure portal on the Cost Management + Billing overview page.
-
+1. Read the [Enrollment Account Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) article. While you read it, select **Try It** to assign the subscription creator role to the SPN.
-**billingRoleAssignmentName**
+ :::image type="content" source="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" alt-text="Screenshot showing the Try It option in the Enrollment Account Role Assignments Put article." lightbox="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" :::
-The parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command.
+1. Use your account credentials to sign in to the tenant with the enrollment access that you want to assign.
-Or, you can use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
-**enrollmentAccountName**
+1. Provide the following parameters as part of the API request. Read the article at [Enrollment Account Role Assignments - Put - URI Parameters](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put#uri-parameters).
-The parameter is the account ID. Find the account ID for the account name in the Azure portal in Cost Management + Billing in the Enrollment and department scope.
+ - `billingAccountName`: This parameter is the **Billing account ID**. You can find it in the Azure portal on the **Cost Management + Billing** overview page.
-For this example, we used the GTM Test account. The ID is `196987`.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/billing-account-id.png" alt-text="Screenshot showing the Billing account ID." lightbox="./media/assign-roles-azure-service-principals/billing-account-id.png" :::
+ - `billingRoleAssignmentName`: This parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command. You can also use the [Online GUID/UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
-**api-version**
+ - `enrollmentAccountName`: This parameter is the account ID. Find the account ID for the account name in the Azure portal on the **Cost Management + Billing** page.
-Use the **2019-10-01-preview** version.
+ For this example, we used the GTM Test Account. The ID is `196987`.
-The request body has JSON code that you need to use.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/account-id.png" alt-text="Screenshot showing the account ID." lightbox="./media/assign-roles-azure-service-principals/account-id.png" :::
-Use the sample at [Enrollment Department Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put#putenrollmentdepartmentadministratorroleassignment).
+ - `api-version`: Use the **2019-10-01-preview** version. Use the sample at [Enrollment Department Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put#putenrollmentdepartmentadministratorroleassignment).
-There are three parameters that you need to use as part of the JSON.
+ The request body has JSON code with three parameters that you need to use.
-| Parameter | Where to find it |
-| | |
-| properties.principalId | See [Find your SPN and Tenant ID](#find-your-spn-and-tenant-id). |
-| properties.principalTenantId | See [Find your SPN and Tenant ID](#find-your-spn-and-tenant-id). |
-| properties.roleDefinitionId | "/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/196987/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71" |
+ | Parameter | Where to find it |
+ | | |
+ | `properties.principalId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
+ | `properties.principalTenantId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/196987/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71` |
-The Billing Account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
+ The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and the Azure portal.
-The billing role definition ID of `a0bcee42-bf30-4d1b-926a-48d21664ef71` is for the subscription creator role.
+ The billing role definition ID of `a0bcee42-bf30-4d1b-926a-48d21664ef71` is for the subscription creator role.
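For illustration, here's a hedged Azure CLI sketch of this request as well. The enrollment-account-scoped URI pattern is assumed from the linked reference, and all values are placeholders:

```azurecli
# Placeholder values - replace before running. Note the enrollment account scope in the URI.
az rest --method put \
  --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/1111111/enrollmentAccounts/196987/billingRoleAssignments/$(uuidgen)?api-version=2019-10-01-preview" \
  --body '{
    "properties": {
      "principalId": "<SPN object ID>",
      "principalTenantId": "<tenant ID>",
      "roleDefinitionId": "/providers/Microsoft.Billing/billingAccounts/1111111/enrollmentAccounts/196987/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71"
    }
  }'
```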
-Select **Run** to start the command.
+1. Select **Run** to start the command.
+ :::image type="content" source="./media/assign-roles-azure-service-principals/enrollment-account-role-assignments-put-try-it.png" alt-text="Screenshot showing the Try It option in the Enrollment Account Role Assignments - Put article" lightbox="./media/assign-roles-azure-service-principals/enrollment-account-role-assignments-put-try-it.png" :::
-A `200 OK` response shows that the SPN has been successfully added.
+ A `200 OK` response shows that the SPN has been successfully added.
-Now you can use the SPN (Azure AD App with the object ID) to access EA APIs in an automated manner. The SPN has the SubscriptionCreator role.
+Now you can use the SPN to automatically access EA APIs. The SPN has the SubscriptionCreator role.
## Next steps
-- Learn more about [Azure EA portal administration](ea-portal-administration.md).
+Learn more about [Azure EA portal administration](ea-portal-administration.md).
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
Title: Global parameters
description: Set global parameters for each of your Azure Data Factory environments Last updated 03/15/2021
data-factory Author Management Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-management-hub.md
Title: Management hub
description: Manage your connections, source control configuration and global authoring properties in the Azure Data Factory management hub Last updated 02/01/2021
data-factory Author Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-visually.md
Title: Visual authoring
description: Learn how to use visual authoring in Azure Data Factory Last updated 09/08/2020
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
The following linked service is an Azure Blob storage linked service. Notice tha
Linked services can be created in the Azure Data Factory UX via the [management hub](author-management-hub.md) and any activities, datasets, or data flows that reference them.
-You can create linked services by using one of these tools or SDKs: [.NET API](quickstart-create-data-factory-dot-net.md), [PowerShell](quickstart-create-data-factory-powershell.md), [REST API](quickstart-create-data-factory-rest-api.md), Azure Resource Manager Template, and Azure portal.
+You can create linked services by using one of these tools or SDKs: [.NET API](quickstart-create-data-factory-dot-net.md), [PowerShell](quickstart-create-data-factory-powershell.md), [REST API](quickstart-create-data-factory-rest-api.md), [Azure Resource Manager Template](quickstart-create-data-factory-resource-manager-template.md), and [Azure portal](quickstart-create-data-factory-portal.md).
## Data store linked services
data-factory Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
description: Describes the roles and permissions required to create Data Factori
Last updated 11/5/2018 # Roles and permissions for Azure Data Factory
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-github.md
Title: Connect to GitHub description: Use GitHub to specify your Common Data Model entity references Last updated 06/03/2020
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
Title: Continuous integration and delivery in Azure Data Factory description: Learn how to use continuous integration and delivery to move Data Factory pipelines from one environment (development, test, production) to another. Last updated 04/01/2021
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-append-variable-activity.md
Title: Append Variable Activity in Azure Data Factory
description: Learn how to set the Append Variable activity to add a value to an existing array variable defined in a Data Factory pipeline Last updated 10/09/2018
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
Title: Azure Function Activity in Azure Data Factory description: Learn how to use the Azure Function activity to run an Azure Function in a Data Factory pipeline
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-pipeline-activity.md
Title: Execute Pipeline Activity in Azure Data Factory description: Learn how you can use the Execute Pipeline Activity to invoke one Data Factory pipeline from another Data Factory pipeline.
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-expression-language-functions.md
Title: Expression and functions in Azure Data Factory description: This article provides information about expressions and functions that you can use in creating data factory entities.
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-filter-activity.md
Title: Filter activity in Azure Data Factory description: The Filter activity filters the inputs.
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-for-each-activity.md
Title: ForEach activity in Azure Data Factory description: The For Each Activity defines a repeating control flow in your pipeline. It is used for iterating over a collection and executing specified activities.
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-if-condition-activity.md
Title: If Condition activity in Azure Data Factory description: The If Condition activity allows you to control the processing flow based on a condition.
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-set-variable-activity.md
description: Learn how to use the Set Variable activity to set the value of an e
Last updated 04/07/2020 # Set Variable Activity in Azure Data Factory
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-switch-activity.md
Title: Switch activity in Azure Data Factory description: The Switch activity allows you to control the processing flow based on a condition.
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
Title: System variables in Azure Data Factory description: This article describes system variables supported by Azure Data Factory. You can use these variables in expressions when defining Data Factory entities.
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-until-activity.md
Title: Until activity in Azure Data Factory description: The Until activity executes a set of activities in a loop until the condition associated with the activity evaluates to true or it times out.
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-validation-activity.md
Title: Validation activity in Azure Data Factory description: The Validation activity does not continue execution of the pipeline until it validates the attached dataset with certain criteria the user specifies.
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-wait-activity.md
Title: Wait activity in Azure Data Factory description: The Wait activity pauses the execution of the pipeline for the specified period. Last updated 01/12/2018
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-web-activity.md
Title: Web Activity in Azure Data Factory description: Learn how you can use Web Activity, one of the control flow activities supported by Data Factory, to invoke a REST endpoint from a pipeline. Last updated 12/19/2018
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-webhook-activity.md
Title: Webhook activity in Azure Data Factory description: The webhook activity doesn't continue execution of the pipeline until it validates the attached dataset with certain criteria the user specifies.
data-factory Copy Clone Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-clone-data-factory.md
Title: Copy or clone a data factory in Azure Data Factory description: Learn how to copy or clone a data factory in Azure Data Factory Last updated 06/30/2020
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-ux-troubleshoot-guide.md
Last updated 09/03/2020 # Troubleshoot Azure Data Factory UX Issues
data-factory Data Flow Rank https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-rank.md
Title: Rank transformation in mapping data flow description: How to use Azure Data Factory's mapping data flow rank transformation to generate a ranking column
data-factory Data Flow Transformation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-transformation-overview.md
Title: Mapping data flow transformation overview description: An overview of the different transformations available in mapping data flow Last updated 10/27/2020
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delta.md
Title: Delta format in Azure Data Factory description: Transform and move data from a delta lake using the delta format Last updated 03/26/2020 # Delta format in Azure Data Factory
data-factory Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/frequently-asked-questions.md
Title: 'Azure Data Factory: Frequently asked questions' description: Get answers to frequently asked questions about Azure Data Factory. Last updated 02/10/2020
data-factory How To Fixed Width https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-fixed-width.md
Title: Process fixed-length text files with mapping data flows in Azure Data Factory description: Learn how to process fixed-length text files in Azure Data Factory using mapping data flows. Last updated 8/18/2019
data-factory How To Invoke Ssis Package Managed Instance Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md
description: Learn how to run SSIS packages by using Azure SQL Managed Instance
Last updated 04/14/2020
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
Title: Use Azure Key Vault secrets in pipeline activities description: Learn how to fetch stored credentials from Azure key vault and use them during data factory pipeline runs. Last updated 10/31/2019
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/iterative-development-debugging.md
Title: Iterative development and debugging in Azure Data Factory description: Learn how to develop and debug Data Factory pipelines iteratively in the ADF UX Previously updated : 02/23/2021 Last updated : 04/21/2021
data-factory Monitor Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-integration-runtime.md
description: Learn how to monitor different types of integration runtime in Azur
Last updated 08/11/2020 # Monitor an integration runtime in Azure Data Factory
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-programmatically.md
description: Learn how to monitor a pipeline in a data factory by using differen
Last updated 01/16/2018 # Programmatically monitor an Azure data factory
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
Title: Monitor data factories using Azure Monitor description: Learn how to use Azure Monitor to monitor Azure Data Factory pipelines by enabling diagnostic logs with information from Data Factory.
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-visually.md
Title: Visually monitor Azure Data Factory description: Learn how to visually monitor Azure data factories
data-factory Naming Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/naming-rules.md
Title: Rules for naming Azure Data Factory entities description: Describes naming rules for Data Factory entities.
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameterize-linked-services.md
description: Learn how to parameterize linked services in Azure Data Factory and
Last updated 03/18/2021 # Parameterize linked services in Azure Data Factory
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pricing-concepts.md
Title: Understanding Azure Data Factory pricing through examples description: This article explains and demonstrates the Azure Data Factory pricing model with detailed examples
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-python.md
Title: 'Quickstart: Create an Azure Data Factory using Python' description: Use a data factory to copy data from one location in Azure Blob storage to another location. ms.devlang: python
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Title: Create an Azure Data Factory using an Azure Resource Manager template (AR
description: Create a sample Azure Data Factory pipeline using an Azure Resource Manager template (ARM template). tags: azure-resource-manager
data-factory Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/samples-powershell.md
Title: Azure PowerShell Samples for Azure Data Factory description: Azure PowerShell Samples - Scripts to help you create and manage data factories. Last updated 03/16/2021
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-templates-introduction.md
Title: Overview of templates
description: Learn how to use a pre-defined template to get started quickly with Azure Data Factory. Last updated 01/04/2019
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/source-control.md
Title: Source control description: Learn how to configure source control in Azure Data Factory Last updated 02/26/2021
data-factory Transform Data Databricks Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-databricks-python.md
description: Learn how to process or transform data by running a Databricks Pyth
Last updated 03/15/2018 # Transform data by running a Python activity in Azure Databricks
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-machine-learning-service.md
Title: Execute Azure Machine Learning pipelines
description: Learn how to run your Azure Machine Learning pipelines in your Azure Data Factory pipelines. Last updated 07/16/2020
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-control-flow-portal.md
Title: Branching and chaining activities in a pipeline using Azure portal description: Learn how to control flow of data in Azure Data Factory pipeline by using the Azure portal.
data-factory Tutorial Control Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-control-flow.md
Title: Branching in Azure Data Factory pipeline description: Learn how to control flow of data in Azure Data Factory by branching and chaining activities.
data-factory Tutorial Data Flow Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-private.md
Title: Transform data with an Azure Data Factory managed virtual network mapping data flow description: This tutorial provides step-by-step instructions for using Azure Data Factory to transform data with mapping data flows.
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow.md
Title: Transform data using a mapping data flow description: This tutorial provides step-by-step instructions for using Azure Data Factory to transform data with mapping data flow
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
Title: Incrementally copy data using Change Data Capture description: In this tutorial, you create an Azure Data Factory pipeline that copies delta data incrementally from a table in Azure SQL Managed Instance database to Azure Storage. Last updated 02/18/2021
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
To view all of the digital twins in your instance, use a [query](how-to-query-gr
Here is the body of the basic query that will return a list of all digital twins in the instance:

## Update a digital twin
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
This article begins with sample queries that illustrate the query language struc
Here is the basic query that will return a list of all digital twins in the instance:

## Query by property

Get digital twins by **properties** (including ID and metadata):

As shown in the query above, the ID of a digital twin is queried using the metadata field `$dtId`.
As shown in the query above, the ID of a digital twin is queried using the metad
You can also get twins based on **whether a certain property is defined**. Here is a query that gets twins that have a defined *Location* property:

This can help you to get twins by their *tag* properties, as described in [Add tags to digital twins](how-to-use-tags.md). Here is a query that gets all twins tagged with *red*:

You can also get twins based on the **type of a property**. Here is a query that gets twins whose *Temperature* property is a number:

>[!TIP]
> If a property is of type `Map`, you can use the map keys and values directly in the query, like this:
-> :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="QueryByProperty4":::
+> :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByProperty4":::
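If you'd like to try property queries like these against your own instance, here's a hedged sketch using the Azure CLI. It assumes the **azure-iot** CLI extension; the instance name and query text are illustrative placeholders, not the contents of the sample file referenced above.

```azurecli
# Illustrative placeholders throughout - substitute your own instance name.
az extension add --name azure-iot

# Twins with a defined Location property
az dt twin query --dt-name <your-instance-name> --query-command "SELECT * FROM DIGITALTWINS T WHERE IS_DEFINED(T.Location)"

# Twins whose Temperature property is a number
az dt twin query --dt-name <your-instance-name> --query-command "SELECT * FROM DIGITALTWINS T WHERE IS_NUMBER(T.Temperature)"
```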
## Query by model
So for example, if you query for twins of the model `dtmi:example:widget;4`, the
The simplest use of `IS_OF_MODEL` takes only a `twinTypeName` parameter: `IS_OF_MODEL(twinTypeName)`. Here is a query example that passes a value in this parameter:

To specify a twin collection to search when there is more than one (like when a `JOIN` is used), add the `twinCollection` parameter: `IS_OF_MODEL(twinCollection, twinTypeName)`. Here is a query example that adds a value for this parameter:

To do an exact match, add the `exact` parameter: `IS_OF_MODEL(twinTypeName, exact)`. Here is a query example that adds a value for this parameter:

You can also pass all three arguments together: `IS_OF_MODEL(twinCollection, twinTypeName, exact)`. Here is a query example specifying a value for all three parameters:

## Query by relationship
The following sections give examples of what this looks like.
Here is a sample relationship-based query. This code snippet selects all digital twins with an *ID* property of 'ABC', and all digital twins related to these digital twins via a *contains* relationship.

> [!NOTE]
> The developer does not need to correlate this `JOIN` with a key value in the `WHERE` clause (or specify a key value inline with the `JOIN` definition). This correlation is computed automatically by the system, as the relationship properties themselves identify the target entity.
You can use the relationship query structure to identify a digital twin that's t
For instance, you can start with a source twin and follow its relationships to find the target twins of the relationships. Here is an example of a query that finds the target twins of the *feeds* relationships coming from the twin *source-twin*.

You can also start with the target of the relationship and trace the relationship back to find the source twin. Here's an example of a query that finds the source twin of a *feeds* relationship to the twin *target-twin*.

### Query the properties of a relationship
The Azure Digital Twins query language allows filtering and projection of relati
As an example, consider a *servicedBy* relationship that has a *reportedCondition* property. In the below query, this relationship is given an alias of 'R' in order to reference its property.

In the example above, note how *reportedCondition* is a property of the *servicedBy* relationship itself (NOT of some digital twin that has a *servicedBy* relationship).
To query on multiple levels of relationships, use a single `FROM` statement foll
Here is an example of a multi-join query, which gets all the light bulbs contained in the light panels in rooms 1 and 2.

## Count items

You can count the number of items in a result set using the `Select COUNT` clause:

Add a `WHERE` clause to count the number of items that meet certain criteria. Here are some examples of counting with an applied filter based on the type of twin model (for more on this syntax, see [*Query by model*](#query-by-model) above):

You can also use `COUNT` along with the `JOIN` clause. Here is a query that counts all the light bulbs contained in the light panels of rooms 1 and 2:

## Filter results: select top items

You can select the top several items in a query using the `Select TOP` clause.

## Filter results: specify return set with projections
By using projections in the `SELECT` statement, you can choose which columns a q
Here is an example of a query that uses projection to return twins and relationships. The following query projects the *Consumer*, *Factory* and *Edge* from a scenario where a *Factory* with an ID of *ABC* is related to the *Consumer* through a relationship of *Factory.customer*, and that relationship is presented as the *Edge*.

You can also use projection to return a property of a twin. The following query projects the *Name* property of the *Consumers* that are related to the *Factory* with an ID of *ABC* through a relationship of *Factory.customer*.

You can also use projection to return a property of a relationship. Like in the previous example, the following query projects the *Name* property of the *Consumers* related to the *Factory* with an ID of *ABC* through a relationship of *Factory.customer*; but now it also returns two properties of that relationship, *prop1* and *prop2*. It does this by naming the relationship *Edge* and gathering its properties.

You can also use aliases to simplify queries with projection. The following query does the same operations as the previous example, but it aliases the property names to `consumerName`, `first`, `second`, and `factoryArea`.

Here is a similar query that queries the same set as above, but projects only the *Consumer.name* property as `consumerName`, and projects the complete *Factory* as a twin.

## Build efficient queries with the IN operator
For example, consider a scenario in which *Buildings* contain *Floors* and *Floo
1. Find floors in the building based on the `contains` relationship.
- :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="INOperatorWithout":::
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="INOperatorWithout":::
2. To find rooms, instead of considering the floors one-by-one and running a `JOIN` query to find the rooms for each one, you can query with a collection of the floors in the building (named *Floor* in the query below).
For example, consider a scenario in which *Buildings* contain *Floors* and *Floo
In query:
- :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="INOperatorWith":::
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="INOperatorWith":::
## Other compound query examples

You can **combine** any of the above types of query using combination operators to include more detail in a single query. Here are some additional examples of compound queries that query for more than one type of twin descriptor at once.

* Out of the devices that *Room 123* has, return the MxChip devices that serve the role of Operator
- :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="OtherExamples1":::
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="OtherExamples1":::
* Get twins that have a relationship named *Contains* with another twin that has an ID of *id1*
- :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="OtherExamples2":::
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="OtherExamples2":::
* Get all the rooms of this room model that are contained by *floor11*
- :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="OtherExamples3":::
+ :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="OtherExamples3":::
## Run queries with the API
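Queries like the ones in this article can also be run from the Azure CLI as a quick alternative to calling the API directly. This is a hedged sketch, assuming the **azure-iot** CLI extension; the instance name and model ID are placeholders:

```azurecli
# Placeholder instance name and model ID - adjust to your scenario.
az dt twin query --dt-name <your-instance-name> \
  --query-command "SELECT COUNT() FROM DIGITALTWINS WHERE IS_OF_MODEL('dtmi:example:Room;1')"
```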
digital-twins How To Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-tags.md
Once tags have been added to digital twins, the tags can be used to filter the t
Here is a query to get all twins that have been tagged as "red":

You can also combine tags for more complex queries. Here is a query to get all twins that are round, and not red:

## Value tags
As with marker tags, you can use value tags to filter the twins in queries. You
From the example above, `red` is being used as a marker tag. Remember that this is a query to get all twins that have been tagged as "red":

Here is a query to get all entities that are small (value tag), and not red:

## Next steps
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
Open a console window to the folder location **digital-twins-explorer-main/clien
After a few seconds, a browser window opens and the app appears in the browser.
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/explorer-blank.png" alt-text="Browser window showing an app running at localhost:3000. The app is called Azure Digital Twins Explorer and contains boxes for Query Explorer, Model View, Graph View, and Property Explorer. There's no onscreen data yet." lightbox="media/quickstart-azure-digital-twins-explorer/explorer-blank.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/explorer-blank.png" alt-text="Browser window showing an app running at localhost:3000. The app is called Azure Digital Twins Explorer and contains panels for Query Explorer, Model View, Graph View, and Property Explorer. There's no onscreen data yet." lightbox="media/quickstart-azure-digital-twins-explorer/explorer-blank.png":::
1. Select the **Sign In** button in the upper-right corner of the window, as shown in the following image, to configure Azure Digital Twins Explorer to work with the instance you've set up.
For this quickstart, the model files are already written and validated for you.
Follow these steps to upload models.
-1. In the **MODEL VIEW** box, select the **Upload a Model** icon.
+1. In the **MODEL VIEW** panel, select the **Upload a Model** icon.
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/upload-model.png" alt-text="In the Model View box, the middle icon is highlighted. It shows an arrow pointing into a cloud." lightbox="media/quickstart-azure-digital-twins-explorer/upload-model.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/upload-model.png" alt-text="In the Model View panel, the middle icon is highlighted. It shows an arrow pointing into a cloud." lightbox="media/quickstart-azure-digital-twins-explorer/upload-model.png":::
-1. In the file selector box that appears, go to the **digital-twins-explorer-main/client/examples** folder in the downloaded repository.
+1. In the file selector window that appears, go to the **digital-twins-explorer-main/client/examples** folder in the downloaded repository.
1. Select **Room.json** and **Floor.json**, and select **OK**. You can upload additional models if you want, but they won't be used in this quickstart.

1. Follow the prompts in the pop-up dialog box to sign in to your Azure account.
Follow these steps to upload models.
> :::image type="content" source="media/quickstart-azure-digital-twins-explorer/error-models-popup.png" alt-text="A pop-up box reading 'Error: Error fetching models: ClientAuthError: Error opening popup window. This can happen if you are using IE or if popups are blocked in the browser.' with a Close button at the bottom." border="false":::
> Try disabling your pop-up blocker or using a different browser.
-Azure Digital Twins Explorer now uploads these model files to your Azure Digital Twins instance. They should show up in the **MODEL VIEW** box and display their friendly names and full model IDs. You can select the **View Model** information icons to see the DTDL code behind them.
+Azure Digital Twins Explorer now uploads these model files to your Azure Digital Twins instance. They should show up in the **MODEL VIEW** panel and display their friendly names and full model IDs. You can select the **View Model** information icons to see the DTDL code behind them.
:::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/model-info.png" alt-text="A view of the Model View box with two model definitions listed inside, Floor (dtmi:example:Floor;1) and Room (dtmi:example:Room;1). The View Model information icon showing a letter 'i' in a circle is highlighted for each model." lightbox="media/quickstart-azure-digital-twins-explorer/model-info.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/model-info.png" alt-text="A view of the Model View panel with two model definitions listed inside, Floor (dtmi:example:Floor;1) and Room (dtmi:example:Room;1). The View Model information icon showing a letter 'i' in a circle is highlighted for each model." lightbox="media/quickstart-azure-digital-twins-explorer/model-info.png":::
:::column-end::: :::column::: :::column-end:::
In this section, you'll upload precreated twins that are connected into a precre
Follow these steps to import the graph.
-1. In the **GRAPH VIEW** box, select the **Import Graph** icon.
+1. In the **GRAPH VIEW** panel, select the **Import Graph** icon.
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-graph.png" alt-text="In the Graph View box, an icon is highlighted. It shows an arrow pointing into a cloud." lightbox="media/quickstart-azure-digital-twins-explorer/import-graph.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-graph.png" alt-text="In the Graph View panel, an icon is highlighted. It shows an arrow pointing into a cloud." lightbox="media/quickstart-azure-digital-twins-explorer/import-graph.png":::
-2. In the file selector box, go to the **digital-twins-explorer-main/client/examples** folder, and select the **buildingScenario.xlsx** spreadsheet file. This file contains a description of the sample graph. Select **OK**.
+2. In the file selector window, go to the **digital-twins-explorer-main/client/examples** folder, and select the **buildingScenario.xlsx** spreadsheet file. This file contains a description of the sample graph. Select **OK**.
After a few seconds, Azure Digital Twins Explorer opens an **Import** view that shows a preview of the graph to be loaded.
-3. To confirm the graph upload, select the **Save** icon in the upper-right corner of the **GRAPH VIEW** box.
+3. To confirm the graph upload, select the **Save** icon in the upper-right corner of the **GRAPH VIEW** panel.
:::row::: :::column:::
Follow these steps to import the graph.
:::column-end::: :::row-end:::
-5. The graph has now been uploaded to Azure Digital Twins Explorer. To see the graph, select the **Run Query** button in the **GRAPH EXPLORER** box, near the top of the Azure Digital Twins Explorer window.
+5. The graph has now been uploaded to Azure Digital Twins Explorer. To see the graph, select the **Run Query** button in the **GRAPH EXPLORER** panel, near the top of the Azure Digital Twins Explorer window.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/run-query.png" alt-text="The Run Query button in the upper-right corner of the window is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/run-query.png":::
-This action runs the default query to select and display all digital twins. Azure Digital Twins Explorer retrieves all twins and relationships from the service. It draws the graph defined by them in the **GRAPH VIEW** box.
+This action runs the default query to select and display all digital twins. Azure Digital Twins Explorer retrieves all twins and relationships from the service. It draws the graph defined by them in the **GRAPH VIEW** panel.
## Explore the graph

Now you can see the uploaded graph of the sample scenario. The circles (graph "nodes") represent digital twins. The lines represent relationships. The **Floor0** twin contains **Room0**, and the **Floor1** twin contains **Room1**.
If you're using a mouse, you can drag pieces of the graph to move them around.
### View twin properties
-You can select a twin to see a list of its properties and their values in the **PROPERTY EXPLORER** box.
+You can select a twin to see a list of its properties and their values in the **PROPERTY EXPLORER** panel.
Here are the properties of Room0: :::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room0.png" alt-text="Highlight around the Property Explorer box showing properties for Room0, which include (among others) a $dtId field of Room0, a Temperature field of 70, and a Humidity field of 30." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room0.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room0.png" alt-text="Highlight around the Property Explorer panel showing properties for Room0, which include (among others) a $dtId field of Room0, a Temperature field of 70, and a Humidity field of 30." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room0.png":::
:::column-end::: :::column::: :::column-end:::
Here are the properties of Room1:
:::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room1.png" alt-text="Highlight around the Property Explorer box showing properties for Room1, which include (among others) a $dtId field of Room1, a Temperature field of 80, and a Humidity field of 60." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room1.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room1.png" alt-text="Highlight around the Property Explorer panel showing properties for Room1, which include (among others) a $dtId field of Room1, a Temperature field of 80, and a Humidity field of 60." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room1.png":::
:::column-end::: :::column::: :::column-end:::
One way to query the twins in your graph is by their **properties**. Querying ba
In this section, you'll run a query to answer the question of how many twins in your environment have a temperature above 75.
-To see the answer, run the following query in the **QUERY EXPLORER** box.
+To see the answer, run the following query in the **QUERY EXPLORER** panel.
Recall from viewing the twin properties earlier that Room0 has a temperature of 70, and Room1 has a temperature of 80. For this reason, only Room1 shows up in the results here.
Recall from viewing the twin properties earlier that Room0 has a temperature of
You can use Azure Digital Twins Explorer to edit the properties of the twins represented in your graph. In this section, we'll raise the temperature of Room0 to 76.
-To start, select **Room0** to bring up its property list in the **PROPERTY EXPLORER** box.
+To start, rerun the following query to select all digital twins. This will display the full graph once more in the **GRAPH VIEW** panel.
++
+Select **Room0** to bring up its property list in the **PROPERTY EXPLORER** panel.
The properties in this list are editable. Select the temperature value of **70** to enable entering a new value. Enter **76**, and select the **Save** icon to update the temperature to **76**. :::row::: :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png" alt-text="The Property Explorer box showing properties for Room0. The temperature value is an editable box showing 76, and there's a highlight around the Save icon." lightbox="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png":::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png" alt-text="The Property Explorer panel showing properties for Room0. The temperature value is an editable box showing 76, and there's a highlight around the Save icon." lightbox="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png":::
:::column-end::: :::column::: :::column-end:::
Now, you'll see a **Patch Information** window where the patch code appears that
To verify that the graph successfully registered your update to the temperature for Room0, rerun the query from earlier to get all the twins in the environment with a temperature above 75. Now that the temperature of Room0 has been changed from 70 to 76, both twins should show up in the result.
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/tutorial-end-to-end.md
Query
>
> Here is the full query body to get all digital twins in your instance:
>
-> :::code language="sql" source="~/digital-twins-docs-samples/queries/queries.sql" id="GetAllTwins":::
+> :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="GetAllTwins":::
After this, you can stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial.
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-overview.md
description: Overview of DNS hosting service on Microsoft Azure. Host your domai
Previously updated : 4/20/2021 Last updated : 4/22/2021 #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
event-grid Custom Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-disaster-recovery.md
Title: Disaster recovery for custom topics in Azure Event Grid description: This tutorial will walk you through how to set up your eventing architecture to recover if the Event Grid service becomes unhealthy in a region. Previously updated : 07/07/2020 Last updated : 04/22/2021
Now that you have a regionally redundant pair of topics and subscriptions setup,
The following sample code is a simple .NET publisher that will always attempt to publish to your primary topic first. If it doesn't succeed, it then fails over to the secondary topic. In either case, it also checks the health API of the other topic by doing a GET on `https://<topic-name>.<topic-region>.eventgrid.azure.net/api/health`. A healthy topic should always respond with **200 OK** when a GET is made on the **/api/health** endpoint.
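To spot-check a topic's health endpoint manually before wiring up the failover logic, here's a hedged Azure CLI sketch. The topic name and region are placeholders:

```azurecli
# Placeholders - substitute your topic name and region.
# The health endpoint needs no ARM token, so skip the authorization header.
az rest --method get \
  --url "https://<topic-name>.<topic-region>.eventgrid.azure.net/api/health" \
  --skip-authorization-header
```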
+> [!NOTE]
+> The following sample code is only for demonstration purposes and is not intended for production use.
+
```csharp
using System;
using System.Net.Http;
event-grid Custom Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart-portal.md
Title: 'Quickstart: Send custom events to web endpoint - Event Grid, Azure portal' description: 'Quickstart: Use Azure Event Grid and Azure portal to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 07/07/2020 Last updated : 04/22/2021
Azure Event Grid is an eventing service for the cloud. In this article, you use
An event grid topic provides a user-defined endpoint that you post your events to.

1. Sign in to [Azure portal](https://portal.azure.com/).
-2. In the search bar at the topic, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop down list.
+2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list.
:::image type="content" source="./media/custom-event-quickstart-portal/select-event-grid-topics.png" alt-text="Search for and select Event Grid Topics":::

3. On the **Event Grid Topics** page, select **+ Add** on the toolbar.
The first example uses Azure CLI. It gets the URL and key for the custom topic,
1. Select **Bash** in the top-left corner of the Cloud Shell window. ![Cloud Shell - Bash](./media/custom-event-quickstart-portal/cloud-shell-bash.png)
-1. Run the following command to get the **endpoint** for the topic: After you copy and paste the command, update the **topic name** and **resource group name** before you run the command. You will publish sample events to this topic endpoint.
+1. Run the following command to get the **endpoint** for the topic: After you copy and paste the command, update the **topic name** and **resource group name** before you run the command. You'll publish sample events to this topic endpoint.
```azurecli
endpoint=$(az eventgrid topic show --name <topic name> -g <resource group name> --query "endpoint" --output tsv)
```
-2. Run the following command to get the **key** for the custom topic: After you copy and paste the command, update the **topic name** and **resource group** name before you run the command. This is the primary key of the Event Grid topic. To get this key from the Azure portal, switch to the **Access keys** tab of the **Event Grid Topic** page. To be able post an event to a custom topic, you need the access key.
+2. Run the following command to get the **key** for the custom topic. After you copy and paste the command, update the **topic name** and **resource group** name before you run the command. It's the primary key of the Event Grid topic. To get this key from the Azure portal, switch to the **Access keys** tab of the **Event Grid Topic** page. To post an event to a custom topic, you need the access key.
```azurecli key=$(az eventgrid topic key list --name <topic name> -g <resource group name> --query "key1" --output tsv)
The first example uses Azure CLI. It gets the URL and key for the custom topic,
``` ### Azure PowerShell
-The second example uses PowerShell to perform similar steps.
+The second example uses PowerShell to do similar steps.
1. In the Azure portal, select **Cloud Shell** (alternatively go to `https://shell.azure.com/`). The Cloud Shell opens in the bottom pane of the web browser.
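Once the shell is open, the endpoint and key lookups can be sketched in PowerShell as follows (a minimal sketch: the placeholder names must be replaced, and the `Endpoint` and `Key1` properties are those returned by the Az.EventGrid cmdlets):

```powershell
# Get the endpoint to publish sample events to (mirrors the Azure CLI step).
$endpoint = (Get-AzEventGridTopic -ResourceGroupName "<resource group name>" -Name "<topic name>").Endpoint

# Get the primary access key for the custom topic.
$key = (Get-AzEventGridTopicKey -ResourceGroupName "<resource group name>" -Name "<topic name>").Key1
```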
Now that you know how to create custom topics and event subscriptions, learn mor
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) - [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md) - [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart-powershell.md
Title: 'Quickstart: Send custom events to web endpoint - Event Grid, PowerShell' description: 'Quickstart: Use Azure Event Grid and PowerShell to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 07/07/2020 Last updated : 04/22/2021
Now that you know how to create topics and event subscriptions, learn more about
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) - [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md) - [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart.md
Title: 'Quickstart: Send custom events with Event Grid and Azure CLI' description: 'Quickstart Use Azure Event Grid and Azure CLI to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 07/07/2020 Last updated : 04/22/2021
Now that you know how to create topics and event subscriptions, learn more about
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) - [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md) - [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event To Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-to-eventhub.md
Now that you know how to create topics and event subscriptions, learn more about
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) - [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md) - [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event To Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-to-function.md
Now that you know how to create topics and event subscriptions, learn more about
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) - [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md) - [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event To Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-to-queue-storage.md
Now that you know how to create topics and event subscriptions, learn more about
- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json) - [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-event-grid-logic-app.md) - [Stream big data into a data warehouse](event-grid-event-hubs-integration.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-properties.md
Authorization: BEARER SlAV32hkKG...
``` > [!NOTE]
-> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/eventsubscriptions/createorupdate#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
+> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/version2020-06-01/eventsubscriptions/createorupdate#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid.
### Service Bus example Azure Service Bus supports the use of a [BrokerProperties HTTP header](/rest/api/servicebus/message-headers-and-properties#message-headers) to define message properties when sending single messages. The value of the `BrokerProperties` header should be provided in the JSON format. For example, if you need to set message properties when sending a single message to Service Bus, set the header in the following way:
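A hypothetical value (property names come from the Service Bus REST documentation; adjust them to your scenario) could be `BrokerProperties: {"MessageId": "my-message-id", "TimeToLive": 60}`, which sets the delivered message's ID and a 60-second time-to-live.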
event-grid Enable Diagnostic Logs Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-diagnostic-logs-topic.md
Title: Azure Event Grid - Enable diagnostic logs for topics or domains description: This article provides step-by-step instructions on how to enable diagnostic logs for an Azure event grid topic. Previously updated : 12/03/2020 Last updated : 04/22/2021 # Enable Diagnostic logs for Azure event grid topics or domains This article provides step-by-step instructions to enable diagnostic settings for Event Grid topics or domains. These settings allow you to capture and view **publish and delivery failure** logs.
+> [!IMPORTANT]
+> For the schema for diagnostic logs, see [Diagnostic logs](diagnostic-logs.md).
++ ## Prerequisites - A provisioned event grid topic
event-grid Monitor Virtual Machine Changes Event Grid Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md
This tutorial uses resources and performs actions that incur charges on your Azu
## Next steps * [Create and route custom events with Event Grid](../event-grid/custom-event-quickstart.md)+
+See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+
+- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Partner Onboarding Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/partner-onboarding-overview.md
After posting to the partner namespace endpoint, you receive a response. The res
* [ARM template](/azure/templates/microsoft.eventgrid/allversions) * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/master/schemas/2020-04-01-preview/Microsoft.EventGrid.json) * [REST APIs](/azure/templates/microsoft.eventgrid/2020-04-01-preview/partnernamespaces)
- * [CLI extension](/cli/azure/)
+ * [CLI extension](/cli/azure/eventgrid)
### SDKs * [.NET](https://www.nuget.org/packages/Microsoft.Azure.Management.EventGrid/5.3.1-preview)
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/resize-images-on-storage-blob-upload-event.md
Advance to part three of the Storage tutorial series to learn how to secure acce
+ To try another tutorial that features Azure Functions, see [Create a function that integrates with Azure Logic Apps](../azure-functions/functions-twitter-email.md). [previous-tutorial]: ../storage/blobs/storage-upload-process-images.md+
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Location** | **Address** | **Zone** | **Local Azure regions** | **ER Direct** | **Service providers** | | | | | | | | | **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, GÉANT, InterCloud, Interxion, KPN, IX Reach, Level 3 Communications, Megaport, NTT Communications, Orange, Tata Communications, Telefonica, Telenor, Telia Carrier, Verizon, Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | 10G, 100G | Equinix, Megaport | | **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | 10G | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ | | **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | 10G | AIS, UIH |
The following table shows connectivity locations and the service providers for e
| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo | | **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | 10G | Interxion | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Equinix, Internet2, Level 3 Communications, Megaport, Neutrona Networks, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
-| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | n/a | CoreSite, Megaport, Zayo |
+| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | n/a | CoreSite, Megaport, PacketFabric, Zayo |
| **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/https://docsupdatetracker.net/index.html) | 3 | UAE North | n/a | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport |
-| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems |
+| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems |
| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Equinix | | **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport, Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/locations/asia-colocation/hong-kong-colocation/hong-kong-data-center/hk1/) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon |
| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel | | **Jakarta** | Telin, Telkom Indonesia | 4 | n/a | 10G | Telin | | **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, Orange, Teraco | | **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom | | **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric | | **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, Megaport, SES, Sohonet, Telehouse - KDDI |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, GTT, IX Reach, Equinix, JISC, Megaport, SES, Sohonet, Telehouse - KDDI |
| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | 10G, 100G | CoreSite, Equinix, Megaport, Neutrona Networks, NTT, Zayo | | **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | 10G, 100G | Equinix | | **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | Interxion |
-| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |
+| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |
| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Telstra Corporation, TPG Telecom | | **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks | | **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, IRIDEOS, Retelit |
The following table shows connectivity locations and the service providers for e
| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | 10G | DE-CIX | | **New York** | [Equinix NY9](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny9/) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Colt, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo | | **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | n/a | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data |
-| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | 10G, 100G | AT TOKYO, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
+| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | 10G, 100G | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | 10G, 100G | GlobalConnect, Megaport, Telenor, Telia Carrier | | **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo | | **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | 10G | Megaport, NextDC |
The following table shows connectivity locations and the service providers for e
| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | 10G, 100G | KINX, KT, LG CNS, Equinix, Sejong Telecom | | **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | 10G, 100G | Colt, Coresite |
-| **Singapore** | [Equinix SG1](https://www.equinix.com/locations/asia-colocation/singapore-colocation/singapore-data-center/sg1/) | 2 | Southeast Asia | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
+| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | China Unicom Global, Colt, Epsilon Global Communications, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI | | **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | 10G, 100G |GlobalConnect, Megaport | | **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | n/a | 10G | Equinix, Telia Carrier |
If your connectivity provider is not listed in previous sections, you can still
* [Cologix](https://www.cologix.com/) * [CoreSite](https://www.coresite.com/) * [DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange)
- * [Equinix Cloud Exchange](https://www.equinix.com/services/interconnection-connectivity/cloud-exchange/)
+ * [Equinix Cloud Exchange](https://www.equinix.com/resources/videos/cloud-exchange-overview)
* [InterXion](https://www.interxion.com/) * [NextDC](https://www.nextdc.com/) * [Megaport](https://www.megaport.com/services/microsoft-expressroute/)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** |Supported |Supported |Sao Paulo | | **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC | | **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka, Tokyo2 |
-| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Tokyo |
+| **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2 |
+| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo |
| **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported |Cape Town, Johannesburg| | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported |Montreal, Toronto, Quebec City |
-| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported |Amsterdam, Amsterdam2, Chicago, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
+| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported |Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported |Chennai, Mumbai |
-| **[C3ntro](https://www.c3ntro.com/data1/express-route1.php)** |Supported |Supported |Miami |
+| **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported |Miami |
| **CDC** | Supported | Supported | Canberra, Canberra2 | | **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |Amsterdam2, Chicago, Dublin, Frankfurt, Hong Kong, Las Vegas, London2, New York, Paris, San Antonio, Silicon Valley, Tokyo, Toronto, Washington DC, Washington DC2 | | **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported |Hong Kong, Taipei |
The following table shows locations by service provider. If you want to view ava
| **[Chunghwa Telecom](https://www.cht.com.tw/en/home/cht/about-cht/products-and-services/International/Cloud-Service)** |Supported |Supported |Taipei | | **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported |Miami | | **[Cologix](https://www.cologix.com/hyperscale/microsoft-azure/)** |Supported |Supported |Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Hong Kong, London, London2, Milan, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC, Zurich |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Hong Kong, London, London2, Marseille, Milan, Newport, New York, Osaka, Paris, Silicon Valley, Silicon Valley2, Singapore2, Tokyo, Washington DC, Zurich |
| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported |Chicago, Silicon Valley, Washington DC | | **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported |Chicago, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 | | **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported |Amsterdam2, Dubai2, Frankfurt, Marseille, Mumbai, Munich, New York |
The following table shows locations by service provider. If you want to view ava
| **[Optus](https://www.optus.com.au/enterprise/)** |Supported |Supported |Melbourne, Sydney | | **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported |Amsterdam, Amsterdam2, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC | | **[Orixcom](https://www.orixcom.com/cloud-solutions/)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported |Chicago, Dallas, Las Vegas, Silicon Valley, Washington DC |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported |Chicago, Dallas, Denver, Las Vegas, Silicon Valley, Washington DC |
| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported |Chicago, Hong Kong, Hong Kong2, London, Singapore2 | | **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland | | **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
If your connectivity provider is not listed in previous sections, you can still
* [Cologix](https://www.cologix.com/) * [CoreSite](https://www.coresite.com/) * [DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange)
- * [Equinix Cloud Exchange](https://www.equinix.com/services/interconnection-connectivity/cloud-exchange/)
+ * [Equinix Cloud Exchange](https://www.equinix.com/interconnection-services/equinix-fabric)
* [Interxion](https://www.interxion.com/products/interconnection/cloud-connect/) * [IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/) * [Megaport](https://www.megaport.com/services/microsoft-expressroute/)
If you are remote and don't have fiber connectivity or you want to explore other
| **[Bezeq International Ltd.](https://www.bezeqint.net/english)** | euNetworks | London | | **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/)** | Equinix | Amsterdam, Frankfurt, London, Singapore, Washington DC | | **[BroadBand Tower, Inc.](https://www.bbtower.co.jp/product-service/data-center/network/dcconnect-for-azure/)** | Equinix | Tokyo |
-| **[C3ntro Telecom](https://www.c3ntro.com/data/express-route)** | Equinix, Megaport | Dallas |
+| **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix, Megaport | Dallas |
| **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong SAR | | **[Cinia](https://www.cinia.fi/en/services/connectivity-services/direct-public-cloud-connection.html)** | Equinix, Megaport | Frankfurt, Hamburg | | **[CloudXpress](https://www2.telenet.be/fr/business/produits-services/internet/cloudxpress/)** | Equinix | Amsterdam |
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-routing.md
You can use either private IP addresses or public IP addresses to configure the
* The subnets used for routing can be either private IP addresses or public IP addresses. * The subnets must not conflict with the range reserved by the customer for use in the Microsoft cloud. * If a /125 subnet is used, it is split into two /126 subnets.
- * The first /126 subnet is used for the primary link and the second /30 subnet is used for the secondary link.
+ * The first /126 subnet is used for the primary link and the second /126 subnet is used for the secondary link.
* For each of the /126 subnets, you must use the first IP address of the /126 subnet on your router. Microsoft uses the second IP address of the /126 subnet to set up a BGP session. * You must set up both BGP sessions for our [availability SLA](https://azure.microsoft.com/support/legal/sla/) to be valid.
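* As a hypothetical illustration using the IPv6 documentation prefix: a /125 such as `2001:db8::/125` is split into `2001:db8::/126` for the primary link and `2001:db8::4/126` for the secondary link; on each /126, your router uses the first IP address and Microsoft uses the second.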
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Previously updated : 04/07/2021 Last updated : 04/22/2021
Untrusted customer signed certificates|Customer signed certificates are not trus
|IDPS Bypass|IDPS Bypass doesn't work for TLS terminated traffic, and Source IP address and Source IP Groups aren't supported.|Fix scheduled for GA.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.| |KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|Fix scheduled for GA.|
+|IP Groups support|Azure Firewall Premium Preview does not support IP Groups.|Fix scheduled for GA.|
## Next steps
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/snat-private-range.md
# Azure Firewall SNAT private IP address ranges
-Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. By default, Azure Firewall doesn't SNAT with Network rules when the destination IP address is in a private IP address range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918). Application rules are always applied using a [transparent proxy](https://wikipedia.org/wiki/Proxy_server#Transparent_proxy) whatever the destination IP address.
+Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. By default, Azure Firewall doesn't SNAT with Network rules when the destination IP address is in a private IP address range per [IANA RFC 1918](https://tools.ietf.org/html/rfc1918) or shared address space per [IANA RFC 6598](https://tools.ietf.org/html/rfc6598). Application rules are always applied using a [transparent proxy](https://wikipedia.org/wiki/Proxy_server#Transparent_proxy) whatever the destination IP address.
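For example (addresses hypothetical): with the default configuration, traffic matched by a network rule from 10.0.0.4 to 192.168.1.10 is delivered without SNAT, because the destination falls in an RFC 1918 range, whereas traffic from 10.0.0.4 to a public address such as 203.0.113.25 is SNATed to one of the firewall's public IP addresses.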
This logic works well when you route traffic directly to the Internet. However, if you've enabled [forced tunneling](forced-tunneling.md), Internet-bound traffic is SNATed to one of the firewall private IP addresses in AzureFirewallSubnet, hiding the source from your on-premises firewall.
You can use the Azure portal to specify private IP address ranges for the firewa
## Next steps -- Learn about [Azure Firewall forced tunneling](forced-tunneling.md).
+- Learn about [Azure Firewall forced tunneling](forced-tunneling.md).
frontdoor Front Door Waf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-waf.md
When you no longer need the resources used in this tutorial, use the [az group d
To learn how to troubleshoot your Front Door, see the troubleshooting guides: > [!div class="nextstepaction"]
-> [Troubleshooting common routing issues](front-door-troubleshoot-routing.md)
+> [Troubleshooting common routing issues](front-door-troubleshoot-routing.md)
frontdoor Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/resource-manager-template-samples.md
The following table includes links to Azure Resource Manager templates for Azure
| [Rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-rule-set/) | Creates a Front Door profile and rule set. | | [WAF policy with managed rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-waf-managed/) | Creates a Front Door profile and WAF with managed rule set. | | [WAF policy with custom rule](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-waf-custom/) | Creates a Front Door profile and WAF with custom rule. |
+| [WAF policy with rate limit](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-rate-limit/) | Creates a Front Door profile and WAF with a custom rule to perform rate limiting. |
+| [WAF policy with geo-filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-geo-filtering/) | Creates a Front Door profile and WAF with a custom rule to perform geo-filtering. |
|**App Service origins**| **Description** | | [App Service](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-app-service-public) | Creates an App Service app with a public endpoint, and a Front Door profile. | | [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
germany Germany Migration Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-databases.md
Copy-AzSqlDatabaseLongTermRetentionBackup
-TargetDatabaseName $targetDatabaseName -TargetSubscriptionId $targetSubscriptionId -TargetResourceGroupName $targetRGName
- - TargetServerFullyQualifiedDomainName $targetServerFQDN
+ -TargetServerFullyQualifiedDomainName $targetServerFQDN
``` 2. **Copy LTR backup using backup resourceID**
Copy-AzSqlDatabaseLongTermRetentionBackup
-TargetDatabaseName $targetDatabaseName -TargetSubscriptionId $targetSubscriptionId -TargetResourceGroupName $targetRGName
- - TargetServerFullyQualifiedDomainName $targetServerFQDN
+ -TargetServerFullyQualifiedDomainName $targetServerFQDN
```
governance Resource Locking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/concepts/resource-locking.md
Title: Understand resource locking description: Learn about the locking options in Azure Blueprints to protect resources when assigning a blueprint. Previously updated : 01/27/2021 Last updated : 04/22/2021 # Understand resource locking in Azure Blueprints
see an example of resource locking and application of _deny assignments_, see th
[protecting new resources](../tutorials/protect-new-resources.md) tutorial. > [!NOTE]
-> Resource locks deployed by Azure Blueprints are only applied to resources deployed by the
-> blueprint assignment. Existing resources, such as those in resource groups that already exist,
-> don't have locks added to them.
+> Resource locks deployed by Azure Blueprints are only applied to
+> [non-extension resources](../../../azure-resource-manager/templates/scope-extension-resources.md)
+> deployed by the blueprint assignment. Existing resources, such as those in resource groups that
+> already exist, don't have locks added to them.
## Locking modes and states
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
resource, but it doesn't stop the request.
### Audit evaluation Audit is the last effect checked by Azure Policy during the creation or update of a resource. For a
-Resource Manager mode, Azure Policy then sends the resource to the Resource Provider. Audit works
-the same for a resource request and an evaluation cycle. For new and updated resources, Azure Policy
-adds a `Microsoft.Authorization/policies/audit/action` operation to the activity log and marks the
-resource as non-compliant.
+Resource Manager mode, Azure Policy then sends the resource to the Resource Provider. When
+evaluating a create or update request for a resource, Azure Policy adds a
+`Microsoft.Authorization/policies/audit/action` operation to the activity log and marks the resource
+as non-compliant. During a standard compliance evaluation cycle, only the compliance status on the
+resource is updated.
### Audit properties
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md Binary files differ
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-upload-data.md Binary files differ
hpc-cache Add Namespace Paths https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/add-namespace-paths.md
description: How to create client-facing paths for back-end storage with Azure H
Previously updated : 03/11/2021 Last updated : 04/22/2021
An NFS storage target can have multiple virtual paths, as long as each path repr
When planning your namespace for an NFS storage target, keep in mind that each path must be unique, and can't be a subdirectory of another namespace path. For example, if you have a namespace path that is called ``/parent-a``, you can't also create namespace paths like ``/parent-a/user1`` and ``/parent-a/user2``. Those directory paths are already accessible in the namespace as subdirectories of ``/parent-a``.
-All of the namespace paths for an NFS storage system are created on one storage target. Most cache configurations can support up to ten namespace paths per storage target, but larger configurations can support up to 20.
-
-This list shows the maximum number of namespace paths per configuration.
-
-* Up to 2 GB/s throughput:
-
- * 3 TB cache - 10 namespace paths
- * 6 TB cache - 10 namespace paths
- * 12 TB cache - 20 namespace paths
-
-* Up to 4 GB/s throughput:
-
- * 6 TB cache - 10 namespace paths
- * 12 TB cache - 10 namespace paths
- * 24 TB cache -20 namespace paths
-
-* Up to 8 GB/s throughput:
-
- * 12 TB cache - 10 namespace paths
- * 24 TB cache - 10 namespace paths
- * 48 TB cache - 20 namespace paths
+All of the namespace paths for an NFS storage system are created on one storage target.
For each NFS namespace path, provide the client-facing path, the storage system export, and optionally an export subdirectory.
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-add-storage.md
description: How to define storage targets so that your Azure HPC Cache can use
Previously updated : 03/15/2021 Last updated : 04/22/2021
You can define up to 20 different storage targets for one cache. The cache presents all of the storage targets in one aggregated namespace.
-The namespace paths are configured separately after you add the storage targets. In general, an NFS storage target can have up to ten namespace paths, or more for some large configurations. Read [NFS namespace paths](add-namespace-paths.md#nfs-namespace-paths) for details.
+The namespace paths are configured separately after you add the storage targets.
Remember that the storage exports must be accessible from your cache's virtual network. For on-premises hardware storage, you might need to set up a DNS server that can resolve hostnames for NFS storage access. Read more in [DNS access](hpc-cache-prerequisites.md#dns-access).
iot-edge How To Auto Provision X509 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
Have the following information ready:
* The DPS **ID Scope** value. You can retrieve this value from the overview page of your DPS instance in the Azure portal. * The device identity certificate chain file on the device. * The device identity key file on the device.
-* An optional registration ID. If not supplied, the ID is pulled from the common name in the device identity certificate.
### Linux device
Have the following information ready:
`file:///<path>/identity_certificate_chain.pem` `file:///<path>/identity_key.pem`
-1. Optionally, provide a `registration_id` for the device. Otherwise, leave that line commented out to register the device with the CN name of the identity certificate.
+1. Optionally, provide the `registration_id` for the device, which must match the common name (CN) of the identity certificate. If you leave that line commented out, the CN is used automatically.
1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
Have the following information ready:
[provisioning.attestation] method = "x509"
- # registration_id = "<OPTIONAL REGISTRATION ID. LEAVE COMMENTED OUT TO REGISTER WITH CN OF identity_cert>"
+ registration_id = "<REGISTRATION ID>"
- identity_cert = "<REQUIRED URI TO DEVICE IDENTITY CERTIFICATE>"
+ identity_cert = "<DEVICE IDENTITY CERTIFICATE>"
- identity_pk = "<REQUIRED URI TO DEVICE IDENTITY PRIVATE KEY>"
+ identity_pk = "<DEVICE IDENTITY PRIVATE KEY>"
```
-1. Update the values of `id_scope`, `identity_cert`, and `identity_pk` with your DPS and device information.
+1. Update the value of `id_scope` with the scope ID you copied from your instance of DPS.
+
+1. Provide a `registration_id` for the device, which is the ID that the device will have in IoT Hub. The registration ID must match the common name (CN) of the identity certificate.
+
+1. Update the values of `identity_cert` and `identity_pk` with your certificate and key information.
The identity certificate value can be provided as a file URI, or can be dynamically issued using EST or a local certificate authority. Uncomment only one line, based on the format you choose to use.
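If you're not sure which CN your identity certificate carries, you can typically read it with a command like `openssl x509 -noout -subject -in <identity certificate file>` (the file name is a placeholder).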
Have the following information ready:
If you use any PKCS#11 URIs, find the **PKCS#11** section in the config file and provide information about your PKCS#11 configuration.
-1. Optionally, provide a `registration_id` for the device. Otherwise, leave that line commented out to register the device with the common name of the identity certificate.
- 1. Save and close the file. 1. Apply the configuration changes that you made to IoT Edge.
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-device.md
You should see a message that says, "Updating certificates in /etc/ssl/certs...
The following steps are an example of how to install a CA certificate on a Windows host. This example assumes that you're using the **azure-iot-test-only.root.ca.cert.pem** certificate from the prerequisites articles, and that you've copied the certificate into a location on the downstream device.
-You can install certificates using PowerShell's [Import-Certificate](/powershell/module/pkiclient/import-certificate) as an administrator:
+You can install certificates using PowerShell's [Import-Certificate](/powershell/module/pki/import-certificate) as an administrator:
```powershell import-certificate <file path>\azure-iot-test-only.root.ca.cert.pem -certstorelocation cert:\LocalMachine\root
If your leaf device has intermittent connection to its gateway device, try the f
## Next steps
-Learn how IoT Edge can extend [offline capabilities](offline-capabilities.md) to downstream devices.
+Learn how IoT Edge can extend [offline capabilities](offline-capabilities.md) to downstream devices.
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-vs-code-develop-module.md
If you're developing in C#, Node.js, or Java, your module requires use of a **Mo
On your development machine, you can start an IoT Edge simulator instead of installing the IoT Edge security daemon so that you can run your IoT Edge solution.
-1. In device explorer on the left side, right-click on your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
+1. In the **Explorer** tab on the left side, expand the **Azure IoT Hub** section. Right-click on your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
1. You can see the IoT Edge Simulator has been successfully set up by reading the progress detail in the integrated terminal. ### Set up IoT Edge simulator for single module app
Currently, debugging in attach mode is supported only as follows:
In your development machine, you can start an IoT Edge simulator instead of installing the IoT Edge security daemon so that you can run your IoT Edge solution.
-1. In device explorer on the left side, right-click on your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
+1. In the **Explorer** tab on the left side, expand the **Azure IoT Hub** section. Right-click on your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
1. You can see the IoT Edge Simulator has been successfully set up by reading the progress detail in the integrated terminal.
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites * Access to an IoT Hub. It is recommended that you use an S1 (Standard) tier or above.
-* A Device Update instance and account linked to your IoT Hub. Follow the guide to [create and link](http://create-device-update-account.md/) a device update account if you have not done so previously.
+* A Device Update instance and account linked to your IoT Hub. Follow the guide to [create and link](create-device-update-account.md) a device update account if you have not done so previously.
## Get started
iot-hub Iot Hub Java Java Device Management Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-java-java-device-management-getstarted.md Binary files differ
iot-hub Iot Hub Node Node Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-device-management-get-started.md Binary files differ
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-schedule-jobs.md Binary files differ
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-twin-getstarted.md Binary files differ
iot-hub Iot Hub Python Python Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-python-python-device-management-get-started.md Binary files differ
iot-hub Quickstart Device Streams Proxy C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-c.md Binary files differ
iot-hub Quickstart Device Streams Proxy Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-csharp.md Binary files differ
iot-hub Quickstart Device Streams Proxy Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-nodejs.md Binary files differ
key-vault Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/backup.md Binary files differ
key-vault Key Vault Integrate Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/key-vault-integrate-kubernetes.md
kubectl exec nginx-secrets-store-inline -- cat /mnt/secrets-store/secret1
Verify that the contents of the secret are displayed. ## Resources
-[About Azure Key Vault](overview.md)
-[Azure Key Vault developer's guide](developers-guide.md)
-[CSI Secrets Driver](https://secrets-store-csi-driver.sigs.k8s.io/introduction.html)
+- [About Azure Key Vault](overview.md)
+- [Azure Key Vault developer's guide](developers-guide.md)
+- [CSI Secrets Driver](https://azure.github.io/secrets-store-csi-driver-provider-azure/)
-
-To help ensure that your key vault is recoverable, see:
-> [!div class="nextstepaction"]
-> [Turn on soft delete](./key-vault-recovery.md)
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-guide.md
-
+ Title: Grant permission to applications to access an Azure key vault using Azure RBAC | Microsoft Docs description: Learn how to provide access to keys, secrets, and certificates using Azure role-based access control.
For more Information about how to create custom roles, see:
## Learn more - [Azure RBAC Overview](../../role-based-access-control/overview.md)-- [Custom Roles Tutorial](../../role-based-access-control/tutorial-custom-role-cli.md)
+- [Custom Roles Tutorial](../../role-based-access-control/tutorial-custom-role-cli.md)
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
The URL of the Git remote is shown in the `deploymentLocalGitUrl` property, in t
Now configure your web app to deploy from the `main` branch: ```azurecli-interactive
- az webapp config appsettings set -g MyResourceGroup -name "<your-webapp-name>"--settings deployment_branch=main
+ az webapp config appsettings set -g MyResourceGroup --name "<your-webapp-name>" --settings deployment_branch=main
``` Go to your new app by using the following command. Replace `<your-webapp-name>` with your app name.
Where before you saw "Hello World!", you should now see the value of your secret
- [Use Azure Key Vault with applications deployed to a virtual machine in .NET](./tutorial-net-virtual-machine.md) - Learn more about [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md) - View the [Developer's Guide](./developers-guide.md)-- [Secure access to a key vault](./security-features.md)
+- [Secure access to a key vault](./security-features.md)
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation-dual.md Binary files differ
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-distribution-mode.md Binary files differ
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-multi-availability-sets-portal.md
description: In this tutorial, deploy an Azure Load Balancer with more than one availability set in the backend pool. --+ Previously updated : 04/16/2021 Last updated : 04/21/2021 # Tutorial: Create a load balancer with more than one availability set in the backend pool using the Azure portal
load-balancer Update Load Balancer With Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/update-load-balancer-with-vm-scale-set.md Binary files differ
machine-learning Convert To Image Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/convert-to-image-directory.md
This article describes how to use the Convert to Image Directory module to help
Your_image_folder_name/Category_2/asd932_.png ```
- In the image dataset folder, there are multiple subfolders. Each subfolder contains images of one category respectively. The names of subfolders are considered as the labels for tasks like image classification. Refer to [torchvision datasets](https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder) for more information.
+ In the image dataset folder, there are multiple subfolders; each subfolder contains the images of one category. The names of the subfolders are used as labels for tasks such as image classification. Refer to [torchvision datasets](https://pytorch.org/vision/stable/datasets.html#imagefolder) for more information.
> [!WARNING] > Currently labeled datasets exported from Data Labeling are not supported in the designer.
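For context on that folder-per-category layout, here is a minimal torchvision sketch (the folder path and transform are illustrative assumptions, not part of the module):

```python
from torchvision import datasets, transforms

# Each subfolder name under the root becomes one class label, as described above.
dataset = datasets.ImageFolder(
    root="Your_image_folder_name",       # the layout shown above
    transform=transforms.ToTensor(),     # minimal transform for illustration
)

print(dataset.classes)    # e.g. ['Category_1', 'Category_2']
print(dataset[0][1])      # integer label derived from the subfolder name
```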
machine-learning Init Image Transformation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/init-image-transformation.md
After transformation is completed, you can find transformed images in the output
## Technical notes
-Refer to [https://pytorch.org/docs/stable/torchvision/transforms.html](https://pytorch.org/docs/stable/torchvision/transforms.html) for more info about image transformation.
+Refer to [https://pytorch.org/vision/stable/transforms.html](https://pytorch.org/vision/stable/transforms.html) for more info about image transformation.
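As a hedged illustration of the kind of transforms that page documents (the specific values here are arbitrary examples):

```python
from torchvision import transforms

# Compose chains individual transforms in order; values below are examples only.
transform = transforms.Compose([
    transforms.Resize(256),              # resize the shorter side to 256 pixels
    transforms.CenterCrop(224),          # crop the central 224 x 224 region
    transforms.RandomHorizontalFlip(),   # flip left-right with probability 0.5
    transforms.ToTensor(),               # convert a PIL image to a CxHxW tensor
])
```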
### Module parameters
machine-learning Resnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/resnet.md
You can train the model by providing a model and a labeled image directory as in
### More about ResNet
-Refer to [this paper](https://pytorch.org/docs/stable/torchvision/models.html?highlight=resnext101_32x8d#torchvision.models.resnext101_32x8d) for more details about ResNet.
+Refer to the [torchvision model documentation](https://pytorch.org/vision/stable/models.html#torchvision.models.resnext101_32x8d) for more details about ResNet and the ResNeXt variant linked here.
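For orientation, the linked model can be instantiated directly from torchvision (a sketch; `pretrained=True` follows the torchvision 0.x API and downloads ImageNet weights):

```python
import torchvision.models as models

# Instantiate the ResNeXt variant referenced above with ImageNet weights.
model = models.resnext101_32x8d(pretrained=True)
model.eval()  # switch to inference mode
```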
## How to configure ResNet
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
|Key benefits|Description| |-|-|
-|Productivity|You can build and deploy models using integrated notebooks and the following tools in Azure Machine Learning studio:<br/>- Jupyter<br/>- JupyterLab<br/>- RStudio (preview)<br/>Compute instance is fully integrated with Azure Machine Learning workspace and studio. You can share notebooks and data with other data scientists in the workspace.<br/> You can also use [VS Code](https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630) with compute instances.
+|Productivity|You can build and deploy models using integrated notebooks and the following tools in Azure Machine Learning studio:<br/>- Jupyter<br/>- JupyterLab<br/>- VS Code (preview)<br/>- RStudio (preview)<br/>Compute instance is fully integrated with Azure Machine Learning workspace and studio. You can share notebooks and data with other data scientists in the workspace.<br/> You can also use [VS Code](https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630) with compute instances.
|Managed & secure|Reduce your security footprint and add compliance with enterprise security requirements. Compute instances provide robust management policies and secure networking configurations such as:<br/><br/>- Autoprovisioning from Resource Manager templates or Azure Machine Learning SDK<br/>- [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)<br/>- [Virtual network support](./how-to-secure-training-vnet.md#compute-instance)<br/>- SSH policy to enable/disable SSH access<br/>TLS 1.2 enabled | |Preconfigured&nbsp;for&nbsp;ML|Save time on setup tasks with pre-configured and up-to-date ML packages, deep learning frameworks, GPU drivers.| |Fully customizable|Broad support for Azure VM types including GPUs and persisted low-level customization such as installing packages and drivers makes advanced scenarios a breeze. |
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-data-encryption.md
Previously updated : 11/09/2020 Last updated : 04/21/2021 # Data encryption with Azure Machine Learning
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-plan-manage-cost.md Binary files differ
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
description: 'Control access to Azure Machine Learning workspaces with Azure Fir
-+ Last updated 11/18/2020-+ # Use workspace behind a Firewall for Azure Machine Learning
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
description: Learn how to use datastores to securely connect to Azure storage se
-+ Last updated 11/03/2020-+ # Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-terminal.md
--+ Last updated 02/05/2021 #Customer intent: As a data scientist, I want to use Git, install packages and add kernels to a compute instance in my workspace in Azure Machine Learning studio.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-assign-roles.md
description: Learn how to access to an Azure Machine Learning workspace using Az
-+
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-attach-compute-targets.md
Last updated 10/02/2020--++ # Set up compute targets for model training and deployment
machine-learning How To Authenticate Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-authenticate-web-service.md
Last updated 11/06/2020--++ # Configure authentication for models deployed as web services
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-auto-train-forecast.md
--++ Last updated 08/20/2020
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-change-storage-access-key.md
description: Learn how to change the access keys for the Azure Storage account u
--+
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-cicd-data-ingestion.md
description: Learn how to apply DevOps practices to build a data ingestion pipel
--++
machine-learning How To Compute Cluster Instance Os Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-compute-cluster-instance-os-upgrade.md
Last updated 03/03/2021--++
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-features.md
--++ Last updated 12/18/2020
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
Last updated 09/29/2020--++ # Configure automated ML experiments in Python
The following table summarizes the supported models by task type.
Classification | Regression | Time Series Forecasting |-- |-- |--
-[Logistic Regression](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)
-[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)* |[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)*|[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)
-[Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#classification)* |[Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)* |[Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)
-[Decision Tree](https://scikit-learn.org/stable/modules/tree.html#decision-trees)* |[Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)* |[Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)
-[K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression)* |[K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression)* |[K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression)
-[Linear SVC](https://scikit-learn.org/stable/modules/svm.html#classification)* |[LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)* |[LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)
-[Support Vector Classification (SVC)](https://scikit-learn.org/stable/modules/svm.html#classification)* |[Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)* |[Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)
-[Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)* |[Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)* |[Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)
-[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* |[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* |[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)
-[Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)* |[Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)* | [Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)
-[Averaged Perceptron Classifier](/python/api/nimbusml/nimbusml.linear_model.averagedperceptronbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)|[Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) |[Auto-ARIMA](https://www.alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html)
-[Naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html#bernoulli-naive-bayes)* |[Fast Linear Regressor](/python/api/nimbusml/nimbusml.linear_model.fastlinearregressor?preserve-view=true&view=nimbusml-py-latest)|[Prophet](https://facebook.github.io/prophet/docs/quick_start.html)
-[Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#sgd)* ||ForecastTCN
-|[Linear SVM Classifier](/python/api/nimbusml/nimbusml.linear_model.linearsvmbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)*||
-
+[Logistic Regression](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)* | [AutoARIMA](https://www.alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html)
+[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)* | [Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)* | [Prophet](https://facebook.github.io/prophet/docs/quick_start.html)
+[Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#classification)* | [Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)* | [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net)
+[Decision Tree](https://scikit-learn.org/stable/modules/tree.html#decision-trees)* |[Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)* |[Light GBM](https://lightgbm.readthedocs.io/en/latest/index.html)
+[K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression)* |[K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression)* | [Gradient Boosting](https://scikit-learn.org/stable/modules/ensemble.html#regression)
+[Linear SVC](https://scikit-learn.org/stable/modules/svm.html#classification)* |[LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)* | [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression)
+[Support Vector Classification (SVC)](https://scikit-learn.org/stable/modules/svm.html#classification)* |[Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)* | [K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression)
+[Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)* | [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests) | [LARS Lasso](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso)
+[Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees)* | [Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#regression)
+[Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)* |[Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)* | [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests)
+[Averaged Perceptron Classifier](/python/api/nimbusml/nimbusml.linear_model.averagedperceptronbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)| [Online Gradient Descent Regressor](/python/api/nimbusml/nimbusml.linear_model.onlinegradientdescentregressor?preserve-view=true&view=nimbusml-py-latest) | [Xgboost](https://xgboost.readthedocs.io/en/latest/parameter.html)
+[Naive Bayes](https://scikit-learn.org/stable/modules/naive_bayes.html#bernoulli-naive-bayes)* |[Fast Linear Regressor](/python/api/nimbusml/nimbusml.linear_model.fastlinearregressor?preserve-view=true&view=nimbusml-py-latest)| ForecastTCN
+[Stochastic Gradient Descent (SGD)](https://scikit-learn.org/stable/modules/sgd.html#sgd)* || Naive
+[Linear SVM Classifier](/python/api/nimbusml/nimbusml.linear_model.linearsvmbinaryclassifier?preserve-view=true&view=nimbusml-py-latest)* || SeasonalNaive
+||| Average
+||| SeasonalAverage
+||| [ExponentialSmoothing](https://www.statsmodels.org/v0.10.2/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html)
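To steer an experiment toward or away from entries in this table, `AutoMLConfig` exposes `allowed_models` and `blocked_models`. A minimal sketch, assuming a prepared tabular dataset `train_data` with a `label` column (both placeholders):

```python
from azureml.train.automl import AutoMLConfig

# train_data and the label column name are placeholders for your own dataset;
# the model names match the table above.
automl_config = AutoMLConfig(
    task="classification",
    primary_metric="accuracy",
    training_data=train_data,
    label_column_name="label",
    allowed_models=["LogisticRegression", "LightGBM", "XGBoostClassifier"],
    experiment_timeout_hours=1,
)
```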
### Primary Metric The `primary metric` parameter determines the metric to be used during model training for optimization. The available metrics you can select is determined by the task type you choose, and the following table shows valid primary metrics for each task type.
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
description: Learn how to configure dataset splits and cross-validation for auto
--++
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-databricks-automl-environment.md
Last updated 10/21/2020--++ # Set up a development environment with Azure Databricks and AutoML in Azure Machine Learning
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-environment.md
Last updated 03/22/2021--++ # Set up a Python development environment for Azure Machine Learning
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
description: 'Use Azure Private Link to securely access your Azure Machine Learn
--++
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
description: Create datastores and datasets to securely connect to data in stora
-+ Last updated 09/22/2020-+ # Customer intent: As low code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models.
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-consume-web-service.md
Last updated 10/12/2020--++ #Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-compute-cluster.md
description: Learn how to create compute clusters in your Azure Machine Learning
--++
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-compute-studio.md
Last updated 08/06/2020--++ # Create compute targets for model training and deployment in Azure Machine Learning studio
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
description: 'Learn how to create a new Azure Kubernetes Service cluster through
--++
machine-learning How To Create Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-labeling-projects.md
-+ Last updated 07/27/2020
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-machine-learning-pipelines.md
Last updated 03/02/2021--++
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
description: Learn how to create and manage an Azure Machine Learning compute in
--++
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
description: Learn how to create Azure Machine Learning datasets to access your
--++
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-workspace-template.md
description: Learn how to use an Azure Resource Manager template to create a new
--++ Previously updated : 09/30/2020 Last updated : 04/21/2021 # Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
To avoid this problem, we recommend one of the following approaches:
/subscriptions/{subscription-guid}/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault ```
-### Virtual network not linked to private DNS zone
-
-When creating a workspace with a private endpoint, the template creates a Private DNS Zone named __privatelink.api.azureml.ms__. A __virtual network link__ is automatically added to this private DNS zone. The link is only added for the first workspace and private endpoint you create in a resource group; if you create another virtual network and workspace with a private endpoint in the same resource group, the second virtual network may not get added to the private DNS zone.
-
-To view the virtual network links that already exist for the private DNS zone, use the following Azure CLI command:
-
-```azurecli
-az network private-dns link vnet list --zone-name privatelink.api.azureml.ms --resource-group myresourcegroup
-```
-
-To add the virtual network that contains another workspace and private endpoint, use the following steps:
-
-1. To find the virtual network ID for the network that you want to add, use the following command:
-
- ```azurecli
- az network vnet show --name myvnet --resource-group myresourcegroup --query id
- ```
-
- This command returns a value similar to `"/subscriptions/GUID/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet"'. Save this value and use it in the next step.
-
-2. To add a virtual network link to the privatelink.api.azureml.ms Private DNS Zone, use the following command. For the `--virtual-network` parameter, use the output of the previous command:
-
- ```azurecli
- az network private-dns link vnet create --name mylinkname --registration-enabled true --resource-group myresourcegroup --virtual-network myvirtualnetworkid --zone-name privatelink.api.azureml.ms
- ```
- ## Next steps * [Deploy resources with Resource Manager templates and Resource Manager REST API](../azure-resource-manager/templates/deploy-rest.md).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Last updated 04/01/2021--++ # How to use your workspace with a custom DNS server
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-data-ingest-adf.md
Last updated 01/26/2021--++ # Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-data-prep-synapse-spark-pool.md
description: Learn how to attach and launch Apache Spark pools for data wranglin
-+ Last updated 03/02/2021-+ # Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-visual-studio-code.md
description: Interactively debug Azure Machine Learning code, pipelines, and dep
-+ Last updated 09/30/2020
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-advanced-entry-script.md
-+ Last updated 09/17/2020
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
Last updated 03/25/2021--++ adobe-target: true
machine-learning How To Deploy App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-app-service.md
Title: Deploy ml models to Azure App Service (preview)
+ Title: Deploy ML models to Azure App Service (preview)
description: Learn how to use Azure Machine Learning to deploy a trained ML model to a Web App using Azure App Service.
Last updated 06/23/2020--++
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-container-instance.md
description: 'Learn how to deploy your Azure Machine Learning models as a web se
--++
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
description: 'Learn how to deploy your Azure Machine Learning models as a web se
--++
machine-learning How To Deploy Continuously Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-continuously-deploy.md
Last updated 08/03/2020-+ -+ # Continuously deploy models
machine-learning How To Deploy Custom Docker Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-custom-docker-image.md
Last updated 11/16/2020--++ # Deploy a model using a custom Docker base image
The steps in this section walk-through creating a custom Docker image in your Az
```text FROM ubuntu:16.04
- ARG CONDA_VERSION=4.7.12
+ ARG CONDA_VERSION=4.9.2
ARG PYTHON_VERSION=3.7
- ARG AZUREML_SDK_VERSION=1.13.0
+ ARG AZUREML_SDK_VERSION=1.27.0
ARG INFERENCE_SCHEMA_VERSION=1.1.0 ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
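Once an image like this is built and pushed to a registry, it can be referenced from the Python SDK roughly as follows (the registry address and image name are placeholder assumptions):

```python
from azureml.core import Environment

# Point an Azure ML environment at the custom image; all names are placeholders.
env = Environment(name="custom-docker-env")
env.docker.base_image = "myimage:v1"
env.docker.base_image_registry.address = "myregistry.azurecr.io"
# The image already provides Python and the SDK, so let it manage dependencies.
env.python.user_managed_dependencies = True
```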
machine-learning How To Deploy Existing Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-existing-model.md
Last updated 07/17/2020--++ # Deploy your existing model with Azure Machine Learning
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-fpga-web-service.md
Last updated 09/24/2020--++ # Deploy ML models to field-programmable gate arrays (FPGAs) with Azure Machine Learning
machine-learning How To Deploy Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-functions.md
Last updated 03/06/2020--++
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-inferencing-gpus.md
Last updated 06/17/2020--++ # Deploy a deep learning model for inference with GPU
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-local-container-notebook-vm.md
description: 'Learn how to deploy your Azure Machine Learning models as a web se
--++
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-local.md
Last updated 11/20/2020--++ # Deploy models trained with Azure Machine Learning on your local machines
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-mlflow-models.md
Last updated 12/23/2020--++ # Deploy MLflow models as Azure web services (preview)
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-model-cognitive-search.md
description: Learn how to use Azure Machine Learning to deploy a model for use w
-+
machine-learning How To Deploy Model Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-model-designer.md
Last updated 10/29/2020--++ # Use the studio to deploy models trained in the designer
machine-learning How To Deploy No Code Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-no-code-deployment.md
Last updated 07/31/2020-+
machine-learning How To Deploy Package Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-package-models.md
Last updated 07/31/2020-+
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-pipelines.md
Last updated 8/25/2020--++
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-profile-model.md
Last updated 07/31/2020-+ zone_pivot_groups: aml-control-methods
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-update-web-service.md
description: Learn how to refresh a web service that is already deployed in Azure Machine Learning. You can update settings such as model, environment, and entry script. -+
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-with-triton.md Binary files differ
machine-learning How To Designer Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-designer-import-data.md
Last updated 11/13/2020--++ # Import data into Azure Machine Learning designer
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-designer-python.md
Last updated 09/09/2020--++ # Run Python code in Azure Machine Learning designer
machine-learning How To Designer Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-designer-transform-data.md
Last updated 06/28/2020--++ # Transform data in Azure Machine Learning designer
machine-learning How To Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-differential-privacy.md
description: Learn how to apply differential privacy best practices to Azure Mac
--++
machine-learning How To Enable App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-app-insights.md
Last updated 09/15/2020--++ # Monitor and collect data from ML web service endpoints
machine-learning How To Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-data-collection.md
Last updated 07/14/2020--++ # Collect data from models in production
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-studio-virtual-network.md
-
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-export-delete-data.md
Last updated 04/24/2020--++ # Export or delete your Machine Learning service workspace data
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-github-actions-machine-learning.md
Last updated 10/19/2020-+
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-high-availability-machine-learning.md
description: Learn how to make your Azure Machine Learning resources more resili
-+
machine-learning How To Homomorphic Encryption Seal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-homomorphic-encryption-seal.md
Last updated 07/09/2020
--++ #Customer intent: As a data scientist, I want to deploy a service that uses homomorphic encryption to make predictions on encrypted data.
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-identity-based-data-access.md
description: Learn how to use identity-based data access to connect to storage s
-+ Last updated 02/22/2021-+ # Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute to train my machine learning models.
machine-learning How To Label Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-label-images.md
-+ Last updated 07/27/2020
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
description: Learn how to link Azure Synapse and Azure Machine Learning workspac
-+ Last updated 03/08/2021-+ # Customer intent: As a workspace administrator, I want to link Azure Synapse workspaces and Azure Machine Learning workspaces for a unified data wrangling experience.
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-log-pipelines-application-insights.md
Last updated 08/11/2020--++ # Collect machine learning pipeline log files in Application Insights for alerts and debugging
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-log-view-metrics.md
Last updated 04/19/2021--++ # Log & view metrics and log files
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-fairness-aml.md
Last updated 11/16/2020--++ # Use Azure Machine Learning with the Fairlearn open-source package to assess the fairness of ML models (preview)
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
Last updated 07/09/2020--++ # Use the interpretability package to explain ML models & predictions in Python (preview)
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
description: Learn how to get explanations for how your automated ML model deter
--++ Last updated 07/09/2020
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability.md
description: Learn how to understand & explain how your machine learning model m
--++
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-files.md
--+ Last updated 02/05/2021 #Customer intent: As a data scientist, I want to create and manage the files in my workspace in Azure Machine Learning studio.
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-quotas.md
Last updated 12/1/2020-+
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-resources-vscode.md
--+ Last updated 11/16/2020
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-rest.md
Last updated 01/31/2020--++ # Create, run, and delete Azure ML resources using REST
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace-cli.md
Last updated 04/02/2021--++ # Create a workspace for Azure Machine Learning with Azure CLI
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace.md
Previously updated : 09/30/2020-- Last updated : 04/22/2021++
The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/pyth
> Using a private endpoint with Azure Machine Learning workspace is currently in public preview. This preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
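As a rough sketch of that SDK path (the network and workspace names are placeholders; the class lives in `azureml.core.private_endpoint`):

```python
from azureml.core import Workspace
from azureml.core.private_endpoint import PrivateEndPointConfig

# All resource names below are placeholders for your own network setup.
pe_config = PrivateEndPointConfig(
    name="my-workspace-pe",
    vnet_name="myvnet",
    vnet_subnet_name="default",
)

ws = Workspace.create(
    name="myworkspace",
    subscription_id="<subscription-id>",
    resource_group="myresourcegroup",
    location="eastus",
    private_endpoint_config=pe_config,
    private_endpoint_auto_approval=True,  # auto-approve the connection
)
```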
-### Multiple workspaces with private endpoint
-
-When you create a private endpoint, a new Private DNS Zone named __privatelink.api.azureml.ms__ is created. This contains a link to the virtual network. If you create multiple workspaces with private endpoints in the same resource group, only the virtual network for the first private endpoint may be added to the DNS zone. To add entries for the virtual networks used by the additional workspaces/private endpoints, use the following steps:
-
-1. In the [Azure portal](https://portal.azure.com), select the resource group that contains the workspace. Then select the Private DNS Zone resource named __privatelink.api.azureml.ms__
-2. In the __Settings__, select __Virtual network links__.
-3. Select __Add__. From the __Add virtual network link__ page, provide a unique __Link name__, and then select the __Virtual network__ to be added. Select __OK__ to add the network link.
-
-For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
- ### Vulnerability scanning Azure Security Center provides unified security management and advanced threat protection across hybrid cloud workloads. You should allow Azure Security Center to scan your resources and follow its recommendations. For more, see [Azure Container Registry image scanning by Security Center](../security-center/defender-for-container-registries-introduction.md) and [Azure Kubernetes Services integration with Security Center](../security-center/defender-for-kubernetes-introduction.md).
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-migrate-from-estimators-to-scriptrunconfig.md
Last updated 12/14/2020--++ # Migrating from Estimators to ScriptRunConfig
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-datasets.md
Last updated 06/25/2020--++ #Customer intent: As a data scientist, I want to detect data drift in my datasets and set alerts for when drift is large.
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-tensorboard.md
Last updated 02/27/2020--++ # Visualize experiment runs and metrics with TensorBoard and Azure Machine Learning
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
Last updated 02/26/2021--++ #Customer intent: As a data scientist using Python, I want to get data into my pipeline and flowing between steps.
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-network-security-overview.md
Last updated 03/02/2021--++
machine-learning How To Retrain Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-retrain-designer.md
Last updated 03/06/2021--++ # Use pipeline parameters to retrain models in the designer
machine-learning How To Run Batch Predictions Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-run-batch-predictions-designer.md
Last updated 02/05/2021--++ # Run batch predictions using Azure Machine Learning designer
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-run-jupyter-notebooks.md
--+ Last updated 01/19/2021 #Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio.
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-save-write-experiment-files.md
--+ Last updated 03/10/2020
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
-
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
Last updated 03/11/2021--++ # Use TLS to secure a web service through Azure Machine Learning
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
Last updated 03/17/2021--++
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-select-algorithms.md
description: How to select Azure Machine Learning algorithms for supervised and
--+
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
Last updated 09/28/2020--++ # Configure and submit training runs
machine-learning How To Set Up Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-vs-code-remote.md
description: Learn how to connect to an Azure Machine Learning compute instance
--+ Last updated 04/08/2021
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-setup-authentication.md
Last updated 04/02/2021--++ # Set up authentication for Azure Machine Learning resources and workflows
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-track-designer-experiments.md
Last updated 01/11/2021-+
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Last updated 04/19/2021--++ # Start, monitor, and track run history
machine-learning How To Train Keras https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-keras.md
Last updated 09/28/2020--+ #Customer intent: As a Python Keras developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-pytorch.md
Last updated 01/14/2020--+ #Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-scikit-learn.md
Last updated 09/28/2020--++ #Customer intent: As a Python scikit-learn developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my machine learning models at scale.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-tensorflow.md
Last updated 09/28/2020--+ # Customer intent: As a TensorFlow developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-datasets.md
Last updated 07/31/2020--++ # Customer intent: As an experienced Python developer, I need to make my data available to my local or remote compute target to train my machine learning models.
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-trigger-published-pipeline.md
Last updated 01/29/2021--++ # Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-auto-ml.md
Last updated 03/08/2021--++ # Troubleshoot automated ML experiments in Python
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-tune-hyperparameters.md
Last updated 02/26/2021--++
Once all of the hyperparameter tuning runs have completed, identify the best per
```Python best_run = hyperdrive_run.get_best_run_by_primary_metric() best_run_metrics = best_run.get_metrics()
-parameter_values = best_run.get_details()['runDefinition']['Arguments']
+parameter_values = best_run.get_details()['runDefinition']['arguments']
print('Best Run Id: ', best_run.id) print('\n Accuracy:', best_run_metrics['accuracy'])
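
# A hedged follow-on sketch (not part of the original snippet): register the
# best run's model for later deployment. The outputs/ path is an assumption
# about what the training script saved.
best_model = best_run.register_model(model_name='hyperdrive-best-model',
                                     model_path='outputs/model.joblib')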
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
Last updated 12/09/2020--++ # Evaluate automated machine learning experiment results
While model evaluation metrics and charts are good for measuring the general qua
For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](how-to-machine-learning-interpretability-automl.md). > [!NOTE]
-> The ForecastTCN model is not currently supported by automated ML explanations and other forecasting models may have limited access to interpretability tools.
+> Interpretability (best model explanation) is not available for automated ML forecasting experiments that recommend any of the following algorithms as the best model or ensemble:
+> * TCNForecaster
+> * AutoArima
+> * ExponentialSmoothing
+> * Prophet
+> * Average
+> * Naive
+> * Seasonal Average
+> * Seasonal Naive
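For the supported cases, a minimal code-first sketch of retrieving an explanation (assumes `best_run` is a completed automated ML run and the `azureml-interpret` package is installed):

```python
from azureml.interpret import ExplanationClient

# best_run is assumed to be a completed run that produced explanations.
client = ExplanationClient.from_run(best_run)
explanation = client.download_model_explanation()
print(explanation.get_feature_importance_dict())  # global feature importances
```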
## Next steps * Try the [automated machine learning model explanation sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Last updated 12/20/2020--++ # Create, review, and deploy automated machine learning models with Azure Machine Learning
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
Last updated 10/30/2020-+ -+ # Make predictions with an AutoML ONNX model in .NET
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
Last updated 02/28/2020--++
Comparing the two techniques:
The outputs of the `AutoMLStep` are the final metric scores of the higher-performing model and that model itself. To use these outputs in further pipeline steps, prepare `OutputFileDatasetConfig` objects to receive them. ```python
-from azureml.pipeline.core import TrainingOutput
+from azureml.pipeline.core import TrainingOutput, PipelineData
metrics_data = PipelineData(name='metrics_data', datastore=datastore,
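                            # Hedged completion of the truncated call above:
                            # TrainingOutput routes the chosen artifact from
                            # the AutoMLStep into each PipelineData object.
                            pipeline_output_name='metrics_output',
                            training_output=TrainingOutput(type='Metrics'))

model_data = PipelineData(name='model_data', datastore=datastore,
                          pipeline_output_name='best_model_output',
                          training_output=TrainingOutput(type='Model'))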
machine-learning How To Use Azure Ad Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-azure-ad-identity.md
Last updated 11/16/2020--++ # Use Azure AD identity with your machine learning web service in Azure Kubernetes Service
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
Last updated 07/23/2020--++ ## As a developer, I need to configure my experiment context with the necessary software packages so my machine learning models can be trained and deployed on different compute targets.
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-event-grid.md Binary files differ
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-labeled-dataset.md
--++ Last updated 05/14/2020 # Customer intent: As an experienced Python developer, I need to export my data labels and use them for machine learning tasks.
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-managed-identities.md
-+ Last updated 10/22/2020
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Last updated 09/22/2020--++ # Track Azure Databricks ML experiments with MLflow and Azure Machine Learning (preview)
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow.md
Last updated 12/23/2020--++ # Train and track ML models with MLflow and Azure Machine Learning (preview)
machine-learning How To Use Pipeline Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-pipeline-parameter.md
Last updated 04/09/2020--++ # Use pipeline parameters in the designer to build versatile pipelines
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-private-python-packages.md
-+ Last updated 07/10/2020 ## As a developer, I need to use private Python packages securely when training machine learning models.
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-reinforcement-learning.md
Last updated 05/05/2020--++
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-secrets-in-runs.md
Last updated 03/09/2020--++ # Use authentication credential secrets in Azure Machine Learning training runs
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-synapsesparkstep.md
Last updated 03/04/2021--++ # Customer intent: As a user of both Azure Machine Learning pipelines and Azure Synapse Analytics, I'd like to use Apache Spark for the data preparation of my pipeline
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-version-track-datasets.md
Last updated 03/09/2020--++ # Customer intent: As a data scientist, I want to version and track datasets so I can use and share them across multiple machine learning experiments.
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-pipeline-yaml.md Binary files differ
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
This code will print a URL to the experiment in the Azure Machine Learning studi
### <a name="inspect-log"></a> Inspect the log file
-In the studio, go to the experiment run (by selecting the previous URL output) followed by **Outputs + logs**. Select the `70_driver_log.txt` file. You should see the following output:
+In the studio, go to the experiment run (by selecting the previous URL output) followed by **Outputs + logs**. Select the `70_driver_log.txt` file. Scroll down through the log file until you see the following output:
```txt Processing 'input'.
marketplace Azure Container Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-offer-listing.md
+
+ Title: Configure Azure Container offer listing details on Microsoft AppSource
+description: Configure Azure Container offer listing details on Microsoft AppSource.
+++++ Last updated : 03/30/2021++
+# Configure Azure Container offer listing details
+
+This page lets you define the offer details such as offer name, description, links, contacts, logos, and screenshots.
+
+> [!NOTE]
+> Provide offer listing details in one language only. English is not required as long as the offer description begins with the phrase, "This application is available only in [non-English language]." It is also acceptable to provide a *Useful link URL* to offer content in a language other than the one used in the Offer listing content.
+
+## Marketplace details
+
+The **Name** you enter here is shown to customers as the title of the offer. This field is pre-populated with the name you entered for **Offer alias** when you created the offer, but you can change it. The name:
+
+- Can include trademark and copyright symbols.
+- Must be 50 characters or less.
+- Can't include emojis.
+
+Provide a short description of your offer for the **Search results summary** (up to 100 characters). This description may be used in marketplace search results.
+
+Provide a **Short description** of your offer, up to 256 characters. This will appear in search results and on your offer's details page.
+++
+Use HTML tags to format your description so it's more engaging. For a list of allowed tags, see [Supported HTML tags](supported-html-tags.md).
+
+Enter the web address (URL) of your organization's privacy policy. Ensure your offer complies with privacy laws and regulations. You must also post a valid privacy policy on your website.
+
+## Useful links
+
+Provide supplemental online documents about your offer. You can add up to 25 links. To add a link, select **+ Add a link** and then complete the following fields:
+
+- **Name** – Customers will see this on your offer's details page.
+- **Link** (URL) – Enter a link for customers to view your online document. The link must start with http:// or https://.
+
+### Contact information
+
+Provide the name, email, and phone number for a **Support contact**, **Engineering contact**, and **Cloud Solution Provider Program** contact. This information is not shown to customers, but will be available to Microsoft, and may be provided to CSP partners.
+
+In the **Support contact** section, provide the **Support website** where Azure Global and Azure Government (if applicable) customers can reach your support team.
+
+## Marketplace media
+
+Provide logos and images to use with your offer. All images must be in PNG format. Blurry images will cause your submission to be rejected.
++
+>[!NOTE]
+>If you have an issue uploading files, ensure that your local network doesn't block the https://upload.xboxlive.com service that's used by Partner Center.
+
+### Logos
+
+Provide a PNG file for the **Large** size logo. Partner Center will use this to create other required sizes. You can optionally replace this with a different image later.
+
+These logos are used in different places in the listing:
+++
+### Screenshots
+
+Add at least one (and up to five) screenshots that show how your offer works. All screenshots must be 1280 x 720 pixels and in PNG format. Add a caption for each screenshot.
+
+### Videos
+
+Add up to five optional videos that demonstrate your offer. They should be hosted on an external video service. Enter each video's name, web address, and a thumbnail PNG image of the video at 1280 x 720 pixels.
+
+For additional marketplace listing resources, see [Best practices for marketplace offer listings](gtm-offer-listing-best-practices.md).
+
+Select **Save draft** before continuing to the next tab in the left-nav menu, **Preview audience**.
+<!-- #### Offer examples
+
+The following examples show how the offer listing fields appear in different places of the offer.
+
+This shows search results in Azure Marketplace:
+
+[![Illustrates the search results in Azure Marketplace](media/azure-container/azure-create-7-search-results-mkt-plc-small.png)](media/azure-container/azure-create-7-search-results-mkt-plc.png#lightbox)
+
+This shows the **Offer listing** page in Azure portal:
++
+This shows search results in Azure portal:
+
+[![Illustrates the search results in Azure portal.](media/azure-container/azure-create-9-search-results-portal-small.png)](media/azure-container/azure-create-9-search-results-portal.png#lightbox) -->
+
+## Next steps
+
+- [Set offer preview audience](azure-container-preview-audience.md)
marketplace Azure Container Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-offer-setup.md
+
+ Title: Create an Azure Container offer on Azure Marketplace
+description: Create an Azure Container offer on Azure Marketplace.
+++++ Last updated : 04/21/2021++
+# Create an Azure Container offer
+
+This article describes how to create an Azure Container offer. All offers go through our certification process, which checks your solution for standard requirements, compatibility, and proper practices.
+
+Before you start, create a commercial marketplace account in [Partner Center](partner-center-portal/create-account.md) and ensure it is enrolled in the commercial marketplace program.
+
+## Before you begin
+
+Review [Plan an Azure Container offer](marketplace-containers.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.
+
+## Create a new offer
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
+2. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
+3. On the Overview page, select **+ New offer** > **Azure Container**.
+
+ :::image type="content" source="media/azure-container/new-offer-azure-container.png" alt-text="The left pane menu options and the 'New offer' button.":::
+
+> [!IMPORTANT]
+> After an offer is published, any edits you make to it in Partner Center appear on Azure Marketplace only after you republish the offer. Be sure to always republish an offer after changing it.
+
+## New offer
+
+Enter an **Offer ID**. This is a unique identifier for each offer in your account.
+
+- This ID is visible to customers in the web address for the offer and in Azure Resource Manager templates, if applicable.
+- Use only lowercase letters and numbers. The ID can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if your Publisher ID is `testpublisherid` and you enter **test-offer-1**, the offer web address will be `https://appsource.microsoft.com/product/dynamics-365/testpublisherid.test-offer-1`.
+- The Offer ID can't be changed after you select **Create**.
+
+Enter an **Offer alias**. This is the name used for the offer in Partner Center.
+
+- This name isn't used on Azure Marketplace. It is different from the offer name and other values shown to customers.
+
+Select **Create** to generate the offer. Partner Center opens the **Offer setup** page.
+
+## Alias
+
+Enter a descriptive name that we'll use to refer to this offer solely within Partner Center. The offer alias (pre-populated with what you entered when you created the offer) won't be used in the marketplace and is different from the offer name shown to customers. If you want to update the offer name later, see the [Offer listing](azure-container-offer-listing.md) page.
+
+## Customer leads
++
+For more information, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+Select **Save draft** before continuing to the next tab in the left-nav menu, **Properties**.
+
+## Next steps
+
+- [Configure offer properties](azure-container-properties.md)
+- [Offer listing best practices](gtm-offer-listing-best-practices.md)
marketplace Azure Container Plan Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-plan-availability.md
+
+ Title: Set plan availability for an Azure Container offer in Microsoft AppSource.
+description: Set plan availability for an Azure Container offer in Microsoft AppSource.
+++++ Last updated : 04/21/2021++
+# Set plan availability for an Azure Container offer
+
+Use this tab to set the availability of your Azure Container plan.
+
+## Set plan availability
+
+To hide your published offer so customers can't search, browse, or purchase it in the marketplace, select the **Hide plan** check box.
+
+This field is commonly used when:
+
+- The offer is to be used only indirectly when referenced through another application.
+- The offer should not be purchased individually.
+- The plan was used for initial testing and is no longer relevant.
+- The plan was used for temporary or seasonal offers and should no longer be offered.
+
+Select **Save draft** before continuing to the next tab in the **Plan overview** left-nav menu, **Technical configuration**.
+
+## Next steps
+
+- [Set plan technical configuration](azure-container-plan-technical-configuration.md)
marketplace Azure Container Plan Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-plan-listing.md
+
+ Title: Set up plan listing details for an Azure Container offer in Microsoft AppSource.
+description: Set up plan listing details for an Azure Container offer in Microsoft AppSource.
+++++ Last updated : 03/30/2021++
+# Set up plan listing details for an Azure Container offer
+
+This page displays information specific to the current plan.
+
+## Plan name
+
+This is pre-filled with the name you gave your plan when you created it, but you can change it. It can be up to 50 characters long. This name appears as the title of this plan in Azure Marketplace and the Azure portal. It's used as the default module name after the plan is ready to be used.
+
+## Plan summary
+
+Provide a short summary of your plan (not the offer). This summary appears in Azure Marketplace search results and can contain up to 100 characters.
+
+## Plan description
+
+Describe what makes this plan unique, as well as differences between plans within your offer. Don't describe the offer, just the plan. This description will appear in Azure Marketplace and in the Azure portal on the offer listing page. It can be the same content you provided in the plan summary and contain up to 2,000 characters.
+
+Select **Save draft** before continuing to the next tab in the **Plan overview** left-nav menu, **Availability**.
+
+## Next steps
+
+- [Set plan availability](azure-container-plan-availability.md)
marketplace Azure Container Plan Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-plan-overview.md
+
+ Title: Create and edit plans for an Azure Container offer in Microsoft AppSource.
+description: Create and edit plans for an Azure Container offer in Microsoft AppSource.
+++++ Last updated : 03/30/2021++
+# Create and edit plans for an Azure Container offer
+
+This overview page lets you create different plan options within the same offer. Plans (formerly called SKUs) can differ in terms of where they are available (Azure Global or Azure Government) and the image referenced by the plan. Your offer must contain at least one plan.
+
+You can create up to 100 plans for each offer: up to 45 of these can be private. Learn more about private plans in [Private offers in the Microsoft commercial marketplace](private-offers.md).
+
+After you create a plan, the **Plan overview** page shows:
+
+- Plan names
+- Pricing model
+- Azure regions (Global or Government)
+- Current publishing status
+- Any available actions
+
+The actions available for a plan vary depending on the current status of your plan. They include:
+
+- **Delete draft** if the plan status is Draft.
+- **Stop sell plan** if the plan status is Published Live.
+
+## Edit a plan
+
+Select a plan **Name** to edit its details.
+
+## Create a plan
+
+To set up a new plan, select **+ Create new plan**.
+
+Enter a unique **Plan ID** for each plan. This ID will be visible to customers in the product's web address. Use only lowercase letters and numbers, dashes, or underscores, and a maximum of 50 characters. You cannot change the Plan ID after you select **Create**.
+
+Enter a **Plan name**. Customers see this name when deciding which plan to select within your offer. Each plan in this offer must have a unique name. For example, you might use an offer name of **Windows Server** with plans **Windows Server 2016** and **Windows Server 2019**.
+
+Select **Create** and continue below.
+
+## Next steps
+
+- [+ Create new plan](azure-container-plan-setup.md), or
+- Exit plan setup and continue with optional [Co-sell with Microsoft](marketplace-co-sell.md), or
+- [Review and publish your offer](review-publish-offer.md)
marketplace Azure Container Plan Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-plan-setup.md
+
+ Title: Set up plans for an Azure Container offer in Microsoft AppSource.
+description: Set up plans for an Azure Container offer in Microsoft AppSource.
+++++ Last updated : 03/30/2021++
+# Set up plans for an Azure Container offer
+
+The **Plan setup** page lets you configure which clouds the plan is available in. Your answers on this tab affect which fields are displayed on other tabs.
+
+## Azure regions
+
+### Azure Global
+
+All Azure Container offers are automatically available in **Azure Global**. Your plan can be used by customers in all global Azure regions that use the marketplace. For details, see [Geographic availability and currency support](marketplace-geo-availability-currencies.md).
+
+### Azure Government
+
+Select **[Azure Government](../azure-government/documentation-government-welcome.md)** to make your offer appear there. This is a government community cloud with controlled access for customers from U.S. federal, state, and local or tribal government agencies, as well as partners eligible to serve them. As the publisher, you're responsible for any compliance controls, security measures, and best practices for this cloud community. Azure Government uses physically isolated data centers and networks (located in the U.S. only). Before [publishing](../azure-government/documentation-government-manage-marketplace-partners.md) to Azure Government, test and confirm your solution within that area as the results may be different. To stage and test your solution, request a trial account from [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/).
+
+> [!NOTE]
+> After your plan is published and available in a specific region, you can't remove that region.
+
+#### Azure Government certifications
+
+If you select **Azure Government**, add your **certifications**. Azure Government services handle data that's subject to certain government regulations and requirements. For example, FedRAMP, NIST 800.171 (DIB), ITAR, IRS 1075, DoD L4, and CJIS. To bring awareness to your certifications for these programs, you can provide up to 100 links that describe your certifications. These can be links to your listings on the program directly or to your own website. These links are visible to Azure Government customers only.
+
+Select **Save draft** before continuing to the next tab in the **Plan overview** left-nav menu, **Plan listing**.
+
+## Next steps
+
+- [Set up the plan listing](azure-container-plan-listing.md)
marketplace Azure Container Plan Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-plan-technical-configuration.md
+
+ Title: Set plan technical configuration for an Azure Container offer in Microsoft AppSource.
+description: Set plan technical configuration for an Azure Container offer in Microsoft AppSource.
+++++ Last updated : 04/21/2021++
+# Set plan technical configuration for an Azure Container offer
+
+Container images must be hosted in a private [Azure Container Registry](https://azure.microsoft.com/services/container-registry/). Use this page to provide reference information for your container image repository inside the Azure Container Registry.
+
+After you submit the offer, your container image is copied to Azure Marketplace in a specific public container registry. All requests from Azure users to use your module are served from the Azure Marketplace public container registry, not your private container registry.
+
+You can target multiple platforms and provide several versions of your module container image using tags. To learn more about tags and versioning, see [Prepare Azure Container technical assets](azure-container-technical-assets.md).
+
+## Image repository details
+
+Provide the **Azure subscription ID** where resource usage is reported and services are billed for the Azure Container Registry that includes your container image. You can find this ID on the [Subscriptions page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal.
+
+Provide the [**Azure resource group name**](../azure-resource-manager/management/manage-resource-groups-portal.md) that contains the Azure Container Registry with your container image. The resource group must be accessible in the subscription ID (above). You can find the name on the [Resource groups](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) page in the Azure portal.
+
+Provide the [**Azure container registry name**](../container-registry/container-registry-intro.md) that has your container image. The container registry must be present in the Azure resource group you provided earlier. Provide only the registry name, not the full login server name. Omit **azurecr.io** from the name. You can find the registry name on the [Container Registries page](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.ContainerRegistry%2Fregistries) in the Azure portal.
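+If you prefer the command line, you can look up all three of these values with the Azure CLI. This is an optional sketch; it assumes the Azure CLI is installed and you've signed in with `az login`:
+
+```azurecli
+# List your subscriptions with their IDs
+az account list --query "[].{Name:name, SubscriptionId:id}" --output table
+
+# List resource groups in the active subscription
+az group list --query "[].name" --output table
+
+# List container registries and the resource group each one belongs to
+az acr list --query "[].{Name:name, ResourceGroup:resourceGroup}" --output table
+```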
+
+Provide the [**Admin username for the Azure Container Registry**](../container-registry/container-registry-authentication.md#admin-account) associated with the Azure Container Registry that has your container image. The username and password (next step) are required to ensure your company has access to the registry. To get the admin username and password, set the **admin-enabled** property to **True** using the Azure Command-Line Interface (CLI). You can optionally set **Admin user** to **Enable** in the Azure portal.
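+For example, this snippet enables the admin account and prints the admin username with the Azure CLI (`<registry-name>` is a placeholder):
+
+```azurecli
+# Enable the admin account on the registry
+az acr update --name <registry-name> --admin-enabled true
+
+# Print the admin username
+az acr credential show --name <registry-name> --query username --output tsv
+```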
++
+Provide the **Admin password for the Azure Container Registry** for the admin username that's associated with the Azure Container Registry that has your container image. The username and password are required to ensure your company has access to the registry. You can get the password from the Azure portal by going to **Container Registry** > **Access Keys** or with the Azure CLI using the [show command](/cli/azure/acr/credential#az-acr-credential-show).
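+For instance, this sketch retrieves the first of the two admin passwords (`<registry-name>` is a placeholder):
+
+```azurecli
+# Print the first admin password for the registry
+az acr credential show --name <registry-name> --query "passwords[0].value" --output tsv
+```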
++
+Provide the **Repository name within the Azure Container Registry** that has your image. You specify the name of the repository when you push the image to the registry. You can find the name of the repository by going to the [Container Registry](https://azure.microsoft.com/services/container-registry/) > **Repositories page**. For more information, see [View container registry repositories in the Azure portal](../container-registry/container-registry-repositories.md). After the name is set, it can't be changed. Use a unique name for each offer in your account.
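+As an optional command-line check, you can list the repositories in your registry to confirm the exact repository name (`<registry-name>` is a placeholder):
+
+```azurecli
+# List all repositories in the registry
+az acr repository list --name <registry-name> --output table
+```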
+
+## Image versions
+
+Customers must be able to automatically get updates from the Azure Marketplace when you publish an update. If they don't want to update, they must be able to stay on a specific version of your image. You can do this by adding new image tags each time you make an update to the image.
+
+Select **Add Image version** to include an **Image tag** that points to the latest version of your image on all supported platforms. It must also include a version tag (for example, starting with xx.xx.xx, where xx is a number). Customers should use [manifest tags](https://github.com/estesp/manifest-tool) to target multiple platforms. All tags referenced by a manifest tag must also be added so we can upload them. All manifest tags (except the latest tag) must start with either X.Y- or X.Y.Z- where X, Y, and Z are integers. For example, if a latest tag points to `1.0.1-linux-x64`, `1.0.1-linux-arm32`, and `1.0.1-windows-arm32`, these six tags need to be added to this field. For details about tags and versioning, see [Prepare your Azure Container technical assets](azure-container-technical-assets.md).
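+As an illustration (the registry, repository, and tags below are hypothetical, and `docker manifest` is one way to build a multi-platform tag; the manifest-tool linked above is another, and older Docker CLI versions may require experimental features to be enabled):
+
+```bash
+# Push each per-platform version tag to the registry
+docker push myregistry.azurecr.io/myapp:1.0.1-linux-x64
+docker push myregistry.azurecr.io/myapp:1.0.1-linux-arm32
+
+# Create a manifest list that points at the per-platform tags, then push it as "latest"
+docker manifest create myregistry.azurecr.io/myapp:latest \
+  myregistry.azurecr.io/myapp:1.0.1-linux-x64 \
+  myregistry.azurecr.io/myapp:1.0.1-linux-arm32
+docker manifest push myregistry.azurecr.io/myapp:latest
+```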
+
+> [!TIP]
+> Add a test tag to your image so you can identify the image during testing.
+
+<!-- possible future restore
+
+## Samples
+
+These examples show how the plan listing fields appear in different views.
+
+These are the fields in Azure Marketplace when viewing plan details:
++
+These are plan details on the Azure portal:
++
+Select **Save draft**, then **← Plan overview** in the left-nav menu to return to the plan overview page.
+-->
+## Next steps
+
+- To **Co-sell with Microsoft** (optional), select it in the left-nav menu. For details, see [Co-sell partner engagement](marketplace-co-sell.md).
+- To **Resell through CSPs** (Cloud Solution Partners, also optional), select it in the left-nav menu. For details, see [Resell through CSP Partners](cloud-solution-providers.md).
+- If you're not setting up either of these or you've finished, it's time to [Review and publish your offer](review-publish-offer.md).
marketplace Azure Container Preview Audience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-preview-audience.md
+
+ Title: Set the preview audience for an Azure Container offer in Microsoft AppSource.
+description: Set the preview audience for an Azure Container offer in Microsoft AppSource.
+++++ Last updated : 03/30/2021++
+# Set the preview audience for an Azure Container offer
+
+This article describes how to configure a preview audience for an Azure Container offer in the commercial marketplace using Partner Center. The preview audience can review your offer before it goes live.
+
+## Define a preview audience
+
+On the **Preview audience** page, define a limited audience who can review your Container offer before you publish it live to the broader marketplace audience. You define the preview audience using Azure subscription IDs, along with an optional description for each. Neither of these fields can be seen by customers. You can find your Azure subscription ID on the **Subscriptions** page in the Azure portal.
+
+Add at least one Azure subscription ID, either individually (up to 10) or by uploading a CSV file (up to 100) to define who can preview your offer. If your offer is already live, you may still define a preview audience for testing updates to your offer.
+
+## Add subscription IDs manually
+
+1. On the **Preview audience** page, add a single Azure subscription ID and an optional description in the boxes provided.
+1. To add another subscription ID, select the **Add ID (Max 10)** link.
+1. Select **Save draft** before continuing to the next tab to set up plans.
+
+## Add subscription IDs using a CSV file
+
+1. On the **Preview audience** page, select the **Export Audience (csv)** link.
+1. Open the CSV file. In the **Id** column, enter the Azure subscription IDs you want to add to the preview audience.
+1. In the **Description** column, you have the option to add a description for each entry.
+1. In the **Type** column, add **SubscriptionId** to each row that has an ID. (A sample file is shown after these steps.)
+1. Save the file as a CSV file.
+1. On the **Preview audience** page, select the **Import Audience (csv)** link.
+1. In the **Confirm** dialog box, select **Yes**, then upload the CSV file.
+1. Select **Save draft** before continuing to the next tab to set up plans.
+
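+For reference, a minimal import file matching the columns described above might look like this (the subscription IDs and descriptions are placeholders; the exported file defines the actual column order):
+
+```csv
+Id,Description,Type
+aaaabbbb-1111-2222-3333-ccccddddeeee,QA team subscription,SubscriptionId
+99998888-7777-6666-5555-444433332222,Early-access customer,SubscriptionId
+```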
+> [!IMPORTANT]
+> After you view your offer in Preview, you must select **Go live** to publish your offer to the public.
+
+## Next steps
+
+- [Create and manage plans](azure-container-plan-overview.md)
marketplace Azure Container Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-properties.md
+
+ Title: Configure Azure Container offer properties on Azure Marketplace
+description: Configure Azure Container offer properties on Azure Marketplace.
+++++ Last updated : 03/30/2021++
+# Configure Azure Container offer properties
+
+This page lets you define the categories used to group your offer on Azure Marketplace, your application version, and the legal contracts that support your offer.
+
+## Categories
+
+Select categories and subcategories to place your offer in the appropriate marketplace search areas. Be sure to describe later in the offer description how your offer supports these categories.
+
+- Select a Primary category.
+- To add a second optional category (Secondary), select the **+Categories** link.
+- Select up to two subcategories for the Primary and/or Secondary category. If no subcategory is applicable to your offer, select **Not applicable**. Use Ctrl+click to select a second subcategory.
+
+See the full list of categories and subcategories in [Offer Listing Best Practices](gtm-offer-listing-best-practices.md).
+
+### Legal
+
+<!-- Don't use [!INCLUDE [Legal contracts section](includes/legal-contracts-intro.md)] because amendments are not applicable to containers. -->
+Under **Legal**, provide terms and conditions for your offer. You have two options:
+
+- [Use the standard contract](#use-the-standard-contract)
+- [Use your own terms and conditions](#use-your-own-terms-and-conditions)
+
+To learn about the standard contract, see [Standard Contract for the Microsoft commercial marketplace](standard-contract.md). You can download the [Standard Contract](https://go.microsoft.com/fwlink/?linkid=2041178) PDF (make sure your pop-up blocker is off).
+
+#### Use the standard contract
++
+Select **Save draft** before continuing to the next tab in the left-nav menu, **Offer listing**.
+
+#### Use your own terms and conditions
++
+Select **Save draft** before continuing to the next tab in the left-nav menu, **Offer listing**.
+
+## Next steps
+
+- [Configure offer listing](azure-container-offer-listing.md)
marketplace Azure Container Technical Assets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-container-technical-assets.md
+
+ Title: Prepare your Azure container technical assets
+description: Technical resource and guidelines to help you configure a container offer on Azure Marketplace.
+++++ Last updated : 03/30/2021++
+# Prepare Azure container technical assets
+
+This article gives technical resources and recommendations to help you create a container offer on Azure Marketplace.
+
+## Before you begin
+
+For Quickstarts, Tutorials, and Samples, see the [Azure Container Instances documentation](../container-instances/index.yml).
+
+## Fundamental technical knowledge
+
+Designing, building, and testing these assets takes time and requires technical knowledge of both the Azure platform and the technologies used to build the offer.
+
+In addition to your solution domain, your engineering team should have knowledge about the following Microsoft technologies:
+
+- Basic understanding of [Azure Services](https://azure.microsoft.com/services/)
+- How to [design and architect Azure applications](https://azure.microsoft.com/solutions/architecture/)
+- Working knowledge of [Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/), [Azure Storage](https://azure.microsoft.com/services/?filter=storage), and [Azure Networking](https://azure.microsoft.com/services/?filter=networking)
+- Working knowledge of [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/)
+- Working knowledge of [JSON](https://www.json.org/)
+
+## Suggested tools
+
+Choose one or both of the following scripting environments to help manage your Container image:
+
+- [Azure PowerShell](/powershell/azure/)
+- [Azure CLI](/cli/azure/)
+
+We recommend adding these tools to your development environment:
+
+- [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows)
+- [Visual Studio Code](https://code.visualstudio.com/)
+ - Extension: [Azure Resource Manager Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools)
+ - Extension: [Beautify](https://marketplace.visualstudio.com/items?itemName=HookyQR.beautify)
+ - Extension: [Prettify JSON](https://marketplace.visualstudio.com/items?itemName=mohsen1.prettify-json).
+
+Review the available tools on the [Azure Developer Tools](https://azure.microsoft.com/) page. If you're using Visual Studio, review the tools available in the [Visual Studio Marketplace](https://marketplace.visualstudio.com/).
+
+## Create the container image
+
+You can't deploy an image to Azure Container Instances from an on-premises registry.
+
+- If you already have a working container in your local registry, create an Azure Container Registry and upload your container image to it. To learn more, see [Tutorial: Build and deploy container images in the cloud with Azure Container Registry Tasks](../container-registry/container-registry-tutorial-quick-task.md).
+
+- If you don't have a container image yet and need to containerize your existing application or create a new container-based application, clone the application source code from GitHub, create a container image from the application source, and test the image in a local Docker environment (see the sketch after this list). To learn more, see [Tutorial: Create a container image for deployment to Azure Container Instances](../container-instances/container-instances-tutorial-prepare-app.md).
+
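+The following is a minimal sketch of that flow, assuming the Azure CLI and Docker are installed and signed in; all resource names are placeholders:
+
+```bash
+# Create a resource group and a private Azure Container Registry
+az group create --name myResourceGroup --location eastus
+az acr create --resource-group myResourceGroup --name myregistry --sku Basic
+
+# Authenticate Docker to the registry, then tag and push a locally built image
+az acr login --name myregistry
+docker build --tag myapp:v1 .
+docker tag myapp:v1 myregistry.azurecr.io/myapp:v1
+docker push myregistry.azurecr.io/myapp:v1
+```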
+## Next steps
+
+- [Create your container offer](azure-container-offer-setup.md)
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
Previously updated : 03/10/2021 Last updated : 04/21/2021
Last updated 03/10/2021
Generating SAS URIs for your VHDs has these requirements:

-- They only support unmanaged VHDs.
- Only List and Read permissions are required. Don't provide Write or Delete access.
- The duration for access (expiry date) should be a minimum of three weeks from when the SAS URI is created.
- To protect against UTC time changes, set the start date to one day before the current date. For example, if the current date is June 16, 2020, select 6/15/2020.
marketplace Create Azure Container Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-azure-container-offer.md
- Title: Create an Azure container offer - Azure Marketplace
-description: Learn how to create and publish a container offer to Azure Marketplace.
----- Previously updated : 06/17/2020--
-# Create an Azure container offer in Azure Marketplace
-
-This article describes how to create and publish a container offer for Azure Marketplace. Before starting, [Create a Commercial Marketplace account in Partner Center](create-account.md) if you haven't done so yet. Ensure your account is enrolled in the commercial marketplace program.
-
-## Create a new offer
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-
-2. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-
-3. On the Overview page, select **+ New offer** > **Azure Container**.
-
- ![Illustrates the left-navigation menu.](./partner-center-portal/media/new-offer-azure-container.png)
-
-> [!TIP]
-> After an offer is published, edits made to it in Partner Center only appear in online stores after republishing the offer. Make sure you always republish after making changes.
-
-### Offer ID and alias
-
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-- This ID is visible to customers in the web address for the marketplace offer and Azure Resource Manager templates, if applicable.
-- Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
-- The Offer ID can't be changed after you select **Create**.
-
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-- This name isn't used in the marketplace and is different from the offer name and other values shown to customers.
-- This can't be changed after you select **Create**.
-
-Select **Create** to generate the offer and continue.
-
-## Offer overview
-
-The **Offer overview** page shows a visual representation of the steps required to publish this offer (both completed and upcoming) and how long each step should take to complete.
-
-This page shows different links based on the current status of the offer. For example:
-- If the offer is a draft - Delete draft offer
-- If the offer is live - [Stop selling the offer](./partner-center-portal/update-existing-offer.md#stop-selling-an-offer-or-plan)
-- If the offer is in preview - [Go-live](review-publish-offer.md#previewing-and-approving-your-offer)
-- If you haven't completed publisher sign-out - [Cancel publishing](review-publish-offer.md#cancel-publishing)
-
-## Offer setup
-
-Follow these steps to set up your offer.
-
-### Customer leads – optional
-
-When publishing your offer to the commercial marketplace with Partner Center, you can
-connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product.
-
-1. **Select a lead destination where you want us to send customer leads**. Partner Center supports the following CRM systems:
-
- - [Dynamics 365](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md) for Customer Engagement
- - [Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md)
- - [Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md)
-
- > [!NOTE]
- > If your CRM system isn't listed above, use [Azure Table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md) or [Https Endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md) to store customer lead data, then export the data to your CRM system.
-
-2. Connect your offer to the lead destination when publishing in Partner Center.
-3. Confirm the connection to the lead destination is configured properly. After you publish it in Partner Center, we'll validate the connection and send you a test lead. While you preview the offer before it goes live, you can also test your lead connection by trying to purchase the offer yourself in the preview environment.
-4. Make sure the connection to the lead destination stays updated so you don't lose any leads.
-
-Here are some additional lead management resources:
-- [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md)
-- [Common questions about lead management](lead-management-faq.md#common-questions-about-lead-management)
-- [Troubleshooting lead configuration errors](lead-management-faq.md#publishing-config-errors)
-- [Lead Management Overview](https://assetsprod.microsoft.com/mpn/cloud-marketplace-lead-management.pdf) PDF (Make sure your pop-up blocker is turned off).
-
-Select **Save draft** before continuing.
-
-### Properties
-
-This page lets you define the categories used to group your offer on the marketplace and the legal contracts that support your offer.
-
-#### Category
-
-Select categories and subcategories to place your offer in the appropriate marketplace search areas. Be sure to describe how your offer supports these categories in the offer description. Select:
-- At least one and up to two categories, including a primary and a secondary category (optional).
-- Up to two subcategories for each primary and/or secondary category. If no subcategory is applicable to your offer, select **Not applicable**.
-
-See the full list of categories and subcategories in [Offer Listing Best Practices](gtm-offer-listing-best-practices.md). Containers always appear under **Containers** and then the **Container Images** category.
-
-#### Legal
-
-You must provide terms and conditions for the offer. There are two options:
-- Use the Standard Contract for the Microsoft commercial marketplace.
-- Provide your own terms and conditions.
-
-#### Standard contract for the Microsoft commercial marketplace
-
-We offer a Standard Contract template to help facilitate transactions in the commercial marketplace. You can choose to offer your solution under the Standard Contract, which customers only need to check and accept once. This is a good option if you don't want to create custom terms and conditions.
-
-To learn more about the Standard Contract, see [Standard Contract for the Microsoft commercial marketplace](standard-contract.md). You can also download the [Standard Contract](https://go.microsoft.com/fwlink/?linkid=2041178) PDF (make sure your pop-up blocker is off).
-
-To use the Standard Contract, select **Use the Standard Contract for Microsoft's commercial marketplace**.
-
-> [!NOTE]
-> After you publish an offer using the Standard contract for Microsoft commercial marketplace, you can't use your own custom terms and conditions. Either offer your solution under the Standard Contract or under your own terms and conditions.
--
-##### Your own terms and conditions
-
-To provide your own custom terms and conditions, enter them in the **Terms and conditions** box. You can enter an unlimited number of characters of text in this box. Customers must accept these terms before they can try your offer.
-
-Select **Save draft** before continuing to the next section, Offer listing.
-
-## Offer listing
-
-This page lets you define the offer details that are displayed in the commercial marketplace. This includes the offer name, description, and images.
-
-> [!NOTE]
-> Offer details aren't required to be in English if the offer description begins with the phrase, "This application is available only in [non-English language]." It's also okay to provide a Useful Link to offer content in a language that's different from the one used in the Offer listing details.
-
-### Name
-
-The name you enter here displays as the title of your offer. This field is pre-filled with the text you entered in the **Offer alias** box when you created the offer. You can change this name later.
-
-The name:
-- May be trademarked (and you may include trademark and copyright symbols).
-- Can't be more than 50 characters long.
-- Can't include emojis.
-
-### Search results summary
-
-A short description of your offer. This can be up to 100 characters long and is used in marketplace search results.
-
-### Long summary
-
-A more detailed description of your offer. This can be up to 256 characters long and is used in marketplace search results.
-
-### Description
----
-#### Privacy policy link
-
-Enter the web address of your organization's privacy policy. You're responsible for ensuring that your offer complies with privacy laws and regulations. You're also responsible for posting a valid privacy policy on your website.
-
-#### Useful links
-
-Provide supplemental online documents about your offer. You can add up to 25 links. To add a link, select **+ Add a link** and then complete the following fields:
-- **Title** – Customers will see this on your offer's details page.
-- **Link (URL)** – Enter a link for customers to view your online document. The link must start with http:// or https://.
-
-### Contact Information
-
-You must provide the name, email, and phone number for a **Support contact** and an **Engineering contact**. This information isn't shown to customers but it is available to Microsoft. It may also be provided to Cloud Solution Provider (CSP) partners.
-- Support contact (required): For general support questions.
-- Engineering contact (required): For technical questions and certification issues.
-- CSP Program contact (optional): For reseller questions related to the CSP program.
-
-In the **Support contact** section, provide the **Support website** where partners can find support for your offer based on whether the offer is available in global Azure, Azure Government, or both.
-
-In the **CSP Program contact** section, provide the link (**CSP Program Marketing Materials**) where CSP partners can find marketing materials for your offer.
-
-#### Additional marketplace listing resources
-
-To learn more about creating offer listings, see [Offer listing best practices](gtm-offer-listing-best-practices.md)
-
-### Marketplace images
-
-Provide logos and images to use with your offer. All images must be in PNG format. Blurry images will be rejected.
--
->[!Note]
->If you have an issue uploading files, make sure your local network does not block the https://upload.xboxlive.com service used by Partner Center.
-
-#### Store logos
-
-Provide a PNG file for the **Large** size logo. Partner Center will use this to create a **Small** and a **Medium** logo. You can optionally replace these with different images later.
-- **Large** (from 216 x 216 to 350 x 350 px, required)
-- **Medium** (90 x 90 px, optional)
-- **Small** (48 x 48 px, optional)
-
-These logos are used in different places in the listing:
---
-#### Screenshots (optional)
-
-Add up to five screenshots that show how your offer works. Each must be 1280 x 720 pixels in size and in PNG format.
-
-#### Videos (optional)
-
-Add up to five videos that demonstrate your offer. Enter the video's name, its web address, and a thumbnail PNG image of the video at 1280 x 720 pixels in size.
-
-#### Offer examples
-
-The following examples show how the offer listing fields appear in different places of the offer.
-
-This shows the **Offer listing** page in Azure Marketplace:
--
-This shows search results in Azure Marketplace:
--
-This shows the **Offer listing** page in Azure portal:
--
-This shows search results in Azure portal:
--
-## Preview
-
-On the Preview tab, you can choose a limited **Preview Audience** for validating your offer before publishing it live.
-
-> [!IMPORTANT]
-> After you view your offer in **Preview**, you must select **Go live** to publish your offer to the public.
-
-Specify your preview audience using Azure subscription ID GUIDs, along with an optional description for each. Neither of these fields can be seen by customers.
-
-> [!NOTE]
-> You can find your Azure subscription ID on the Subscriptions page in Azure portal.
-
-Add at least one Azure subscription ID, either individually (up to 10) or by uploading a CSV file (up to 100). By adding these subscription IDs, you determine who can preview your offer before it's published live. If your offer is already live, you can choose a preview audience to test changes or updates to your offer.
-
-Select **Save draft** before continuing.
-
-## Plan overview
-
-This tab lets you provide different plan options within the same offer. Plans (formerly called SKUs) can differ in terms of what clouds are available, such as global clouds, Government clouds, and the image referenced by the plan. To list your offer in the commercial marketplace, you must set up at least one plan.
-
-You can create up to 100 plans for each offer: up to 45 of these can be private. Learn more about private plans in [Private offers in the Microsoft commercial marketplace](private-offers.md).
-
-After you create your plans, the **Plan overview** tab shows:
-- Plan names
-- Pricing model
-- Azure regions (Global or Government)
-- Current publishing status
-- Any available actions
-
-The actions available in the Plan overview vary depending on the current status of your plan. They include:
-- **Delete draft** – If the plan status is a Draft.
-- **Stop sell plan** – If the plan status is published live.
-
-### Create new plan
-
-Select **Create new plan**. The **New plan** dialog box appears.
-
-In the **Plan ID** box, create a unique plan identifier for each plan in this offer. This ID will be visible to customers in the product's web address. Use only lowercase letters and numbers, dashes, or underscores, and a maximum of 50 characters.
-
-> [!NOTE]
-> The plan ID can't be changed after you select **Create**.
-
-In the **Plan name** box, enter a name for this plan. Customers see this name when deciding which plan to select within your offer. Create a unique name for each plan in this offer. For example, you might use an offer name of **Windows Server** with plans **Windows Server 2016** and **Windows Server 2019**.
-
-### Plan setup
-
-This tab lets you choose which clouds the plan is available in. Your answers on this tab affect which fields are displayed on other tabs.
-
-#### Azure regions
-
-All plans for Azure Container offers are automatically made available in **Azure Global**. Your plan can be used by customers in all global Azure regions that use the commercial marketplace. For details, see [Geographic availability and currency support](marketplace-geo-availability-currencies.md).
-
-Select the [Azure Government](../azure-government/documentation-government-welcome.md) option to make your solution appear here. This is a government community cloud with controlled access for customers from U.S. federal, state, and local or tribal government agencies, as well as partners eligible to serve them. As the publisher, you're responsible for any compliance controls, security measures, and best practices for this cloud community. Azure Government uses physically isolated data centers and networks (located in the U.S. only). Before [publishing](../azure-government/documentation-government-manage-marketplace-partners.md) to Azure Government, test and confirm your solution within that area as the results may be different. To create and test your solution, request a trial account from [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/).
-
-> [!NOTE]
-> After your plan is published and available in a specific region, you can't remove that region.
-
-#### Azure Government certifications
-
-This option can only be seen if **Azure Government** is selected under **Azure regions**.
-
-Azure Government services handle data that's subject to certain government regulations and requirements. For example, FedRAMP, NIST 800.171 (DIB), ITAR, IRS 1075, DoD L4, and CJIS.
-
-To show your certifications for these programs, you can provide up to 100 links that describe them. These can be links to your listings on the program directly or to your own website. These links are visible to Azure Government customers only.
-
-### Plan listing
-
-This tab displays specific information for each different plan within the current offer.
-
-### Plan name
-
-This is pre-filled with the name you gave your plan when you created it. You can change this name as needed. It can be up to 50 characters long. This name appears as the title of this plan in Azure Marketplace and Azure portal. It's used as the default module name after the plan is ready to be used.
-
-### Plan summary
-
-A short summary of your software plan (not the offer). This summary appears in Azure Marketplace search results and can contain up to 100 characters.
-
-### Plan description
-
-Describe what makes this software plan unique, as well as differences between plans within your offer. Don't describe the offer, just the plan. This description will appear in Azure Marketplace and in the Azure portal on the **Offer listing** page. It can be the same content you provided in the plan summary and contain up to 2,000 characters.
-
-Select **Save** after completing these fields.
-
-#### Plan examples
-
-The following examples show how the plan listing fields appear in different views.
-
-These are the fields in Azure Marketplace when viewing plan details:
--
-These are plan details on the Azure portal:
--
-### Plan availability
-
-If you want to hide your published offer so customers can't search, browse, or purchase it in the marketplace, select the **Hide plan** check box on the **Availability** tab.
-
-This field is used when:
-- The offer is intended to be used indirectly when referenced through another application.
-- The offer should not be purchased individually.
-- The plan was used for initial testing and is no longer relevant.
-- The plan was used for temporary or seasonal offers and should no longer be offered.
-
-## Technical configuration
-
-Container images must be hosted in a private [Azure Container Registry](https://azure.microsoft.com/services/container-registry/). On the **Technical Configuration** tab, provide reference information for your container image repository inside the Azure Container Registry.
-
-After the offer is published, your container image is copied to Azure Marketplace in a specific public container registry. All requests to use your container image are served from the Azure Marketplace public container registry, not your private one. For details, see [Prepare your Azure Container technical assets](create-azure-container-technical-assets.md).
-
-### Image repository details
-
-Provide the following information on the **Image repository details** tab.
-
-**Azure subscription ID** – Provide the subscription ID where usage is reported and services are billed for the Azure Container Registry that includes your container image. You can find this ID on the [Subscriptions page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) in the Azure portal.
-
-**Azure resource group name** – Provide the [resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) name that contains the Azure Container Registry with your container image. The resource group must be accessible in the subscription ID (above). You can find the name on the [Resource groups](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceGroups) page in the Azure portal.
-
-**Azure Container Registry name** – Provide the name of the [Azure Container Registry](../container-registry/container-registry-intro.md) that has your container image. The container registry must be in the Azure resource group you provided earlier. Include only the registry name, not the full login server name. Be sure to omit **azurecr.io** from the name. You can find the registry name on the [Container Registries page](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.ContainerRegistry%2Fregistries) in the Azure portal.
-
-**Admin username for the Azure Container Registry** – Provide the [admin username](../container-registry/container-registry-authentication.md#admin-account) linked with the Azure Container Registry that has your container image. The username and password are required to ensure your company has access to the registry. To get the admin username and password, set the **admin-enabled** property to **True** using the Azure Command-Line Interface (CLI). You can optionally set **Admin user** to **Enable** in Azure portal.
-
- :::image type="content" source="./partner-center-portal/media/azure-create-container-offer-images/azure-create-12-update-container-registry-edit.png" alt-text="Illustrates the Update container registry dialog box.":::
-
-**Password for the Azure Container Registry** – Provide the password for the admin username that's associated with the Azure Container Registry that has your container image. The username and password are required to ensure your company has access to the registry. You can get the password from the Azure portal by going to **Container Registry** > **Access Keys** or with the Azure CLI using the [show command](/cli/azure/acr/credential#az_acr_credential_show).
--
-**Repository name within the Azure Container Registry**. Provide the name of the Azure Container Registry repository that has your image. Include the name of the repository when you push the image to the registry. You can find the name of the repository by going to the [Container Registry](https://azure.microsoft.com/services/container-registry/) > **Repositories** page. For more information, see [View container registry repositories in Azure portal](../container-registry/container-registry-repositories.md).
-
-> [!NOTE]
-> After the name is set, it can't be changed. Use a unique name for each offer in your account.
-
-### Image tags for new versions of your offer
-
-Customers must be able to automatically get updates from the Azure Marketplace when you publish an update. If they don't want to update, they must be able to stay on a specific version of your image. You can do this by adding new image tags each time you make an update to the image.
-
-### Image tag
-
-This field must include a **latest** tag that points to the latest version of your image on all supported platforms. It must also include a version tag (for example, starting with xx.xx.xx, where xx is a number). Customers should use [manifest tags](https://github.com/estesp/manifest-tool) to target multiple platforms. All tags referenced by a manifest tag must also be added so we can upload them.
-
-All manifest tags (except the latest tag) must start with either X.Y **-** or X.Y.Z- where X, Y, and Z are integers. For example, if a **latest** tag points to 1.0.1-linux-x64, 1.0.1-linux-arm32, and 1.0.1-windows-arm32, these six tags need to be added to this field. For details, see [Prepare your Azure Container technical assets](create-azure-container-technical-assets.md).
-
-> [!NOTE]
-> Remember to add a test tag to your image so you can identify the image during testing.
-
-## Review and publish
-
-After you've completed all the required sections of the offer, you can submit it to review and publish.
-
-In the top-right corner of the portal, select **Review and publish**.
-
-On the review page you can:
--- See the completion status for each section of the offer. You can't publish until all sections of the offer are marked as complete.
 - **Not started** – Hasn't been started and needs to be completed.
 - **Incomplete** – Has errors that need to be fixed or requires that you provide more information. See the sections earlier in this document for help.
 - **Complete** – Includes all required data with no errors. All sections of the offer must be complete before you can submit the offer.
- Provide testing instructions to the certification team to ensure your offer is tested correctly. Also, provide any supplementary notes that are helpful for understanding your offer.
-
-To submit the offer for publishing, select **Publish**.
-
-We'll send you an email to let you know when a preview version of the offer is available to review and approve.
-
-To publish your offer to the public, go to Partner Center and select **Go-live**.
-
-## Next step
--- [Update an existing offer in the commercial marketplace](./partner-center-portal/update-existing-offer.md)
marketplace Marketplace Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-containers.md
Previously updated : 11/30/2020 Last updated : 03/30/2021
-# Publishing guide for Azure container offers
+# Plan an Azure container offer
Azure container offers help you publish your container image to Azure Marketplace. Use this guide to understand the requirements for this offer type.
-Azure container offers are transaction offers that are deployed and billed through Azure Marketplace. The listing option that a user sees is "Get It Now."
+Azure container offers are transaction offers that are deployed and billed through Azure Marketplace. The listing option a user sees is **Get It Now**.
Use the Azure Container offer type when your solution is a Docker container image that's set up as a Kubernetes-based Azure Container instance. > [!NOTE]
-> An Azure Container instance is a run time docker instance that provides the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service. Container instances can be deployed directly to Azure or orchestrated by Azure Kubernetes Services or Azure Kubernetes Service Engine.
+> An Azure Container instance is a run-time docker instance that provides the fastest and simplest way to run a container in Azure, without having to manage any virtual machines or adopt a higher-level service. Container instances can be deployed directly to Azure or orchestrated by Azure Kubernetes Services or Azure Kubernetes Service Engine.
-Microsoft currently supports free and bring-your-own-license (BYOL) licensing models.
+## Licensing options
+
+These are the available licensing options for Azure Container offers:
+
+| Licensing option | Transaction process |
+| | |
+| Free | List your offer to customers for free. |
+| BYOL | The Bring Your Own Licensing option lets your customers bring existing software licenses to Azure.\* |
+|
+
+\* As the publisher, you support all aspects of the software license transaction, including (but not limited to) order, fulfillment, metering, billing, invoicing, payment, and collection.
+
+## Customer leads
+
+When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive; otherwise, connecting to a CRM is optional. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
+
+## Legal contracts
+
+To simplify the procurement process for customers and reduce legal complexity for software vendors, Microsoft offers a standard contract you can use for your offers in the commercial marketplace. When you offer your software under the standard contract, customers only need to read and accept it one time, and you don't have to create custom terms and conditions.
+
+You can choose to provide your own terms and conditions, instead of the standard contract. Customers must accept these terms before they can try your offer.
+
+## Offer listing details
+
+> [!NOTE]
+> Offer listing content is not required to be in English if the offer description begins with the phrase "This application is available only in [non-English language]".
+
+To help create your offer more easily, prepare these items ahead of time. All are required except where noted.
+
+- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 50 characters.
+- **Search results summary** – The purpose or function of your offer as a single sentence with no line breaks in 100 characters or less. This is used in the commercial marketplace listing(s) search results.
+- **Short description** – Details of the purpose or function of the offer, written in plain text with no line breaks. This will appear on your offer's details page.
+- **Description** – This description displays in the commercial marketplace listing(s) overview. Consider including a value proposition, key benefits, intended user base, any category or industry associations, in-app purchase opportunities, any required disclosures, and a link to learn more. This text box has rich text editor controls to make your description more engaging. Optionally, use HTML tags for formatting.
+- **Privacy policy link** – The URL for your company's privacy policy. You are responsible for ensuring your app complies with privacy laws and regulations.
+- **Useful links** (optional): Links to various resources for users of your offer. For example, forums, FAQs, and release notes.
+- **Contact information**
+ - **Support contact** – The name, phone, and email that Microsoft partners will use when your customers open tickets. Include the URL for your support website.
+ - **Engineering contact** – The name, phone, and email for Microsoft to use directly when there are problems with your offer. This contact information isn't listed in the commercial marketplace.
+ - **CSP Program contact** (optional): The name, phone, and email if you opt in to the CSP program, so those partners can contact you with any questions. You can also include a URL to your marketing materials.
+- **Media**
+ - **Logos** – A PNG file for the **Large** logo. Partner Center will use this to create other required logo sizes. You can optionally replace these with different images later.
+ - **Screenshots** – At least one and up to five screenshots that show how your offer works. Images must be 1280 x 720 pixels, in PNG format, and include a caption.
+ - **Videos** (optional) – Up to four videos that demonstrate your offer. Include a name, URL for YouTube or Vimeo, and a 1280 x 720 pixel PNG thumbnail.
+
+> [!Note]
+> Your offer must meet the general [commercial marketplace certification policies](/legal/marketplace/certification-policies#100-general) to be published to the commercial marketplace.
+
+## Preview audience
+
+A preview audience can access your offer before it's published live in the online stores, so you can test the end-to-end functionality first. On the **Preview audience** page, you can define a limited preview audience.
+
+You can send invites to Azure subscription IDs. Add up to 10 IDs manually or import up to 100 with a .csv file. If your offer is already live, you can still define a preview audience for testing any changes or updates to your offer.
+
+## Plans and pricing
+
+Container offers require at least one plan. A plan defines the solution scope and limits. You can create multiple plans for your offer to give your customers different technical and licensing options.
+
+Containers support two licensing models: Free or Bring Your Own License (BYOL). BYOL means you'll bill your customers directly, and Microsoft won't charge you any fees. Microsoft only passes through Azure infrastructure usage fees. For more information, see [Commercial marketplace transact capabilities](marketplace-commercial-transaction-capabilities-and-considerations.md).
+
+## Additional sales opportunities
+
+You can choose to opt into Microsoft-supported marketing and sales channels. When creating your offer in Partner Center, you will see two tabs toward the end of the process:
+
+- **Resell through CSPs** – Allow Microsoft Cloud Solution Providers (CSP) partners to resell your solution as part of a bundled offer. For more information about this program, see [Cloud Solution Provider program](cloud-solution-providers.md).
+- **Co-sell with Microsoft** – Let Microsoft sales teams consider your IP co-sell eligible solution when evaluating their customers' needs. For details about co-sell eligibility, see [Requirements for co-sell status](/legal/marketplace/certification-policies). For details on preparing your offer for evaluation, see [Co-sell option in Partner Center](commercial-marketplace-co-sell.md).
## Container offer requirements

| Requirement | Details |
|: |: |
-| Billing and metering | Support either the free or BYOL billing model.<br><br> |
-| Image built from a Dockerfile | Container images must be based on the Docker image specification and built from a Dockerfile.<br> <br>For more information about building Docker images, see the "Usage" section of [Dockerfile reference](https://docs.docker.com/engine/reference/builder/#usage).<br><br> |
-| Hosting in an Azure Container Registry repository | Container images must be hosted in an Azure Container Registry repository.<br> <br>For more information about working with Azure Container Registry, see [Quickstart: Create a private container registry by using the Azure portal](../container-registry/container-registry-get-started-portal.md).<br><br> |
-| Image tagging | Container images must contain at least one tag (maximum number of tags: 16).<br><br>For more information about tagging an image, see the `docker tag` page on the [Docker Documentation](https://docs.docker.com/engine/reference/commandline/tag) site.<br><br> |
+| Billing and metering | Support either the free or BYOL billing model. |
+| Image built from a Dockerfile | Container images must be based on the Docker image specification and built from a Dockerfile. For more information about building Docker images, see the "Usage" section of [Dockerfile reference](https://docs.docker.com/engine/reference/builder/#usage). |
+| Hosting in an Azure Container Registry repository | Container images must be hosted in an Azure Container Registry repository. For more information about working with Azure Container Registry, see [Quickstart: Create a private container registry by using the Azure portal](../container-registry/container-registry-get-started-portal.md). |
+| Image tagging | Container images must contain at least one tag (maximum number of tags: 16). For more information about tagging an image, see the `docker tag` page on the [Docker Documentation](https://docs.docker.com/engine/reference/commandline/tag) site. |
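To illustrate the hosting and tagging requirements in the table above, here's a hedged sketch of tagging a locally built image and pushing it to an Azure Container Registry repository. The registry and image names are placeholders:

```powershell
# Hedged sketch: push a locally built image to Azure Container Registry.
# "myregistry" and "myoffer" are placeholder names.
Connect-AzContainerRegistry -Name "myregistry"   # wraps 'docker login' for ACR

docker tag myoffer:latest myregistry.azurecr.io/myoffer:1.0.0   # at least one tag is required
docker push myregistry.azurecr.io/myoffer:1.0.0
```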
## Next steps

-- To prepare technical assets for a container offer, see [Create Azure container technical assets](create-azure-container-technical-assets.md).
-
-- To create an Azure container offer, see [Create an Azure container offer in Azure Marketplace](create-azure-container-offer.md) for more information.
+- [Prepare technical assets](azure-container-technical-assets.md)
marketplace Marketplace Geo Availability Currencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-geo-availability-currencies.md
Individual prices (which, depending on how they were set, may have been influenc
For details on how to enter prices for specific offer types, refer to these articles:

- [Create an Azure application offer](create-new-azure-apps-offer.md)
-- [Create an Azure container offer](./create-azure-container-offer.md)
+- [Create an Azure container offer](azure-container-offer-setup.md)
- [Create an Azure virtual machine offer](azure-vm-create.md)
- [Create a consulting service offer](./create-consulting-service-offer.md)
- [Create a Dynamics 365 for Customer Engagement & Power Apps offer](dynamics-365-customer-engage-offer-setup.md)
marketplace Azure App Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/azure-app-apis.md
https://apidocs.microsoft.com/services/partneringestion/
## Next steps
-* Learn how to create an [Create an Azure VM technical asset](../create-azure-container-technical-assets.md)
-* Learn how to Create an [Azure Container offer](../create-azure-container-offer.md)
+* [Create an Azure Container technical asset](../azure-container-technical-assets.md)
+* [Create an Azure Container offer](../azure-container-offer-setup.md)
media-services Deploy Azure Stack Edge How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/deploy-azure-stack-edge-how-to.md
For Live Video Analytics, we will deploy via IoT Hub, but the Azure Stack Edge r
## Prerequisites

* Azure subscription to which you have [owner privileges](../../role-based-access-control/built-in-roles.md#owner).
-* An [Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-deploy-prep.md) resource
+* An [Azure Stack Edge](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-prep) resource
* [An IoT Hub](../../iot-hub/iot-hub-create-through-portal.md)
* A [service principal](./create-custom-azure-resource-manager-role-how-to.md#create-service-principal) for the Live Video Analytics module.
For Live Video Analytics, we will deploy via IoT Hub, but the Azure Stack Edge r
## Configuring Azure Stack Edge for using Live Video Analytics
-Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge computing device with network data transfer capabilities. Read more about [Azure Stack Edge and detailed setup instructions](../../databox-online/azure-stack-edge-deploy-prep.md). To get started, follow the instructions in the links below:
+Azure Stack Edge is a Hardware-as-a-Service solution and an AI-enabled edge computing device with network data transfer capabilities. Read more about [Azure Stack Edge and detailed setup instructions](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-prep). To get started, follow the instructions in the links below:
-* [Azure Stack Edge / Data Box Gateway Resource Creation](../../databox-online/azure-stack-edge-deploy-prep.md)
-* [Install and Setup](../../databox-online/azure-stack-edge-deploy-install.md)
-* [Connection and Activation](../../databox-online/azure-stack-edge-deploy-connect-setup-activate.md)
+* [Azure Stack Edge / Data Box Gateway Resource Creation](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-prep?tabs=azure-portal#create-a-new-resource)
+* [Install and Setup](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-install)
+* Connection and Activation
+
+ 1. [Connect](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-connect)
+ 2. [Configure network](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy)
+ 3. [Configure device](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time)
+ 4. [Configure certificates](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates)
+ 5. [Activate](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-activate)
* [Attach an IoT Hub to Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-deploy-configure-compute.md#configure-compute)

### Enable Compute Prerequisites on the Azure Stack Edge Local UI
Before you continue, make sure that:
* You've activated your Azure Stack Edge resource.
* You have access to a Windows client system running PowerShell 5.0 or later to access the Azure Stack Edge resource.
* To deploy a Kubernetes cluster, you need to configure your Azure Stack Edge resource via its [local web UI](../../databox-online/azure-stack-edge-deploy-connect-setup-activate.md#connect-to-the-local-web-ui-setup).
+
+ * Connect and configure:
+ 1. [Connect](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-connect)
+ 2. [Configure network](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy)
+ 3. [Configure device](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time)
+ 4. [Configure certificates](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates)
+ 5. [Activate](https://docs.microsoft.com/azure/databox-online/azure-stack-edge-gpu-deploy-activate)
* To enable the compute, in the local web UI of your device, go to the Compute page.
* Select a network interface that you want to enable for compute. Select Enable. Enabling compute results in the creation of a virtual switch on your device on that network interface.
Before you continue, make sure that:
* Select Apply - This operation should take about 2 minutes.

> [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/deploy-azure-stack-edge-how-to/azure-stack-edge-commercial.png" alt-text=" Compute Prerequisites on the Azure Stack Edge Local UI":::
+ > :::image type="content" source="../../databox-online/media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/compute-network-2.png" alt-text=" Compute Prerequisites on the Azure Stack Edge Local UI":::
* If DNS is not configured for the Kubernetes API and the Azure Stack Edge resource, you can update your Windows hosts file.
The container will read videos from exactly one folder within the container. If
## Next steps
-You can use the module to analyze live video streams by invoking direct methods. [Invoke the direct methods](get-started-detect-motion-emit-events-quickstart.md#use-direct-method-calls) on the module.
+You can use the module to analyze live video streams by invoking direct methods. [Invoke the direct methods](get-started-detect-motion-emit-events-quickstart.md#use-direct-method-calls) on the module.
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/common-questions-server-migration.md
You can throttle using NetQosPolicy. For example:
The AppNamePrefix to use in the NetQosPolicy is "GatewayWindowsService.exe". You could create a policy on the Azure Migrate appliance to throttle replication traffic from the appliance by creating a policy such as this one:
-New-NetQosPolicy -Name "ThrottleReplication" -AppPathNameMatchCondition "GatewayWindowsService.exe" -ThrottleRateActionBitsPerSecond 1 MB
+```powershell
+New-NetQosPolicy -Name "ThrottleReplication" -AppPathNameMatchCondition "GatewayWindowsService.exe" -ThrottleRateActionBitsPerSecond 1MB
+```
+
+To increase and decrease replication bandwidth on a schedule, you can use Windows scheduled tasks to scale the bandwidth as needed: one task decreases the bandwidth, and another task increases it.
+Note: You must create the NetQosPolicy outlined above before running the commands below.
+```powershell
+#Replace with an account part of the local Administrators group
+$User = "localVmName\userName"
+
+#Set the task names
+$ThrottleBandwidthTask = "ThrottleBandwidth"
+$IncreaseBandwidthTask = "IncreaseBandwidth"
+
+#Create a directory to host PowerShell scaling scripts
+if (!(Test-Path "C:\ReplicationBandwidthScripts"))
+{
+ New-Item -Path "C:\" -Name "ReplicationBandwidthScripts" -Type Directory
+}
+
+#Set your minimum bandwidth to be used during replication by changing the ThrottleRateActionBitsPerSecond parameter
+#Currently set to 10 MBps
+New-Item C:\ReplicationBandwidthScripts\ThrottleBandwidth.ps1
+Set-Content C:\ReplicationBandwidthScripts\ThrottleBandwidth.ps1 'Set-NetQosPolicy -Name "ThrottleReplication" -ThrottleRateActionBitsPerSecond 10MB'
+$ThrottleBandwidthScript = "C:\ReplicationBandwidthScripts\ThrottleBandwidth.ps1"
+
+#Set your maximum bandwidth to be used during replication by changing the ThrottleRateActionBitsPerSecond parameter
+#Currently set to 1000 MBps
+New-Item C:\ReplicationBandwidthScripts\IncreaseBandwidth.ps1
+Set-Content C:\ReplicationBandwidthScripts\IncreaseBandwidth.ps1 'Set-NetQosPolicy -Name "ThrottleReplication" -ThrottleRateActionBitsPerSecond 1000MB'
+$IncreaseBandwidthScript = "C:\ReplicationBandwidthScripts\IncreaseBandwidth.ps1"
+
+#Timezone set on the Azure Migrate Appliance (VM) will be used; change the frequency to meet your needs
+#In this example, the bandwidth is being throttled every weekday at 8:00 AM local time
+#The bandwidth is being increased every weekday at 6:00 PM local time
+$ThrottleBandwidthTrigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday,Tuesday,Wednesday,Thursday,Friday -At 8:00am
+$IncreaseBandwidthTrigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday,Tuesday,Wednesday,Thursday,Friday -At 6:00pm
+
+#Setting the task action to execute the scripts
+$ThrottleBandwidthAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-executionpolicy bypass -noprofile -file $ThrottleBandwidthScript"
+$IncreaseBandwidthAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-executionpolicy bypass -noprofile -file $IncreaseBandwidthScript"
+
+#Creating the Scheduled tasks
+Register-ScheduledTask -TaskName $ThrottleBandwidthTask -Trigger $ThrottleBandwidthTrigger -User $User -Action $ThrottleBandwidthAction -RunLevel Highest -Force
+Register-ScheduledTask -TaskName $IncreaseBandwidthTask -Trigger $IncreaseBandwidthTrigger -User $User -Action $IncreaseBandwidthAction -RunLevel Highest -Force
+```
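If you later want to stop the scheduled scaling altogether, a short sketch of the cleanup, using the task and policy names from the script above:

```powershell
# Remove the scheduled tasks and the QoS policy created above.
Unregister-ScheduledTask -TaskName "ThrottleBandwidth" -Confirm:$false
Unregister-ScheduledTask -TaskName "IncreaseBandwidth" -Confirm:$false
Remove-NetQosPolicy -Name "ThrottleReplication" -Confirm:$false
```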
## How is the data transmitted from on-prem environment to Azure? Is it encrypted before transmission?
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/prepare-for-migration.md
Required changes are summarized in the table.
**Action** | **VMware (agentless migration)** | **VMware (agent-based)/physical machines** | **Windows on Hyper-V**
--- | --- | --- | ---
-**Configure the SAN policy as Online All**<br/><br/> This ensures that Windows volumes in Azure VM use the same drive letter assignments as the on-premises VM. | Set automatically for machines running Windows Server 2008 R2 or later.<br/><br/> Configure manually for earlier operating systems. | Set automatically in most cases. | Configure manually.
+**Configure the SAN policy as Online All** | Set automatically for machines running Windows Server 2008 R2 or later.<br/><br/> Configure manually for earlier operating systems. | Set automatically in most cases. | Configure manually.
**Install Hyper-V Guest Integration** | [Install manually](prepare-windows-server-2003-migration.md#install-on-vmware-vms) on machines running Windows Server 2003. | [Install manually](prepare-windows-server-2003-migration.md#install-on-vmware-vms) on machines running Windows Server 2003. | [Install manually](prepare-windows-server-2003-migration.md#install-on-hyper-v-vms) on machines running Windows Server 2003.
**Enable Azure Serial Console**.<br/><br/>[Enable the console](/troubleshoot/azure/virtual-machines/serial-console-windows) on Azure VMs to help with troubleshooting. You don't need to reboot the VM. The Azure VM will boot by using the disk image. The disk image boot is equivalent to a reboot for the new VM. | Enable manually | Enable manually | Enable manually
**Connect after migration**<br/><br/> To connect after migration, there are a number of steps to take before you migrate. | [Set up](#prepare-to-connect-to-azure-windows-vms) manually. | [Set up](#prepare-to-connect-to-azure-windows-vms) manually. | [Set up](#prepare-to-connect-to-azure-windows-vms) manually.
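For the SAN policy row above, a minimal sketch of the manual configuration on the guest OS, assuming the Storage PowerShell module (Windows Server 2012 or later) is available:

```powershell
# Hedged sketch: set the SAN policy so all disks come online automatically.
# Requires the Storage module; run from an elevated PowerShell session.
Set-StorageSetting -NewDiskPolicy OnlineAll

# Confirm the change.
Get-StorageSetting
```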
migrate Quickstart Create Migrate Project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/quickstart-create-migrate-project.md
+
+ Title: Quickstart to create an Azure Migrate project using an Azure Resource Manager template.
+description: In this quickstart, you learn how to create an Azure Migrate project using an Azure Resource Manager template (ARM template).
Last updated : 04/23/2021+++++
+ - subject-armqs
+ - mode-arm
++
+# Quickstart: Create an Azure Migrate project using an ARM template
+
+This quickstart describes how to set up an Azure Migrate project by using an Azure Resource Manager template (ARM template). Azure Migrate provides a centralized hub to assess your on-premises servers, infrastructure, applications, and data, and to migrate them to Azure. Azure Migrate supports assessment and migration of on-premises VMware VMs, Hyper-V VMs, physical servers, other virtualized VMs, databases, web apps, and virtual desktops.
+
+This template creates an Azure Migrate project that you can then use to assess your on-premises servers, infrastructure, applications, and data, and to migrate them to Azure.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-migrate-project-create%2Fazuredeploy.json)
+
+## Prerequisites
+
+If you don't have an active Azure subscription, you can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/en-us/resources/templates/101-migrate-project-create/).
++++
+## Deploy the template
+
+To deploy the template, the **Subscription**, **Resource group**, **Project name**, and **Location** are required.
+
+1. To sign in to Azure and open the template, select the **Deploy to Azure** image.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-migrate-project-create%2Fazuredeploy.json)
+
+2. Select or enter the following values:
+
+ :::image type="content" source="media/quickstart-create-migrate-project-template/create-migrate-project.png" alt-text="Template to create an Azure Migrate project.":::
+
+ - **Subscription**: Select your Azure subscription.
+ - **Resource group**: Select an existing group or select **Create new** to add a group.
+ - **Location**: Defaults to the resource group's location and becomes unavailable after a
+ resource group is selected.
+ - **Migrate Project Name**: Provide a name for the Azure Migrate project.
+ - **Location**: Select the location where you want to deploy the Azure Migrate project and its resources.
+
+3. Click the **Review + create** button to start the deployment.
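As a hedged alternative to the portal steps above, the same template can be deployed with Azure PowerShell. The `migrateProjectName` parameter name is an assumption taken from the quickstart template, so verify it against the template before running:

```powershell
# Hedged sketch: deploy the quickstart template with Azure PowerShell.
# The 'migrateProjectName' parameter name is assumed from the template.
New-AzResourceGroup -Name "myResourceGroup" -Location "eastus"

New-AzResourceGroupDeployment -ResourceGroupName "myResourceGroup" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-migrate-project-create/azuredeploy.json" `
    -migrateProjectName "myMigrateProject"
```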
+
+## Validate the deployment
+
+To confirm that the Azure Migrate project was created, use the Azure portal.
++
+1. Navigate to Azure Migrate by searching for **Azure Migrate** in the search bar on the Azure portal.
+2. Click the **Discover, assess, and migrate** button under the Windows, Linux, and SQL Server tile.
+3. Select the **Azure subscription** and **Project** as per the values specified in the deployment.
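You can also confirm the deployment from PowerShell. The resource type string below is an assumption, so adjust it if your project uses a different type:

```powershell
# Hedged sketch: list Azure Migrate projects in the resource group.
Get-AzResource -ResourceGroupName "myResourceGroup" `
    -ResourceType "Microsoft.Migrate/migrateProjects"
```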
++
+## Next steps
+
+In this quickstart, you created an Azure Migrate project. To learn more about Azure Migrate and its capabilities,
+continue to the Azure Migrate overview.
+
+> [!div class="nextstepaction"]
+> [Azure Migrate overview](migrate-services-overview.md)
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-azure-cli.md Binary files differ
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics.md
You can use traffic analytics for NSGs in any of the following supported regions
Central India Central US China East 2
- China North 2
- East Asia
+ China North
+ China North 2
:::column-end::: :::column span="":::
- East US
+ East Asia
+ East US
East US 2 East US 2 EUAP France Central
You can use traffic analytics for NSGs in any of the following supported regions
Japan West Korea Central Korea South
- North Central US
- North Europe
+ North Central US
:::column-end::: :::column span="":::
- South Africa North
+ North Europe
+ South Africa North
South Central US South India Southeast Asia
You can use traffic analytics for NSGs in any of the following supported regions
Switzerland West UAE North UK South
- UK West
- USGov Arizona
- USGov Texas
+ UK West
+ USGov Arizona
:::column-end::: :::column span="":::
- USGov Virginia
+ USGov Texas
+ USGov Virginia
USNat East USNat West USSec East
The Log Analytics workspace must exist in the following regions:
Australia East Australia Southeast Brazil South
+ Brazil Southeast
Canada Central Central India Central US
- China East 2
+ China East 2
East Asia
- East US
:::column-end::: :::column span="":::
- East US 2
+ East US
+ East US 2
East US 2 EUAP
- France Central
- Japan East
- Korea Central
+ France Central
+ Germany West Central
+ Japan East
+ Japan West
+ Korea Central
North Central US North Europe
- South Africa North
- South Central US
:::column-end::: :::column span="":::
- Southeast Asia
+ Norway East
+ South Africa North
+ South Central US
+ Southeast Asia
Switzerland North Switzerland West UAE Central UAE North UK South
- UK West
- USGov Arizona
- USGov Virginia
- USNat East
+ UK West
:::column-end::: :::column span="":::
- USNat West
+ USGov Arizona
+ USGov Virginia
+ USNat East
+ USNat West
USSec East USSec West West Central US
The Log Analytics workspace must exist in the following regions:
:::column-end::: :::row-end:::
+> [!NOTE]
+> If NSGs are supported in a region but the Log Analytics workspace doesn't support that region for traffic analytics (per the lists above), you can use a Log Analytics workspace in any other supported region as a workaround.
+
## Prerequisites

### User access requirements
openshift Howto Create A Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-create-a-backup.md Binary files differ
openshift Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/tutorial-delete-cluster.md Binary files differ
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/connect-azure-cli.md Binary files differ
private-link Tutorial Private Endpoint Sql Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/tutorial-private-endpoint-sql-cli.md
-
+ Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Azure CLI' description: Use this tutorial to learn how to create an Azure SQL server with a private endpoint using Azure CLI
You used the virtual machine to test connectivity securely to the SQL server acr
As a next step, you may also be interested in the **Web app with private connectivity to Azure SQL database** architecture scenario, which connects a web application outside of the virtual network to the private endpoint of a database.

> [!div class="nextstepaction"]
-> [Web app with private connectivity to Azure SQL database](/azure/architecture/example-scenario/private-web-app/private-web-app)
+> [Web app with private connectivity to Azure SQL database](/azure/architecture/example-scenario/private-web-app/private-web-app)
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-powerbi.md
+
+ Title: Metadata and Lineage from Power BI
+description: This article describes the data lineage extraction from Power BI source.
+++++ Last updated : 03/30/2021+
+# How to get lineage from Power BI into Azure Purview
+
+This article elaborates on the data lineage aspects of the Power BI source in Azure Purview. The prerequisite to see data lineage in Purview for Power BI is to [scan your Power BI](../purview/register-scan-power-bi-tenant.md).
+
+## Common scenarios
+
+1. After the Power BI source is scanned, data consumers can perform root cause analysis of a report or dashboard from Purview. For any data discrepancy in a report, users can easily identify the upstream datasets and contact their owners if necessary.
+
+2. Data producers can see the downstream reports or dashboards consuming their dataset. Before making any changes to their datasets, the data owners can make informed decisions.
+
+3. Users can search by name, endorsement status, sensitivity label, owner, description, and other business facets to return the relevant Power BI artifacts.
+
+## Power BI artifacts in Azure Purview
+
+Once the [scan of your Power BI](../purview/register-scan-power-bi-tenant.md) is complete, the following Power BI artifacts will be inventoried in Purview:
+
+* Capacity
+* Workspaces
+* Dataflow
+* Dataset
+* Report
+* Dashboard
+
+The workspace artifacts will show lineage in the order Dataflow -> Dataset -> Report -> Dashboard.
++
+>[!Note]
+> * Column lineage and transformations inside of Power BI datasets are currently not supported.
+> * Limited information is currently shown for the data sources from which the Power BI dataflow or Power BI dataset is created. For example, for a SQL Server source of a Power BI dataset, only the server name is captured.
+
+## Lineage of Power BI artifacts in Azure Purview
+
+Users can search for a Power BI artifact by name, description, or other details to see relevant results. Under the asset's overview and properties tabs, basic details such as description and classification are shown. Under the lineage tab, asset relationships are shown with the upstream and downstream dependencies.
++
+## Next steps
+
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-link-azure-data-factory.md
Multiple Azure Data Factories can connect to a single Azure Purview Data Catalog
- **Disconnected**: The data factory has access to the catalog, but it's connected to another catalog. As a result, data lineage won't be reported to the catalog automatically.
- **CannotAccess**: The current user doesn't have access to the data factory, so the connection status is unknown.

>[!Note]
- >In order to view the Data Factory connections, you need to be assigned any one of Purview roles:
+ >To view the Data Factory connections, you need to be assigned any one of the following Purview roles. Role inheritance from Management group is **not supported**:
>- Contributor
>- Owner
>- Reader
Multiple Azure Data Factories can connect to a single Azure Purview Data Catalog
## Create new Data Factory connection

>[!Note]
->In order to add or remove the Data Factory connections, you need to be assigned any one of Purview roles:
+>To add or remove the Data Factory connections, you need to be assigned any one of the following Purview roles. Role inheritance from Management group is **not supported**:
>- Owner
>- User Access Administrator
>
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
You can deploy the Azure Defender extension using a range of methods. For detail
### [**Azure portal**](#tab/k8s-deploy-asc)
-### Use the "Quick fix" option from the Security Center recommendation
+### Use the fix button from the Security Center recommendation
A dedicated recommendation in Azure Security Center provides:

- **Visibility** about which of your clusters has the Defender for Kubernetes extension deployed
-- **A "Quick fix" option** to deploy it to those clusters without the extension
+- **Fix** button to deploy it to those clusters without the extension
1. From Azure Security Center's recommendations page, open the **Enable Azure Defender** security control.
A dedicated recommendation in Azure Security Center provides:
:::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Azure Security Center's recommendation for deploying the Azure Defender extension for Azure Arc enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png"::: > [!TIP]
- > Notice the Quick Fix icon in the actions column
+ > Notice the **Fix** icon in the actions column
1. Select the extension to see the details of the healthy and unhealthy resources - clusters with and without the extension.
A dedicated recommendation in Azure Security Center provides:
1. Select the relevant Log Analytics workspace and select **Remediate x resource**.
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/security-center-deploy-extension.gif" alt-text="Deploy Azure Defender extension for Azure Arc with Security Center's quick fix option.":::
+ :::image type="content" source="media/defender-for-kubernetes-azure-arc/security-center-deploy-extension.gif" alt-text="Deploy Azure Defender extension for Azure Arc with Security Center's fix option.":::
### [**Azure CLI**](#tab/k8s-deploy-cli)
security-center Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-sql-usage.md
Azure Defender for SQL alerts are available in Security Center's alerts page, th
[Learn more about managing and responding to alerts](security-center-managing-and-responding-alerts.md).
+## FAQ - Azure Defender for SQL servers on machines
+
+### If I enable this Azure Defender plan on my subscription, are all SQL servers on the subscription protected?
+
+No. To defend a SQL Server deployment on an Azure Virtual Machine, or a SQL Server running on an Azure Arc enabled machine, Azure Defender requires both of the following:
+
+- a Log Analytics agent on the machine
+- the relevant Log Analytics workspace to have the Azure Defender for SQL solution enabled
+
+The subscription *status*, shown in the SQL server page in the Azure portal, reflects the default workspace status and applies to all connected machines. Only the SQL servers on hosts with a Log Analytics agent reporting to that workspace are protected by Azure Defender.
++++ ## Next steps For related material, see the following article:
security-center Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/exempt-resource.md
Title: Exempt an Azure Security Center recommendation from a resource, subscript
description: Learn how to create rules to exempt security recommendations from subscriptions or management groups and prevent them from impacting your secure score Previously updated : 03/11/2021 Last updated : 04/21/2021
To keep track of how your users are exercising this capability, we've created an
- You'll find the ARM template in the [Azure Security Center GitHub repository](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation/Notify-ResourceExemption) - To deploy all the necessary components, [use this automated process](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Security-Center%2Fmaster%2FWorkflow%2520automation%2FNotify-ResourceExemption%2Fazuredeploy.json)
+## Use the inventory to find resources that have exemptions applied
+
+The asset inventory page of Azure Security Center provides a single page for viewing the security posture of the resources you've connected to Security Center. Learn more in [Explore and manage your resources with asset inventory](asset-inventory.md).
+
+The inventory page includes many filters to let you narrow the list of resources to the ones of most interest for any given scenario. One such filter is **Contains exemptions**. Use this filter to find all resources that have been exempted from one or more recommendations.
+
+
## Find recommendations with exemptions using Azure Resource Graph
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
We've released an improved version of the recommendations list to present more i
Now on the page you'll see:

1. The maximum score and current score for each security control.
-1. Icons replacing tags such as **Quick fix** and **Preview**.
+1. Icons replacing tags such as **Fix** and **Preview**.
1. A new column showing the [Policy initiative](security-policy-concept.md) related to each recommendation - visible when "Group by controls" is disabled.

:::image type="content" source="media/release-notes/recommendations-grid-enhancements.png" alt-text="Enhancements to Azure Security Center's recommendations page - March 2021" lightbox="media/release-notes/recommendations-grid-enhancements.png":::
The filters added this month provide options to refine the recommendations list
- **Environment** - View recommendations for your AWS, GCP, or Azure resources (or any combination)
- **Severity** - View recommendations according to the severity classification set by Security Center
-- **Response actions** - View recommendations according to the availability of Security Center response options: Quick fix, Deny, and Enforce
+- **Response actions** - View recommendations according to the availability of Security Center response options: Fix, Deny, and Enforce
> [!TIP]
> The response actions filter replaces the **Quick fix available (Yes/No)** filter.
>
> Learn more about each of these response options:
- > - [Quick fix remediation](security-center-remediate-recommendations.md#quick-fix-remediation)
+ > - [Fix button](security-center-remediate-recommendations.md#fix-button)
> - [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)

:::image type="content" source="./media/release-notes/added-recommendations-filters.png" alt-text="Recommendations grouped by security control" lightbox="./media/release-notes/added-recommendations-filters.png":::
security-center Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/secure-score-security-controls.md
An example of a preview recommendation:
## Improve your secure score
-To improve your secure score, remediate security recommendations from your recommendations list. You can remediate each recommendation manually for each resource, or by using the **Quick Fix!** option (when available) to apply a remediation for a recommendation to a group of resources quickly. For more information, see [Remediate recommendations](security-center-remediate-recommendations.md).
+To improve your secure score, remediate security recommendations from your recommendations list. You can remediate each recommendation manually for each resource, or by using the **Fix** option (when available) to resolve an issue on multiple resources quickly. For more information, see [Remediate recommendations](security-center-remediate-recommendations.md).
Another way to improve your score and ensure your users don't create resources that negatively impact your score is to configure the Enforce and Deny options on the relevant recommendations. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).
security-center Security Center Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-permissions.md
The following table displays roles and allowed actions in Security Center.
| Add/assign initiatives (including regulatory compliance standards) | - | - | - | - | ✔ |
| Enable / disable Azure Defender | - | ✔ | - | - | ✔ |
| Enable / disable auto-provisioning | - | ✔ | - | ✔ | ✔ |
-| Apply security recommendations for a resource</br> (and use [Quick Fix!](security-center-remediate-recommendations.md#quick-fix-remediation)) | - | - | ✔ | ✔ | ✔ |
+| Apply security recommendations for a resource</br> (and use [Fix](security-center-remediate-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
security-center Security Center Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-recommendations.md
Security Center analyzes the security state of your resources to identify potent
1. **Freshness interval** (where relevant)
1. **Count of exempted resources** - if exemptions exist for this recommendation, this shows the number of resources that have been exempted
1. **Description** - A short description of the issue
- 1. **Remediation steps** - A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with 'quick fix', you can select **View remediation logic** before applying the suggested fix to your resources.
+ 1. **Remediation steps** - A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources.
1. **Affected resources** - Your resources are grouped into tabs:
    - **Healthy resources** – Relevant resources which either aren't impacted or on which you've already remediated the issue.
    - **Unhealthy resources** – Resources which are still impacted by the identified issue.
security-center Security Center Remediate Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-remediate-recommendations.md
After reviewing all the recommendations, decide which one to remediate first. We
1. Once completed, a notification appears informing you whether the issue is resolved.
-## Quick fix remediation
+## Fix button
-To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a quick fix option.
+To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option.
-Quick fix helps you to quickly remediate a recommendation on multiple resources.
+**Fix** helps you quickly remediate a recommendation on multiple resources.
> [!TIP]
-> Quick fix solutions are only available for specific recommendations. To find the recommendations that have an available quick fix, use the **Response actions** filter for the list of recommendations:
+> The **Fix** feature is only available for specific recommendations. To find recommendations that have an available fix, use the **Response actions** filter for the list of recommendations:
>
-> :::image type="content" source="media/security-center-remediate-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the quick fix option":::
+> :::image type="content" source="media/security-center-remediate-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the Fix option":::
-To implement a quick fix solution:
+To implement a **Fix**:
-1. From the list of recommendations that have the **Quick Fix!** label, select a recommendation.
+1. From the list of recommendations that have the **Fix** action icon, :::image type="icon" source="media/security-center-remediate-recommendations/fix-icon.png" border="false":::, select a recommendation.
- [![Select Quick Fix!](media/security-center-remediate-recommendations/security-center-quick-fix-select.png)](media/security-center-remediate-recommendations/security-center-quick-fix-select.png#lightbox)
+ :::image type="content" source="./media/security-center-remediate-recommendations/security-center-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/security-center-remediate-recommendations/security-center-recommendations-fix-action.png#lightbox":::
1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Remediate**.
To implement a quick fix solution:
![Quick fix](./media/security-center-remediate-recommendations/security-center-quick-fix-view.png)

> [!NOTE]
- > The implications are listed in the grey box in the **Remediate resources** window that opens after clicking **Remediate**. They list what changes happen when proceeding with the quick fix remediation.
+ > The implications are listed in the grey box in the **Remediate resources** window that opens after clicking **Remediate**. They list what changes happen when proceeding with the **Fix**.
1. Insert the relevant parameters if necessary, and approve the remediation.
To implement a quick fix solution:
1. Once completed, a notification appears informing you if the remediation succeeded.
-## Quick fix remediation logging in the activity log <a name="activity-log"></a>
+## Fix actions logged to the activity log <a name="activity-log"></a>
The remediation operation uses a template deployment or REST PATCH API call to apply the configuration on the resource. These operations are logged in [Azure activity log](../azure-resource-manager/management/view-activity-logs.md).
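For example, a hedged sketch of reviewing those logged operations with the Az.Monitor module (the resource group name is a placeholder):

```powershell
# Hedged sketch: review recent operations that remediation may have generated.
Get-AzActivityLog -ResourceGroupName "myResourceGroup" -StartTime (Get-Date).AddDays(-7) |
    Select-Object EventTimestamp, OperationName, Status
```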
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/end-to-end.md
The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a
| [Azure Sentinel](../../sentinel/overview.md) | A scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response. |
| **Identity&nbsp;&&nbsp;Access&nbsp;Management** | |
| [Microsoft 365 Defender](https://docs.microsoft.com/microsoft-365/security/defender/microsoft-365-defender) | A unified pre- and post-breach enterprise defense suite that natively coordinates detection, prevention, investigation, and response across endpoints, identities, email, and applications to provide integrated protection against sophisticated attacks. |
-| | [Microsoft Defender for Endpoint](https://docs.microsoft.com/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint.md) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. |
+| | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. |
| | [Microsoft Defender for Identity](https://docs.microsoft.com/defender-for-identity/what-is) is a cloud-based security solution that leverages your on-premises Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions directed at your organization. |
| [Azure AD Identity Protection](../../active-directory/identity-protection/howto-identity-protection-configure-notifications.md) | Sends two types of automated notification emails to help you manage user risk and risk detections: Users at risk detected email and Weekly digest email. |
| **Infrastructure & Network** | |
The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a
- Understand your [shared responsibility in the cloud](shared-responsibility.md).

-- Understand the [isolation choices in the Azure cloud](isolation-choices.md) against both malicious and non-malicious users.
+- Understand the [isolation choices in the Azure cloud](isolation-choices.md) against both malicious and non-malicious users.
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/steps-secure-identity.md
Microsoft recommends adopting the following modern password policy based on [NIS
2. Disable expiration rules, which drive users to easily guessed passwords such as **Spring2019!**
3. Disable character-composition requirements and prevent users from choosing commonly attacked passwords, as they cause users to choose predictable character substitutions in passwords.
-You can use [PowerShell to prevent passwords from expiring](../../active-directory/authentication/concept-sspr-policy.md) for users if you create identities in Azure AD directly. Hybrid organizations should implement these policies using [domain group policy settings](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh994572(v%3dws.10)) or [Windows PowerShell](/powershell/module/addsadministration/set-addefaultdomainpasswordpolicy).
+You can use [PowerShell to prevent passwords from expiring](../../active-directory/authentication/concept-sspr-policy.md) for users if you create identities in Azure AD directly. Hybrid organizations should implement these policies using [domain group policy settings](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh994572(v%3dws.10)) or [Windows PowerShell](/powershell/module/activedirectory/set-addefaultdomainpasswordpolicy).
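For example, a hedged sketch of both approaches (the user and domain names are placeholders):

```powershell
# Hedged sketch: disable password expiration.
# Cloud-only accounts (AzureAD module):
Set-AzureADUser -ObjectId "user@contoso.com" -PasswordPolicies "DisablePasswordExpiration"

# On-premises AD DS (ActiveDirectory module): a MaxPasswordAge of 0 disables expiration.
Set-ADDefaultDomainPasswordPolicy -Identity "contoso.com" -MaxPasswordAge "00.00:00:00"
```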
### Protect against leaked credentials and add resilience against outages
sentinel Connect Azure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-key-vault.md
Previously updated : 03/07/2021 Last updated : 04/22/2021 # Connect Azure Key Vault diagnostics logs
To ingest Azure Key Vault logs into Azure Sentinel:
- You must have read and write permissions on the Azure Sentinel workspace.

-- To use Azure Policy to apply a log streaming policy to Azure Key Vault resources, you must be assigned the Owner role for the policy assignment scope.
+- To use Azure Policy to apply a log streaming policy to Azure Key Vault resources, you must have the Owner role for the policy assignment scope.
## Connect to Azure Key Vault
-This connector uses Azure Policy to apply a single Azure Key Vault log-streaming configuration to a collection of instances, defined as a scope. You can see the log types ingested from Azure Key Vault on the left side of connector page, under **Data types**.
+This connector uses Azure Policy to apply a single Azure Key Vault log streaming configuration to a collection of instances, defined as a scope. You can see the log types ingested from Azure Key Vault on the left side of the connector page, under **Data types**.
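For reference, an assignment like the one the wizard creates in the following steps can be sketched with Azure PowerShell. This is a hedged sketch: the policy display-name filter and the `logAnalytics` parameter name are assumptions, so verify them against the built-in definition in your tenant.

```powershell
# Hedged sketch - display name and parameter name are assumptions; verify them.
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -like "*Key Vault*Log Analytics*"
}

# DeployIfNotExists policies need a managed identity to remediate resources.
New-AzPolicyAssignment -Name "keyvault-logs-to-sentinel" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ logAnalytics = "<workspace-resource-id>" } `
    -AssignIdentity -Location "eastus"
```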
1. From the Azure Sentinel navigation menu, select **Data connectors**.

1. Select **Azure Key Vault** from the data connectors gallery, and then select **Open Connector Page** on the preview pane.
-1. In the **Configuration** section of the connector page, expand **Enable Diagnostics logs on Azure Key Vault**.
+1. In the **Configuration** section of the connector page, expand **Stream diagnostics logs from your Azure Key Vault at scale**.
1. Select the **Launch Azure Policy Assignment wizard** button.
This connector uses Azure Policy to apply a single Azure Key Vault log-streaming
1. In the **Basics** tab, click the button with the three dots under **Scope** to select your subscription (and, optionally, a resource group). You can also add a description.
- 1. In the **Parameters** tab, choose your Azure Sentinel workspace from the **Log Analytics workspace** drop-down list. The remaining drop-down fields represent the available diagnostic log types. Leave marked as ΓÇ£TrueΓÇ¥ all the log types you want to ingest.
+ 1. In the **Parameters** tab, leave the **Effect** and **Setting name** fields as is. Choose your Azure Sentinel workspace from the **Log Analytics workspace** drop-down list. The remaining drop-down fields represent the available diagnostic log types. Leave all the log types you want to ingest marked as "True".
- 1. To apply the policy on your existing resources, select the **Remediation** tab and mark the **Create a remediation task** check box.
+ 1. The policy will be applied to resources added in the future. To apply the policy on your existing resources as well, select the **Remediation** tab and mark the **Create a remediation task** check box.
1. In the **Review + create** tab, click **Create**. Your policy is now assigned to the scope you chose.

> [!NOTE]
>
-> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past two weeks. Once two weeks have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
+> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past 14 days. Once 14 days have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
## Next steps
-In this document, you learned how to connect Azure Key Vault to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+In this document, you learned how to use Azure Policy to connect Azure Key Vault to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
sentinel Connect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-kubernetes-service.md
Previously updated : 03/07/2021 Last updated : 04/22/2021 # Connect Azure Kubernetes Service diagnostics logs
To ingest AKS logs into Azure Sentinel:
- You must have read and write permissions on the Azure Sentinel workspace.

-- To use Azure Policy to apply a log streaming policy to AKS resources, you must be assigned the Owner role for the policy assignment scope.
+- To use Azure Policy to apply a log streaming policy to AKS resources, you must have the Owner role for the policy assignment scope.
## Connect to Azure Kubernetes Service
-This connector uses Azure Policy to apply a single Azure Kubernetes Service log-streaming configuration to a collection of instances, defined as a scope. You can see the log types ingested from Azure Kubernetes Service on the left side of connector page, under **Data types**.
+This connector uses Azure Policy to apply a single Azure Kubernetes Service log streaming configuration to a collection of resources, defined as a scope. You can see the log types ingested from Azure Kubernetes Service on the left side of the connector page, under **Data types**.
1. From the Azure Sentinel navigation menu, select **Data connectors**.

1. Select **Azure Kubernetes Service (AKS)** from the data connectors gallery, and then select **Open Connector Page** on the preview pane.
-1. In the **Configuration** section of the connector page, expand **Enable Diagnostics logs on Azure Kubernetes Service (AKS)**.
+1. In the **Configuration** section of the connector page, expand **Stream diagnostics logs from your Azure Kubernetes Service (AKS) at scale**.
1. Select the **Launch Azure Policy Assignment wizard** button.
This connector uses Azure Policy to apply a single Azure Kubernetes Service log-
1. In the **Basics** tab, click the button with the three dots under **Scope** to select your subscription (and, optionally, a resource group). You can also add a description.
- 1. In the **Parameters** tab, choose your Azure Sentinel workspace from the **Log Analytics workspace** drop-down list. The remaining drop-down fields represent the available diagnostic log types. Leave marked as ΓÇ£TrueΓÇ¥ all the log types you want to ingest.
+ 1. In the **Parameters** tab, leave the **Effect** and **Setting name** fields as is. Choose your Azure Sentinel workspace from the **Log Analytics workspace** drop-down list. The remaining drop-down fields represent the available diagnostic log types. Leave all the log types you want to ingest marked as "True".
- 1. To apply the policy on your existing resources, select the **Remediation** tab and mark the **Create a remediation task** check box.
+ 1. The policy will be applied to resources added in the future. To apply the policy on your existing resources as well, select the **Remediation** tab and mark the **Create a remediation task** check box.
1. In the **Review + create** tab, click **Create**. Your policy is now assigned to the scope you chose.

> [!NOTE]
>
-> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past two weeks. Once two weeks have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
+> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past 14 days. Once 14 days have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
## Next steps
-In this document, you learned how to connect Azure Kubernetes Service to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+In this document, you learned how to use Azure Policy to connect Azure Kubernetes Service to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
sentinel Connect Azure Sql Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-sql-logs.md
 Title: Connect Azure SQL database diagnostics and auditing logs to Azure Sentinel
-description: Learn how to connect Azure SQL database diagnostics logs and security auditing logs to Azure Sentinel.
+ Title: Connect all Azure SQL database diagnostics and auditing logs to Azure Sentinel
+description: Learn how to use Azure Policy to enforce the connection of Azure SQL database diagnostics logs and security auditing logs to Azure Sentinel.
Previously updated : 01/06/2021 Last updated : 04/21/2021

# Connect Azure SQL database diagnostics and auditing logs
-Azure SQL is a fully managed, Platform-as-a-Service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without user involvement.
+Azure SQL is a fully managed, Platform-as-a-Service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without requiring user involvement.
The Azure SQL database connector lets you stream your databases' auditing and diagnostic logs into Azure Sentinel, allowing you to continuously monitor activity in all your instances.
The Azure SQL database connector lets you stream your databases' auditing and di
- Connecting auditing logs allows you to stream security audit logs from all your Azure SQL databases at the server level.
-Learn more about [monitoring Azure SQL Databases](../azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md).
+Learn more about [Azure SQL Database diagnostic telemetry](../azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md) and about [Azure SQL server auditing](../azure-sql/database/auditing-overview.md).
## Prerequisites
Learn more about [monitoring Azure SQL Databases](../azure-sql/database/metrics-
- To connect auditing logs, you must have read and write permissions to Azure SQL Server audit settings.
+- To use Azure Policy to apply a log streaming policy to Azure SQL database and server resources, you must have the Owner role for the policy assignment scope.
+ ## Connect to Azure SQL database
-
+
+This connector uses Azure Policy to apply a single Azure SQL log streaming configuration to a collection of instances, defined as a scope. The Azure SQL Database connector sends two types of logs to Azure Sentinel: diagnostics logs (from SQL databases) and auditing logs (at the SQL server level). You can see the log types ingested from Azure SQL databases and servers on the left side of the connector page, under **Data types**.
+ 1. From the Azure Sentinel navigation menu, select **Data connectors**.
-1. Select **Azure SQL Database** from the data connectors gallery, and then select **Open Connector Page** on the preview pane.
+1. Select **Azure SQL Databases** from the data connectors gallery, and then select **Open Connector Page** on the preview pane.
1. In the **Configuration** section of the connector page, note the two categories of logs you can connect.

### Connect diagnostics logs
-1. Under **Diagnostics logs**, expand **Enable diagnostics logs on each of your Azure SQL databases manually**.
-
-1. Select the **Open Azure SQL >** link to open the **Azure SQL** resources blade.
-
-1. **(Optional)** To find your database resource easily, select **Add filter** on the filters bar at the top.
- 1. In the **Filter** drop-down list, select **Resource type**.
- 1. In the **Value** drop-down list, deselect **Select all**, then select **SQL database**.
- 1. Click **Apply**.
-
-1. Select the database resource whose diagnostics logs you want to send to Azure Sentinel.
-
- > [!NOTE]
- > For each database resource whose logs you want to collect, you must repeat this process, starting from this step.
-
-1. From the resource page of the database you selected, under **Monitoring** on the navigation menu, select **Diagnostic settings**.
+1. Expand **Stream diagnostics logs from your Azure SQL databases at scale**.
- 1. Select the **+ Add diagnostic setting** link at the bottom of the table.
+1. Select the **Launch Azure Policy Assignment wizard** button.
- 1. In the **Diagnostic setting** screen, enter a name in the **Diagnostic setting name** field.
-
- 1. In the **Destination details** column, mark the **Send to Log Analytics workspace** check box. Two new fields will be displayed below it. Choose the relevant **Subscription** and **Log Analytics workspace** (where Azure Sentinel resides).
+ The policy assignment wizard opens, ready to create a new policy called **Deploy - Configure diagnostic settings for SQL Databases to Log Analytics workspace**.
- 1. In the **Category details** column, mark the check boxes of the log and metric types you want to ingest. We recommend selecting all available types under both **log** and **metric**.
+ 1. In the **Basics** tab, click the button with the three dots under **Scope** to select your subscription (and, optionally, a resource group). You can also add a description.
- 1. Select **Save** at the top of the screen.
+ 1. In the **Parameters** tab, leave the first two settings as they are. Choose your Azure Sentinel workspace from the **Log Analytics workspace** drop-down list. The remaining drop-down fields represent the available diagnostic log types. Leave all the log types you want to ingest marked as "True".
-- Alternatively, you can use the supplied **PowerShell script** to connect your diagnostics logs.
- 1. Under **Diagnostics logs**, expand **Enable by PowerShell script**.
+ 1. The policy will be applied to resources added in the future. To apply the policy on your existing resources as well, select the **Remediation** tab and mark the **Create a remediation task** check box.
- 1. Copy the code block and paste in PowerShell.
+ 1. In the **Review + create** tab, click **Create**. Your policy is now assigned to the scope you chose.
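
If you need to connect a single database outside the policy flow, a manual per-resource diagnostic setting is still an option. The following is a hedged sketch using the Az.Monitor `Set-AzDiagnosticSetting` cmdlet; both resource IDs and the setting name are placeholders, not values from this article.

```azurepowershell
# Sketch only: manually route one database's diagnostics to the Sentinel workspace.
$databaseId  = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/databases/<db>"
$workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

Set-AzDiagnosticSetting -Name "sql-diag-to-sentinel" -ResourceId $databaseId `
    -WorkspaceId $workspaceId -Enabled $true
```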
### Connect audit logs
-1. Under **Auditing logs (preview)**, expand **Enable auditing logs on all Azure SQL databases (at the server level)**.
+1. Back in the connector page, expand **Stream auditing logs from your Azure SQL databases at the server level at scale**.
-1. Select the **Open Azure SQL >** link to open the **SQL servers** resource blade.
+1. Select the **Launch Azure Policy Assignment wizard** button.
-1. Select the SQL server whose auditing logs you want to send to Azure Sentinel.
+ The policy assignment wizard opens, ready to create a new policy called **Deploy - Configure auditing settings for SQL Databases to Log Analytics workspace**.
- > [!NOTE]
- > For each server resource whose logs you want to collect, you must repeat this process, starting from this step.
+ 1. In the **Basics** tab, click the button with the three dots under **Scope** to select your subscription (and, optionally, a resource group). You can also add a description.
-1. From the resource page of the server you selected, under **Security** on the navigation menu, select **Auditing**.
+ 1. In the **Parameters** tab, choose your Azure Sentinel workspace from the **Log Analytics workspace** drop-down list. Leave the **Effect** setting as is.
- 1. Move the **Enable Azure SQL Auditing** toggle to **ON**.
-
- 1. Under **Audit log destination**, select **Log Analytics (Preview)**.
-
- 1. From the list of workspaces that appears, choose your workspace (where Azure Sentinel resides).
-
- 1. Select **Save** at the top of the screen.
--- Alternatively, you can use the supplied **PowerShell script** to connect your diagnostics logs.
- 1. Under **Auditing logs**, expand **Enable by PowerShell script**.
-
- 1. Copy the code block and paste in PowerShell.
+ 1. The policy will be applied to resources added in the future. To apply the policy on your existing resources as well, select the **Remediation** tab and mark the **Create a remediation task** check box.
+ 1. In the **Review + create** tab, click **Create**. Your policy is now assigned to the scope you chose.
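
For a single SQL server outside the policy flow, a comparable result can be approximated with the Az.Sql auditing cmdlet. A minimal sketch, with placeholder names:

```azurepowershell
# Sketch only: enable server-level auditing to a Log Analytics workspace manually.
Set-AzSqlServerAudit -ResourceGroupName "<rg>" -ServerName "<sql-server>" `
    -LogAnalyticsTargetState Enabled `
    -WorkspaceResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
```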
> [!NOTE]
>
-> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past two weeks. Once two weeks have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
+> With this particular data connector, the connectivity status indicators (a color stripe in the data connectors gallery and connection icons next to the data type names) will show as *connected* (green) only if data has been ingested at some point in the past 14 days. Once 14 days have passed with no data ingestion, the connector will show as being disconnected. The moment more data comes through, the *connected* status will return.
## Next steps
-In this document, you learned how to connect Azure SQL database diagnostics and auditing logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
-- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+
+In this document, you learned how to use Azure Policy to connect Azure SQL database diagnostics and auditing logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
sentinel Connect Azure Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-storage-account.md
# Connect Azure Storage account diagnostics logs
+> [!IMPORTANT]
+> The Azure Storage account connector is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ Azure Storage account is a cloud solution for modern data storage scenarios. It contains all your data objects: blobs, files, queues, tables, and disks. This connector lets you stream your Azure Storage accounts' diagnostics logs into Azure Sentinel, allowing you to continuously monitor activity and detect security threats in all your Azure storage resources throughout your organization.
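
Note that storage diagnostic settings attach to the individual services (blob, file, queue, table) rather than to the account itself, so a manual equivalent of this connector targets a sub-resource such as `blobServices/default`. A hedged sketch with placeholder IDs:

```azurepowershell
# Sketch only: route one storage account's blob-service logs to the workspace.
$blobServiceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"
$workspaceId   = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

Set-AzDiagnosticSetting -Name "storage-to-sentinel" -ResourceId $blobServiceId `
    -WorkspaceId $workspaceId -Enabled $true
```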
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
## April 2021
+- [Azure Policy-based data connectors](#azure-policy-based-data-connectors)
- [Incident timeline (Public preview)](#incident-timeline-public-preview)
+### Azure Policy-based data connectors
+
+Azure Policy allows you to apply a common set of diagnostic settings to all (current and future) resources of a particular type whose logs you want to ingest into Azure Sentinel.
+
+Continuing our efforts to bring the power of [Azure Policy](../governance/policy/overview.md) to the task of data collection configuration, we are now offering another Azure Policy-enhanced data collector, for [Azure Storage account](connect-azure-storage-account.md) resources, released to public preview.
+
+Also, two of our in-preview connectors, for [Azure Key Vault](connect-azure-key-vault.md) and [Azure Kubernetes Service](connect-azure-kubernetes-service.md), have now been released to general availability (GA), joining our [Azure SQL Databases](connect-azure-sql-logs.md) connector.
+
### Incident timeline (Public preview)

The first tab on an incident details page is now the **Timeline**, which shows a timeline of alerts and bookmarks in the incident. An incident's timeline can help you understand the incident better and reconstruct the timeline of attacker activity across the related alerts and bookmarks.
For more information, see [Tutorial: Investigate incidents with Azure Sentinel](
- [Set workbooks to automatically refresh while in view mode](#set-workbooks-to-automatically-refresh-while-in-view-mode)
- [New detections for Azure Firewall](#new-detections-for-azure-firewall)
-- [Automation rules and incident-triggered playbooks](#automation-rules-and-incident-triggered-playbooks) (including all-new playbook documentation)
-- [New alert enrichments: enhanced entity mapping and custom details](#new-alert-enrichments-enhanced-entity-mapping-and-custom-details)
+- [Automation rules and incident-triggered playbooks (Public preview)](#automation-rules-and-incident-triggered-playbooks-public-preview) (including all-new playbook documentation)
+- [New alert enrichments: enhanced entity mapping and custom details (Public preview)](#new-alert-enrichments-enhanced-entity-mapping-and-custom-details-public-preview)
- [Print your Azure Sentinel workbooks or save as PDF](#print-your-azure-sentinel-workbooks-or-save-as-pdf)
- [Incident filters and sort preferences now saved in your session (Public preview)](#incident-filters-and-sort-preferences-now-saved-in-your-session-public-preview)
- [Microsoft 365 Defender incident integration (Public preview)](#microsoft-365-defender-incident-integration-public-preview)
Detections for Azure Firewalls are continuously added to the built-in template g
For more information, see [New detections for Azure Firewall in Azure Sentinel](https://techcommunity.microsoft.com/t5/azure-network-security/new-detections-for-azure-firewall-in-azure-sentinel/ba-p/2244958).
-### Automation rules and incident-triggered playbooks
+### Automation rules and incident-triggered playbooks (Public preview)
Automation rules are a new concept in Azure Sentinel, allowing you to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Azure Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
As mentioned above, playbooks can now be activated with the incident trigger in
Learn more about [playbooks' enhanced capabilities](automate-responses-with-playbooks.md), and how to [craft a response workflow](tutorial-respond-threats-playbook.md) using playbooks together with automation rules.
-### New alert enrichments: enhanced entity mapping and custom details
+### New alert enrichments: enhanced entity mapping and custom details (Public preview)
Enrich your alerts in two new ways to make them more usable and more informative.
Among the properties of resources that can be controlled by policies are the cre
Azure Policy-based connectors are now available for the following Azure
- [Azure Key Vault](connect-azure-key-vault.md) (public preview)
- [Azure Kubernetes Service](connect-azure-kubernetes-service.md) (public preview)
-- Azure SQL databases/servers (GA)
+- [Azure SQL databases/servers](connect-azure-sql-logs.md) (GA)
Customers will still be able to send the logs manually for specific instances and don't have to use the policy engine.
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-performance-improvements.md
This article describes how to use Azure Service Bus to optimize performance when
Throughout this article, the term "client" refers to any entity that accesses Service Bus. A client can take the role of a sender or a receiver. The term "sender" is used for a Service Bus queue client or a topic client that sends messages to a Service Bus queue or a topic. The term "receiver" refers to a Service Bus queue client or subscription client that receives messages from a Service Bus queue or a subscription.
+## Resource planning and considerations
+
+As with any technical resource, prudent planning is key to ensuring that Azure Service Bus provides the performance that your application expects. The right configuration or topology for your Service Bus namespaces depends on a host of factors involving your application architecture and how each Service Bus feature is used.
+
+### Pricing tier
+
+Service Bus offers various pricing tiers. Pick the tier appropriate for your application's requirements.
+
+ * **Standard tier** - Suited for developer/test environments or low throughput scenarios where the applications are **not sensitive** to throttling.
+
+ * **Premium tier** - Suited for production environments with varied throughput requirements where predictable latency and throughput are required. Additionally, Service Bus Premium namespaces can be [autoscaled](automate-update-messaging-units.md) to accommodate spikes in throughput.
+
+> [!NOTE]
+> If the right tier is not picked, there is a risk of overwhelming the Service Bus namespace, which may lead to [throttling](service-bus-throttling.md).
+>
+> Throttling does not lead to loss of data. Applications leveraging the Service Bus SDK can utilize the default retry policy to ensure that the data is eventually accepted by Service Bus.
+>
+
+### Calculating throughput for Premium
+
+Data sent to Service Bus is serialized to binary and then deserialized when received by the receiver. Thus, while applications think of **messages** as atomic units of work, Service Bus measures throughput in terms of bytes (or megabytes).
+
+When calculating the throughput requirement, consider the data that is being sent to Service Bus (ingress) and data that is received from Service Bus (egress).
+
+As expected, throughput is higher for smaller message payloads that can be batched together.
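
As a rough illustration of the arithmetic, the following sketch estimates how many messaging units (MUs) a premium namespace might need from an assumed message size and rate. The ~4 MB/second-per-MU figure comes from the benchmark described below; treat it as an observed average, not a guarantee, and budget egress the same way.

```azurepowershell
# Back-of-the-envelope estimate; the inputs are assumptions for illustration.
$messageSizeBytes  = 2KB      # average serialized payload
$messagesPerSecond = 5000     # expected peak ingress rate

$ingressMBps = ($messageSizeBytes * $messagesPerSecond) / 1MB
$musNeeded   = [math]::Ceiling($ingressMBps / 4)   # ~4 MB/s per MU (benchmark figure)

"Ingress: {0:N1} MB/s -> roughly {1} messaging unit(s)" -f $ingressMBps, $musNeeded
```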
+
+#### Benchmarks
+
+Here is a [GitHub sample](https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance) that you can run to see the expected throughput for your Service Bus namespace. In our [benchmark tests](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Premium-Messaging-How-fast-is-it/ba-p/370722), we observed approximately 4 MB/second per messaging unit (MU) of ingress and egress.
+
+The benchmarking sample doesn't use any advanced features, so the throughput your applications observe will be different based on your scenarios.
+
+#### Compute considerations
+
+Using certain Service Bus features may require compute utilization that can decrease the expected throughput. Some of these features are:
+
+1. Sessions.
+2. Fanning out to multiple subscriptions on a single topic.
+3. Running many filters on a single subscription.
+4. Scheduled messages.
+5. Deferred messages.
+6. Transactions.
+7. De-duplication & look back time window.
+8. Forward to (forwarding from one entity to another).
+
+If your application leverages any of the above features and you are not receiving the expected throughput, you can review the **CPU usage** metrics and consider scaling up your Service Bus Premium namespace.
+
+You can also utilize Azure Monitor to [automatically scale the Service Bus namespace](automate-update-messaging-units.md).
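
If you'd rather scale on demand than configure autoscaling, messaging units can also be changed directly. A minimal sketch, assuming the Az.ServiceBus module and placeholder names:

```azurepowershell
# Sketch only: scale a premium namespace to 2 messaging units.
Set-AzServiceBusNamespace -ResourceGroupName "<rg>" -Name "<premium-namespace>" `
    -Location "<region>" -SkuName Premium -SkuCapacity 2
```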
+
+### Sharding across namespaces
+
+While scaling up the compute (messaging units) allocated to the namespace is an easier solution, it **may not** provide a linear increase in throughput. This is because Service Bus internals (storage, network, and so on) may limit the throughput.
+
+The cleaner solution in this case is to shard your entities (queues and topics) across different Service Bus Premium namespaces. You may also consider sharding across different namespaces in different Azure regions.
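
As a sketch of what sharding can look like operationally, the following distributes a set of queues round-robin across two premium namespaces. The namespace and queue names and the placement scheme are illustrative assumptions; real deployments more often shard by tenant or workload.

```azurepowershell
# Sketch only: round-robin queues across two premium namespaces.
$namespaces = "contoso-sb-westus2", "contoso-sb-eastus"
$queues     = "orders", "payments", "shipments", "returns"

for ($i = 0; $i -lt $queues.Count; $i++) {
    $ns = $namespaces[$i % $namespaces.Count]
    New-AzServiceBusQueue -ResourceGroupName "<rg>" -NamespaceName $ns -Name $queues[$i]
}
```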
+
## Protocols

Service Bus enables clients to send and receive messages via one of three protocols:
service-fabric Cluster Security Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/cluster-security-certificate-management.md
At this point, a certificate exists in the vault, ready for consumption. Onward
### Certificate provisioning

We mentioned a 'provisioning agent', which is the entity that retrieves the certificate, inclusive of its private key, from the vault and installs it onto each of the hosts of the cluster. (Recall that Service Fabric does not provision certificates.) In our context, the cluster will be hosted on a collection of Azure VMs and/or virtual machine scale sets. In Azure, provisioning a certificate from a vault to a VM/VMSS can be achieved with the following mechanisms - assuming, as above, that the provisioning agent was previously granted 'get' permissions on the vault by the vault owner:

 - ad-hoc: an operator retrieves the certificate from the vault (as pfx/PKCS #12 or pem) and installs it on each node
- - as a virtual machine scale set 'secret' during deployment: the Compute service retrieves, using its first party identity on behalf of the operator, the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#certificates)); note this allows the provisioning of versioned secrets only
- - using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md); this allows the provisioning of certificates using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](/virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the vault(s) containing the observed certificates.
+ - as a virtual machine scale set 'secret' during deployment: the Compute service retrieves, using its first party identity on behalf of the operator, the certificate from a template-deployment-enabled vault and installs it on each node of the virtual machine scale set ([like so](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq#certificates)); note this allows the provisioning of versioned secrets only
+ - using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md); this allows the provisioning of certificates using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](/azure/virtual-machines/security-policy#managed-identities-for-azure-resources), an identity that has been granted access to the vault(s) containing the observed certificates.
-The ad-hoc mechanism is not recommended for multiple reasons, ranging from security to availability, and won't be discussed here further; for details, refer to [certificates in virtual machine scale sets](/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#certificates).
+The ad-hoc mechanism is not recommended for multiple reasons, ranging from security to availability, and won't be discussed here further; for details, refer to [certificates in virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq#certificates).
The VMSS-/Compute-based provisioning presents security and availability advantages, but it also presents restrictions. It requires - by design - declaring certificates as versioned secrets, which makes it suitable only for clusters secured with certificates declared by thumbprint. In contrast, the Key Vault VM extension-based provisioning will always install the latest version of each observed certificate, which makes it suitable only for clusters secured with certificates declared by subject common name. To emphasize, do not use an autorefresh provisioning mechanism (such as the KVVM extension) for certificates declared by instance (that is, by thumbprint); the risk of losing availability is considerable.
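
For clusters secured with certificates declared by subject common name, wiring the Key Vault VM extension onto the scale set might look like the following sketch. The settings keys follow the extension's documented schema, but the vault URL, certificate store, and polling interval are illustrative assumptions.

```azurepowershell
# Sketch only: add the Key Vault VM extension to a scale set so that observed
# certificates are refreshed periodically. Names and URLs are placeholders.
$vmss = Get-AzVmss -ResourceGroupName "<rg>" -VMScaleSetName "<vmss>"

$settings = @{
    secretsManagementSettings = @{
        pollingIntervalInS       = "3600"
        certificateStoreName     = "MY"
        certificateStoreLocation = "LocalMachine"
        observedCertificates     = @("https://<vault>.vault.azure.net/secrets/<cluster-cert>")
    }
}

Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "KVVMExtension" `
    -Publisher "Microsoft.Azure.KeyVault" -Type "KeyVaultForWindows" `
    -TypeHandlerVersion "1.0" -Setting $settings

Update-AzVmss -ResourceGroupName "<rg>" -VMScaleSetName "<vmss>" -VirtualMachineScaleSet $vmss
```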
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Title: Replicate Azure VMs running in Proximity Placement Groups
-description: Learn how to replicate Azure VMs running in Proximity Placement Groups using Azure Site Recovery.
+ Title: Replicate Azure VMs running in a proximity placement group
+description: Learn how to replicate Azure VMs running in proximity placement groups by using Azure Site Recovery.
Last updated 02/11/2021
-# Replicate Azure virtual machines running in Proximity Placement Groups to another region
+# Replicate virtual machines running in a proximity placement group to another region
-This article describes how to replicate, failover and failback virtual machines running in a Proximity Placement Group to a secondary region.
+This article describes how to replicate, fail over, and fail back Azure virtual machines (VMs) running in a proximity placement group to a secondary region.
-[Proximity Placement Groups](../virtual-machines/windows/proximity-placement-groups-portal.md) is an Azure Virtual Machine logical grouping capability that you can use to decrease the inter-VM network latency associated with your applications. When the VMs are deployed within the same proximity placement group, they are physically located as close as possible to each other. Proximity placement groups are particularly useful to address the requirements of latency-sensitive workloads.
+[Proximity placement groups](../virtual-machines/windows/proximity-placement-groups-portal.md) are a logical grouping capability in Azure Virtual Machines. You can use them to decrease the inter-VM network latency associated with your applications.
-## Disaster recovery with Proximity Placement Groups
+When VMs are deployed within the same proximity placement group, they're physically located as close as possible to each other. Proximity placement groups are useful to address the requirements of latency-sensitive workloads.
-In a typical scenario, you may have your virtual machines running in a proximity placement group to avoid the network latency between the various tiers of your application. While this can provide your application optimal network latency, you would like to protect these applications using Site Recovery for any region level failure. Site Recovery replicates the data from one region to another Azure region and brings up the machines in disaster recovery region in an event of failover.
+## Disaster recovery with proximity placement groups
+
+In a typical scenario, you might have your virtual machines running in a proximity placement group to avoid the network latency between the tiers of your application. Although this approach can provide optimal network latency for your application, you might want to protect these applications by using Azure Site Recovery for any region-level failure.
+
+Site Recovery replicates the data from one Azure region to another region. It brings up the machines in the disaster recovery (DR) region in the event of a failover.
## Considerations

-- The best effort will be to failover/failback the virtual machines into a proximity placement group. However, if VM is unable to be brought up inside Proximity Placement during failover/failback, then failover/failback will still happen, and virtual machines will be created outside of a proximity placement group.
-- If an Availability Set is pinned to a Proximity Placement Group and during failover/failback VMs in the availability set have an allocation constraint, then the virtual machines will be created outside of both the availability set and proximity placement group.
-- Site Recovery for Proximity Placement Groups is not supported for unmanaged disks.
+- The best effort will be to fail over and fail back the virtual machines into a proximity placement group. If you can't bring up the VMs inside a proximity placement group, the failover and failback will still happen, but VMs will be created outside the proximity placement group.
+- If an availability set is pinned to a proximity placement group and VMs in the availability set have an allocation constraint during failback or failover, the VMs will be created outside both the availability set and the proximity placement group.
+- Site Recovery for proximity placement groups is not supported for unmanaged disks.
> [!NOTE]
-> Azure Site Recovery does not support failback from managed disks for Hyper-V to Azure scenarios. Hence, failback from Proximity Placement Group in Azure to Hyper-V is not supported.
+> Azure Site Recovery does not support failback from managed disks for scenarios of moving from Hyper-V to Azure. Failback from proximity placement groups in Azure to Hyper-V is not supported.
+
+## Set up disaster recovery for VMs in proximity placement groups via the Azure portal
+
+### Azure to Azure
+
+You can enable replication for a virtual machine through the VM's disaster recovery page. Or you can go to a pre-created vault, open the Site Recovery section, and enable replication from there. Let's look at how you can set up Site Recovery for VMs inside a proximity placement group through both approaches.
+
+To select a proximity placement group in the DR region while enabling replication through the infrastructure as a service (IaaS) VM DR page:
-## Set up Disaster Recovery for VMs in Proximity Placement Groups via Portal
+1. Go to the virtual machine. On the left pane, under **Operations**, select **Disaster Recovery**.
+2. On the **Basics** tab, choose the DR region that you want to replicate the VM to. Go to **Advanced Settings**.
+3. You can see the proximity placement group of your VM and the option to select a proximity placement group in the DR region. Site Recovery also gives you the option of using a new proximity placement group that it creates for you, if you choose to use this default option.
+
+ Choose the proximity placement group that you want. Then select **Review + Start replication**.
-### Azure to Azure via Portal
+ :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-group-a2a-1.png" alt-text="Screenshot that shows advanced settings to enable replication.":::
-You can choose to enable replication for a virtual machine through the VM disaster recovery page or by going to a pre-created vault and navigating to the Site Recovery section and then enabling replication. Let's look at how Site Recovery can be set up for VMs inside a PPG through both approaches:
+To select a proximity placement group in the DR region while enabling replication through the vault page:
-- How to select PPG in the DR region while enabling replication through the IaaS VM DR blade:
- 1. Go to the virtual machine. On the left hand side blade, under 'Operations', select 'Disaster Recovery'
- 2. In the 'Basics' tab, choose the DR region that you would like to replicate the VM to. Go to 'Advanced Settings'
- 3. Here, you can see the Proximity Placement Group of your VM and the option to select a PPG in the DR region. Site Recovery also gives you the option of using a new Proximity Placement Group that it creates for you if you choose to use this default option. You are free to choose the Proximity Placement Group you want and then go to 'Review + Start replication' and then finally enable replication.
+1. Go to your Recovery Services vault, and then go to the **Site Recovery** tab.
+2. Select **+ Enable Site Recovery**. Then select **1: Enable Replication** under **Azure virtual machines** (because you want to replicate an Azure VM).
+3. Fill in the required fields on the **Source** tab, and then select **Next**.
+4. Select the list of VMs that you want to enable replication for on the **Virtual machines** tab, and then select **Next**.
+5. You can see the option to select a proximity placement group in the DR region. Site Recovery also gives you the option of using a new proximity placement group that it creates for you, if you choose to use this default option.
- :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-group-a2a-1.png" alt-text="Enable replication.":::
+ Choose the proximity placement group that you want, and then proceed to enabling replication.
-- How to select PPG in the DR region while enabling replication through the vault blade:
- 1. Go to your Recovery Services Vault and go to the Site Recovery tab
- 2. Click on '+ Enable Site Recovery' and then select '1: Enable Replication' under Azure virtual machines (as you are looking to replicate an Azure VM)
- 3. Fill in the required fields in the 'Source' tab and click 'Next'
- 4. Select the list of VMs you want to enable replication for in the 'Virtual machines' tab and click 'Next'
- 5. Here, you can see the option to select a PPG in the DR region. Site Recovery also gives you the option of using a new PPG that it creates for you if you choose to use this default option. You are free to choose the PPG you want and then proceed to enabling replication.
+ :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-group-a2a-2.png" alt-text="Screenshot that shows selections for customizing target settings.":::
- :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-group-a2a-2.png" alt-text="Enable replication via vault.":::
+You can easily update your selection of a proximity placement group in the DR region after replication has been enabled for the VM:
-Note that you can easily update the PPG selection in the DR region after replication has been enabled for the VM.
+1. Go to the virtual machine. On the left pane, under **Operations**, select **Disaster Recovery**.
+2. Go to the **Compute and Network** pane and select **Edit**.
+3. You can see the options to edit multiple target settings, including the target proximity placement group. Choose the proximity placement group that you want the VM to fail over into, and then select **Save**.
-1. Go to the virtual machine and on the left side blade, under 'Operations', select 'Disaster Recovery'
-2. Go to the 'Compute and Network' blade and click on 'Edit' at the top of the page
-3. You can see the options to edit multiple target settings, including target PPG. Choose the PPG you would like the VM to failover into and click 'Save'.
+### VMware to Azure
-### VMware to Azure via Portal
+You can set up a proximity placement group for the target VM after you enable replication for the VM. Make sure that you separately create the proximity placement group in the target region according to your requirement.
-Proximity placement group for the target VM can be set up after enabling replication for the VM. Please ensure you separately create the PPG in the target region according to your requirement. Subsequently, you can easily update the PPG selection in the DR region after replication has been enabled for the VM.
+You can easily update your selection of a proximity placement group in the DR region after replication has been enabled for the VM:
-1. Select the virtual machine from the vault and on the left side blade, under 'Operations', select 'Disaster Recovery'
-2. Go to the 'Compute and Network' blade and click on 'Edit' at the top of the page
-3. You can see the options to edit multiple target settings, including target PPG. Choose the PPG you would like the VM to failover into and click 'Save'.
+1. Select the virtual machine from the vault. On the left pane, under **Operations**, select **Disaster Recovery**.
+2. Go to the **Compute and Network** pane and select **Edit**.
+3. You can see the options to edit multiple target settings, including the target proximity placement group. Choose the proximity placement group that you want the VM to fail over into, and then select **Save**.
- :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-groups-update-v2a.png" alt-text="Update PPG V2A":::
+ :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-groups-update-v2a.png" alt-text="Screenshot that shows compute and network selections for VMware to Azure.":::
-### Hyper-V to Azure via Portal
+### Hyper-V to Azure
-Proximity placement group for the target VM can be set up after enabling replication for the VM. Please ensure you separately create the PPG in the target region according to your requirement. Subsequently, you can easily update the PPG selection in the DR region after replication has been enabled for the VM.
+You can set up a proximity placement group for the target VM after you enable replication for the VM. Make sure that you separately create the proximity placement group in the target region according to your requirement.
-1. Select the virtual machine from the vault and on the left side blade, under 'Operations', select 'Disaster Recovery'
-2. Go to the 'Compute and Network' blade and click on 'Edit' at the top of the page
-3. You can see the options to edit multiple target settings, including target PPG. Choose the PPG you would like the VM to failover into and click 'Save'.
+You can easily update your selection of a proximity placement group in the DR region after replication has been enabled for the VM:
- :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-groups-update-h2a.png" alt-text="Update PPG H2A":::
+1. Select the virtual machine from the vault. On the left pane, under **Operations**, select **Disaster Recovery**.
+2. Go to the **Compute and Network** pane and select **Edit**.
+3. You can see the options to edit multiple target settings, including the target proximity placement group. Choose the proximity placement group that you want the VM to fail over into, and then select **Save**.
-## Set up Disaster Recovery for VMs in Proximity Placement Groups via PowerShell
+ :::image type="content" source="media/how-to-enable-replication-proximity-placement-groups/proximity-placement-groups-update-h2a.png" alt-text="Screenshot that shows compute and network selections for Hyper-V to Azure.":::
+
+## Set up disaster recovery for VMs in proximity placement groups via PowerShell
### Prerequisites
-1. Make sure that you have the Azure PowerShell Az module. If you need to install or upgrade Azure PowerShell, follow this [Guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
-2. The minimum Azure PowerShell Az version should be 4.1.0. To check the current version, use the below command -
+- Make sure that you have the Azure PowerShell Az module. If you need to install or upgrade Azure PowerShell, follow the [guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
+- The minimum Azure PowerShell Az version should be 4.1.0. To check the current version, use the following command:
```azurepowershell
Get-InstalledModule -Name Az
```
-### Set up Site Recovery for Virtual Machines in Proximity Placement Group
- > [!NOTE]
-> Make sure that you have the unique ID of target Proximity Placement Group handy. If you're creating a new Proximity Placement Group, then check the command [here](../virtual-machines/windows/proximity-placement-groups.md#create-a-proximity-placement-group) and if you're using an existing Proximity Placement Group, then use the command [here](../virtual-machines/windows/proximity-placement-groups.md#list-proximity-placement-groups).
+> Make sure that you have the unique ID of the target proximity placement group handy. The command that you use depends on whether you're [creating a new proximity placement group](../virtual-machines/windows/proximity-placement-groups.md#create-a-proximity-placement-group) or [using an existing proximity placement group](../virtual-machines/windows/proximity-placement-groups.md#list-proximity-placement-groups).
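
For example, creating a new group and capturing its ID for the `-RecoveryProximityPlacementGroupId` parameter used below might look like this sketch (the group name is illustrative; the resource group matches the article's example):

```azurepowershell
# Sketch only: create (or fetch) the target proximity placement group and keep its ID.
$targetPpg = New-AzProximityPlacementGroup -ResourceGroupName "a2ademorecoveryrg" `
    -Name "dr-ppg" -Location "West US 2"

# For an existing group instead:
# $targetPpg = Get-AzProximityPlacementGroup -ResourceGroupName "a2ademorecoveryrg" -Name "dr-ppg"

$targetPpg.Id   # pass this as -RecoveryProximityPlacementGroupId
```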
### Azure to Azure
-1. [Sign in](./azure-to-azure-powershell.md#sign-in-to-your-microsoft-azure-subscription) to your account and set your subscription.
-2. Get the details of the virtual machine youΓÇÖre planning to replicate as mentioned [here](./azure-to-azure-powershell.md#get-details-of-the-virtual-machine-to-be-replicated).
-3. [Create](./azure-to-azure-powershell.md#create-a-recovery-services-vault) your recovery services vault and [set](./azure-to-azure-powershell.md#set-the-vault-context) the vault context.
-4. Prepare the vault to start replication virtual machine. This involves creating a [service fabric object](./azure-to-azure-powershell.md#create-a-site-recovery-fabric-object-to-represent-the-primary-source-region) for both primary and recovery regions.
-5. [Create](./azure-to-azure-powershell.md#create-a-site-recovery-protection-container-in-the-primary-fabric) a Site Recovery protection container, for both the primary and recovery fabrics.
-6. [Create](./azure-to-azure-powershell.md#create-a-replication-policy) a replication policy.
-7. Create a protection container mapping between primary and recovery protection container using [these](./azure-to-azure-powershell.md#create-a-protection-container-mapping-between-the-primary-and-recovery-protection-container) steps and a protection container mapping for failback as mentioned [here](./azure-to-azure-powershell.md#create-a-protection-container-mapping-for-failback-reverse-replication-after-a-failover).
-8. Create cache storage account by following [these](./azure-to-azure-powershell.md#create-cache-storage-account-and-target-storage-account) steps.
-9. Create the required network mappings as mentioned [here](./azure-to-azure-powershell.md#create-network-mappings).
-10. To replicate Azure virtual machine with managed disks, use the below PowerShell cmdlet -
-
-```azurepowershell
-#Get the resource group that the virtual machine must be created in when failed over.
-$RecoveryRG = Get-AzResourceGroup -Name "a2ademorecoveryrg" -Location "West US 2"
-
-#Specify replication properties for each disk of the VM that is to be replicated (create disk replication configuration)
-#Make sure to replace the variables $OSdiskName with OS disk name.
-
-#OS Disk
-$OSdisk = Get-AzDisk -DiskName $OSdiskName -ResourceGroupName "A2AdemoRG"
-$OSdiskId = $OSdisk.Id
-$RecoveryOSDiskAccountType = $OSdisk.Sku.Name
-$RecoveryReplicaDiskAccountType = $OSdisk.Sku.Name
-
-$OSDiskReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $OSdiskId -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryOSDiskAccountType
-
-#Make sure to replace the variables $datadiskName with data disk name.
+1. [Sign in to your account and set your subscription](./azure-to-azure-powershell.md#sign-in-to-your-microsoft-azure-subscription).
+2. [Get the details of the virtual machine that you're planning to replicate](./azure-to-azure-powershell.md#get-details-of-the-virtual-machine-to-be-replicated).
+3. [Create your recovery services vault](./azure-to-azure-powershell.md#create-a-recovery-services-vault) and [set the vault context](./azure-to-azure-powershell.md#set-the-vault-context).
+4. Prepare the vault to start replicating the virtual machine. This step involves [creating a Site Recovery fabric object](./azure-to-azure-powershell.md#create-a-site-recovery-fabric-object-to-represent-the-primary-source-region) for both the primary and recovery regions.
+5. [Create a Site Recovery protection container](./azure-to-azure-powershell.md#create-a-site-recovery-protection-container-in-the-primary-fabric) for both the primary and recovery fabrics.
+6. [Create a replication policy](./azure-to-azure-powershell.md#create-a-replication-policy).
+7. [Create a protection container mapping between the primary and recovery protection containers](./azure-to-azure-powershell.md#create-a-protection-container-mapping-between-the-primary-and-recovery-protection-container), and [create a protection container mapping for failback](./azure-to-azure-powershell.md#create-a-protection-container-mapping-for-failback-reverse-replication-after-a-failover).
+8. [Create a cache storage account](./azure-to-azure-powershell.md#create-cache-storage-account-and-target-storage-account).
+9. [Create the required network mappings](./azure-to-azure-powershell.md#create-network-mappings).
+10. Replicate an Azure virtual machine with managed disks by using the following PowerShell cmdlet:
+
+ ```azurepowershell
+ #Get the resource group that the virtual machine must be created in when it's failed over.
+ $RecoveryRG = Get-AzResourceGroup -Name "a2ademorecoveryrg" -Location "West US 2"
+
+ #Specify replication properties for each disk of the VM that will be replicated (create disk replication configuration).
+ #Make sure to replace the variable $OSdiskName with the OS disk name.
+
+ #OS Disk
+ $OSdisk = Get-AzDisk -DiskName $OSdiskName -ResourceGroupName "A2AdemoRG"
+ $OSdiskId = $OSdisk.Id
+ $RecoveryOSDiskAccountType = $OSdisk.Sku.Name
+ $RecoveryReplicaDiskAccountType = $OSdisk.Sku.Name
+
+ $OSDiskReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $OSdiskId -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryOSDiskAccountType
+
+ #Make sure to replace the variable $datadiskName with the data disk name.
+
+ #Data disk
+ $datadisk = Get-AzDisk -DiskName $datadiskName -ResourceGroupName "A2AdemoRG"
+ $datadiskId1 = $datadisk[0].Id
+ $RecoveryReplicaDiskAccountType = $datadisk[0].Sku.Name
+ $RecoveryTargetDiskAccountType = $datadisk[0].Sku.Name
-#Data disk
-$datadisk = Get-AzDisk -DiskName $datadiskName -ResourceGroupName "A2AdemoRG"
-$datadiskId1 = $datadisk[0].Id
-$RecoveryReplicaDiskAccountType = $datadisk[0].Sku.Name
-$RecoveryTargetDiskAccountType = $datadisk[0].Sku.Name
+ $DataDisk1ReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $datadiskId1 -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryTargetDiskAccountType
-$DataDisk1ReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $datadiskId1 -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryTargetDiskAccountType
+ #Create a list of disk replication configuration objects for the disks of the virtual machine that will be replicated.
-#Create a list of disk replication configuration objects for the disks of the virtual machine that are to be replicated.
+ $diskconfigs = @()
+ $diskconfigs += $OSDiskReplicationConfig, $DataDisk1ReplicationConfig
-$diskconfigs = @()
-$diskconfigs += $OSDiskReplicationConfig, $DataDisk1ReplicationConfig
+ #Start replication by creating a replication protected item. Use a GUID for the name of the replication protected item to ensure uniqueness of the name.
-#Start replication by creating replication protected item. Using a GUID for the name of the replication protected item to ensure uniqueness of name.
+ $TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id
+ ```
-$TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id
-```
+ When you're enabling replication for multiple data disks, use the following PowerShell cmdlet:
-When enabling replication for multiple data disks, use the below PowerShell cmdlet -
+ ```azurepowershell
+ #Get the resource group that the virtual machine must be created in when it's failed over.
+ $RecoveryRG = Get-AzResourceGroup -Name "a2ademorecoveryrg" -Location "West US 2"
-```azurepowershell
-#Get the resource group that the virtual machine must be created in when failed over.
-$RecoveryRG = Get-AzResourceGroup -Name "a2ademorecoveryrg" -Location "West US 2"
+ #Specify replication properties for each disk of the VM that will be replicated (create disk replication configuration).
+ #Make sure to replace the variable $OSdiskName with the OS disk name.
-#Specify replication properties for each disk of the VM that is to be replicated (create disk replication configuration)
-#Make sure to replace the variables $OSdiskName with OS disk name.
+ #OS Disk
+ $OSdisk = Get-AzDisk -DiskName $OSdiskName -ResourceGroupName "A2AdemoRG"
+ $OSdiskId = $OSdisk.Id
+ $RecoveryOSDiskAccountType = $OSdisk.Sku.Name
+ $RecoveryReplicaDiskAccountType = $OSdisk.Sku.Name
-#OS Disk
-$OSdisk = Get-AzDisk -DiskName $OSdiskName -ResourceGroupName "A2AdemoRG"
-$OSdiskId = $OSdisk.Id
-$RecoveryOSDiskAccountType = $OSdisk.Sku.Name
-$RecoveryReplicaDiskAccountType = $OSdisk.Sku.Name
+ $OSDiskReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $OSdiskId -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryOSDiskAccountType
-$OSDiskReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id -DiskId $OSdiskId -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType -RecoveryTargetDiskAccountType $RecoveryOSDiskAccountType
+ $diskconfigs = @()
+ $diskconfigs += $OSDiskReplicationConfig
-$diskconfigs = @()
-$diskconfigs += $OSDiskReplicationConfig
+ #Data disk
-#Data disk
+ # Add data disks
+ Foreach( $disk in $VM.StorageProfile.DataDisks)
+ {
+ $datadisk = Get-AzDisk -DiskName $datadiskName -ResourceGroupName "A2AdemoRG"
+ $dataDiskId1 = $datadisk[0].Id
+ $RecoveryReplicaDiskAccountType = $datadisk[0].Sku.Name
+ $RecoveryTargetDiskAccountType = $datadisk[0].Sku.Name
+ $DataDisk1ReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id `
+ -DiskId $dataDiskId1 -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType `
+ -RecoveryTargetDiskAccountType $RecoveryTargetDiskAccountType
+ $diskconfigs += $DataDisk1ReplicationConfig
+ }
-# Add data disks
-Foreach( $disk in $VM.StorageProfile.DataDisks)
-{
- $datadisk = Get-AzDisk -DiskName $datadiskName -ResourceGroupName "A2AdemoRG"
- $dataDiskId1 = $datadisk[0].Id
- $RecoveryReplicaDiskAccountType = $datadisk[0].Sku.Name
- $RecoveryTargetDiskAccountType = $datadisk[0].Sku.Name
- $DataDisk1ReplicationConfig = New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk -LogStorageAccountId $EastUSCacheStorageAccount.Id `
- -DiskId $dataDiskId1 -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryReplicaDiskAccountType $RecoveryReplicaDiskAccountType `
- -RecoveryTargetDiskAccountType $RecoveryTargetDiskAccountType
- $diskconfigs += $DataDisk1ReplicationConfig
-}
-
-#Start replication by creating replication protected item. Using a GUID for the name of the replication protected item to ensure uniqueness of name.
+ #Start replication by creating a replication protected item. Use a GUID for the name of the replication protected item to ensure uniqueness of the name.
-$TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id
-```
+ $TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id
+ ```
-When enabling zone to zone replication with PPG, the command to start replication will be exchanged with the PowerShell cmdlet -
+ When you're enabling zone-to-zone replication with a proximity placement group, replace the command to start replication with the following PowerShell cmdlet:
-```azurepowershell
-$TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id -RecoveryAvailabilityZone "2"
-```
+ ```azurepowershell
+ $TempASRJob = New-AzRecoveryServicesAsrReplicationProtectedItem -AzureToAzure -AzureVmId $VM.Id -Name (New-Guid).Guid -ProtectionContainerMapping $EusToWusPCMapping -AzureToAzureDiskReplicationConfiguration $diskconfigs -RecoveryResourceGroupId $RecoveryRG.ResourceId -RecoveryProximityPlacementGroupId $targetPpg.Id -RecoveryAvailabilityZone "2"
+ ```
-Once the start replication operation succeeds, virtual machine data is replicated to the recovery region.
+ After the operation to start replication succeeds, virtual machine data is replicated to the recovery region.
-The replication process starts by initially seeding a copy of the replicating disks of the virtual machine in the recovery region. This phase is called the initial replication phase.
+ The replication process starts by initially seeding a copy of the replicating disks of the virtual machine in the recovery region. This phase is called the *initial replication* phase.
-After initial replication completes, replication moves to the differential synchronization phase. At this point, the virtual machine is protected, and a test failover operation can be performed on it. The replication state of the replicated item representing the virtual machine goes to the Protected state after initial replication completes.
+ After initial replication finishes, replication moves to the *differential synchronization* phase. At this point, the virtual machine is protected, and you can perform a test failover operation on it. The replication state of the replicated item that represents the virtual machine goes to the protected state after initial replication finishes.
-Monitor the replication state and replication health for the virtual machine by getting details of the replication protected item corresponding to it.
+ Monitor the replication state and replication health for the virtual machine by getting details of the replication protected item that corresponds to it:
-```azurepowershell
-Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $PrimaryProtContainer | Select FriendlyName, ProtectionState, ReplicationHealth
-```
+ ```azurepowershell
+ Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $PrimaryProtContainer | Select FriendlyName, ProtectionState, ReplicationHealth
+ ```
-11. To do a test failover, validate and cleanup test failover, follow [these](./azure-to-azure-powershell.md#do-a-test-failover-validate-and-cleanup-test-failover) steps.
-12. To failover, follow the steps as mentioned [here](./azure-to-azure-powershell.md#fail-over-to-azure).
-13. To reprotect and failback to the source region, use the below PowerShell cmdlet -
+11. [Perform, validate, and clean up a test failover](./azure-to-azure-powershell.md#do-a-test-failover-validate-and-cleanup-test-failover).
+12. [Fail over the virtual machine](./azure-to-azure-powershell.md#fail-over-to-azure).
+13. Reprotect and fail back to the source region by using the following PowerShell commands:
-```azurepowershell
-#Create Cache storage account for replication logs in the primary region
-$WestUSCacheStorageAccount = New-AzStorageAccount -Name "a2acachestoragewestus" -ResourceGroupName "A2AdemoRG" -Location 'West US' -SkuName Standard_LRS -Kind Storage
+ ```azurepowershell
+ #Create a cache storage account for replication logs in the primary region.
+ $WestUSCacheStorageAccount = New-AzStorageAccount -Name "a2acachestoragewestus" -ResourceGroupName "A2AdemoRG" -Location 'West US' -SkuName Standard_LRS -Kind Storage
-#Use the recovery protection container, new cache storage account in West US and the source region VM resource group
-Update-AzRecoveryServicesAsrProtectionDirection -ReplicationProtectedItem $ReplicationProtectedItem -AzureToAzure -ProtectionContainerMapping $WusToEusPCMapping -LogStorageAccountId $WestUSCacheStorageAccount.Id -RecoveryResourceGroupID $sourceVMResourcegroup.ResourceId -RecoveryProximityPlacementGroupId $vm.ProximityPlacementGroup.Id
-```
+ #Use the recovery protection container, the new cache storage account in West US, and the source region VM resource group.
+ Update-AzRecoveryServicesAsrProtectionDirection -ReplicationProtectedItem $ReplicationProtectedItem -AzureToAzure -ProtectionContainerMapping $WusToEusPCMapping -LogStorageAccountId $WestUSCacheStorageAccount.Id -RecoveryResourceGroupID $sourceVMResourcegroup.ResourceId -RecoveryProximityPlacementGroupId $vm.ProximityPlacementGroup.Id
+ ```
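+
+   To confirm that the direction change completed, you can list recent jobs in the vault context. A small sketch, not part of the original walkthrough:
+
+   ```azurepowershell
+   # List recent vault jobs, newest first, to verify the reprotect job succeeded.
+   Get-AzRecoveryServicesAsrJob | Sort-Object StartTime -Descending | Select-Object DisplayName, State, StartTime
+   ```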
-14. To disable replication, follow the steps [here](./azure-to-azure-powershell.md#disable-replication).
+14. [Disable replication](./azure-to-azure-powershell.md#disable-replication).
-### VMware to Azure via PowerShell
+### VMware to Azure
-1. Make sure that you [prepare your on-premises VMware servers](./vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
-2. Sign in to your account and set your subscription as specified [here](./vmware-azure-disaster-recovery-powershell.md#log-into-azure).
-3. [Set up](./vmware-azure-disaster-recovery-powershell.md#set-up-a-recovery-services-vault) a Recovery Services Vault and [set vault context](./vmware-azure-disaster-recovery-powershell.md#set-the-vault-context).
-4. [Validate](./vmware-azure-disaster-recovery-powershell.md#validate-vault-registration) your vault registration.
-5. [Create](./vmware-azure-disaster-recovery-powershell.md#create-a-replication-policy) a replication policy.
-6. [Add](./vmware-azure-disaster-recovery-powershell.md#add-a-vcenter-server-and-discover-vms) a vCenter server and discover virtual machines and [create](./vmware-azure-disaster-recovery-powershell.md#create-storage-accounts-for-replication) storage accounts for replication.
-7. To replicate VMware Virtual Machines, check the details here and follow the below PowerShell cmdlet –
+1. [Prepare your on-premises VMware servers](./vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
+2. [Sign in to your account and set your subscription](./vmware-azure-disaster-recovery-powershell.md#log-into-azure).
+3. [Set up a Recovery Services vault](./vmware-azure-disaster-recovery-powershell.md#set-up-a-recovery-services-vault) and [set a vault context](./vmware-azure-disaster-recovery-powershell.md#set-the-vault-context).
+4. [Validate your vault registration](./vmware-azure-disaster-recovery-powershell.md#validate-vault-registration).
+5. [Create a replication policy](./vmware-azure-disaster-recovery-powershell.md#create-a-replication-policy).
+6. [Add a vCenter server and discover virtual machines](./vmware-azure-disaster-recovery-powershell.md#add-a-vcenter-server-and-discover-vms), and [create storage accounts for replication](./vmware-azure-disaster-recovery-powershell.md#create-storage-accounts-for-replication).
+7. Replicate the VMware virtual machines by using the following PowerShell commands:
-```azurepowershell
-#Get the target resource group to be used
-$ResourceGroup = Get-AzResourceGroup -Name "VMwareToAzureDrPs"
+ ```azurepowershell
+ #Get the target resource group to be used.
+ $ResourceGroup = Get-AzResourceGroup -Name "VMwareToAzureDrPs"
-#Get the target virtual network to be used
-$RecoveryVnet = Get-AzVirtualNetwork -Name "ASR-vnet" -ResourceGroupName "asrrg"
+ #Get the target virtual network to be used.
+ $RecoveryVnet = Get-AzVirtualNetwork -Name "ASR-vnet" -ResourceGroupName "asrrg"
-#Get the protection container mapping for replication policy named ReplicationPolicy
-$PolicyMap = Get-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainer $ProtectionContainer | where PolicyFriendlyName -eq "ReplicationPolicy"
+ #Get the protection container mapping for the replication policy named ReplicationPolicy.
+ $PolicyMap = Get-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainer $ProtectionContainer | where PolicyFriendlyName -eq "ReplicationPolicy"
-#Get the protectable item corresponding to the virtual machine CentOSVM1
-$VM1 = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $ProtectionContainer -FriendlyName "CentOSVM1"
+ #Get the protectable item that corresponds to the virtual machine CentOSVM1.
+ $VM1 = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $ProtectionContainer -FriendlyName "CentOSVM1"
-# Enable replication for virtual machine CentOSVM1 using the Az.RecoveryServices module 2.0.0 onwards to replicate to managed disks
-# The name specified for the replicated item needs to be unique within the protection container. Using a random GUID to ensure uniqueness
-$Job_EnableReplication1 = New-AzRecoveryServicesAsrReplicationProtectedItem -VMwareToAzure -ProtectableItem $VM1 -Name (New-Guid).Guid -ProtectionContainerMapping $PolicyMap -ProcessServer $ProcessServers[1] -Account $AccountHandles[2] -RecoveryResourceGroupId $ResourceGroup.ResourceId -logStorageAccountId $LogStorageAccount.Id -RecoveryAzureNetworkId $RecoveryVnet.Id -RecoveryAzureSubnetName "Subnet-1" -RecoveryProximityPlacementGroupId $targetPpg.Id
-```
+ # Enable replication for virtual machine CentOSVM1 by using the Az.RecoveryServices module 2.0.0 onward to replicate to managed disks.
+ # The name specified for the replicated item needs to be unique within the protection container. Use a random GUID to ensure uniqueness.
+ $Job_EnableReplication1 = New-AzRecoveryServicesAsrReplicationProtectedItem -VMwareToAzure -ProtectableItem $VM1 -Name (New-Guid).Guid -ProtectionContainerMapping $PolicyMap -ProcessServer $ProcessServers[1] -Account $AccountHandles[2] -RecoveryResourceGroupId $ResourceGroup.ResourceId -logStorageAccountId $LogStorageAccount.Id -RecoveryAzureNetworkId $RecoveryVnet.Id -RecoveryAzureSubnetName "Subnet-1" -RecoveryProximityPlacementGroupId $targetPpg.Id
+ ```
-8. You can check the replication state and replication health of the virtual machine with the Get-ASRReplicationProtectedItem cmdlet.
+8. Check the replication state and replication health of the virtual machine by using the `Get-ASRReplicationProtectedItem` cmdlet:
-```azurepowershell
-Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $ProtectionContainer | Select FriendlyName, ProtectionState, ReplicationHealth
-```
+ ```azurepowershell
+ Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $ProtectionContainer | Select FriendlyName, ProtectionState, ReplicationHealth
+ ```
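+
+   To surface only the machines that need attention, you can filter on the health field. A sketch, assuming the same `$ProtectionContainer` and that healthy items report `Normal`:
+
+   ```azurepowershell
+   # Show only protected items whose replication health isn't Normal.
+   Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $ProtectionContainer |
+       Where-Object { $_.ReplicationHealth -ne "Normal" } |
+       Select-Object FriendlyName, ProtectionState, ReplicationHealth
+   ```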
-9. Configure the failover settings by following the steps [here](./vmware-azure-disaster-recovery-powershell.md#configure-failover-settings).
-10. [Run](./vmware-azure-disaster-recovery-powershell.md#run-a-test-failover) a test failover.
-11. Failover to Azure using [these](./vmware-azure-disaster-recovery-powershell.md#fail-over-to-azure) steps.
+9. [Configure the failover settings](./vmware-azure-disaster-recovery-powershell.md#configure-failover-settings).
+10. [Run a test failover](./vmware-azure-disaster-recovery-powershell.md#run-a-test-failover).
+11. [Fail over to Azure](./vmware-azure-disaster-recovery-powershell.md#fail-over-to-azure).
-### Hyper-V to Azure via PowerShell
+### Hyper-V to Azure
-1. Make sure that you [prepare your on-premises Hyper-V servers](./hyper-v-prepare-on-premises-tutorial.md) for disaster recovery to Azure.
-2. [Sign in](./hyper-v-azure-powershell-resource-manager.md#step-1-sign-in-to-your-azure-account) to Azure.
-3. [Set up](./hyper-v-azure-powershell-resource-manager.md#step-2-set-up-the-vault) your vault and [set](./hyper-v-azure-powershell-resource-manager.md#step-3-set-the-recovery-services-vault-context) the Recovery Services Vault context.
-4. [Create](./hyper-v-azure-powershell-resource-manager.md#step-4-create-a-hyper-v-site) a Hyper-V Site.
-5. [Install](./hyper-v-azure-powershell-resource-manager.md#step-5-install-the-provider-and-agent) the provider and agent.
-6. [Create](./hyper-v-azure-powershell-resource-manager.md#step-6-create-a-replication-policy) a replication policy.
-7. Enable replication by using the below steps –
+1. [Prepare your on-premises Hyper-V servers](./hyper-v-prepare-on-premises-tutorial.md) for disaster recovery to Azure.
+2. [Sign in to Azure](./hyper-v-azure-powershell-resource-manager.md#step-1-sign-in-to-your-azure-account).
+3. [Set up your vault](./hyper-v-azure-powershell-resource-manager.md#step-2-set-up-the-vault) and [set the Recovery Services vault context](./hyper-v-azure-powershell-resource-manager.md#step-3-set-the-recovery-services-vault-context).
+4. [Create a Hyper-V site](./hyper-v-azure-powershell-resource-manager.md#step-4-create-a-hyper-v-site).
+5. [Install the provider and agent](./hyper-v-azure-powershell-resource-manager.md#step-5-install-the-provider-and-agent).
+6. [Create a replication policy](./hyper-v-azure-powershell-resource-manager.md#step-6-create-a-replication-policy).
+7. Enable replication by using the following steps:
- a. Retrieve the protectable item that corresponds to the VM you want to protect, as follows:
+ a. Retrieve the protectable item that corresponds to the VM you want to protect:
- ```azurepowershell
- $VMFriendlyName = "Fabrikam-app" #Name of the VM
- $ProtectableItem = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $protectionContainer -FriendlyName $VMFriendlyName
- ```
- b. Protect the VM. If the VM you're protecting has more than one disk attached to it, specify the operating system disk by using the OSDiskName parameter.
+ ```azurepowershell
+ $VMFriendlyName = "Fabrikam-app" #Name of the VM
+ $ProtectableItem = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $protectionContainer -FriendlyName $VMFriendlyName
+ ```
+ b. Protect the VM. If the VM you're protecting has more than one disk attached to it, specify the operating system disk by using the `OSDiskName` parameter:
- ```azurepowershell
- $OSType = "Windows" # "Windows" or "Linux"
- $DRjob = New-AzRecoveryServicesAsrReplicationProtectedItem -ProtectableItem $VM -Name $VM.Name -ProtectionContainerMapping $ProtectionContainerMapping -RecoveryAzureStorageAccountId $StorageAccountID -OSDiskName $OSDiskNameList[$i] -OS $OSType -RecoveryResourceGroupId $ResourceGroupID -RecoveryProximityPlacementGroupId $targetPpg.Id
- ```
- c. Wait for the VMs to reach a protected state after the initial replication. This can take a while, depending on factors such as the amount of data to be replicated, and the available upstream bandwidth to Azure. When a protected state is in place, the job State and StateDescription are updated as follows:
+ ```azurepowershell
+ $OSType = "Windows" # "Windows" or "Linux"
+ $DRjob = New-AzRecoveryServicesAsrReplicationProtectedItem -ProtectableItem $VM -Name $VM.Name -ProtectionContainerMapping $ProtectionContainerMapping -RecoveryAzureStorageAccountId $StorageAccountID -OSDiskName $OSDiskNameList[$i] -OS $OSType -RecoveryResourceGroupId $ResourceGroupID -RecoveryProximityPlacementGroupId $targetPpg.Id
+ ```
+ c. Wait for the VMs to reach a protected state after the initial replication. This process can take a while, depending on factors like the amount of data to be replicated and the available upstream bandwidth to Azure.
+
+ When a protected state is in place, `State` and `StateDescription` for the job are updated as follows:
- ```azurepowershell
- $DRjob = Get-AzRecoveryServicesAsrJob -Job $DRjob
- $DRjob | Select-Object -ExpandProperty State
+ ```azurepowershell
+ $DRjob = Get-AzRecoveryServicesAsrJob -Job $DRjob
+ $DRjob | Select-Object -ExpandProperty State
- $DRjob | Select-Object -ExpandProperty StateDescription
- ```
- d. Update recovery properties (such as the VM role size) and the Azure network to which to attach the VM NIC after failover.
+ $DRjob | Select-Object -ExpandProperty StateDescription
+ ```
+ d. Update recovery properties (such as the VM role size) and the Azure network to which to attach the VM NIC after failover:
- ```azurepowershell
- $nw1 = Get-AzVirtualNetwork -Name "FailoverNw" -ResourceGroupName "MyRG"
+ ```azurepowershell
+ $nw1 = Get-AzVirtualNetwork -Name "FailoverNw" -ResourceGroupName "MyRG"
- $VMFriendlyName = "Fabrikam-App"
+ $VMFriendlyName = "Fabrikam-App"
- $rpi = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $protectionContainer -FriendlyName $VMFriendlyName
+ $rpi = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $protectionContainer -FriendlyName $VMFriendlyName
- $UpdateJob = Set-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $rpi -PrimaryNic $VM.NicDetailsList[0].NicId -RecoveryNetworkId $nw1.Id -RecoveryNicSubnetName $nw1.Subnets[0].Name
+ $UpdateJob = Set-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $rpi -PrimaryNic $VM.NicDetailsList[0].NicId -RecoveryNetworkId $nw1.Id -RecoveryNicSubnetName $nw1.Subnets[0].Name
- $UpdateJob = Get-AzRecoveryServicesAsrJob -Job $UpdateJob
+ $UpdateJob = Get-AzRecoveryServicesAsrJob -Job $UpdateJob
- $UpdateJob | Select-Object -ExpandProperty state
+ $UpdateJob | Select-Object -ExpandProperty state
- Get-AzRecoveryServicesAsrJob -Job $job | Select-Object -ExpandProperty state
- ```
-8. Run a test [failover](./hyper-v-azure-powershell-resource-manager.md#step-8-run-a-test-failover).
+ Get-AzRecoveryServicesAsrJob -Job $job | Select-Object -ExpandProperty state
+ ```
+8. [Run a test failover](./hyper-v-azure-powershell-resource-manager.md#step-8-run-a-test-failover).
## Next steps
-To perform reprotect and failback for VMware to Azure, follow the steps outlined [here](./vmware-azure-prepare-failback.md).
+To perform reprotect and failback for VMware to Azure, see [Prepare for reprotection and failback of VMware VMs](./vmware-azure-prepare-failback.md).
-To perform failover for Hyper-V to Azure follow the steps outlined [here](./site-recovery-failover.md) and to perform failback, follow the steps outlined [here](./hyper-v-azure-failback.md).
+To fail over from Hyper-V to Azure, see [Run a failover from on-premises to Azure](./site-recovery-failover.md). To fail back, see [Run a failback for Hyper-V VMs](./hyper-v-azure-failback.md).
For more information, see [Failover in Site Recovery](site-recovery-failover.md).
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server ova** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent**
--- | --- | --- | --- | --- | ---
+[Rollup 55](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 9.42.5941.1 | 5.1.6692.0 | 9.42.5941.1 | 5.1.6692.0 | 2.0.9208.0
[Rollup 54](https://support.microsoft.com/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | 9.41.5888.1 | 5.1.6620.0 | 9.41.5888.1 | 5.1.6620.0 | 2.0.9202.0
[Rollup 53](https://support.microsoft.com/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | 9.40.5850.1 | 5.1.6537.0 | 9.40.5850.1 | 5.1.6537.0 | 2.0.9202.0
[Rollup 52](https://support.microsoft.com/help/4597409/) | 9.39.5796.1 | 5.1.6458.0 | 9.39.5796.1 | 5.1.6458.0 | 2.0.9196.0
For Site Recovery components, we support N-4 versions, where N is the latest rel
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (April 2021)
+
+### Update rollup 55
+
+[Update rollup 55](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) provides the following updates:
+
+**Update** | **Details**
+--- | ---
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup.
+**Azure VM disaster recovery** | Support added for cross-continental disaster recovery of Azure VMs.<br/><br/> REST API support for protection of VMSS Flex.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
+**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 20.04 when setting up the master target server.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
+

## Updates (February 2021)

### Update rollup 54
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup.
**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup.
**Azure VM disaster recovery** | Zone to Zone Disaster Recovery using Azure Site Recovery is now GA in 4 more regions – North Europe, East US, Central US, and West US 2.<br/>
-**VMware VM/physical disaster recovery to Azure** | The update includes portal support for selecting Proximity Placements Groups for VMware/Physical machines after enabling replication.<br/><br/> Protecting VMware machines with data disk size up to 32 TB iss now supported.
+**VMware VM/physical disaster recovery to Azure** | The update includes portal support for selecting Proximity Placements Groups for VMware/Physical machines after enabling replication.<br/><br/> Protecting VMware machines with data disk size up to 32 TB is now supported.
**Hyper-V disaster recovery to Azure** | The update includes portal support for selecting Proximity Placements Groups for Hyper-V machines after enabling replication.
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup.
**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup, including new Linux support for the Mobility service.
**Azure VM disaster recovery** | Now supported for VMs running RHEL 8.3 and Oracle Linux 7.9
-**VMware VM/physical disaster recovery to Azure** | Now supported for VMs running RHEL 8.3, Oracle Linux 7.9/8.3.
+**VMware VM/physical disaster recovery to Azure** | Now supported for VMs running RHEL 8.3, Oracle Linux 7.9.
## Updates (October 2020)
spatial-anchors Spatial Anchor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/spatial-anchor-faq.md
Azure Spatial Anchors adheres to the [Azure Service Agreement Terms](https://go.
**Q: In what regions is Azure Spatial Anchors available?**
-**A:** Azure Spatial Anchors is currently available in West US 2, East US, East US 2, South Central US, West Europe, North Europe, UK South, and Australia East. Additional regions will be available in the future.
+**A:** Azure Spatial Anchors is currently available in West US 2, East US, East US 2, South Central US, West Europe, North Europe, UK South, Australia East, Southeast Asia, and Korea Central. Additional regions will be available in the future.
This means that both the compute and storage powering this service are in these regions. That said, there are no restrictions on where your clients are located.
spring-cloud How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-application-insights.md
# Application Insights Java In-Process Agent in Azure Spring Cloud (Preview)
-This document explains how to monitor apps and microservices using the Application Insights Java agent in Azure Spring Cloud.
+This article explains how to monitor apps and microservices by using the Application Insights Java agent in Azure Spring Cloud.
With this feature you can:
In the left navigation pane, click **Application Insights** to jump to the **Ove
[ ![IPA 9](media/spring-cloud-application-insights/petclinic-microservices-availability.jpg)](media/spring-cloud-application-insights/petclinic-microservices-availability.jpg) ## ARM Template+ To use the Azure Resource Manager template, copy the following content to `azuredeploy.json`. ```json
To use the Azure Resource Manager template, copy the following content to `azuredepl
``` ## CLI+ Apply the ARM template with the following CLI command: * For an existing Azure Spring Cloud instance:
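
If you'd rather apply the template with Azure PowerShell instead of the CLI, a minimal sketch (the resource group name and template path are placeholders):

```azurepowershell
# Deploy azuredeploy.json to an existing resource group (names are placeholders).
New-AzResourceGroupDeployment -ResourceGroupName "myResourceGroup" -TemplateFile "azuredeploy.json"
```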
spring-cloud How To Circuit Breaker Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-circuit-breaker-metrics.md Binary files differ
spring-cloud How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-deploy-in-azure-virtual-network.md Binary files differ
spring-cloud How To Intellij Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-intellij-deploy-apps.md
Before running this example, you can try the [basic quickstart](spring-cloud-qui
* [IntelliJ IDEA, Community/Ultimate Edition, version 2020.1/2020.2](https://www.jetbrains.com/idea/download/#section=windows) ## Install the plug-in
-You can add the Azure Toolkit for IntelliJ IDEA 3.43.0 from the IntelliJ **Plugins** UI.
+You can add the Azure Toolkit for IntelliJ IDEA 3.51.0 from the IntelliJ **Plugins** UI.
1. Start IntelliJ. If you have opened a project previously, close the project to show the welcome dialog. Select **Configure** from the link in the lower right, and then click **Plugins** to open the plug-in configuration dialog, and select **Install Plugins from disk**.
spring-cloud Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart.md
To complete this quickstart:
## Generate a Spring Cloud project
-Start with [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.9.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Cloud. The following image shows the Initializr set up for this sample project.
+Start with [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.10.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Cloud. The following image shows the Initializr set up for this sample project.
```url
-https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.4.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client
+https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.10.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client
``` Note that this example uses Java version 8. If you want to use Java version 11, change the option under **Project Metadata**.
spring-cloud Structured App Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/structured-app-log.md
This article explains how to generate and collect structured application log data in Azure Spring Cloud. With proper configuration, Azure Spring Cloud provides useful application log query and analysis through Log Analytics. ## Log schema requirements+ To improve the log query experience, an application log is required to be in JSON format and conform to a schema. Azure Spring Cloud uses this schema to parse your application logs and stream them to Log Analytics. **JSON schema requirements:**
To improve log query experience, an application log is required to be in JSON fo
| stackTrace | string | No | StackTrace | exception stack trace | | exceptionClass| string | No | ExceptionClass | exception class name | | mdc | nested JSON | No | | mapped diagnostic context|
-| mdc.traceId | string | No | TraceId |trace Id for distributed tracing|
-| mdc.spanId | string | No | SpanId |span Id for distributed tracing |
+| mdc.traceId | string | No | TraceId |trace ID for distributed tracing|
+| mdc.spanId | string | No | SpanId |span ID for distributed tracing |
| | | | | | * The "timestamp" field is required and should be in UTC format; all other fields are optional.
To improve log query experience, an application log is required to be in JSON fo
* Log each JSON record in one line. **Log record sample** + ``` {"timestamp":"2021-01-08T09:23:51.280Z","logger":"com.example.demo.HelloController","level":"ERROR","thread":"http-nio-1456-exec-4","mdc":{"traceId":"c84f8a897041f634","spanId":"c84f8a897041f634"},"stackTrace":"java.lang.RuntimeException: get an exception\r\n\tat com.example.demo.HelloController.throwEx(HelloController.java:54)\r\n\","message":"Got an exception","exceptionClass":"RuntimeException"} ``` ## Generate schema-compliant JSON log + For Spring applications, you can generate the expected JSON log format by using common [logging frameworks](https://docs.spring.io/spring-boot/docs/2.1.13.RELEASE/reference/html/boot-features-logging.html#boot-features-custom-log-configuration), such as [logback](http://logback.qos.ch/) and [log4j2](https://logging.apache.org/log4j/2.x/). ### Log with logback + When using Spring Boot starters, logback is used by default. For logback apps, use [logstash-encoder](https://github.com/logstash/logstash-logback-encoder) to generate JSON-formatted logs. This method is supported in Spring Boot version 2.1+. The procedure:
-1. Add logstash dependency in your pom.xml file.
+1. Add logstash dependency in your `pom.xml` file.
- ```json
+ ```xml
<dependency>
- <groupId>net.logstash.logback</groupId>
- <artifactId>logstash-logback-encoder</artifactId>
- <version>6.5</version>
- </dependency>
+ <groupId>net.logstash.logback</groupId>
+ <artifactId>logstash-logback-encoder</artifactId>
+ <version>6.5</version>
+ </dependency>
```
-1. Update your logback.xml config file to set the JSON format.
- ```json
+1. Update your `logback-spring.xml` config file to set the JSON format.
+ ```xml
<configuration> <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender"> <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
- <providers>
- <timestamp>
- <fieldName>timestamp</fieldName>
- <timeZone>UTC</timeZone>
- </timestamp>
- <loggerName>
- <fieldName>logger</fieldName>
- </loggerName>
- <logLevel>
- <fieldName>level</fieldName>
- </logLevel>
- <threadName>
- <fieldName>thread</fieldName>
- </threadName>
- <nestedField>
- <fieldName>mdc</fieldName>
- <providers>
- <mdc/>
- </providers>
- </nestedField>
- <stackTrace>
- <fieldName>stackTrace</fieldName>
- </stackTrace>
- <message/>
- <throwableClassName>
- <fieldName>exceptionClass</fieldName>
- </throwableClassName>
- </providers>
+ <providers>
+ <timestamp>
+ <fieldName>timestamp</fieldName>
+ <timeZone>UTC</timeZone>
+ </timestamp>
+ <loggerName>
+ <fieldName>logger</fieldName>
+ </loggerName>
+ <logLevel>
+ <fieldName>level</fieldName>
+ </logLevel>
+ <threadName>
+ <fieldName>thread</fieldName>
+ </threadName>
+ <nestedField>
+ <fieldName>mdc</fieldName>
+ <providers>
+ <mdc />
+ </providers>
+ </nestedField>
+ <stackTrace>
+ <fieldName>stackTrace</fieldName>
+ </stackTrace>
+ <message />
+ <throwableClassName>
+ <fieldName>exceptionClass</fieldName>
+ </throwableClassName>
+ </providers>
</encoder> </appender> <root level="info">
- <appender-ref ref="stdout"/>
+ <appender-ref ref="stdout" />
</root> </configuration> ```
+1. When you use a logging configuration file with the `-spring` suffix, like `logback-spring.xml`, you can set the logging configuration based on the active Spring profile.
+
+ ```xml
+ <configuration>
+ <springProfile name="dev">
+        <!-- Appenders for local development, in human-readable format -->
+ <include resource="org/springframework/boot/logging/logback/defaults.xml" />
+ <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
+ <root level="info">
+ <appender-ref ref="CONSOLE" />
+ </root>
+ </springProfile>
+
+ <springProfile name="!dev">
+ <!-- JSON appender configuration from previous step, used for staging / production -->
+ ...
+ </springProfile>
+ </configuration>
+ ```
+
+   For local development, run the Spring Cloud application with the JVM argument `-Dspring.profiles.active=dev`. You'll then see human-readable logs instead of JSON-formatted lines.
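+
+   For example, assuming the packaged jar is named `hellospring.jar` (a placeholder), run `java -Dspring.profiles.active=dev -jar hellospring.jar`.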
### Log with log4j2
For log4j2 apps, use [json-template-layout](https://logging.apache.org/log4j/2.x
The procedure:
-1. Exclude `spring-boot-starter-logging` from `spring-boot-starter`, add dependencies `spring-boot-starter-log4j2`, `log4j-layout-template-json` in your pom.xml file.
+1. Exclude `spring-boot-starter-logging` from `spring-boot-starter`, and add the dependencies `spring-boot-starter-log4j2` and `log4j-layout-template-json` in your `pom.xml` file.
```xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-web</artifactId>
- <exclusions>
- <exclusion>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-logging</artifactId>
- </exclusion>
- </exclusions>
- </dependency>
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-log4j2</artifactId>
- </dependency>
- <dependency>
- <groupId>org.apache.logging.log4j</groupId>
- <artifactId>log4j-layout-template-json</artifactId>
- <version>2.14.0</version>
- </dependency>
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-web</artifactId>
+ <exclusions>
+ <exclusion>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-logging</artifactId>
+ </exclusion>
+ </exclusions>
+ </dependency>
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-log4j2</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.logging.log4j</groupId>
+ <artifactId>log4j-layout-template-json</artifactId>
+ <version>2.14.0</version>
+ </dependency>
```
-2. Prepare a JSON layout template file jsonTemplate.json in your class path.
+2. Prepare a JSON layout template file `jsonTemplate.json` in your class path.
```json
- {
- "mdc": {
- "$resolver": "mdc"
- },
- "exceptionClass": {
- "$resolver": "exception",
- "field": "className"
- },
- "stackTrace": {
- "$resolver": "exception",
- "field": "stackTrace",
- "stringified": true
- },
- "message": {
- "$resolver": "message",
- "stringified": true
- },
- "thread": {
- "$resolver": "thread",
- "field": "name"
- },
- "timestamp": {
- "$resolver": "timestamp",
- "pattern": {
- "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'",
- "timeZone": "UTC"
- }
- },
- "level": {
- "$resolver": "level",
- "field": "name"
- },
- "logger": {
- "$resolver": "logger",
- "field": "name"
+ {
+ "mdc": {
+ "$resolver": "mdc"
+ },
+ "exceptionClass": {
+ "$resolver": "exception",
+ "field": "className"
+ },
+ "stackTrace": {
+ "$resolver": "exception",
+ "field": "stackTrace",
+ "stringified": true
+ },
+ "message": {
+ "$resolver": "message",
+ "stringified": true
+ },
+ "thread": {
+ "$resolver": "thread",
+ "field": "name"
+ },
+ "timestamp": {
+ "$resolver": "timestamp",
+ "pattern": {
+ "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'",
+ "timeZone": "UTC"
}
+ },
+ "level": {
+ "$resolver": "level",
+ "field": "name"
+ },
+ "logger": {
+ "$resolver": "logger",
+ "field": "name"
}
+ }
```
-3. Use this JSON layout template in your log4j2.xml config file.
+3. Use this JSON layout template in your `log4j2-spring.xml` config file.
- ```json
- <configuration>
- <appenders>
- <console name="Console" target="SYSTEM_OUT">
- <JsonTemplateLayout eventTemplateUri="classpath:jsonTemplate.json"/>
- </console>
- </appenders>
- <loggers>
- <root level="info">
- <appender-ref ref="Console"/>
- </root>
- </loggers>
- </configuration>
+ ```xml
+ <configuration>
+ <appenders>
+ <console name="Console" target="SYSTEM_OUT">
+ <JsonTemplateLayout eventTemplateUri="classpath:jsonTemplate.json" />
+ </console>
+ </appenders>
+ <loggers>
+ <root level="info">
+ <appender-ref ref="Console" />
+ </root>
+ </loggers>
+ </configuration>
``` ## Analyze the logs in Log Analytics
The procedure:
After your application is properly set up, your application console log will be streamed to Log Analytics. The structure enables efficient queries in Log Analytics. ### Check log structure in Log Analytics+ Use the following procedure:+ 1. Go to the service overview page of your service instance. 2. Click the `Logs` entry under the `Monitoring` section. 3. Run this query.
Use the following procedure:
4. Application logs return as shown in the following image:
-![Json Log show](media/spring-cloud-structured-app-log/json-log-query.png)
+ ![Json Log show](media/spring-cloud-structured-app-log/json-log-query.png)
+ ### Show log entries containing errors
AppPlatformLogsforSpring
Use this query to find errors, or modify the query terms to find a specific exception class or error code. ### Show log entries for a specific traceId+ To review log entries for a specific tracing ID "trace_id", run the following query:
AppPlatformLogsforSpring
| sort by AppTimestamp ```
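
You can also run these queries outside the portal. A sketch using the Az.OperationalInsights PowerShell module (the workspace ID is a placeholder):

```azurepowershell
# Run a Log Analytics query from PowerShell; requires the Az.OperationalInsights module.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" -Query 'AppPlatformLogsforSpring | sort by AppTimestamp asc | take 50'
$result.Results | Format-Table
```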
-## Next Steps
-* To learn more about the Log Query, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
+## Next steps
+
+* To learn more about log queries, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
sql-database Sql Database Auditing And Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-auditing-and-threat-detection-cli.md Binary files differ
sql-database Sql Database Setup Geodr And Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-setup-geodr-and-failover-pool-cli.md Binary files differ
static-web-apps Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/apis.md
Azure Static Web Apps provides an API through Azure Functions. The capabilities
- Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md). - Input and output [bindings](../azure-functions/functions-triggers-bindings.md#supported-bindings) are supported. - Logs are only available if you add [Application Insights](../azure-functions/functions-monitoring.md) to your Functions app.-- Some application settings are managed by the service. You can't configure app settings that start with the following prefixes: `APPSETTING_`, `AZUREBLOBSTORAGE_`, `AZUREFILESSTORAGE_`, `AZURE_FUNCTION_`, `CONTAINER_`, `DIAGNOSTICS_`, `DOCKER_`, `FUNCTIONS_`, `IDENTITY_`, `MACHINEKEY_`, `MAINSITE_`, `MSDEPLOY_`, `SCMSITE_`, `SCM_`, `WEBSITES_`, `WEBSITE_`, `WEBSOCKET_`, `AzureWeb`.
+- Some application settings are managed by the service. Therefore, you can't configure app settings that start with the following prefixes (see the sketch after this list):
+ - `APPSETTING_`
+ - `AZUREBLOBSTORAGE_`
+ - `AZUREFILESSTORAGE_`
+ - `AZURE_FUNCTION_`
+ - `CONTAINER_`
+ - `DIAGNOSTICS_`
+ - `DOCKER_`
+ - `FUNCTIONS_`
+ - `IDENTITY_`
+ - `MACHINEKEY_`
+ - `MAINSITE_`
+ - `MSDEPLOY_`
+ - `SCMSITE_`
+ - `SCM_`
+ - `WEBSITES_`
+ - `WEBSITE_`
+ - `WEBSOCKET_`
+ - `AzureWeb`
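+
+If you script your configuration, you may want to guard against these reserved prefixes before calling the service. A minimal validation sketch in PowerShell (the setting name below is just an example; the prefix list mirrors the one above):
+
+```azurepowershell
+# Warn when an app setting name starts with a service-managed (reserved) prefix.
+$reservedPrefixes = 'APPSETTING_','AZUREBLOBSTORAGE_','AZUREFILESSTORAGE_','AZURE_FUNCTION_','CONTAINER_','DIAGNOSTICS_','DOCKER_','FUNCTIONS_','IDENTITY_','MACHINEKEY_','MAINSITE_','MSDEPLOY_','SCMSITE_','SCM_','WEBSITES_','WEBSITE_','WEBSOCKET_','AzureWeb'
+$settingName = 'WEBSITE_TIME_ZONE'   # Example input.
+if ($reservedPrefixes | Where-Object { $settingName.StartsWith($_) }) {
+    Write-Warning "App setting '$settingName' starts with a reserved prefix and can't be configured."
+}
+```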
## Next steps
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/local-development.md
Open a terminal to the root folder of your existing Azure Static Web Apps site.
`swa start`
-1. Navigate to http://localhost:4280 to view the app in the browser.
+1. Navigate to `http://localhost:4280` to view the app in the browser.
### Other ways to start the CLI
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-overview.md
The following table describes the types of storage accounts recommended by Micro
| Premium file shares<sup>4</sup> | File shares only | LRS<br /><br />ZRS<sup>2</sup> | Resource Manager<sup>3</sup> | Files-only storage accounts with premium performance characteristics. Recommended for enterprise or high performance scale applications.<br />[Learn more...](../files/storage-files-planning.md#management-concepts) |
| Premium page blobs<sup>4</sup> | Page blobs only | LRS | Resource Manager<sup>3</sup> | Premium storage account type for page blobs only.<br />[Learn more...](../blobs/storage-blob-pageblob-overview.md) |
-<sup>1</sup> Azure Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. Data Lake Storage is only supported on general-purpose V2 storage accounts with a hierarchical namespace enabled. For more information on Data Lake Storage Gen2, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md).
+<sup>1</sup> Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. For more information, see [Introduction to Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md).
<sup>2</sup> Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) are available only for standard general-purpose v2, premium block blob, and premium file share accounts in certain regions. For more information about Azure Storage redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
The following table describes the legacy storage account types. These account ty
- [Create a storage account](storage-account-create.md) - [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md)-- [Recover a deleted storage account](storage-account-recover.md)
+- [Recover a deleted storage account](storage-account-recover.md)
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-upgrade.md Binary files differ
storage Storage Use Azcopy Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md
Replace the `<application-id>` placeholder with the application ID of your servi
If you prefer to use your own credentials for authorization, you can upload a certificate to your app registration, and then use that certificate to log in.
-In addition to uploading your certificate to your app registration, you'll also need to have a copy of the certificate saved to the machine or VM where AzCopy will be running. This copy of the certificate should be in .PFX or .PEM format, and must include the private key. The private key should be password-protected. If you're using Windows, and your certificate exists only in a certificate store, make sure to export that certificate to a PFX file (including the private key). For guidance, see [Export-PfxCertificate](/powershell/module/pkiclient/export-pfxcertificate)
+In addition to uploading your certificate to your app registration, you'll also need to have a copy of the certificate saved to the machine or VM where AzCopy will be running. This copy of the certificate should be in .PFX or .PEM format, and must include the private key. The private key should be password-protected. If you're using Windows, and your certificate exists only in a certificate store, make sure to export that certificate to a PFX file (including the private key). For guidance, see [Export-PfxCertificate](/powershell/module/pki/export-pfxcertificate)
Next, set the `AZCOPY_SPA_CERT_PASSWORD` environment variable to the certificate password.
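
For example, in PowerShell (the certificate path, application ID, and tenant ID are placeholders):

```azurepowershell
# Provide the certificate password to AzCopy, then sign in with the service principal's certificate.
$env:AZCOPY_SPA_CERT_PASSWORD = "<certificate-password>"
azcopy login --service-principal --certificate-path "C:\certs\my-app.pfx" --application-id "<application-id>" --tenant-id "<tenant-id>"
```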
Then, run any azcopy command (For example: `azcopy list https://contoso.blob.cor
- For more information about AzCopy, [Get started with AzCopy](storage-use-azcopy-v10.md) -- If you have questions, issues, or general feedback, submit them [on GitHub](https://github.com/Azure/azure-storage-azcopy) page.
+- If you have questions, issues, or general feedback, submit them on the [GitHub](https://github.com/Azure/azure-storage-azcopy) page.
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-ad-ds-enable.md
The AD DS account created by the cmdlet represents the storage account. If the A
Replace the placeholder values with your own in the parameters below before executing it in PowerShell. > [!IMPORTANT] > The domain join cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or service logon account, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. For computer accounts, there is a default password expiration age set in AD at 30 days. Similarly, the service logon account may have a default password expiration age set on the AD domain or Organizational Unit (OU).
-> For both account types, we recommend you check the password expiration age configured in your AD environment and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit (OU) in AD](/powershell/module/addsadministration/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
+> For both account types, we recommend you check the password expiration age configured in your AD environment and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit (OU) in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
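
When the time comes to rotate the password, it's a single cmdlet from the same AzFilesHybrid module. A sketch (the resource group and storage account names are placeholders):

```azurepowershell
# Rotate the AD object's password to the storage account's kerb2 Kerberos key.
Update-AzStorageAccountADObjectPassword -RotateToKerbKey kerb2 -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>"
```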
```PowerShell # Change the execution policy to unblock importing AzFilesHybrid.psm1 module
If you have already executed the `Join-AzStorageAccountForAuth` script above suc
### Checking environment
-First, you must check the state of your environment. Specifically, you must check if [Active Directory PowerShell](/powershell/module/addsadministration/) is installed, and if the shell is being executed with administrator privileges. Then check to see if the [Az.Storage 2.0 module](https://www.powershellgallery.com/packages/Az.Storage/2.0.0) is installed, and install it if it isn't. After completing those checks, check your AD DS to see if there is either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or [service logon account](/windows/win32/ad/about-service-logon-accounts) that has already been created with SPN/UPN as "cifs/your-storage-account-name-here.file.core.windows.net". If the account doesn't exist, create one as described in the following section.
+First, you must check the state of your environment. Specifically, you must check if [Active Directory PowerShell](/powershell/module/activedirectory/) is installed, and if the shell is being executed with administrator privileges. Then check to see if the [Az.Storage 2.0 module](https://www.powershellgallery.com/packages/Az.Storage/2.0.0) is installed, and install it if it isn't. After completing those checks, check your AD DS to see if there is either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or [service logon account](/windows/win32/ad/about-service-logon-accounts) that has already been created with SPN/UPN as "cifs/your-storage-account-name-here.file.core.windows.net". If the account doesn't exist, create one as described in the following section.
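
A quick way to do that last check is to search AD for the SPN directly. A sketch using the ActiveDirectory module (the storage account name is a placeholder):

```azurepowershell
# Look for an existing AD object registered with the storage account's SPN.
Get-ADObject -Filter { servicePrincipalName -eq "cifs/<storage-account-name>.file.core.windows.net" } -Properties servicePrincipalName |
    Select-Object Name, ObjectClass, DistinguishedName
```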
### Creating an identity representing the storage account in your AD manually
AzureStorageID:<yourStorageSIDHere>
You've now successfully enabled the feature on your storage account. To use the feature, you must assign share-level permissions. Continue to the next section.
-[Part two: assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md)
+[Part two: assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md)
storsimple Storsimple 8000 Install Update 51 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-8000-install-update-51.md
ms.devlang: NA
NA Previously updated : 03/05/2020 Last updated : 04/21/2021
## Overview
-This tutorial explains how to install Update 5.1 on a StorSimple device running an earlier software version via the Azure portal. <!--The hotfix method is used when you are trying to install Update 5.1 on a device running pre-Update 3 versions. The hotfix method is also used when a gateway is configured on a network interface other than DATA 0 of the StorSimple device and you are trying to update from a pre-Update 1 software version.-->
+This tutorial explains how to install Update 5.1 on a StorSimple device running an earlier software version via the Azure portal or the hotfix method.
-Update 5.1 includes non-disruptive security updates. The non-disruptive or regular updates can be applied through the Azure portal <!--or by the hotfix method-->.